The Implications of AI and How it is Being Regulated

Published On: August 22, 2025

Since the release of OpenAI’s ChatGPT, artificial intelligence has been a contentious issue. Whether it’s students cheating in school or an AI-generated call impersonating the president, AI carries many risks and can be harmful to society. However, AI can also be an incredibly powerful tool: it can, for example, diagnose patients more efficiently and identify health issues at earlier stages. Because AI is a complicated and experimental technology, governments are inclined to place restrictions and regulations on it. The government should oversee the use of AI, but it should not restrict innovation or the development of advanced computer science.

There are different types of AI, including machine learning and generative AI. In machine learning, a model takes in data and makes predictions based on it; the more data the model ingests, the more efficient and accurate its predictions become. ChatGPT is an example of generative AI, as it generates responses to user-submitted questions.
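The machine-learning idea described above can be sketched in a few lines of code. This is a toy illustration only, not any production system: a model “learns” a pattern from example data (here, a straight-line fit computed by ordinary least squares, with no ML library) and its predictions tend to improve as more training data is supplied. The rule `y = 2x + 1` and all function names are invented for the example.

```python
import random

def fit_line(xs, ys):
    """Ordinary least-squares fit: returns slope a and intercept b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

def make_data(n, seed):
    """Generate n noisy samples of the hidden rule y = 2x + 1."""
    rng = random.Random(seed)
    xs = [rng.uniform(0, 10) for _ in range(n)]
    ys = [2 * x + 1 + rng.gauss(0, 2) for x in xs]  # true rule plus noise
    return xs, ys

# With 500 samples the learned line sits much closer to y = 2x + 1
# than with only 5 samples -- "more data, better predictions."
for n in (5, 500):
    xs, ys = make_data(n, seed=0)
    a, b = fit_line(xs, ys)
    print(f"n={n:4d}: learned y = {a:.2f}x + {b:.2f}")
```

Generative AI differs in that the model produces new content (text, audio, images) rather than a single numeric prediction, but the underlying principle of learning patterns from large amounts of data is the same.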

In an interview with the Washington Post, Anne Neuberger, the Deputy National Security Advisor for Cyber and Emerging Technology at the White House, spoke about the “promise and peril of AI.” She described the duality of AI as having the potential to provide both immense benefits and extreme harm. Neuberger gave the example of how AI can help ALS patients: with AI voice cloning, an ALS patient can communicate with family members in their own voice, something that would not be possible without generative AI. However, this incredible capability has a detrimental use as well, as voice cloning can be used to create deepfakes, highly realistic audio or video impersonating real people. This can be quite harmful: in January, prior to the Democratic primary elections, an AI-generated call replicated President Biden’s voice and told Democrats in New Hampshire not to vote in the primaries. Deepfakes using the voices of politicians can be immensely harmful, as they can artificially change the course of elections.

In addition, generative AI is a double-edged sword in regards to education. The use of generative AI technology, such as ChatGPT, can prevent students from actually learning basic skills such as math or essay writing. On the other hand, according to Neuberger, AI can be very helpful in educational settings because it can aid teachers in providing personalized teaching to students with different learning styles.

Neal Khosla, the CEO and co-founder of the company Curai, described the benefits of AI in the medical field in an interview with the Washington Post. Curai aims to build an AI model that can give a patient an initial diagnosis and provide recommendations to clinicians, allowing patients to be diagnosed and given the necessary treatment more efficiently. Khosla addressed the common belief that AI would be safer if it were trained only on data specific to the field, stating that “general models outperform healthcare-specific models.” Khosla believes that AI should be regulated not in its development or in the data it is trained on, but rather in its outputs. Overall, Neuberger’s and Khosla’s views illustrate that AI has both benefits and risks, and that to maximize its benefits while minimizing its harms, regulations must be placed on the outputs of generative AI, not on development and experimentation.

President Biden and Congress have taken several steps toward establishing regulations on AI. Neuberger breaks the U.S. government’s approach down into three steps. First, she states that some companies have signed voluntary commitments to certain regulations, such as maintaining transparency about biased data used in loan or job decisions. Second, President Biden signed a landmark executive order on AI, which lays out a robust set of ways to evaluate risks; the order ensures that humans remain involved in key decisions, especially when AI is used in “critical systems” like water and electricity. Third, Neuberger pointed to watermarks as a potential form of regulation, which could be used to indicate that content is AI-generated, particularly deepfakes or robocalls. Neuberger states that, while the U.S. is a leader in the development of AI, the EU will be a leader in AI restrictions, as the “EU’s AI act is the most comprehensive regulation” thus far. She predicts that the U.S. will observe the successes and failures of the EU’s AI legislation in order to craft its own regulations based on that information. The key takeaway from Neuberger’s interview is that the U.S. government under President Biden has taken substantial steps to prepare to implement AI regulations when necessary. As AI continues to evolve and advance, many potential regulations show promise for minimizing its risks while maximizing its benefits.