A.I. Legislation Woefully Inadequate

Matthew Fu ’27 in Opinions | November 8, 2024

In light of recent leaps in A.I. development, the question of what A.I. programs should be allowed to do grows ever more complex. For example, a pair of glasses developed by AnhPhu Nguyen and Caine Ardayfio, two Harvard juniors, uses an A.I. facial recognition program to identify passersby on sight, then draws on reverse image searches to surface their addresses, phone numbers, and other personal information. Undoubtedly, this is a total violation of privacy—and yet, few explicit laws prohibit the use of A.I. programs in such ways. Only 17 states regulate A.I. usage at all. Although the White House put forth an A.I. Bill of Rights, the wording in this blueprint is vague and not legally binding. Considering the potential of A.I. to cause harm—through misinformation, breaches of privacy, and theft of intellectual property—legislation with enforceable penalties must be passed to establish frameworks for A.I. use, and commissions of specialists must be formed to judge special cases.

The Blueprint for an A.I. Bill of Rights, written by the White House Office of Science and Technology Policy, gives criteria for guiding the “design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence.” These principles include safe and effective systems, algorithmic discrimination protections, data privacy, notice and explanation, and human alternatives. All five of these principles are undeniably important. However, this document, the primary resource the US government has offered on any sort of A.I. framework, accomplishes little. Its language is vague: the Blueprint asks that A.I. be “lawful and respectful of our nation’s values.” From this, programmers learn very little about what is actually permitted. Moreover, the entire document is not legally binding; it is simply a voluntary guide, with no legal repercussions for ignoring it. It is difficult to believe that a determined programmer would read this document and redirect their work without any legal incentive.

The American government does protect human rights from A.I. to some extent: many existing laws on privacy and intellectual property extend to A.I. programs. For example, owners of intellectual property can copyright or trademark their work, and proposed privacy legislation like the American Privacy Rights Act would further protect citizens. A.I. is not exempt from these laws. Unfortunately, this does not stop A.I. programs from using copyrighted material to answer questions or even generate art. Furthermore, according to OpenAI’s privacy policy, the company shares customer data with vendors and service providers. The issue is twofold: the lines of legality are blurry, and there is no clear agent to punish. It is difficult to hold programmers accountable, as they often cannot predict the actions a program will take. For example, during beta testing of ChatGPT’s GPT-4o voice feature, the program unintentionally began mimicking a user’s voice. The models behind these programs change constantly in response to human interactions and are too complex for anyone to trace their behavior step by step. With no human responsible, A.I. continues to get away with human rights breaches.

What we need, then, is immediate legislation to forcefully combat these violations. Despite the difficulty of holding specific people accountable, more extensive testing of programs, paired with fixing the errors that testing uncovers, may provide a partial solution. Additionally, it is still fairly easy to discern when companies exploit these breaches of rights for financial gain. For example, if a company were to use Nguyen and Ardayfio’s facial recognition program to personalize ads for pedestrians, that intentional action would clearly flout the law. Rules prohibiting breaches of privacy, theft of intellectual property, and intentional misinformation must be enforced at scale instead of existing as optional guides like the current Blueprint for an A.I. Bill of Rights. Only with definitive legislation can the public know their rights will be protected.

Furthermore, a specialized A.I. commission should be established to handle certain legal cases. Judges in most courts will not understand the nuances of A.I. that may matter in a case; it may be difficult for a layman to discern, for instance, whether a breach of privacy was intentional. Specialists in the field would have far more accurate insights, which can play a crucial role in determining whether an action is lawful. A combined panel of judges and A.I. specialists would therefore be ideal, bringing expertise in both the law and the technology together to ensure that the right people are punished and that further breaches of human rights do not occur. Regular courts simply do not have the resources to handle these cases, and considering that legally ambiguous A.I. actions are becoming far more common, this is a step we will have to take quickly.

It is clear that our government is doing very little to address a massive and still rapidly growing issue. The state must take tangible action, not just roll out vague, nonbinding “bills of rights” that do nothing to protect us. Establishing concrete laws and building specialized commissions would be a strong first step.