Courts Take the Lead in Shaping AI Development and Usage in the US

Palais de Justice

In an increasingly AI-driven world, courts are poised to shape the future of artificial intelligence (AI) development and usage in the United States. Several lawsuits and investigations against major AI companies, including OpenAI, Meta, and Microsoft, are underway, with the potential to transform the AI industry.

Last week, the Federal Trade Commission (FTC) launched a probe into OpenAI, investigating whether it breached consumer protection laws by scraping online data to train its AI chatbot, ChatGPT. Meanwhile, artists, authors, and the stock image company Getty Images are taking legal action against AI companies, including OpenAI, Stability AI, and Meta, claiming copyright violations for using their work as training data without consent or compensation.

If successful, these lawsuits could compel AI companies to modify their practices, making AI development more equitable. They could introduce a system of licensing and royalties, providing a new form of compensation for those whose work is used as training data for AI models.

Despite the enthusiasm for AI-specific laws among American politicians, the divided Congress and strong tech lobbying make it unlikely for such legislation to pass in the next year, according to Ben Winters, senior counsel at the Electronic Privacy Information Center. Therefore, existing laws and related lawsuits are the most straightforward path toward an AI rulebook, suggests Sarah Myers West, managing director of the AI Now Institute.

Over the past year, numerous lawsuits have been filed against AI companies, alleging rights violations. Claims include illegal scraping of copyrighted material and reliance on “software piracy on an unprecedented scale” for training models.

The FTC’s investigation into OpenAI’s data security and privacy practices could result in fines, data deletion orders, or even an order to remove ChatGPT entirely. Other government enforcement agencies, such as the Consumer Financial Protection Bureau, may launch investigations of their own.

Many of these lawsuits could take years to reach court and may be dismissed for being too broad. Even so, they serve an essential role, pressuring companies to improve their data documentation practices and change how they develop their AI models.

The US’s reactive, innovation-friendly approach to AI regulation differs from the proactive measures taken in the EU. However, the class-action lawsuits over copyright and privacy could shed light on how AI algorithms operate and create new compensation methods for those whose work is used in AI models.

The lawsuits may pave the way for a licensing solution similar to the music industry’s system for song sampling. This could provide royalties for artists, authors, and other copyright holders and require companies to seek explicit permission to use copyrighted content.

Tech companies argue that using publicly available copyrighted data constitutes “fair use,” a claim copyright holders dispute; the class actions could determine whether that argument holds up. As AI continues to evolve, legal battles over privacy, biometric data, product liability, and Section 230 are anticipated, potentially shaping the future of AI in the US.