OpenAI, the company behind the popular ChatGPT AI assistant, is facing scrutiny from the US Federal Trade Commission (FTC) over possible violations of consumer protection laws. The investigation centers on potential risks to consumers' personal data and reputations. The FTC has sent OpenAI a 20-page record request focused on the company's risk-management practices for its AI models.
The agency is investigating whether OpenAI has engaged in deceptive or unfair practices that may have caused reputational harm to consumers. One particular area of focus is how OpenAI addresses the potential for its products to generate false, misleading, or disparaging statements about real individuals. These false generations are sometimes referred to as “hallucinations” or “confabulations” within the AI industry.
The FTC’s interest in misleading or false statements may be partly in response to incidents involving OpenAI’s ChatGPT. In one case, the AI assistant reportedly fabricated defamatory claims about a radio talk show host from Georgia named Mark Walters. The AI falsely stated that Walters was involved in embezzlement and fraud related to the Second Amendment Foundation, leading Walters to file a defamation lawsuit against OpenAI. Another incident involved the AI model falsely claiming that a lawyer had made sexually suggestive comments on a student trip to Alaska, an event that never occurred.
This regulatory inquiry poses a significant challenge for OpenAI, which has generated both excitement and concern within the tech industry since releasing ChatGPT in November 2022. While pushing the boundaries of what many believed was possible with AI-powered products, OpenAI's activities have also raised questions about the potential risks associated with its AI models.
As demand for more advanced AI models grows, government agencies worldwide are taking a closer look at developments in the field. Regulators such as the FTC are grappling with how existing rules apply to AI models, including copyright issues, data privacy concerns, and challenges specific to training data and generated content.
In June, US Senate Majority Leader Chuck Schumer called for comprehensive legislation to oversee the progress of AI technology. He emphasized the need for necessary safeguards as AI continues to advance rapidly. Schumer plans to hold a series of forums on this subject later in the year.
This is not the first regulatory hurdle OpenAI has faced. In March, Italian regulators temporarily blocked ChatGPT, accusing OpenAI of violating the European Union's GDPR privacy regulations. Following negotiations, OpenAI restored the service in Italy after implementing age-verification features and giving European users an option to block their data from being used to train the AI model.
OpenAI now has two weeks from receiving the request to schedule a call with the FTC. That call will give the company an opportunity to discuss potential modifications to the request or to address any compliance issues.
The FTC's investigation highlights growing concerns about AI models and their impact on consumer protection. It also underscores the need for clearer regulations and guidelines as AI technology continues its rapid evolution.