OpenAI CEO Sam Altman

FTC Launches Investigation Into OpenAI Over Data Breach and Alleged ChatGPT Inaccuracies

The Federal Trade Commission (FTC) has opened a wide-ranging investigation into OpenAI, examining whether the company's popular ChatGPT bot violates consumer protection laws.

The agency recently issued a comprehensive 20-page request to the San Francisco-based company, demanding information on how it manages and mitigates risks associated with its AI models. This move by the FTC poses the most significant regulatory challenge to OpenAI’s operations in the United States, at a time when the company is actively engaging with global stakeholders to shape the future of AI policy.

Notably, OpenAI’s ChatGPT has been hailed as the fastest-growing consumer application in history, leading to a competitive race among Silicon Valley giants to introduce their own chatbot services. Furthermore, OpenAI’s CEO, Sam Altman, has become a prominent figure in the AI regulation discourse, testifying before Congress, engaging with legislators, and meeting with President Biden and Vice President Harris.

However, the company faces a fresh challenge in Washington, where the FTC has cautioned that existing consumer protection laws apply to AI even as the administration and Congress struggle to establish comprehensive regulations. Senate Majority Leader Charles E. Schumer (D-N.Y.) has predicted that crafting new legislation specifically addressing AI will take several months.

The FTC's approach to OpenAI offers an intriguing glimpse into its enforcement strategy. Through fines and consent decrees governing data management, the agency has established itself as Silicon Valley's leading watchdog; Meta, Amazon, and Twitter have all paid hefty penalties for alleged violations of consumer protection laws.

OpenAI is now under scrutiny as the FTC demands comprehensive information about complaints concerning deceptive or harmful statements made by its products. The investigation aims to determine whether the company's practices have caused reputational harm to consumers.

Additionally, the FTC has requested documentation related to a security incident in March, in which a system vulnerability allowed some users to see other users' payment details and chat history. The agency wants to evaluate whether OpenAI's data security practices comply with consumer protection laws. OpenAI said in a blog post that only a small number of users had their data exposed in the incident.

The FTC declined to comment, while OpenAI CEO Sam Altman said publicly that the company will cooperate with the agency.

Altman expressed disappointment that the investigation began with a leak, which he said does not build trust. He nonetheless emphasized OpenAI's commitment to making its technology safe and consumer-friendly, expressing confidence that the company is complying with the law.

Altman reiterated that user privacy remains a top priority for OpenAI, emphasizing that their systems’ purpose is to acquire knowledge about the world, rather than individuals’ private information.

News of the probe came amid a heated hearing before the House Judiciary Committee, where FTC Chair Lina Khan faced sharp questioning from Republican lawmakers about her management of the agency. Khan's efforts to rein in Silicon Valley have suffered significant legal setbacks, most recently on Tuesday, when a federal judge rejected the FTC's bid to block Microsoft's $69 billion acquisition of Activision.

FTC Chair Lina Khan (Graeme Jennings/Pool via REUTERS)

During the congressional hearing, Rep. Dan Bishop (R-N.C.) questioned the FTC's legal authority to make such demands of companies like OpenAI, asking whether the agency was overstepping its jurisdiction and noting that libel and defamation cases are typically handled under state law rather than by the FTC.

In response, Khan clarified that the FTC’s main focus was not on libel and defamation, but rather on potential fraud or deception through the misuse of private information in AI training. Khan emphasized that the FTC’s primary concern is evaluating whether there is substantial harm inflicted upon individuals, which can manifest in various forms.

Notably, the FTC has been vocal about its intention to take action related to artificial intelligence and emerging threats. During a speech at Harvard Law School, Samuel Levine, the Director of the Bureau of Consumer Protection, affirmed the agency’s commitment to proactively addressing harmful practices in the AI domain.

While the FTC welcomes innovation, Levine emphasized that being innovative does not grant a license to be reckless, and the agency is fully prepared to utilize all available tools, including enforcement measures, to combat any detrimental practices in this arena.

The FTC has taken a creative approach to AI regulation. In a series of blog posts, the agency has invoked popular science fiction movies to warn the industry against breaking the law, highlighting the perils of AI scams, denouncing the use of generative AI to manipulate potential customers, and stressing that companies must truthfully represent the capabilities of their AI products. In April, Khan joined Biden administration officials at a news conference to raise awareness about the risks of AI discrimination.

During this conference, Khan unequivocally stated that existing laws apply to AI without any exemptions.

Notably, the FTC’s proactive measures encountered immediate resistance from the tech industry. Adam Kovacevich, founder and CEO of the Chamber of Progress industry coalition, acknowledges the FTC’s authority when it comes to data security and misrepresentation. However, he questions whether the agency possesses the power to regulate defamation or evaluate the contents produced by ChatGPT.

The FTC has specifically asked OpenAI to share any research, testing, or surveys assessing how well consumers understand the "accuracy or dependability" of its AI tools' outputs. The agency has placed particular emphasis on records concerning the potential for OpenAI's products to generate disparaging statements, asking the company to provide documentation of user complaints about the chatbot spreading false information.

Recognizing the potential harm caused by inaccurate information, the agency has turned its attention to such fabrications. The move follows several prominent incidents in which the chatbot made false claims that could damage people's reputations. Mark Walters, a radio talk show host in Georgia, sued OpenAI, accusing the chatbot of defaming him. According to the lawsuit, ChatGPT falsely asserted that Walters, known for "Armed American Radio," had defrauded and embezzled funds from the Second Amendment Foundation, even though, as the complaint notes, Walters has no involvement in any legal dispute concerning the foundation.

In another incident, ChatGPT alleged that a lawyer had made sexually suggestive comments and attempted to touch a student during a class trip to Alaska, citing an article supposedly published in The Washington Post. The article does not exist, the class trip never took place, and the lawyer denied the allegations, as The Post previously reported.

The FTC has also demanded detailed information about OpenAI's products and advertising practices, including the protocols and procedures the company follows when releasing new products to the public. Notably, the agency is seeking a record of every instance in which OpenAI withheld the release of a large language model because of safety concerns.

The agency is also seeking a detailed description of the data OpenAI uses to train its products, which mimic human speech by ingesting text drawn largely from Wikipedia, Scribd, and other widely accessible websites. OpenAI has additionally been asked to explain how it refines its models, particularly how it addresses their tendency toward "hallucinations," instances in which the models fabricate answers to questions they cannot actually answer.

In addition, OpenAI must disclose details of the March security incident, including how many users were affected and the steps taken to address the situation.

While the FTC's inquiry centers on potential consumer protection violations, the agency has also asked OpenAI for details about its licensing arrangements with outside companies.

The United States has lagged behind on AI legislation and on guarding against the privacy risks the technology poses. Several European countries, by contrast, have already moved against American companies' chatbots under the rigorous framework of the General Data Protection Regulation. Italy temporarily banned ChatGPT over data privacy concerns, and Google had to postpone the launch of its chatbot Bard after the Irish Data Protection Commission sought privacy assessments. The European Union is expected to further strengthen its AI legislation before the end of the year.

In Washington, officials are scrambling to catch up with the rapidly evolving technology. On Tuesday, Senator Schumer convened a briefing for senators with representatives from the Pentagon and the intelligence community on the national security risks of AI, part of a bipartisan effort to shape new legislation addressing the challenges the technology poses.

After the session, Schumer told reporters that regulating AI will be an arduous task, requiring lawmakers to strike a delicate balance between fostering innovation and implementing robust safeguards.

On Wednesday, Vice President Harris convened consumer protection advocates and civil liberties leaders at the White House for a discussion of the safety and security concerns surrounding AI. Harris rejected the notion that protecting consumers and advancing innovation are mutually exclusive, declaring, "It is a false choice to suggest that we either can advance innovation or we protect consumers. We can do both."
