The main U.S. consumer protection and competition watchdog is investigating the maker of ChatGPT over claims that it violated consumer protection laws by harming individuals' reputations through its responses and mishandling personal information.
The Federal Trade Commission (FTC) is demanding comprehensive records regarding how OpenAI handles personal data, its potential to provide users with inaccurate information, and its “risks of harm to consumers, including reputational harm.”
FTC Investigates ChatGPT Maker
OpenAI, the AI startup behind ChatGPT, is now under investigation by the Federal Trade Commission (FTC) regarding potential harm caused to consumers through data collection and the dissemination of false information about individuals.
The FTC sent OpenAI a 20-page letter outlining its concerns and stating that it is also examining the company's security practices. The letter includes numerous questions about OpenAI's AI model training methods and data handling procedures, and asks the company to provide relevant documents and information to the agency.
This investigation was initially reported by The Washington Post and confirmed by an individual familiar with the matter.
ChatGPT Disseminates False Information About Individuals
The FTC has raised concerns about OpenAI's measures to address the potential for its products, such as ChatGPT, to generate false, misleading, or disparaging statements about real individuals. This issue gained attention when ChatGPT falsely accused a U.S. law professor of sexual harassment and referenced a non-existent Washington Post article.
Chatbots like ChatGPT are built on large language models trained on vast amounts of internet data. They work by predicting the most likely next word given the text that precedes it, an approach that can produce factual errors even though the responses often sound plausible and human-like, potentially misleading users into believing they are entirely accurate.
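As a rough illustration of the principle (not OpenAI's actual system), next-word prediction amounts to picking the continuation a model scores as most likely; the toy probability table below is invented for the example, which is why fluent output can still be factually wrong.

```python
# Toy illustration of next-word prediction (not OpenAI's actual model).
# The probabilities below are invented for the example.
toy_model = {
    ("the", "law"): {"professor": 0.4, "firm": 0.3, "was": 0.2, "banana": 0.1},
    ("law", "professor"): {"was": 0.5, "teaches": 0.3, "said": 0.2},
}

def predict_next(context: tuple) -> str:
    """Return the word the toy model ranks most likely after the given two-word context."""
    candidates = toy_model.get(context, {})
    return max(candidates, key=candidates.get) if candidates else "<unknown>"

print(predict_next(("the", "law")))        # -> "professor"
print(predict_next(("law", "professor")))  # -> "was"
```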
FTC Asks OpenAI to Disclose Its Training Data
The FTC has also requested that OpenAI disclose the data used to train its large language models, including those powering ChatGPT; so far, OpenAI has declined to provide this information. The company additionally faces lawsuits from authors, including comedian Sarah Silverman, who claim that ChatGPT's language model was trained on data that includes their work.
The FTC has issued a demand to OpenAI, seeking information regarding the source of the data used to train its models. The agency specifically wants to know if OpenAI obtained the data by scraping it directly from the internet or by purchasing it from third-party sources. Furthermore, the FTC is requesting details about the specific websites from which the data was taken and any measures taken by OpenAI to prevent the inclusion of personal information in the training data.
OpenAI Co-Founder’s Comments Regarding the Investigation
OpenAI's CEO, Sam Altman, has emphasized the need to regulate the rapidly expanding AI industry. In May, he testified before Congress, advocating for AI legislation, and he has engaged with numerous lawmakers to shape a policy agenda for the technology.
In a tweet on Thursday, Altman said it is of the utmost importance to ensure that OpenAI's technology is safe. He affirmed the company's confidence that it follows the law and expressed its willingness to work with the agency.
“We built GPT-4 on top of years of safety research and spent 6+ months after we finished initial training making it safer and more aligned before releasing it. we protect user privacy and design our systems to learn about the world, not private individuals,” he further tweeted.
OpenAI Has Faced Regulatory Pressure Internationally
The company's regulatory challenges extend beyond the United States. In March, Italy's data protection authority banned ChatGPT, citing OpenAI's unlawful collection of users' personal data and the absence of an age-verification system to prevent minors from accessing inappropriate content. OpenAI restored access the following month after implementing the changes the Italian authority requested.
The FTC is moving notably fast on AI, opening an investigation less than a year after OpenAI introduced ChatGPT. FTC Chair Lina Khan has argued that technology companies should be regulated early in a technology's development rather than only once they are established.
During a House committee hearing, Khan reiterated the need for scrutiny of the AI industry.
Regulators Worldwide Intensify Scrutiny as AI Advances
With the rapid advancement of AI services, there has been a corresponding increase in regulatory scrutiny surrounding this transformative technology that has the potential to reshape societies and businesses.
Regulators worldwide are striving to apply existing rules covering areas such as copyright and data privacy to two critical aspects of AI: the data used to train models and the content they generate. In the U.K., Prime Minister Rishi Sunak has scheduled a global AI safety summit for the coming autumn, while the country's competition watchdog is also closely examining the industry.
In the United States, Senate Majority Leader Chuck Schumer has called for comprehensive legislation to promote and ensure safeguards in the field of AI. To further this goal, a series of forums will be held later this year to facilitate discussions and deliberations on the matter.