(Bloomberg) — The Federal Trade Commission ordered Alphabet Inc.’s Google, OpenAI Inc., Meta Platforms Inc. and four other makers of artificial intelligence chatbots to turn over information about the impacts of their technologies on kids.
The antitrust and consumer protection agency said Thursday that it sent the orders to study how firms measure, test and monitor their chatbots and what steps they have taken to limit use of the technology by kids and teens. The companies also include Meta’s Instagram, Snap Inc., Elon Musk’s xAI and Character Technologies Inc., the developer of Character.AI.
Chatbot developers face intensifying scrutiny over whether they’re doing enough to ensure the safety of their services and prevent users from engaging in dangerous behavior.
Last month, the parents of a California high school student sued OpenAI, alleging that its ChatGPT isolated their son from family and helped him plan his suicide in April. The company said it has extended its sympathies to the family and is reviewing the complaint. Character Technologies and Google were hit with a similar suit last fall. The judge in that case allowed most of the family’s claims to proceed and rejected the app-maker’s argument that chatbot output was protected by the First Amendment.
Google and Snap didn’t have an immediate comment, while OpenAI and xAI didn’t immediately respond to requests. Meta declined to comment; the company has recently taken steps aimed at ensuring that its chatbots avoid engaging with minors on topics including self-harm and suicide.
A Character.AI spokesperson said the company has invested “a tremendous amount of resources” into safety features, including a separate version of its service for under-18 users and in-chat disclaimers that the chatbots are not real people and “should be treated as fiction.”
Under US law, technology companies are barred from collecting data about children under the age of 13 without parental permission. For years, members of Congress have sought to extend those protections to older teens, though so far no legislation has managed to advance.
The FTC is conducting the inquiry under its so-called 6(b) authority, which allows it to issue subpoenas to conduct market studies. The agency generally issues a report on its findings after analyzing the information from companies, though that process can take years to complete.
Although the information is collected for research purposes, the FTC can use any details it gleans to open official investigations or aid in existing probes. Since 2023, the agency has been probing whether OpenAI has violated consumer protection laws with ChatGPT.
The agency, currently helmed entirely by Republicans after President Donald Trump sought to remove the FTC’s Democrats earlier this year, voted 3-0 to open the study. In statements, two of the GOP members emphasized that the study comports with Trump’s AI action plan by aiding policymakers in better understanding the complex technology. They also cited a number of recent news reports about teens and kids who turned to chatbots to discuss suicidal thoughts and romance or sex.