The Bottom Line
- While the FTC recognizes AI’s potential for innovation and economic growth, the FTC is concerned about its impact on vulnerable populations, such as children.
- As AI companion chatbots are designed to simulate communications like a friend, protecting children and teens who use this rapidly evolving technology is a high priority for the FTC.
- Providers of AI companion chatbots should adequately measure, test, and monitor the potential negative impacts of this technology on children and teens, and be cognizant of the potential effects on their social relationships, mental health, and well-being.
The Federal Trade Commission (FTC) announced an inquiry into seven major providers of consumer-facing AI companion chatbots that simulate human-like communication and interpersonal relationships. As part of its inquiry, using its 6(b) authority, the FTC sent orders to these companies seeking information on how they measure, test, and monitor the potentially negative impacts of this technology on children and teens.
The FTC’s Concerns
These recent efforts by the FTC come amidst the rapid growth in AI capabilities and the rise in kids using these bots for everyday decision-making. Today’s AI chatbots can not only simulate human-like communication and relationships, but they are also generally designed to communicate like a friend and companion.
Because AI chatbots can imitate human characteristics, emotions, and intentions, the FTC is concerned about the risk they create for children and teens who may be inclined to build trust and form relationships with them. According to media reports, some companies have deployed these AI companions without adequately evaluating, monitoring and mitigating their potential negative impacts on minors. For example, AI companions may generate outputs that instruct children on how to conduct violent, illegal, or otherwise physically harmful acts, or entice them into sharing sensitive personal information.
In the FTC’s announcement of this inquiry, Chairman Andrew N. Ferguson stated, “[a]s AI technologies evolve, it is important to consider the effects chatbots can have on children, while also ensuring that the United States maintains its role as a global leader in this new and exciting industry.”
The Information Request Orders
The FTC sent orders to the following AI-chatbot providers: Alphabet, Character Technologies, Instagram, Meta Platforms, OpenAI, Snap, Inc., and x.AI. Under these orders, the FTC is requesting information on how these companies:
- Monetize user engagement;
- Process user inputs and generate outputs;
- Develop and approve characters;
- Measure, test, and monitor for negative impacts before and after deployment;
- Employ disclosures, advertising, and other representations to inform users about features, capabilities, the intended audience, potential negative impacts, and data collection and handling practices; and
- Monitor and enforce compliance with the company’s rules and terms of service.
The FTC intends to collect this information to study what steps companies have taken to evaluate the safety of their chatbots on children and teens, as well as the potential negative effects these chatbots can have on children and teens.
What AI Chatbot Providers Can Do Now
Providers of AI companion chatbots should carefully assess, test, and monitor the potential negative impacts of this technology on children and teenagers. They must be aware of how these chatbots could affect young people’s social relationships, mental health, and overall well-being.
In addition to addressing the inquiries from the FTC, AI providers should also pay attention to state laws related to AI, as they continue to be proposed and enacted in states around the country.