
The new rules would require AI companies to verify that their users are 18 or older.


A newly introduced bill would require AI companies to verify the age of everyone who uses their chatbots. Senators Josh Hawley (R-MO) and Richard Blumenthal (D-CT) introduced the GUARD Act on Tuesday, which would also bar anyone under the age of 18 from using AI chatbots, as first reported by NBC News.
The legislation comes shortly after safety advocates and parents testified at a Senate hearing about the effects of AI chatbots on children. Under the bill, AI companies would have to verify users' ages by requiring a government-issued ID or another “reasonable” verification method, which could include face scans.
The proposal would require AI chatbots to disclose that they are not human every 30 minutes. Chatbots would also need measures preventing them from claiming to be human, similar to an AI safety law recently passed in California. In addition, the bill would make it illegal to operate a chatbot that generates explicit content for minors or promotes self-harm.
“Our proposal enforces strict protections against manipulative or exploitative AI, supported by stringent enforcement through criminal and civil penalties,” Blumenthal said in a statement to The Verge. “Big Tech has fallen short of any assurance that we can rely on them to act responsibly when they prioritize profit over child safety.”