
The updated regulations would require AI firms to confirm if users are aged 18 or older.


A new bill would require AI companies to verify the ages of everyone who uses their chatbots. Senators Josh Hawley (R-MO) and Richard Blumenthal (D-CT) introduced the GUARD Act on Tuesday, which would also bar anyone under 18 from using AI chatbots, as previously reported by NBC News.
The legislation follows a Senate hearing in which safety advocates and parents testified about the impact of AI chatbots on children. Under the proposed law, AI companies would have to verify ages by requiring users to upload a government-issued ID or confirm their age through another “reasonable” method, which could include facial recognition.
The bill would also require AI chatbots to remind users every 30 minutes that they are not human, and to implement safeguards preventing them from claiming to be human, similar to a recently enacted AI safety law in California. Additionally, the legislation would make it a crime to operate a chatbot that generates sexually explicit content for minors or that promotes self-harm.
“This legislation imposes rigorous safeguards against exploitative or manipulative AI, backed by strong enforcement with criminal and civil penalties,” Blumenthal said in a statement to The Verge. “Big Tech has betrayed any claim that we should trust companies to do the right thing on their own, when they consistently put profit first ahead of child safety.”