
Why chatbots are beginning to verify your age

This article initially appeared in The Algorithm, our weekly newsletter focused on AI. To receive similar stories in your inbox, subscribe here.

How do technology companies figure out whether their users are minors?

That question has taken on new urgency amid mounting concern over the risks children face when they interact with AI chatbots. For years, big tech companies have asked users for their birthdates (which are easy to fake) to comply with child privacy laws, but they weren’t required to moderate content accordingly. Two developments in the past week show how quickly things are changing in the US, and how the issue is becoming a new flash point for parents and child-safety advocates.

On one side is the Republican Party, which has backed laws passed in multiple states requiring age verification for sites with adult content. Critics argue these laws provide a pretext to ban anything deemed “harmful to minors,” which could include sex education. Other states, like California, are going after AI companies with laws meant to protect children who interact with chatbots (by requiring companies to verify which of their users are children). Meanwhile, President Trump is pushing to keep AI regulation a national matter rather than letting states set their own rules. Support for the various proposals in Congress is in constant flux.

What happens next? The debate is quickly shifting from whether age verification is necessary to who will be responsible for carrying it out, a burden no company is eager to take on.

In a blog post last Tuesday, OpenAI announced its plans to roll out automatic age prediction. Essentially, the company will use a model that draws on signals, like the time of day someone is chatting, to predict whether that person is under 18. For people it identifies as teenagers or children, ChatGPT will apply filters to “minimize exposure” to content like graphic violence or sexual role-play. YouTube launched a similar feature last year.
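
OpenAI hasn’t published the details of its predictor, but the general shape of signal-based age estimation is easy to sketch. Below is a minimal, hypothetical illustration in Python; every signal, weight, and threshold here is invented for the example and not drawn from OpenAI’s actual system.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SessionSignals:
    """Hypothetical per-session signals an age predictor might weigh."""
    local_hour: int                    # hour of day the user is active (0-23)
    account_age_days: int              # how long the account has existed
    stated_birth_year: Optional[int]   # self-reported, easily faked

def minor_score(s: SessionSignals) -> float:
    """Toy weighted score in [0, 1]; a real system would use a trained model."""
    score = 0.0
    if 15 <= s.local_hour <= 21:       # active during after-school hours
        score += 0.3
    if s.account_age_days < 90:        # very new account
        score += 0.2
    if s.stated_birth_year is not None and s.stated_birth_year > 2007:
        score += 0.5                   # self-reports as under 18
    return min(score, 1.0)

def content_mode(s: SessionSignals, threshold: float = 0.5) -> str:
    """Err toward the restrictive policy when the score suggests a minor."""
    return "filtered" if minor_score(s) >= threshold else "standard"

teen = SessionSignals(local_hour=16, account_age_days=10, stated_birth_year=2010)
print(content_mode(teen))  # -> filtered
```

One thing the sketch makes concrete: any fixed threshold will land some users on the wrong side of the line, which is exactly the failure mode discussed next.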

If you support age verification but worry about privacy, this might sound like a welcome development. There’s a catch, though. The system isn’t perfect, of course, so it might mistake a child for an adult or vice versa. People wrongly flagged as under 18 can prove their age by submitting a selfie or government-issued ID to a company called Persona.

Selfie checks have drawbacks: they tend to fail more often for people of color and people with certain disabilities. Sameer Hinduja, co-director of the Cyberbullying Research Center, says that requiring Persona to store millions of government IDs and troves of biometric data creates another vulnerability. “When those are breached, we’ve put large populations at risk all at once,” he says.

Hinduja instead advocates for device-level verification, in which parents enter a child’s age when they first set up the child’s phone. That age is then stored securely on the device and shared safely with apps and websites.
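
Hinduja’s proposal doesn’t specify an API, but the data-minimization idea behind it is easy to illustrate. In the hypothetical sketch below, the device stores only a coarse, signed age band and hands apps that band and nothing else; a real implementation would rely on platform attestation with asymmetric keys rather than the shared HMAC secret used here to keep the example self-contained.

```python
import hashlib
import hmac
import json

# Stands in for a per-device attestation key provisioned at setup (hypothetical).
DEVICE_KEY = b"per-device-secret"

def set_age_band(age_years: int) -> dict:
    """Parent enters the age once; only a coarse band is stored, never a birthdate."""
    band = "under13" if age_years < 13 else "13-17" if age_years < 18 else "18plus"
    payload = json.dumps({"age_band": band}).encode()
    sig = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "sig": sig}

def read_age_band(attestation: dict) -> str:
    """An app learns only the band: no ID, no birthdate, no biometrics."""
    payload = attestation["payload"].encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, attestation["sig"]):
        raise ValueError("age attestation failed verification")
    return json.loads(payload)["age_band"]

token = set_age_band(12)
print(read_age_band(token))  # -> under13
```

The appeal of this design is that no central company has to warehouse IDs or biometrics: if a token leaks, an attacker learns only an age band.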

That closely mirrors what Apple’s CEO, Tim Cook, recently pushed for with US lawmakers as he fought legislation that would force app stores to verify ages, a requirement that would put significant liability on Apple.

We’ll get more signals about where this is all headed on Wednesday, when the Federal Trade Commission, the agency responsible for enforcing these new rules, hosts an all-day workshop on age verification. Apple’s head of government affairs, Nick Rossi, will attend, joined by senior child-safety officials from Google and Meta and a company that specializes in marketing to children.

The FTC has grown increasingly politicized under President Trump (his firing of the lone Democratic commissioner was overturned by a federal court, a ruling now being reviewed by the US Supreme Court). In July, I wrote about signs that the agency is softening its stance toward AI companies. Sure enough, in December the FTC threw out a Biden-era ruling against an AI company that allowed users to flood the internet with fake product reviews, saying the ruling conflicted with President Trump’s AI Action Plan.

Wednesday’s workshop might illuminate how partisan the FTC’s stance on age verification will be. Republican states support laws mandating age verification for pornographic websites (though critics caution this could be misused to restrict a much broader range of content). Bethany Soye, a Republican state representative spearheading efforts to pass such a bill in South Dakota, is scheduled to present at the FTC meeting. The ACLU generally opposes laws requiring ID verification to access websites and has instead promoted expanding existing parental controls.

While all of this gets debated, AI is fueling serious child-safety controversies: a rise in AI-generated child sexual abuse material, concerns (and lawsuits) over suicides and self-harm connected to chatbot conversations, and troubling signs of kids forming attachments to AI companions. Clashing views on privacy, politics, free speech, and surveillance will make any resolution hard to reach. Share your thoughts with me.
