
Parents urge New York governor to endorse significant AI safety legislation



A coalition of more than 150 parents sent a letter to New York governor Kathy Hochul on Friday, urging her to sign the Responsible AI Safety and Education (RAISE) Act without modifications. The RAISE Act is a closely watched bill that would require developers of large AI systems, including Meta, OpenAI, DeepSeek, and Google, to create safety plans and follow transparency protocols for reporting safety incidents.

The bill passed both the New York State Senate and Assembly in June. This week, however, Hochul reportedly proposed a near-complete overhaul of the RAISE Act that would tilt it in favor of technology firms, much like some of the changes made to California's SB 53 after major AI companies weighed in.

Unsurprisingly, several AI corporations are vehemently opposed to the legislation. The AI Alliance, whose members include Meta, IBM, Intel, Oracle, Snowflake, Uber, AMD, Databricks, and Hugging Face, sent a letter to New York legislators in June expressing "severe concerns" about the RAISE Act and calling it "impractical." Additionally, Leading the Future, the pro-AI super PAC backed by Perplexity AI, Andreessen Horowitz (a16z), OpenAI president Greg Brockman, and Palantir co-founder Joe Lonsdale, has targeted New York State Assemblymember Alex Bores, a co-sponsor of the RAISE Act, with recent advertisements.

Two groups, ParentsTogether Action and the Tech Oversight Project, organized the letter to Hochul, which states that some signers have "lost children due to the dangers of AI chatbots and social media." The signatories describe the RAISE Act in its current form as "minimalist guardrails" that should be enacted into law.

They also emphasized that the legislation, as passed by the New York State Legislature, "does not regulate all AI developers – only the largest firms that are investing hundreds of millions yearly." Those firms would be required to report large-scale safety incidents to the attorney general and disclose their safety plans. The bill would also bar these developers from releasing a frontier model "if doing so would pose an unreasonable risk of critical harm." Critical harm is defined as the death or serious injury of 100 or more people, or at least $1 billion in monetary or property damages, caused either by the production of a chemical, biological, radiological, or nuclear weapon, or by an AI system that "operates without significant human oversight" and "would, if carried out by a person," constitute certain criminal acts.

“Big Tech’s financially backed resistance to these essential safeguards feels familiar as we have observed this trend of avoidance and evasion previously,” the letter asserts. “Extensive harm to youth — including their mental well-being, emotional stability, and capacity to excel in education — has been well-documented since major technology companies opted to promote algorithm-driven social media platforms lacking transparency, oversight, or accountability.”
