New York Moves to Regulate Frontier AI and Prevent Catastrophic Risks

New York lawmakers have approved groundbreaking legislation designed to safeguard against potential disasters caused by advanced artificial intelligence systems. The recently passed RAISE Act seeks to place transparency obligations on major AI labs, such as OpenAI, Google, and Anthropic, to help detect and prevent scenarios that could result in widespread harm, including incidents that kill or injure more than 100 people or cause more than $1 billion in damages.

Leading the Way on AI Transparency

New York Governor Kathy Hochul at a public event, representing the state's leadership on AI policy.

The RAISE Act positions New York as a pioneer in AI regulation, establishing the first legally mandated transparency standards for top-tier AI labs in the United States. It arrives after a period in which the AI safety movement lost influence as industry and government prioritized rapid innovation over regulation. Notable figures including Nobel laureate Geoffrey Hinton and Turing Award winner Yoshua Bengio have endorsed the RAISE Act, recognizing its potential impact on responsible AI development. While comparable to California's debated SB 1047, which was ultimately vetoed over concerns about hindering startups, the RAISE Act is crafted to avoid stifling smaller players or academic research.

Main Provisions and Enforcement

The legislation specifically targets companies whose AI models are trained using more than $100 million in computing resources and are made available to users in New York. Key requirements include the publication of detailed safety and security reports and the mandatory reporting of critical incidents, such as model misuse or theft. If companies neglect these obligations, the New York attorney general is empowered to seek civil penalties of up to $30 million per violation.
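
For illustration, the coverage test described above reduces to two conditions: how much was spent on training compute, and whether the model is offered in New York. The following Python sketch encodes that reading; the class and field names are hypothetical, and the dollar figures come from this article rather than the statute's precise legal definitions.

```python
# Minimal sketch of the RAISE Act coverage test as described above.
# All names are illustrative; thresholds are the article's figures.
from dataclasses import dataclass

COMPUTE_COST_THRESHOLD_USD = 100_000_000  # covered if training compute > $100M
MAX_CIVIL_PENALTY_USD = 30_000_000        # penalty ceiling per violation

@dataclass
class ModelProfile:
    training_compute_cost_usd: float
    available_in_new_york: bool

def is_covered(model: ModelProfile) -> bool:
    """Return True if the model appears to meet both coverage criteria."""
    return (
        model.training_compute_cost_usd > COMPUTE_COST_THRESHOLD_USD
        and model.available_in_new_york
    )

# Example: a frontier-scale model offered to New York users is covered.
frontier = ModelProfile(training_compute_cost_usd=2.5e8, available_in_new_york=True)
if is_covered(frontier):
    print(f"Covered: safety reports required; penalties up to ${MAX_CIVIL_PENALTY_USD:,}.")
```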

Unlike some earlier proposals, the RAISE Act does not require a “kill switch” for AI systems, nor does it hold businesses that fine-tune existing models liable for catastrophic outcomes. These exclusions are intended to address prior criticisms that strict regulations could impede technological progress or disadvantage smaller, innovative companies.

Reactions and Criticism

The bill has drawn strong opinions from both advocates and critics. While AI safety proponents see it as an overdue step, some industry leaders, particularly in Silicon Valley and at investment firms, worry it could slow progress or lead tech giants to stop launching their most advanced models in New York. Anthropic, a leading AI lab, has acknowledged the bill's broad scope and raised concerns about unintended impacts on smaller firms. The bill's sponsors counter that its provisions are narrowly focused and should not affect startups or limit access to AI products within the state. Assemblymember Alex Bores argued that, given the size of New York's market, most companies would not risk pulling out over the new requirements.

What Happens Next?

The RAISE Act now awaits the governor's decision to sign it into law, amend it, or veto it. If enacted, it will mark a significant shift in how AI is governed at the state level and may set the stage for broader U.S. policies on AI safety and transparency.

DeepFounder AI Analysis

Why it matters

The RAISE Act represents a significant inflection point for the tech ecosystem. Startups and established companies alike are now reminded that the regulatory landscape for AI is rapidly evolving. New York’s push for transparency and accountability signals that states are willing to lead where federal action lags, especially on emerging technologies with broad societal risks. For founders, this shift means that AI-driven products and platforms may soon face stricter oversight, particularly regarding safety testing and public disclosures. Proactive compliance and ongoing dialogue with policymakers will become crucial strategic considerations.

Risks & opportunities

While the law’s primary aim is to mitigate catastrophic risks, it also introduces a new compliance burden for the largest AI developers and potential uncertainty for companies operating at the edge of the regulatory threshold. However, these requirements may translate into trust-building opportunities—companies excelling in transparency and safety could enjoy a competitive edge with enterprise clients, governments, and the public. Historically, sectors like fintech and healthtech have seen similar patterns, where early adaptation to regulation yielded long-term leadership.

Startup idea or application

One actionable direction: a SaaS platform specializing in AI risk and compliance management for growth-phase startups and enterprise AI teams. This solution would automate the production of required safety reports, monitor for model misuse or security breaches, and keep track of compliance with evolving state and federal AI policies. By lowering operational barriers to regulatory adherence, such a tool could attract early adopters and position itself as essential infrastructure as AI regulations multiply worldwide.
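
To make the idea concrete, here is a minimal sketch of such a platform's core: an incident log that timestamps events like model misuse and renders a draft safety report. Every name here (Incident, ComplianceLog, the incident kinds) is hypothetical; real report formats would be dictated by the final regulations, not by this code.

```python
# Hypothetical core of an AI compliance-tracking tool: record incidents,
# then render a plain-text draft report for disclosure workflows.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Incident:
    kind: str          # e.g. "model_misuse", "model_theft", "security_breach"
    description: str
    occurred_at: datetime

@dataclass
class ComplianceLog:
    incidents: list[Incident] = field(default_factory=list)

    def record(self, kind: str, description: str) -> None:
        """Append an incident with a UTC timestamp for later disclosure."""
        self.incidents.append(Incident(kind, description, datetime.now(timezone.utc)))

    def safety_report(self) -> str:
        """Render a plain-text summary suitable as a starting draft."""
        lines = [f"Safety report generated {datetime.now(timezone.utc):%Y-%m-%d}"]
        for i in self.incidents:
            lines.append(f"- [{i.occurred_at:%Y-%m-%d}] {i.kind}: {i.description}")
        return "\n".join(lines)

log = ComplianceLog()
log.record("model_misuse", "Jailbreak prompt bypassed content filter in production.")
print(log.safety_report())
```

The design choice worth noting is the append-only log: regulators tend to ask when an incident was detected and what was disclosed, so immutable, timestamped records are the natural primitive to build around.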

Tags: AI Safety · AI Regulation · New York · Startup Policy · Frontier Models

Visit DeepFounder AI to learn how to start your own startup, validate your idea, and build it from scratch.

📚 Read more articles in our DeepFounder AI blog.
