AI Regulations

New York Assembly considers AI Consumer Protection Act

AICPA takes a risk-based approach to combating potential discrimination

Belfast

3 min read time

Regulation

NY AICPA

Topic

Discrimination Risk

Topics

US AI Policy
Political Transitions
Regulatory Philosophy
Historical Comparison


Despite uncertainty about the Trump administration’s approach to AI policy, states are powering ahead with new AI laws. Some of these are focused on specific use cases, like Georgia’s SB 164, which bans the use of automated decision-making tools to set employee wages. Others are more focused on the risks of frontier AI systems, such as Illinois’ HB 3506, which introduces assessment and audit requirements for developers of cutting-edge models.

Lawmakers are also increasingly looking at AI use from the lenses of consumer protection and combating discrimination. Though Colorado is the only state to date to have enacted consumer protection legislation related to AI, the Texas legislature has taken up a comprehensive proposal, with similar bills under consideration in a growing number of states.

In January, similar legislation was introduced in the New York Assembly (A768). The New York AI Consumer Protection Act (AICPA), which is primarily concerned with combating discrimination, follows the general structure of the Colorado law and the Texas proposal: it divides requirements between developers and deployers, focuses on the risks of high-risk AI systems, and requires organizational policies for responsible AI use.

Though some commentators have welcomed the bill, especially in light of recent policy reversals at the federal level, others have criticized it as overly focused on procedure. For example, Jeffrey Sonnenfeld and Stephen Henriques argue that lawmakers should focus instead on applying existing consumer protection laws to AI systems.

What does AICPA cover?

AICPA would regulate developers and deployers of “high-risk” AI systems. It defines high-risk systems as those that, when deployed, make or are a substantial factor in making “consequential decisions,” which include those that have a “material legal or similar effect” in fields including:

- Education enrollment or educational opportunity
- Employment or employment opportunity
- Financial or lending service
- Essential government service
- Health care service
- Housing or housing opportunity
- Insurance
- Legal service

Several AI uses are carved out from this definition of high-risk systems as exceptions.

Requirements for Developers

AICPA would require developers to exercise “reasonable care” to protect consumers from risks related to algorithmic discrimination and to publicly describe their AI systems and approaches to preventing algorithmic discrimination. It would also require them to provide deployers with documentation for high-risk systems, which includes intended uses, harmful and inappropriate uses, training data, and expected outputs.

Developers would have a “rebuttable presumption” of reasonable care if they undergo independent third-party audits by a government-approved auditor. Such audits would focus on the potential for discrimination against protected classes.

Requirements for Deployers

Requirements for deployers would include publishing similar statements of AI use, putting in place a risk management policy and program for high-risk AI systems (equivalent to the NIST AI Risk Management Framework or ISO/IEC 42001), completing annual AI impact assessments for high-risk AI systems, and notifying consumers when an AI system is a substantial factor in decision-making.

In the case of an adverse decision, deployers would also have to provide the main reason for the decision and provide an opportunity for the impacted consumer to update any incorrect personal information used during decision-making. In certain circumstances, deployers would be able to contract with developers, so that the latter take on some of these compliance requirements.

Next Steps

If the bill is enacted, it would take effect on January 1, 2027. Though AICPA is still at an early stage of the legislative process, it is likely only the first in a series of AI-related bills that New York will consider.

Enzai is here to help

Enzai’s AI GRC platform can help your company deploy AI in accordance with best practices and emerging regulations, standards and frameworks, such as the EU AI Act, the Colorado AI Act, the NIST AI RMF and ISO/IEC 42001. To learn more, get in touch here.
