
AI Regulations

What Recent FTC Actions Mean for Businesses Using AI

Recent FTC actions and guidance have significant implications for organizations using AI.

Belfast

3 min read time

Topics: Regulation · FTC Guidance · Consumer Protection


American regulation of AI is a patchwork. To start with, existing laws at all levels apply to AI systems. As for rules specific to AI, NIST’s authoritative AI Risk Management Framework is voluntary, the Biden administration’s Trustworthy AI Executive Order and follow-on OMB guidance apply only to the federal government, and Colorado is currently the only state to broadly regulate AI systems. Other jurisdictions, such as New York City and Illinois, have regulated specific AI use cases, like automated hiring.

Currently, U.S. Federal Trade Commission (“FTC”) guidance and enforcement actions provide the clearest picture of how the federal government expects businesses to deploy AI systems. Recent FTC guidance has moved quickly to address current events. In February, the FTC finalized a rule banning the use of AI deepfakes to impersonate organizations or government agencies. The same month, it warned AI companies against surreptitiously changing their terms of service with retroactive effect. These activities come in addition to its antitrust scrutiny of investments and acquisitions by large technology companies. In May, FTC Chair Lina Khan wrote an opinion piece in the New York Times describing her proposed approach to AI regulation.

Why was December's Rite Aid case so important?

In December 2023, the FTC announced a settlement with the Rite Aid pharmacy chain. Rite Aid had “used facial recognition technology in hundreds of its retail pharmacy locations to identify patrons that it had previously deemed likely to engage in shoplifting or other criminal behavior,” according to an FTC complaint. In the settlement, Rite Aid agreed to refrain from deploying facial recognition systems for five years at its retail stores or online and to delete photos and videos of customers improperly used in facial recognition systems, including “data, models and algorithms” derived from such use.

What are the Implications for Organizations Using AI?

The FTC’s settlement with Rite Aid was notable for two reasons. First, it demonstrated the FTC’s interpretation of AI discrimination. Second, it marked the continued use of model disgorgement as a remedy.

AI Discrimination

On AI discrimination, the FTC noted that Rite Aid’s deployment of facial recognition systems did not account for differing false-positive rates across demographic groups. It also suggested that the pattern of stores chosen for deployment would have a disproportionate discriminatory impact on minority groups.
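The kind of disparity the FTC pointed to can be surfaced with a simple per-group metric check. The Python sketch below computes false-positive rates per demographic group for a binary match/no-match classifier; the group names and evaluation data are hypothetical, and a real audit would use far larger samples and statistical significance testing.

```python
from collections import defaultdict

def false_positive_rates(records):
    """Compute the false-positive rate per demographic group.

    `records` is a list of (group, actual, predicted) tuples, where
    actual/predicted are booleans (True = flagged as a match).
    A false positive is predicted=True when actual=False.
    """
    fp = defaultdict(int)         # false positives per group
    negatives = defaultdict(int)  # actual negatives per group
    for group, actual, predicted in records:
        if not actual:
            negatives[group] += 1
            if predicted:
                fp[group] += 1
    return {g: fp[g] / n for g, n in negatives.items() if n}

# Hypothetical evaluation data: (group, actually a match?, model flagged?)
data = [
    ("group_a", False, True),  ("group_a", False, False),
    ("group_a", False, False), ("group_a", False, False),
    ("group_b", False, True),  ("group_b", False, True),
    ("group_b", False, False), ("group_b", False, False),
]
rates = false_positive_rates(data)
# group_a: 1/4 = 0.25, group_b: 2/4 = 0.5 — a disparity worth investigating
```

A gap like the one above, held across a large enough sample, is exactly the signal that should trigger review of a deployment before it affects customers.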

Model Disgorgement

The FTC also required model disgorgement, a relatively new remedy that it pioneered in 2019, which requires an organization to delete data, models and algorithms derived from improperly used AI systems. Since then, the FTC has required model disgorgement in several cases, including those related to Cambridge Analytica, Everalbum, WW (formerly Weight Watchers) and Ring. The FTC will likely require model disgorgement in future AI enforcement actions.

Lessons

Lessons that organizations should take away from the Rite Aid episode include:

· Conduct AI risk assessments

· Put in place comprehensive training programs for employees overseeing high-risk AI systems

· Consider the context of AI deployment, including potential discriminatory impacts

· Scrutinize datasets for accuracy, bias and fitness for purpose

· Be transparent with people about how their data will be used

· Be thoughtful about how, and to what extent, to describe AI systems to the public and to regulators

· Evaluate the privacy and responsible AI practices of vendors

Enzai is here to help

Enzai’s product can help your company deploy AI in accordance with best practices and emerging regulations, standards and frameworks, such as the EU AI Act, the Colorado AI Act, the NIST AI RMF and ISO/IEC 42001. To learn more, get in touch here.

