March 19, 2026

What Are the Legal Consequences of Not Disclosing AI Use to Customers in California

California's AI disclosure laws carry penalties that accumulate daily. A single month of non-compliance under AB 853 creates minimum exposure of $150,000. Private lawsuits add a second layer of risk that does not require the government to act first.


California AI laws that require disclosure

California has passed several AI transparency laws with distinct obligations and enforcement paths. AB 2013 is already in effect as of January 2026. AB 853 takes full effect August 2, 2026. The penalties range from $5,000 per violation per day under AB 853 to class action exposure under AB 2013, where enforcement runs through California's Unfair Competition Law rather than a direct fine structure. City attorneys and county counsel can bring cases alongside the state Attorney General.

AB 853 AI Transparency Act

AB 853 amends California's original AI Transparency Act (SB 942) and pushes the compliance deadline to August 2, 2026. The law applies to generative AI systems with over one million monthly users that are publicly accessible within California.

Covered providers face requirements for both "latent" and "manifest" disclosures. Latent disclosures are embedded in content metadata. Manifest disclosures are visible to users viewing the content. Providers also have obligations to offer free detection tools.

The threshold matters here. If your AI system doesn't reach one million California users monthly, AB 853's specific requirements don't apply to you directly. However, general consumer protection law still does.

AB 2013 Training Data Transparency Law

AB 2013 requires developers of generative AI systems to publicly disclose detailed information about their training data sources. The law took effect January 1, 2026.

Here's what makes AB 2013 different from AB 853: the statute contains no explicit per-violation penalty. Instead, enforcement runs through California's Unfair Competition Law (UCL). That distinction matters because UCL enforcement opens the door to both government action and private lawsuits, including class actions.

AB 2013 is also under active legal challenge. xAI filed a federal lawsuit against the California Attorney General arguing the law compels disclosure of trade secrets in violation of the Fifth Amendment. A federal court denied the request to block enforcement, but the challenge is ongoing.

SB 243 Companion Chatbot Law

SB 243 regulates "companion chatbots," which are AI systems designed to meet users' social needs through human-like, relationship-building interactions. The law requires specific disclosures about the AI nature of companion chatbots.

Most business chatbots used for customer service or sales are explicitly excluded from SB 243. If your chatbot handles support tickets or answers product questions, this law likely doesn't apply. The focus is on AI systems that simulate ongoing personal relationships.

Penalties and fines for violating California AI disclosure laws

The financial exposure from non-compliance adds up faster than most business owners expect. Understanding the penalty structure helps you assess your actual risk.

Civil penalties under state law

AB 853 carries a civil penalty of $5,000 per violation, plus attorneys' fees and costs. The critical detail: each day a provider remains in violation counts as a separate violation.

Run the math on that. One month of non-compliance creates minimum exposure of $150,000. Three months? $450,000. The daily accumulation turns what looks like a manageable fine into potentially unlimited liability.
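That daily accumulation is easy to model. The sketch below assumes the statutory $5,000 figure and one violation per day of continued non-compliance; it excludes attorneys' fees and costs, which the statute adds on top.

```python
# Minimum AB 853 penalty exposure, assuming $5,000 per violation
# and one violation accruing for each day of non-compliance.
PENALTY_PER_DAY = 5_000  # dollars

def minimum_exposure(days_noncompliant: int) -> int:
    """Minimum civil penalty exposure, excluding attorneys' fees and costs."""
    return PENALTY_PER_DAY * days_noncompliant

print(minimum_exposure(30))  # one month    -> 150000
print(minimum_exposure(90))  # three months -> 450000
```

The linear growth is the point: there is no cap in the daily-accrual structure, so exposure scales directly with how long the violation persists.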

For frontier AI developers with annual revenue exceeding $500 million, SB 53 creates penalties up to $1,000,000 per violation. Most small and mid-market businesses won't hit that threshold.

FTC enforcement for deceptive AI practices

Federal oversight operates alongside California law, not in sequence with it. The FTC treats undisclosed AI use as a potentially deceptive practice under existing consumer protection authority, which means a company can face California state penalties and FTC enforcement simultaneously for the same conduct. The FTC has been increasingly active on AI transparency, particularly around AI-generated content that consumers might mistake for human-created material.

Consumer lawsuits and private legal action

Beyond government enforcement, California's UCL creates a private right of action. Consumers and competitors can file lawsuits directly. They don't have to wait for the Attorney General to act.

Class action exposure is real. When a plaintiff's attorney files under the UCL, they'll examine your self-reported disclosures for gaps. Any inconsistency between what you claim and what you actually do becomes evidence.

Who enforces California AI transparency laws

Knowing who can take action against your business helps you understand where enforcement pressure comes from.

California Attorney General authority and contact

The California Attorney General has primary enforcement authority for AB 853 and related AI transparency laws. The AG can bring civil actions seeking penalties, injunctive relief, and attorneys' fees.

Enforcement isn't limited to the state level, though. City attorneys and county counsel also have authority to bring civil actions. That's broader than many businesses assume. Local prosecutors can act independently of the state AG.

Federal Trade Commission oversight

The FTC maintains concurrent jurisdiction over deceptive AI practices under Section 5 of the FTC Act. If your AI use misleads consumers about whether they're interacting with a human or about how their data is being used, federal enforcement becomes possible.

The FTC has issued guidance specifically addressing AI marketing claims and synthetic content. Their position: existing consumer protection law already covers AI deception, even without new AI-specific statutes.

How violations get reported

Violations typically surface through consumer complaints, competitor reports, or regulatory investigations triggered by public attention. A viral social media post about misleading AI use can accelerate enforcement timelines significantly.

Which businesses are subject to California AI law

Not every business faces the same obligations. The laws contain specific thresholds and exemptions.

Large online platforms

AB 853's primary obligations fall on "covered providers," meaning generative AI systems with over one million monthly users accessible within California. If you're below that threshold, the law's specific disclosure requirements don't apply directly.

Smaller businesses aren't entirely off the hook, however. General consumer protection law still applies, and the FTC's deceptive practices authority has no size threshold.

Generative AI providers and hosting platforms

AB 2013 applies to developers of generative AI systems made available to Californians, regardless of whether the terms of use include compensation. Free tools aren't exempt.

If you're giving away an AI-powered tool to California users and you developed or substantially modified it, AB 2013's training data disclosure requirements apply.

Businesses with customer-facing AI tools

Companies using AI chatbots, content generation, or automated decision-making with California customers face disclosure obligations even if they didn't build the underlying AI system. The question is whether your use of AI affects customer interactions in ways that warrant disclosure under general consumer protection principles.

Internal AI systems available only within your corporate family, such as affiliates and subsidiaries, are exempt; internal-only use doesn't trigger disclosure. The obligations kick in when AI touches customer-facing interactions.

What California law requires you to disclose about AI

The specific disclosure requirements vary by law, but they share a common theme: customers deserve to know when they're interacting with AI.

  • AB 853: AI-generated content labeling, displayed as latent and manifest disclosures in the content

  • AB 2013: training data sources and documentation, displayed on a publicly accessible website

  • SB 243: the AI nature of companion chatbots, disclosed before and during user interaction

AI-generated content labeling

AB 853 requires visible labeling for synthetic media and AI-generated content. Latent disclosures are embedded in the content itself as metadata. Manifest disclosures are visible to users viewing the content.

Training data documentation

AB 2013 requires publicly accessible documentation of data sources used to train generative AI systems. The documentation includes information about the types of data, sources, and any known limitations.

Chatbot and automated interaction notices

When customers interact with AI systems, particularly companion chatbots designed for ongoing engagement, disclosure of the AI nature is required. The timing matters: disclosure before interaction begins, not buried in terms of service.

Exemptions under California AI disclosure requirements

Several carve-outs exist that may reduce your compliance burden.

Small business thresholds

AB 853's primary obligations apply to systems with over one million monthly California users. Smaller operations fall below this threshold, though general consumer protection law still applies.

Internal business operations

AI systems used purely for internal operations, not customer-facing, don't trigger disclosure requirements. If your AI tools are only accessible to employees and affiliates, the public disclosure obligations don't apply.

Research and development exceptions

Limited R&D exemptions exist for AI systems in development that aren't yet publicly deployed. Once a system becomes publicly accessible, exemptions typically end.

How California AI laws compare to other US AI regulations

California isn't operating in isolation: lawmakers in 45 states have introduced 1,561 AI-related bills in 2026 alone. Understanding the broader regulatory landscape helps you build compliance that works across jurisdictions.

Colorado AI Act

Colorado's AI Act creates similar disclosure requirements with a June 2026 enforcement deadline. Companies operating in both states face overlapping obligations. Building compliance for one often satisfies the other.

Upcoming federal AI transparency requirements

Federal preemption is a live issue. In December 2025, an executive order proposed establishing a uniform federal policy framework for AI that could preempt inconsistent state laws and condition federal funding on compliance. The House has also passed a 10-year moratorium on state AI regulation as part of budget reconciliation.

The strategic implication: third-party certification can help insulate your business from legal uncertainty. If state law is upheld, you're compliant. If federal law preempts it, you have documentation of good-faith effort. Either way, certification supports a defensible position.

How to prove compliance with California AI disclosure laws

Self-reported compliance creates exposure in litigation. When a plaintiff's attorney examines your disclosures, gaps become evidence. Building defensible documentation now protects you later.

Internal documentation requirements

Maintain an AI inventory that covers your AI use comprehensively:

  • AI systems in use: What tools, what purposes, what customer touchpoints

  • Disclosure practices: When and how you notify customers about AI use

  • Training data sources: For any AI you've developed or substantially modified

  • Decision-making processes: Who approved AI deployments and when

Third-party certification and public registries

Independent verification creates documentation that self-attestation cannot replicate. When your compliance is certified by a third party and listed in a public registry, you have defensible proof that goes beyond "we said we're compliant."

A verified AI transparency certification, published usage policy, and public trust signal demonstrate compliance in a way that's verifiable by customers, regulators, and courts.

Building audit-ready transparency records

Treat compliance documentation as a living system, not a one-time project. Update records when you add new AI tools, change disclosure practices, or modify customer-facing AI interactions. The companies that build this discipline now won't scramble when enforcement arrives.

Turn AI transparency into a competitive advantage

The companies building disclosure infrastructure now gain something beyond compliance. They get documentation, credibility, and a trust signal that competitors without certification cannot replicate. Customers in regulated markets increasingly check before buying. They look for visible accountability. They find it or they do not.

Compliance does not have to be a cost center. Done right, it becomes the answer to a question your buyers are already asking.

Get certified for AI transparency with SiteTrust.

Frequently asked questions about California AI disclosure laws

Is it illegal to not disclose AI use in California?

Yes, under specific California laws like AB 853 and AB 2013, certain businesses face civil penalties for failing to disclose AI use. Whether a violation applies depends on the size of your business, the type of AI in use, and how it interacts with California customers.

What is the 30% rule for AI?

The "30% rule" refers to content thresholds in some AI labeling contexts, but California's current AI transparency laws don't use this specific standard. Disclosure requirements focus on whether AI was used, not on percentage thresholds.

Can California customers sue businesses for undisclosed AI use?

Yes. California's Unfair Competition Law creates a private right of action that allows consumers and competitors to file lawsuits directly. Class action exposure exists for businesses with systematic disclosure failures.

Do California AI transparency laws apply to out-of-state businesses?

If you serve California customers with AI-powered tools or content, California's AI transparency laws likely apply regardless of where your business is headquartered. The jurisdictional trigger is serving California users, not physical presence in the state.

When does the California AI Training Data Transparency Law take effect?

AB 2013 took effect January 1, 2026. Businesses that develop or substantially modify generative AI systems available to California users face immediate disclosure obligations for training data sources.

Ready to become a founding member?

Apply for certification today

Damjan Stankovic

Growth Marketing Lead