Understanding California’s New AI Disclosure Laws
California shifts to mandatory AI transparency with SB 942 and SB 53. Discover the new requirements for AI providers, from latent disclosures to incident reporting.

California has established some of the most comprehensive frameworks for artificial intelligence (AI) transparency in the United States through two landmark pieces of legislation. Senate Bill 942 (SB 942), known as the California AI Transparency Act, requires providers of generative AI systems to implement content watermarking and disclosure mechanisms. Senate Bill 53 (SB 53), the Transparency in Frontier Artificial Intelligence Act (TFAIA), establishes safety and reporting requirements for developers of advanced AI models.
These laws mark a significant shift from voluntary self-disclosure to mandatory legal obligations. While California is not the only state to enact AI legislation, the effectiveness of these transparency mandates remains to be seen nationwide, as many state AI laws are not yet in effect or have only recently taken effect.
Overview of California’s AI Transparency Laws
California AI Transparency Act (SB 942)
Enacted in September 2024, SB 942 takes effect on August 2, 2026, following a delay from its original January 1, 2026 date. The law originally applied to “covered providers”: persons that “create, code, or otherwise produce” generative AI systems that have over one million monthly visitors or users and are publicly accessible in California. However, AB 853 expanded the law’s scope to also include large online platforms, generative AI system hosting platforms, and capture device manufacturers.
For covered providers, the law imposes three main compliance obligations. First, providers must offer free, publicly accessible AI detection tools that allow users to verify whether content was generated by AI. Second, AI-generated or substantially AI-altered content must include “latent disclosures,” hidden metadata containing the provider name, system details, creation timestamp, and a unique identifier. Third, providers must give users the option to include “manifest disclosures,” visible labels indicating AI generation, in content created or altered by their GenAI system. The label must be clear, conspicuous, appropriate for the type of content, and understandable to a reasonable person.
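To make the latent-disclosure requirement concrete, the sketch below shows one way a provider might embed the required fields as hidden metadata in a generated PNG. The field names, the “ai_disclosure” key, the example provider name, and the use of PNG text chunks are all illustrative assumptions; the statute does not prescribe a format, and production systems would more likely adopt a provenance standard such as C2PA.

```python
import json
import uuid
from datetime import datetime, timezone

from PIL import Image
from PIL.PngImagePlugin import PngInfo

def build_latent_disclosure(provider: str, system: str) -> dict:
    """Assemble the fields SB 942 requires in a latent disclosure:
    provider name, system details, creation timestamp, and a unique ID."""
    return {
        "provider": provider,
        "system": system,
        "created_at": datetime.now(timezone.utc).isoformat(),
        "disclosure_id": str(uuid.uuid4()),
    }

# Embed the disclosure as a PNG text chunk. This carrier and the
# "ai_disclosure" key are illustrative, not mandated by the statute.
image = Image.new("RGB", (512, 512))  # stand-in for generated content
info = PngInfo()
info.add_text("ai_disclosure", json.dumps(
    build_latent_disclosure("ExampleAI Inc.", "examplegen-v2")))
image.save("output.png", pnginfo=info)
```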
Large online platforms, defined as public-facing social media, file-sharing, or mass-messaging platforms with over 2 million unique monthly users, face different obligations starting January 1, 2027. These platforms must detect whether provenance data is embedded in content, provide a user interface disclosing when content was AI-generated, and allow users to inspect available metadata. Similarly, GenAI system hosting platforms must refrain from knowingly distributing AI systems that lack required disclosures, also effective January 1, 2027.
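On the platform side, a minimal check for embedded provenance data might look like the following, which reads back the hypothetical “ai_disclosure” chunk from the earlier sketch. A real platform would need to handle many content formats and provenance standards, not just PNG text chunks.

```python
import json
from PIL import Image

def inspect_provenance(path: str) -> dict | None:
    """Return embedded AI disclosure data from a PNG's text chunks, if any."""
    with Image.open(path) as img:
        # PNG images expose text chunks via .text; the key mirrors the
        # illustrative embedding sketch above.
        raw = getattr(img, "text", {}).get("ai_disclosure")
    return json.loads(raw) if raw else None

print(inspect_provenance("output.png") or "No provenance data found")
```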
Finally, beginning January 1, 2028, capture device manufacturers must give users the ability to include latent disclosures and must embed latent disclosures in content captured by the device.
The California Attorney General, city attorneys, or county counsel may enforce the law with civil penalties up to $5,000 per violation, plus attorneys’ fees and costs. Each day of violation constitutes a discrete violation.
Transparency in Frontier Artificial Intelligence Act (SB 53)
On September 29, 2025, Governor Gavin Newsom signed SB 53 into law, and it took effect on January 1, 2026. The law regulates “large frontier developers”: developers of frontier models whose annual gross revenues, together with those of their affiliates, exceed $500 million. Large frontier developers must publish a frontier AI framework describing how their approach incorporates national and international standards and industry best practices.
The law also establishes incident reporting requirements. Developers must report critical safety incidents to the California Office of Emergency Services within 15 days, or within 24 hours if an imminent risk of death or serious injury exists. Critical safety incidents include unauthorized access to model weights resulting in injury or death, harm from the materialization of catastrophic risks, loss of control causing death or injury, and models using deceptive techniques to subvert controls.
Any failure by a large frontier developer to publish or transmit required documents may result in civil penalties of up to $1 million per violation.
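The two reporting windows can be captured in a few lines. The sketch below is a simple illustration of the 15-day versus 24-hour rule described above, not a compliance tool; the function name and parameters are assumptions for the example.

```python
from datetime import datetime, timedelta, timezone

def reporting_deadline(discovered_at: datetime, imminent_risk: bool) -> datetime:
    """Deadline for reporting a critical safety incident to the California
    Office of Emergency Services under SB 53: 24 hours where an imminent
    risk of death or serious injury exists, otherwise 15 days."""
    window = timedelta(hours=24) if imminent_risk else timedelta(days=15)
    return discovered_at + window

found = datetime(2026, 3, 1, 9, 0, tzinfo=timezone.utc)
print(reporting_deadline(found, imminent_risk=False))  # 2026-03-16 09:00:00+00:00
print(reporting_deadline(found, imminent_risk=True))   # 2026-03-02 09:00:00+00:00
```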
The Challenge of Self-Certification
While these laws establish clear compliance obligations, they rely heavily on companies’ self-reported adherence, creating challenges around verification and credibility. SB 942 mentions third parties only in the context of licensees, and SB 53 refers to third parties only in connection with catastrophic risk assessments.
California’s AI transparency laws establish requirements but do not mandate independent auditing or third-party verification of compliance claims. Companies determine whether their practices meet statutory definitions of adequate disclosure, appropriate risk assessment, and sufficient incident reporting.
This structure differs from established regulatory frameworks in other domains. For example, publicly traded companies must have their financial statements audited by independent certified public accountants under SEC rules. The absence of mandated third-party verification in AI transparency regulations means compliance claims rest on the company’s own assessment and disclosure.
Strategic Considerations for Organizations
Companies subject to California’s AI transparency laws should consider several factors in their compliance approach beyond the minimum legal requirements.
Maintaining detailed records of AI system development, deployment, risk assessments, and disclosure practices creates the foundation for demonstrating compliance. This includes technical documentation of AI systems and their capabilities, records of risk identification and mitigation processes, evidence of disclosure implementation and user accessibility, incident logs and response procedures, and training records for personnel involved in AI governance.
Different stakeholders also require different forms of compliance communication: regulators need detailed evidence of legal compliance, consumers need accessible explanations of AI use and protections, and business partners may require contractual assurances.
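As a starting point for the audit trail described above, the illustrative schema below sketches one way to structure such records. The field names and record types are assumptions for the example, not a statutory requirement.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ComplianceRecord:
    """One entry in an AI-governance audit trail (illustrative only)."""
    system_name: str       # which AI system the record concerns
    record_type: str       # e.g. "risk_assessment", "disclosure_check", "incident"
    description: str       # what was done or observed
    created_at: datetime   # when the record was made
    evidence_uri: str | None = None  # pointer to supporting documentation
```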
As AI transparency requirements expand, companies must consider not only the law on the books but also best practices that treat the mishandling of AI as a business risk to be managed.
Organizations should anticipate additional laws in the coming years as the technology evolves, potential federal regulation that could supersede or complement state laws, and international regulatory developments, particularly in the EU, where the AI Act, passed in 2024, establishes a risk-based framework for AI systems.
Conclusion
California’s AI transparency laws establish mandatory disclosure obligations that move beyond voluntary industry practices. The effectiveness of these requirements depends significantly on how compliance is verified and communicated to stakeholders, including regulators, consumers, business partners, and investors.
While self-certification fulfills the letter of current legal requirements, organizations developing or deploying AI systems in California face strategic decisions about whether to pursue independent verification mechanisms. These mechanisms may provide stronger assurance to stakeholders and reduce the chance of liability in the long term.
The question is not whether AI transparency laws will continue to proliferate in the United States, but whether they will prove effective, and whether future mandates will require independent verification beyond self-reported disclosure.
This article provides general information about California AI transparency laws and should not be construed as legal advice. Organizations should consult qualified legal counsel regarding their specific compliance obligations.
