March 19, 2026

What Happens When You Don't Disclose AI to Customers

Most companies using AI right now say nothing about it to customers. Some believe it does not matter. Others think disclosure will cost them sales. Both are wrong.

And the cost of being wrong is rising. When customers find out on their own, trust breaks fast. Regulators are building enforcement tools. Other companies are already turning transparency into a selling point. This piece covers what actually happens when AI use goes undisclosed, where the legal lines sit today, and what separates companies that are winning on this from those that will have to catch up.

How hiding AI destroys customer trust

The damage from undisclosed AI use does not start with a complaint. It starts with a feeling. A chatbot that sounds too smooth. An email that reads personal but lands hollow. A recommendation that seems to know too much. Customers pick up on these signals before they can name them. Some eventually figure out what bothered them; others simply stop buying and never say why.

The 2025 KPMG and University of Melbourne study surveyed 48,000 people across 47 countries. The findings paint a clear picture:

  • 54% are already skeptical of AI systems before any deception occurs

  • 70% can't reliably tell whether content is AI-generated or human-made

  • 83% would trust AI more if companies had visible accountability measures in place

That last number says the most. Customers aren't anti-AI. They're anti-surprise.

Why companies avoid disclosing AI use

Most companies aren't hiding AI out of malice. The hesitation usually comes from three places, and each one makes sense on the surface.

Fear of negative customer reactions

The worry goes something like this: "If customers know a bot wrote that email, they'll think we don't care about them."

University of Arizona research found that disclosure can reduce trust by roughly 20 percent, but only when the disclosure feels defensive or forced. When companies frame AI as a benefit to the customer, the penalty shrinks or disappears. The problem is not disclosure itself. It is how companies handle it.

Competitive pressure from industry norms

When competitors stay quiet, speaking up feels risky. Why be the first to admit something that might cost you?

Two years ago, that logic held. It doesn't anymore. Regulations are already on the books in the EU and several US states, with enforcement dates approaching. Companies that build disclosure practices now will look like leaders. Companies that wait will look like they got caught.

Confusion about what disclosure requires

No universal standard exists yet. Many businesses genuinely don't know what to disclose, where to put it, or how detailed to be. Is a grammar checker AI? What about a recommendation engine? The lines blur quickly.

That confusion creates opportunity. Companies that figure out disclosure first—and make their transparency verifiable—will own the credibility position in their market while competitors are still debating what counts.

Is it illegal to not disclose AI to customers?

The legal landscape is moving faster than most companies realize. Some rules are already enforceable. Others take effect within the next year.

FTC guidelines on AI marketing claims

The FTC doesn't need a specific AI law to take action. Under existing deceptive practices rules, if a company uses AI to generate customer communications without disclosure, and a court finds that fact material to the purchase decision, the omission qualifies as deception.

In 2025, the FTC issued fines of up to $51,000 per violation for fake AI-generated testimonials. The pattern behind its Operation AI Comply sweep is clear: regulators are paying attention, and they're using tools they already have.

EU AI Act transparency requirements

The EU AI Act's core obligations take effect in August 2026. They apply to any company serving EU customers, not just companies based in Europe. A US business with EU website traffic is already in scope.

High-risk AI systems face the strictest requirements: disclosure to users, documentation of how decisions are made, and human oversight mechanisms. Noncompliance penalties reach €15 million or 3% of global turnover. Even lower-risk systems have labeling obligations.

Colorado AI Act and US state regulations

The Colorado AI Act becomes enforceable in June 2026. It's the first US state law of its kind, requiring companies to notify consumers when AI influences decisions that affect them, with penalties up to $20,000 per violation counted separately for each affected consumer.

Regulation        Effective Date   Key Requirement
FTC Guidelines    Now              No deceptive AI marketing claims
Colorado AI Act   June 2026        Notify consumers of AI decision-making
EU AI Act         August 2026      Disclose high-risk AI to users

California, Oregon, Texas, and other states are advancing similar legislation. This isn't a one-state issue—it's a pattern spreading across jurisdictions.

Business consequences of not disclosing AI

Legal exposure is real, but business consequences often arrive first. They show up in metrics that are harder to connect to their cause.

Revenue loss from eroded trust

The KPMG study found that confidence in commercial organizations to use AI responsibly sits at just 60% in the US, UK, Canada, France, and Australia. That's the baseline trust floor—before any deception happens.

When customers discover undisclosed AI use, they don't just lose trust in that interaction. They feel confirmed in suspicions they already held. One bad experience validates a broader skepticism that affects future purchases, referrals, and willingness to share data.

Regulatory fines and legal liability

Beyond FTC enforcement, companies face class action exposure when undisclosed AI affects consumer decisions. The legal theory is straightforward: if AI influenced a purchase and the customer wasn't informed, that's a potential claim for damages.

The cost of defending against such claims, even unsuccessful ones, often exceeds the cost of building proper disclosure practices from the start.

Reputation damage when hidden AI is exposed

Exposure rarely comes from regulators first. It comes from employees who talk, partners who notice, journalists who investigate, and social media that amplifies.

Consider what happened when tech reviewer MKBHD (Marques Brownlee) posted about companies using AI voice clones of him without consent. That single post received 35,000 likes and 3.7 million views. One person with reach, one discovery, one post.

The pattern is predictable. Undisclosed AI gets found by a competitor, a partner, or a journalist. They post. The algorithm picks it up. News outlets cite the post. The brand gets attached to the story. No PR response fully removes it.

What customers actually want to know about your AI use

Good disclosure isn't complicated. Customers want answers to four specific questions:

  • Where AI is used: Which parts of the product or service involve AI

  • What data AI accesses: What personal information the AI processes

  • How AI affects their experience: Whether AI makes decisions that impact them directly

  • What humans still control: Whether people oversee important decisions

That last point carries more weight than most companies expect. The KPMG research found that 84% of people would be more willing to trust an AI system if human intervention to correct or override AI recommendations were possible.

Customers aren't opposed to AI. They want to know where the human stops and the machine starts.
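
For teams that want to operationalize those four answers, they map cleanly onto a small data structure. The sketch below is a hypothetical shape for a published AI policy; none of these field names come from any standard, and the example values are illustrative.

```typescript
// Hypothetical shape for a published AI disclosure policy.
// Field names are invented for illustration; they simply mirror
// the four questions customers ask.
interface AIDisclosure {
  /** Where AI is used: which parts of the product involve AI. */
  touchpoints: string[];
  /** What data the AI accesses. */
  dataProcessed: string[];
  /** Whether AI makes decisions that affect the customer directly. */
  makesCustomerFacingDecisions: boolean;
  /** What humans still control. */
  humanOversight: {
    canOverrideAI: boolean;
    escalationPath: string;
  };
}

// Example entry a company might publish alongside its AI policy page.
const disclosure: AIDisclosure = {
  touchpoints: ["customer service chat", "product recommendations"],
  dataProcessed: ["order history", "support transcripts"],
  makesCustomerFacingDecisions: true,
  humanOversight: {
    canOverrideAI: true,
    escalationPath: "complex issues go to a human agent within two minutes",
  },
};
```

Publishing something this explicit also makes the policy easy for a registry to link to and for customers to verify.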

How to disclose AI use the right way

Disclosure done well becomes a competitive advantage. Done poorly, it creates the exact backlash companies feared in the first place. The difference comes down to execution.

1. Specify where and how you use AI

"We use AI" is too vague to build trust. Specificity matters.

Compare two approaches: "We use AI in our operations" versus "Our customer service chat uses AI to answer common questions. Complex issues go to human agents within two minutes." The second version tells customers exactly what to expect. The first raises more questions than it answers.

2. Explain how AI benefits the customer

Frame disclosure as a feature, not a warning label. Show customers what they gain from AI involvement.

"AI helps us respond to your questions in under 30 seconds, any time of day" lands differently than "This response was generated by artificial intelligence." Both are honest. Only one builds confidence.

3. Make your AI disclosures easy to find

Burying disclosure in terms of service defeats the purpose. Customers don't read terms of service—and they know companies hide things there.

Put disclosure where customers naturally look: product pages, checkout flows, help centers, and anywhere AI touches the customer experience directly. The goal is accessibility, not legal cover.
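
As a concrete illustration, here is a minimal sketch of surfacing a notice inside a chat widget rather than in the terms of service. The container ID and copy are hypothetical placeholders; the point is only that the disclosure renders where the AI interaction actually happens.

```typescript
// Minimal sketch: render an AI disclosure at the point of interaction.
// "support-chat" and the notice text are hypothetical placeholders.
function showAIDisclosure(containerId: string): void {
  const container = document.getElementById(containerId);
  if (!container) return;

  const notice = document.createElement("p");
  notice.setAttribute("role", "note");
  notice.textContent =
    "This chat uses AI to answer common questions. " +
    "Complex issues go to a human agent within two minutes.";

  // Prepend so the notice is visible before the first AI message.
  container.prepend(notice);
}

showAIDisclosure("support-chat");
```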

4. Verify your transparency through independent certification

Self-reported disclosure has limits. Customers have learned to be skeptical of claims companies make about themselves. "We value your privacy" has become meaningless through overuse.

Third-party verification changes the dynamic. When an independent organization certifies a company's AI transparency practices, customers get proof rather than promises. The SiteTrust certification, for example, links directly to a published AI policy that anyone can read and verify.

How customers verify a company's AI transparency

Informed customers now check before buying. They look for visible trust signals, published policies, and independent verification—especially for purchases involving personal data or significant money.

The KPMG study found that 74% of people would be more willing to trust an AI system if it was assured by an independent third party. That's not a preference. That's a documented shift in buying behavior.

Public registries let consumers verify a company's certification status in seconds. The SiteTrust registry, for instance, lists certified companies and links to their published AI policies. Verification takes moments. The trust it builds compounds over time.
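
To make that verification step concrete, here is a sketch of what a lookup against a public registry could look like. The endpoint and response shape below are invented for illustration; the actual SiteTrust registry may expose a different interface.

```typescript
// Hypothetical registry lookup. The URL and response fields are
// illustrative, not a documented SiteTrust API.
interface RegistryEntry {
  company: string;
  certified: boolean;
  policyUrl: string; // link to the published AI policy
}

async function checkCertification(
  domain: string
): Promise<RegistryEntry | null> {
  const response = await fetch(
    `https://registry.example.com/api/companies/${encodeURIComponent(domain)}`
  );
  if (!response.ok) return null; // not listed, or the lookup failed
  return (await response.json()) as RegistryEntry;
}

// Usage: check a vendor before buying.
checkCertification("example-store.com").then((entry) => {
  if (entry?.certified) {
    console.log(`Certified. Published policy: ${entry.policyUrl}`);
  } else {
    console.log("No certification found in the registry.");
  }
});
```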

Why transparent companies win more customers

Companies that disclose AI use first build trust that late movers can't easily replicate. A 2025 consumer survey found that 76% of consumers would switch brands for better AI transparency. Early transparency becomes part of the brand identity. Late transparency looks like damage control.

The trend is accelerating. According to KPMG's longitudinal tracking, the importance of organizational assurance mechanisms for trust rose from 72% to 83% between 2022 and 2024. Each year, more customers expect visible accountability. Each year, silence becomes more costly.

Get certified for AI transparency with SiteTrust.


Frequently asked questions about AI disclosure

What is the 30% rule for AI?

The "30% rule" is an informal guideline suggesting that content with significant AI involvement warrants disclosure. No official law defines this threshold. It's a community norm that emerged in creative and academic circles, not a legal standard.

Can customers detect when a company uses AI without disclosing it?

Detection tools exist but remain unreliable. The bigger risk is not algorithmic. It is human. Employees talk. Partners notice patterns. Journalists investigate tips. Social media amplifies whatever they find.

Do companies need to disclose AI used only for internal operations?

Generally no, unless internal AI affects customer-facing decisions or outcomes. If AI touches the customer experience, even indirectly through pricing, recommendations, or service routing, disclosure is the safer path.

How can companies disclose AI without hurting conversion rates?

Frame AI as a benefit rather than a disclaimer. Customers respond well to transparency when the disclosure explains how AI improves their experience. "AI helps us serve you faster" works. "Warning: AI-generated content" does not.

Ready to become a founding member?

Apply for certification today
Damjan Stankovic

Growth Marketing Lead