What the EU AI Act Requires Businesses to Disclose About AI Systems
The EU AI Act requires businesses to disclose how their AI systems work, what data they use, and when people are interacting with AI instead of humans.

These disclosure requirements vary based on risk level, from simple user notifications for chatbots to comprehensive documentation and individual notification for high-risk systems like hiring tools and credit scoring.
This article breaks down exactly what you are required to disclose, who the rules apply to, and the timeline for compliance.
What is the EU AI Act
The EU AI Act is the world's first comprehensive AI regulation. It entered into force on August 1, 2024, and it uses a risk-based framework to classify AI systems into four tiers: unacceptable, high, limited, and minimal risk. Each tier carries different disclosure obligations.
It doesn't just apply to companies based in Europe. If your AI system's output reaches EU residents, whether you're headquartered in Texas or Tokyo, you're covered. Most enforcement provisions take effect August 2, 2026, though some are already active; Finland, for example, activated full enforcement powers in January 2026.
Who must comply with EU AI regulation
The EU AI Act has extraterritorial reach, similar to how GDPR works. A US company using AI for loan approvals, hiring decisions, or product recommendations that serves EU residents falls under this regulation. Your physical location doesn't matter. What matters is where your AI's decisions affect people.
Companies based in the EU
Any company operating within EU member states falls under the Act's jurisdiction. This includes businesses of all sizes, from startups to enterprises.
Non-EU companies serving EU markets
If your AI output is used by people in the EU, you are in scope. This catches many US companies off guard. Your servers can sit entirely outside Europe, and you're still covered.
AI providers, deployers, importers, and distributors
The Act defines four distinct roles, each with specific obligations:
Provider: Develops or places an AI system on the market
Deployer: Uses AI systems in business operations
Importer: Brings non-EU AI products into the EU market
Distributor: Makes AI systems available without substantially modifying them
A US reseller that makes an AI product available on the EU market can be classified as a distributor with its own compliance obligations. Many companies don't realize this role exists.
EU AI Act risk levels explained
The Act's four-tier classification system determines what you're required to disclose. Understanding where your AI systems fall is the first step.
| Risk Level | Description | Requirements | Who It Affects Most |
|---|---|---|---|
| Unacceptable | Banned entirely | Prohibited | Social scoring, manipulation systems |
| High | Strict oversight | Full compliance obligations | HR tech, credit scoring, critical infrastructure |
| Limited | Transparency required | Disclosure to users | Chatbots, content generators |
| Minimal | No specific rules | Voluntary codes | Most business AI tools |
Unacceptable risk
Certain AI systems are banned outright.
The prohibited categories include social scoring by public authorities, manipulative techniques that exploit people's vulnerabilities, real-time remote biometric identification in publicly accessible spaces (allowed only under narrow law-enforcement exceptions), emotion recognition in workplace and educational settings, and biometric categorisation that infers sensitive traits such as race or political opinions.
High risk
High-risk AI systems face the strictest requirements. The categories include AI used in critical infrastructure like energy, transport, and water systems. Education and vocational training AI falls here too, as does employment AI: CV screening, interview analysis, and performance monitoring. Access to essential services, including credit scoring and insurance pricing, is covered. Law enforcement, migration, and border control round out the list.
Limited risk
Limited-risk systems interact directly with people. Think chatbots, virtual assistants, deepfake generators, and emotion recognition outside the banned contexts. The main requirement is transparency: users need to know they're dealing with AI.
Minimal risk
Most business AI falls into this category. There are no mandatory requirements, though the EU encourages voluntary codes of conduct. Spam filters, AI-powered search, and recommendation engines typically land here.
AI disclosure requirements by risk level
What exactly do you need to tell users, regulators, and affected individuals?
High-risk AI system disclosures
High-risk systems carry the heaviest disclosure burden. Providers and deployers face documentation and disclosure requirements across several areas:
Risk management documentation: Continuous assessment and mitigation records throughout the system's lifecycle
Data governance records: Proof that training data is relevant, representative, and examined for errors and bias
Technical documentation: System capabilities, limitations, and intended use cases
Event logging: Automatic recording of system decisions for audit purposes
User instructions: Clear guidance on proper use, limitations, and potential risks
Human oversight design: Documentation of how humans can intervene in AI decisions
Conformity assessment and registration: Completing a conformity assessment and registering the system in the EU database before market release
Deployer notification to affected individuals: When a high-risk AI system makes or assists in decisions about a person, that person has the right to know
The deployer notification requirement is the one most businesses miss. Article 26 creates a direct obligation to inform the individuals affected by AI decisions, not just to document the system for regulators. A sketch of what that looks like in practice follows.
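The logging and notification duties translate directly into engineering work. Below is a minimal TypeScript sketch, assuming a simple in-memory log; every type, field, and function name here is a hypothetical illustration, since the Act prescribes the obligations, not the schema:

```typescript
// Hypothetical shape of an auditable decision record for a high-risk
// AI system. Field names are illustrative, not prescribed by the Act.
interface AIDecisionRecord {
  decisionId: string;
  systemName: string;     // which AI system produced the output
  timestamp: string;      // ISO 8601, for the audit trail
  subjectId: string;      // the affected individual
  outcome: string;        // e.g. "application_rejected"
  humanReviewer?: string; // who exercised human oversight, if anyone
  subjectNotified: boolean;
}

// Append-only log; a real deployment would use durable storage.
const auditLog: AIDecisionRecord[] = [];

// Placeholder notification channel; a real system would email or
// message the person that AI was involved in a decision about them.
function notifyAffectedIndividual(record: AIDecisionRecord): void {
  console.log(
    `Notice to ${record.subjectId}: an AI system (${record.systemName}) ` +
      `was used in a decision affecting you (${record.outcome}).`
  );
  record.subjectNotified = true;
}

// Record every decision and send the notice if it hasn't gone out.
function recordDecision(record: AIDecisionRecord): void {
  auditLog.push(record);
  if (!record.subjectNotified) {
    notifyAffectedIndividual(record);
  }
}

recordDecision({
  decisionId: "d-001",
  systemName: "cv-screening-model",
  timestamp: new Date().toISOString(),
  subjectId: "applicant-42",
  outcome: "application_rejected",
  subjectNotified: false,
});
```

The point isn't this particular code; it's that audit logging and individual notification need to be designed in from the start, not bolted on before an inspection.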
Limited-risk AI system disclosures
The requirements here are simpler but still mandatory; a minimal implementation sketch follows the list:
AI interaction notice: Tell users they're interacting with AI, not a human
Synthetic content labeling: Mark AI-generated images, audio, video, and text in a machine-readable format
Deepfake disclosure: Clearly identify content that's been artificially created or manipulated
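For limited-risk systems the engineering lift is small. Here's a hedged TypeScript sketch of the two most common patterns, an up-front AI interaction notice and a machine-readable content label; all names and formats are hypothetical illustrations, not mandated by the Act:

```typescript
// 1. AI interaction notice: disclose before the conversation starts.
function openChatSession(userName: string): string[] {
  const transcript: string[] = [];
  transcript.push(
    `Hi ${userName}, you are chatting with an AI assistant, not a human.`
  );
  return transcript;
}

// 2. Synthetic content labeling: pair the human-visible content with
// a machine-readable marker. Field names are illustrative only.
interface GeneratedContent {
  body: string;
  label: {
    aiGenerated: true;
    generator: string;   // which model or tool produced it
    generatedAt: string; // ISO 8601 timestamp
  };
}

function labelContent(body: string, generator: string): GeneratedContent {
  return {
    body,
    label: {
      aiGenerated: true,
      generator,
      generatedAt: new Date().toISOString(),
    },
  };
}

const post = labelContent("A synthetic product photo.", "example-image-model");
console.log(post.label.aiGenerated); // true
```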
General purpose AI transparency obligations
GPAI models like ChatGPT, Claude, and Gemini have their own transparency requirements. Providers of general purpose AI face obligations around technical documentation about model capabilities and limitations, copyright law compliance for training data, and transparency about training data sources and model behavior.
In August 2025, 26 major providers, including Microsoft, Google, OpenAI, and Anthropic, signed the GPAI Code of Practice, developed through a process involving nearly 1,000 participants.
EU AI Act compliance timeline
Some provisions are already enforceable. Others are coming soon.
February 2025 prohibited AI systems
Already in effect since February 2, 2025. Banned AI practices and AI literacy requirements are now enforceable. If you're using any prohibited AI systems, you're already non-compliant.
August 2025 GPAI requirements
Already in effect since August 2, 2025. Governance rules and general purpose AI model obligations are now active.
August 2026 full enforcement
This is the main deadline most businesses face. Almost all provisions become mandatory on August 2, 2026.
The European Commission proposed in late 2025 that high-risk Annex III obligations could be delayed to December 2027 if harmonized standards aren't available. However, this extension isn't guaranteed. Given that conformity assessment alone takes 6–12 months, planning for August 2026 remains the prudent approach.
Penalties for EU AI Act non-compliance
The penalty structure exceeds GDPR's maximums of €20 million or 4% of global turnover. Here's how violations break down:
| Violation Type | Maximum Penalty (whichever is higher) |
|---|---|
| Prohibited AI practices | €35 million or 7% of global annual turnover |
| High-risk non-compliance | €15 million or 3% of global annual turnover |
| Supplying incorrect information | €7.5 million or 1% of global annual turnover |
Both EU and non-EU companies face these penalties. The extraterritorial reach applies to enforcement, not just obligations. And because the cap is whichever figure is higher, large companies face exposure well beyond the headline amounts, as the example below shows.
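To make the "whichever is higher" rule concrete, here's a small sketch computing the cap for a prohibited-practice violation; the function is purely illustrative, and actual fines depend on severity and the other factors regulators weigh:

```typescript
// Cap for prohibited-practice violations: EUR 35M or 7% of global
// annual turnover, whichever is higher. Illustrative only.
function maxProhibitedPracticeFine(annualTurnoverEur: number): number {
  const flatCap = 35_000_000;                   // EUR 35 million
  const turnoverCap = 0.07 * annualTurnoverEur; // 7% of global turnover
  return Math.max(flatCap, turnoverCap);
}

// A company with EUR 1 billion in turnover faces a EUR 70 million cap,
// double the headline figure.
console.log(maxProhibitedPracticeFine(1_000_000_000)); // 70000000
```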
How SMEs can meet EU AI compliance requirements
The Act contains explicit provisions to reduce the compliance burden for smaller providers. A 50-person company isn't expected to build the same infrastructure as Google.
Regulatory sandboxes
Regulatory sandboxes are controlled environments where businesses can test AI systems with regulator guidance before full market launch. Under the Act, each member state must have at least one sandbox operational by August 2, 2026, but most are still being stood up. They're useful for future development, not current compliance.
Simplified documentation
Smaller providers face reduced paperwork requirements. The Act recognizes the resource constraints that smaller companies operate under.
Third-party certification
The EU AI Act favors documented, verifiable practices over self-attestation.
Most SMEs can't run conformity assessments or build internal compliance teams. Third-party certification fills that gap.
Independent certification creates audit-readiness and defensible proof. When a regulator or customer asks how you handle AI, you have documentation that stands on its own. Get certified for AI transparency with SiteTrust.
How to prove AI transparency to customers
Meeting disclosure requirements is one thing. Making transparency visible to customers is another.
Compliance documentation satisfies regulators, but it does not move buyers. What customers want to know about your AI use shouldn't be buried in technical documentation.
Public registries and certification badges turn compliance into a visible trust signal. Instead of asking customers to trust your claims, you give them something they can verify independently.
Why AI transparency wins more customers
Companies that verify their AI practices stand out from those that only claim good ones. In a market where every business posts an AI policy, third-party certification is the difference between a claim and proof. The KPMG global trust study found that the share of people who see organizational assurance mechanisms as important rose from 72 percent to 83 percent between 2022 and 2024. That movement is in one direction: each year, more buyers expect visible accountability, and each year, silence costs more. The businesses building transparency infrastructure now won't be scrambling to prove compliance when enforcement arrives. They'll already have the documentation, the certification, and the customer signal their competitors will still be trying to build.
Get certified for AI transparency with SiteTrust.
FAQs about EU AI Act disclosure requirements
Does the EU AI Act apply to companies based in the United States?
Yes. The EU AI Act has extraterritorial reach. If your AI system's output is used by people in the EU, you fall under its jurisdiction regardless of where your company is headquartered. This mirrors how GDPR applies to US companies serving EU customers.
What types of AI systems are completely banned under the EU AI Act?
The Act bans social scoring by public authorities, manipulative techniques that exploit vulnerabilities, real-time remote biometric identification in publicly accessible spaces (with only narrow law-enforcement exceptions), emotion recognition in workplace and educational settings, and biometric categorisation that infers sensitive traits. These prohibitions have been enforceable since February 2, 2025.
Can businesses voluntarily comply with EU AI Act standards even if not required?
Yes. Companies can choose to follow the Act's transparency and documentation standards voluntarily. This builds customer trust, creates audit-ready documentation, and prepares your organization for future regulations in other markets like California's AI disclosure laws and emerging UK rules.
How do customers verify whether a company meets AI transparency standards?
Customers can check public registries and look for independent certifications that verify a company's AI transparency practices. Unlike self-reported claims, third-party certification provides proof that customers can verify before making purchase decisions.
What disclosures are required when customers interact with AI chatbots?
Businesses using chatbots, virtual assistants, or similar conversational AI are required to clearly inform users that they're interacting with an AI system, not a human. This applies to any AI system that interacts directly with people.
Ready to become a founding member?
Apply for certification today
Damjan Stankovic
Growth Marketing Lead