    May 11, 2026

    How to Audit Where AI Touches Your Customer's Sensitive Data

    If a customer, regulator, board member, or enterprise buyer asked you tomorrow where AI touches your users and what data it sees, could you answer clearly?


    Damjan Stankovic

    Growth Marketing Lead

Approving the AI Is Enough to Make You Liable

    Derek Mobley applied to more than a hundred jobs. Most of them ran the same HR platform, Workday, and most of them rejected him within minutes of his application landing in the system. He never spoke to a human at any of those companies. Mobley is a Black man in his forties living with a disability, and the federal court that allowed his case against Workday to move forward agreed that the pattern raised a legitimate question. The AI doing the screening may have learned from past hiring data what kind of person to filter out, and it did the filtering at machine speed before any human in those companies knew his name.

    That case is one of several that arrived in 2025 and 2026. A separate class action against Eightfold AI, the recruitment platform used by Microsoft, PayPal, and Starbucks among others, claims that the system scored more than a billion workers on a hidden zero to five scale and rejected lower-scored applicants before any human reviewer saw their files. The companies that bought Eightfold did not write the algorithm. They did approve the system, integrate it into their hiring funnel, and let it make decisions about people who were never told the system existed.

    The lesson is the one most companies have not absorbed yet. Approving an AI tool is enough to be named in the lawsuit. Knowing what you have approved, and where it touches a customer, is not optional anymore. This guide walks through what an AI policy actually covers, how to audit your AI systems for data exposure, and how to turn that work into something a customer, a procurement officer, or a regulator can verify.

    What an AI policy is and why it protects customer data

    An AI policy is a written set of rules that governs how your company develops, deploys, and uses artificial intelligence. It defines which tools your employees may use, what data may flow through those tools, and who owns the decision when an AI system produces an outcome that affects a customer, an employee, or an applicant.

    The connection to customer data is direct, and it is broader than most policies recognize. A chatbot answers a support question and retains the conversation for model training. A lead scoring system pulls behavioral data across sessions and stores it in a vendor's environment. A resume screener pulls demographic signals out of unstructured text without anyone asking it to. Each one is a place where customer or applicant data moves into a system that the policy needs to account for, and each one is a place where the company has typically said nothing publicly about the AI involved.

    A working AI policy covers three things at minimum.

    • The purpose, which states what the policy governs and why it exists.

    • The scope, which names the AI systems the policy applies to, including embedded AI inside vendor tools the company did not procure directly.

    • The protection, which sets out how the policy keeps sensitive customer information from misuse, exposure, or undisclosed processing.
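If you want the policy to be checkable rather than just readable, the three components can live in a structured record. The sketch below is a minimal illustration in Python; the field names and example entries are hypothetical, not a formal schema from any framework.

```python
from dataclasses import dataclass, field

@dataclass
class AIPolicy:
    """Minimal record of the three components above. Illustrative, not a formal schema."""
    purpose: str                  # what the policy governs and why it exists
    scope: list[str] = field(default_factory=list)        # named AI systems, incl. embedded vendor AI
    protections: list[str] = field(default_factory=list)  # rules guarding sensitive customer data

policy = AIPolicy(
    purpose="Govern every AI system that touches customer, employee, or applicant data.",
    scope=[
        "Support chat AI agent",
        "Lead scoring model",
        "Resume screener embedded in the ATS",  # vendor AI the company did not procure directly
    ],
    protections=[
        "No restricted data enters a third-party AI system without explicit consent.",
        "Every customer-facing AI touchpoint appears in the public AI usage statement.",
    ],
)
```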

    Why the policy you already wrote does not cover what you are exposed to

    The majority of companies have some sort of policy right now. Usually it lives in the security wiki, was drafted by IT or legal sometime in the last two years, and governs what employees are permitted to do with tools like ChatGPT, Copilot, or Claude. Acceptable use policies are real and useful. They prevent the obvious problem of someone pasting a customer contract into a public model.

The harder problem is the AI your company already approved, already integrated, and never told anyone about. The chatbot in support that customers assume is a person. The recommendation engine on the homepage. The resume screener in HR. The lead qualifier in sales. The AI-drafted comparison content that marketing published last quarter. None of those tools usually appear in any customer-facing statement, because none of them were treated as a disclosure question when they were procured. They were treated as a productivity question.

    Procurement teams have started asking for the disclosure document anyway. Plaintiffs' lawyers have started naming it in litigation. Customers have started asking why a chatbot did not identify itself. The exposure your acceptable use policy was designed to prevent is not the same as the exposure your customers, your regulators, and your business partners are now asking you to manage. The gap between internal AI policies and verified transparency is what the audit work in the rest of this article is designed to close.

    The four people who will ask, and what your audit answers

Every AI policy conversation eventually arrives at the same question: who is actually going to come asking? The honest answer is four different people, and they will not arrive in the order you expect.

The procurement officer arrives first. Enterprise buyers have added AI usage disclosure to vendor security questionnaires over the last year. If your company sells software, professional services, or anything else that touches an enterprise buyer's data, the questionnaire will reach you. Without a documented AI usage statement, the deal stalls in legal review, sometimes for weeks, sometimes permanently.

    The plaintiff's lawyer arrives next. Mobley and Eightfold are not the only cases. Class actions and individual claims have moved forward across hiring, lending, healthcare, and customer service automation. The companies named are not always the AI vendors. Often they are the companies that purchased the AI and deployed it on real people without telling them.

The board director arrives third. Directors are increasingly asking management for AI exposure briefings, prompted by their own outside counsel and by the news cycle. An acceptable use policy is not the document they are asking for. They want to know what AI the company is running, where it touches people, and what has been disclosed.

    The customer arrives last, but the customer's arrival is the one that does the most damage. By the time a customer is asking on social media why a support agent was an AI, or why a recommendation seemed to know something they never shared, the disclosure conversation is happening in public. The audit, done internally and in advance, is what prevents that conversation from being your first one.

    The four pillars every working AI policy covers

    A policy is only as useful as the boundaries it sets. The four areas below are what every working AI policy needs to address explicitly, in language that an outside reader, not just an internal one, could understand and verify.

    Transparency and customer disclosure. The policy needs to state when and how the company tells customers that AI is involved in something they are experiencing. This includes automated decisions, AI-generated content, and AI-mediated interactions. Some leading organizations, including the MacArthur Foundation, now require staff to disclose AI use in work products as a matter of internal policy. The same expectation is moving rapidly into customer-facing interactions, and the companies that have prepared for it are the ones who can answer the procurement officer's questionnaire in one pass.

    Data handling and consent. The policy defines what categories of data may enter AI systems and under what conditions. Sensitive, proprietary, or personal information should never flow into a public model. Consent is the other side of the same coin, because customers increasingly expect to know how their data is used and to have a meaningful way to say yes or no.

    Vendor and third-party AI use. This is where most policies fall short. Your CRM, your support platform, your marketing automation, and your hiring tools are all candidates for embedded AI features that turned on without an explicit decision on your side. The policy needs to cover those tools, not only the ones your company directly built or directly procured as AI.

Human oversight and accountability. High-stakes decisions, including hiring, credit, healthcare, and termination, need a human reviewer empowered to override the AI output. Yet seven in ten companies allow AI tools to reject candidates without any human involvement. Equally important, the policy assigns ownership. Someone in the organization owns AI decisions. If nobody owns them, everybody owns the risk, and the litigation will find that out before the company does.
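To make the oversight pillar concrete, here is a minimal sketch of what "empowered to override" can mean in code, assuming a hypothetical finalize_decision helper: a high-stakes AI output simply cannot take effect until a human decision is attached.

```python
def finalize_decision(decision_type: str, ai_recommendation: str,
                      human_review: str | None = None) -> str:
    """High-stakes decisions require a human reviewer who can override the AI.
    Hypothetical helper; the categories follow the pillar above."""
    HIGH_STAKES = {"hiring", "credit", "healthcare", "termination"}
    if decision_type in HIGH_STAKES:
        if human_review is None:
            raise PermissionError(
                f"{decision_type}: AI output may not take effect without human review")
        return human_review  # the human decision governs, including overrides
    return ai_recommendation

# A rejection with no human in the loop fails loudly instead of shipping silently.
try:
    finalize_decision("hiring", "reject")
except PermissionError as exc:
    print(exc)
```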

    How to run an AI data audit

    The audit turns the policy from a document in a wiki into a practice that holds up under questioning. Each step builds on the last. The point of the sequence is to end with a written inventory that someone outside the company could read and understand.

    Step 1: Inventory every AI tool in use across the organization. Start by listing every AI tool your organization has approved, plus every AI tool you suspect employees are using without approval. The unapproved category is the one that surprises most leaders. Generative AI tools, browser extensions, embedded features in SaaS platforms, and personal accounts on consumer AI products all count. Ask department leads to walk through their actual workflow rather than their authorized one. The first version of the inventory is almost always missing between five and fifteen tools, and the distance between the authorized list and the real one is its own finding.

Step 2: Map AI to sensitive customer data flows. For each tool on the inventory, trace where customer data enters the system, what the system does with it, and where it exits. A support chatbot that retains the full conversation for training is doing something different from a chatbot that processes the message and discards it. Both may be acceptable, but only one is being accurately disclosed in most cases. The output of this step is a table that names the tool, the data it touches, whether the touchpoint is customer-facing, and the current disclosure status.

AI Touchpoint | Data Accessed | Customer-Facing | Disclosure Status
Support chat AI agent | Name, email, account history, conversation content | Yes | Not disclosed in current policy
Lead scoring model | Form fills, page behavior, firmographic data | Indirectly (decisions affect outreach) | Not disclosed
Marketing copy generation | Customer testimonials, product usage data | Yes (output published) | Not disclosed
Hiring resume screener | Applicant resume, inferred demographic signals | Yes (applicants are in the process) | Partially disclosed (boilerplate only)
Recommendation engine on pricing page | Session behavior, prior account data | Yes | Not disclosed

    The disclosure column is usually the one that creates the most uncomfortable conversation. Most companies can name the AI involvement in every row. Almost none can confirm clean disclosure for any of them.
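Steps 1 and 2 become much easier to repeat quarterly if the inventory lives in code rather than a slide. Below is a minimal sketch in Python; the Touchpoint record and its rows are hypothetical and mirror the table above, and the last lines surface exactly that uncomfortable column: customer-facing touchpoints with no confirmed disclosure.

```python
from dataclasses import dataclass

@dataclass
class Touchpoint:
    tool: str
    data_accessed: list[str]
    customer_facing: bool
    disclosed: bool  # named in a customer-facing statement?

# Illustrative rows mirroring the table above.
inventory = [
    Touchpoint("Support chat AI agent",
               ["name", "email", "account history", "conversation content"], True, False),
    Touchpoint("Lead scoring model",
               ["form fills", "page behavior", "firmographic data"], True, False),
    Touchpoint("Hiring resume screener",
               ["applicant resume", "inferred demographic signals"], True, False),
]

# The uncomfortable column: customer-facing AI nobody has disclosed.
for t in (t for t in inventory if t.customer_facing and not t.disclosed):
    print(f"UNDISCLOSED: {t.tool} touches {', '.join(t.data_accessed)}")
```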

    Step 3: Classify customer data by risk. Not all data carries the same exposure. A working classification has four categories. Public data, which is already external. Internal data, which the company uses operationally but does not share. Confidential data, which includes most customer information and requires controls. And restricted data, which includes health information, financial records, identity documents, and similar sensitive categories. Higher classifications carry stricter AI rules. Restricted data should not flow into a third-party AI system at all without explicit consent and a specific business case that legal has reviewed.
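The classification becomes useful the moment it is enforced as a rule rather than remembered as a guideline. Here is a sketch under the assumptions in the paragraph above; the tier names match the four categories, and the gate on restricted data is the one rule the step treats as non-negotiable.

```python
from enum import IntEnum

class DataTier(IntEnum):
    PUBLIC = 0        # already external
    INTERNAL = 1      # operational, not shared
    CONFIDENTIAL = 2  # most customer information; requires controls
    RESTRICTED = 3    # health, financial, identity documents

def may_enter_third_party_ai(tier: DataTier, has_consent: bool, legal_reviewed: bool) -> bool:
    """Restricted data needs explicit consent plus a legal-reviewed business case;
    confidential data needs review; lower tiers pass. Illustrative rule, not legal advice."""
    if tier is DataTier.RESTRICTED:
        return has_consent and legal_reviewed
    if tier is DataTier.CONFIDENTIAL:
        return legal_reviewed
    return True

assert not may_enter_third_party_ai(DataTier.RESTRICTED, has_consent=True, legal_reviewed=False)
assert may_enter_third_party_ai(DataTier.PUBLIC, has_consent=False, legal_reviewed=False)
```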

    Step 4: Review vendor AI disclosures and contracts. Check what your software vendors have disclosed about their AI use. Many vendors added AI features to existing products over the last 18 months without re-papering customer contracts. The result is that your contract may not reflect what the product is now doing. Read the current version of each vendor's documentation, not the version you onboarded with. Ask procurement to send written confirmation of what AI is enabled by default, what is opt-in, and what customer data the AI uses for training or model improvement. This step often produces the most surprising findings. The marketing platform you have used for years may now route customer data through models you never explicitly approved.

    Step 5: Document findings and close what the audit exposed. Create a written record of what the audit found, prioritized by risk. The documentation does two jobs. It guides the work of closing the most exposed touchpoints first, and it creates the record that a procurement officer, a regulator, or a court may eventually request. Companies that have done this work well end up with a single document, typically eight to twenty pages depending on size, that names every customer-facing AI touchpoint, the data involved, the disclosure status, and the remediation plan for anything not yet disclosed. That document is the artifact that cannot be faked. The company either ran the audit or it did not. The inventory either exists or it does not. The disclosure column is either filled in or it is blank.
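Prioritizing the findings is mostly a sort. This self-contained sketch, with illustrative rows and the tier numbers from step 3, orders touchpoints so that a higher data tier, customer-facing exposure, and missing disclosure all push a finding toward the top of the remediation list.

```python
# Tier numbers follow step 3: 0 public, 1 internal, 2 confidential, 3 restricted.
# Each row: (tool, highest data tier touched, customer_facing, disclosed); rows are illustrative.
findings = [
    ("Support chat AI agent", 2, True, False),
    ("Lead scoring model", 1, False, False),
    ("Hiring resume screener", 3, True, False),
]

# Riskiest first: higher tier, customer-facing, and undisclosed all raise priority.
findings.sort(key=lambda f: (f[1], f[2], not f[3]), reverse=True)
for rank, (tool, tier, facing, disclosed) in enumerate(findings, start=1):
    print(f"{rank}. {tool} (tier {tier}, "
          f"{'customer-facing' if facing else 'internal'}, "
          f"{'disclosed' if disclosed else 'NOT disclosed'})")
```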

    Where to anchor your audit in established frameworks

    Several established frameworks can give your audit external credibility without requiring you to start from scratch. The NIST AI Risk Management Framework is the most widely adopted standard in the United States, developed by the National Institute of Standards and Technology to give organizations a structured approach to identifying and managing AI-related risk. ISO 42001 is the international equivalent for AI management systems and is particularly useful for companies operating across multiple jurisdictions. The OECD AI Principles form an international baseline that many national policies reference.

    Frameworks are useful for showing your work internally and for satisfying a sophisticated questionnaire. They do not, however, satisfy a customer who is asking a simpler question, which is whether you have told them, in plain language, where AI is touching their experience with your company. A framework is internal credibility. A public AI usage statement is external credibility. The audit produces the raw material for both, but only the second answers the question your customer is actually asking. Certification programs like SiteTrust were built specifically for that second category, providing third-party verification of public AI disclosure rather than internal risk management documentation.

    The regulatory pressure is broader than the headline laws

    Most coverage of AI regulation focuses on a handful of high-profile laws. The European Union has passed comprehensive AI legislation that affects companies serving European customers, with fines up to €35 million for higher-risk AI applications. At least three US states have enacted their own AI transparency and risk assessment requirements, each with distinct compliance deadlines in 2025 and 2026. New rules continue to emerge at the state, sector, and international level.

    Treating regulation as the primary driver of your audit, however, understates the pressure you are actually under. Most companies will hear from a procurement officer, a customer, or a plaintiff before they hear from a regulator. The reason to run the audit now is not that one specific law has put a date on the calendar. It is that the people who buy from you, the people who could sue you, and the directors who oversee you are all asking the same set of questions, and the regulator's questions will follow theirs. Companies that have already done the audit answer all four conversations with the same document.

    Five places we find undisclosed AI in almost every audit

    After enough audits, certain patterns repeat. The five below appear in almost every company that runs a thorough inventory for the first time.

    • The marketing platform that quietly turned on AI features. Email subject line testing, send-time optimization, content suggestions, and lookalike audience modeling are common AI features that vendors enabled over the last year without prominent customer notification. The marketing team usually knows the features are on. The disclosure policy does not.

    • The support tool that summarizes conversations with AI. Many support platforms now use AI to generate ticket summaries, suggest replies, or route conversations. The summaries are often retained, the suggestions often use prior conversations as training input, and the customer is rarely told.

• The hiring software with embedded scoring. Even when a company does not buy AI hiring tools intentionally, the applicant tracking system or the resume parser may include AI ranking features that turn on by default. The Mobley and Eightfold cases both involved hiring tools whose scoring was not adequately disclosed to applicants.

    • The personalization layer on the website. Recommendation engines, dynamic pricing, and content variation based on session behavior are AI by another name. They affect what the customer sees and what they pay. The disclosure question for these tools is more nuanced than for a chatbot, but it still applies.

    • The internal AI that produces customer-facing output. Sales decks drafted with AI. Support replies generated by AI. Customer-facing content written by AI and published without disclosure. The AI is internal. The output is external. The disclosure obligation follows the output, not the tool.

    How to prove your AI policy and audit to customers

    The audit closes the internal question. The remaining question is external. Customers, procurement officers, and business partners want to verify that the work was done. A written policy alone is not enough, because anyone can publish a policy. The companies that earn trust in this category are the ones whose practices have been verified independently.

    This is the role SiteTrust certification was built to fill. SiteTrust verifies that a company's AI usage policy, disclosure statement, and audit practices meet a defined standard, and publishes verified companies in a public registry where customers and procurement teams can confirm certification independently. For the companies that have done the audit work, certification is the credential that turns the work into a competitive position. For the companies that have not, certification is the structure that walks them through the audit in the first place.

    Get certified for AI transparency with SiteTrust → https://sitetrust.com/get-certified

    Frequently asked questions about AI policy and audits

    What is the difference between an AI use policy for employees and an AI disclosure to customers?

    An AI use policy governs what your employees may do with AI tools. It is an internal document, usually owned by IT or legal, that prevents misuse of consumer AI products and protects company data from leaking into public models. An AI disclosure to customers is a different document entirely. It tells the people on the other side of the business where AI is touching their experience and what data is involved. Most companies have the first document and not the second. Procurement, regulators, and plaintiffs are increasingly asking for the second.

    Does our privacy policy cover how we use AI on customers?

In most cases, no. A privacy policy is about data practices: what is collected, stored, and shared. It rarely addresses whether AI is making decisions or producing customer-facing output, and it almost never names specific AI systems. Terms of service are a liability shield, drafted to protect the company in court rather than to disclose to a customer. A real AI usage statement is a third document, sitting alongside the privacy policy and the terms of service, and answering a question neither was designed to answer.

    How do I audit where AI touches our customers?

Walk through your own product and customer journey as a stranger. For every place a customer interacts with your company, ask three questions: What is the customer interacting with? Is AI involved? Has the customer been told? Then expand outward to internal processes that produce customer-facing output, such as marketing content, support responses, hiring decisions, and sales outreach. The five-step audit in this article gives you a fuller version. The first walk-through, however, is the fastest way to find the first ten touchpoints, and the leadership conversation that follows is usually where the work gets resourced.

    Closing

    We have a policy for what our employees do with AI. We do not have one for what our company does with AI.

    That is the line worth carrying out of this article. The policy you already wrote is a real and useful document. The one your customers, your procurement officers, your board, and the courts are now asking for is the second one. The audit is how you produce it. The certification is how you prove it. Both of them start with the same first step, which is opening up the inventory you have been treating as a question for someone else to answer.
