August 15, 2025
News

AI Transparency Listening Session with the White House Office of Management and Budget

The White House Office of Management and Budget (OMB) is leading a series of listening sessions to learn from industry about approaches to AI transparency and auditable risk management. Participants include major large language model (LLM) developers as well as third-party deployers who integrate LLMs into their products.

Purpose and Context

These sessions will inform OMB's forthcoming guidance pursuant to Executive Order 14319, Preventing Woke AI in the Federal Government. The executive order emphasizes the need for AI systems used by federal agencies to maintain ideological neutrality and truth-seeking principles, while ensuring transparency and accountability.

The listening sessions represent a collaborative approach between government and industry, recognizing that effective AI governance requires input from those developing and deploying these technologies in real-world contexts.

Key Discussion Topics

Topics discussed throughout the sessions include:

Risk Monitoring and Assessment

Organizations are sharing the specific categories of risk that they monitor for, including how they identify, categorize, and prioritize different types of AI-related risks. This includes both technical risks (such as model failures or security vulnerabilities) and societal risks (such as bias, misinformation, or harmful outputs).
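
To make the idea of categorizing and prioritizing AI-related risks concrete, here is a minimal sketch in Python, assuming a simple likelihood-times-impact score; the categories, descriptions, and scores are illustrative placeholders, not any participant's actual framework.

    from dataclasses import dataclass
    from enum import Enum

    class RiskCategory(Enum):
        # Illustrative categories only; real taxonomies vary by organization.
        MODEL_FAILURE = "model_failure"      # technical: incorrect or degraded outputs
        SECURITY = "security"                # technical: prompt injection, data exposure
        BIAS = "bias"                        # societal: unfair or skewed outputs
        MISINFORMATION = "misinformation"    # societal: factually wrong content
        HARMFUL_OUTPUT = "harmful_output"    # societal: unsafe or abusive content

    @dataclass
    class RiskEntry:
        category: RiskCategory
        description: str
        likelihood: int  # 1 (rare) to 5 (frequent)
        impact: int      # 1 (minor) to 5 (severe)

        @property
        def priority(self) -> int:
            # Simple likelihood-times-impact score used to rank mitigation work.
            return self.likelihood * self.impact

    # Rank a small register so the highest-priority risks are reviewed first.
    register = [
        RiskEntry(RiskCategory.MISINFORMATION, "Model asserts false claims in summaries", 4, 4),
        RiskEntry(RiskCategory.SECURITY, "Prompt injection via retrieved documents", 3, 5),
        RiskEntry(RiskCategory.MODEL_FAILURE, "Degraded accuracy after a model update", 2, 3),
    ]
    for entry in sorted(register, key=lambda e: e.priority, reverse=True):
        print(f"{entry.priority:>2}  {entry.category.value}: {entry.description}")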

Pre-Training and Post-Training Risk Mitigation

Participants are discussing pre-training criteria and methods to reduce identified risks, as well as the post-training classifiers or similar rules that have been integrated into their systems. This includes data curation practices, model training methodologies, and safety mechanisms built into AI systems.
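
As one illustration of the kind of post-training safety mechanism described above, here is a minimal sketch of a classifier gate applied to model outputs; the keyword-based classifier, labels, and threshold are stand-ins for the trained classifiers real systems use, not any organization's actual implementation.

    from typing import Callable

    # Hypothetical classifier: scores a draft response against policy labels.
    # Real systems typically use a trained model; this keyword check is a stand-in.
    def classify(text: str) -> dict[str, float]:
        lowered = text.lower()
        return {
            "harmful": 1.0 if "how to build a weapon" in lowered else 0.0,
            "pii": 1.0 if "ssn:" in lowered else 0.0,
        }

    BLOCK_THRESHOLD = 0.5

    def gate_response(generate: Callable[[str], str], prompt: str) -> str:
        """Run the model, then block or return the draft based on classifier scores."""
        draft = generate(prompt)
        scores = classify(draft)
        if any(score >= BLOCK_THRESHOLD for score in scores.values()):
            return "Response withheld by safety policy."
        return draft

    # Usage with a stand-in "model" that simply echoes the prompt.
    print(gate_response(lambda p: f"Echo: {p}", "Hello"))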

Continuous Monitoring Capabilities

The sessions explore how organizations detect unwanted model behavior in production, what documentation and intervention processes look like when issues are identified, and how monitoring systems are designed to catch problems early.
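
As a rough illustration of what continuous monitoring can look like in practice, the sketch below logs any flagged production interaction so it can be documented and reviewed; the flagging rule and record fields are assumptions made for illustration only.

    import json
    import logging
    from datetime import datetime, timezone

    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger("llm_monitor")

    def is_unwanted(response: str) -> bool:
        # Placeholder rule; production systems usually combine classifiers,
        # heuristics, and sampled human review.
        return len(response) == 0 or "as an ai language model" in response.lower()

    def monitor(prompt: str, response: str, model: str) -> None:
        """Record every flagged interaction so it can be reviewed and documented."""
        if is_unwanted(response):
            record = {
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "model": model,
                "prompt": prompt,
                "response": response,
                "status": "flagged_for_review",
            }
            logger.warning(json.dumps(record))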

Ideological Neutrality and Truth-Seeking

A key focus is how organizations currently address the topics of ideological neutrality and truth-seeking, as presented in EO 14319. This includes discussions about:

  • Whether any "instructions" are shared with a model when producing information about sensitive or political topics
  • How organizations balance neutrality with accuracy and safety
  • Processes for ensuring factual accuracy in AI-generated content

Regulatory Compliance and Adaptation

Organizations are sharing whether they have needed to update or alter any products to comply with new state- or national-level AI regulations. This includes alterations to:

  • Risk criteria and assessment frameworks
  • Training approaches, including data curation and model training methodologies
  • New classifiers and safety mechanisms
  • Documentation requirements (e.g., under the EU AI Act)

Transparency for Downstream Integrators

The sessions explore whether downstream integrators of AI models are satisfied with the level of transparency they receive from developers to meet regulatory reporting requirements. This is crucial for organizations that build products on top of foundation models and need to demonstrate compliance themselves.
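
As one way to picture the transparency a downstream integrator might need, here is a minimal sketch of a machine-readable transparency record a developer could pass along; every field name and value is an illustrative assumption, not a standard schema or regulatory requirement.

    import json

    # Illustrative transparency record; field names are not an official schema.
    model_transparency = {
        "model_name": "example-model-v1",   # hypothetical identifier
        "provider": "Example AI Co.",
        "training_data_summary": "Public web text and licensed corpora (summary only).",
        "known_limitations": ["May produce outdated facts", "Limited non-English coverage"],
        "safety_evaluations": ["bias audit 2025-Q2", "red-team review 2025-Q3"],
        "intended_use": "General-purpose assistant; not for high-risk decisions.",
    }

    print(json.dumps(model_transparency, indent=2))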

Implications for Federal AI Policy

The insights gathered from these listening sessions will directly inform OMB's guidance on AI transparency and risk management for federal agencies. This guidance is expected to establish standards for:

  • How federal agencies should evaluate AI systems before deployment
  • What transparency requirements should be met by AI vendors serving the federal government
  • How agencies should monitor and audit AI systems in production
  • What documentation and reporting requirements should be established

Industry Participation

Organizations with experience in any of these topics are invited to participate in the listening sessions. Participation is limited and OMB may be unable to accommodate all requests, but it aims to obtain input from a reasonable range of industry representatives.

Organizations interested in participating can email EO14319Outreach@omb.eop.gov with their organization name, contact information, and a short summary of the experiences or knowledge relevant to the topics above that they would like to share.

What This Means for Organizations

The OMB listening sessions signal a significant shift toward mandatory transparency and risk management requirements for AI systems, particularly those used in federal contexts. Organizations should:

  • Document their current AI transparency and risk management practices
  • Prepare for potential federal procurement requirements related to AI transparency
  • Ensure their transparency frameworks can meet both state and federal expectations
  • Consider how their practices align with emerging federal guidance

How SiteTrust Certification Helps

SiteTrust certification provides organizations with a framework for demonstrating AI transparency and risk management practices that align with emerging federal expectations. Our certification process helps organizations:

  • Document their risk assessment and mitigation processes
  • Establish transparency reporting capabilities
  • Create auditable records of AI governance practices
  • Prepare for federal procurement requirements
  • Demonstrate compliance with emerging transparency standards

As federal guidance evolves based on these listening sessions, SiteTrust-certified organizations will be well-positioned to meet new requirements, whether they're serving federal agencies directly or operating in regulated industries.

Ready to prepare for federal AI transparency requirements?

Get certified today

Vinnie Fisher

Founder of BeyondYourShadow, Mentor Academy, and SiteTrust | Attorney