Independent AI governance scoring reveals that enterprise vendors range from A+ to D+ on privacy, exposing blind spots that traditional risk assessments consistently miss.
By Diego Monteiro | CEO of TrustThis.org | Open Platform for Privacy Scoring and AI Governance
Third-party risk management has traditionally relied on security questionnaires, SOC 2 reports, and compliance certifications. These instruments were designed for a world where vendors stored and processed data in relatively predictable ways. But as AI becomes embedded in every enterprise tool, from email platforms to collaboration suites to messaging applications, these legacy assessments fail to capture a critical dimension: how vendors handle your data in the context of AI training, automated decisions, and algorithmic governance. The tools your employees use every day may be learning from your proprietary data, and your risk framework may have no way to detect it.
This blind spot is not theoretical. An independent benchmark conducted by TrustThis.org in February 2026 evaluated 14 major digital platforms using the AITS (AI Trust Score) methodology, which analyzes 20 criteria across privacy governance and AI ethics. The results expose a striking gap between what organizations assume about their vendors and what documented policies actually reveal.
The Scoring Gap That Risk Teams Are Missing
Consider the range: Anthropic Claude earned an A+ grade, with documented opt-out mechanisms, explicit AI training policies, and a 30-day retention period for deleted data. Microsoft Copilot scored an A, with perfect compliance on all 12 base privacy criteria. Google Gemini received a B, reflecting specific gaps in AI ethics documentation and the absence of a dedicated opt-out mechanism for model training. On the opposite end, WhatsApp Business received a D+ grade, failing 8 of 20 criteria, with particularly concerning gaps in AI governance transparency.
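To make the grade rollup concrete, here is a minimal sketch of how per-criterion pass/fail results might aggregate into the letter grades cited above. The grade bands are illustrative assumptions, not the published AITS cutoffs; the point is that a vendor failing 8 of 20 criteria lands several bands below one failing none.

```python
# Illustrative only: hypothetical grade bands for rolling 20 pass/fail
# criteria up into a letter grade. These cutoffs are assumptions, not the
# published AITS thresholds.

GRADE_BANDS = [  # (minimum fraction of criteria passed, grade)
    (0.97, "A+"),
    (0.90, "A"),
    (0.80, "B"),
    (0.70, "C"),
    (0.60, "D+"),
    (0.00, "F"),
]

def letter_grade(criteria: dict[str, bool]) -> str:
    """Map per-criterion pass/fail results onto a letter grade."""
    fraction_passed = sum(criteria.values()) / len(criteria)
    return next(g for cutoff, g in GRADE_BANDS if fraction_passed >= cutoff)

# A vendor failing 8 of 20 criteria passes 60%, landing at D+ under these bands.
results = {f"criterion_{i}": i < 12 for i in range(20)}
print(letter_grade(results))  # -> "D+"
```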
This disparity matters for GDPR and CCPA compliance. Article 28 of the GDPR requires data controllers to use only processors providing sufficient guarantees of data protection. CCPA mandates that businesses disclose the categories of personal information collected and the purposes for processing. When a vendor scores D+ on AI governance, the question is not whether compliance risk exists. The question is whether your current risk assessment framework captured it.
What Traditional Assessments Fail to Ask
Standard vendor risk questionnaires cover encryption, access controls, and incident response. They rarely ask: does the vendor use customer data to train AI models? Is there a documented opt-out mechanism? Does the platform provide a pathway for contesting automated decisions? Can users request human review of AI-driven outcomes? These are not hypothetical concerns. Every time an employee pastes a contract clause into an AI assistant or asks a collaboration tool to summarize a confidential memo, data flows into systems whose governance practices may never have been evaluated by the organization’s risk team.
The AITS methodology evaluates precisely these dimensions, and the findings are revealing. Among the 14 platforms analyzed, 29%, or four platforms, do not offer any AI training opt-out option. This means nearly one in three enterprise tools could be feeding confidential business data into models that competitors and other users access, with no documented mechanism for organizations to prevent it.
The practical impact is already measurable. In January 2026, a federal lawsuit was filed against Meta alleging that WhatsApp communications were being accessed in ways that contradicted the platform’s privacy promises to users and businesses alike. TrustThis had already flagged WhatsApp Business with a D+ governance rating before the headlines broke. The scoring methodology had identified documentation gaps, missing opt-out mechanisms, and weak AI governance disclosures that foreshadowed the legal challenge. Standardized AI trust scoring identified the risk before litigation confirmed it.
From Reactive Compliance to Predictive Risk Management
The shift from traditional vendor assessments to AI trust scoring represents a fundamental change in how organizations approach third-party risk. Instead of reacting to incidents after they occur, compliance teams can now evaluate vendors against standardized governance criteria before onboarding them into critical workflows. A platform that lacks documented AI ethics principles, fails to offer training opt-out controls, or provides no mechanism for contesting automated decisions represents a quantifiable risk that should factor into procurement decisions.
For CISOs and compliance officers, the path forward requires integrating AI governance metrics into existing third-party risk frameworks. This means evaluating vendors not just on whether they encrypt data in transit, but on whether they document AI training practices, provide accessible opt-out controls, maintain transparent data retention policies, and offer mechanisms for human review of automated decisions. Organizations that adopt this approach can identify compliance exposure before regulators or courts do.
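As one way to operationalize this, the sketch below gates vendor onboarding on both the legacy security review and a minimum AI governance grade. The grade ordering and the "B" threshold are assumptions chosen for illustration; each organization will set its own bar.

```python
# A minimal sketch of a procurement gate that weighs AI governance alongside
# traditional controls. GRADE_ORDER and the minimum grade "B" are assumed
# policy choices for illustration, not part of any regulation or the AITS spec.

GRADE_ORDER = ["F", "D", "D+", "C", "C+", "B", "B+", "A", "A+"]

def clears_procurement(passes_security_review: bool,
                       ai_governance_grade: str,
                       minimum_grade: str = "B") -> bool:
    """A vendor must pass the legacy security review AND meet a minimum AI grade."""
    return (passes_security_review and
            GRADE_ORDER.index(ai_governance_grade) >= GRADE_ORDER.index(minimum_grade))

# A vendor with flawless encryption and a current SOC 2 report, but a D+ AI
# governance grade, still fails the gate:
print(clears_procurement(True, "D+"))  # False
print(clears_procurement(True, "A"))   # True
```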
Regulatory momentum reinforces this urgency. GDPR enforcement actions increasingly scrutinize automated decision making under Article 22, which grants individuals the right not to be subject to decisions based solely on automated processing that produce legal or similarly significant effects. CCPA amendments continue expanding consumer rights around profiling and data sharing. Organizations that wait for regulatory guidance before updating their vendor assessment criteria risk discovering gaps the same way Meta’s partners did: through litigation rather than due diligence.
Building AI Governance into Your Vendor Scorecard
Organizations looking to strengthen their third-party risk programs should consider adding specific AI governance criteria to their vendor evaluation process. The AITS framework offers a starting point: does the vendor clearly disclose how AI processes user data? Is there a documented and accessible mechanism for opting out of AI model training? Does the vendor publish data retention periods specific to AI interactions? Are there stated principles around ethical AI, bias mitigation, and algorithmic fairness? Does the platform offer a mechanism for users to contest automated decisions and request human review?
These five questions alone would have differentiated an A+ vendor like Anthropic Claude, which documents clear policies across all five dimensions, from a D+ vendor like WhatsApp Business, which fails to address most of them. The difference between these grades is not academic. It represents the distance between a defensible compliance posture and potential regulatory exposure. For enterprise procurement teams, integrating these criteria into vendor scorecards transforms AI governance from an abstract concern into a measurable evaluation metric.
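For teams building this into a scorecard, the five questions translate directly into checklist fields. A minimal sketch follows, with hypothetical vendor profiles standing in for the real assessments:

```python
# The five questions above, expressed as a reusable scorecard section. The
# per-vendor answers below are hypothetical profiles that mirror the published
# grades directionally; they are illustrative, not an audit of either vendor.

AI_GOVERNANCE_CRITERIA = [
    "Discloses how AI processes user data",
    "Documented, accessible opt-out from AI model training",
    "Publishes AI-specific data retention periods",
    "States principles on AI ethics, bias mitigation, and fairness",
    "Lets users contest automated decisions and request human review",
]

def summarize(vendor: str, answers: list[bool]) -> str:
    """Report how many of the five criteria a vendor's documentation meets."""
    met = sum(answers)
    return f"{vendor}: {met}/{len(AI_GOVERNANCE_CRITERIA)} AI governance criteria met"

print(summarize("Vendor A (hypothetical A+ profile)", [True] * 5))
print(summarize("Vendor B (hypothetical D+ profile)", [False, False, True, False, False]))
```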
The data that separates a proactive compliance posture from a reactive one already exists. Independent scoring methodologies can quantify what questionnaires miss. The question is whether your organization is using it.
What criteria does your organization use to evaluate AI governance in third party risk assessments? Share your approach in the comments below.
Diego Monteiro is CEO of TrustThis.org, an open platform for privacy scoring and AI governance of software applications. TrustThis.org provides independent evaluations using the AITS methodology to help enterprises assess vendor AI privacy and security.