By Tim Jacobs, CEO & Founder, KTS Global | Member, The Hanwell Group Global Advisory Council
In February 2026, I introduced the concept of the Operator Gap — the measurable distance between a brand’s claimed capabilities and the actual human talent within its walls. The response confirmed what the data already showed: this is not an abstract theory. It is an operational reality that leadership teams across every sector are beginning to confront.
The first piece diagnosed the problem. This piece addresses what to do about it.
Because the Operator Gap is widening. And the window to close it — on your terms — is narrowing faster than most boardrooms understand.
The Acceleration No One Budgeted For
When the original Operator Gap thesis was published, the argument rested on a structural shift: AI systems increasingly mediate how the world discovers, evaluates, and trusts organizations. They read evidence, not logos. That shift has not slowed. It has accelerated.
McKinsey’s 2025 State of AI report found that 88% of organizations now use AI in at least one business function, up from 78% the year before. Generative AI adoption has nearly tripled in two years, reaching 79% of organizations surveyed. Yet only 38% have scaled AI beyond pilot programs. The implication is stark: most organizations are exposed to AI-mediated evaluation without having built the infrastructure to influence what those systems find.
The numbers become more urgent when you look at where AI adoption is heading. Gartner predicts that by 2028, 90% of all B2B buying will be AI-agent-intermediated, channeling more than $15 trillion in spend through autonomous exchanges. These agents will not browse websites or read brochures. They will interrogate structured data, verify claims against evidence, and make procurement decisions based on what is machine-readable and verifiable.
If your brand’s value proposition lives in a PDF pitch deck and your key operator’s track record lives nowhere a machine can find it, you have an Operator Gap that is about to become a revenue gap.
Why the Gap Keeps Growing
The Operator Gap is not static. Three forces are actively widening it.
Force 1: AI Systems Are Getting Better at Reading People
The AI systems mediating brand discovery are no longer matching keywords. They are forming holistic impressions. As research from Sight AI documents, AI models synthesize information from countless sources and deliver contextual recommendations that feel authoritative and personalized — deciding which brands deserve mention and what narrative to share about them. Unlike traditional search engines that evaluate individual pages, AI models form aggregate assessments of brands based on the total quality, consistency, and sentiment of everything they have processed.
This creates a compounding visibility loop: brands that appear in AI recommendations generate more user interaction, which creates more data, which strengthens the AI signal further. The inverse is equally dangerous — brands absent from AI recommendations fall into an exponentially widening visibility gap.
Now apply that dynamic to the Operator Gap. When an AI system forms its impression of a consultancy, an events firm, or a luxury brand, it increasingly distinguishes between the brand entity and the person entities that created the value. If the person who built the reputation has departed and built their own evidence infrastructure elsewhere, the AI system reflects that shift — permanently, programmatically, and without any press release required.
Force 2: Due Diligence Has Gone Computational
Private equity firms and institutional buyers are no longer relying solely on management presentations and reference calls. AI-powered due diligence now flags key person departures as material risks. Research into PE talent due diligence reveals that 60% of value erosion in the first 24 months post-close traces to human capital failures. One documented case saw a $450 million industrial deal terminated after talent due diligence uncovered that the COO — the sole bearer of tribal knowledge — had a non-compete expiring in nine months.
Knowledge graph-based risk intelligence platforms now enable multi-hop queries across corporate filings, contracts, and personnel data, making operator departures visible within days rather than months. When a PE firm asks, “Which target subsidiaries have experienced key departures in the last 18 months?” the knowledge graph provides provenance-traced answers. The Operator Gap is no longer something discovered post-acquisition. It is being priced into deals before term sheets are signed.
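To make the mechanics concrete, here is an illustrative sketch of the kind of multi-hop, provenance-traced query described above. The entities, documents, and triple structure are hypothetical stand-ins, not any vendor's actual platform:

```python
import json

# Toy knowledge graph as (subject, predicate, object, source) triples.
# All names and source documents are hypothetical illustrations.
TRIPLES = [
    ("AcmeHoldings", "owns", "AcmeEvents", "corporate_filing_2024.pdf"),
    ("AcmeHoldings", "owns", "AcmeDigital", "corporate_filing_2024.pdf"),
    ("J_Doe", "served_as_COO_of", "AcmeEvents", "employment_contract_2019.pdf"),
    ("J_Doe", "departed", "AcmeEvents", "hr_record_2025_06.json"),
]

def subsidiaries_with_departures(parent):
    """Two-hop query: parent -> owned subsidiaries -> recorded departures,
    carrying the source document for each hop so answers are traceable."""
    owned = {o: src for s, p, o, src in TRIPLES if s == parent and p == "owns"}
    results = []
    for person, pred, org, src in TRIPLES:
        if pred == "departed" and org in owned:
            results.append({
                "subsidiary": org,
                "person": person,
                "provenance": [owned[org], src],   # one source per hop
            })
    return results

print(json.dumps(subsidiaries_with_departures("AcmeHoldings"), indent=2))
```

The point of the provenance list is the "provenance-traced answers" above: every hop in the chain cites the document it came from, which is what lets a diligence agent defend the finding.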
Force 3: The Cost of Building Evidence Infrastructure Has Collapsed
This is the force that changes the power dynamic entirely. The tools required to build verifiable, machine-readable evidence of professional capability are no longer enterprise-only. Schema markup, structured data, knowledge graphs, and Model Context Protocol (MCP) endpoints are open, standardized, and deployable by individuals.
MCP, introduced by Anthropic in late 2024 and now adopted across platforms including OpenAI, Google, and Amazon, standardizes how AI agents discover and interact with external tools and data. BCG describes it as “a universal adapter between AI agents and the tools, data, and prompts they use”. The practical consequence is that any professional can now expose their verified track record directly to AI agents through standardized API endpoints — without needing permission from any former employer.
Schema markup has undergone a parallel transformation. In March 2025, both Google and Microsoft publicly confirmed that they use structured data for their generative AI features. Schema evolved from an SEO enhancement to core infrastructure for AI understanding. Content with proper schema markup now has a 2.5x higher probability of appearing in AI-generated answers.
The invisible architect now has the same tools that were once the exclusive province of well-funded brands. The question is no longer whether they will build. It is when.
The Evidence Economy Operating Model
In Part I, I introduced the Evidence Economy — the market environment where unverified claims carry no weight in AI-mediated discovery. The AEGIS Digital Authority Framework, deployed through KTS Global, demonstrated measurable results: a 927% increase in structured data density and a 47% improvement in AI-mediated discoverability within 90 days on a live production domain.
Those results were diagnostic. What follows is the operating model that produced them — and the governance framework that makes them sustainable.
Layer 1: Evidence Lockers
An Evidence Locker is a machine-readable repository of verified claims supported by documentation, stakeholder attribution, and cryptographic verification. It is not a portfolio page. It is not a case study. It is a structured data architecture that enables AI systems to verify any capability claim against source evidence.
The operating principle is simple: if a claim cannot be verified by a machine, it does not exist in the Evidence Economy.
For organizations, this means every material capability assertion — every project delivered, every strategic outcome achieved, every operator who contributed — must be documented in formats that AI agents can parse, verify, and attribute. This is not about marketing. It is about machine-readable truth.
For operators — the strategists, engineers, creative directors, and builders who create the value — Evidence Lockers represent something more fundamental: permanent, verifiable documentation of what they built, independent of any single employer’s narrative.
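A minimal sketch of what an Evidence Locker record could look like, assuming a simple content-hash approach to the cryptographic verification mentioned above. The field names, claim, and URLs are hypothetical:

```python
import hashlib
import json

def make_evidence_record(claim, artifact_bytes, contributors, source_url):
    """Build a machine-verifiable evidence record: the claim, who delivered
    it, where the supporting artifact lives, and a content hash so any
    agent can confirm the artifact has not been altered."""
    return {
        "claim": claim,
        "contributors": contributors,      # stable, machine-readable attribution
        "source": source_url,              # where to fetch the artifact
        "sha256": hashlib.sha256(artifact_bytes).hexdigest(),
    }

def verify(record, artifact_bytes):
    """Recompute the hash and compare: True only if the artifact matches."""
    return hashlib.sha256(artifact_bytes).hexdigest() == record["sha256"]

report = b"Post-event impact report, 2025"   # stand-in for a real document
record = make_evidence_record(
    claim="Delivered sovereign summit for 4,000 delegates",
    artifact_bytes=report,
    contributors=["Jane Doe (event director)"],
    source_url="https://example.com/evidence/summit-2025.pdf",
)
assert verify(record, report)                 # intact artifact verifies
assert not verify(record, report + b"x")      # any tampering fails
print(json.dumps(record, indent=2))
```

The hash is what makes the record verifiable rather than merely asserted: an AI agent can fetch the artifact, recompute the digest, and confirm the claim's supporting evidence is unmodified.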
Layer 2: Truth Loops
Truth Loops are self-reinforcing information architectures where high-authority entities are semantically linked to verified achievements across multiple data sources. In practice, this means ensuring that when an AI system encounters a claim about a brand’s capability, it finds corroborating structured data across independent sources — schema markup, knowledge graph entries, third-party citations, MCP-exposed verification endpoints.
The architecture draws on answer engine optimization (AEO) principles: authority over volume, conversational targeting, and cross-platform citation strategy. Traditional SEO optimized for ranked positions. AEO optimizes to be the selected answer — prioritizing concise, entity-rich, structured content that AI systems can extract and cite with confidence.
Effective Truth Loops require entity schema implementation that helps AI systems understand what content covers, which entities it references, and how different content pieces relate to broader knowledge structures. Content relationship schema describes how strategies relate to outcomes and which approaches work in different contexts, improving AI understanding and citation quality.
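The corroboration logic behind a Truth Loop can be reduced to a simple check: does a claim appear, machine-readably, across enough independent sources? A hypothetical sketch, with made-up source names and an assumed threshold of three:

```python
# Illustrative corroboration check: a claim is treated as "looped" when
# at least `threshold` independent sources carry machine-readable support
# for it. Source names, claim IDs, and the threshold are hypothetical.
sources = {
    "site_schema_markup": {"delivered-summit-2025", "award-2024"},
    "knowledge_graph":    {"delivered-summit-2025"},
    "third_party_press":  {"delivered-summit-2025"},
    "mcp_endpoint":       {"award-2024"},
}

def corroboration(claim_id, sources, threshold=3):
    """Return which sources corroborate a claim and whether the loop holds."""
    hits = [name for name, claims in sources.items() if claim_id in claims]
    return {"claim": claim_id, "sources": hits, "looped": len(hits) >= threshold}

result = corroboration("delivered-summit-2025", sources)
print(result)  # three independent sources, so the loop holds
```

Note the inverse case: "award-2024" appears in only two sources, so under this threshold it would not be treated as corroborated; that is the cross-referencing failure mode the governance section below warns about.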
Layer 3: Model Context Protocol Deployment
MCP deployment is the infrastructure layer that enables entities to present structured, verified data directly to AI agents through standardized API endpoints. Unlike traditional web presence that waits for crawlers, MCP creates an active channel — a way for AI systems to query an entity’s evidence architecture in real time.
This is where the Evidence Economy becomes operational rather than theoretical. When a Gartner-forecasted AI procurement agent evaluates potential partners for a sovereign event, it will not read a pitch deck. It will query MCP endpoints, verify structured claims against evidence repositories, and assess entity coherence across knowledge graphs. The organizations that have deployed this infrastructure will be found. Those that have not will be invisible — not because they lack capability, but because they lack machine-readable proof of it.
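A schematic of the query pattern MCP standardizes: an agent sends a structured request, the endpoint answers from the evidence architecture. The method name, payload shape, and data here are illustrative only; a real deployment would use an official MCP SDK speaking the actual JSON-RPC-based protocol:

```python
# Hypothetical evidence store keyed by claim ID. All content is illustrative.
EVIDENCE = {
    "delivered-summit-2025": {
        "claim": "Delivered sovereign summit for 4,000 delegates",
        "verified": True,
        "provenance": ["https://example.com/evidence/summit-2025.pdf"],
    },
}

def handle_request(request):
    """Answer a structured agent query against the evidence store.
    Illustrates the request/response shape, not the real MCP wire format."""
    if request.get("method") == "evidence/get":
        claim_id = request["params"]["claim_id"]
        record = EVIDENCE.get(claim_id)
        if record is None:
            return {"error": "unknown claim"}
        return {"result": record}
    return {"error": "unsupported method"}

response = handle_request(
    {"method": "evidence/get", "params": {"claim_id": "delivered-summit-2025"}}
)
print(response["result"]["verified"])  # True
```

The design point is the active channel described above: instead of hoping a crawler infers the right facts from prose, the entity answers precise queries with verified, provenance-linked records.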
The Governance Imperative
Infrastructure without governance is a liability. The Evidence Economy rewards accuracy, not aggression. An Evidence Locker built on exaggerated or fabricated claims is actively counterproductive — AI systems cross-reference, triangulate, and increasingly detect inconsistencies.
This creates a governance requirement that most organizations have not yet addressed.
Attribution Policy
Every organization needs a formal attribution policy that defines how contributions are documented, credited, and maintained across the evidence architecture. This is not an HR function. It is an infrastructure function. When AI systems can verify who delivered what outcome, the absence of fair attribution becomes a measurable risk — visible in knowledge graphs, surfaceable by due diligence agents, and damaging to entity coherence scores.
The Forbes Communications Council has noted that AI platforms may soon assign unique “trust scores” to brands, integrating customer feedback, employee advocacy, media coverage, and executive reputation into comprehensive assessments of brand equity and authority. Attribution gaps do not just affect individual operators. They degrade the brand’s aggregate trust signal.
Operator Value Exchange
The original Operator Gap thesis concluded with a direct message to leadership: “Your architects have access to tools that can permanently restructure how AI systems attribute your achievements. The question is whether you’ll renegotiate the value exchange before they build their own evidence infrastructure”.
That renegotiation is now urgent. With 71% of CIOs stating they have until mid-2026 to prove AI value or risk budget cuts, and 54% of CIOs discovering unsanctioned “shadow AI” already in use within their organizations, the operational environment is already more fragmented than leadership assumes. The operators who built the value are not waiting for permission to document it.
A sustainable operator value exchange includes three elements:
- Attributed evidence: Operators receive verifiable, machine-readable documentation of their contributions that they retain regardless of employment status.
- Shared authority signals: Both the brand and the operator benefit from the evidence architecture — the brand retains institutional credit, the operator retains individual credit.
- Ongoing coherence maintenance: Evidence infrastructure is maintained as a living system, not a one-time documentation exercise. When operators depart, the transition is managed as a coherence event — documented, attributed, and reflected in the knowledge graph rather than silently erased.
Integrity as Infrastructure
The Allianz Risk Barometer 2026 elevated AI to its highest-ever position at number two among global business risks, with both cyber and AI now ranking as top five concerns across almost every industry sector. Gartner warns that atrophy of critical-thinking skills due to GenAI use will push 50% of global organizations to require AI-free skills assessments through 2026.
In this environment, integrity is not a value statement. It is infrastructure. Organizations that publish verifiable, accurate evidence build compounding trust with AI systems. Those that publish inflated or fabricated claims trigger cross-referencing failures that degrade their entity authority across every AI platform simultaneously.
The Evidence Economy does not punish organizations for past gaps. It punishes organizations for current dishonesty. The distinction matters.
The Sector Reckoning
The Operator Gap manifests differently across sectors, but the governance response is universal.
Hospitality
A restaurant’s reputation is inseparable from its chef. When the Maison Dalí Dubai deployment achieved a 927% increase in structured data density and a 47% improvement in AI discoverability, it demonstrated what happens when the operator and the brand co-invest in evidence infrastructure rather than treating attribution as a zero-sum negotiation. The chef’s verified authority strengthens the venue’s AI signal. The venue’s institutional context strengthens the chef’s authority. This is what a functioning operator value exchange looks like in practice.
Events and Strategic Communications
Branded agencies continue pitching portfolios built by operators who have departed. The knowledge graph shows the departure before the pitch deck is updated. In a market where AI-powered due diligence is standard, this is not an oversight — it is a discoverable misrepresentation. The response is not to hide the departure but to document the transition: who built the original capability, who maintains it now, and what evidence supports the continuity claim.
Technology and Consulting
Platforms are built by engineering teams that move between companies. Partners who built client relationships leave. The GCC region offers a case study in the acceleration — 84% of companies have adopted AI, but only 31% have scaled it. The gap between adoption and scaling is, in many cases, an operator gap: the people who built the initial AI capability have moved on, and the knowledge graph reflects the departure faster than the organizational chart does.
Private Equity and M&A
Knowledge graph-based risk intelligence platforms are moving from pilots to production in M&A workflows. Entity resolution, provenance trails, and multi-hop queries across contracts and personnel data mean that operator departures are now discoverable in days. PE firms assessing responsible AI frameworks during diligence increasingly view operator retention and attribution infrastructure as governance indicators. The Operator Gap is becoming a valuation variable.
What to Build Now
The window for proactive governance is open but narrowing. Based on the AEGIS Framework deployment data and the current rate of AI adoption, organizations should prioritize three immediate actions:
First, audit your evidence architecture. Map every material capability claim your organization makes against the structured data, schema markup, and knowledge graph entries that support it. Identify where claims exist only in human-readable formats (pitch decks, websites, PDFs) and have no machine-readable evidence trail. These are your exposure points.
Second, deploy attribution infrastructure. Implement an entity schema that explicitly connects operators to outcomes across your knowledge graph. This means Person schema linked to CreativeWork schema, Organization schema linked to Event schema, and contributor attribution at the evidence level — not just the marketing level. Schema markup now serves as the semantic foundation that AI systems use to interpret entities, relationships, and meaning at scale.
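An illustrative sketch of that attribution linkage as a JSON-LD graph. The person, event, organization, and `@id` URLs are placeholders; the structure shows how schema.org node references connect an operator to the outcome they delivered:

```python
import json

# Hypothetical attribution graph: a Person node linked by @id reference to
# the Event they contributed to and the Organization that ran it. All names
# and URLs are placeholders for illustration.
attribution = {
    "@context": "https://schema.org",
    "@graph": [
        {
            "@type": "Person",
            "@id": "https://example.com/people/jane-doe",
            "name": "Jane Doe",
            "jobTitle": "Event Director",
        },
        {
            "@type": "Event",
            "@id": "https://example.com/events/summit-2025",
            "name": "Sovereign Summit 2025",
            "organizer": {"@id": "https://example.com/org/example-consultancy"},
            "contributor": {"@id": "https://example.com/people/jane-doe"},
        },
        {
            "@type": "Organization",
            "@id": "https://example.com/org/example-consultancy",
            "name": "Example Consultancy",
        },
    ],
}

print(json.dumps(attribution, indent=2))
```

Because the operator and the organization are separate nodes joined by explicit references, both the institutional credit and the individual credit survive in the graph — the "shared authority signals" element of the value exchange above.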
Third, negotiate your operator value exchange now. Have the conversation with your key operators before they build their own infrastructure independently. The tools are available. The cost is negligible. The only variable is whether the evidence architecture is co-built — strengthening both brand and operator — or built adversarially, creating competing narratives that AI systems will arbitrate without sentiment or loyalty.
The Clock Is Running
Gartner’s prediction of $15 trillion in AI-agent-intermediated B2B spend by 2028 is not a distant scenario. It is a 24-month runway. The AI systems that will mediate that spending are being trained, refined, and deployed now. The evidence they ingest today shapes the decisions they make tomorrow.
The Operator Gap has always existed. What has changed is that it is now visible, measurable, and permanent in the AI-mediated information environment. The organizations that close it proactively — through fair attribution, evidence infrastructure, and genuine operator value exchange — will find that the Evidence Economy rewards them with compounding returns. Trust scores rise. AI visibility increases. Procurement agents surface them first.
The organizations that ignore it will discover something more uncomfortable: their architects have already started building.
Tim Jacobs is CEO and Founder of KTS Global, a Dubai-based strategic consultancy specializing in sovereign event architecture, narrative strategy, and AI-era digital authority frameworks. He is a member of The Hanwell Group’s Global Advisory Council. KTS Global operates at the intersection of statecraft, stagecraft, and software — delivering sovereign-level events and deploying Evidence Economy infrastructure for sovereign governments, luxury brands, and global institutions.
This is the second article in the Operator Gap series. The first article, “The Operator Gap as Critical Brand Risk in AI-Mediated Markets,” was published in February 2026.