On December 12, 2025, Strategy CPO Saurabh Abhyankar watched Claude Code connect to Mosaic through the Model Context Protocol (MCP) and build a complete analytics application. No dashboard tool. No BI platform. Just an AI coding agent querying a semantic layer and generating a working app.
At Strategy World 2026, Abhyankar told a room of industry analysts and practitioners that this was the moment he knew BI was dead. Not dying. Dead. Twenty years of building BI tools, he said — billions of dollars of engineering — and an AI coding agent replicated it in minutes.
That's a bold claim from a CPO whose company sells BI tools. But the technical architecture underneath it is worth understanding, regardless of whether you buy the marketing. Because what's actually happening at the protocol level — MCP connecting AI agents to governed semantic layers — changes how data engineers and developers will build analytics for the foreseeable future.
The Abstraction Gap That Explains Everything
Abhyankar's keynote built on a technical argument that developers and data engineers should find familiar: the history of productivity in software is the history of abstraction layers.
Physical circuits gave way to assembly language, which delivered roughly a 50x productivity improvement. High-level languages, then languages like Python, and now AI-assisted coding have pushed the cumulative gain to approximately 22,000x over six decades. Each layer of abstraction let engineers work at a higher level while the toolchain handled the complexity underneath.
Data engineering followed a different trajectory. The move from flat files to relational databases and SQL delivered about a 600x improvement. And then it stopped. The Big Data era actually regressed — MapReduce and Java-based processing were less productive than SQL for most analytical workloads. Modern lakehouse platforms like Snowflake and Databricks abstracted compute from storage, which was a meaningful architectural advance, but the query interface is still SQL. The abstraction layer didn't move up.
That's the gap. Software engineering productivity improved 22,000x. Data engineering productivity improved 600x and stalled. Abhyankar's argument is that this gap explains why data engineering projects take so long relative to software development, why data teams are perpetually backlogged, and why the BI industry has been building incrementally better versions of the same thing for 30 years.
Mosaic, in this framing, is the next abstraction layer. Instead of working at the SQL and table level, data engineers work at the business semantics level — defining what entities, metrics, and relationships mean. The platform handles the SQL generation, table creation, and query optimization underneath, the same way a compiler handles the conversion from high-level code to machine instructions.
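The compiler analogy can be made concrete. Here is a minimal sketch of the idea in Python, with an entirely hypothetical metric definition and function names; none of this is Mosaic's actual API, just an illustration of working at the semantics level while SQL generation happens underneath.

```python
# Hypothetical sketch of the "semantic compiler" idea: a metric is defined
# once in business terms and the layer emits the SQL underneath. None of
# these names come from Mosaic's actual API.

METRICS = {
    "revenue": {
        "base_table": "orders",
        "joins": ["JOIN order_items ON order_items.order_id = orders.id"],
        "expression": "SUM(order_items.quantity * order_items.unit_price)",
    }
}

DIMENSIONS = {"region": "orders.ship_region"}

def compile_query(metric: str, by: str) -> str:
    """Translate a business-level request into SQL, the way a compiler
    translates high-level code into machine instructions."""
    m = METRICS[metric]
    joins = " ".join(m["joins"])
    dim = DIMENSIONS[by]
    return (f"SELECT {dim} AS {by}, {m['expression']} AS {metric} "
            f"FROM {m['base_table']} {joins} GROUP BY {dim}")

print(compile_query("revenue", by="region"))
```

The caller asks for "revenue by region"; the join paths, expressions, and grouping are resolved from governed definitions rather than written by hand.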
How MCP Changes the Architecture
The Model Context Protocol is what makes the December 12 demo more than a parlor trick. MCP provides a standardized way for AI agents to discover and interact with external tools and data sources. When Claude Code connected to Mosaic via MCP, it didn't need to know the underlying database schema, the SQL dialect, or the table relationships. It queried the semantic layer, which translated business-level questions into the correct technical operations.
This is architecturally significant for three reasons.
First, the AI agent works at the same level of abstraction as a business user. It asks for revenue by region or inventory by store, and the semantic layer handles the joins, filters, and aggregations. The agent doesn't generate raw SQL against source tables. It generates queries against governed business definitions. That means the output is consistent regardless of which agent, which LLM, or which interface is asking.
Second, governance travels with the data. Because the agent queries through the semantic layer, every access control, metric definition, and business rule applies automatically. An AI agent building an inventory management app gets the same governed data as a dashboard, a notebook, or a human asking questions through conversational BI. The governance isn't bolted on after the fact. It's baked into the query path.
Third, the application layer becomes disposable. If an AI coding tool can generate a working analytics app from a requirements document in minutes, the traditional build cycle — requirements gathering, development, testing, deployment, maintenance — collapses. The app itself stops being the valuable artifact. The semantic layer underneath it becomes the durable asset.
Abhyankar demonstrated this live on stage with three different AI platforms. Gemini pulled data from Mosaic to create a product management workflow across CRM and ticketing systems. Claude generated multi-sheet Excel analyses with visualizations and built executive presentations from the same governed data source. OpenAI's Codex took a requirements document and built a complete inventory management application that worked on iPad and iPhone — from specification to working app in minutes.
The common thread: all three used Mosaic as the data layer through MCP. None of them needed to know anything about the underlying databases.
Open Semantic Interchange and Git Integration: What Matters to Engineers
Abhyankar announced that Strategy has joined the Open Semantic Interchange initiative. The practical implication: semantic models defined in Mosaic can be exported and imported in a common YAML format. On paper, this means enterprises aren't locked into a single semantic layer vendor. They can move definitions between tools or run multiple semantic layers that stay synchronized.
More immediately useful for engineering teams: Mosaic now supports Git integration. Semantic models can be version-controlled, run through CI/CD pipelines, and managed with standard diff and merge workflows. If you're a data engineer who's been maintaining metric definitions in documentation or tribal knowledge, this is the shift toward treating semantic models as code — versioned, tested, reviewed, and deployed through the same pipelines as application code.
Abhyankar was candid that enterprises will end up with multiple semantic layers. Companies acquire other companies, different teams adopt different tools, and the notion of a single universal layer across a large enterprise is aspirational. The OSI standard and Git integration are Strategy's answer to that reality: if you're going to have multiple semantic layers, at least make them interoperable and manageable through standard engineering workflows.
From Descriptive Semantics to Business Ontology
The most technically ambitious part of the Mosaic roadmap is the expansion from a semantic layer into a full business ontology. Traditional semantic layers are descriptive: they define what entities and metrics mean. Mosaic is adding verbs, rules, subtypes, workflows, and operational descriptions.
In concrete terms, that means encoding not just that a "customer" entity exists, but that a customer buys products, returns products, and must buy before returning. Premium customers inherit from the base customer entity with additional properties. Stores operate according to defined workflows. Factory material flows follow specific sequences.
During the analyst briefing, one industry researcher drew the distinction between descriptive and prescriptive semantics. A descriptive layer tells you what your data looks like. A prescriptive layer tells agents what to do with it. That distinction matters for the AI use case. If you want an agent to take action on behalf of the business — not just answer questions but execute workflows — it needs prescriptive rules, not just descriptive metadata.
Abhyankar described a system where Mosaic can suggest relationships automatically when it sees entities. If it encounters a customer entity and a product entity, it can propose that the relationship is "customer buys product" based on common patterns. Companies that have existing business process documentation can feed that into Mosaic to accelerate the ontology build. And over time, metadata from existing Strategy deployments across industries can be aggregated into vertical templates — a default semantic layer for financial services, retail, healthcare, and so on.
This is where the vision gets ambitious and the execution risk increases. Encoding an entire business into a semantic ontology is a massive undertaking. It's also where AI has real limitations.
What Production Looks Like Today
At the same conference, Sachin Bhatta, BI Director at the North Texas Tollway Authority, described what it's actually like to run Mosaic in production. NTTA processes over 300 billion toll transactions and moves 1.7 terabytes of data daily. They were one of the earliest Mosaic pilot customers and are now live in dark mode: Mosaic runs in production alongside their existing infrastructure without yet replacing it.
Bhatta's experience offers useful data points for engineering teams evaluating the platform.
On the data warehouse question: Bhatta is keeping his BigQuery environment. He sees the direction Mosaic is heading, but he's waiting for the technology to prove out at his data volumes before migrating. For new projects and new data sources, he starts in Mosaic. For existing infrastructure, he runs both in parallel.
On AI limitations: Bhatta offered a practical corrective that developers need to hear. AI handles textual queries well, but it still struggles with mathematical calculations and computations. That's why the metrics layer in Mosaic matters separately from the LLM integration. Calculated metrics need to be defined and computed by the engine, not generated by a language model that might hallucinate the math. Mosaic addresses this with two types of agents — one that operates on the semantic model and one that operates on dashboards — keeping computation separate from natural language interpretation.
On integration with existing Strategy environments: this is the gap Bhatta flagged most directly. Organizations that have invested years in building semantic layers with thousands of attributes and metrics on Strategy's traditional Intelligence Server platform need a bridge to Mosaic. That bridge doesn't fully exist yet. New projects work well in Mosaic. Migrating legacy semantic models is still a work in progress.
On preparatory work: Bhatta was blunt about what has to happen before Mosaic or AI delivers value. Your data dictionary, data catalog, and naming conventions need to be in order. Mosaic reads your metadata. If the metadata is inconsistent, the LLM answers will be inconsistent. No amount of AI sophistication fixes dirty metadata.
NTTA also provided a useful cost comparison. Before Mosaic, Bhatta's team ran customer clustering through Google Vertex AI — 32 separate processes involving knowledge graphs and ML models on BigQuery. Mosaic's built-in clustering feature could have handled much of that work within a single tool at significantly lower cost. For engineering teams doing cost-benefit analysis, that's a concrete data point.
What Data Engineers Should Pay Attention To
Strip away the conference marketing and three technical developments from Strategy World 2026 are worth tracking.
The first is MCP as the standard interface between AI agents and governed data layers. This isn't specific to Strategy. Any semantic layer or data platform that exposes its capabilities through MCP becomes accessible to any AI agent that supports the protocol. The competitive question isn't which semantic layer is best. It's which one your AI toolchain can talk to most effectively.
The second is semantic models as code. Git integration, CI/CD pipelines, version control, diff and merge — these are the workflows that data engineers already use for everything else. Bringing semantic model management into the same toolchain removes a category of manual governance work and makes semantic definitions testable and auditable in the same way application code is.
The third is the shift from descriptive to prescriptive semantics. If your organization is planning for agentic AI — agents that don't just answer questions but execute business workflows — your semantic layer needs to express business rules, relationships, and constraints, not just metric definitions. That's a significantly larger modeling effort than what most organizations have undertaken, and it's worth starting the planning now even if the tooling isn't fully mature.
Abhyankar gave the room a timeline for the data warehouse transition: some elements in six months, substantial capability within five years. That's an honest range from a CPO who watched his own product category get replicated by an AI agent in real time.
For data engineers, the practical move is the same one NTTA is already making: start new projects in the semantic layer, keep existing infrastructure running, invest in metadata quality, and watch how MCP integration with your AI toolchain evolves over the next two quarters. The architecture is shifting. The timeline depends on your data volumes, your governance maturity, and how much of your current pipeline is adding value versus just moving bytes.