Salesforce shifts developer focus from building data pipelines to orchestrating AI agents at scale.

BackerLeader · 3 min read

Salesforce Targets Data Fragmentation With New Platform Tools for AI Developers

Salesforce is expanding its platform architecture with new tools designed to help developers build and manage AI systems that work with unified data structures.

The company announced updates to Data Cloud, MuleSoft, Tableau, and embedded security features. The changes address a common problem: most enterprise AI projects fail because data quality is poor, governance is inconsistent, and systems don't connect properly.

According to a RAND study cited by Salesforce, more than 80% of AI projects fail to deliver value. The company's response focuses on giving developers better tools to manage metadata, govern AI agents, and standardize how business terms are defined across systems.

What Developers Get

The platform updates include five main components that change how developers work with enterprise data and AI.

Data Cloud Context Indexing helps AI agents understand unstructured content like contracts, diagrams, and tables. The new indexing pipeline interprets this content through business rules, making it easier to extract specific information from large, disconnected datasets.

For example, a field engineer could upload a technical schematic and have an AI agent walk through a troubleshooting process step by step. What used to take hours can now happen in minutes.
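The idea of interpreting unstructured content "through business rules" can be pictured as a small tagging pipeline. This is a hypothetical sketch to illustrate the concept; none of these names come from Salesforce's actual Data Cloud API:

```python
# Hypothetical context-indexing sketch: chunk unstructured text, tag each
# chunk with business concepts, and retrieve only the chunks that match.
import re

# Business rules map a concept name to a pattern that flags relevant chunks.
BUSINESS_RULES = {
    "warranty_term": re.compile(r"warranty", re.IGNORECASE),
    "part_number": re.compile(r"\bP/N\s*[\w-]+", re.IGNORECASE),
}

def index_document(text: str, chunk_size: int = 200) -> list[dict]:
    """Split unstructured text into chunks and tag each with business concepts."""
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    index = []
    for pos, chunk in enumerate(chunks):
        tags = [name for name, pat in BUSINESS_RULES.items() if pat.search(chunk)]
        index.append({"position": pos, "text": chunk, "tags": tags})
    return index

def query(index: list[dict], concept: str) -> list[str]:
    """Return only the chunks tagged with the requested business concept."""
    return [entry["text"] for entry in index if concept in entry["tags"]]

contract = "Standard terms apply. The warranty covers part P/N X-100 for 12 months."
idx = index_document(contract)
print(query(idx, "warranty_term"))
```

A production pipeline would use embeddings and document parsing rather than regexes, but the shape is the same: the business rules, not the raw text layout, decide what an agent can retrieve.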

Data Cloud Clean Rooms are now generally available. They let companies share and analyze data without exposing raw information. The feature uses zero-copy connectivity, which means you don't duplicate datasets. This reduces security risks, cuts storage costs, and helps with compliance.

Banks are already using this to compare transaction patterns and detect fraud faster. They can spot fraud rings in hours instead of weeks without exposing customer records to each other.
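The clean-room pattern itself is simple to sketch: each party pseudonymizes its identifiers before comparison, and the "room" releases only aggregates, never the matching records. This is a conceptual illustration with made-up names, not Salesforce code:

```python
# Conceptual clean-room sketch: two banks learn how many fraud suspects
# they share without either side seeing the other's raw account list.
import hashlib

def pseudonymize(account_ids: set[str], salt: str) -> set[str]:
    """Each party hashes its identifiers with a shared salt before comparing."""
    return {hashlib.sha256((salt + a).encode()).hexdigest() for a in account_ids}

def clean_room_overlap(a: set[str], b: set[str], min_count: int = 2) -> int:
    """The 'room' returns only an aggregate, and suppresses small results
    that could re-identify individual accounts."""
    overlap = len(a & b)
    return overlap if overlap >= min_count else 0

bank_a = pseudonymize({"acct-1", "acct-2", "acct-3"}, salt="shared-salt")
bank_b = pseudonymize({"acct-2", "acct-3", "acct-9"}, salt="shared-salt")
print(clean_room_overlap(bank_a, bank_b))  # → 2
```

Real clean rooms add stronger protections (query approval, differential privacy, zero-copy access to the underlying tables), but the contract is the same: aggregates out, raw rows never.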

Tableau Semantics is a new semantic layer that connects to Data Cloud. It translates raw data into business language that both humans and AI can understand. Salesforce is releasing a Customer 360 Semantic Data Model that unifies data and metadata across different Salesforce clouds.

The bigger news here is that Tableau is working with industry partners to create a universal semantic interchange. This could standardize how different platforms define business terms.

If your marketing team defines "annual contract value" differently than your sales team, that inconsistency flows into every report and every AI model. Tableau Semantics aims to fix that by enforcing one definition across all systems.
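What "one definition across all systems" means in practice is that every consumer resolves a metric through a shared model instead of re-deriving it locally. A minimal sketch, with hypothetical names:

```python
# Minimal semantic-layer sketch: business terms live in one shared model,
# and every team (and every AI agent) computes them the same way.
SEMANTIC_MODEL = {
    "annual_contract_value": {
        "description": "Total contract value normalized to a 12-month term.",
        "formula": lambda deal: deal["total_value"] / deal["term_months"] * 12,
    },
}

def metric(name: str, record: dict) -> float:
    """Resolve a business term through the shared model, not local logic."""
    return SEMANTIC_MODEL[name]["formula"](record)

deal = {"total_value": 36_000, "term_months": 24}
# Marketing and sales both get the same number for the same deal.
print(metric("annual_contract_value", deal))  # → 18000.0
```

The point is not the dictionary; it is that the definition lives in exactly one place, so a change to it propagates to every report and every model at once.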

MuleSoft Agent Fabric addresses what Salesforce calls "agent sprawl." As companies build more AI agents across different teams and platforms, coordination becomes a problem. Different agents do redundant work, create compliance gaps, and don't share information.

Agent Fabric gives you a single registry to manage every AI agent, no matter where it was built. A retailer could have separate agents for inventory tracking, price updates, and fraud detection that now work together. Price adjustments happen automatically, and fraud checks run in real time.
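The registry idea can be sketched in a few lines. This is a hypothetical illustration of the "one place to see every agent" pattern, not MuleSoft's actual API:

```python
# Hypothetical agent-registry sketch: every agent is registered centrally,
# and work is routed to existing agents instead of building redundant ones.
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    name: str
    owner_team: str
    capabilities: set[str]

@dataclass
class AgentRegistry:
    _agents: dict[str, AgentRecord] = field(default_factory=dict)

    def register(self, agent: AgentRecord) -> None:
        # Refuse duplicates: the registry is the single source of truth.
        if agent.name in self._agents:
            raise ValueError(f"duplicate agent: {agent.name}")
        self._agents[agent.name] = agent

    def find(self, capability: str) -> list[str]:
        """List every registered agent that can handle a capability."""
        return [a.name for a in self._agents.values() if capability in a.capabilities]

registry = AgentRegistry()
registry.register(AgentRecord("inventory-bot", "ops", {"inventory"}))
registry.register(AgentRecord("pricing-bot", "merch", {"pricing", "inventory"}))
print(sorted(registry.find("inventory")))  # → ['inventory-bot', 'pricing-bot']
```

Even this toy version shows why governance matters: the registry only prevents sprawl if teams actually register their agents and route through `find` before building a new one.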

Embedded AI Security and Compliance adds security monitoring across the platform. The updates include integrations with security vendors to identify threats and manage compliance proactively.

Why This Matters for Developers

These updates shift what developers do day-to-day. Instead of building custom data pipelines that break when source systems change, you're working with a unified data foundation. Instead of writing integration code between every system, you're orchestrating agents that already have access to standardized data.

The semantic layer is particularly important. When AI agents pull from the same business definitions, their outputs become more consistent and easier to explain. That matters when you need to show why a model made a specific recommendation.

The Agent Fabric addresses a real problem that's emerging as companies deploy more AI. Without a way to register and manage agents centrally, you end up with duplicated work and security blind spots. Having one place to see what every agent does and how they interact helps you avoid those issues.

Clean Rooms solve the collaboration problem when working with external partners. You can build applications that analyze shared datasets without ever moving sensitive information. That opens up use cases that weren't possible before due to privacy and compliance concerns.

The Bigger Picture

Salesforce is positioning these tools as the foundation for what it calls the "Agentic Enterprise"—where humans and AI agents work together across every workflow. The company's approach focuses on three principles: ensuring AI outputs use unified business data, embedding security and compliance into every workflow, and keeping systems open to avoid vendor lock-in.

Two enterprise customers highlighted in the announcement show how this works in practice. AAA Washington is using the unified data foundation to create a complete view of their members, improving roadside assistance, insurance, and travel services. UChicago Medicine emphasized that healthcare AI must be built on trust, with reliable data ensuring accurate patient interactions.

For developers building enterprise AI systems, these updates provide infrastructure that wasn't available before. You get standardized semantics, unified data access, centralized agent management, and built-in security—all designed to work together.

The announcements come ahead of Dreamforce, which runs October 14-16 in San Francisco. More details about implementation and availability will surface during the event.


Nice breakdown of Salesforce’s push toward unified AI data management. These updates feel like a big step toward making enterprise AI actually reliable instead of fragmented.

Thanks for the thoughtful question. I think it could help, but it's not a silver bullet.

The failure rate problem has multiple layers. Data fragmentation is a big one, and Salesforce is addressing that with the unified foundation. But companies also struggle with unclear use cases, poor change management, and teams that don't understand how AI actually works.

What stands out to me about this approach is the semantic layer. When you have different definitions of the same business term across departments, that inconsistency compounds as you scale AI. Fixing that at the platform level removes a major source of error.

The Agent Fabric is interesting too. We're already seeing companies hit "agent sprawl" problems—multiple teams building agents that don't talk to each other or that duplicate work. Having a central registry helps, but it also requires discipline. Someone has to actually maintain that registry and enforce governance.

I think the shift from pipeline builder to agent orchestrator is real, but it won't happen overnight. Most enterprises still have legacy systems and custom integrations that aren't going away. The developers who can bridge both worlds—understanding data engineering fundamentals while orchestrating AI agents—will be the most valuable.

Will this reduce the failure rate? Probably, for companies that commit to the unified approach. But you still need the fundamentals: clear business problems, clean data practices, and teams that understand the limitations of AI. Better tools help, but they don't replace good execution.
