Overview
There’s a lot of noise surrounding "MCP (Model Context Protocol)," and, much as with blockchain, image generators, LLMs, and autonomous agents before it, people are eager to proclaim "the next big thing." Still, even with justified skepticism toward AI hype cycles, a bit of careful examination can yield genuinely useful insights.
What is MCP
MCP is a protocol - meaning it only has value when multiple parties agree to adhere to it. Much of the groundwork was laid when OpenAI introduced tool use APIs for ChatGPT, which employed JSON-based interfaces and explicit function definitions. What matters now is the standardization effort led by Anthropic (creators of the Claude models and tools), and the broader push within the industry toward adoption.
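To make the tool-use groundwork concrete, here is a minimal sketch of what an OpenAI-style tool definition looks like: an explicit, JSON Schema-based description of a function the model may ask the host to call. The function name and fields here are illustrative, not taken from any real service.

```python
import json

# Illustrative shape of an OpenAI-style tool definition: the model is shown
# a JSON Schema description of each function it is allowed to invoke.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool name
        "description": "Return the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

print(json.dumps(get_weather_tool, indent=2))
```

The key point is that the interface is declared as data, not code: any party that agrees on this shape can generate, validate, or consume it, which is exactly the kind of agreement a protocol formalizes.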
Beyond the specification itself, MCP ships official server/client SDKs in languages such as C#, Python, and TypeScript - making it easier and faster for developers to build, integrate, and iterate using the protocol.
To any experienced developer, the "tool use" API concept (e.g., OpenAI's) immediately suggests automatic code generation, simplified function definitions, and custom tooling. MCP offers a plug-and-play implementation, letting you deploy a standards-based server in the language of your choice.
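The "automatic generation" idea can be sketched with nothing but the standard library: derive a tool definition from an ordinary function's signature and docstring. This is an illustrative reflection exercise, not the MCP SDK's actual API; the type mapping below covers only a minimal subset.

```python
import inspect
import json

def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

# Minimal mapping from Python annotations to JSON Schema types (assumption:
# only simple scalar parameters; real SDKs handle far more).
TYPE_MAP = {int: "integer", float: "number", str: "string", bool: "boolean"}

def tool_schema(fn):
    """Derive a tool definition from a function's signature and docstring."""
    sig = inspect.signature(fn)
    props = {
        name: {"type": TYPE_MAP.get(p.annotation, "string")}
        for name, p in sig.parameters.items()
    }
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn) or "",
        "parameters": {
            "type": "object",
            "properties": props,
            "required": list(props),
        },
    }

print(json.dumps(tool_schema(add), indent=2))
```

A decorator-based SDK does essentially this for you: annotate a function once, and the protocol-level description falls out of the language's own reflection facilities.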
MCP and LLM
But here’s a key question: does MCP actually have anything to do with AI or LLMs? Not directly. What it really promotes is something software systems should have already embraced - being reflective and externally accessible. Here’s a pun to consider: it requires existing systems to become more "reasonable."
Critical Reflection
Traditionally, if a developer doesn’t expose useful APIs, power users are left with limited automation options - often relying on brittle GUI hacks or workarounds. Now, under pressure from AI integration demands, vendors are compelled to expose meaningful interfaces.
Thanks to LLMs and their natural language capabilities, selecting and invoking these interfaces becomes easier. Developers and users are more likely to adopt this route over bespoke scripting languages or fixed REST APIs - especially when the MCP server/client pattern is clear and uses a familiar, structured format like JSON-RPC.
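The server/client pattern mentioned above can be sketched as a tiny JSON-RPC 2.0 dispatcher. MCP does run over JSON-RPC 2.0; the `tools/call` method name and the single-tool registry here are simplified assumptions for illustration, not a faithful implementation of the spec.

```python
import json

# Hypothetical tool registry: one "echo" tool that returns its input.
TOOLS = {"echo": lambda text: text}

def handle(raw: str) -> str:
    """Minimal JSON-RPC 2.0 dispatcher for a tools/call-style request."""
    req = json.loads(raw)
    if req.get("method") == "tools/call":
        name = req["params"]["name"]
        args = req["params"].get("arguments", {})
        result = TOOLS[name](**args)
        return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})
    # Standard JSON-RPC error code for an unknown method.
    return json.dumps({
        "jsonrpc": "2.0", "id": req.get("id"),
        "error": {"code": -32601, "message": "Method not found"},
    })

request = json.dumps({
    "jsonrpc": "2.0", "id": 1,
    "method": "tools/call",
    "params": {"name": "echo", "arguments": {"text": "hello"}},
})
print(handle(request))
```

Because both sides speak this one structured envelope, an LLM client does not need a bespoke scripting language per tool - it only needs to emit a well-formed request naming the tool and its arguments.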
In some ways, it feels like a return to SOAP or other early networked protocol standards. The fundamental idea remains: software should be reflective. It should be able to communicate what it can do and how it can be interacted with. From that standpoint, MCP isn’t fundamentally about LLMs - it’s about reasserting old principles: software should be reusable, programmable, and capable of exposing clean, structured APIs.
But will this catch on? Will the unified "natural language access" paradigm fundamentally reshape the role of scripting languages or traditional API models?
I’m skeptical. If developers truly wanted their tools to be accessible, they’d already provide at least a Python or Lua API. Similarly, if service providers were committed to openness, they would already expose a REST API. The ones most likely to adopt MCP are those who already believe in programmability and composability. The laggards - those still struggling with infrastructure, security, or scaling - are unlikely to embrace MCP any time soon.
Conclusion
MCP is not magic. It's another attempt to formalize a programming interface model - this time one that works well with LLMs. But its core principles are older than LLMs themselves. Like many revolutions in computing, the value won’t come from novelty alone, but from thoughtful, widespread implementation. Whether MCP becomes ubiquitous or not depends less on its cleverness, and more on who decides to build with it - and why.