Opinion: Some Reflections on MCP

BackerLeader · Originally published at medium.methodox.io · 2 min read

Overview

There’s a lot of noise surrounding "MCP (Model Context Protocol)," and much like the advent of blockchain, image generators, LLMs, and autonomous agents before it, people are once again eager to proclaim "the next big thing." Still, despite warranted skepticism toward AI hype cycles, a bit of careful examination can yield genuinely useful insights.

What is MCP

MCP is a protocol - meaning it only has value when multiple parties agree to adhere to it. Much of the groundwork was laid when OpenAI introduced tool-use (function-calling) APIs for ChatGPT, which employed JSON-based interfaces and explicit function definitions. What matters now is the standardization effort led by Anthropic (creators of the Claude models and tools), and the broader industry push toward adoption.

Beyond the specification itself, MCP ships server/client SDK implementations in C#, Python, and JavaScript - making it easier and faster for developers to build, integrate, and iterate on the protocol.

It should be immediately obvious to any experienced developer that the "tool use" API concept (e.g., OpenAI's) naturally suggests automatic code generation, simplified function definitions, and the creation of custom tooling. MCP offers a plug-and-play implementation, letting you deploy a standards-based server in the language of your choice.
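To make the "explicit function definition" idea concrete, here is a minimal sketch of a tool definition in the JSON-Schema style that function-calling APIs popularized. The tool name and fields are invented for illustration; they follow the general convention rather than any one vendor's exact API.

```python
import json

# A hypothetical tool definition: a name, a description, and typed,
# schema-validated parameters. The model reads this schema and emits a
# structured call that the host application then executes.
get_weather_tool = {
    "name": "get_weather",
    "description": "Return the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name, e.g. 'Oslo'"},
            "units": {"type": "string", "enum": ["metric", "imperial"]},
        },
        "required": ["city"],
    },
}

print(json.dumps(get_weather_tool, indent=2))
```

Because the definition is plain data, it is trivially generated from existing code - which is exactly why the format invites automatic code generation and custom tooling.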

MCP and LLM

But here’s a key question: does MCP actually have anything to do with AI or LLMs? Not directly. What it really promotes is something software systems should have already embraced - being reflective and externally accessible. Here’s a pun to consider: it requires existing systems to become more "reason-able" - able to be reasoned with.

Critical Reflection

Traditionally, if a developer doesn’t expose useful APIs, power users are left with limited automation options - often relying on brittle GUI hacks or workarounds. Now, under pressure from AI integration demands, vendors are compelled to expose meaningful interfaces.

Thanks to LLMs and their natural language capabilities, selecting and invoking these interfaces becomes easier. Developers and users are more likely to adopt this route over bespoke scripting languages or fixed REST APIs - especially when the MCP server/client pattern is clear and uses a familiar, structured format like JSON-RPC.
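As a sketch of that familiar, structured format, here is roughly the shape of a JSON-RPC 2.0 request an MCP client sends to a server. The `tools/call` method name follows the published MCP specification, but transport, initialization, and error handling are all omitted, and the tool name is illustrative.

```python
import json

# A minimal JSON-RPC 2.0 message in the shape MCP clients use to
# invoke a tool on a server (simplified; no transport or handshake).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_weather",
        "arguments": {"city": "Oslo"},
    },
}

# Round-tripping through JSON shows why a structured wire format beats
# bespoke scripting: any language with a JSON library can speak it.
wire_message = json.dumps(request)
decoded = json.loads(wire_message)
print(decoded["method"])  # tools/call
```

The envelope (`jsonrpc`, `id`, `method`, `params`) is standard JSON-RPC; MCP mainly standardizes the method names and parameter shapes layered on top.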

In some ways, it feels like a return to SOAP or other early networked protocol standards. The fundamental idea remains: software should be reflective. It should be able to communicate what it can do and how it can be interacted with. From that standpoint, MCP isn’t fundamentally about LLMs - it’s about reasserting old principles: software should be reusable, programmable, and capable of exposing clean, structured APIs.
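The "reflective" principle - software that can communicate what it can do - can be illustrated with a toy self-describing registry. Everything here is invented for illustration; it is the principle, not MCP's actual implementation.

```python
import inspect

class ToolRegistry:
    """A toy registry that can both execute tools and describe them."""

    def __init__(self):
        self._tools = {}

    def register(self, fn):
        # Keep the function so its signature and docstring stay inspectable.
        self._tools[fn.__name__] = fn
        return fn

    def describe(self):
        # The reflective part: report each tool's signature and docstring.
        return {
            name: {
                "signature": str(inspect.signature(fn)),
                "doc": (fn.__doc__ or "").strip(),
            }
            for name, fn in self._tools.items()
        }

    def call(self, name, **kwargs):
        return self._tools[name](**kwargs)

registry = ToolRegistry()

@registry.register
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

print(registry.describe()["add"]["signature"])  # (a: int, b: int) -> int
print(registry.call("add", a=2, b=3))           # 5
```

A system built this way exposes its capabilities as data, so any client - a script, a REST gateway, or an LLM - can discover and invoke them without reading the source.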

But will this catch on? Will the unified "natural language access" paradigm fundamentally reshape the role of scripting languages or traditional API models?

I’m skeptical. If developers truly wanted their tools to be accessible, they’d already provide at least a Python or Lua API. Similarly, if service providers were committed to openness, they would already expose a REST API. The ones most likely to adopt MCP are those who already believe in programmability and composability. The laggards - those still struggling with infrastructure, security, or scaling - are unlikely to embrace MCP any time soon.

Conclusion

MCP is not magic. It's another attempt to formalize a programming interface model - this time one that works well with LLMs. But its core principles are older than LLMs themselves. Like many revolutions in computing, the value won’t come from novelty alone, but from thoughtful, widespread implementation. Whether MCP becomes ubiquitous or not depends less on its cleverness, and more on who decides to build with it - and why.


Comments

MCPs are the future of LLMs. Virtually all LLMs now have MCP integration.

Solid analysis! Your point about MCP being 'another attempt to formalize a programming interface model' cuts through a lot of the hype. I've been covering MCP for Techstrong, and what strikes me is how it's essentially solving the API discoverability problem that's plagued enterprise integration for decades.

You're absolutely right that this isn't fundamentally about LLMs - it's about making systems 'reasonable' and reflective. The real breakthrough is that natural language interfaces finally make these APIs accessible to non-developers, which could drive adoption where previous standardization efforts failed.

Your skepticism about laggards is well-founded, but I'm seeing early enterprise interest precisely because MCP offers a path to AI integration without ripping and replacing existing systems. The companies that already expose good APIs are indeed the early adopters, but the business pressure for AI capabilities might finally force the laggards' hands.

The SOAP comparison is apt - let's hope MCP learns from that complexity trap!

Hi Tom, thanks for your comments!

The AI pressure is indeed something unique this time - let's see where it leads us!
