This writeup shows how I built smart agents using Microsoft Agent Framework, without needing GPT at all. :)

Guess what? I didn’t use any large language model (LLM) like GPT. No hallucinations. No prompt engineering. Just clean logic, modular agents, and Python code that actually makes sense.
If you’re curious how I did it, I made a full video walkthrough that shows everything step-by-step. You can watch it here — trust me, it’s way cooler when you see it in action!
Before we start!
If you like this topic and you want to support me:
Follow me on Medium and subscribe to get my latest articles for free
Subscribe to the YouTube channel
What Is Microsoft Agent Framework?
Imagine you want to build an AI system that can answer questions, convert units, or plan tasks. Most people jump straight into using GPT or some other LLM. But that’s not always the best idea — especially if you want control, transparency, and speed.
Microsoft Agent Framework is a toolkit that lets you build modular agents — like little bots that each do one job — and then connect them together into a workflow. You can use it with LLMs if you want, but you don’t have to. That’s what makes it awesome.
It’s part of Microsoft’s bigger AI ecosystem, along with AutoGen and Semantic Kernel, but Agent Framework is the one that gives you full control over how agents talk to each other, pass messages, and get stuff done.
My Agent Setup: Parser → Converter → Explainer
I built a simple workflow to convert temperatures from Fahrenheit to Celsius. Sounds easy, right? But I wanted to do it using agents — each with a clear role.
Here’s how I set it up:
- ParserAgent: Takes the input like “Convert 100°F to Celsius” and extracts the numbers and units.
- ConverterAgent: Does the actual math using the formula.
- ExplainerAgent: Builds a nice explanation like “100°F is 37.78°C. Subtract 32, multiply by 5/9.”
Each agent is an executor in the framework. They receive messages, process them, and send results to the next agent. It’s like passing notes in class — but smarter.
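Stripped of the framework, the three roles boil down to three small functions chained together. This is a minimal sketch with hypothetical names, not the exact code from the video:

```python
import re

def parse(text: str) -> dict:
    """ParserAgent: extract the value and units from e.g. 'Convert 100°F to Celsius'."""
    match = re.search(r"(-?\d+(?:\.\d+)?)\s*°?\s*([FC])", text, re.IGNORECASE)
    if not match:
        raise ValueError(f"could not parse: {text!r}")
    value = float(match.group(1))
    unit = match.group(2).upper()
    return {"value": value, "from": unit, "to": "C" if unit == "F" else "F"}

def convert(msg: dict) -> dict:
    """ConverterAgent: apply the Fahrenheit/Celsius formula."""
    if msg["from"] == "F":
        result = (msg["value"] - 32) * 5 / 9
    else:
        result = msg["value"] * 9 / 5 + 32
    return {**msg, "result": round(result, 2)}

def explain(msg: dict) -> str:
    """ExplainerAgent: turn the numeric result into a readable sentence."""
    return f"{msg['value']:g}°{msg['from']} is {msg['result']}°{msg['to']}."

print(explain(convert(parse("Convert 100°F to Celsius"))))  # 100°F is 37.78°C.
```

Each stage takes plain data in and sends plain data out — that's exactly the contract the framework's executors formalize.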
How It Works (Behind the Scenes)
Let’s go deeper into the code and architecture. Microsoft Agent Framework uses a concept called executors — these are basically functions that act like agents. You define them using decorators like @executor, and you specify what input and output types they handle.
Here’s the signature for my parser agent:
from agent_framework import WorkflowContext, executor

@executor(id="parser_executor")
async def parse_text(input_text: str, ctx: WorkflowContext[dict]) -> None:
    # Real parsing logic would extract these values from input_text;
    # hardcoded here to keep the focus on the signature.
    result = {"value": 100, "from": "F", "to": "C"}
    await ctx.send_message(result)
Notice the input_text: str, ctx: WorkflowContext[dict] part? That tells the framework:
- The input is a string
- Any message it sends downstream (via ctx.send_message) is a dictionary
This is super important because the framework checks type compatibility between agents.
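To see why those annotations matter, here's a simplified illustration of annotation-based compatibility checking in plain Python. This is my own sketch of the idea — the framework's internal mechanism may differ:

```python
from typing import get_type_hints

# Stub executors carrying the same annotations as the real agents
def parse_text(input_text: str) -> dict: ...
def convert_text(data: dict) -> dict: ...
def explain_text(data: dict) -> str: ...

def edge_is_compatible(producer, consumer) -> bool:
    """True if the producer's return type matches the consumer's first parameter type."""
    out_type = get_type_hints(producer).get("return")
    in_hints = get_type_hints(consumer)
    in_hints.pop("return", None)
    in_type = next(iter(in_hints.values()), None)
    return out_type is in_type

print(edge_is_compatible(parse_text, convert_text))    # True: dict -> dict
print(edge_is_compatible(explain_text, convert_text))  # False: str -> dict
```

A mismatched edge like the second one is exactly the kind of wiring mistake the framework can reject before anything runs.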
Connecting the Agents
Once you’ve defined your executors, you use a WorkflowBuilder to connect them:
workflow = (
    WorkflowBuilder()
    .add_edge(parse_text, convert_text)
    .add_edge(convert_text, explain_text)
    .set_start_executor(parse_text)
    .build()
)
This creates a directed graph of agents. The message flows from parser → converter → explainer. You can even visualize this as a flowchart — which I did in my video.
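The message flow through that directed graph can be mimicked with a toy router in plain Python. This is purely illustrative — the framework's actual runtime is far more capable:

```python
def run_pipeline(edges: dict, start, message):
    """Walk the graph from the start executor, feeding each output to the next node."""
    node = start
    while node is not None:
        message = node(message)
        node = edges.get(node)  # follow the outgoing edge, if any
    return message

# Stub executors (parsing hardcoded, as in the earlier signature example)
parse = lambda text: {"value": 100.0, "from": "F", "to": "C"}
convert = lambda m: {**m, "result": round((m["value"] - 32) * 5 / 9, 2)}
explain = lambda m: f"{m['value']:g}°{m['from']} is {m['result']}°{m['to']}"

edges = {parse: convert, convert: explain}
print(run_pipeline(edges, parse, "Convert 100°F to Celsius"))  # 100°F is 37.78°C
```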
Debugging and Logging
One of the coolest things about Agent Framework is how easy it is to debug. You can log every message that gets passed between agents.
You can also trace the entire workflow using ExecutorInvokedEvent logs. These show which agent was called, what data it received, and what it sent out. It’s like having a built-in debugger for your AI system.
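You can get a similar invocation trace in plain Python with a logging decorator. This is a homemade analogue of that idea, not the framework's ExecutorInvokedEvent API:

```python
import functools
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")

def traced(fn):
    """Log each call: which executor ran, what it received, and what it sent out."""
    @functools.wraps(fn)
    def wrapper(msg):
        result = fn(msg)
        logging.info("executor=%s received=%r sent=%r", fn.__name__, msg, result)
        return result
    return wrapper

@traced
def convert_text(msg):
    # Fahrenheit -> Celsius
    return {**msg, "result": round((msg["value"] - 32) * 5 / 9, 2)}

convert_text({"value": 100.0, "from": "F", "to": "C"})
```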
Why No LLM?
Here’s the thing: LLMs are cool, but they can be unpredictable. Sometimes they make stuff up (hallucinate), and sometimes they’re just slow. For this task, I didn’t need fancy language generation — I needed accuracy and traceability.
By using pure Python logic and the Agent Framework, I could:
- See exactly what each agent was doing
- Log every message and result
- Debug easily when something went wrong
- Scale the system without worrying about token limits or API costs
Also, I didn’t need to worry about prompt engineering or model drift. My agents were deterministic — they always gave the same output for the same input. That’s a huge win for reliability.
How It Compares to AutoGen and Semantic Kernel
AutoGen is another Microsoft tool that lets you build multi-agent systems. It’s more focused on LLMs and conversational agents. You define agents that talk to each other using messages, and you can plug in GPT or other models.
Semantic Kernel is more about plugins, memory, and orchestration. It’s great for building copilots and integrating with external tools. You can take a quick look at my Semantic Kernel playlist here.
Agent Framework is like the glue that holds everything together. It gives you the runtime, the message routing, and the execution flow. You can use it with AutoGen and Semantic Kernel — or just use it on its own like I did.
According to Microsoft’s official blog, Agent Framework is an open-source engine for agentic AI apps, and it’s designed to support everything from retrieval agents to compliance agents.
Watch My Full Demo
If you want to see this in action — with code, visuals, and real-time execution — check out my video:
Agent Workflows Made Simple — No LLM, Just Logic
I walk through the whole setup, explain each agent, and show how the messages flow. It’s beginner-friendly but also deep enough for devs who want to build serious AI systems.
Final Thoughts
You don’t need to be a senior engineer or have a PhD to build smart agents. You just need the right tools — and Microsoft Agent Framework is one of the best I’ve found.
It’s clean. It’s powerful. And it doesn’t rely on LLMs unless you want it to.
So go build something. And if you get stuck, watch my video — I promise it’ll help.