If you’re building AI features like chatbots, content generators, AI agents, or streaming assistants, you’ll quickly realize that the “LLM call” is only 20% of the work.
The other 80% is:
✅ streaming responses
✅ UI updates in real-time
✅ tool calling
✅ handling errors & retries
✅ server/client integration
✅ building a clean developer experience
That’s where the Vercel AI SDK shines.
In this post, you’ll learn how to get started with the Vercel AI SDK and build a simple streaming chat app with Next.js.
What is the Vercel AI SDK?
The Vercel AI SDK is a set of libraries and utilities to build AI-powered apps with great UX:
- Streaming responses (token-by-token)
- React hooks for chat and completion
- Easy server routes (Next.js App Router)
- Tool/function calling support
- Provider integrations (OpenAI, etc.)
It’s designed for modern full-stack apps, especially in the Next.js ecosystem.
What You’ll Build
By the end, you’ll have:
✅ A Next.js app
✅ A chat UI
✅ An API route that streams AI responses
✅ Working end-to-end in minutes
Step 1: Create a Next.js App
Run:

```bash
npx create-next-app@latest vercel-ai-sdk-demo
cd vercel-ai-sdk-demo
npm run dev
```
Choose:
- TypeScript ✅
- App Router ✅
- Tailwind (optional but recommended) ✅
Step 2: Install the Vercel AI SDK
Install the core SDK:

```bash
npm install ai
```

Then install an AI provider package (OpenAI is common):

```bash
npm install @ai-sdk/openai
```

(In AI SDK 4 and later, the React hooks used below live in a separate `@ai-sdk/react` package; if you’re on a newer version, install it and adjust the `useChat` import accordingly.)
Step 3: Add Your API Key
Create `.env.local`:

```bash
OPENAI_API_KEY=your_api_key_here
```
Restart the dev server after adding env vars.
Step 4: Create the Streaming Chat API Route
In the Next.js App Router, create `app/api/chat/route.ts`:

```ts
import { openai } from "@ai-sdk/openai";
import { streamText } from "ai";

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = await streamText({
    model: openai("gpt-4o-mini"),
    messages,
    system: "You are a helpful assistant. Keep answers short and clear.",
  });

  return result.toDataStreamResponse();
}
```
What’s happening here?
- `messages` contains the full chat history
- `streamText()` streams the model output token by token
- `toDataStreamResponse()` returns a streaming response the UI can consume
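To make the contract concrete, here is a sketch of the JSON body this route expects. The `ChatMessage` type and `buildChatRequest` helper are illustrative only — in practice the `useChat` hook builds and sends this body for you:

```typescript
// Illustrative shape of one chat message (not an SDK type).
type ChatMessage = { role: "user" | "assistant" | "system"; content: string };

// Build the POST body that the route above destructures with
// `const { messages } = await req.json()`.
function buildChatRequest(messages: ChatMessage[]): string {
  return JSON.stringify({ messages });
}

const body = buildChatRequest([{ role: "user", content: "Hello" }]);
// body is '{"messages":[{"role":"user","content":"Hello"}]}'
```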
Step 5: Build the Chat UI with useChat()
Now create a UI page, `app/page.tsx`:

```tsx
"use client";

import { useChat } from "ai/react";

export default function Home() {
  const { messages, input, handleInputChange, handleSubmit, isLoading } =
    useChat({ api: "/api/chat" });

  return (
    <main style={{ maxWidth: 700, margin: "40px auto", padding: 16 }}>
      <h1 style={{ fontSize: 24, fontWeight: 700 }}>Vercel AI SDK Chat</h1>

      <div style={{ marginTop: 20, border: "1px solid #ddd", padding: 16 }}>
        {messages.map((m) => (
          <div key={m.id} style={{ marginBottom: 12 }}>
            <strong>{m.role}:</strong> {m.content}
          </div>
        ))}
      </div>

      <form onSubmit={handleSubmit} style={{ marginTop: 16, display: "flex" }}>
        <input
          value={input}
          onChange={handleInputChange}
          placeholder="Ask something..."
          style={{ flex: 1, padding: 10, border: "1px solid #ccc" }}
        />
        <button
          type="submit"
          disabled={isLoading}
          style={{ marginLeft: 8, padding: "10px 14px" }}
        >
          Send
        </button>
      </form>
    </main>
  );
}
```
Step 6: Run It
Start your app:

```bash
npm run dev
```

Then open http://localhost:3000.
Now type a message and watch the AI respond in real-time.
Understanding messages
`messages` is an array like:

```ts
[
  { role: "user", content: "Hello" },
  { role: "assistant", content: "Hi! How can I help?" },
]
```
The SDK manages:
- appending new messages
- streaming assistant messages
- updating UI automatically
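A rough mental model of the streaming update, purely for illustration (the hook’s real internals differ — `appendToken` and `Msg` are names made up here):

```typescript
type Msg = { id: string; role: "user" | "assistant"; content: string };

// Append a streamed token to the assistant message with the given id,
// creating the message on the first token. Returns a new array so a
// React state update would re-render the UI.
function appendToken(messages: Msg[], id: string, token: string): Msg[] {
  const last = messages[messages.length - 1];
  if (last && last.id === id) {
    return [...messages.slice(0, -1), { ...last, content: last.content + token }];
  }
  return [...messages, { id, role: "assistant", content: token }];
}

let msgs: Msg[] = [{ id: "u1", role: "user", content: "Hello" }];
msgs = appendToken(msgs, "a1", "Hi");
msgs = appendToken(msgs, "a1", " there");
// msgs[1].content is now "Hi there"
```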
Add a Better UI (Optional)
You can style messages by role:

```tsx
{messages.map((m) => (
  <div
    key={m.id}
    style={{
      padding: 12,
      borderRadius: 12,
      marginBottom: 10,
      background: m.role === "user" ? "#f2f2f2" : "#e7f5ff",
    }}
  >
    <strong>{m.role}:</strong> {m.content}
  </div>
))}
```
Tool Calling (Preview)
Tool calling lets your AI call functions such as:
- search database
- fetch weather
- calculate tax
- call APIs
Example concept:

```ts
tools: {
  getTime: {
    description: "Get current server time",
    execute: async () => new Date().toISOString(),
  },
}
```

(In the real SDK, each tool also declares a `parameters` schema, typically defined with Zod.)
This is where Vercel AI SDK becomes powerful for agentic workflows.
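As a standalone sketch, here is the body of a “calculate tax” tool from the list above, written as a plain function. The name and signature are made up for illustration; in `streamText()` it would sit under a tool’s `execute`, with the model supplying the arguments:

```typescript
// Hypothetical tool body: computes tax owed on an amount at a percentage rate.
function calculateTax(amount: number, ratePercent: number): number {
  // Round to 2 decimal places to keep currency values tidy.
  return Math.round(amount * ratePercent) / 100;
}

calculateTax(100, 18); // → 18
```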
Common Issues & Fixes
❌ “OPENAI_API_KEY missing”
Fix:
- Ensure `.env.local` exists and contains the key
- Restart the dev server
- Double-check the variable name (`OPENAI_API_KEY`)
❌ No streaming happens
Fix:
- Make sure the UI uses the `useChat()` hook
- Ensure the API route returns `result.toDataStreamResponse()`
❌ Build fails on deployment
Fix:
- Add the env var in the Vercel dashboard
- Ensure the route runs on the server (it reads the API key at request time)
Production Tips (Best Practices)
✅ Keep prompts stable
Use a clear system prompt and avoid changing it too often.
✅ Limit message history
For performance and cost, send only the last N messages instead of the full history.
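A minimal sketch of that trimming (the `lastN` helper is mine, not part of the SDK; the system prompt lives in the route, so it is unaffected):

```typescript
// Keep only the most recent N messages before sending them to the model.
function lastN<T>(messages: T[], n: number): T[] {
  // Guard against n <= 0, since slice(-0) would return the whole array.
  return n <= 0 ? [] : messages.slice(-n);
}

lastN(["m1", "m2", "m3", "m4"], 2); // → ["m3", "m4"]
```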
✅ Add safety
Filter harmful content or enforce policies.
✅ Add logging
Log prompts, latency, and tool usage.
What to Build Next
Once you have a basic chat working, you can extend it into:
- AI Customer Support Assistant
- RAG-powered Knowledge Bot
- AI Agent for Jira / GitHub tasks
- Meeting Summary Generator
- AI Form Auto-Fill Assistant
Conclusion
The Vercel AI SDK is one of the fastest ways to build AI apps with excellent streaming UX and clean developer experience.
If you’re building in Next.js, this is a must-try toolkit.
Note: for a follow-up on creating agents with the Vercel AI SDK, see:
https://emilyxiong.medium.com/create-an-ai-agent-with-vercel-ai-sdk-e690b807eb2a