In this video, we explore how to run Large Language Models (LLMs) locally using Docker — and connect the Qwen 3 model with .NET Semantic Kernel to build intelligent AI applications.
At CodeStreet, we focus on practical, developer-friendly tutorials that combine real-world tools with cutting-edge AI. In this hands-on session, you’ll learn exactly how to run Qwen 3 inside Docker, wire it up with .NET Semantic Kernel, and create AI-powered apps — all locally and for free.
What You’ll Learn:
- ✅ How to run Large Language Models (LLMs) locally using Docker
- ✅ How to install Qwen 3 inside a Docker container
- ✅ How to integrate Qwen 3 with .NET Semantic Kernel
- ✅ How to build intelligent apps using Qwen 3 + .NET
- ✅ How to use free Qwen 3 models for AI code generation and local experimentation
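The video walks through its own setup, but as a rough sketch of the first two steps: one common way to host a model locally in Docker is the official `ollama/ollama` image, which can then pull and serve Qwen 3. The container name, volume name, and port below are illustrative defaults, not taken from the video.

```shell
# Start an Ollama container (one common way to host LLMs in Docker);
# port 11434 is Ollama's default API port, "ollama" is just a chosen name.
docker run -d --name ollama -p 11434:11434 -v ollama:/root/.ollama ollama/ollama

# Pull and chat with the Qwen 3 model inside the running container
docker exec -it ollama ollama run qwen3
```

Once the container is up, the model is also reachable over an OpenAI-compatible HTTP API on the mapped port, which is what makes the .NET integration later in the video possible.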
⚙️ Tools & Technologies Used:
- Docker for containerized model deployment
- Qwen 3 Model inside Docker
- .NET 9 SDK
- C#/.NET Console App for the integration demo
- Visual Studio Code
Why You Should Watch:
Running AI models locally gives you more control, privacy, and flexibility — no cloud dependencies or API rate limits.
This tutorial shows you how to bring Qwen 3 to life inside Docker and connect it seamlessly to .NET Semantic Kernel, enabling you to build local AI apps that understand, reason, and generate — without external API calls.
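As a minimal sketch of what that wiring can look like (not the exact code from the video): Semantic Kernel's OpenAI connector can be pointed at a local OpenAI-compatible endpoint. The endpoint URL, port, and model id `qwen3` below are assumptions that depend on how you host the model, and the custom-endpoint overload may vary between Semantic Kernel versions, so check the connector API for your installed package.

```csharp
// Sketch: connect Semantic Kernel to a locally hosted Qwen 3 model.
// Assumes an OpenAI-compatible server on localhost:11434 (e.g. Ollama in Docker);
// the model id and port are illustrative, not from the video.
using Microsoft.SemanticKernel;

var builder = Kernel.CreateBuilder();

// Point the OpenAI chat connector at the local server;
// the API key is not validated by a local server, so any placeholder works.
builder.AddOpenAIChatCompletion(
    modelId: "qwen3",
    endpoint: new Uri("http://localhost:11434/v1"),
    apiKey: "not-needed");

var kernel = builder.Build();

// Ask the local model a question -- no external API call is made.
var result = await kernel.InvokePromptAsync("Explain Docker volumes in one sentence.");
Console.WriteLine(result);
```

From here the usual Semantic Kernel features (prompt functions, plugins, planners) run against the local model exactly as they would against a cloud endpoint.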
Who This Video Is For:
- .NET Developers exploring AI integration
- AI enthusiasts who want to run models locally
- Engineers interested in open-source LLMs
- Developers building AI apps with Semantic Kernel + Qwen 3