Talk twice as fast as you type, and create richer prompts with less effort.
TL;DR: Dictate your prompts instead of typing them to speak twice as fast and give more context.
Common Mistake ❌
You write detailed prompts by hand, word by word, staying at your desk.
You restrict yourself to slow keyboard-based interaction with AI tools because that's how you have always worked.
You assume voice input is still unreliable because of syntax complexity (brackets, semicolons, language-specific quirks).
Problems Addressed
Typing speed limits the depth and context of your prompts, hurting AI output quality.
You spend 80% of your job thinking, designing, and communicating, yet you let writing remain a bottleneck instead of optimizing it.
Staying chained to your desk during focused work reduces mobility, creativity, and wellbeing.
You miss opportunities to rubber-duck ideas by speaking them aloud to an AI agent.
You spend minutes typing complex prompts that you could dictate in seconds.
How to Do It
Enable voice mode in your tool (/voice or a similar command).
Hold the voice key and start dictating your prompt, even a long one with plenty of context.
Speak naturally in English, explaining your request with full context, constraints, and examples.
Release the key when you finish, and your tool will transcribe and process your input.
If your tool doesn't support voice mode, install Wispr Flow or similar and configure your preferred voice hotkey.
Use Speech-To-Text in any application (terminal, email, code editor, documents) by holding your hotkey and speaking.
Start with one scenario: dictating a single prompt.
Expand voice into your entire workflow as it becomes natural: specs, documentation, brainstorming, thinking aloud.
Benefits
Speed: You speak at 150 words per minute versus typing at 80 words per minute, nearly doubling your prompt composition speed.
Richer Prompts: Speaking encourages thinking aloud, producing more nuanced context and backstory than typed prompts, which improves AI output quality.
Rubber-Ducking: Explaining a problem verbally to an AI agent often surfaces solutions without explicit effort, just like talking to a colleague.
Mobility: You can draft specs, think through architecture, or brainstorm ideas while walking, commuting, or in environments that boost creativity.
Health: You spend less time hunched over your keyboard, your posture improves, and you move more.
Cognitive Flow: You stay in thinking mode instead of switching to typing mode, keeping your mental focus on the problem.
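The speed claim above is easy to sanity-check. Using the rough average rates of 150 spoken versus 80 typed words per minute (the prompt length of 120 words below is an illustrative assumption), a quick sketch:

```python
# Rough sanity check of the dictation speed-up, using the
# average rates quoted above (150 wpm spoken vs. 80 wpm typed).
SPEAK_WPM = 150  # typical conversational speaking rate
TYPE_WPM = 80    # fast typing rate for prose

def composition_seconds(words: int, wpm: int) -> float:
    """Time in seconds to produce `words` words at `wpm` words/minute."""
    return words / wpm * 60

prompt_words = 120  # a detailed prompt with context and constraints
typed = composition_seconds(prompt_words, TYPE_WPM)    # 90 seconds
spoken = composition_seconds(prompt_words, SPEAK_WPM)  # 48 seconds
print(f"typed: {typed:.0f}s, spoken: {spoken:.0f}s, "
      f"speed-up: {typed / spoken:.2f}x")
```

At these rates, a 120-word prompt drops from about a minute and a half of typing to under a minute of speaking, just shy of a 2x speed-up.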
Context
Why Voice Works Now
Historically, voice-to-code failed because speech recognition couldn't parse syntax.
Today, AI understands your intent in natural language and translates it into precise code.
Modern AI assistants like Claude can infer what you mean even when you speak conversationally ("can you fix the bug where users see a blank screen after login") without needing you to type angle brackets or semicolons.
Prompt Reference
Bad Prompt
Fix the code
# (Typing it in the console)
Good Prompt
# Dictated by /voice
I have a user authentication function that's failing; it is highlighted in the editor. When users try to log in with valid credentials, the system returns a 500 error instead of creating a session. The error logs show it's happening in the password validation step. I need you to review the authentication logic, identify why the validation is failing, and provide a fix that includes proper error handling for invalid credentials and missing database connections.
Considerations ⚠️
Dictation may feel awkward the first time because you aren't used to speaking technical requests aloud.
You may ramble or repeat yourself slightly when dictating, but the AI agent will extract what it needs.
Voice works best for high-level requests, architecture decisions, and design thinking, not for precise syntax correction.
You still need to review AI output before committing, just as you would with typed prompts.
Type
[X] Semi-Automatic
Limitations ⚠️
Voice input works best in quiet environments; background noise reduces accuracy.
Some very large or precise code changes may be faster to type than dictate.
You need to dictate in English, or change your tool's overall language setting.
Most tools won't let you mix multiple voice languages.
You need to invest time learning which voice tool works best for your workflow and hardware.
Level
[X] Beginner
https://coderlegion.com/11538/ai-coding-tip-006-review-every-line-before-commit
https://coderlegion.com/9922/ai-coding-tip-003-force-read-only-planning
https://coderlegion.com/11001/ai-coding-tip-005-keep-context-fresh
https://coderlegion.com/9545/ai-coding-tip-002-prompt-in-english
Claude Code with /voice command
Wispr Flow
ChatGPT Voice Mode
Conclusion
Typing is no longer the bottleneck.
You speak twice as fast as you type, and AI understands natural language.
Use voice to dictate prompts, unlock mobility, improve wellbeing, and create richer input for better code generation.
Start with one prompt today.
https://www.youtube.com/v/7a1IVtGAKNI
https://claude.com/claude-code
https://wisprflow.ai
https://openai.com/chatgpt/voice/
https://en.wikipedia.org/wiki/Rubber_duck_debugging
Also Known As
- Voice-To-Code
- Dictation-Driven-Development
- Speech-Based-Prompting
Disclaimer
The views expressed here are my own.
I am a human who writes as best as possible for other humans.
I use AI proofreading tools to improve some texts.
I welcome constructive criticism and dialogue.
I shape these insights through 30 years in the software industry, 25 years of teaching, and writing over 500 articles and a book.
This article is part of the AI Coding Tip series.
https://coderlegion.com/12469/ai-coding-tips