
Dictating Prompts to ChatGPT, Claude, and Cursor on Mac

· 5 min read · mac · dictation · speech-to-text · ai · productivity

Here’s a quiet truth about working with AI tools: the quality of what you get back is mostly a function of how much context you put in. Short, vague prompts get short, vague answers. The people getting genuinely useful output from ChatGPT, Claude, and Cursor are the ones writing two- and three-paragraph prompts with specifics, examples, and constraints.

The problem is that nobody likes typing two paragraphs into a prompt box, every time, all day. So most of us write less than we should and accept worse output than we could be getting.

Voice fixes that math. At a typical speaking pace of about 150 words per minute, 200 words take roughly 80 seconds; at a typing pace of 40 words per minute, the same 200 words take around five minutes. When the cost of giving the model real context drops, you start giving it real context, and the answers get better.

Why Apple Dictation Falls Short for AI Prompts

If you’ve already tried using Mac’s built-in dictation for prompts, you may have run into the same wall most people do. The transcription itself is fine for short, clean sentences. The trouble starts when you talk like a normal human.

You think out loud, restart sentences, throw in a “wait, actually” halfway through, and finish with a trailing thought. Built-in dictation faithfully transcribes all of that — every “um,” every false start, every time you said “what I mean is.” Then you paste it into ChatGPT and it looks like a transcript of someone arguing with themselves.

You can edit it before sending, but at that point you’ve spent more time cleaning the prompt than you saved by speaking it.

The fix isn’t better speech recognition — Whisper-class models already nail the words. The fix is post-processing: a layer that takes the messy transcript and quietly cleans it into something a coherent person would have typed.
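To make the idea concrete, here's a toy sketch of that post-processing layer. Real tools in this category send the transcript through an LLM with a cleanup prompt; this rule-based version is only an illustration of the transformation, and the filler and restart patterns are invented for the example:

```python
import re

# Toy illustration of transcript post-processing. Real tools run the
# transcript through an LLM cleanup pass; this sketch only shows the
# kind of transformation involved (filler removal, false-start removal).

FILLERS = re.compile(r"\b(um+|uh+|you know|I mean)\b[,\s]*", flags=re.IGNORECASE)
RESTART = re.compile(r"[^.?!]*\b(wait,? actually|what I mean is)\b[,\s]*",
                     flags=re.IGNORECASE)

def clean_transcript(raw: str) -> str:
    text = RESTART.sub("", raw)    # drop everything before a restart marker
    text = FILLERS.sub("", text)   # strip filler words
    text = re.sub(r"\s+", " ", text).strip()
    # Re-capitalize in case the removal exposed a lowercase start.
    return text[:1].upper() + text[1:] if text else text
```

A transcript like "um, so I want a function that, wait actually I need a script that parses logs" comes out as "I need a script that parses logs". The LLM version of this pass additionally fixes grammar and tightens phrasing, which no regex can do.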

A Practical Workflow

The setup that works well for most people looks something like this:

  1. Pick a hotkey you can hit without thinking. Most dictation apps support a global hotkey that toggles recording from any app. The point is to remove the friction so you actually use it.

  2. Talk like you’re explaining to a smart colleague. Don’t try to sound like a prompt engineer. Just say what you want, what you’ve tried, and what’s confusing.

  3. Use a “clean text” mode. Instead of inserting raw transcription, run it through a quick AI cleanup that removes filler words, fixes false starts, and tightens the grammar — without changing what you actually said.

  4. Let it type directly into the prompt box. No copy/paste detour. The cursor’s already in the input field; the cleaned text just appears there.

This is more or less how I use LittleWhisper for AI prompting. It lives in the menu bar, runs transcription through your choice of engine (OpenAI Whisper, Deepgram, Groq, or fully on-device), and pipes the result through an editor mode that polishes it before typing it into whatever app is focused. Other apps like Superwhisper take a similar shape. The key feature category is “transcription with AI post-processing” — the engine isn’t the differentiator, the cleanup is.

What to Actually Say

The hardest part of switching to voice prompts isn’t the tool — it’s getting comfortable thinking out loud. A few patterns help.

For ChatGPT and Claude

Start with the situation, then the ask. Something like:

“I’m writing a function that takes a list of user records and groups them by signup month, but I want to handle the case where the timestamp is null. Right now I’m getting a TypeError. Here’s the function I have so far… actually let me paste that in. Can you tell me what’s wrong and suggest a cleaner approach?”

That’s how you’d describe it to a coworker. With dictation, that’s how you can prompt the model. Built-in dictation would render this as a wall of run-on sentences with the literal phrase “actually let me paste that in” inside it. A tool with AI cleanup will produce something closer to:

“I’m writing a function that takes a list of user records and groups them by signup month, but I need to handle null timestamps. I’m currently getting a TypeError. Can you review the function below and suggest a cleaner approach?”

Same meaning, none of the throat-clearing.
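For context, here's a sketch of what the function from that prompt might look like once the null case is handled. The record shape and field names (`signup_ts`) are invented for illustration, since the prompt doesn't specify them:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical version of the function described in the prompt above:
# group user records by signup month while tolerating null timestamps.
def group_by_signup_month(users: list[dict]) -> dict[str, list[dict]]:
    groups: defaultdict[str, list[dict]] = defaultdict(list)
    for user in users:
        ts = user.get("signup_ts")  # may be None: the case behind the TypeError
        key = datetime.fromtimestamp(ts).strftime("%Y-%m") if ts is not None else "unknown"
        groups[key].append(user)
    return dict(groups)
```

The point of the detailed prompt is that the model learns both constraints at once: group by month, and don't crash on nulls. A vague one-liner would likely get you only the happy path.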

For Cursor and other AI code editors

The voice approach pays off even more in Cursor’s chat panel, where you’re often pasting code and explaining what you want changed. Talking through “I want to extract this validation logic into its own function, but keep the error handling inline because the calling code branches on the specific error type” is dramatically faster than typing it. The cleaned-up prompt gives Cursor better signal, which means fewer rounds of “no, not like that.”

The one place voice doesn’t help is when you’d be writing code syntax inside the prompt. Dictating brackets and semicolons is miserable. Type the code, dictate the explanation around it.

Privacy Considerations

If you’re dictating prompts that include sensitive context — internal codebases, customer data, draft strategy docs — pay attention to where the audio is going. Cloud transcription APIs from OpenAI, Deepgram, and Groq are subject to their respective privacy policies. For most work this is fine; for regulated environments it might not be.

The on-device option matters here. Whisper’s model weights are open source, and the model runs entirely on Apple Silicon Macs at reasonable speed. Audio never leaves your machine. You give up a small amount of accuracy compared to the cloud large-v3 model, but for prompt dictation — where the AI cleanup pass will catch most errors anyway — it’s more than good enough.

Is It Worth Setting Up?

If you use AI tools casually — a few prompts a day, mostly short questions — probably not. The setup overhead won’t pay for itself.

If you’re using ChatGPT, Claude, or Cursor as part of your daily workflow and you’re routinely typing multi-paragraph prompts, the math changes fast. Most people who try voice prompting for a week don’t go back. Not because it’s a wow experience, but because the friction of typing detailed context starts to feel unnecessary once you’ve felt the alternative.

The thing nobody mentions: better prompts produce better output, and better output reduces the back-and-forth. The win isn’t just “I typed less.” It’s “I got the right answer in one round instead of three.” That compounds.


LittleWhisper is a free macOS menu bar app for dictating polished text into any application — including AI prompt boxes. Bring your own API key, use cloud credits, or run fully on-device. Download from littlewhisper.app.