The Standard Prompt

The standard prompt is the foundation of everything — a direct question or instruction. Understanding its limits shows you why better techniques exist.

March 30, 2026 · 3 min read · 1 / 5

The standard prompt is the simplest form of prompting: a direct question or instruction to the model. If you've used any AI tool, you've already used standard prompts.

It's also the foundation that every other prompting technique builds on. So understanding both what it does well — and where it fails — is the starting point.

What It Is

A standard prompt is exactly what it sounds like:

    What color is the sky?

    Why is thunder scary?

    Create a Prompt Library application that lets users save and delete prompts.

All of these are standard prompts. They have no examples, no structured format, no explicit constraints — just a direct ask.

The Key Rule

The quality of the question directly relates to the quality of the answer.

This isn't unique to AI. If you walked up to a stranger and said "help me fix my code," you'd get a blank stare. If you gave context — what you're building, what the bug is, what you've already tried — you'd get a useful answer.

LLMs are the same. The more relevant context you include in a standard prompt, the better the output.

How LLMs Respond Differently to the Same Prompt

Ask "what color is the sky?" and you might get:

"Blue."

Or you might get:

"The sky is typically blue during the day due to Rayleigh scattering, but it changes to orange and pink at sunrise/sunset, gray when overcast, and dark blue-black at night..."

Both are valid. LLMs are nondeterministic — the same model, given the same prompt at the same time by two different people, can produce different outputs.
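That variability comes from how models generate text: at each step the model scores candidate next tokens and, at a temperature above zero, samples one from those scores instead of always taking the top pick. Here's a toy sketch of that sampling step — real decoding works over a vocabulary of tens of thousands of tokens, and the candidate strings and scores below are made up for illustration:

```javascript
// Convert raw scores (logits) into probabilities. Higher temperature
// flattens the distribution, making less-likely tokens more probable.
function softmax(logits, temperature) {
  const scaled = logits.map((l) => l / temperature);
  const max = Math.max(...scaled); // subtract max for numerical stability
  const exps = scaled.map((s) => Math.exp(s - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

// Sample one candidate according to its probability.
function sampleToken(candidates, logits, temperature) {
  const probs = softmax(logits, temperature);
  let r = Math.random();
  for (let i = 0; i < candidates.length; i++) {
    r -= probs[i];
    if (r <= 0) return candidates[i];
  }
  return candidates[candidates.length - 1];
}

// Same "prompt", two runs — the chosen continuation can differ:
const answers = ["Blue.", "The sky is typically blue due to Rayleigh scattering..."];
const scores = [2.0, 1.6]; // hypothetical model scores
console.log(sampleToken(answers, scores, 0.8));
console.log(sampleToken(answers, scores, 0.8));
```

Run it a few times and the two logged answers will sometimes match, sometimes not — that's the nondeterminism you see in chat windows.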

Standard Prompts Are Like Talking to a Neighbor — Not Googling

On Google, you shorten your query: best tacos NYC.

With LLMs, write like you're asking a knowledgeable colleague:

❌ "fix bug auth middleware"

✅ "The auth middleware in Express isn't passing the user object to req.user. Here's the relevant code: [code]. The JWT is valid. What could cause this?"

Full sentences. Full context. Full questions.

Where Standard Prompts Fall Short

Consider this prompt used to build a simple Prompt Library app:

    Create a Prompt Library application that lets users save and delete prompts. Users should be able to enter a title and content for their prompt, save it to local storage, see all their saved prompts, and delete prompts they no longer need. Make it look clean and professional with HTML, CSS, and JavaScript.

The model built the app — but also added a search bar and an export button that weren't asked for. The save button didn't even work.

That's the core limitation of a standard prompt: the model fills in gaps with its own judgment. If you don't specify what you don't want, it will add what seems natural to it.
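For contrast, the core of what was actually asked for is small. Here's a sketch of the save/list/delete logic — storage is injected so it runs outside a browser; in the real app you'd pass window.localStorage:

```javascript
// Minimal core of the Prompt Library task: save, list, delete.
function createPromptLibrary(storage) {
  const KEY = "prompts";
  const load = () => JSON.parse(storage.getItem(KEY) || "[]");
  const persist = (prompts) => storage.setItem(KEY, JSON.stringify(prompts));
  return {
    // Save a new prompt and return its id.
    save(title, content) {
      const prompts = load();
      const id = prompts.length ? Math.max(...prompts.map((p) => p.id)) + 1 : 1;
      prompts.push({ id, title, content });
      persist(prompts);
      return id;
    },
    // Return all saved prompts.
    list: load,
    // Delete a prompt by id.
    remove(id) {
      persist(load().filter((p) => p.id !== id));
    },
  };
}

// In-memory stand-in for window.localStorage so this runs in Node:
const mem = new Map();
const storage = {
  getItem: (k) => (mem.has(k) ? mem.get(k) : null),
  setItem: (k, v) => mem.set(k, String(v)),
};
const lib = createPromptLibrary(storage);
const id = lib.save("Greeting", "Say hello in pirate speak.");
console.log(lib.list().length); // 1
lib.remove(id);
console.log(lib.list().length); // 0
```

Everything beyond this — search, export, styling choices — is the model exercising its own judgment over the gaps the prompt left open.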

When Standard Prompts Are Enough

Standard prompts work well when:

  • The task is simple and well-defined
  • You don't care about output format
  • The model's interpretation of "the right thing" aligns with yours
  • You're in a quick exploratory conversation

For example:

    Translate "bathroom" into Spanish. → baño

    Classify this sentiment as positive, negative, or neutral: "The product was OK." → Neutral

Simple, direct tasks with clear correct answers are where standard prompts shine.

Standard Prompts as Follow-Ups

Even after using more sophisticated techniques, standard prompts are the right tool for quick corrections and tweaks:

    // After a complex prompt built most of the feature correctly:
    "Remove the export button."
    "Fix the bug where saving doesn't work."
    "Make the delete button red."

Don't overthink follow-up prompts. If something is slightly wrong and the rest is fine, a simple standard prompt to correct it is faster than rewriting everything.

Practice: Try These

Open any LLM (Claude, ChatGPT, Copilot) and try:

Experiment 1: Observe nondeterminism

Type "What color is the sky?" in three separate chat windows simultaneously. Compare the outputs — they'll likely differ even though the question is identical.

Experiment 2: See context make a difference

Prompt A (vague):

    Help me with my React component.

Prompt B (specific):

    My React component re-renders on every keystroke even though the input state hasn't changed. Here's the component: [paste code]. What's causing this?

The quality difference will be immediately obvious.

Experiment 3: Watch scope creep

    Build a simple todo app with HTML, CSS, and JavaScript.

Count how many features the model adds that you didn't ask for. Then use a zero-shot prompt (next post) to get a tighter result.
