MCP Prompts

The third MCP primitive — canned instruction templates with dynamic parameters that standardize how an LLM approaches recurring tasks.

March 13, 2026 · 3 min read · Part 6 of 7

MCP has three primitives:

  1. Tools — the LLM calls functions to fetch data or perform actions
  2. Resources — the user injects static context into the conversation
  3. Prompts — canned instruction templates with parameters, invoked by the user

Prompts are the least used of the three, but I've found them genuinely useful when you have recurring tasks that need consistent LLM instructions.


What a Prompt Is

An MCP prompt is a pre-written instruction set with optional parameters. When the user selects a prompt, it fills the chat input with the rendered template — saving them from writing the same detailed instructions every time.

Think of it as a parameterized macro for your LLM interactions.


Defining a Prompt

TypeScript
server.prompt(
  "create-issue",
  "Generate a well-formatted GitHub issue based on a description",
  {
    title: z.string().describe("Issue title"),
    description: z.string().describe("What the issue is about"),
    type: z.enum(["bug", "feature", "enhancement"]).optional(),
  },
  ({ title, description, type = "feature" }) => ({
    messages: [
      {
        role: "user",
        content: {
          type: "text",
          text: `Create a GitHub issue with the following details:

Title: ${title}
Type: ${type}
Description: ${description}

Format it with:
- A clear problem statement
- Acceptance criteria as a checklist
- Any relevant technical context
- Labels: ${type}`,
        },
      },
    ],
  })
);
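When a client requests this prompt with arguments, the handler above just substitutes them into the template. A minimal standalone sketch of that substitution step — `renderIssuePrompt` is a hypothetical helper written for this example, not part of the MCP SDK:

```typescript
// Hypothetical helper mirroring the handler above: renders the
// create-issue template for a given set of arguments.
function renderIssuePrompt(args: {
  title: string;
  description: string;
  type?: "bug" | "feature" | "enhancement";
}): string {
  // Same default the handler uses when `type` is omitted.
  const { title, description, type = "feature" } = args;
  return [
    "Create a GitHub issue with the following details:",
    "",
    `Title: ${title}`,
    `Type: ${type}`,
    `Description: ${description}`,
    "",
    "Format it with:",
    "- A clear problem statement",
    "- Acceptance criteria as a checklist",
    "- Any relevant technical context",
    `- Labels: ${type}`,
  ].join("\n");
}

const text = renderIssuePrompt({
  title: "Login times out",
  description: "Sessions expire after 30 seconds of inactivity",
  type: "bug",
});
```

The rendered `text` is what ends up in the user's chat input — the server never talks to the model directly; it just hands back instructions for the user to send.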

Listing Prompts

Like tools, prompts are discoverable via the protocol:

Plain text
prompts/list → returns all available prompts with their schemas
prompts/get  → returns a rendered prompt for given parameters

Claude Desktop surfaces prompts in the UI (typically via a slash command or a dedicated panel). The user selects the prompt, fills in parameters, and the rendered instructions appear in the chat input.
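On the wire, these are ordinary JSON-RPC calls. A sketch of a prompts/get exchange for the create-issue prompt above — the shapes follow the MCP spec, but the argument values and rendered text here are illustrative, not output from a real server:

```typescript
// Illustrative prompts/get request: the client names the prompt and
// supplies its arguments.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "prompts/get",
  params: {
    name: "create-issue",
    arguments: {
      title: "Login times out",
      description: "Sessions expire after 30 seconds of inactivity",
      type: "bug",
    },
  },
};

// Illustrative response: the server returns the rendered messages,
// which the client places in the chat input.
const response = {
  jsonrpc: "2.0",
  id: 1,
  result: {
    messages: [
      {
        role: "user",
        content: {
          type: "text",
          text: "Create a GitHub issue with the following details: ...",
        },
      },
    ],
  },
};
```

Note that the server renders the template; the client only ever sees the finished messages, not the template or its substitution logic.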


Prompts vs System Prompts

A system prompt is set by the host application — it's always active and shapes the model's behavior globally. An MCP prompt is user-initiated and situation-specific.

They complement each other:

  • System prompt: "You are a helpful engineering assistant for our team."
  • MCP prompt: "Create a bug report in our format for: [specific issue]"

When Prompts Are Worth It

Prompts make sense when:

  • You have recurring tasks with consistent structure (issue creation, PR descriptions, code review checklists)
  • The instructions are long or easy to forget
  • You want to standardize how different team members interact with the LLM

For one-off or exploratory queries, a prompt adds friction without benefit. I use them for repeatable workflows where consistency matters — and skip them for everything else.


Client Support

Not all MCP clients support prompts. Tome, for example, supports tools but not resources or prompts. Claude Desktop and VS Code Agent Mode both support all three primitives. If prompts are core to your workflow, verify your client supports them before building.
