MCP Resources

The push model — how MCP resources let users inject static context into an LLM conversation, and when to use them over tools.

March 12, 2026 · 3 min read · Part 5 of 7

Here's what I found myself reaching for after tools: resources. MCP has three core primitives (tools, resources, and prompts), and resources behave fundamentally differently from tools — they work from the opposite direction.

  • Tools — the LLM decides it needs something and calls the tool (pull)
  • Resources — the user decides to give the LLM context and pushes it in (push)

With a tool, the model is the one initiating the interaction. With a resource, the human is.

[Figure: Tools vs Resources — Pull vs Push]


When to Use Resources

Resources are for static context that the LLM needs to answer questions accurately — things that don't change between requests and aren't triggered by user queries:

  • A database schema (so the LLM knows your tables and columns)
  • A configuration file (so it understands your project setup)
  • A style guide or API reference (so it follows your conventions)
  • A codebase file (so it can reason about your existing code)

You define the resource once. The user attaches it to a conversation. The LLM has it as context for everything that follows.


Registering a Resource

TypeScript

```typescript
server.resource(
  "database-schema",
  "issuetracker://schema",
  {
    name: "Database Schema",
    description:
      "The full SQLite schema for the issue tracker — tables, columns, and types.",
    mimeType: "text/plain",
  },
  async () => {
    const schema = await getSchema(); // runs .schema on the SQLite database
    return {
      contents: [
        {
          uri: "issuetracker://schema",
          text: schema,
          mimeType: "text/plain",
        },
      ],
    };
  }
);
```

The resource is identified by a URI (issuetracker://schema). When the user attaches it, the MCP client calls this handler, gets the content, and injects it into the conversation context.
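On the client side, attachment ultimately boils down to a resource read. Here's a rough sketch of what that looks like with the TypeScript SDK's `Client`, assuming `client` is already connected to the server over a transport:

```typescript
// Sketch: discover the server's resources, then read one by URI.
// Assumes `client` is an MCP Client instance already connected via a transport.
const { resources } = await client.listResources();
console.log(resources.map((r) => r.uri)); // e.g. ["issuetracker://schema"]

const result = await client.readResource({ uri: "issuetracker://schema" });
// result.contents[0].text now holds the schema text,
// ready to be injected into the conversation context.
```

This is all the "paperclip" in a client UI is doing under the hood: a `resources/list` to populate the picker, then a `resources/read` when you attach one.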


Using It in Claude Desktop

In Claude Desktop, you attach resources through the paperclip icon in the chat interface. Once attached, the LLM can see the resource content for the duration of the conversation.

For the database schema example, once it's attached, you can ask:

  • "What tables do we have?"
  • "Does the issues table have a created_at column?"
  • "Write a query to find all open bugs assigned to me."

The LLM answers based on the actual schema you provided — not hallucinated column names. This alone made resources feel essential to me.


The Static Limitation

Static resources take no input parameters, so there's no way to ask for anything more specific: every read of issuetracker://schema returns the whole schema, regardless of what the conversation actually needs.

This is fine for things that genuinely don't change. For dynamic data (records from the database, filtered results, per-user content), you want a tool instead — the user asks for something, the LLM calls the tool with the right parameters, and your code fetches only what's needed.
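To make that concrete, here's a hedged sketch of the tool-shaped version of the same idea — a parameterized lookup instead of a static dump. `getIssues` is a hypothetical database helper; the `server.tool` registration itself follows the TypeScript SDK's shape:

```typescript
import { z } from "zod";

// Sketch: dynamic, parameterized access belongs in a tool, not a resource.
// `getIssues` is a hypothetical DB helper, not part of the SDK.
server.tool(
  "list-issues",
  {
    status: z.enum(["open", "closed"]),
    assignee: z.string().optional(),
  },
  async ({ status, assignee }) => {
    // Fetch only what the model asked for, instead of dumping everything.
    const issues = await getIssues({ status, assignee });
    return {
      content: [{ type: "text", text: JSON.stringify(issues, null, 2) }],
    };
  }
);
```

The schema resource and this tool are complementary: the resource tells the model what the tables look like; the tool lets it pull specific rows on demand.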

Resource templates address this: they're part of the MCP spec and add parameterization to resources via RFC 6570 URI templates (e.g. issuetracker://issues/{id}), though client support is still uneven. For now, I treat resources as static and use tools for anything dynamic.
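For completeness, here's a rough sketch of what a templated resource looks like with the TypeScript SDK. `getIssue` is a hypothetical helper, and `list: undefined` means the server doesn't enumerate concrete URIs for the template:

```typescript
import { ResourceTemplate } from "@modelcontextprotocol/sdk/server/mcp.js";

// Sketch: a parameterized resource via a URI template.
// `getIssue` is a hypothetical DB helper, not part of the SDK.
server.resource(
  "issue",
  new ResourceTemplate("issuetracker://issues/{id}", { list: undefined }),
  async (uri, { id }) => {
    const issue = await getIssue(id); // look up one issue by template variable
    return {
      contents: [{ uri: uri.href, text: JSON.stringify(issue, null, 2) }],
    };
  }
);
```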


Resources vs File Attachments

You might wonder: why not just upload a file? The difference is automation and repeatability. A resource is part of your MCP server — it can be generated dynamically (e.g., the schema is fetched live from the database), it's always up to date, and it can be attached consistently across conversations without manual re-upload.

Resources also work with any MCP-compatible client, not just clients that support file upload.
