Roots, Sampling & Elicitation
Three advanced MCP capabilities — roots for workspace awareness, sampling for LLM-to-LLM calls, and elicitation for mid-task user input.
Tools, resources, and prompts cover most MCP use cases. But the spec includes three more capabilities that unlock interesting patterns for more complex servers: roots, sampling, and elicitation. These are the features I reach for once the basics feel solid.
Roots — Workspace Awareness
A root is a URI (typically a file system path) that the MCP client exposes to the server to indicate "this is the workspace you're working in."
```typescript
// Ask the client for the roots it has declared
// (requires the client to advertise the roots capability)
const { roots } = await server.server.listRoots();
// Example root: { uri: "file:///Users/you/my-project", name: "my-project" }
```

This lets your MCP server know which directory the user is working in, without requiring them to pass a path on every tool call.
When it's useful: file system tools, code analysis servers, or any server that needs to know the project scope. Instead of asking the user "what directory?" on every call, the server reads the root the client already declared.
Roots are negotiated during the initialize handshake: the client advertises that it supports roots, and the server can then request the current list (and be notified when it changes).
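On the wire, the server's request is a `roots/list` call and the client answers with a list of URI objects. A minimal sketch of that response shape, and of turning a `file://` root into a usable path (the URI and name here are made-up examples):

```typescript
import { fileURLToPath } from "node:url";

// Illustrative roots/list response from a client (shape per the MCP spec)
const rootsResult = {
  roots: [{ uri: "file:///Users/you/my-project", name: "my-project" }],
};

// Convert a file:// root URI into a filesystem path the server can use
const workspacePath = fileURLToPath(rootsResult.roots[0].uri);
console.log(workspacePath); // "/Users/you/my-project" on POSIX systems
```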
Sampling — LLM-to-LLM Calls
Sampling lets your MCP server ask the LLM to generate text as part of a tool's execution. Instead of returning a fixed string, the server can invoke the LLM mid-tool and use the generated text in its result.
```typescript
// Inside a tool handler, ask the client to sample from the LLM
const result = await server.server.createMessage({
  messages: [{
    role: "user",
    content: { type: "text", text: "Summarize this in one sentence: " + rawData },
  }],
  maxTokens: 100,
});
```

The MCP client receives this request, calls the LLM with it, and returns the result to the server.
When it's useful: you're building a tool that needs the LLM's judgment mid-execution — for example, a tool that fetches a large document and needs to summarize it before returning, or a code analysis tool that asks the LLM to rate the severity of findings.
Sampling is powerful but adds latency and cost. Use it when pre-computing in your tool handler isn't practical.
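The round trip can be sketched end-to-end with a stubbed client, independent of any SDK. The stub and the tool function below are illustrative names, not SDK APIs; only the request shape follows the spec:

```typescript
// Shape of a sampling request, per the MCP sampling spec
interface SamplingRequest {
  messages: { role: "user" | "assistant"; content: { type: "text"; text: string } }[];
  maxTokens: number;
}

// Stub standing in for the client's LLM round trip (illustrative only)
async function createMessage(
  req: SamplingRequest
): Promise<{ content: { type: "text"; text: string } }> {
  return { content: { type: "text", text: `summary of ${req.messages.length} message(s)` } };
}

// Tool-handler pattern: fetch data, ask the LLM to condense it, return the result
async function summarizeTool(rawData: string) {
  const result = await createMessage({
    messages: [{ role: "user", content: { type: "text", text: "Summarize in one sentence: " + rawData } }],
    maxTokens: 100,
  });
  return { content: [{ type: "text", text: result.content.text }] };
}
```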
Elicitation — Asking the User Mid-Task
Elicitation lets your MCP server pause a tool execution and ask the user a question, then resume with the answer.
```typescript
// Inside a tool handler, pause and ask the user
const response = await server.server.elicitInput({
  message: "This will delete all records in the issues table. Are you sure?",
  requestedSchema: {
    type: "object",
    properties: {
      confirm: { type: "boolean", title: "Confirm deletion" },
    },
  },
});
// The user can accept, decline, or cancel; only proceed on an explicit yes
if (response.action !== "accept" || !response.content?.confirm) {
  return { content: [{ type: "text", text: "Deletion cancelled." }] };
}
```

The client surfaces this as a dialog or prompt in the UI. The user responds, and the server resumes.
When it's useful: destructive or irreversible operations that need explicit confirmation, collecting clarifying information partway through a workflow, multi-step tasks where the next step depends on a user decision.
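Because the user can accept, decline, or cancel, a robust handler covers all three outcomes. A sketch of that branching over the spec's result shape (the `handleConfirmation` helper is hypothetical):

```typescript
// Elicitation results per the MCP spec: accept carries content, decline/cancel don't
type ElicitResult =
  | { action: "accept"; content: { confirm: boolean } }
  | { action: "decline" }
  | { action: "cancel" };

// Decide what a destructive tool should do with each possible outcome
function handleConfirmation(result: ElicitResult): "proceed" | "abort" {
  if (result.action === "accept" && result.content.confirm) {
    return "proceed";
  }
  // Declined, cancelled, or accepted-but-unchecked all mean: don't delete
  return "abort";
}
```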
Current Adoption
These three features are part of the MCP spec but have uneven client support:
- Roots: supported in most modern clients
- Sampling: supported in Claude Desktop and VS Code; less common in others
- Elicitation: newer and still gaining adoption
When building for broad compatibility, I stick to tools, resources, and prompts. I add the advanced features when I know the target client supports them and the use case justifies the complexity.
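One way to stay broadly compatible is to gate advanced behavior on the capabilities the client declared at initialize time. The field names below follow the spec's client-capabilities object; the fallback logic is an illustrative sketch:

```typescript
// Client capabilities relevant to these three features, per the MCP spec
interface ClientCapabilities {
  roots?: { listChanged?: boolean };
  sampling?: object;
  elicitation?: object;
}

// Pick a summarization strategy based on what the connected client supports:
// use sampling if it's available, otherwise fall back to plain truncation
function chooseSummaryStrategy(caps: ClientCapabilities): "sampling" | "truncate" {
  return caps.sampling ? "sampling" : "truncate";
}
```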