Calling External APIs from MCP Tools

Build a weather tool that hits a real API — and learn about LLM temperature, token costs, and how many tools you can safely expose.

March 11, 2026 · 3 min read · Part 4 of 7

The add tool is useful for testing, but MCP's real value shows when tools fetch live data that the LLM couldn't otherwise know. Current weather is the canonical example: LLMs have a knowledge cutoff, so they can't tell you today's conditions anywhere.

An MCP weather tool bridges that gap — and building one was the moment the whole model clicked for me.


The Open-Meteo API

Open-Meteo is a free, no-auth-required weather API. It accepts latitude and longitude and returns current conditions and forecasts. The endpoint:

Plain text
https://api.open-meteo.com/v1/forecast
  ?latitude=44.9778
  &longitude=-93.2650
  &current=temperature_2m,wind_speed_10m
  &temperature_unit=fahrenheit
  &wind_speed_unit=mph
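Abridged, the response looks roughly like this — a hand-written sketch of the fields the request selects, with illustrative values rather than real data (the get_weather tool below casts to this WeatherResponse shape):

```typescript
// Abridged sketch of the Open-Meteo response for the request above.
// Only the fields selected via `current=` are shown; values are illustrative.
interface WeatherResponse {
  latitude: number;
  longitude: number;
  current: {
    time: string;            // ISO 8601 timestamp of the observation
    temperature_2m: number;  // °F here, because temperature_unit=fahrenheit
    wind_speed_10m: number;  // km/h by default unless wind_speed_unit is set
  };
}

// Illustrative example, not captured output:
const example: WeatherResponse = {
  latitude: 44.9778,
  longitude: -93.265,
  current: { time: "2026-03-11T15:00", temperature_2m: 38.4, wind_speed_10m: 12.1 },
};

console.log(`${example.current.temperature_2m}°F`); // → "38.4°F"
```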

Building the Tool

TypeScript
server.tool(
  "get_weather",
  "Get the current weather conditions for a location. Provide latitude and longitude. Use when the user asks about current weather, temperature, or wind.",
  {
    latitude: z.number().describe("Latitude of the location"),
    longitude: z.number().describe("Longitude of the location"),
    location_name: z
      .string()
      .optional()
      .describe("Human-readable location name for the response"),
  },
  async ({ latitude, longitude, location_name }) => {
    const url = new URL("https://api.open-meteo.com/v1/forecast");
    url.searchParams.set("latitude", String(latitude));
    url.searchParams.set("longitude", String(longitude));
    url.searchParams.set("current", "temperature_2m,wind_speed_10m");
    url.searchParams.set("temperature_unit", "fahrenheit");
    // Open-Meteo defaults wind speed to km/h; request mph to match the reply text
    url.searchParams.set("wind_speed_unit", "mph");

    const response = await fetch(url.toString());
    if (!response.ok) throw new Error(`Weather API error: ${response.status}`);

    const data = (await response.json()) as WeatherResponse;
    const { temperature_2m, wind_speed_10m } = data.current;
    const name = location_name ?? `${latitude}, ${longitude}`;

    return {
      content: [{
        type: "text",
        text: `Current weather in ${name}: ${temperature_2m}°F, wind ${wind_speed_10m} mph`,
      }],
    };
  }
);

How the LLM Handles City Names

The API requires coordinates, not city names. But when a user asks "what's the weather in Minneapolis?", the LLM doesn't ask the user for coordinates — it infers them from its training data.

This is one of those things that delighted me the first time I saw it work. LLMs have geographic knowledge — they know Minneapolis is roughly 44.98°N, 93.27°W. They'll use that to fill in the parameters. They're occasionally imprecise (a few miles off), but accurate enough for weather queries.

This is the LLM doing its job: filling the gap between what the user said and what the tool requires.
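To make that concrete, here's a hypothetical illustration of what the inference looks like on the wire — the argument values are an assumption about what a model might emit for "what's the weather in Minneapolis?", not captured output:

```typescript
// Hypothetical tool-call arguments an LLM might emit for
// "what's the weather in Minneapolis?" — the user never supplied coordinates;
// the model fills them in from its geographic knowledge.
const inferredCall = {
  name: "get_weather",
  arguments: {
    latitude: 44.98,
    longitude: -93.27,
    location_name: "Minneapolis, MN",
  },
};

// Minneapolis city center is about 44.9778°N, 93.2650°W. The inferred values
// land within a couple of miles — plenty accurate for a weather query.
const latError = Math.abs(inferredCall.arguments.latitude - 44.9778);
const lonError = Math.abs(inferredCall.arguments.longitude - -93.265);
console.log(latError < 0.05 && lonError < 0.05); // → true
```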


LLM Temperature and Tool Selection

Temperature is a parameter (usually 0–1) that controls how deterministic a model's outputs are:

  • Temperature 0: Same answer every time — maximally deterministic
  • Temperature 0.8: More creative and varied — sometimes surprising

For tool selection, higher temperature can cause the model to occasionally not use a tool it has available, or to pick the wrong one. Most MCP clients use a moderate temperature. If you notice inconsistent tool selection, it's worth checking whether the temperature is a factor.
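Mechanically, temperature rescales the model's logits before sampling. A toy sketch (made-up logits, not a real model) shows why low temperature is near-deterministic and higher temperature spreads probability across options:

```typescript
// Temperature scaling: divide logits by T, then softmax.
// Lower T sharpens the distribution toward the top choice; higher T flattens it.
// (T = 0 is handled specially in practice — it means "always take the argmax".)
function softmaxWithTemperature(logits: number[], temperature: number): number[] {
  const scaled = logits.map((l) => l / temperature);
  const max = Math.max(...scaled); // subtract max for numerical stability
  const exps = scaled.map((s) => Math.exp(s - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

// Toy "next action" logits: [call get_weather, call add, answer directly]
const logits = [2.0, 1.0, 0.5];

console.log(softmaxWithTemperature(logits, 0.2)); // top choice gets ~99%
console.log(softmaxWithTemperature(logits, 1.0)); // top choice gets only ~63%
```

At low temperature the tool choice is effectively fixed; near 1.0 there's a real chance the model takes a different path on each run, which is exactly the inconsistency described above.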


The 40-Tool Limit

Every tool in the context window consumes tokens — its name, description, and full input schema are sent with every request. In practice, Claude's tool selection becomes noticeably less reliable with more than roughly 40 tools in context at once.

Practical guidelines:

  • Expose only the tools needed for the current task
  • Use clients that support enabling/disabling individual tools per session
  • Split large servers into multiple focused servers if needed
  • Keep descriptions concise — verbose descriptions waste tokens and can cause hallucinated edge cases

The goal is signal density: each tool should add clear capability, not noise.
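To get a feel for the per-request cost, you can serialize your tool definitions and apply the rough "one token per ~4 characters" heuristic. This is an approximation, not a real tokenizer, and the ToolDef shape here is a simplification for illustration:

```typescript
// Rough token-cost estimate for tool schemas using the common ~4 chars/token
// heuristic. Approximate only — real tokenizers will count differently.
interface ToolDef {
  name: string;
  description: string;
  inputSchema: object;
}

function estimateToolTokens(tools: ToolDef[]): number {
  return Math.ceil(JSON.stringify(tools).length / 4);
}

const tools: ToolDef[] = [
  {
    name: "get_weather",
    description:
      "Get the current weather conditions for a location. Provide latitude and longitude.",
    inputSchema: {
      type: "object",
      properties: {
        latitude: { type: "number", description: "Latitude of the location" },
        longitude: { type: "number", description: "Longitude of the location" },
      },
      required: ["latitude", "longitude"],
    },
  },
];

// Every request pays this cost, for every exposed tool, whether used or not.
console.log(estimateToolTokens(tools));
```

Run this against your real tool list: a few hundred tokens per tool adds up quickly at 40 tools.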
