Runtime Composition: The Shell and Remotes Pattern

How the shell-and-remotes pattern works — host app, remote modules, shared dependencies, and the trade-offs you're signing up for.

March 21, 2026 · 4 min read · Part 2 of 6

Of all the microfrontend patterns, runtime composition is the one with the most team autonomy and the most complexity. Understanding the architecture at a conceptual level before touching configuration makes the complexity much easier to reason about.

The Core Model

*Diagram: shell and remotes architecture*

Three moving parts:

The Shell (Host): The outer application that loads first. In an ideal world, it does very little — it's the container that fetches and renders the remote modules. In practice, the shell often ends up owning things like authentication, global navigation, and routing, because something has to. The shell is where centralized concerns live.

Who owns the shell matters. If you have a platform team or a team dedicated to the shell, that's a healthy setup. If nobody owns it — if it's an "inner source" project that everyone is supposed to contribute to — that's a tragedy of the commons waiting to happen. Someone has to be responsible for it.

Remotes: The independent pieces each team ships. A dashboard team ships a dashboard bundle. A billing team ships a billing bundle. Each gets deployed to their own CDN, S3 bucket, or hosting endpoint. The shell fetches them at runtime using a manifest that tells it where to look.

Shared dependencies: The stuff that both the shell and remotes need — React, the design system, routing utilities. If each team ships their own copy of React, users download and parse React multiple times. The shared dependency config tells the module federation runtime to load only one copy.

The Developer Experience Problem

One of the less-discussed costs: running this locally means running multiple dev servers.

In the example I've been exploring, there's a shell running on localhost:3000 and a remote running on localhost:3001. To develop the integrated experience, both need to be running simultaneously.

This multiplies questions: Which version of the remote do you point at during development — your local version, staging, or production? If you're building something that touches both the shell and a remote, how do you run that setup? If Team A is developing a feature that depends on a not-yet-deployed change from Team B, how do they develop against it?

There's no universal answer. I've seen teams use shared VPN infrastructure to run all remotes on a development cluster. I've seen teams just point at staging remotes and accept the occasional drift. I've seen teams maintain thorough contract tests so local stubs stay accurate. Each has tradeoffs.
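One small piece of plumbing that shows up in all of those setups: deciding which copy of a remote to load. A sketch, assuming a hypothetical `REMOTE_TARGET` environment variable and placeholder hosts:

```javascript
// Hypothetical helper: resolve a remote's entry URL per environment.
// REMOTE_TARGET and all hosts here are assumptions for illustration.
function remoteEntryUrl(name, target = process.env.REMOTE_TARGET || "local") {
  const hosts = {
    local: "http://localhost:3001",
    staging: `https://staging.example.com/${name}`,
    production: `https://cdn.example.com/${name}`,
  };
  return `${hosts[target]}/remoteEntry.js`;
}
```

A developer working on the shell alone can point at staging remotes; a developer touching both the shell and a remote flips the target to `local` and runs the second dev server.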

What It Means for State

Here's the thing I got wrong when I first thought about this pattern: the Context API stops working the way you'd expect.

If the shell is a React tree and each remote is its own React tree, the context from the shell doesn't reach into the remotes. A user logged in at the shell level is unknown to the remote — they're in separate trees.

This isn't a bug, it's a consequence of the architecture. Each remote is isolated. That isolation is the whole value proposition. But it means authentication state, user preferences, routing — anything that's "global" — needs an explicit coordination mechanism. We'll cover that when we get to communication patterns.

The Trade-offs, Honestly

What you get:

  • Teams deploy on their own schedule without coordinating
  • A remote team can ship a fix at any time, and the shell picks it up automatically without redeploying
  • Team-scoped blast radius — if one remote breaks, error boundaries can contain it

What it costs:

  • Reliability is now a function of every remote being available. Partial availability is harder to reason about than binary up/down
  • Flaky remotes are harder to deal with than a build that won't pass. At least a failing build is deterministic
  • Dependency version management across teams requires coordination anyway — just at a different layer
  • Configuration overhead: manifest files, shared dependency declarations, environment-specific URLs for every deploy environment

The mental model I find useful: this is like moving from a monolith to microservices on the backend. You get genuine team autonomy. You also get distributed systems problems — partial failures, version skew, coordination overhead — in the frontend. That trade is worth it at certain scales. It's not worth it at others.

We're going to spend the next several posts going deep on the implementation: how the configuration actually works, how lazy loading is wired up, how to manage shared dependencies and version conflicts, and how to solve the communication problem across separate React trees.
