GitHub Actions Patterns for Large Frontend Systems

The GitHub Actions patterns that make CI pipelines scale — matrix strategy, conditional jobs, and deciding what to run on which trigger.

March 22, 2026

The mistake I made early with CI was treating it as a binary: either run everything or run nothing. The result was a pipeline that took 20+ minutes on every push, which meant developers stopped waiting for it and merged anyway, long before the checks could tell them anything useful.

The patterns that fixed it were simpler than I expected.

What to Run and When

The first structural insight: not every check needs to run on every event. The right question is what guarantees you need at each stage.

On every push to a feature branch:

  • Lint (fast — under a minute if configured well)
  • Type check (fast with project references)
  • Unit tests (fast)

On pull request:

  • Everything above, plus integration tests
  • Build verification (does it actually compile?)
  • Bundle size check

On merge to main:

  • Everything, plus
  • End-to-end tests
  • Performance budget checks
  • Deployment

On a schedule (nightly or weekly):

  • Full accessibility audit
  • Full performance audit with Lighthouse
  • Dependency vulnerability scan

The cron checks in particular are valuable because they don't block deploys. A scheduled Lighthouse run that catches a performance regression tells you about it without blocking the PR that accidentally introduced it, and gives you time to fix the issue before it compounds.
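Mapped onto workflow triggers, that split looks roughly like this (three separate workflow files; the file names and the cron time are my own placeholders):

```yaml
# fast-checks.yml: every push to a feature branch
on:
  push:
    branches-ignore: [main]
---
# pr-and-merge.yml: pull requests, plus merges to main
on:
  pull_request:
  push:
    branches: [main]
---
# audits.yml: nightly, non-blocking
on:
  schedule:
    - cron: '0 3 * * *'   # 03:00 UTC every night
```

Within the second workflow, merge-only jobs like deployment can be gated with `if: github.event_name == 'push'` so they skip on pull requests.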

Matrix Strategy

When you need to run the same job across multiple configurations — multiple Node.js versions, multiple browsers, multiple packages in a monorepo — the matrix strategy handles the parallelization:

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [18, 20, 22]
    steps:
      - uses: actions/checkout@v4
      - uses: pnpm/action-setup@v4   # pnpm isn't preinstalled on the runner
        with:
          version: 9
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node-version }}
      - run: pnpm install
      - run: pnpm test
```

GitHub runs each matrix combination as a separate job in parallel. Three Node.js versions means three jobs running simultaneously instead of sequentially.

For a monorepo where each package has independent tests:

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        package: [ui, utils, analytics, dashboard]
    steps:
      - uses: actions/checkout@v4
      - uses: pnpm/action-setup@v4
        with:
          version: 9
      - run: pnpm install
      - run: pnpm --filter @myapp/${{ matrix.package }} test
```

The fail-fast option (default: true) cancels all other matrix jobs when one fails. For a matrix of package tests where you want to see all failures at once:

```yaml
strategy:
  fail-fast: false
  matrix:
    package: [ui, utils, analytics, dashboard]
```

Filtering by Changed Files

In a monorepo, running all tests on every push is wasteful. If only packages/utils changed, you don't need to test apps/dashboard.

The dorny/paths-filter action handles this:

```yaml
jobs:
  changes:
    runs-on: ubuntu-latest
    outputs:
      ui: ${{ steps.filter.outputs.ui }}
      utils: ${{ steps.filter.outputs.utils }}
    steps:
      - uses: actions/checkout@v4
      - uses: dorny/paths-filter@v3
        id: filter
        with:
          filters: |
            ui:
              - 'packages/ui/**'
            utils:
              - 'packages/utils/**'

  test-ui:
    needs: changes
    if: ${{ needs.changes.outputs.ui == 'true' }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: pnpm/action-setup@v4
        with:
          version: 9
      - run: pnpm install
      - run: pnpm --filter @myapp/ui test
```

This adds a small overhead (the changes job) but can eliminate most of the jobs on any given push, dramatically reducing total CI time.

Turborepo's own turbo run test --filter=[HEAD^1] does something similar — it runs tasks only for packages affected by changes since the last commit. If you're using Turborepo, this is often a cleaner approach than manual path filtering.
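As a workflow step, that would look something like the sketch below. The `fetch-depth: 2` matters because `actions/checkout` defaults to a shallow clone of depth 1, so `HEAD^1` wouldn't otherwise exist; having `turbo` as a dev dependency and pnpm 9 are assumptions here:

```yaml
steps:
  - uses: actions/checkout@v4
    with:
      fetch-depth: 2   # turbo needs HEAD^1 in the clone to diff against
  - uses: pnpm/action-setup@v4
    with:
      version: 9
  - uses: actions/setup-node@v4
    with:
      node-version: 20
      cache: 'pnpm'
  - run: pnpm install --frozen-lockfile
  - run: pnpm turbo run test --filter="[HEAD^1]"   # quoted so the shell doesn't glob the brackets
```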

Dependency Caching

Installing dependencies is often a significant chunk of CI time. Caching the pnpm store eliminates it on cache hits:

```yaml
steps:
  - uses: actions/checkout@v4
  - uses: pnpm/action-setup@v4
    with:
      version: 9
  - uses: actions/setup-node@v4
    with:
      node-version: 20
      cache: 'pnpm'
  - run: pnpm install --frozen-lockfile
```

The cache: 'pnpm' option in actions/setup-node handles cache keys and restoration automatically, hashing the lockfile to build the key. Cache hits mean pnpm install takes seconds instead of minutes.

The Shape of a Good Pipeline

The principle I keep coming back to: the pipeline should be fast enough that developers actually wait for it. A pipeline that takes 45 minutes trains developers to push and move on — which means the CI is providing safety theater, not actual safety.

The targets I aim for:

  • Push-triggered checks: under five minutes
  • PR checks: under ten minutes
  • Full pipeline on merge: however long it takes, but the PR checks have already validated the important things

If you're over these thresholds, the question is always: what can run less often, what can run in parallel, and what can be cached?
