Performance Budgets as Constraints
Making performance and accessibility measurable constraints — not aspirations you check manually.
There's a difference between a performance goal and a performance constraint. A goal is something you check when you remember to. A constraint is something that fails the build when it's violated.
I've had performance goals in codebases for years without them ever being enforced. Every six months someone would run Lighthouse, see the score had drifted down to 62, and then there'd be a sprint to fix it before the quarterly review. This is not a useful process.
The fix is treating performance as a first-class CI concern — something measured on every relevant change, with hard limits that block merges when violated.
Bundle Size as a Budget
The most concrete performance metric you can measure in CI is bundle size. It doesn't require a browser, it doesn't require a real user, and it's deterministic — the same code produces the same bundle.
The bundlesize package handles this with a simple config:
```json
// package.json
{
  "bundlesize": [
    {
      "path": "./dist/main.*.js",
      "maxSize": "150 kB"
    },
    {
      "path": "./dist/vendor.*.js",
      "maxSize": "200 kB"
    }
  ]
}
```

In the GitHub Actions workflow:

```yaml
- name: Build
  run: pnpm build
- name: Check bundle size
  run: npx bundlesize
```

This fails the job if any bundle exceeds its configured limit. The limit you set should be something you negotiate with the team based on your performance goals — not set once and forgotten, but treated as a number you actively manage downward.
The more powerful version: bundlesize can comment on PRs showing the size diff between the base branch and the PR branch. You can see exactly how much each PR added or removed from the bundle.
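For bundlesize to post back to the PR, it needs a GitHub token. A sketch of the workflow step, assuming a repository secret you've created for this purpose (the `BUNDLESIZE_GITHUB_TOKEN` env variable name is the one the tool reads; the secret name is your choice):

```yaml
- name: Check bundle size
  run: npx bundlesize
  env:
    # Token with repo status permissions so bundlesize can report
    # the size comparison back to the PR as a status check.
    BUNDLESIZE_GITHUB_TOKEN: ${{ secrets.BUNDLESIZE_GITHUB_TOKEN }}
```

Without the token, the check still passes or fails locally in the job; it just can't annotate the PR.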
Lighthouse in CI
Lighthouse is the standard tool for measuring Core Web Vitals, accessibility, SEO, and best practices. Running it in CI gives you a repeatable score against your actual deployed application.
lhci (Lighthouse CI) is the official tool:
```shell
pnpm add -D @lhci/cli
```

.lighthouserc.json:
```json
{
  "ci": {
    "collect": {
      "url": ["http://localhost:4173"],
      "startServerCommand": "pnpm preview",
      "numberOfRuns": 3
    },
    "assert": {
      "assertions": {
        "categories:performance": ["warn", { "minScore": 0.9 }],
        "categories:accessibility": ["error", { "minScore": 0.95 }],
        "categories:best-practices": ["warn", { "minScore": 0.9 }]
      }
    },
    "upload": {
      "target": "temporary-public-storage"
    }
  }
}
```

The error vs warn distinction: accessibility failures fail the build (error), performance regressions only warn. This reflects a judgment call — accessibility is a hard requirement, performance is something you want to track but may have legitimate short-term tradeoffs.
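Category scores are coarse; Lighthouse CI can also assert on individual audits and resource summaries, which makes failures more actionable. A sketch of a more granular assertions block — the audit keys are standard LHCI assertion names, but the specific thresholds here are illustrative numbers you'd tune to your own baseline (milliseconds for the paint metrics, bytes for resource sizes):

```json
"assertions": {
  "categories:accessibility": ["error", { "minScore": 0.95 }],
  "first-contentful-paint": ["warn", { "maxNumericValue": 2000 }],
  "largest-contentful-paint": ["warn", { "maxNumericValue": 2500 }],
  "cumulative-layout-shift": ["error", { "maxNumericValue": 0.1 }],
  "resource-summary:script:size": ["error", { "maxNumericValue": 153600 }]
}
```

A failing `largest-contentful-paint` assertion tells you where to look in a way a performance score of 0.87 doesn't.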
In the workflow:
```yaml
- name: Build for preview
  run: pnpm build
- name: Run Lighthouse CI
  run: npx lhci autorun
  env:
    LHCI_GITHUB_APP_TOKEN: ${{ secrets.LHCI_GITHUB_APP_TOKEN }}
```

With the GitHub app token, LHCI posts results as a PR status check with a link to the full report.
The Cron Approach
Not all performance checks should run on every PR. Full accessibility audits against production, Lighthouse runs against real URLs, and large dependency scans are expensive enough that they should run on a schedule, not on every push.
```yaml
on:
  schedule:
    - cron: '0 9 * * 1' # Monday at 9am UTC
  workflow_dispatch: # Also triggerable manually

jobs:
  lighthouse-production:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Lighthouse audit against production
        run: npx lhci autorun
        env:
          LHCI_URL: ${{ secrets.PRODUCTION_URL }}
```

The advantage of scheduled checks: they're non-blocking. They don't hold up merges, but they do create a record of performance over time. When you see the production Lighthouse score trend down over several weeks, you have data to bring to the conversation about why and what to do about it.
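Temporary public storage links expire, which undercuts the point of a long-term record. LHCI supports uploading to a self-hosted Lighthouse CI server that stores historical runs and charts trends. A sketch of the upload config, assuming you run such a server — the URL and token here are placeholders:

```json
"upload": {
  "target": "lhci",
  "serverBaseUrl": "https://lhci.example.com",
  "token": "your-build-token"
}
```

The server gives each scheduled run a permanent home, so the "trend down over several weeks" conversation can point at an actual graph.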
Accessibility as a Hard Limit
The distinction I make: accessibility failures should be error, not warn. An inaccessible UI is broken for real users. It's also increasingly a legal liability. I've started treating WCAG 2.1 AA compliance as a hard build constraint rather than a best-effort guideline.
For component-level accessibility, axe-core catches a significant portion of common violations in unit tests:
```tsx
import { render } from '@testing-library/react';
import { axe, toHaveNoViolations } from 'jest-axe';
import { Button } from './Button';

expect.extend(toHaveNoViolations);

it('Button has no accessibility violations', async () => {
  const { container } = render(<Button>Click me</Button>);
  const results = await axe(container);
  expect(results).toHaveNoViolations();
});
```

Adding this to component tests means accessibility regressions get caught at the unit level — fast and local — before they ever make it to CI.
Setting Budgets You Can Actually Hit
The temptation when setting budgets is to set them aspirationally — the numbers you wish you were at. This creates a budget that immediately fails and gets disabled.
The practical approach: measure where you actually are, set the initial budget slightly above that (so you don't immediately break), and then tighten it over time. A budget that starts at "100 kB" when your current bundle is "94 kB" gives you a baseline and some room. Tightening to "90 kB" is a deliberate decision you make when you've done the work to get there.
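To find that starting number, measure the compressed size of your built bundle, since tools like bundlesize compare gzipped size against the budget by default. A sketch of the measurement, where the `dist/main.js` path is a stand-in for your real build output (the first two lines generate a dummy file so the commands run anywhere):

```shell
# Stand-in bundle so the example is self-contained; in practice,
# point these commands at your real build output (e.g. dist/main.*.js).
mkdir -p dist
head -c 100000 /dev/zero | tr '\0' 'a' > dist/main.js

# Raw size in bytes
wc -c < dist/main.js

# Gzipped size in bytes: the number to compare against a "kB" budget
gzip -c dist/main.js | wc -c
```

Run this against the base branch and your feature branch and you have the same diff the CI comment would show, without waiting for CI.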
Budgets that you never violate are either too loose or too carefully managed. Budgets that fail and get disabled are useless. The goal is budgets that you occasionally violate, fix, and then defend.