When you’re maintaining multiple large-scale products across different teams, performance becomes more than a technical metric — it becomes a part of the brand experience.
In our case, every millisecond mattered. Whether it was our marketing site, dashboard, or internal tools, users expected pages to load instantly and interactions to feel effortless. But as our ecosystem grew — dozens of React apps, different build pipelines, and varying dependencies — performance started to drift.
That’s when we decided to make performance a shared responsibility across the company.
This post shares how we did it — from optimizing bundle sizes to implementing CI/CD pipelines that continuously measure, test, and improve our frontend performance.
1. The Problem: Performance Drift Across Products
It started innocently. A few teams added analytics scripts, another imported a UI library “just for one page,” and before we knew it, our once-lightweight apps had ballooned into 10–12 MB bundles.
Each product used slightly different build tools and dependencies, which meant optimizations weren’t consistent. Users on slow networks began noticing longer load times — and Lighthouse scores started dropping below 70.
That’s when our team made performance a first-class citizen in our engineering process.
2. Step One: Measuring What Matters (Lighthouse & Clarity)
We began by setting up Google Lighthouse audits for every major app and domain. This gave us quantifiable benchmarks:
- Largest Contentful Paint (LCP)
- Cumulative Layout Shift (CLS)
- Total Blocking Time (TBT)
- Bundle size and unused JavaScript
Each commit triggered a Lighthouse CI report, stored in our dashboard for trend comparison.
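A Lighthouse CI setup along these lines drives that per-commit report; the URLs, run count, and budget numbers below are illustrative, not our exact values:

```javascript
// lighthouserc.js — illustrative Lighthouse CI config (URLs and budgets are examples)
module.exports = {
  ci: {
    collect: {
      url: ['http://localhost:3000/', 'http://localhost:3000/dashboard'],
      numberOfRuns: 3, // take the median of several runs to smooth out noise
    },
    assert: {
      assertions: {
        'categories:performance': ['error', { minScore: 0.9 }],
        'largest-contentful-paint': ['error', { maxNumericValue: 2500 }],
        'cumulative-layout-shift': ['error', { maxNumericValue: 0.1 }],
        'total-blocking-time': ['warn', { maxNumericValue: 300 }],
        'unused-javascript': ['warn', { maxLength: 0 }], // no flagged items allowed
      },
    },
    upload: { target: 'temporary-public-storage' },
  },
};
```

Running `lhci autorun` in CI collects the runs, applies the assertions, and uploads the report for trend comparison.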
But raw numbers weren’t enough — we wanted to understand why users felt the site was slow.
That’s where Microsoft Clarity came in. It gave us session replays showing how real users interacted with our sites:
- Pages that frequently re-rendered.
- Scroll jank on mobile.
- Elements taking too long to become interactive.
Combining Lighthouse and Clarity was a breakthrough. One measured technical performance; the other revealed real experience.
3. Step Two: Cutting Down the Bundle Size
We introduced strict bundle size budgets across all repositories.
Each CI build now failed if the production bundle exceeded a certain limit (for example, 250 KB per route for SPAs).
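The budget check itself boils down to comparing emitted asset sizes against a per-route limit. A minimal sketch (the asset shape and the 250 KB limit are illustrative):

```javascript
// check-budgets.js — fail the build if any route bundle exceeds its budget
const BUDGET_BYTES = 250 * 1024; // 250 KB per route

function checkBudgets(assets, budget = BUDGET_BYTES) {
  // assets: [{ name: 'dashboard.js', size: 198432 }, ...]
  const violations = assets.filter((a) => a.size > budget);
  for (const v of violations) {
    console.error(`Budget exceeded: ${v.name} is ${(v.size / 1024).toFixed(1)} KB`);
  }
  return violations.length === 0; // CI exits non-zero when this is false
}
```

In practice the asset list comes from the bundler’s stats output, and a failing check marks the CI job red before review.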
To get there, we implemented several optimizations:
- Code-splitting and lazy loading using dynamic imports.
- Tree-shaking for unused exports.
- Shared dependencies extracted into a single runtime chunk via Turborepo.
- Replaced heavy libraries (like moment.js) with lighter ones (dayjs or native date functions).
- Analyzed bundles using webpack-bundle-analyzer and tamagui/analyzer (for cross-platform components).
In one of our largest apps, this alone reduced bundle size by 38%, cutting load time by almost 2 seconds on 3G networks.
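The dynamic-import side of this is mostly framework plumbing, but the core idea, fetching a chunk on first use and caching the in-flight promise, can be sketched framework-free. `lazyOnce` here is a hypothetical helper, similar in spirit to what React.lazy does internally:

```javascript
// lazyOnce: wrap a dynamic import so the chunk is requested at most once
function lazyOnce(loader) {
  let cached; // holds the in-flight or resolved promise for the chunk
  return () => (cached ??= loader());
}

// With a bundler, usage would look like:
//   const loadChart = lazyOnce(() => import('./Chart'));
// The chunk only downloads when loadChart() is first called, e.g. on route entry.
```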
4. Step Three: Enforcing Code Quality with SonarQube
Once our builds were stable, we integrated SonarQube into our CI pipeline to enforce performance-related coding standards.
SonarQube became our automated reviewer. It flagged:
- Unused imports or dead code.
- Expensive re-renders caused by misused React hooks.
- Inline styles causing layout thrashing.
- Unoptimized loops and nested map calls.
- Large or uncompressed assets.
By making SonarQube a part of every merge request, we shifted performance left — catching issues before they hit production.
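To make the “nested map calls” category concrete: joining two lists with a nested find is O(n·m), and the usual fix is a one-time lookup table. This is an illustrative before/after, not a SonarQube rule verbatim:

```javascript
// Before: O(n·m) — for every order, scan the entire users array
function joinSlow(orders, users) {
  return orders.map((o) => ({ ...o, user: users.find((u) => u.id === o.userId) }));
}

// After: O(n + m) — build a Map once, then each join is a constant-time lookup
function joinFast(orders, users) {
  const byId = new Map(users.map((u) => [u.id, u]));
  return orders.map((o) => ({ ...o, user: byId.get(o.userId) }));
}
```

On lists of a few thousand items each, the difference is easily visible in Total Blocking Time.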
5. Step Four: Automated Testing and Continuous Validation
We already had unit, integration, and E2E tests — but now we added performance tests into the mix.
Each release candidate went through:
- Lighthouse CI run (for metrics)
- Playwright E2E flow (for UX stability)
- Visual regression checks (via Chromatic)
- SonarQube analysis (for code quality)
- Bundle budget validation
All of this ran automatically in GitHub Actions as part of our CI/CD workflow.
If any metric degraded beyond its threshold, the pipeline blocked deployment and notified the responsible team.
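The gating logic reduces to comparing the release candidate’s measured metrics against per-metric limits. A simplified sketch, with metric names and thresholds that are illustrative rather than our production values:

```javascript
// gate.js — collect every metric that regressed past its threshold
const THRESHOLDS = {
  lcpMs: 2500,   // Largest Contentful Paint
  tbtMs: 300,    // Total Blocking Time
  bundleKb: 250, // per-route bundle budget
};

function findRegressions(metrics, thresholds = THRESHOLDS) {
  return Object.entries(thresholds)
    .filter(([key, limit]) => metrics[key] !== undefined && metrics[key] > limit)
    .map(([key, limit]) => `${key}: ${metrics[key]} > ${limit}`);
}

// CI wiring (sketch): exit non-zero and post the list to the team's channel
// whenever findRegressions(...) returns a non-empty array.
```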
6. Step Five: Real-User Monitoring (RUM)
Optimizations in the lab don’t always translate to real-world improvements.
So we integrated a RUM script in production to track key metrics directly from user sessions — FID, LCP, TTFB — aggregated by device and region.
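Once sessions report their metrics, the aggregation by device and region is a grouping problem. A sketch of that step, using the 75th percentile that Core Web Vitals reporting conventionally uses (the sample field names here are illustrative):

```javascript
// Aggregate RUM samples into a p75 value per device/region bucket.
function p75ByBucket(samples) {
  // samples: [{ device: 'mobile', region: 'EU', lcpMs: 2400 }, ...]
  const buckets = new Map();
  for (const s of samples) {
    const key = `${s.device}/${s.region}`;
    if (!buckets.has(key)) buckets.set(key, []);
    buckets.get(key).push(s.lcpMs);
  }
  const result = {};
  for (const [key, values] of buckets) {
    values.sort((a, b) => a - b);
    result[key] = values[Math.ceil(values.length * 0.75) - 1]; // nearest-rank p75
  }
  return result;
}
```

Bucketing by percentile rather than average keeps one outlier session from masking a broad regression.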
This allowed us to spot performance regressions before users complained.
For example, a regression in one product was traced back to a 2 MB analytics SDK introduced by marketing. Without RUM, we might never have caught it.
7. Step Six: Continuous Improvement through Dashboards
We built an internal Performance Dashboard pulling data from Lighthouse CI, Clarity, and SonarQube APIs.
Every product team could see:
- Their latest Lighthouse score
- Bundle trends over time
- Top slow pages
- Hotspots in real user data
Teams started to compete (in a friendly way) to get the “fastest load time” badge.
Performance became something everyone cared about — not just frontend engineers.
8. The Results
After six months of enforcing these practices:
- Average Lighthouse score increased from 67 → 94.
- Average First Contentful Paint dropped by 1.5 seconds.
- Bundle sizes reduced by up to 40%.
- Page abandonment on mobile dropped by 23%.
- CI/CD runs became a single source of truth for both code quality and performance.
But the biggest win?
Consistency.
Every product — regardless of team or stack — shared the same baseline of performance excellence.
Final Thoughts
Performance optimization isn’t a one-time effort. It’s a living discipline — a mix of good tooling, strong engineering culture, and continuous measurement.
By combining Lighthouse, Clarity, SonarQube, testing, and CI/CD automation, we turned performance into a predictable and measurable process, not a guessing game.
And now, when we ship something new — whether it’s a dashboard, microsite, or AI-powered app — we ship it fast, confident, and optimized from the start.