How we optimize performance across different products

When you’re maintaining multiple large-scale products across different teams, performance becomes more than a technical metric — it becomes a part of the brand experience.

In our case, every millisecond mattered. Whether it was our marketing site, dashboard, or internal tools, users expected pages to load instantly and interactions to feel effortless. But as our ecosystem grew — dozens of React apps, different build pipelines, and varying dependencies — performance started to drift.

That’s when we decided to make performance a shared responsibility across the company.

This post shares how we did it — from optimizing bundle sizes to implementing CI/CD pipelines that continuously measure, test, and improve our frontend performance.


1. The Problem: Performance Drift Across Products

It started innocently. A few teams added analytics scripts, another imported a UI library “just for one page,” and before we knew it, our once-lightweight apps were ballooning to 10–12 MB bundles.

Each product used slightly different build tools and dependencies, which meant optimizations weren’t consistent. Users on slow networks began noticing longer load times — and Lighthouse scores started dropping below 70.

That’s when our team made performance a first-class citizen in our engineering process.


2. Step One: Measuring What Matters (Lighthouse & Clarity)

We began by setting up Google Lighthouse audits for every major app and domain, which gave us quantifiable benchmarks to track over time.

Each commit triggered a Lighthouse CI report, stored in our dashboard for trend comparison.
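Lighthouse CI is configured through a `lighthouserc.json` file; a simplified sketch of what such a setup might look like (URLs, run counts, and thresholds here are illustrative, not our exact values):

```json
{
  "ci": {
    "collect": {
      "url": ["http://localhost:3000/"],
      "numberOfRuns": 3
    },
    "assert": {
      "assertions": {
        "categories:performance": ["error", { "minScore": 0.8 }],
        "largest-contentful-paint": ["warn", { "maxNumericValue": 2500 }]
      }
    },
    "upload": { "target": "temporary-public-storage" }
  }
}
```

Running the site against the same assertions on every commit is what makes the scores comparable as a trend rather than one-off snapshots.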

But raw numbers weren’t enough — we wanted to understand why users felt the site was slow.

That’s where Microsoft Clarity came in: its session replays and heatmaps showed us how real users actually interacted with our sites.

Combining Lighthouse and Clarity was a breakthrough. One measured technical performance; the other revealed real experience.


3. Step Two: Cutting Down the Bundle Size

We introduced strict bundle size budgets across all repositories.

Each CI build now failed if the production bundle exceeded a certain limit (for example, 250 KB per route for SPAs).
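The budget check itself can be very small. A sketch of the idea, assuming a map of route bundles to gzipped sizes in bytes (our real tooling reads these from the build output; the routes and sizes below are illustrative):

```javascript
// Fail CI when any route's production bundle exceeds the budget.
const BUDGET_BYTES = 250 * 1024; // 250 KB per route, as described above

// Return the routes that exceed the budget, with how far over they are.
function overBudget(bundleSizes, budget = BUDGET_BYTES) {
  return Object.entries(bundleSizes)
    .filter(([, size]) => size > budget)
    .map(([route, size]) => ({ route, size, overBy: size - budget }));
}

// In CI this would read real sizes from the build output (e.g. fs.statSync).
const offenders = overBudget({
  '/home': 180 * 1024,      // under budget
  '/dashboard': 310 * 1024, // 60 KB over
});
// In a real pipeline: if (offenders.length > 0) process.exit(1);
console.log(offenders); // → [{ route: '/dashboard', size: 317440, overBy: 61440 }]
```

Keeping the check this dumb is deliberate: a hard byte limit is easy to reason about in a failed build log, unlike a composite score.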

To get there, we implemented several optimizations.

In one of our largest apps, these optimizations alone reduced bundle size by 38%, cutting load time by almost 2 seconds on 3G networks.


4. Step Three: Enforcing Code Quality with SonarQube

Once our builds were stable, we integrated SonarQube into our CI pipeline to enforce performance-related coding standards.

SonarQube became our automated reviewer, flagging issues such as duplicated code, code smells, and overly complex functions before a human ever looked at the diff.

By making SonarQube a part of every merge request, we shifted performance left — catching issues before they hit production.
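At its simplest, wiring SonarQube into a repository is a `sonar-project.properties` file plus a scanner step in CI. A sketch with illustrative values (the project key and paths below are hypothetical):

```properties
# sonar-project.properties (values illustrative)
sonar.projectKey=acme-dashboard
sonar.sources=src
sonar.exclusions=**/node_modules/**,**/*.test.tsx
sonar.javascript.lcov.reportPaths=coverage/lcov.info
# Make the scanner wait for the quality gate result and fail the build on a red gate
sonar.qualitygate.wait=true
```

The `sonar.qualitygate.wait=true` line is what turns SonarQube from a passive report into a merge blocker.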


5. Step Four: Automated Testing and Continuous Validation

We already had unit, integration, and E2E tests — but now we added performance tests into the mix.

Each release candidate went through a battery of automated performance checks.

All of this ran automatically in GitHub Actions as part of our CI/CD workflow.

If any metric degraded beyond its threshold, the pipeline would block deployment and notify the responsible team.
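Stitched together, the gate looks roughly like the workflow below. This is a simplified sketch, not our production file; job names, Node version, and script names are illustrative:

```yaml
# .github/workflows/perf.yml (simplified sketch)
name: performance-gate
on: [pull_request]

jobs:
  perf:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm run build
      # Unit, integration, and E2E suites run here as well
      - run: npm test
      # Lighthouse CI asserts against the budgets in lighthouserc.json;
      # a failing assertion fails the job and blocks the merge/deploy.
      - run: npx @lhci/cli autorun
```

Because the gate runs on `pull_request`, a regression is visible to its author while the change is still cheap to fix.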


6. Step Five: Real-User Monitoring (RUM)

Optimizations in the lab don’t always translate to real-world improvements.
So we integrated a RUM script in production to track key metrics directly from user sessions — FID, LCP, TTFB — aggregated by device and region.
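On the aggregation side, the key design choice is reporting a high percentile per segment rather than an average, which hides tail latency; p75 is the convention Core Web Vitals uses. A sketch, assuming beacons shaped like `{ name, value, region }` (a hypothetical shape, not our exact schema):

```javascript
// 75th percentile of a list of metric values (nearest-rank method).
function p75(values) {
  const sorted = [...values].sort((a, b) => a - b);
  const idx = Math.ceil(0.75 * sorted.length) - 1;
  return sorted[Math.max(idx, 0)];
}

// Group beacons for one metric by region and reduce each group to its p75.
function aggregateByRegion(beacons, metricName) {
  const byRegion = {};
  for (const b of beacons) {
    if (b.name !== metricName) continue;
    (byRegion[b.region] ??= []).push(b.value);
  }
  return Object.fromEntries(
    Object.entries(byRegion).map(([region, vals]) => [region, p75(vals)])
  );
}

const beacons = [
  { name: 'LCP', value: 1800, region: 'eu' },
  { name: 'LCP', value: 2600, region: 'eu' },
  { name: 'LCP', value: 4200, region: 'eu' }, // slow tail session
  { name: 'LCP', value: 1500, region: 'us' },
];
console.log(aggregateByRegion(beacons, 'LCP')); // → { eu: 4200, us: 1500 }
```

Note how the one slow `eu` session dominates the p75 while it would barely move an average; that is exactly the behavior you want when hunting regressions.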

This allowed us to spot performance regressions before users complained.

For example, a regression in one product was traced back to a 2 MB analytics SDK introduced by marketing. Without RUM, we might never have caught it.


7. Step Six: Continuous Improvement through Dashboards

We built an internal Performance Dashboard pulling data from Lighthouse CI, Clarity, and SonarQube APIs.
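At its core, the dashboard just joins per-product records from each source into one row per product. A minimal sketch (the source names and field shapes below are illustrative, not our actual API payloads):

```javascript
// Merge per-product metric records from several sources into one row each.
function mergeMetrics(...sources) {
  const rows = {};
  for (const source of sources) {
    for (const { product, ...metrics } of source) {
      // Create the row on first sight, then layer each source's fields onto it.
      Object.assign((rows[product] ??= { product }), metrics);
    }
  }
  return Object.values(rows);
}

const lighthouse = [{ product: 'dashboard', perfScore: 92 }];
const sonar = [{ product: 'dashboard', codeSmells: 14 }];
console.log(mergeMetrics(lighthouse, sonar));
// → [{ product: 'dashboard', perfScore: 92, codeSmells: 14 }]
```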

Every product team could see its own scores, trends, and regressions at a glance.

Teams started to compete (in a friendly way) to get the “fastest load time” badge.
Performance became something everyone cared about — not just frontend engineers.


8. The Results

After six months of enforcing these practices, our metrics had improved across the board.

But the biggest win?
Consistency.
Every product — regardless of team or stack — shared the same baseline of performance excellence.


Final Thoughts

Performance optimization isn’t a one-time effort. It’s a living discipline — a mix of good tooling, strong engineering culture, and continuous measurement.

By combining Lighthouse, Clarity, SonarQube, testing, and CI/CD automation, we turned performance into a predictable and measurable process, not a guessing game.

And now, when we ship something new — whether it’s a dashboard, microsite, or AI-powered app — we ship it fast, confident, and optimized from the start.

Dito

© 2025 Ditorahard
