A widely deployed ISV package used by large enterprise customers had a critical Aura-based match management screen that took around 10 seconds to become usable. I redesigned the component architecture and migrated to LWC, bringing first usable load down to around 0.1 seconds.
What was the real component we migrated?
Note: This write-up is an anonymised abstraction of a real client implementation. Component names, object names, and snippets are adapted to protect customer and codebase details while preserving the technical approach.
I was brought in by an ISV whose compliance package is deployed across many large enterprise customers. The product handled KYC workflows inside Salesforce, was mature, and originally built in Aura. The team had already optimised around the edges, but users were still waiting too long before they could take action.
The core bottleneck was the match management tab and the set of components it coordinated, not one isolated widget. This tab handled open, flagged, and discounted matches, plus bulk review actions, and it carried most of the interaction load for operations users.
What was the business impact of the slowdown?
When first usable load sits around 10 seconds, operations teams feel it immediately:
- Higher cognitive interruption during repetitive case review.
- More refreshes and duplicate clicks while waiting for UI feedback.
- Slower throughput per analyst across a shift.
- Lower confidence in the platform under peak usage.
This was not just a technical debt issue. It was directly reducing daily throughput in a business-critical process.
Before and after load evidence
| Before (Aura) | After (LWC) |
|---|---|
| ![]() | ![]() |
Why the original Aura architecture struggled
The old approach loaded a large result set up front and then layered heavy client-side state management on top.
Representative Aura initialisation looked like this:
var action = component.get("c.getMatchedPotentials");
action.setParams({ recordId: recordId });
action.setCallback(this, function(response) {
    if (response.getState() !== "SUCCESS") { return; }
    var data = response.getReturnValue();
    // The entire open-match result set is held in component state up front.
    component.set('v.allData', data.potentials);
    component.set('v.totalSize', data.potentials.length);
    component.set('v.pageSize', data.matchesPerPage);
    // Only the first page is rendered, but everything has already been loaded.
    component.set('v.data', data.potentials.slice(0, data.matchesPerPage));
});
$A.enqueueAction(action);
From an engineering perspective, this has predictable scaling pain:
- Full open-match dataset loaded before the user could interact.
- Extensive in-memory list/state mutations in a single parent controller.
- Tight coupling between view state and interaction state.
- Unnecessary screen redraws compounding as selection, filtering, and bulk actions stacked up.
Architecture changes that actually moved the needle
The migration succeeded because it was an architecture change, not a syntax translation.
1) Parent tab refactor
I replaced the Aura parent tab with an LWC parent that owns state transitions explicitly and applies filtering/pagination as first-class behaviour.
async loadMatchedPotentials(resetPagination = true) {
    const result = await getMatchedPotentials({ recordId: this.recordId });
    // Normalise rows once, attaching UI-only state explicitly.
    this.allMatchedPotentials = (result.potentials || []).map(row => ({
        ...row,
        isSelected: false,
        statusDisplay: this.getStatusDisplay(row)
    }));
    if (resetPagination) {
        this.currentPage = 1;
        this.selectedRows = [];
    }
    // Filtering and pagination run as a first-class step, not ad hoc mutation.
    this.applyFiltersAndPagination();
}
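The filtering/pagination step referenced above can be sketched as a pure function. This is a minimal illustration, not the production code: the `status`, `statusFilter`, `pageSize`, and `currentPage` names are assumptions about the component's state shape.

```javascript
// Hypothetical sketch of the applyFiltersAndPagination step as a pure
// function over component state. Field names are illustrative.
function applyFiltersAndPagination(state) {
    const { allRows, statusFilter, pageSize, currentPage } = state;
    // Filter first, so page counts always reflect the visible subset.
    const filtered = statusFilter
        ? allRows.filter(row => row.status === statusFilter)
        : allRows;
    const totalPages = Math.max(1, Math.ceil(filtered.length / pageSize));
    // Clamp the page index so a filter change never strands the user
    // on a page that no longer exists.
    const page = Math.min(currentPage, totalPages);
    const start = (page - 1) * pageSize;
    return {
        pageRows: filtered.slice(start, start + pageSize),
        totalPages,
        currentPage: page
    };
}
```

Keeping this step pure makes the state transition trivially unit-testable outside the LWC lifecycle, which is part of what made the redesign safe to QA.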
2) Component graph decomposition
Instead of one heavy Aura surface doing everything, the LWC flow split responsibilities:
- Parent match tab for orchestration
- Open match card components for focused row rendering
- Dedicated flagged list component
- Dedicated discounted list component
- Separate bulk action modal workflow
This reduced competing state updates and made each user action cheaper to process.
3) Interaction contract cleanup
Events became explicit (itemaction, viewentity, card-level events), which improved debugging and made UI behaviour more deterministic under repeated analyst workflows.
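To make the contract concrete, here is a hedged sketch of the pattern: a child card dispatches a single well-typed `itemaction` event, and the parent routes every card event through one reducer-style handler. The `flag`/`discount` action names and field names are illustrative, not the package's actual payload.

```javascript
// A child card component dispatches one explicit event (illustrative payload):
//
//   this.dispatchEvent(new CustomEvent('itemaction', {
//       detail: { action: 'flag', recordId: this.record.id }
//   }));
//
// The parent funnels every itemaction through a reducer-style handler,
// so each state transition is explicit, deterministic, and testable.
function reduceItemAction(state, detail) {
    switch (detail.action) {
        case 'flag':
            return { ...state, flaggedIds: [...state.flaggedIds, detail.recordId] };
        case 'discount':
            return { ...state, discountedIds: [...state.discountedIds, detail.recordId] };
        default:
            // Unknown actions return the state unchanged rather than
            // silently mutating shared lists.
            return state;
    }
}
```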
What I did not do
I did not present this as “LWC is automatically faster than Aura.” That framing is too simplistic and often wrong in practice.
If you port the same loading and state model into LWC, you usually keep most of the pain.
Measured outcome
With the tab architecture and state flow redesigned, first usable load moved from roughly 10 seconds to around 0.1 seconds in the same operational context.
Secondary gains:
- Faster perceived responsiveness during repeat review actions.
- Less UI friction in bulk flag/discount workflows.
- Easier component-level troubleshooting for future enhancements.
Decision and trade-offs
| Option | Benefit | Cost / Risk |
|---|---|---|
| Keep Aura and patch hotspots | Fastest short-term path | Structural state/render bottlenecks remain |
| LWC rewrite only (no architecture change) | Cleaner codebase | Limited user-perceived performance gain |
| LWC + component/state architecture redesign (chosen) | Major UX and maintainability gain | Higher implementation and QA effort |
The chosen path had higher upfront engineering cost, but it delivered lasting performance gains and improved long-term delivery velocity.
Practical checklist for Aura-to-LWC performance migrations
- Measure first usable interaction, not just raw API timing.
- Identify where full datasets are loaded before actionability.
- Separate orchestration components from rendering components.
- Make state transitions explicit and testable.
- Isolate high-frequency user flows (bulk actions, filtering, tab switches).
- Validate improvements with before/after browser timing evidence.
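For the first checklist item, the standard User Timing API (`performance.mark`/`performance.measure`) is enough to capture first usable interaction in the browser. The mark names and helper below are illustrative; in an LWC you would typically mark in `connectedCallback` and again once the first page of rows has rendered.

```javascript
// Sketch of measuring "first usable" with the User Timing API.
// Mark and label names are illustrative.
function markFirstUsable(label, startMark, endMark) {
    performance.mark(endMark);
    performance.measure(label, startMark, endMark);
    const [entry] = performance.getEntriesByName(label);
    return entry.duration; // milliseconds between the two marks
}

// Illustrative usage across a component lifecycle:
performance.mark('match-tab-start');   // e.g. in connectedCallback
// ... data load + first page render happens here ...
const ms = markFirstUsable('match-tab-first-usable', 'match-tab-start', 'match-tab-usable');
```

Measures recorded this way also show up in the browser's Performance panel, which is what makes the before/after timing evidence in the last checklist item easy to capture.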
Why this matters for senior Salesforce engineering
High-impact performance work is rarely a one-line query tweak. It is usually architecture: data flow boundaries, state ownership, and rendering strategy.
That is the difference between a migration that looks modern and one that materially improves business throughput.
Need to speed up slow Salesforce screens?
I support short specialist contracts for complex Apex and LWC performance work in production orgs.

