Shipped 2026-05-17. PIN unlock → workspace topbar visible dropped from
4403ms to 3470ms (~21%, a 933ms win, stable across two runs).
The route now uses <Suspense> streaming on the OPEN-phase render so the
workspace shell flushes to the browser immediately while the heavy data bundle resolves in a
child async server component.
| Approach | Measured | Delta vs baseline |
|---|---|---|
| Pre-Slice-I baseline (sequential page.tsx) | 4403ms | — |
| Slice I attempt 1: action returns bundle | 4977ms | +574ms (regressed) |
| Slice I attempt 2: parallel page.tsx fetches | 4486ms | +83ms (~noise) |
| Slice I attempt 3: Suspense streaming | 3470ms | −933ms (−21%) |
Implementation: a Suspense wrapper around the OPEN-phase render in
app/(pos-fullscreen)/pos/register/[configId]/page.tsx:
`<Suspense fallback={fallbackShell}><ProOpenContent /></Suspense>`.
The fallback IS the same LockableShell with empty OPEN-phase data, so the workspace shell layout + topbar are visible immediately. ProOpenContent calls loadProOpenPhaseBundle and renders the populated LockableShell; Next streams it in when ready, swapping out the fallback. The loader file lib/server/open_phase_loader.ts stays in place: it fans out 12 DAOs via Promise.allSettled with a per-fetch unwrap() fallback, and is used by the SSR path (page.tsx → ProOpenContent).
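A minimal sketch of the fan-out-with-fallback pattern, assuming illustrative DAO names and bundle fields (the real loader fans out 12 DAOs, not two, and the actual `unwrap()` signature may differ):

```typescript
// Sketch of the Promise.allSettled + per-fetch fallback pattern used by the
// loader. fetchProducts/fetchDraftTabs are hypothetical stand-ins for the
// real DAOs, which hit Supabase.

// unwrap(): use the value if the fetch fulfilled, else a safe empty default,
// so one failed DAO never fails the whole bundle.
function unwrap<T>(result: PromiseSettledResult<T>, fallback: T): T {
  return result.status === "fulfilled" ? result.value : fallback;
}

async function fetchProducts(): Promise<string[]> {
  return ["espresso", "latte"];
}
async function fetchDraftTabs(): Promise<number[]> {
  throw new Error("pool exhausted"); // simulate a pool-limit failure
}

async function loadOpenPhaseBundle() {
  const [products, draftTabs] = await Promise.allSettled([
    fetchProducts(),
    fetchDraftTabs(),
  ]);
  return {
    products: unwrap(products, [] as string[]),
    draftTabs: unwrap(draftTabs, [] as number[]),
  };
}
```

The key property: the bundle always resolves with every field populated, which is what lets the Suspense child render without per-field error handling.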
Cookie semantics unchanged — unlock action still writes the cashier cookie + audit log; on success the client calls router.refresh() which goes through the new Suspense path. Lock + close-shift round-trip verified green.
Attempt 1 — action returns bundle inline. This put the bundle load on the
unlock action's critical path AS A SERIAL BLOCK, losing Next's RSC streaming, which
overlaps server render with client reconciliation. The action took 2519ms on prod
(vs 313ms local) because 12 parallel DAOs plus the audit-log INSERT hit Supabase's
pool_size: 15 limit (EMAXCONNSESSION errors, caught by unwrap()). Net: +574ms
regression.
Attempt 2 — parallelize page.tsx fetches. Same pool exhaustion problem, just inside the SSR path now. Promise.allSettled waits for ALL fetches; the slowest one (or pool-back-off retry) drives wall time. Net: ~0 change (within noise).
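The wall-time behavior is easy to see in isolation: Promise.allSettled cannot resolve before its slowest input, so a single pool back-off retry sets the floor. A toy illustration (the latencies here are made up, not measured):

```typescript
// Promise.allSettled resolves only when the slowest promise settles, so the
// slowest fetch (or a pool back-off retry) drives total wall time.
function delay<T>(ms: number, value: T): Promise<T> {
  return new Promise((resolve) => setTimeout(() => resolve(value), ms));
}

async function bundleWallTime(): Promise<number> {
  const start = Date.now();
  await Promise.allSettled([
    delay(20, "fast DAO"),
    delay(30, "medium DAO"),
    delay(120, "slow DAO stuck behind a pool back-off"), // dominates wall time
  ]);
  return Date.now() - start; // roughly 120ms, not 20 + 30 + 120
}
```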
Attempt 3 — Suspense streaming (shipped). Stops chasing wall-time
reduction; instead, the workspace shell renders immediately as the Suspense fallback, so
perceived latency drops. The test measurement (PIN-Enter → topbar visible)
captures this because the fallback shell renders pos-open-topbar from the
start: the total wait until the cashier sees the workspace shell collapses from a
full render cycle to the fallback paint.
Lesson: on prod cf-workers with Supabase pool=15, naive parallelization of N>5 DAOs can be SLOWER than sequential because of pool contention. Suspense streaming wins by changing the user's perception of progress, not the total work done.
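One hypothetical mitigation (not what shipped) would be to cap in-flight DAO calls below the pool size, so queuing happens in the app instead of in the pooler. A minimal semaphore sketch; the limit of 5 and the task shape are assumptions for illustration:

```typescript
// Cap concurrent tasks at `limit`; excess callers wait in a FIFO queue and
// are released one at a time as running tasks finish.
function withConcurrencyLimit(limit: number) {
  let active = 0;
  const queue: Array<() => void> = [];
  return async function run<T>(task: () => Promise<T>): Promise<T> {
    if (active >= limit) {
      // Park this caller until a running task releases a slot.
      await new Promise<void>((resolve) => queue.push(resolve));
    }
    active++;
    try {
      return await task();
    } finally {
      active--;
      queue.shift()?.(); // wake one waiter, if any
    }
  };
}

// Usage: at most 5 of 12 simulated DAO calls in flight at once.
const limited = withConcurrencyLimit(5);
async function demo(): Promise<number> {
  let peak = 0;
  let inFlight = 0;
  const tasks = Array.from({ length: 12 }, (_, i) =>
    limited(async () => {
      inFlight++;
      peak = Math.max(peak, inFlight);
      await new Promise((r) => setTimeout(r, 10)); // pretend DAO latency
      inFlight--;
      return i;
    }),
  );
  await Promise.all(tasks);
  return peak; // never exceeds 5
}
```

With 12 DAOs sharing pool_size: 15 with the action's audit-log INSERT and other concurrent requests, a cap like this trades a little parallelism for predictable latency; Suspense streaming sidesteps the question entirely by hiding the wait.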
For ~1s after PIN unlock, the workspace shell renders with empty data (no products in grid, no draft tabs, no cash movements), and the cashier could in principle tap a button in that window.
Net: brief window of empty-but-functional UI. Acceptable trade for the 933ms win. If it proves confusing in practice, a future polish could overlay a "Loading register..." indicator that fades out when the bundle arrives.
Raw: result.json
| Suite | Passed |
|---|---|
| test-phase1-prod.mjs | 11/11 |
| test-phase2-sso-outdoor-prod.mjs | 6/6 |
| test-phase2-cafe-multishop-prod.mjs | 6/6 |
| test-m1-prod.mjs | 10/10 |
| test-r7-prod.mjs | 14/14 |
| test-r8-prod.mjs | 4/4 |
nix-cafe — commits 576e5cf (pivot) + 1727151 (Suspense)
New: lib/server/open_phase_loader.ts
Modified: app/(pos-fullscreen)/pos/register/[configId]/page.tsx
(Suspense wrapper + async ProOpenContent child server component)
Note: the slice originally pushed a much bigger architecture (action-returns-bundle + client
state machines on both lockable shells + new types in lock-screens.tsx). That whole approach
was reverted in the pivot commit; only the loader file + the Suspense wrapper on page.tsx
remain. Tracked diagnostic commits (409eb78, 425efc3) were also
superseded by the pivot.