2026-05-14 on Supabase prod + get-coffee Odoo. Multi-register arc follow-up SHIPPED end-to-end. NIX-created Pro registers (sync_state='pending_create') now drain through the existing odoo-sync cron tick and land in Odoo as new pos.config rows with cloned settings from an existing template — fully functional immediately, no operator setup in Odoo backoffice required. Gate 2 verified the round-trip: inserted a PUSH-TEST row on get-coffee, watched the cron pick it up, confirmed Odoo created pos.config.id=11, NIX row flipped to sync_state='synced' with odoo_pos_config_id=11 stamped back.
Schema plumbing:
cafe.pos_configs.odoo_sync_retries + odoo_next_attempt_at + partial
drain index cafe_pos_configs_pending_push_idx.
Gate 2 test:
PUSH-TEST-… with sync_state='pending_create' on get-coffee. Cron fired
→ cloned the template pos.config (id=4 "Bakery Shop") → created new
pos.config with id=11 in Odoo → wrote back odoo_pos_config_id=11 +
flipped to sync_state='synced'. NIX row marked inactive post-test
(Odoo pos.config.id=11 stays; operator can archive in backoffice).
Commits:
nix-outdoor-sales-backend:
  9118e63 migration: queue plumbing (retries + next_attempt + partial index)
  708dafb migrate.js: bundle 1/2/3 entries skip backfill steps when re-run after Bundle 3
nix-cafe:
  b716c4e feat: pos.config push connector (clone-from-existing + cron drain)
  c16f151 fix: bump pool teardown 5s → 25s (cron drain runs longer than 5s now)
  b6adf41 fix: drop stock_location_id from create payload (deprecated in newer Odoo)
  a7fc471 chore: remove debug console.logs after Gate 2 verified (355cace + ac8e7a8 were Gate 2 debug log additions, superseded by a7fc471)
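The 9118e63 queue plumbing can be sketched as plain SQL strings in a migrate.js-style entry. Column and index names come from the log above; the exact DDL (defaults, the indexed column) is an assumption, not copied from the repo:

```typescript
// Sketch of the 9118e63 queue-plumbing migration. Column/index names are
// from the log; defaults and the indexed column are assumptions.
const queuePlumbingSql: string[] = [
  // Retry counter + next-attempt timestamp for the push queue.
  `ALTER TABLE cafe.pos_configs
     ADD COLUMN IF NOT EXISTS odoo_sync_retries integer NOT NULL DEFAULT 0,
     ADD COLUMN IF NOT EXISTS odoo_next_attempt_at timestamptz NOT NULL DEFAULT now()`,
  // Partial index: the cron drain only ever scans rows still waiting to be
  // pushed, so index just those instead of the whole table.
  `CREATE INDEX IF NOT EXISTS cafe_pos_configs_pending_push_idx
     ON cafe.pos_configs (odoo_next_attempt_at)
     WHERE sync_state = 'pending_create'`,
];
```

The partial index keeps the drain query cheap even as synced rows accumulate: the planner only walks rows matching the WHERE predicate.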
1. **Pool teardown timeout was too short.** lib/db/client.ts schedules pool.end() 5s after creation via setTimeout. That worked for typical request handlers (<5s), but the cron route now iterates all 3 tenants serially with Odoo I/O each; total runtime is ~5-15s. The pool died mid-run and every later DB call threw "Cannot use a pool after calling end on the pool". Bumped to 25s, which stays under Workers' 30s CPU cap; `idleTimeoutMillis: 1_000` already releases connections back to Hyperdrive promptly when truly idle. This affected EVERY DB call after the first 5s in any request: a silent latent bug for any future long route.

2. **stock_location_id removed from pos.config in newer Odoo.** Odoo 17+ derives the stock location from picking_type_id; the field was dropped from pos.config. My initial CLONE_FIELDS list included it and Odoo rejected the create with "Invalid field 'stock_location_id'". Removed; picking_type_id alone is sufficient for inventory behavior.

3. **Re-running migrate.js after Bundle 3 broke Bundle 1+2 entries.** Bundles 1 and 2 had backfill steps that joined cafe.shop_pos_configs (now UUID after Bundle 3) against bigint columns, so re-running migrate.js triggered "operator does not exist: uuid = bigint". Added a guard at the top of each entry: probe whether the column has been migrated to UUID and skip the now-impossible backfill if so. Idempotency preserved; safe to re-run without errors.
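Root cause #1 in miniature. The function and constant names below are assumptions; only the setTimeout(pool.end) pattern and the 5s → 25s bump come from the log:

```typescript
// Sketch of the lib/db/client.ts teardown scheduling (root cause #1).
const POOL_TEARDOWN_MS = 25_000; // was 5_000; cron drain runs ~5-15s

interface Closable {
  end(): void;
}

function schedulePoolTeardown(
  pool: Closable,
  ms: number = POOL_TEARDOWN_MS,
): ReturnType<typeof setTimeout> {
  // Any DB call issued after this timer fires throws
  // "Cannot use a pool after calling end on the pool".
  return setTimeout(() => pool.end(), ms);
}

// Why 5s failed: a route only survives if it finishes before teardown.
function runSurvivesTeardown(routeMs: number, teardownMs: number): boolean {
  return routeMs < teardownMs;
}
```

runSurvivesTeardown(12_000, 5_000) is false (a 12s cron drain died mid-run); with 25_000 it is true, while still sitting under the Workers 30s cap.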
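Root cause #3's guard, sketched with the type probe injected so the logic is testable. The real guard would query information_schema.columns; the helper name and shape are assumptions:

```typescript
// Sketch of the 708dafb migrate.js guard (root cause #3). Names assumed.
type ColumnTypeProbe = (schema: string, table: string, column: string) => string;

function shouldRunBigintBackfill(probe: ColumnTypeProbe): boolean {
  // After Bundle 3, cafe.shop_pos_configs.id is uuid; joining it against
  // bigint columns raises "operator does not exist: uuid = bigint".
  // Skip the backfill once the cutover has happened.
  return probe("cafe", "shop_pos_configs", "id") !== "uuid";
}
```

Each Bundle 1/2 entry calls this at the top and returns early when it yields false, which is what keeps migrate.js idempotent across the UUID cutover.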
Setup:
INSERT INTO cafe.pos_configs (tenant=get-coffee, shop=809e3f7f-…,
name='PUSH-TEST-1778724508344', is_active=true,
pos_config_int_id=1000000023, sync_state='pending_create');
Cron tick (1):
→ "Cannot use a pool after calling end on the pool" (root cause #1)
Fixed via pool timeout bump.
Cron tick (2):
→ "Odoo error: Invalid field 'stock_location_id' on 'pos.config'"
(root cause #2). Marked retry=1 + next_attempt_at=now+1min + error
captured. Backoff worked correctly.
Fixed via dropping stock_location_id from CLONE_FIELDS.
Cron tick (3):
→ SUCCESS. odoo_pos_config_id=11 stamped, sync_state='synced',
retries reset to 0, error cleared.
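The three ticks above can be sketched as a single-row drain step, assuming a minimal synchronous Odoo client (the real connector, b716c4e, is async and iterates all tenants). CLONE_FIELDS here is an illustrative subset; note stock_location_id is deliberately absent:

```typescript
// Sketch of one drain step of the pos.config push connector (b716c4e).
// Odoo client interface and field list are assumptions. stock_location_id
// is excluded: dropped from pos.config in Odoo 17+, which derives the
// stock location from picking_type_id.
const CLONE_FIELDS = ["name", "picking_type_id"]; // illustrative subset

interface PendingRow {
  id: string;
  name: string;
  sync_state: "pending_create" | "synced";
  odoo_pos_config_id: number | null;
  odoo_sync_retries: number;
  odoo_next_attempt_at: number; // epoch ms
  last_error: string | null;
}

interface OdooClient {
  read(model: string, id: number, fields: string[]): Record<string, unknown>;
  create(model: string, values: Record<string, unknown>): number;
}

function drainOne(row: PendingRow, templateId: number, odoo: OdooClient, now: number): PendingRow {
  if (row.odoo_next_attempt_at > now) return row; // backoff window not elapsed
  try {
    // Clone settings from the template register, keep the NIX name.
    const template = odoo.read("pos.config", templateId, CLONE_FIELDS);
    const newId = odoo.create("pos.config", { ...template, name: row.name });
    // Success (tick 3): stamp Odoo id back, reset retries, clear error.
    return { ...row, sync_state: "synced", odoo_pos_config_id: newId, odoo_sync_retries: 0, last_error: null };
  } catch (e) {
    // Failure (ticks 1-2): record the error and back off one minute.
    return {
      ...row,
      odoo_sync_retries: row.odoo_sync_retries + 1,
      odoo_next_attempt_at: now + 60_000,
      last_error: e instanceof Error ? e.message : String(e),
    };
  }
}
```

Making each tick return the next row state (rather than mutating in place) is what lets tick 3 succeed cleanly after ticks 1 and 2 failed: the retry counter and backoff timestamp carry the whole queue state.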
Final NIX row:
┌──────────────────────────────────────┬────────────────────────────┬──────────┬───────────────────┬───────────┐
│ id │ name │ state │ odoo_pos_config_id│ retries │
├──────────────────────────────────────┼────────────────────────────┼──────────┼───────────────────┼───────────┤
│ b25d1b8a-78ce-… │ PUSH-TEST-… [TEST] │ synced │ 11 │ 0 │
└──────────────────────────────────────┴────────────────────────────┴──────────┴───────────────────┴───────────┘
Cleanup:
Marked NIX row is_active=false + appended ' [TEST]' to name. Operator
can archive Odoo pos.config.id=11 via backoffice if desired.
| Suite | Pass |
|---|---|
| test-phase1-prod.mjs | 11/11 |
| test-phase2-sso-outdoor-prod.mjs | 6/6 |
| test-phase2-cafe-multishop-prod.mjs | 6/6 |
| test-m1-prod.mjs | 10/10 |
| test-r7-prod.mjs | 14/14 |
| test-r8-prod.mjs | 4/4 |
| Total | 51/51 |
Before this connector:
Pro tenant admin clicks "Add register" in /cafe/settings/registers →
cafe.pos_configs row inserted with sync_state='pending_create' and
odoo_pos_config_id=NULL. The register exists in NIX but is invisible
in Odoo; sessions/orders against it would push to Odoo as orphans
because the corresponding pos.config doesn't exist.
After this connector:
Same admin click → ≤60s later, Odoo has a new pos.config (cloned from
an existing one) and the NIX row carries odoo_pos_config_id pointing
at it. Sessions/orders push cleanly. No operator action in Odoo
backoffice required.
Multi-register arc is now FULLY OPERATIONAL on Pro tenants:
Bundle 1 (schema) +
Bundle 2 (admin UI + Starter landing + dual-write) +
Bundle 3 (UUID cutover) +
pos.config push connector (this) =
→ admin can add a register on a Pro tenant → it's usable on POS in
NIX immediately AND visible/usable in Odoo within ~60s.