The mental model.
Four ideas explain how Flux works. Once you have them, every CLI command becomes obvious.
Every HTTP request is a recorded execution
When a request hits Flux, the gateway captures it and the runtime executes your function inside a V8 isolate. That entire round-trip — inputs, outputs, spans, tool calls, DB queries — is stored as a single execution record.
The execution record is identified by a request_id, and every CLI command that debugs a request takes this ID. That single ID unlocks the entire debugging workflow.
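To make the idea concrete, here is a minimal sketch of what an execution record keyed by request_id might look like. The field names and the in-memory store are illustrative assumptions, not Flux's actual schema:

```typescript
// Hypothetical shape of an execution record -- field names are
// illustrative, not the real Flux types.
interface Span {
  layer: string;        // e.g. "gateway", "runtime", "data-engine"
  name: string;
  durationMs: number;
}

interface ExecutionRecord {
  requestId: string;    // the single ID every debugging command takes
  input: unknown;       // the captured HTTP request
  output: unknown;      // the function's response
  spans: Span[];        // one entry per instrumented layer
}

// Index records by request_id, mirroring how a CLI lookup would work.
const store = new Map<string, ExecutionRecord>();

function record(rec: ExecutionRecord): void {
  store.set(rec.requestId, rec);
}

function lookup(requestId: string): ExecutionRecord | undefined {
  return store.get(requestId);
}

record({
  requestId: "req_123",
  input: { path: "/checkout" },
  output: { status: 200 },
  spans: [{ layer: "gateway", name: "http.receive", durationMs: 2 }],
});

console.log(lookup("req_123")?.spans.length); // 1
```

The point of the sketch is the indexing decision: because everything about the round-trip hangs off one ID, a single lookup recovers the whole execution.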
Every database write is a logged mutation
When your function writes to PostgreSQL, Flux's Data Engine intercepts the write and records it in the mutation log. Each entry carries the row that changed, the old value, the new value, and — critically — the request_id that caused it.
This makes your database auditable by default. You never need to write audit_log tables manually.
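A mutation-log entry can be sketched the same way. The Data Engine's real schema is not shown in this document; the fields below simply mirror the prose (old value, new value, and the causing request_id), and `loggedWrite` is an invented stand-in for the intercepted PostgreSQL write:

```typescript
// Hypothetical mutation-log entry, mirroring the prose above.
interface MutationEntry {
  table: string;
  rowId: string;
  oldValue: unknown;    // value before the write
  newValue: unknown;    // value after the write
  requestId: string;    // the request that caused this mutation
}

const mutationLog: MutationEntry[] = [];

// Stand-in for the intercepted write: every change lands in the log.
function loggedWrite(
  table: string,
  rowId: string,
  oldValue: unknown,
  newValue: unknown,
  requestId: string,
): void {
  mutationLog.push({ table, rowId, oldValue, newValue, requestId });
}

loggedWrite("orders", "42", { status: "pending" }, { status: "paid" }, "req_123");

// An audit query is just a filter on request_id -- no hand-written
// audit_log table required.
const audit = mutationLog.filter((m) => m.requestId === "req_123");
console.log(audit.length); // 1
```

This is why the audit trail comes for free: the causal link from write back to request is captured at write time, not reconstructed later.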
Every layer emits spans — automatically
The gateway, runtime, data engine, tool calls, async jobs — every layer in Flux emits spans without any instrumentation from you. Those spans are assembled into a trace graph keyed by request_id.
The trace graph is not just for reading. You can replay it, diff it against a different request, bisect your commit history with it, or step through it interactively.
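The assembly and diff steps can be sketched as follows. This is a simplified model under assumed span fields, not the Replay Engine's actual implementation; a real trace graph would also carry parent/child links between spans:

```typescript
// Assumed span shape: a flat event stream tagged with request_id.
interface Span {
  requestId: string;
  layer: string;   // "gateway" | "runtime" | "data-engine" | ...
  name: string;
}

// Group the flat stream into a trace per request_id.
function buildTraces(spans: Span[]): Map<string, Span[]> {
  const traces = new Map<string, Span[]>();
  for (const s of spans) {
    const t = traces.get(s.requestId) ?? [];
    t.push(s);
    traces.set(s.requestId, t);
  }
  return traces;
}

// Diff two traces: span names present in `a` but missing from `b`.
function diffTraces(a: Span[], b: Span[]): string[] {
  const namesInB = new Set(b.map((s) => s.name));
  return a.map((s) => s.name).filter((n) => !namesInB.has(n));
}

const traces = buildTraces([
  { requestId: "req_1", layer: "gateway", name: "http.receive" },
  { requestId: "req_1", layer: "data-engine", name: "pg.update" },
  { requestId: "req_2", layer: "gateway", name: "http.receive" },
]);

// req_1 wrote to the database; req_2 never did.
console.log(diffTraces(traces.get("req_1")!, traces.get("req_2")!)); // [ 'pg.update' ]
```

Diffing two requests this way is the intuition behind comparing a failing request against a healthy one: the divergence shows up as spans one trace has and the other lacks.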
Production can be replayed safely
Because every execution is recorded — inputs, state, secrets, timing — Flux can re-run a production time window against your current code with side-effects disabled. Emails won't send. Payments won't process. Slack won't fire.
This is the Replay Engine. It is how you test a fix against exactly the production traffic that caused an incident, without touching production.
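The mechanism can be sketched as a guard around each side effect. The `replayMode` flag and `sendEmail` function are invented for illustration; in Flux this suppression happens inside the platform, not in your code:

```typescript
// Invented flag standing in for the Replay Engine's side-effect switch.
let replayMode = false;
const delivered: string[] = [];

function sendEmail(to: string): void {
  if (replayMode) return;   // replayed executions record the call but never deliver
  delivered.push(to);       // stand-in for the real side effect
}

// Live production execution: the email goes out.
sendEmail("user@example.com");

// Replay of the same recorded input: the side effect is suppressed.
replayMode = true;
sendEmail("user@example.com");

console.log(delivered.length); // 1 -- only the live run delivered
```

The key property is that your function's code path is identical in both runs; only the boundary where effects leave the system is switched off.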
Ready to see it in action?
The quickstart puts all four concepts into practice in about 5 minutes.