# Database

Hosted Postgres — query from functions using a simple, structured API.

## Overview

Every Flux project has a dedicated Postgres database. You interact with it via `ctx.db.query()` inside your functions — no boilerplate, no connection pooling, no ORM configuration.
## `ctx.db.query(params)`

```ts
const rows = await ctx.db.query(params);
```
| Field | Type | Required | Description |
|---|---|---|---|
| `table` | `string` | ✔ | Table name |
| `operation` | `"select" \| "insert" \| "update" \| "delete"` | ✔ | Query type |
| `filters` | `Filter[]` | | WHERE conditions |
| `columns` | `string[]` | | Columns to return (SELECT) |
| `data` | `object` | | Row data (INSERT / UPDATE) |
| `limit` | `number` | | Max rows returned |
| `offset` | `number` | | Pagination offset |
### Filter operators

| op | SQL equivalent |
|---|---|
| `eq` | `= value` |
| `neq` | `!= value` |
| `gt` | `> value` |
| `lt` | `< value` |
| `gte` | `>= value` |
| `lte` | `<= value` |
| `like` | `LIKE value` |
| `in` | `IN (values)` |
| `is_null` | `IS NULL` |
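To make the mapping concrete, here is an illustrative sketch of how a `Filter[]` could translate to a parameterized WHERE clause. This is not the actual Data Engine implementation — just a minimal model of the operator table above:

```ts
// Hypothetical sketch, not Flux internals: map a Filter[] to a
// parameterized WHERE clause using $1-style placeholders.
type FilterOp =
  | "eq" | "neq" | "gt" | "lt" | "gte" | "lte" | "like" | "in" | "is_null";

type Filter = { column: string; op: FilterOp; value?: unknown };

const OPS: Record<Exclude<FilterOp, "in" | "is_null">, string> = {
  eq: "=", neq: "!=", gt: ">", lt: "<", gte: ">=", lte: "<=", like: "LIKE",
};

function buildWhere(filters: Filter[]): { sql: string; params: unknown[] } {
  const params: unknown[] = [];
  const clauses = filters.map((f) => {
    if (f.op === "is_null") return `${f.column} IS NULL`;
    if (f.op === "in") {
      // One placeholder per element of the value array.
      const placeholders = (f.value as unknown[]).map((v) => {
        params.push(v);
        return `$${params.length}`;
      });
      return `${f.column} IN (${placeholders.join(", ")})`;
    }
    params.push(f.value);
    return `${f.column} ${OPS[f.op]} $${params.length}`;
  });
  return { sql: clauses.join(" AND "), params };
}
```

Multiple filters combine with `AND`, which matches how the examples below stack conditions.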
## Examples

### SELECT — fetch rows

```ts
const todos = await ctx.db.query({
  table: "todos",
  operation: "select",
  filters: [{ column: "done", op: "eq", value: false }],
  columns: ["id", "title", "created_at"],
  limit: 50,
});
```
### INSERT — create a row

```ts
const [todo] = await ctx.db.query({
  table: "todos",
  operation: "insert",
  data: { title: input.title, done: false, user_id: input.userId },
});
```
### UPDATE — modify a row

```ts
await ctx.db.query({
  table: "todos",
  operation: "update",
  filters: [{ column: "id", op: "eq", value: input.id }],
  data: { done: true },
});
```
### DELETE — remove a row

```ts
await ctx.db.query({
  table: "todos",
  operation: "delete",
  filters: [{ column: "id", op: "eq", value: input.id }],
});
```
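`limit` and `offset` compose into straightforward pagination. A minimal sketch — the `pageParams` helper is hypothetical, not part of the Flux API:

```ts
// Hypothetical helper: build select params for page N of a table.
// Pages are 1-indexed in this sketch.
function pageParams(table: string, page: number, pageSize: number) {
  return {
    table,
    operation: "select" as const,
    limit: pageSize,                  // rows per page
    offset: (page - 1) * pageSize,    // rows to skip
  };
}

// Page 3 of todos, 50 rows per page:
// const rows = await ctx.db.query(pageParams("todos", 3, 50));
```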
## Mutation logging

Every INSERT, UPDATE, and DELETE executed via `ctx.db.query()` automatically produces a mutation record in the `state_mutations` table. This record is linked directly to the `request_id` of the request that caused it.
```text
ctx.db.query({ operation: 'insert', table: 'users', data: { ... } })
        ↓
Data Engine writes the row to Postgres
Data Engine writes a mutation record to state_mutations
    { request_id, table: 'users', row_id, old: null, new: { ... } }
Both writes are in the same transaction — they commit or roll back together
```
You never configure this — it happens automatically for every write operation. The mutation record captures the before and after values of the affected row, not just the fact that a change occurred.
Mutation records can be queried with:

```shell
# Full history of all changes to a row
$ flux state history users --id 42
2026-03-10 12:00  INSERT  email=a@b.com, plan=free        req:1a2b3c4d
2026-03-10 14:22  UPDATE  plan: free → pro                req:4f9a3b2c
2026-03-10 14:22  UPDATE  plan: pro → null (rolled back)  req:550e8400

# Who last wrote each column
$ flux state blame users --id 42
plan  pro  req:4f9a3b2c  ← last attempt (req:550e8400) rolled back
```
If a request fails and its transaction is rolled back, mutation records are marked rolled back rather than deleted — so you can see what the request attempted even when the write did not persist.
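The behavior above can be modeled in a few lines. This is a toy sketch, not Flux internals — in-memory maps stand in for Postgres, and a `commit` flag stands in for the request's transaction outcome:

```ts
// Toy model of the dual write: the row change and its mutation record
// either both land, or the write is undone and the record is kept but
// marked rolled_back.
type Mutation = {
  request_id: string;
  table: string;
  row_id: string;
  old: object | null;   // before-image of the row
  new: object | null;   // after-image of the row
  rolled_back: boolean;
};

const rows = new Map<string, object>();
const stateMutations: Mutation[] = [];

function applyInsert(
  requestId: string,
  table: string,
  rowId: string,
  data: object,
  commit: boolean,
) {
  const mutation: Mutation = {
    request_id: requestId,
    table,
    row_id: rowId,
    old: null,            // null before-image: this is an insert
    new: data,
    rolled_back: false,
  };
  if (commit) {
    rows.set(rowId, data);       // the row write persists
  } else {
    mutation.rolled_back = true; // write undone; record kept and marked
  }
  stateMutations.push(mutation); // the mutation record exists either way
}
```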
See Inspect Database Mutations for a full walkthrough.
## Schema migrations

Define your schema as SQL files in `schema/`. Apply them with:

```shell
$ flux db migrate
```

Example `schema/todos.sql`:

```sql
CREATE TABLE IF NOT EXISTS todos (
  id         UUID DEFAULT gen_random_uuid() PRIMARY KEY,
  title      TEXT NOT NULL,
  done       BOOLEAN DEFAULT FALSE,
  user_id    TEXT,
  created_at TIMESTAMPTZ DEFAULT now()
);
```
## Row-level security

You can enable tenant isolation by filtering on a `tenant_id` column. The gateway injects tenant context automatically when using project API keys.
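One way to picture the injection: every query gets an extra `tenant_id` filter appended before execution. The sketch below is illustrative — `scopeToTenant` is a hypothetical helper, not the actual gateway code:

```ts
// Hypothetical sketch: scope a query to the caller's tenant by
// appending a tenant_id filter to whatever filters it already has.
type QueryParams = {
  table: string;
  operation: "select" | "insert" | "update" | "delete";
  filters?: { column: string; op: string; value?: unknown }[];
};

function scopeToTenant(params: QueryParams, tenantId: string): QueryParams {
  return {
    ...params,
    filters: [
      ...(params.filters ?? []),
      { column: "tenant_id", op: "eq", value: tenantId },
    ],
  };
}
```

Because the filter is appended rather than substituted, the caller's own filters still apply — the tenant condition just narrows every query further.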
## Observability

All database queries are automatically traced. Slow queries (>100ms) are flagged, N+1 patterns are detected, and index suggestions are generated based on repeated filter columns. See Observability.
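N+1 detection amounts to noticing the same query shape repeating many times within one request trace. A minimal sketch of that idea — the threshold and function are assumptions for illustration, not the real tracer:

```ts
// Hypothetical sketch: flag an N+1 pattern when the same
// operation + table pair repeats at least `threshold` times in a trace.
function detectNPlusOne(
  queries: { table: string; operation: string }[],
  threshold = 10,
): string[] {
  const counts = new Map<string, number>();
  for (const q of queries) {
    const key = `${q.operation} ${q.table}`;
    counts.set(key, (counts.get(key) ?? 0) + 1);
  }
  return [...counts.entries()]
    .filter(([, n]) => n >= threshold)
    .map(([key]) => key);
}
```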