# database patterns for sqlite in zig

connection management, thread safety, transactions.

## zqlite vs raw c bindings

prefer [zqlite](https://github.com/karlseguin/zqlite.zig) over raw c bindings. the api is cleaner and handles common pitfalls:

```zig
const zqlite = @import("zqlite");

// open with flags
const conn = try zqlite.open(path, zqlite.OpenFlags.Create | zqlite.OpenFlags.ReadWrite);

// exec with tuple binding (type-safe)
try conn.exec("INSERT INTO users (name, age) VALUES (?, ?)", .{ name, age });

// exec without args
try conn.execNoArgs("CREATE TABLE IF NOT EXISTS users (...)");
```

with raw c bindings you manually manage statement lifecycle, bind parameters by index, and handle error codes. zqlite wraps all of that.

see: [build.zig.zon dependency](https://tangled.sh/@zzstoatzz.io/music-atmosphere-feed/tree/main/build.zig.zon)

## connection patterns

### global connection + mutex (simple apps)

for single-writer workloads, a global connection protected by a mutex works:

```zig
pub var conn: zqlite.Conn = undefined;
pub var mutex: std.Thread.Mutex = .{};

pub fn init(path: [*:0]const u8) !void {
    conn = try zqlite.open(path, zqlite.OpenFlags.Create | zqlite.OpenFlags.ReadWrite);

    // enable WAL for better read concurrency
    conn.exec("PRAGMA journal_mode=WAL", .{}) catch {};
    conn.exec("PRAGMA busy_timeout=5000", .{}) catch {};
}

pub fn insertUser(name: []const u8) !void {
    mutex.lock();
    defer mutex.unlock();
    try conn.exec("INSERT INTO users (name) VALUES (?)", .{name});
}
```

every function that touches the db acquires the mutex first. the `defer mutex.unlock()` releases the lock even on error.
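reads follow the same discipline: lock, query, copy out before releasing. a sketch using zqlite's single-row helper `conn.row` (the `getUserName` function and the duped return are illustrative, not from the linked repos):

```zig
pub fn getUserName(allocator: std.mem.Allocator, id: i64) !?[]const u8 {
    mutex.lock();
    defer mutex.unlock();

    if (try conn.row("SELECT name FROM users WHERE id = ?", .{id})) |row| {
        defer row.deinit();
        // copy out: the row's memory is only valid until deinit
        return try allocator.dupe(u8, row.text(0));
    }
    return null;
}
```

the dupe matters: returning `row.text(0)` directly would hand the caller a slice into statement memory that `deinit` has already freed.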
see: [pollz/db.zig](https://tangled.sh/@zzstoatzz.io/pollz/tree/main/backend/src/db.zig), [music-atmosphere-feed/db.zig](https://tangled.sh/@zzstoatzz.io/music-atmosphere-feed/tree/main/src/store/db.zig)

### connection pool (concurrent apps)

for multi-threaded servers with concurrent writes, use zqlite's pool:

```zig
const pool = try zqlite.Pool.init(.{
    .path = path,
    .size = 4,                         // number of connections
    .on_first_connection = initSchema, // runs once
    .on_connection = configurePragmas, // runs per connection
});

// in a request handler
const c = pool.acquire();
defer c.release();
try c.exec("INSERT ...", .{...});
```

the pool won't grow beyond the configured size. each connection keeps its own cursor/statement state.

## transactions

wrap multi-statement operations in a transaction for atomicity:

```zig
try conn.transaction(); // BEGIN
errdefer conn.rollback();

try conn.exec("UPDATE flow_run SET state = ?", .{state});
try conn.exec("INSERT INTO flow_run_state ...", .{...});

try conn.commit(); // COMMIT
```

the `errdefer conn.rollback()` rolls the transaction back if any statement fails. without it, partial writes corrupt state.

for write-heavy operations, use `exclusiveTransaction()`, which acquires the write lock immediately rather than on the first write (avoiding a busy-wait on lock upgrade):

```zig
try conn.exclusiveTransaction(); // BEGIN IMMEDIATE
// ... writes ...
try conn.commit();
```

## pragmas for concurrency

always set these for concurrent access:

```zig
conn.exec("PRAGMA journal_mode=WAL", .{}) catch {};  // write-ahead logging
conn.exec("PRAGMA busy_timeout=5000", .{}) catch {}; // wait 5s on lock
conn.exec("PRAGMA foreign_keys=ON", .{}) catch {};   // enforce FK constraints
```

WAL allows concurrent readers during writes. busy_timeout prevents immediate SQLITE_BUSY errors.
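when using a pool, the `on_connection` hook is a natural place to apply these, so every connection is configured identically. a sketch, assuming the callback receives the new `zqlite.Conn` (check your zqlite version for the exact hook signature):

```zig
// hypothetical callback for Pool.init's .on_connection:
// runs once per connection as the pool opens it
fn configurePragmas(c: zqlite.Conn) !void {
    try c.execNoArgs("PRAGMA journal_mode=WAL");
    try c.execNoArgs("PRAGMA busy_timeout=5000");
    try c.execNoArgs("PRAGMA foreign_keys=ON");
}
```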
## module organization

for larger apps, separate db concerns into a module:

```
src/db/
  mod.zig     -- re-exports, global state
  Client.zig  -- connection/pool management
  schema.zig  -- table definitions, migrations
  result.zig  -- result parsing helpers
```

the `mod.zig` pattern:

```zig
const result = @import("result.zig");
const schema = @import("schema.zig");

pub const Client = @import("Client.zig");
pub const Row = result.Row;

var client: ?Client = null;

pub fn init() !void {
    client = try Client.init();
    try schema.init(&client.?);
}

pub fn getClient() ?*Client {
    if (client) |*c| return c;
    return null;
}
```

see: [leaflet-search/db/mod.zig](https://tangled.sh/@zzstoatzz.io/leaflet-search/tree/main/backend/src/db/mod.zig)

## comptime sql with zql

[zql](https://tangled.sh/@zzstoatzz.io/zql) parses sql at compile time and provides type-safe bindings. it extracts column names, named parameters, and validates struct mapping:

```zig
const zql = @import("zql");

// parse sql at comptime
const Q = zql.Query("SELECT id, name FROM users WHERE age > :min_age");

// comptime validation - fails if the struct is missing a required param
Q.validateArgs(struct { min_age: i64 });

// access parsed metadata
_ = Q.positional; // "SELECT id, name FROM users WHERE age > ?"
_ = Q.params;     // ["min_age"]
_ = Q.columns;    // ["id", "name"]

// bind named params to a positional tuple (order independent)
const args = Q.bind(.{ .min_age = 25 }); // returns a tuple for db.exec

// map a result row to a struct (validates fields match columns)
const User = struct { id: i64, name: []const u8 };
Q.validateStruct(User);
const user = Q.fromRow(User, row); // row must have .text(idx), .int(idx)
```

the key pattern: `Query(comptime sql)` returns a type with methods for validation and binding. all validation happens at compile time - typos in param names and missing fields are compile errors.
combine with zqlite for a fully type-safe database layer:

```zig
const Q = zql.Query("INSERT INTO users (name, age) VALUES (:name, :age)");
try conn.exec(Q.positional, Q.bind(.{ .name = "alice", .age = 25 }));
```

see: [zql/Query.zig](https://tangled.sh/@zzstoatzz.io/zql/tree/main/src/Query.zig)

## postgres with pg.zig

[pg.zig](https://github.com/karlseguin/pg.zig) is the postgresql client. the key difference from sqlite: **strict type checking on column types**.

### strict integer types

pg.zig enforces exact type matching for integer columns:

```zig
// WRONG - panics at runtime:
// getInt returns i32, but the column is BIGINT
const id = row.getInt(col);

// RIGHT - match postgres types exactly
const count: i32 = row.getInt(col);     // for INTEGER / INT4 columns
const big_id: i64 = row.getBigInt(col); // for BIGINT / INT8 columns
```

common mistake: sqlite's `row.int()` returns i64 regardless of column type. pg.zig is strict.

### abstracting sqlite/postgres

when supporting both databases, create a wrapper that handles the differences:

```zig
pub const Row = union(enum) {
    sqlite: zqlite.Row,
    postgres: pg.Row,

    // for INTEGER columns
    pub fn int(self: Row, col: usize) i64 {
        return switch (self) {
            .sqlite => |r| r.int(col),
            .postgres => |r| r.getInt(col), // returns i32, widened to i64
        };
    }

    // for BIGINT columns - MUST use a separate method
    pub fn bigint(self: Row, col: usize) i64 {
        return switch (self) {
            .sqlite => |r| r.int(col),
            .postgres => |r| r.getBigInt(col), // returns i64
        };
    }

    pub fn text(self: Row, col: usize) []const u8 {
        return switch (self) {
            .sqlite => |r| r.text(col),
            .postgres => |r| r.getText(col) orelse "",
        };
    }
};
```

see: [prefect-server/db/backend.zig](https://tangled.sh/@zzstoatzz.io/prefect-server/tree/main/src/db/backend.zig)

### placeholder syntax

postgres uses `$1, $2` instead of `?`.
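for example, the same statement in each dialect (a sketch; `sqlite_conn` and `pg_conn` are illustrative names):

```zig
// sqlite / zqlite: anonymous '?' placeholders, bound in order
try sqlite_conn.exec("INSERT INTO users (name, age) VALUES (?, ?)", .{ name, age });

// postgres / pg.zig: numbered '$n' placeholders, also bound in order
_ = try pg_conn.exec("INSERT INTO users (name, age) VALUES ($1, $2)", .{ name, age });
```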
convert at runtime:

```zig
fn rewritePlaceholders(alloc: Allocator, sql: []const u8) ![]const u8 {
    var result: std.ArrayList(u8) = .empty;
    errdefer result.deinit(alloc);

    var param_num: usize = 1;
    for (sql) |c| {
        if (c == '?') {
            try result.print(alloc, "${d}", .{param_num});
            param_num += 1;
        } else {
            try result.append(alloc, c);
        }
    }
    return result.toOwnedSlice(alloc);
}
```

### dynamic filters: bind values, not strings

dynamic query shape is sometimes unavoidable for API filters (`key.any_`, `level.ge_`, optional timestamp bounds, etc.). keep the dynamic part limited to SQL fragments chosen by the program, and bind every request-derived value.

the trap is building a list of "bindings" and then interpolating them into SQL later. even if you escape single quotes, the code now has two SQL paths: normal tuple binding for fixed queries, and manual string construction for dynamic queries. that is harder to audit, and every new filter repeats the escape discipline.

prefer a tiny dynamic binding abstraction:

```zig
pub const BoundValue = union(enum) {
    text: []const u8,
    int: i64,
    @"null", // `null` is a keyword, so the field name must be quoted
};

pub fn queryBound(db: *Backend, sql: []const u8, args: []const BoundValue) !Rows {
    const stmt = try sqlite_conn.prepare(sql);
    errdefer stmt.deinit();
    for (args, 0..) |arg, i| switch (arg) {
        .text => |v| try stmt.bindValue(v, i),
        .int => |v| try stmt.bindValue(v, i),
        .@"null" => try stmt.bindValue(null, i),
    };
    return Rows{ .backend = .{ .sqlite = .{ .stmt = stmt, .err = null } } };
}
```

then build filter SQL with placeholders:

```zig
try where.appendSlice(alloc, " AND flow_run_id = ?");
try bindings.append(alloc, .{ .text = flow_run_id });

try where.appendSlice(alloc, " AND level >= ?");
try bindings.append(alloc, .{ .int = level });
```

this keeps the "one obvious way" invariant: values are values, not SQL text. dynamic column names, sort clauses, and operators should still come from fixed enums or code paths, never directly from request strings.
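that last rule can be sketched with a fixed enum for sort order (the `SortField` names here are hypothetical): the request can only *select* among fragments the program already wrote, never supply SQL text of its own.

```zig
const SortField = enum { created, level };

// the ORDER BY fragment is chosen from compile-time string constants,
// never concatenated from request input
fn orderClause(field: SortField, descending: bool) []const u8 {
    return switch (field) {
        .created => if (descending) "ORDER BY created DESC" else "ORDER BY created ASC",
        .level => if (descending) "ORDER BY level DESC" else "ORDER BY level ASC",
    };
}
```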
see: [prefect-server dynamic query binding](https://tangled.sh/@zzstoatzz.io/prefect-server/tree/main/src/db/backend.zig)

### connection pooling

pg.zig has built-in pooling. note that connection options are split between `.connect` and `.auth`:

```zig
const pool = try pg.Pool.init(allocator, .{
    .connect = .{
        .host = "localhost",
        .port = 5432,
    },
    .auth = .{
        .database = "mydb",
        .username = "user",
        .password = "pass",
    },
    .size = 10,
});

// acquire a connection; release returns it to the pool
// (deinit would close it outright)
const conn = try pool.acquire();
defer conn.release();

var result = try conn.query("SELECT ...", .{});
defer result.deinit();
```
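reading rows from the acquired connection uses pg.zig's iterator and generic getter; a sketch continuing the pool example above (the query and columns are illustrative, and exact getter names may vary by version):

```zig
var result = try conn.query("SELECT id, name FROM users WHERE age > $1", .{25});
defer result.deinit();

while (try result.next()) |row| {
    const id = row.get(i64, 0);    // BIGINT column -> i64
    const name = row.get([]u8, 1); // TEXT column
    std.debug.print("{d}: {s}\n", .{ id, name });
}
```

note the column types must match exactly, per the strict typing rules above: `row.get(i32, ...)` on a BIGINT column fails at runtime.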