commits
This path wasn't matched by the /oauth/ prefix since it starts with
/oauth- not /oauth/, causing it to fall through to SvelteKit (404).
Bumps to 0.0.1-alpha.30.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The $hatk module alias needs a Node.js resolve hook for dynamic import()
in scanServerDir. Previously only test.ts had it — now main.ts uses the
same shared hook so production also resolves $hatk correctly.
Bumps to 0.0.1-alpha.29.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
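A minimal sketch of what such a shared resolve hook could look like. The helper name `resolveHatkSpecifier` and the exact generated-file path are illustrative, not hatk's actual API; only the `$hatk` alias and the `hatk.generated.ts` target come from the commit.

```typescript
import { pathToFileURL } from "node:url";

const HATK_ALIAS = "$hatk";

// Map the $hatk alias onto the generated file; return null for
// everything else so the default resolver handles it.
export function resolveHatkSpecifier(
  specifier: string,
  projectRoot: string,
): string | null {
  if (specifier !== HATK_ALIAS) return null;
  return pathToFileURL(`${projectRoot}/hatk.generated.ts`).href;
}

// In a hooks file registered via module.register(), this would back
// the resolve() hook, e.g.:
//   export function resolve(spec, ctx, next) {
//     return resolveHatkSpecifier(spec, root) ?? next(spec, ctx);
//   }
```

Registering the hook once in a shared module is what lets both test.ts and main.ts resolve `$hatk` identically.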
SvelteKit SSR load functions call callXrpc via globalThis.__hatk_callXrpc,
but main.ts never set it. Now exposes callXrpc, parseSessionCookie, and
sessionCookieName before loading the SvelteKit handler.
Bumps to 0.0.1-alpha.28.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
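An illustrative sketch of that wiring. The global `__hatk_callXrpc` matches the commit; the other two global names and the stub signatures are assumptions for the example.

```typescript
type XrpcCall = (nsid: string, params?: unknown) => Promise<unknown>;

// Publish server internals on globalThis; must run before the built
// SvelteKit handler is imported, since SSR load functions read these
// globals at request time.
export function exposeSsrBridge(opts: {
  callXrpc: XrpcCall;
  parseSessionCookie: (header: string) => string | null;
  sessionCookieName: string;
}): void {
  const g = globalThis as Record<string, unknown>;
  g.__hatk_callXrpc = opts.callXrpc;
  g.__hatk_parseSessionCookie = opts.parseSessionCookie;
  g.__hatk_sessionCookieName = opts.sessionCookieName;
}

// Demo wiring with stubs: in main.ts this would run before importing
// the SvelteKit handler module.
exposeSsrBridge({
  callXrpc: async (nsid) => ({ nsid }),
  parseSessionCookie: () => null,
  sessionCookieName: "hatk_session",
});

export const installed =
  typeof (globalThis as Record<string, unknown>).__hatk_callXrpc === "function";
```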
scanServerDir uses raw import() which bypasses Vite's resolve.alias.
Register a Node.js module resolve hook in createTestContext so $hatk
resolves to hatk.generated.ts in all contexts.
Also adds resolve.alias in the vite plugin for dev/build consistency.
Bumps to 0.0.1-alpha.27.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The transform hook alone wasn't enough — Vitest resolves imports before
transform runs. Adding resolve.alias ensures $hatk resolves in all contexts.
Bumps to 0.0.1-alpha.26.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

Already declared in packages/hatk/package.json where it's actually used.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Guard __dev/login with oauth check to prevent crash when OAuth not configured
- Fix sendResponse to preserve duplicate Set-Cookie headers (flat array format)
- Remove misleading status_code: 0 from telemetry emit
- Stabilize HMAC key derivation with sorted JSON keys
- Log PDS proxy local index failures via emit() instead of silent catch
- Extract shared withDpopRetry() to deduplicate proxy retry logic
- Share isHatkRoute() between adapter.ts and vite-plugin.ts
- Fix hasRenderer to use live function call instead of boot-time const
- Support configurable cookie name in generated client code
- Remove any types from pds-proxy.ts
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
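A hedged sketch of the "sorted JSON keys" idea behind the HMAC stabilization fix above: `JSON.stringify` depends on key insertion order, so key material should be derived from a canonical form. `stableStringify` and `deriveKey` are hypothetical helpers, not hatk's real implementation.

```typescript
import { createHmac } from "node:crypto";

// Canonical JSON: object keys are emitted in sorted order recursively,
// so two structurally equal objects always serialize identically.
export function stableStringify(value: unknown): string {
  if (Array.isArray(value)) {
    return `[${value.map(stableStringify).join(",")}]`;
  }
  if (value !== null && typeof value === "object") {
    const obj = value as Record<string, unknown>;
    const entries = Object.keys(obj)
      .sort()
      .map((k) => `${JSON.stringify(k)}:${stableStringify(obj[k])}`);
    return `{${entries.join(",")}}`;
  }
  return JSON.stringify(value);
}

// Same context object, any key order, same derived key.
export function deriveKey(secret: string, context: object): string {
  return createHmac("sha256", secret).update(stableStringify(context)).digest("hex");
}
```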
Rewrite server.ts to Request→Response handler, add SSR renderer and
Node.js adapter, implement session cookies for viewer resolution in
SSR, add /__dev/login endpoint for test auth, register core XRPC
handlers for preferences, fix OAuth scope joining, switch test context
to SQLite, and add /__dev/ route forwarding in vite plugin.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
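A minimal sketch of the Request→Response handler shape this commit describes. The routes shown (`/__dev/login`, `/xrpc/`) come from the commit; the responses and dispatch logic are simplified illustrations, since the real server.ts also handles OAuth, SSR rendering, and more.

```typescript
export async function handle(req: Request): Promise<Response> {
  const url = new URL(req.url);

  if (url.pathname === "/__dev/login") {
    // Dev-only auth endpoint; a real handler would mint a session cookie.
    return new Response(null, {
      status: 303,
      headers: { location: "/", "set-cookie": "hatk_session=dev; Path=/" },
    });
  }

  if (url.pathname.startsWith("/xrpc/")) {
    return new Response(JSON.stringify({ ok: true }), {
      headers: { "content-type": "application/json" },
    });
  }

  return new Response("Not found", { status: 404 });
}

// Demo: the dev login redirects.
export const devStatus = (await handle(new Request("http://localhost/__dev/login"))).status;
```

Because the handler speaks standard `Request`/`Response`, a Node.js adapter only needs to translate `IncomingMessage`/`ServerResponse` at the edge.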
Add define functions (defineFeed, defineQuery, defineProcedure, defineSetup,
defineHook, defineLabels, defineOG) with __type discriminants. Add recursive
scanner that discovers and categorizes server/ modules by type. Add
server-init.ts that wires scanner results to registration functions. Update
main.ts to use initServer() when server/ directory exists, with fallback to
legacy separate directories. Update codegen to scaffold server/ files and
export all define functions from hatk.generated.ts.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
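A sketch of the `__type`-discriminant pattern: each `define*` function is an identity function that brands its argument so the recursive scanner can categorize modules. The handler signatures here are illustrative; only the discriminant idea and the function names come from the commit.

```typescript
type QueryDef<P, R> = { __type: "query"; handler: (params: P) => Promise<R> | R };
type ProcedureDef<P, R> = { __type: "procedure"; handler: (input: P) => Promise<R> | R };

export function defineQuery<P, R>(
  handler: (params: P) => Promise<R> | R,
): QueryDef<P, R> {
  return { __type: "query", handler };
}

export function defineProcedure<P, R>(
  handler: (input: P) => Promise<R> | R,
): ProcedureDef<P, R> {
  return { __type: "procedure", handler };
}

// The scanner can then switch on the discriminant to route each
// discovered module to the right registration function.
export function categorize(mod: { __type: string }): string {
  return mod.__type;
}

export const example = defineQuery((p: { q: string }) => ({ hits: [p.q] }));
```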
Backfill was inserting records without lexicon validation, allowing
invalid data (e.g. createdAt: "wowzers") through. Now validates
each record against the lexicon schema, matching indexer behavior.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
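A hedged sketch of the validate-before-insert fix above. `validateRecord` stands in for the real lexicon validator; here it only checks the `createdAt` case the commit mentions.

```typescript
type RepoRecord = { createdAt?: string; [k: string]: unknown };

// Reject records whose createdAt is not a parseable datetime,
// e.g. createdAt: "wowzers".
export function validateRecord(rec: RepoRecord): boolean {
  if (rec.createdAt !== undefined && Number.isNaN(Date.parse(rec.createdAt))) {
    return false;
  }
  return true;
}

// Matching indexer behavior: skip invalid records instead of inserting.
export function filterValid(records: RepoRecord[]): RepoRecord[] {
  return records.filter(validateRecord);
}
```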
SQLite is now the default for new projects and when no databaseEngine
is specified in config. Use --duckdb flag with hatk new to opt in.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
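The default-engine resolution above can be sketched as a tiny helper. The `databaseEngine` field and `--duckdb` flag follow the commit; `resolveEngine` itself is an illustrative name.

```typescript
type Engine = "duckdb" | "sqlite";

export function resolveEngine(
  configEngine: Engine | undefined,
  flags: { duckdb?: boolean } = {},
): Engine {
  if (flags.duckdb) return "duckdb"; // `hatk new --duckdb` opts in
  return configEngine ?? "sqlite"; // SQLite is the default
}
```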
Diff lexicon-derived schema against actual DB columns on every startup.
Emit ALTER TABLE ADD/DROP COLUMN for changes, drop orphaned child tables,
and trigger backfill when new empty collection tables are detected.
Also includes:
- Skip post-backfill restart in dev mode (DEV_MODE env)
- Suppress ECONNREFUSED proxy errors in Vite plugin during startup
- Move db/schema.sql write after setup hooks
- Normalize SQL parameter passing to use arrays consistently
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
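An illustrative sketch of the startup schema diff: compare the lexicon-derived column set with the live table and emit ALTER TABLE statements. The real migration also handles child tables and backfill triggering; `diffColumns` and its shapes are assumptions for the example.

```typescript
export function diffColumns(
  table: string,
  desired: Record<string, string>, // column name → SQL type
  actual: Record<string, string>,
): string[] {
  const stmts: string[] = [];
  // Columns in the lexicon but missing from the DB: ADD COLUMN.
  for (const [col, type] of Object.entries(desired)) {
    if (!(col in actual)) stmts.push(`ALTER TABLE ${table} ADD COLUMN ${col} ${type}`);
  }
  // Columns in the DB no longer in the lexicon: DROP COLUMN.
  for (const col of Object.keys(actual)) {
    if (!(col in desired)) stmts.push(`ALTER TABLE ${table} DROP COLUMN ${col}`);
  }
  return stmts;
}
```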
Refactor hatk's monolithic DuckDB data layer into a hexagonal architecture
supporting both DuckDB and SQLite via DatabasePort/SearchPort interfaces.
- Add database/ports.ts with DatabasePort, BulkInserter, SearchPort interfaces
- Add database/dialect.ts with SqlDialect configs for DuckDB and SQLite
- Add DuckDB adapter (database/adapters/duckdb.ts) preserving read/write queues
- Add SQLite adapter (database/adapters/sqlite.ts) with $1→? param translation
- Add DuckDB SearchPort using PRAGMA FTS and SQLite SearchPort using FTS5
- Add adapter factory with dynamic imports for tree-shaking
- Refactor db.ts to use DatabasePort instead of direct DuckDB API calls
- Make schema generation dialect-aware (type maps, timestamps, JSON)
- Make FTS index building dialect-aware (string_agg, json_extract)
- Add databaseEngine config option ('duckdb' | 'sqlite')
- Add --sqlite flag to hatk new scaffolding
- Add db/schema.sql auto-generation on startup
- Fix hatk schema command to work with both engines
- Update reset command to clean up SQLite WAL files
- Update all import paths to new database/ directory
- Add database/index.ts barrel export
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
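The SQLite adapter's $1→? translation mentioned above can be sketched as a pure function: numbered placeholders become positional `?`s and the argument array is reordered to match. `translateParams` is a hypothetical name, not the adapter's actual export.

```typescript
export function translateParams(
  sql: string,
  args: unknown[],
): { sql: string; args: unknown[] } {
  const order: number[] = [];
  // Replace each $N with ? while recording which argument it refers to.
  const out = sql.replace(/\$(\d+)/g, (_, n: string) => {
    order.push(Number(n) - 1);
    return "?";
  });
  return { sql: out, args: order.map((i) => args[i]) };
}
```

Reordering matters because DuckDB-style `$N` placeholders may repeat or appear out of order, while SQLite's `?` binds strictly positionally.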
Catch resolveHandle failures in handlePar and throw a clear
"Handle not found" error, so the PAR endpoint returns 400 instead of 500.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Adds an AGENTS.md file to new projects created with `hatk new`, providing
AI coding assistants with project structure and CLI usage context.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
hatk schema now inits the DB from lexicons if it doesn't exist yet,
so you can inspect the schema without running the server first.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add module-level docs with examples to labels, setup, seed, xrpc, and
hooks. Add function-level JSDoc to logger, mst, indexer, and all exports.
Move hooks.ts from oauth/ to src/ since it's generic lifecycle infrastructure.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Replace execSync with spawn-based helper that forwards SIGINT/SIGTERM
to child processes, eliminating the need to Ctrl+C twice.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add defineConfig() identity function for type inference. Rewrite loadConfig
to use dynamic import() instead of YAML parsing. Update all call sites
(main.ts, cli.ts, test.ts, vite-plugin.ts) and the scaffolder. Add
./config package export. Dockerfile template uses --experimental-strip-types
for native Node 25 TS support.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
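The `defineConfig()` identity function is simple but worth seeing: it does nothing at runtime and exists purely so `hatk.config.ts` gets full type inference. The `HatkConfig` fields shown are illustrative.

```typescript
export interface HatkConfig {
  appName: string;
  databaseEngine?: "duckdb" | "sqlite";
}

// Identity function: returns its argument unchanged, but forces the
// argument to typecheck against HatkConfig while preserving literals.
export function defineConfig<T extends HatkConfig>(config: T): T {
  return config;
}

// loadConfig can then dynamic-import the TS config module directly,
// e.g. `const { default: config } = await import(configFileUrl);`
// (variable names illustrative) instead of parsing YAML.
export const example = defineConfig({ appName: "demo" });
```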
- Resync endpoint sets repos to pending then calls runBackfill
instead of triggerAutoBackfill, using the same batch worker pool
- runBackfill returns record count; restart only if work was done
- Shared runBackfillAndRestart function used by both boot and resync
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- runBackfill returns record count instead of void
- Only restart after backfill if records were actually processed
- Use exit(1) so Railway restarts the container
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Remove DuckDB memory_limit/threads config (use defaults) and exit
after backfill+FTS completes so the container restarts at baseline ~250MB.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Host reports 36GB RAM but container is capped at 4GB. Without limits
DuckDB targets 29GB buffer pool and gets OOM-killed.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Fall back to full import when diff CAR is missing root block (PDS
returned 200 but compacted past our since rev)
- Handle SIGTERM for graceful shutdown on container platforms
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- DuckDB threads 2→1 (saves ~125MB native), memory_limit 512→256MB
- FTS rebuild interval 500→5000 (reduces frequency of expensive shadow table materialization)
- CHECKPOINT after FTS rebuild to compact WAL and free DuckDB memory
- Startup phase memory logging to diagnose where memory is consumed
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Set DuckDB memory_limit=512MB and threads=2 to prevent FTS shadow
table rebuilds from consuming all container RAM alongside V8.
Remove backfillChildTables(): it was a one-time migration; child rows
are already populated during normal indexing and backfill.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Replace buffered res.arrayBuffer() with incremental stream parsing.
Each block is .slice()d into its own Uint8Array, eliminating the single
large external ArrayBuffer that V8 can't GC (213MB → 64MB external).
Also adds diff-based backfill via `since` parameter with fallback to
full import when the PDS has compacted history.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
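The key observation behind the streaming fix above can be shown in a few lines: a subarray view keeps the entire backing ArrayBuffer alive, while `.slice()` copies the block into its own small buffer that V8 can collect independently of the big download.

```typescript
const car = new Uint8Array(1024); // stands in for a large CAR download

const view = car.subarray(0, 16); // shares the 1024-byte backing buffer
const copy = view.slice(); // owns a fresh 16-byte buffer

// A retained `view` pins all 1024 bytes; a retained `copy` pins only 16.
export const viewBacking = view.buffer.byteLength;
export const copyBacking = copy.buffer.byteLength;
```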
Retry timers in triggerAutoBackfill bypassed the concurrency check in
processMessage, allowing unbounded concurrent CAR downloads (23 observed
in production, 2GB RSS). Move the concurrency gate into
triggerAutoBackfill itself and use config.backfill.parallelism instead of
a hardcoded constant. Add dedup set to prevent re-schedule timer
accumulation.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
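A hedged sketch of the concurrency gate described above: a counter-based gate with a dedup set so retry timers cannot accumulate per repo. The names are illustrative, not hatk's actual API.

```typescript
export function createBackfillGate(parallelism: number) {
  let active = 0;
  const scheduled = new Set<string>();

  return {
    // True if this repo may start a CAR download now; false means the
    // caller should schedule a retry (at most one pending per repo).
    tryAcquire(did: string): boolean {
      if (active >= parallelism) {
        scheduled.add(did); // Set dedupes repeated re-schedules
        return false;
      }
      scheduled.delete(did);
      active += 1;
      return true;
    },
    release(): void {
      active -= 1;
    },
    pendingRetries(): number {
      return scheduled.size;
    },
  };
}

export const gate = createBackfillGate(1);
```

Putting the gate inside the backfill trigger itself, rather than only in the message handler, is what closes the retry-timer bypass.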
- Replace eager block Map with LazyBlockMap that stores byte offsets
into the CAR buffer instead of copying block data
- Convert walkMst to a generator so entries are yielded and processed
one at a time instead of collecting 193k+ entries into an array
- Delete block offsets as they're consumed to allow mid-processing GC
- Add iterator support to LazyBlockMap for indexer compatibility
- Reduce default backfill parallelism from 5 to 3
- Bump max-old-space-size from 256 to 512 in Dockerfile template
- Add build/publish/release scripts to root package.json
Benchmarked against a 71MB CAR (244k blocks, 193k records):
peak heap dropped from OOM at 512MB to ~415MB with recovery to 109MB.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
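A simplified sketch of the LazyBlockMap idea: store (offset, length) pairs into the CAR buffer, materialize a copied block only on demand, and drop the entry so consumed regions stop being referenced. Method names here are illustrative; the real class also supports iteration for the indexer.

```typescript
export class LazyBlockMap {
  private offsets = new Map<string, [number, number]>();

  constructor(private buf: Uint8Array) {}

  set(cid: string, offset: number, length: number): void {
    this.offsets.set(cid, [offset, length]);
  }

  // Copy the block out (so it doesn't pin the big buffer) and delete
  // the entry, allowing mid-processing GC of consumed blocks.
  take(cid: string): Uint8Array | undefined {
    const entry = this.offsets.get(cid);
    if (!entry) return undefined;
    this.offsets.delete(cid);
    const [off, len] = entry;
    return this.buf.slice(off, off + len);
  }

  get size(): number {
    return this.offsets.size;
  }
}

export const m = new LazyBlockMap(new Uint8Array([1, 2, 3, 4, 5]));
m.set("b1", 1, 3);
```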
Inserts records in batches of 1000 instead of collecting all records
for a repo before inserting. Moves DELETE before insert loop so
chunked inserts work correctly. Bumps max-old-space-size to 512.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
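The batching above reduces to a generic chunking helper plus a DELETE-first loop. `chunk()` is a plain utility; the insert/delete calls in the comment are hypothetical names for illustration.

```typescript
export function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// Usage shape (deleteRepoRows/insertBatch are illustrative):
//   await deleteRepoRows(did); // DELETE once, up front, so each chunked
//                              // insert appends rather than replaces
//   for (const batch of chunk(records, 1000)) {
//     await insertBatch(batch);
//   }
```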
Reduces memory pressure by releasing intermediate data before bulk insert.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Expose rss/heapUsed/heapTotal/external in /admin/stats response
- Add --max-old-space-size=256 to Dockerfile CMD template
- Bump version to 0.0.1-alpha.5
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Install all deps before build, then prune devDependencies after
- Add required: ['items'] to feed/search/list output schemas
- Add @types/node to scaffold devDependencies
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>