commits
Remove DuckDB memory_limit/threads config (use defaults) and exit
after backfill+FTS completes so the container restarts at baseline ~250MB.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Host reports 36GB RAM but the container is capped at 4GB. Without limits,
DuckDB targets a ~29GB buffer pool and gets OOM-killed.
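A minimal sketch of the explicit cap, sized for the cgroup rather than the host. `run` stands in for whatever exec call the DuckDB connection exposes; the 512MB/2-thread values are the ones this changelog later settles on:

```typescript
// Cap DuckDB explicitly so it sizes buffers for the container limit,
// not the 36GB the host reports. `run` is a stand-in for the DuckDB
// connection's exec method.
function applyDuckDbLimits(
  run: (sql: string) => void,
  memoryLimit = "512MB",
  threads = 2,
): void {
  run(`SET memory_limit='${memoryLimit}'`);
  run(`SET threads TO ${threads}`);
}
```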
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Fall back to full import when the diff CAR is missing the root block
  (the PDS returned 200 but had compacted past our `since` rev)
- Handle SIGTERM for graceful shutdown on container platforms
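The SIGTERM handling above can be sketched as an idempotent shutdown path, so a second signal during cleanup is ignored (function names are illustrative, not the app's actual API):

```typescript
// Idempotent shutdown: container platforms send SIGTERM before SIGKILL,
// so cleanup must run exactly once even if the signal repeats.
let shuttingDown = false;

function shutdown(close: () => void): boolean {
  if (shuttingDown) return false; // already shutting down: ignore
  shuttingDown = true;
  close(); // close DB handles, stop the HTTP server, flush state
  return true;
}

process.on("SIGTERM", () => {
  shutdown(() => {
    /* release resources here */
  });
  process.exit(0);
});
```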
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- DuckDB threads 2→1 (saves ~125MB native), memory_limit 512→256MB
- FTS rebuild interval 500→5000 (reduces frequency of expensive shadow table materialization)
- CHECKPOINT after FTS rebuild to compact WAL and free DuckDB memory
- Startup phase memory logging to diagnose where memory is consumed
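The interval change and the post-rebuild CHECKPOINT can be sketched as a counter gate. The SQL strings are assumptions about the app's statements, not copies of them; `run` stands in for the DuckDB connection's exec call:

```typescript
// Gate the expensive FTS shadow-table rebuild behind a record counter,
// then CHECKPOINT to compact the WAL and release buffer memory.
const FTS_REBUILD_INTERVAL = 5000;
let sinceRebuild = 0;

function onRecordIndexed(run: (sql: string) => void): void {
  if (++sinceRebuild < FTS_REBUILD_INTERVAL) return;
  sinceRebuild = 0;
  // Assumed index statement; the real table/column names differ.
  run("PRAGMA create_fts_index('records', 'uri', 'text', overwrite=1)");
  run("CHECKPOINT");
}
```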
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Set DuckDB memory_limit=512MB and threads=2 to prevent FTS shadow
table rebuilds from consuming all container RAM alongside V8.
Remove backfillChildTables() — was a one-time migration, child rows
are already populated during normal indexing and backfill.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Replace buffered res.arrayBuffer() with incremental stream parsing.
Each block is .slice()d into its own Uint8Array, eliminating the single
large external ArrayBuffer that V8 can't GC (213MB → 64MB external).
Also adds diff-based backfill via `since` parameter with fallback to
full import when the PDS has compacted history.
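The per-block copy matters because of how typed-array views work: a sketch of the slice, assuming only standard `Uint8Array` semantics:

```typescript
// Copy each parsed block out of the streamed chunk. `.slice()` allocates a
// fresh backing ArrayBuffer, so the large source buffer is not pinned by
// small block views; `.subarray()` would keep the whole buffer alive.
function copyBlock(source: Uint8Array, start: number, end: number): Uint8Array {
  return source.slice(start, end);
}
```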
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Retry timers in triggerAutoBackfill bypassed the concurrency check in
processMessage, allowing unbounded concurrent CAR downloads (23 observed
in production, 2GB RSS). Move the concurrency gate into
triggerAutoBackfill itself and use config.backfill.parallelism instead of
a hardcoded constant. Add dedup set to prevent re-schedule timer
accumulation.
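The moved gate can be sketched as follows; the names are illustrative and `PARALLELISM` stands in for `config.backfill.parallelism`:

```typescript
// Concurrency gate lives in the trigger itself, so retry timers cannot
// bypass it. The dedup set prevents re-schedule timer accumulation.
const PARALLELISM = 3;
const inFlight = new Set<string>();
const scheduled = new Set<string>();

function triggerAutoBackfill(did: string, start: (did: string) => void): boolean {
  if (inFlight.has(did) || scheduled.has(did)) return false; // already queued
  if (inFlight.size >= PARALLELISM) {
    scheduled.add(did); // retry later without stacking timers
    return false;
  }
  inFlight.add(did);
  start(did);
  return true;
}
```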
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Replace eager block Map with LazyBlockMap that stores byte offsets
into the CAR buffer instead of copying block data
- Convert walkMst to a generator so entries are yielded and processed
one at a time instead of collecting 193k+ entries into an array
- Delete block offsets as they're consumed to allow mid-processing GC
- Add iterator support to LazyBlockMap for indexer compatibility
- Reduce default backfill parallelism from 5 to 3
- Bump max-old-space-size from 256 to 512 in Dockerfile template
- Add build/publish/release scripts to root package.json
Benchmarked against a 71MB CAR (244k blocks, 193k records):
peak heap dropped from OOM at 512MB to ~415MB with recovery to 109MB.
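A hedged sketch of the LazyBlockMap idea: keep `(offset, length)` per CID and copy bytes out of the CAR buffer only on demand, deleting the entry once consumed. The API names are illustrative:

```typescript
// Stores byte offsets into the CAR buffer instead of copied block data.
// Entries are deleted as they are consumed, so the map shrinks and the
// GC can reclaim state mid-processing.
class LazyBlockMap {
  private offsets = new Map<string, [number, number]>();

  constructor(private car: Uint8Array) {}

  set(cid: string, offset: number, length: number): void {
    this.offsets.set(cid, [offset, length]);
  }

  take(cid: string): Uint8Array | undefined {
    const entry = this.offsets.get(cid);
    if (!entry) return undefined;
    this.offsets.delete(cid); // consumed: allow mid-processing GC
    return this.car.slice(entry[0], entry[0] + entry[1]);
  }

  get size(): number {
    return this.offsets.size;
  }
}
```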
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Inserts records in batches of 1000 instead of collecting all records
for a repo before inserting. Moves the DELETE ahead of the insert loop
so chunked inserts work correctly. Bumps max-old-space-size to 512.
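The batching can be sketched as a small generator (the helper name and the 1000 default are the only assumptions):

```typescript
// Yield fixed-size batches instead of accumulating every record for a
// repo before the insert.
function* batches<T>(items: Iterable<T>, size = 1000): Generator<T[]> {
  let batch: T[] = [];
  for (const item of items) {
    batch.push(item);
    if (batch.length === size) {
      yield batch;
      batch = [];
    }
  }
  if (batch.length > 0) yield batch; // trailing partial batch
}
```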
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Reduces memory pressure by releasing intermediate data before bulk insert.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Expose rss/heapUsed/heapTotal/external in /admin/stats response
- Add --max-old-space-size=256 to Dockerfile CMD template
- Bump version to 0.0.1-alpha.5
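`process.memoryUsage()` is the standard Node API behind those fields; the MB rounding and response field names below are assumptions:

```typescript
// Surface V8/RSS numbers for the stats endpoint, rounded to MB.
function memoryStats() {
  const { rss, heapUsed, heapTotal, external } = process.memoryUsage();
  const mb = (n: number) => Math.round(n / (1024 * 1024));
  return {
    rssMb: mb(rss),
    heapUsedMb: mb(heapUsed),
    heapTotalMb: mb(heapTotal),
    externalMb: mb(external),
  };
}
```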
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Install all deps before build, then prune devDependencies after
- Add required: ['items'] to feed/search/list output schemas
- Add @types/node to scaffold devDependencies
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add skipLibCheck and exclude node_modules/dist/docs from root
tsconfig so tsc stops checking third-party .d.ts files.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Sets DEV_MODE=1 when running hatk dev, which makes requireAdmin
skip the DID allowlist check. All seeded accounts automatically
have admin access during development.
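The bypass reduces to a short-circuit ahead of the allowlist check; `requireAdmin`'s real signature is unknown, so the function below is only illustrative:

```typescript
// DEV_MODE=1 (set by `hatk dev`) skips the DID allowlist entirely, so
// every seeded account is an admin during development.
function isAdmin(
  did: string,
  allowlist: Set<string>,
  env: Record<string, string | undefined> = process.env,
): boolean {
  if (env.DEV_MODE === "1") return true;
  return allowlist.has(did);
}
```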
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Compresses JSON responses > 1KB when the client supports gzip.
Uses a shared sendJson helper so jsonResponse/jsonError signatures
stay unchanged.
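The size/encoding decision inside a sendJson-style helper could look like this; the 1KB threshold matches the text, while the function shape is an assumption:

```typescript
import { gzipSync } from "node:zlib";

// Gzip JSON bodies over 1KB when the client's Accept-Encoding allows it.
function encodeJson(
  body: unknown,
  acceptEncoding = "",
): { data: Buffer; encoding?: "gzip" } {
  const raw = Buffer.from(JSON.stringify(body));
  if (raw.length > 1024 && /\bgzip\b/.test(acceptEncoding)) {
    return { data: gzipSync(raw), encoding: "gzip" };
  }
  return { data: raw };
}
```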
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Serves a default robots.txt from the hatk package, with user override
via public/robots.txt. Adds meta description to the app.html scaffold
template.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Switches hatk from shipping raw TypeScript to compiled JS via tsc.
Renames npm package from hatk to @hatk/hatk (npm blocked the unscoped name).
Updates all codegen templates and path references for the new package name.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Remove unused functions (flattenRow, resolveBlobOverrides), fix
empty destructuring pattern, exclude minified admin-auth.js from
lint, exclude docs/superpowers from formatter.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Templates are standalone hatk projects hosted at github.com/hatk-dev/.
`hatk new my-app --template statusphere` clones hatk-template-statusphere,
strips .git, and sets the project name in package.json.
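Two of the steps reduce to pure string work; the repo-URL convention and helper names below are assumptions inferred from the text:

```typescript
// Map a --template flag to its repo under github.com/hatk-dev/ (URL
// format is an assumption based on the naming convention above).
function templateRepo(template: string): string {
  return `https://github.com/hatk-dev/hatk-template-${template}.git`;
}

// Rewrite the cloned template's package.json name to the new project.
function setProjectName(pkgJsonText: string, name: string): string {
  const pkg = JSON.parse(pkgJsonText);
  pkg.name = name;
  return JSON.stringify(pkg, null, 2) + "\n";
}
```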
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>