setup matt pocock skills

+1456 -3
+117
.agents/skills/diagnose/SKILL.md
---
name: diagnose
description: Disciplined diagnosis loop for hard bugs and performance regressions. Reproduce → minimise → hypothesise → instrument → fix → regression-test. Use when user says "diagnose this" / "debug this", reports a bug, says something is broken/throwing/failing, or describes a performance regression.
---

# Diagnose

A discipline for hard bugs. Skip phases only when explicitly justified.

When exploring the codebase, use the project's domain glossary to get a clear mental model of the relevant modules, and check ADRs in the area you're touching.

## Phase 1 — Build a feedback loop

**This is the skill.** Everything else is mechanical. If you have a fast, deterministic, agent-runnable pass/fail signal for the bug, you will find the cause — bisection, hypothesis-testing, and instrumentation all just consume that signal. If you don't have one, no amount of staring at code will save you.

Spend disproportionate effort here. **Be aggressive. Be creative. Refuse to give up.**

### Ways to construct one — try them in roughly this order

1. **Failing test** at whatever seam reaches the bug — unit, integration, e2e.
2. **Curl / HTTP script** against a running dev server.
3. **CLI invocation** with a fixture input, diffing stdout against a known-good snapshot.
4. **Headless browser script** (Playwright / Puppeteer) — drives the UI, asserts on DOM/console/network.
5. **Replay a captured trace.** Save a real network request / payload / event log to disk; replay it through the code path in isolation.
6. **Throwaway harness.** Spin up a minimal subset of the system (one service, mocked deps) that exercises the bug code path with a single function call.
7. **Property / fuzz loop.** If the bug is "sometimes wrong output", run 1000 random inputs and look for the failure mode.
8. **Bisection harness.** If the bug appeared between two known states (commit, dataset, version), automate "boot at state X, check, repeat" so you can `git bisect run` it.
9. **Differential loop.** Run the same input through old-version vs new-version (or two configs) and diff outputs.
10. **HITL bash script.** Last resort. If a human must click, drive _them_ with `scripts/hitl-loop.template.sh` so the loop is still structured. Captured output feeds back to you.

Build the right feedback loop, and the bug is 90% fixed.

### Iterate on the loop itself

Treat the loop as a product. Once you have _a_ loop, ask:

- Can I make it faster? (Cache setup, skip unrelated init, narrow the test scope.)
- Can I make the signal sharper? (Assert on the specific symptom, not "didn't crash".)
- Can I make it more deterministic? (Pin time, seed RNG, isolate filesystem, freeze network.)

A 30-second flaky loop is barely better than no loop. A 2-second deterministic loop is a debugging superpower.

### Non-deterministic bugs

The goal is not a clean repro but a **higher reproduction rate**. Loop the trigger 100×, parallelise, add stress, narrow timing windows, inject sleeps. A 50%-flake bug is debuggable; 1% is not — keep raising the rate until it's debuggable.

### When you genuinely cannot build a loop

Stop and say so explicitly. List what you tried. Ask the user for: (a) access to whatever environment reproduces it, (b) a captured artifact (HAR file, log dump, core dump, screen recording with timestamps), or (c) permission to add temporary production instrumentation. Do **not** proceed to hypothesise without a loop.

Do not proceed to Phase 2 until you have a loop you believe in.

## Phase 2 — Reproduce

Run the loop. Watch the bug appear.

Confirm:

- [ ] The loop produces the failure mode the **user** described — not a different failure that happens to be nearby. Wrong bug = wrong fix.
- [ ] The failure is reproducible across multiple runs (or, for non-deterministic bugs, reproducible at a high enough rate to debug against).
- [ ] You have captured the exact symptom (error message, wrong output, slow timing) so later phases can verify the fix actually addresses it.

Do not proceed until you reproduce the bug.

## Phase 3 — Hypothesise

Generate **3–5 ranked hypotheses** before testing any of them. Single-hypothesis generation anchors on the first plausible idea.

Each hypothesis must be **falsifiable**: state the prediction it makes.

> Format: "If <X> is the cause, then <changing Y> will make the bug disappear / <changing Z> will make it worse."

If you cannot state the prediction, the hypothesis is a vibe — discard or sharpen it.

**Show the ranked list to the user before testing.** They often have domain knowledge that re-ranks instantly ("we just deployed a change to #3"), or know hypotheses they've already ruled out. Cheap checkpoint, big time saver. Don't block on it — proceed with your ranking if the user is AFK.

## Phase 4 — Instrument

Each probe must map to a specific prediction from Phase 3. **Change one variable at a time.**

Tool preference:

1. **Debugger / REPL inspection** if the env supports it. One breakpoint beats ten logs.
2. **Targeted logs** at the boundaries that distinguish hypotheses.
3. Never "log everything and grep".

**Tag every debug log** with a unique prefix, e.g. `[DEBUG-a4f2]`. Cleanup at the end becomes a single grep. Untagged logs survive; tagged logs die.

**Perf branch.** For performance regressions, logs are usually wrong. Instead: establish a baseline measurement (timing harness, `performance.now()`, profiler, query plan), then bisect. Measure first, fix second.

## Phase 5 — Fix + regression test

Write the regression test **before the fix** — but only if there is a **correct seam** for it.

A correct seam is one where the test exercises the **real bug pattern** as it occurs at the call site. If the only available seam is too shallow (single-caller test when the bug needs multiple callers, unit test that can't replicate the chain that triggered the bug), a regression test there gives false confidence.

**If no correct seam exists, that itself is the finding.** Note it. The codebase architecture is preventing the bug from being locked down. Flag this for the next phase.

If a correct seam exists:

1. Turn the minimised repro into a failing test at that seam.
2. Watch it fail.
3. Apply the fix.
4. Watch it pass.
5. Re-run the Phase 1 feedback loop against the original (un-minimised) scenario.

## Phase 6 — Cleanup + post-mortem

Required before declaring done:

- [ ] Original repro no longer reproduces (re-run the Phase 1 loop)
- [ ] Regression test passes (or absence of seam is documented)
- [ ] All `[DEBUG-...]` instrumentation removed (`grep` the prefix)
- [ ] Throwaway prototypes deleted (or moved to a clearly-marked debug location)
- [ ] The hypothesis that turned out correct is stated in the commit / PR message — so the next debugger learns

**Then ask: what would have prevented this bug?** If the answer involves architectural change (no good test seam, tangled callers, hidden coupling), hand off to the `/improve-codebase-architecture` skill with the specifics. Make the recommendation **after** the fix is in, not before — you have more information now than when you started.
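The perf branch in Phase 4 can be grounded with a minimal timing harness. A sketch, assuming Node — the `measure` helper is hypothetical; the skill only prescribes "measure first, fix second", not this exact shape:

```typescript
// Minimal baseline harness for a performance regression.
import { performance } from "node:perf_hooks";

function measure(fn: () => void, runs = 20): { median: number; min: number } {
  const samples: number[] = [];
  for (let i = 0; i < runs; i++) {
    const start = performance.now();
    fn();
    samples.push(performance.now() - start);
  }
  samples.sort((a, b) => a - b);
  // Median is robust to GC pauses; min approximates the noise floor.
  return { median: samples[Math.floor(runs / 2)], min: samples[0] };
}

// Record the baseline on the known-good commit, re-measure after each
// bisection step, and compare medians — never single runs.
const baseline = measure(() => JSON.stringify({ order: { id: 1 } }));
console.log(`median=${baseline.median.toFixed(3)}ms min=${baseline.min.toFixed(3)}ms`);
```

The same deterministic number on both sides of a bisection step is the pass/fail signal Phase 1 asks for.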
+41
.agents/skills/diagnose/scripts/hitl-loop.template.sh
```bash
#!/usr/bin/env bash
# Human-in-the-loop reproduction loop.
# Copy this file, edit the steps below, and run it.
# The agent runs the script; the user follows prompts in their terminal.
#
# Usage:
#   bash hitl-loop.template.sh
#
# Two helpers:
#   step "<instruction>"       → show instruction, wait for Enter
#   capture VAR "<question>"   → show question, read response into VAR
#
# At the end, captured values are printed as KEY=VALUE for the agent to parse.

set -euo pipefail

step() {
  printf '\n>>> %s\n' "$1"
  read -r -p "    [Enter when done] " _
}

capture() {
  local var="$1" question="$2" answer
  printf '\n>>> %s\n' "$question"
  read -r -p "    > " answer
  printf -v "$var" '%s' "$answer"
}

# --- edit below ---------------------------------------------------------

step "Open the app at http://localhost:3000 and sign in."

capture ERRORED "Click the 'Export' button. Did it throw an error? (y/n)"

capture ERROR_MSG "Paste the error message (or 'none'):"

# --- edit above ---------------------------------------------------------

printf '\n--- Captured ---\n'
printf 'ERRORED=%s\n' "$ERRORED"
printf 'ERROR_MSG=%s\n' "$ERROR_MSG"
```
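On the agent side, the `KEY=VALUE` tail printed by the template is straightforward to parse. A sketch — the `parseCaptured` helper is hypothetical; the script only specifies the output format:

```typescript
// Parse the "--- Captured ---" tail printed by hitl-loop.template.sh.
function parseCaptured(output: string): Record<string, string> {
  const tail = output.split("--- Captured ---").pop() ?? "";
  const captured: Record<string, string> = {};
  for (const line of tail.split("\n")) {
    const match = line.match(/^([A-Z_][A-Z0-9_]*)=(.*)$/);
    if (match) captured[match[1]] = match[2];
  }
  return captured;
}

const output =
  "script noise\n--- Captured ---\nERRORED=y\nERROR_MSG=TypeError: x is undefined\n";
console.log(parseCaptured(output));
// → { ERRORED: "y", ERROR_MSG: "TypeError: x is undefined" }
```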
+10
.agents/skills/grill-me/SKILL.md
---
name: grill-me
description: Interview the user relentlessly about a plan or design until reaching shared understanding, resolving each branch of the decision tree. Use when user wants to stress-test a plan, get grilled on their design, or mentions "grill me".
---

Interview me relentlessly about every aspect of this plan until we reach a shared understanding. Walk down each branch of the design tree, resolving dependencies between decisions one-by-one. For each question, provide your recommended answer.

Ask the questions one at a time.

If a question can be answered by exploring the codebase, explore the codebase instead.
+47
.agents/skills/grill-with-docs/ADR-FORMAT.md
# ADR Format

ADRs live in `docs/adr/` and use sequential numbering: `0001-slug.md`, `0002-slug.md`, etc.

Create the `docs/adr/` directory lazily — only when the first ADR is needed.

## Template

```md
# {Short title of the decision}

{1-3 sentences: what's the context, what did we decide, and why.}
```

That's it. An ADR can be a single paragraph. The value is in recording *that* a decision was made and *why* — not in filling out sections.

## Optional sections

Only include these when they add genuine value. Most ADRs won't need them.

- **Status** frontmatter (`proposed | accepted | deprecated | superseded by ADR-NNNN`) — useful when decisions are revisited
- **Considered Options** — only when the rejected alternatives are worth remembering
- **Consequences** — only when non-obvious downstream effects need to be called out

## Numbering

Scan `docs/adr/` for the highest existing number and increment by one.

## When to offer an ADR

All three of these must be true:

1. **Hard to reverse** — the cost of changing your mind later is meaningful
2. **Surprising without context** — a future reader will look at the code and wonder "why on earth did they do it this way?"
3. **The result of a real trade-off** — there were genuine alternatives and you picked one for specific reasons

If a decision is easy to reverse, skip it — you'll just reverse it. If it's not surprising, nobody will wonder why. If there was no real alternative, there's nothing to record beyond "we did the obvious thing."

### What qualifies

- **Architectural shape.** "We're using a monorepo." "The write model is event-sourced, the read model is projected into Postgres."
- **Integration patterns between contexts.** "Ordering and Billing communicate via domain events, not synchronous HTTP."
- **Technology choices that carry lock-in.** Database, message bus, auth provider, deployment target. Not every library — just the ones that would take a quarter to swap out.
- **Boundary and scope decisions.** "Customer data is owned by the Customer context; other contexts reference it by ID only." The explicit no's are as valuable as the yes's.
- **Deliberate deviations from the obvious path.** "We're using manual SQL instead of an ORM because X." Anything where a reasonable reader would assume the opposite. These stop the next engineer from "fixing" something that was deliberate.
- **Constraints not visible in the code.** "We can't use AWS because of compliance requirements." "Response times must be under 200ms because of the partner API contract."
- **Rejected alternatives when the rejection is non-obvious.** If you considered GraphQL and picked REST for subtle reasons, record it — otherwise someone will suggest GraphQL again in six months.
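The numbering rule above can be sketched in a few lines. A hypothetical helper — it takes the directory listing as input rather than touching the filesystem:

```typescript
// Next ADR number: highest existing NNNN- prefix in docs/adr/, plus one.
function nextAdrNumber(entries: string[]): string {
  const highest = entries
    .map((name) => /^(\d{4})-/.exec(name))
    .filter((m): m is RegExpExecArray => m !== null)
    .reduce((max, m) => Math.max(max, Number(m[1])), 0);
  return String(highest + 1).padStart(4, "0");
}

console.log(nextAdrNumber(["0001-event-sourced-orders.md", "0002-postgres-for-write-model.md"]));
// → "0003"
console.log(nextAdrNumber([])); // → "0001" (directory is created lazily)
```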
+77
.agents/skills/grill-with-docs/CONTEXT-FORMAT.md
# CONTEXT.md Format

## Structure

```md
# {Context Name}

{One or two sentence description of what this context is and why it exists.}

## Language

**Order**:
{A concise description of the term}
_Avoid_: Purchase, transaction

**Invoice**:
A request for payment sent to a customer after delivery.
_Avoid_: Bill, payment request

**Customer**:
A person or organization that places orders.
_Avoid_: Client, buyer, account

## Relationships

- An **Order** produces one or more **Invoices**
- An **Invoice** belongs to exactly one **Customer**

## Example dialogue

> **Dev:** "When a **Customer** places an **Order**, do we create the **Invoice** immediately?"
> **Domain expert:** "No — an **Invoice** is only generated once a **Fulfillment** is confirmed."

## Flagged ambiguities

- "account" was used to mean both **Customer** and **User** — resolved: these are distinct concepts.
```

## Rules

- **Be opinionated.** When multiple words exist for the same concept, pick the best one and list the others as aliases to avoid.
- **Flag conflicts explicitly.** If a term is used ambiguously, call it out in "Flagged ambiguities" with a clear resolution.
- **Keep definitions tight.** One sentence max. Define what it IS, not what it does.
- **Show relationships.** Use bold term names and express cardinality where obvious.
- **Only include terms specific to this project's context.** General programming concepts (timeouts, error types, utility patterns) don't belong even if the project uses them extensively. Before adding a term, ask: is this a concept unique to this context, or a general programming concept? Only the former belongs.
- **Group terms under subheadings** when natural clusters emerge. If all terms belong to a single cohesive area, a flat list is fine.
- **Write an example dialogue.** A conversation between a dev and a domain expert that demonstrates how the terms interact naturally and clarifies boundaries between related concepts.

## Single vs multi-context repos

**Single context (most repos):** One `CONTEXT.md` at the repo root.

**Multiple contexts:** A `CONTEXT-MAP.md` at the repo root lists the contexts, where they live, and how they relate to each other:

```md
# Context Map

## Contexts

- [Ordering](./src/ordering/CONTEXT.md) — receives and tracks customer orders
- [Billing](./src/billing/CONTEXT.md) — generates invoices and processes payments
- [Fulfillment](./src/fulfillment/CONTEXT.md) — manages warehouse picking and shipping

## Relationships

- **Ordering → Fulfillment**: Ordering emits `OrderPlaced` events; Fulfillment consumes them to start picking
- **Fulfillment → Billing**: Fulfillment emits `ShipmentDispatched` events; Billing consumes them to generate invoices
- **Ordering ↔ Billing**: Shared types for `CustomerId` and `Money`
```

The skill infers which structure applies:

- If `CONTEXT-MAP.md` exists, read it to find contexts
- If only a root `CONTEXT.md` exists, single context
- If neither exists, create a root `CONTEXT.md` lazily when the first term is resolved

When multiple contexts exist, infer which one the current topic relates to. If unclear, ask.
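The inference rules above can be sketched as a small decision function. All names here, and the injected `exists` predicate, are illustrative:

```typescript
type ContextLayout =
  | { kind: "multi"; mapPath: string }      // CONTEXT-MAP.md lists the contexts
  | { kind: "single"; contextPath: string } // one root CONTEXT.md
  | { kind: "none" };                       // create CONTEXT.md lazily later

function detectLayout(root: string, exists: (path: string) => boolean): ContextLayout {
  if (exists(`${root}/CONTEXT-MAP.md`)) return { kind: "multi", mapPath: `${root}/CONTEXT-MAP.md` };
  if (exists(`${root}/CONTEXT.md`)) return { kind: "single", contextPath: `${root}/CONTEXT.md` };
  return { kind: "none" };
}

// e.g. against a repo with only a root CONTEXT.md:
const layout = detectLayout(".", (p) => p === "./CONTEXT.md");
console.log(layout); // → { kind: "single", contextPath: "./CONTEXT.md" }
```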
+81
.agents/skills/grill-with-docs/SKILL.md
---
name: grill-with-docs
description: Grilling session that challenges your plan against the existing domain model, sharpens terminology, and updates documentation (CONTEXT.md, ADRs) inline as decisions crystallise. Use when user wants to stress-test a plan against their project's language and documented decisions.
disable-model-invocation: true
---

Interview me relentlessly about every aspect of this plan until we reach a shared understanding. Walk down each branch of the design tree, resolving dependencies between decisions one-by-one. For each question, provide your recommended answer.

Ask the questions one at a time, waiting for feedback on each question before continuing.

If a question can be answered by exploring the codebase, explore the codebase instead.

## Domain awareness

During codebase exploration, also look for existing documentation:

### File structure

Most repos have a single context:

```
/
├── CONTEXT.md
├── docs/
│   └── adr/
│       ├── 0001-event-sourced-orders.md
│       └── 0002-postgres-for-write-model.md
└── src/
```

If a `CONTEXT-MAP.md` exists at the root, the repo has multiple contexts. The map points to where each one lives:

```
/
├── CONTEXT-MAP.md
├── docs/
│   └── adr/          ← system-wide decisions
└── src/
    ├── ordering/
    │   ├── CONTEXT.md
    │   └── docs/adr/ ← context-specific decisions
    └── billing/
        ├── CONTEXT.md
        └── docs/adr/
```

Create files lazily — only when you have something to write. If no `CONTEXT.md` exists, create one when the first term is resolved. If no `docs/adr/` exists, create it when the first ADR is needed.

## During the session

### Challenge against the glossary

When the user uses a term that conflicts with the existing language in `CONTEXT.md`, call it out immediately. "Your glossary defines 'cancellation' as X, but you seem to mean Y — which is it?"

### Sharpen fuzzy language

When the user uses vague or overloaded terms, propose a precise canonical term. "You're saying 'account' — do you mean the Customer or the User? Those are different things."

### Discuss concrete scenarios

When domain relationships are being discussed, stress-test them with specific scenarios. Invent scenarios that probe edge cases and force the user to be precise about the boundaries between concepts.

### Cross-reference with code

When the user states how something works, check whether the code agrees. If you find a contradiction, surface it: "Your code cancels entire Orders, but you just said partial cancellation is possible — which is right?"

### Update CONTEXT.md inline

When a term is resolved, update `CONTEXT.md` right there. Don't batch these up — capture them as they happen. Use the format in [CONTEXT-FORMAT.md](./CONTEXT-FORMAT.md).

Don't couple `CONTEXT.md` to implementation details. Only include terms that are meaningful to domain experts.

### Offer ADRs sparingly

Only offer to create an ADR when all three are true:

1. **Hard to reverse** — the cost of changing your mind later is meaningful
2. **Surprising without context** — a future reader will wonder "why did they do it this way?"
3. **The result of a real trade-off** — there were genuine alternatives and you picked one for specific reasons

If any of the three is missing, skip the ADR. Use the format in [ADR-FORMAT.md](./ADR-FORMAT.md).
+37
.agents/skills/improve-codebase-architecture/DEEPENING.md
# Deepening

How to deepen a cluster of shallow modules safely, given its dependencies. Assumes the vocabulary in [LANGUAGE.md](LANGUAGE.md) — **module**, **interface**, **seam**, **adapter**.

## Dependency categories

When assessing a candidate for deepening, classify its dependencies. The category determines how the deepened module is tested across its seam.

### 1. In-process

Pure computation, in-memory state, no I/O. Always deepenable — merge the modules and test through the new interface directly. No adapter needed.

### 2. Local-substitutable

Dependencies that have local test stand-ins (PGLite for Postgres, in-memory filesystem). Deepenable if the stand-in exists. The deepened module is tested with the stand-in running in the test suite. The seam is internal; no port at the module's external interface.

### 3. Remote but owned (Ports & Adapters)

Your own services across a network boundary (microservices, internal APIs). Define a **port** (interface) at the seam. The deep module owns the logic; the transport is injected as an **adapter**. Tests use an in-memory adapter. Production uses an HTTP/gRPC/queue adapter.

Recommendation shape: *"Define a port at the seam, implement an HTTP adapter for production and an in-memory adapter for testing, so the logic sits in one deep module even though it's deployed across a network."*

### 4. True external (Mock)

Third-party services (Stripe, Twilio, etc.) you don't control. The deepened module takes the external dependency as an injected port; tests provide a mock adapter.

## Seam discipline

- **One adapter means a hypothetical seam. Two adapters means a real one.** Don't introduce a port unless at least two adapters are justified (typically production + test). A single-adapter seam is just indirection.
- **Internal seams vs external seams.** A deep module can have internal seams (private to its implementation, used by its own tests) as well as the external seam at its interface. Don't expose internal seams through the interface just because tests use them.

## Testing strategy: replace, don't layer

- Old unit tests on shallow modules become waste once tests at the deepened module's interface exist — delete them.
- Write new tests at the deepened module's interface. The **interface is the test surface**.
- Tests assert on observable outcomes through the interface, not internal state.
- Tests should survive internal refactors — they describe behaviour, not implementation. If a test has to change when the implementation changes, it's testing past the interface.
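The "remote but owned" category above, in sketch form. All names here (`InventoryPort`, `OrderIntake`, `InMemoryInventory`) are invented for illustration:

```typescript
// Port at the seam: everything a caller must know, nothing about transport.
interface InventoryPort {
  reserve(sku: string, qty: number): Promise<boolean>;
}

// Deep module: owns the logic, takes the transport as an injected adapter.
class OrderIntake {
  constructor(private inventory: InventoryPort) {}

  async place(sku: string, qty: number): Promise<"placed" | "backordered"> {
    const reserved = await this.inventory.reserve(sku, qty);
    return reserved ? "placed" : "backordered";
  }
}

// Test adapter — the second adapter that makes the seam real.
// Production would add an HTTP/gRPC adapter satisfying the same port.
class InMemoryInventory implements InventoryPort {
  constructor(private stock: Record<string, number>) {}

  async reserve(sku: string, qty: number): Promise<boolean> {
    if ((this.stock[sku] ?? 0) < qty) return false;
    this.stock[sku] -= qty;
    return true;
  }
}
```

Tests exercise `OrderIntake` through its interface with `InMemoryInventory`; only the adapter varies between test and production.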
+44
.agents/skills/improve-codebase-architecture/INTERFACE-DESIGN.md
# Interface Design

When the user wants to explore alternative interfaces for a chosen deepening candidate, use this parallel sub-agent pattern. Based on "Design It Twice" (Ousterhout) — your first idea is unlikely to be the best.

Uses the vocabulary in [LANGUAGE.md](LANGUAGE.md) — **module**, **interface**, **seam**, **adapter**, **leverage**.

## Process

### 1. Frame the problem space

Before spawning sub-agents, write a user-facing explanation of the problem space for the chosen candidate:

- The constraints any new interface would need to satisfy
- The dependencies it would rely on, and which category they fall into (see [DEEPENING.md](DEEPENING.md))
- A rough illustrative code sketch to ground the constraints — not a proposal, just a way to make the constraints concrete

Show this to the user, then immediately proceed to Step 2. The user reads and thinks while the sub-agents work in parallel.

### 2. Spawn sub-agents

Spawn 3+ sub-agents in parallel using the Agent tool. Each must produce a **radically different** interface for the deepened module.

Prompt each sub-agent with a separate technical brief (file paths, coupling details, dependency category from [DEEPENING.md](DEEPENING.md), what sits behind the seam). The brief is independent of the user-facing problem-space explanation in Step 1. Give each agent a different design constraint:

- Agent 1: "Minimise the interface — aim for 1–3 entry points max. Maximise leverage per entry point."
- Agent 2: "Maximise flexibility — support many use cases and extension."
- Agent 3: "Optimise for the most common caller — make the default case trivial."
- Agent 4 (if applicable): "Design around ports & adapters for cross-seam dependencies."

Include both [LANGUAGE.md](LANGUAGE.md) vocabulary and CONTEXT.md vocabulary in the brief so each sub-agent names things consistently with the architecture language and the project's domain language.

Each sub-agent outputs:

1. Interface (types, methods, params — plus invariants, ordering, error modes)
2. Usage example showing how callers use it
3. What the implementation hides behind the seam
4. Dependency strategy and adapters (see [DEEPENING.md](DEEPENING.md))
5. Trade-offs — where leverage is high, where it's thin

### 3. Present and compare

Present designs sequentially so the user can absorb each one, then compare them in prose. Contrast by **depth** (leverage at the interface), **locality** (where change concentrates), and **seam placement**.

After comparing, give your own recommendation: which design you think is strongest and why. If elements from different designs would combine well, propose a hybrid. Be opinionated — the user wants a strong read, not a menu.
+53
.agents/skills/improve-codebase-architecture/LANGUAGE.md
# Language

Shared vocabulary for every suggestion this skill makes. Use these terms exactly — don't substitute "component," "service," "API," or "boundary." Consistent language is the whole point.

## Terms

**Module**
Anything with an interface and an implementation. Deliberately scale-agnostic — applies equally to a function, class, package, or tier-spanning slice.
_Avoid_: unit, component, service.

**Interface**
Everything a caller must know to use the module correctly. Includes the type signature, but also invariants, ordering constraints, error modes, required configuration, and performance characteristics.
_Avoid_: API, signature (too narrow — those refer only to the type-level surface).

**Implementation**
What's inside a module — its body of code. Distinct from **Adapter**: a thing can be a small adapter with a large implementation (a Postgres repo) or a large adapter with a small implementation (an in-memory fake). Reach for "adapter" when the seam is the topic; "implementation" otherwise.

**Depth**
Leverage at the interface — the amount of behaviour a caller (or test) can exercise per unit of interface they have to learn. A module is **deep** when a large amount of behaviour sits behind a small interface. A module is **shallow** when the interface is nearly as complex as the implementation.

**Seam** _(from Michael Feathers)_
A place where you can alter behaviour without editing in that place. The *location* at which a module's interface lives. Choosing where to put the seam is its own design decision, distinct from what goes behind it.
_Avoid_: boundary (overloaded with DDD's bounded context).

**Adapter**
A concrete thing that satisfies an interface at a seam. Describes *role* (what slot it fills), not substance (what's inside).

**Leverage**
What callers get from depth. More capability per unit of interface they have to learn. One implementation pays back across N call sites and M tests.

**Locality**
What maintainers get from depth. Change, bugs, knowledge, and verification concentrate at one place rather than spreading across callers. Fix once, fixed everywhere.

## Principles

- **Depth is a property of the interface, not the implementation.** A deep module can be internally composed of small, mockable, swappable parts — they just aren't part of the interface. A module can have **internal seams** (private to its implementation, used by its own tests) as well as the **external seam** at its interface.
- **The deletion test.** Imagine deleting the module. If complexity vanishes, the module wasn't hiding anything (it was a pass-through). If complexity reappears across N callers, the module was earning its keep.
- **The interface is the test surface.** Callers and tests cross the same seam. If you want to test *past* the interface, the module is probably the wrong shape.
- **One adapter means a hypothetical seam. Two adapters means a real one.** Don't introduce a seam unless something actually varies across it.

## Relationships

- A **Module** has exactly one **Interface** (the surface it presents to callers and tests).
- **Depth** is a property of a **Module**, measured against its **Interface**.
- A **Seam** is where a **Module**'s **Interface** lives.
- An **Adapter** sits at a **Seam** and satisfies the **Interface**.
- **Depth** produces **Leverage** for callers and **Locality** for maintainers.

## Rejected framings

- **Depth as ratio of implementation-lines to interface-lines** (Ousterhout): rewards padding the implementation. We use depth-as-leverage instead.
- **"Interface" as the TypeScript `interface` keyword or a class's public methods**: too narrow — interface here includes every fact a caller must know.
- **"Boundary"**: overloaded with DDD's bounded context. Say **seam** or **interface**.
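The deletion test from the principles above, in miniature. Hypothetical code, invented for illustration:

```typescript
type User = { id: string; email: string };

// Shallow: the interface restates the implementation. Delete it and the
// complexity vanishes — callers could just call db.get(id) themselves.
const findUser = (db: Map<string, User>, id: string): User | undefined => db.get(id);

// Deep: a small interface hiding trimming, case-folding and validation.
// Delete it and that logic reappears across every caller — it earns its keep.
function normaliseEmail(raw: string): string {
  const email = raw.trim().toLowerCase();
  if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(email)) {
    throw new Error(`invalid email: ${raw}`);
  }
  return email;
}

console.log(normaliseEmail("  Ada@Example.COM ")); // → "ada@example.com"
```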
+71
.agents/skills/improve-codebase-architecture/SKILL.md
---
name: improve-codebase-architecture
description: Find deepening opportunities in a codebase, informed by the domain language in CONTEXT.md and the decisions in docs/adr/. Use when the user wants to improve architecture, find refactoring opportunities, consolidate tightly-coupled modules, or make a codebase more testable and AI-navigable.
---

# Improve Codebase Architecture

Surface architectural friction and propose **deepening opportunities** — refactors that turn shallow modules into deep ones. The aim is testability and AI-navigability.

## Glossary

Use these terms exactly in every suggestion. Consistent language is the point — don't drift into "component," "service," "API," or "boundary." Full definitions in [LANGUAGE.md](LANGUAGE.md).

- **Module** — anything with an interface and an implementation (function, class, package, slice).
- **Interface** — everything a caller must know to use the module: types, invariants, error modes, ordering, config. Not just the type signature.
- **Implementation** — the code inside.
- **Depth** — leverage at the interface: a lot of behaviour behind a small interface. **Deep** = high leverage. **Shallow** = interface nearly as complex as the implementation.
- **Seam** — where an interface lives; a place behaviour can be altered without editing in place. (Use this, not "boundary.")
- **Adapter** — a concrete thing satisfying an interface at a seam.
- **Leverage** — what callers get from depth.
- **Locality** — what maintainers get from depth: change, bugs, knowledge concentrated in one place.

Key principles (see [LANGUAGE.md](LANGUAGE.md) for the full list):

- **Deletion test**: imagine deleting the module. If complexity vanishes, it was a pass-through. If complexity reappears across N callers, it was earning its keep.
- **The interface is the test surface.**
- **One adapter = hypothetical seam. Two adapters = real seam.**

This skill is _informed_ by the project's domain model. The domain language gives names to good seams; ADRs record decisions the skill should not re-litigate.

## Process

### 1. Explore

Read the project's domain glossary and any ADRs in the area you're touching first.

Then use the Agent tool with `subagent_type=Explore` to walk the codebase. Don't follow rigid heuristics — explore organically and note where you experience friction:

- Where does understanding one concept require bouncing between many small modules?
- Where are modules **shallow** — interface nearly as complex as the implementation?
- Where have pure functions been extracted just for testability, but the real bugs hide in how they're called (no **locality**)?
- Where do tightly-coupled modules leak across their seams?
- Which parts of the codebase are untested, or hard to test through their current interface?

Apply the **deletion test** to anything you suspect is shallow: would deleting it concentrate complexity, or just move it? A "yes, concentrates" is the signal you want.

### 2. Present candidates

Present a numbered list of deepening opportunities. For each candidate:

- **Files** — which files/modules are involved
- **Problem** — why the current architecture is causing friction
- **Solution** — plain English description of what would change
- **Benefits** — explained in terms of locality and leverage, and also in how tests would improve

**Use CONTEXT.md vocabulary for the domain, and [LANGUAGE.md](LANGUAGE.md) vocabulary for the architecture.** If `CONTEXT.md` defines "Order," talk about "the Order intake module" — not "the FooBarHandler," and not "the Order service."

**ADR conflicts**: if a candidate contradicts an existing ADR, only surface it when the friction is real enough to warrant revisiting the ADR. Mark it clearly (e.g. _"contradicts ADR-0007 — but worth reopening because…"_). Don't list every theoretical refactor an ADR forbids.

Do NOT propose interfaces yet. Ask the user: "Which of these would you like to explore?"

### 3. Grilling loop

Once the user picks a candidate, drop into a grilling conversation. Walk the design tree with them — constraints, dependencies, the shape of the deepened module, what sits behind the seam, what tests survive.

Side effects happen inline as decisions crystallise:

- **Naming a deepened module after a concept not in `CONTEXT.md`?** Add the term to `CONTEXT.md` — same discipline as `/grill-with-docs` (see [CONTEXT-FORMAT.md](../grill-with-docs/CONTEXT-FORMAT.md)). Create the file lazily if it doesn't exist.
- **Sharpening a fuzzy term during the conversation?** Update `CONTEXT.md` right there.
- **User rejects the candidate with a load-bearing reason?** Offer an ADR, framed as: _"Want me to record this as an ADR so future architecture reviews don't re-suggest it?"_ Only offer when the reason would actually be needed by a future explorer to avoid re-suggesting the same thing — skip ephemeral reasons ("not worth it right now") and self-evident ones. See [ADR-FORMAT.md](../grill-with-docs/ADR-FORMAT.md).
- **Want to explore alternative interfaces for the deepened module?** See [INTERFACE-DESIGN.md](INTERFACE-DESIGN.md).
+121
.agents/skills/setup-matt-pocock-skills/SKILL.md
··· 1 + --- 2 + name: setup-matt-pocock-skills 3 + description: Sets up an `## Agent skills` block in AGENTS.md/CLAUDE.md and `docs/agents/` so the engineering skills know this repo's issue tracker (GitHub or local markdown), triage label vocabulary, and domain doc layout. Run before first use of `to-issues`, `to-prd`, `triage`, `diagnose`, `tdd`, `improve-codebase-architecture`, or `zoom-out` — or if those skills appear to be missing context about the issue tracker, triage labels, or domain docs. 4 + disable-model-invocation: true 5 + --- 6 + 7 + # Setup Matt Pocock's Skills 8 + 9 + Scaffold the per-repo configuration that the engineering skills assume: 10 + 11 + - **Issue tracker** — where issues live (GitHub by default; local markdown is also supported out of the box) 12 + - **Triage labels** — the strings used for the five canonical triage roles 13 + - **Domain docs** — where `CONTEXT.md` and ADRs live, and the consumer rules for reading them 14 + 15 + This is a prompt-driven skill, not a deterministic script. Explore, present what you found, confirm with the user, then write. 16 + 17 + ## Process 18 + 19 + ### 1. Explore 20 + 21 + Look at the current repo to understand its starting state. Read whatever exists; don't assume: 22 + 23 + - `git remote -v` and `.git/config` — is this a GitHub repo? Which one? 24 + - `AGENTS.md` and `CLAUDE.md` at the repo root — does either exist? Is there already an `## Agent skills` section in either? 25 + - `CONTEXT.md` and `CONTEXT-MAP.md` at the repo root 26 + - `docs/adr/` and any `src/*/docs/adr/` directories 27 + - `docs/agents/` — does this skill's prior output already exist? 28 + - `.scratch/` — sign that a local-markdown issue tracker convention is already in use 29 + 30 + ### 2. Present findings and ask 31 + 32 + Summarise what's present and what's missing. Then walk the user through the three decisions **one at a time** — present a section, get the user's answer, then move to the next. Don't dump all three at once. 
33 + 34 + Assume the user does not know what these terms mean. Each section starts with a short explainer (what it is, why these skills need it, what changes if they pick differently). Then show the choices and the default. 35 + 36 + **Section A — Issue tracker.** 37 + 38 + > Explainer: The "issue tracker" is where issues live for this repo. Skills like `to-issues`, `triage`, `to-prd`, and `qa` read from and write to it — they need to know whether to call `gh issue create`, write a markdown file under `.scratch/`, or follow some other workflow you describe. Pick the place you actually track work for this repo. 39 + 40 + Default posture: these skills were designed for GitHub. If a `git remote` points at GitHub, propose that. If a `git remote` points at GitLab (`gitlab.com` or a self-hosted host), propose GitLab. Otherwise (or if the user prefers), offer: 41 + 42 + - **GitHub** — issues live in the repo's GitHub Issues (uses the `gh` CLI) 43 + - **GitLab** — issues live in the repo's GitLab Issues (uses the [`glab`](https://gitlab.com/gitlab-org/cli) CLI) 44 + - **Local markdown** — issues live as files under `.scratch/<feature>/` in this repo (good for solo projects or repos without a remote) 45 + - **Other** (Jira, Linear, etc.) — ask the user to describe the workflow in one paragraph; the skill will record it as freeform prose 46 + 47 + **Section B — Triage label vocabulary.** 48 + 49 + > Explainer: When the `triage` skill processes an incoming issue, it moves it through a state machine — needs evaluation, waiting on reporter, ready for an AFK agent to pick up, ready for a human, or won't fix. To do that, it needs to apply labels (or the equivalent in your issue tracker) that match strings *you've actually configured*. If your repo already uses different label names (e.g. `bug:triage` instead of `needs-triage`), map them here so the skill applies the right ones instead of creating duplicates. 
50 + 51 + The five canonical roles: 52 + 53 + - `needs-triage` — maintainer needs to evaluate 54 + - `needs-info` — waiting on reporter 55 + - `ready-for-agent` — fully specified, AFK-ready (an agent can pick it up with no human context) 56 + - `ready-for-human` — needs human implementation 57 + - `wontfix` — will not be actioned 58 + 59 + Default: each role's string equals its name. Ask the user if they want to override any. If their issue tracker has no existing labels, the defaults are fine. 60 + 61 + **Section C — Domain docs.** 62 + 63 + > Explainer: Some skills (`improve-codebase-architecture`, `diagnose`, `tdd`) read a `CONTEXT.md` file to learn the project's domain language, and `docs/adr/` for past architectural decisions. They need to know whether the repo has one global context or multiple (e.g. a monorepo with separate frontend/backend contexts) so they look in the right place. 64 + 65 + Confirm the layout: 66 + 67 + - **Single-context** — one `CONTEXT.md` + `docs/adr/` at the repo root. Most repos are this. 68 + - **Multi-context** — `CONTEXT-MAP.md` at the root pointing to per-context `CONTEXT.md` files (typically a monorepo). 69 + 70 + ### 3. Confirm and edit 71 + 72 + Show the user a draft of: 73 + 74 + - The `## Agent skills` block to add to whichever of `CLAUDE.md` / `AGENTS.md` is being edited (see step 4 for selection rules) 75 + - The contents of `docs/agents/issue-tracker.md`, `docs/agents/triage-labels.md`, `docs/agents/domain.md` 76 + 77 + Let them edit before writing. 78 + 79 + ### 4. Write 80 + 81 + **Pick the file to edit:** 82 + 83 + - If `CLAUDE.md` exists, edit it. 84 + - Else if `AGENTS.md` exists, edit it. 85 + - If neither exists, ask the user which one to create — don't pick for them. 86 + 87 + Never create `AGENTS.md` when `CLAUDE.md` already exists (or vice versa) — always edit the one that's already there. 
88 + 89 + If an `## Agent skills` block already exists in the chosen file, update its contents in-place rather than appending a duplicate. Don't overwrite user edits to the surrounding sections. 90 + 91 + The block: 92 + 93 + ```markdown 94 + ## Agent skills 95 + 96 + ### Issue tracker 97 + 98 + [one-line summary of where issues are tracked]. See `docs/agents/issue-tracker.md`. 99 + 100 + ### Triage labels 101 + 102 + [one-line summary of the label vocabulary]. See `docs/agents/triage-labels.md`. 103 + 104 + ### Domain docs 105 + 106 + [one-line summary of layout — "single-context" or "multi-context"]. See `docs/agents/domain.md`. 107 + ``` 108 + 109 + Then write the three docs files using the seed templates in this skill folder as a starting point: 110 + 111 + - [issue-tracker-github.md](./issue-tracker-github.md) — GitHub issue tracker 112 + - [issue-tracker-gitlab.md](./issue-tracker-gitlab.md) — GitLab issue tracker 113 + - [issue-tracker-local.md](./issue-tracker-local.md) — local-markdown issue tracker 114 + - [triage-labels.md](./triage-labels.md) — label mapping 115 + - [domain.md](./domain.md) — domain doc consumer rules + layout 116 + 117 + For "other" issue trackers, write `docs/agents/issue-tracker.md` from scratch using the user's description. 118 + 119 + ### 5. Done 120 + 121 + Tell the user the setup is complete and which engineering skills will now read from these files. Mention they can edit `docs/agents/*.md` directly later — re-running this skill is only necessary if they want to switch issue trackers or restart from scratch.
+51
.agents/skills/setup-matt-pocock-skills/domain.md
··· 1 + # Domain Docs 2 + 3 + How the engineering skills should consume this repo's domain documentation when exploring the codebase. 4 + 5 + ## Before exploring, read these 6 + 7 + - **`CONTEXT.md`** at the repo root, or 8 + - **`CONTEXT-MAP.md`** at the repo root if it exists — it points at one `CONTEXT.md` per context. Read each one relevant to the topic. 9 + - **`docs/adr/`** — read ADRs that touch the area you're about to work in. In multi-context repos, also check `src/<context>/docs/adr/` for context-scoped decisions. 10 + 11 + If any of these files don't exist, **proceed silently**. Don't flag their absence; don't suggest creating them upfront. The producer skill (`/grill-with-docs`) creates them lazily when terms or decisions actually get resolved. 12 + 13 + ## File structure 14 + 15 + Single-context repo (most repos): 16 + 17 + ``` 18 + / 19 + ├── CONTEXT.md 20 + ├── docs/adr/ 21 + │ ├── 0001-event-sourced-orders.md 22 + │ └── 0002-postgres-for-write-model.md 23 + └── src/ 24 + ``` 25 + 26 + Multi-context repo (presence of `CONTEXT-MAP.md` at the root): 27 + 28 + ``` 29 + / 30 + ├── CONTEXT-MAP.md 31 + ├── docs/adr/ ← system-wide decisions 32 + └── src/ 33 + ├── ordering/ 34 + │ ├── CONTEXT.md 35 + │ └── docs/adr/ ← context-specific decisions 36 + └── billing/ 37 + ├── CONTEXT.md 38 + └── docs/adr/ 39 + ``` 40 + 41 + ## Use the glossary's vocabulary 42 + 43 + When your output names a domain concept (in an issue title, a refactor proposal, a hypothesis, a test name), use the term as defined in `CONTEXT.md`. Don't drift to synonyms the glossary explicitly avoids. 44 + 45 + If the concept you need isn't in the glossary yet, that's a signal — either you're inventing language the project doesn't use (reconsider) or there's a real gap (note it for `/grill-with-docs`). 
46 + 47 + ## Flag ADR conflicts 48 + 49 + If your output contradicts an existing ADR, surface it explicitly rather than silently overriding: 50 + 51 + > _Contradicts ADR-0007 (event-sourced orders) — but worth reopening because…_
+22
.agents/skills/setup-matt-pocock-skills/issue-tracker-github.md
··· 1 + # Issue tracker: GitHub 2 + 3 + Issues and PRDs for this repo live as GitHub issues. Use the `gh` CLI for all operations. 4 + 5 + ## Conventions 6 + 7 + - **Create an issue**: `gh issue create --title "..." --body "..."`. Use a heredoc for multi-line bodies. 8 + - **Read an issue**: `gh issue view <number> --comments`. Add `--json body,comments,labels` with a `--jq` filter when you need machine-readable comments and labels. 9 + - **List issues**: `gh issue list --state open --json number,title,body,labels,comments --jq '[.[] | {number, title, body, labels: [.labels[].name], comments: [.comments[].body]}]'` with appropriate `--label` and `--state` filters. 10 + - **Comment on an issue**: `gh issue comment <number> --body "..."` 11 + - **Apply / remove labels**: `gh issue edit <number> --add-label "..."` / `--remove-label "..."` 12 + - **Close**: `gh issue close <number> --comment "..."` 13 + 14 + Infer the repo from `git remote -v` — `gh` does this automatically when run inside a clone. 15 + 16 + ## When a skill says "publish to the issue tracker" 17 + 18 + Create a GitHub issue. 19 + 20 + ## When a skill says "fetch the relevant ticket" 21 + 22 + Run `gh issue view <number> --comments`.
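A minimal sketch of the create-and-publish path under these conventions. The title, label, and body text are placeholders, and the `gh` call is guarded so the script still runs when `gh` is missing or unauthenticated:

```shell
#!/usr/bin/env sh
set -eu

# Compose a multi-line issue body with a heredoc, per the convention above.
body_file=$(mktemp)
cat > "$body_file" <<'EOF'
## What to build

Order intake accepts a single line item end-to-end.

## Acceptance criteria

- [ ] POST /orders returns 201
EOF

# Publish only when gh is present and authenticated; otherwise just report.
if command -v gh >/dev/null 2>&1 && gh auth status >/dev/null 2>&1; then
  gh issue create \
    --title "Order intake: single line item" \
    --body-file "$body_file" \
    --label "needs-triage"
else
  echo "gh unavailable; body staged at $body_file"
fi
```

`--body-file` avoids shell-quoting problems with large bodies; the heredoc keeps the markdown readable in the script itself.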
+23
.agents/skills/setup-matt-pocock-skills/issue-tracker-gitlab.md
··· 1 + # Issue tracker: GitLab 2 + 3 + Issues and PRDs for this repo live as GitLab issues. Use the [`glab`](https://gitlab.com/gitlab-org/cli) CLI for all operations. 4 + 5 + ## Conventions 6 + 7 + - **Create an issue**: `glab issue create --title "..." --description "..."`. Use a heredoc for multi-line descriptions. Pass `--description -` to open an editor. 8 + - **Read an issue**: `glab issue view <number> --comments`. Use `-F json` for machine-readable output. 9 + - **List issues**: `glab issue list --state opened -F json` with appropriate `--label` filters. Note that GitLab uses `opened` (not `open`) for the state value. 10 + - **Comment on an issue**: `glab issue note <number> --message "..."`. GitLab calls comments "notes". 11 + - **Apply / remove labels**: `glab issue update <number> --label "..."` / `--unlabel "..."`. Multiple labels can be comma-separated or passed by repeating the flag. 12 + - **Close**: `glab issue close <number>`. `glab issue close` does not accept a closing comment, so post the explanation first with `glab issue note <number> --message "..."`, then close. 13 + - **Merge requests**: GitLab calls PRs "merge requests". Use `glab mr create`, `glab mr view`, `glab mr note`, etc. — the same shape as `gh pr ...` with `mr` in place of `pr` and `note`/`--message` in place of `comment`/`--body`. 14 + 15 + Infer the repo from `git remote -v` — `glab` does this automatically when run inside a clone. 16 + 17 + ## When a skill says "publish to the issue tracker" 18 + 19 + Create a GitLab issue. 20 + 21 + ## When a skill says "fetch the relevant ticket" 22 + 23 + Run `glab issue view <number> --comments`.
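The note-then-close convention above can be sketched as a short script. The issue number and message are placeholders, and the `glab` calls are guarded so the sketch degrades gracefully without the CLI or auth:

```shell
#!/usr/bin/env sh
set -eu

# "Close with an explanation" is two steps on GitLab: note first, then close.
issue=42                                               # placeholder number
message="Fixed by the order intake refactor; closing."

if command -v glab >/dev/null 2>&1 && glab auth status >/dev/null 2>&1; then
  glab issue note "$issue" --message "$message"
  glab issue close "$issue"
else
  echo "glab unavailable; would note then close issue $issue"
fi
```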
+19
.agents/skills/setup-matt-pocock-skills/issue-tracker-local.md
··· 1 + # Issue tracker: Local Markdown 2 + 3 + Issues and PRDs for this repo live as markdown files in `.scratch/`. 4 + 5 + ## Conventions 6 + 7 + - One feature per directory: `.scratch/<feature-slug>/` 8 + - The PRD is `.scratch/<feature-slug>/PRD.md` 9 + - Implementation issues are `.scratch/<feature-slug>/issues/<NN>-<slug>.md`, numbered from `01` 10 + - Triage state is recorded as a `Status:` line near the top of each issue file (see `triage-labels.md` for the role strings) 11 + - Comments and conversation history append to the bottom of the file under a `## Comments` heading 12 + 13 + ## When a skill says "publish to the issue tracker" 14 + 15 + Create a new file under `.scratch/<feature-slug>/` (creating the directory if needed). 16 + 17 + ## When a skill says "fetch the relevant ticket" 18 + 19 + Read the file at the referenced path. The user will normally pass the path or the issue number directly.
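Under these conventions, publishing a first issue for a hypothetical `order-intake` feature looks roughly like this (the slug, title, and body are illustrative):

```shell
#!/usr/bin/env sh
set -eu

# Scaffold .scratch/<feature-slug>/ with a first numbered issue file.
feature=".scratch/order-intake"
mkdir -p "$feature/issues"

cat > "$feature/issues/01-create-order-endpoint.md" <<'EOF'
# Create order endpoint

Status: needs-triage

## What to build

POST /orders accepts a single line item end-to-end.

## Comments
EOF

ls "$feature/issues"   # lists: 01-create-order-endpoint.md
```

The `Status:` line carries the triage role string, and later conversation appends under `## Comments`, matching the conventions above.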
+15
.agents/skills/setup-matt-pocock-skills/triage-labels.md
··· 1 + # Triage Labels 2 + 3 + The skills speak in terms of five canonical triage roles. This file maps those roles to the actual label strings used in this repo's issue tracker. 4 + 5 + | Label in mattpocock/skills | Label in our tracker | Meaning | 6 + | -------------------------- | -------------------- | ---------------------------------------- | 7 + | `needs-triage` | `needs-triage` | Maintainer needs to evaluate this issue | 8 + | `needs-info` | `needs-info` | Waiting on reporter for more information | 9 + | `ready-for-agent` | `ready-for-agent` | Fully specified, ready for an AFK agent | 10 + | `ready-for-human` | `ready-for-human` | Requires human implementation | 11 + | `wontfix` | `wontfix` | Will not be actioned | 12 + 13 + When a skill mentions a role (e.g. "apply the AFK-ready triage label"), use the corresponding label string from this table. 14 + 15 + Edit the right-hand column to match whatever vocabulary you actually use.
+109
.agents/skills/tdd/SKILL.md
··· 1 + --- 2 + name: tdd 3 + description: Test-driven development with red-green-refactor loop. Use when user wants to build features or fix bugs using TDD, mentions "red-green-refactor", wants integration tests, or asks for test-first development. 4 + --- 5 + 6 + # Test-Driven Development 7 + 8 + ## Philosophy 9 + 10 + **Core principle**: Tests should verify behavior through public interfaces, not implementation details. Code can change entirely; tests shouldn't. 11 + 12 + **Good tests** are integration-style: they exercise real code paths through public APIs. They describe _what_ the system does, not _how_ it does it. A good test reads like a specification - "user can checkout with valid cart" tells you exactly what capability exists. These tests survive refactors because they don't care about internal structure. 13 + 14 + **Bad tests** are coupled to implementation. They mock internal collaborators, test private methods, or verify through external means (like querying a database directly instead of using the interface). The warning sign: your test breaks when you refactor, but behavior hasn't changed. If you rename an internal function and tests fail, those tests were testing implementation, not behavior. 15 + 16 + See [tests.md](tests.md) for examples and [mocking.md](mocking.md) for mocking guidelines. 17 + 18 + ## Anti-Pattern: Horizontal Slices 19 + 20 + **DO NOT write all tests first, then all implementation.** This is "horizontal slicing" - treating RED as "write all tests" and GREEN as "write all code." 
21 + 22 + This produces **crap tests**: 23 + 24 + - Tests written in bulk test _imagined_ behavior, not _actual_ behavior 25 + - You end up testing the _shape_ of things (data structures, function signatures) rather than user-facing behavior 26 + - Tests become insensitive to real changes - they pass when behavior breaks, fail when behavior is fine 27 + - You outrun your headlights, committing to test structure before understanding the implementation 28 + 29 + **Correct approach**: Vertical slices via tracer bullets. One test → one implementation → repeat. Each test responds to what you learned from the previous cycle. Because you just wrote the code, you know exactly what behavior matters and how to verify it. 30 + 31 + ``` 32 + WRONG (horizontal): 33 + RED: test1, test2, test3, test4, test5 34 + GREEN: impl1, impl2, impl3, impl4, impl5 35 + 36 + RIGHT (vertical): 37 + RED→GREEN: test1→impl1 38 + RED→GREEN: test2→impl2 39 + RED→GREEN: test3→impl3 40 + ... 41 + ``` 42 + 43 + ## Workflow 44 + 45 + ### 1. Planning 46 + 47 + When exploring the codebase, use the project's domain glossary so that test names and interface vocabulary match the project's language, and respect ADRs in the area you're touching. 48 + 49 + Before writing any code: 50 + 51 + - [ ] Confirm with user what interface changes are needed 52 + - [ ] Confirm with user which behaviors to test (prioritize) 53 + - [ ] Identify opportunities for [deep modules](deep-modules.md) (small interface, deep implementation) 54 + - [ ] Design interfaces for [testability](interface-design.md) 55 + - [ ] List the behaviors to test (not implementation steps) 56 + - [ ] Get user approval on the plan 57 + 58 + Ask: "What should the public interface look like? Which behaviors are most important to test?" 59 + 60 + **You can't test everything.** Confirm with the user exactly which behaviors matter most. Focus testing effort on critical paths and complex logic, not every possible edge case. 61 + 62 + ### 2. 
Tracer Bullet 63 + 64 + Write ONE test that confirms ONE thing about the system: 65 + 66 + ``` 67 + RED: Write test for first behavior → test fails 68 + GREEN: Write minimal code to pass → test passes 69 + ``` 70 + 71 + This is your tracer bullet - proves the path works end-to-end. 72 + 73 + ### 3. Incremental Loop 74 + 75 + For each remaining behavior: 76 + 77 + ``` 78 + RED: Write next test → fails 79 + GREEN: Minimal code to pass → passes 80 + ``` 81 + 82 + Rules: 83 + 84 + - One test at a time 85 + - Only enough code to pass current test 86 + - Don't anticipate future tests 87 + - Keep tests focused on observable behavior 88 + 89 + ### 4. Refactor 90 + 91 + After all tests pass, look for [refactor candidates](refactoring.md): 92 + 93 + - [ ] Extract duplication 94 + - [ ] Deepen modules (move complexity behind simple interfaces) 95 + - [ ] Apply SOLID principles where natural 96 + - [ ] Consider what new code reveals about existing code 97 + - [ ] Run tests after each refactor step 98 + 99 + **Never refactor while RED.** Get to GREEN first. 100 + 101 + ## Checklist Per Cycle 102 + 103 + ``` 104 + [ ] Test describes behavior, not implementation 105 + [ ] Test uses public interface only 106 + [ ] Test would survive internal refactor 107 + [ ] Code is minimal for this test 108 + [ ] No speculative features added 109 + ```
+33
.agents/skills/tdd/deep-modules.md
··· 1 + # Deep Modules 2 + 3 + From "A Philosophy of Software Design": 4 + 5 + **Deep module** = small interface + lots of implementation 6 + 7 + ``` 8 + ┌─────────────────────┐ 9 + │ Small Interface │ ← Few methods, simple params 10 + ├─────────────────────┤ 11 + │ │ 12 + │ │ 13 + │ Deep Implementation│ ← Complex logic hidden 14 + │ │ 15 + │ │ 16 + └─────────────────────┘ 17 + ``` 18 + 19 + **Shallow module** = large interface + little implementation (avoid) 20 + 21 + ``` 22 + ┌─────────────────────────────────┐ 23 + │ Large Interface │ ← Many methods, complex params 24 + ├─────────────────────────────────┤ 25 + │ Thin Implementation │ ← Just passes through 26 + └─────────────────────────────────┘ 27 + ``` 28 + 29 + When designing interfaces, ask: 30 + 31 + - Can I reduce the number of methods? 32 + - Can I simplify the parameters? 33 + - Can I hide more complexity inside?
+31
.agents/skills/tdd/interface-design.md
··· 1 + # Interface Design for Testability 2 + 3 + Good interfaces make testing natural: 4 + 5 + 1. **Accept dependencies, don't create them** 6 + 7 + ```typescript 8 + // Testable 9 + function processOrder(order, paymentGateway) {} 10 + 11 + // Hard to test 12 + function processOrder(order) { 13 + const gateway = new StripeGateway(); 14 + } 15 + ``` 16 + 17 + 2. **Return results, don't produce side effects** 18 + 19 + ```typescript 20 + // Testable 21 + function calculateDiscount(cart): Discount {} 22 + 23 + // Hard to test 24 + function applyDiscount(cart): void { 25 + cart.total -= discount; 26 + } 27 + ``` 28 + 29 + 3. **Small surface area** 30 + - Fewer methods = fewer tests needed 31 + - Fewer params = simpler test setup
+59
.agents/skills/tdd/mocking.md
··· 1 + # When to Mock 2 + 3 + Mock at **system boundaries** only: 4 + 5 + - External APIs (payment, email, etc.) 6 + - Databases (sometimes - prefer test DB) 7 + - Time/randomness 8 + - File system (sometimes) 9 + 10 + Don't mock: 11 + 12 + - Your own classes/modules 13 + - Internal collaborators 14 + - Anything you control 15 + 16 + ## Designing for Mockability 17 + 18 + At system boundaries, design interfaces that are easy to mock: 19 + 20 + **1. Use dependency injection** 21 + 22 + Pass external dependencies in rather than creating them internally: 23 + 24 + ```typescript 25 + // Easy to mock 26 + function processPayment(order, paymentClient) { 27 + return paymentClient.charge(order.total); 28 + } 29 + 30 + // Hard to mock 31 + function processPayment(order) { 32 + const client = new StripeClient(process.env.STRIPE_KEY); 33 + return client.charge(order.total); 34 + } 35 + ``` 36 + 37 + **2. Prefer SDK-style interfaces over generic fetchers** 38 + 39 + Create specific functions for each external operation instead of one generic function with conditional logic: 40 + 41 + ```typescript 42 + // GOOD: Each function is independently mockable 43 + const api = { 44 + getUser: (id) => fetch(`/users/${id}`), 45 + getOrders: (userId) => fetch(`/users/${userId}/orders`), 46 + createOrder: (data) => fetch('/orders', { method: 'POST', body: data }), 47 + }; 48 + 49 + // BAD: Mocking requires conditional logic inside the mock 50 + const api = { 51 + fetch: (endpoint, options) => fetch(endpoint, options), 52 + }; 53 + ``` 54 + 55 + The SDK approach means: 56 + - Each mock returns one specific shape 57 + - No conditional logic in test setup 58 + - Easier to see which endpoints a test exercises 59 + - Type safety per endpoint
+10
.agents/skills/tdd/refactoring.md
··· 1 + # Refactor Candidates 2 + 3 + After TDD cycle, look for: 4 + 5 + - **Duplication** → Extract function/class 6 + - **Long methods** → Break into private helpers (keep tests on public interface) 7 + - **Shallow modules** → Combine or deepen 8 + - **Feature envy** → Move logic to where data lives 9 + - **Primitive obsession** → Introduce value objects 10 + - **Existing code** the new code reveals as problematic
+61
.agents/skills/tdd/tests.md
··· 1 + # Good and Bad Tests 2 + 3 + ## Good Tests 4 + 5 + **Integration-style**: Test through real interfaces, not mocks of internal parts. 6 + 7 + ```typescript 8 + // GOOD: Tests observable behavior 9 + test("user can checkout with valid cart", async () => { 10 + const cart = createCart(); 11 + cart.add(product); 12 + const result = await checkout(cart, paymentMethod); 13 + expect(result.status).toBe("confirmed"); 14 + }); 15 + ``` 16 + 17 + Characteristics: 18 + 19 + - Tests behavior users/callers care about 20 + - Uses public API only 21 + - Survives internal refactors 22 + - Describes WHAT, not HOW 23 + - One logical assertion per test 24 + 25 + ## Bad Tests 26 + 27 + **Implementation-detail tests**: Coupled to internal structure. 28 + 29 + ```typescript 30 + // BAD: Tests implementation details 31 + test("checkout calls paymentService.process", async () => { 32 + const mockPayment = jest.mock(paymentService); 33 + await checkout(cart, payment); 34 + expect(mockPayment.process).toHaveBeenCalledWith(cart.total); 35 + }); 36 + ``` 37 + 38 + Red flags: 39 + 40 + - Mocking internal collaborators 41 + - Testing private methods 42 + - Asserting on call counts/order 43 + - Test breaks when refactoring without behavior change 44 + - Test name describes HOW not WHAT 45 + - Verifying through external means instead of interface 46 + 47 + ```typescript 48 + // BAD: Bypasses interface to verify 49 + test("createUser saves to database", async () => { 50 + await createUser({ name: "Alice" }); 51 + const row = await db.query("SELECT * FROM users WHERE name = ?", ["Alice"]); 52 + expect(row).toBeDefined(); 53 + }); 54 + 55 + // GOOD: Verifies through interface 56 + test("createUser makes user retrievable", async () => { 57 + const user = await createUser({ name: "Alice" }); 58 + const retrieved = await getUser(user.id); 59 + expect(retrieved.name).toBe("Alice"); 60 + }); 61 + ```
+81
.agents/skills/to-issues/SKILL.md
··· 1 + --- 2 + name: to-issues 3 + description: Break a plan, spec, or PRD into independently-grabbable issues on the project issue tracker using tracer-bullet vertical slices. Use when user wants to convert a plan into issues, create implementation tickets, or break down work into issues. 4 + --- 5 + 6 + # To Issues 7 + 8 + Break a plan into independently-grabbable issues using vertical slices (tracer bullets). 9 + 10 + The issue tracker and triage label vocabulary should have been provided to you — run `/setup-matt-pocock-skills` if not. 11 + 12 + ## Process 13 + 14 + ### 1. Gather context 15 + 16 + Work from whatever is already in the conversation context. If the user passes an issue reference (issue number, URL, or path) as an argument, fetch it from the issue tracker and read its full body and comments. 17 + 18 + ### 2. Explore the codebase (optional) 19 + 20 + If you have not already explored the codebase, do so to understand the current state of the code. Issue titles and descriptions should use the project's domain glossary vocabulary, and respect ADRs in the area you're touching. 21 + 22 + ### 3. Draft vertical slices 23 + 24 + Break the plan into **tracer bullet** issues. Each issue is a thin vertical slice that cuts through ALL integration layers end-to-end, NOT a horizontal slice of one layer. 25 + 26 + Slices may be 'HITL' or 'AFK'. HITL slices require human interaction, such as an architectural decision or a design review. AFK slices can be implemented and merged without human interaction. Prefer AFK over HITL where possible. 27 + 28 + <vertical-slice-rules> 29 + - Each slice delivers a narrow but COMPLETE path through every layer (schema, API, UI, tests) 30 + - A completed slice is demoable or verifiable on its own 31 + - Prefer many thin slices over few thick ones 32 + </vertical-slice-rules> 33 + 34 + ### 4. Quiz the user 35 + 36 + Present the proposed breakdown as a numbered list. 
For each slice, show: 37 + 38 + - **Title**: short descriptive name 39 + - **Type**: HITL / AFK 40 + - **Blocked by**: which other slices (if any) must complete first 41 + - **User stories covered**: which user stories this addresses (if the source material has them) 42 + 43 + Ask the user: 44 + 45 + - Does the granularity feel right? (too coarse / too fine) 46 + - Are the dependency relationships correct? 47 + - Should any slices be merged or split further? 48 + - Are the correct slices marked as HITL and AFK? 49 + 50 + Iterate until the user approves the breakdown. 51 + 52 + ### 5. Publish the issues to the issue tracker 53 + 54 + For each approved slice, publish a new issue to the issue tracker. Use the issue body template below. Apply the `needs-triage` triage label so each issue enters the normal triage flow. 55 + 56 + Publish issues in dependency order (blockers first) so you can reference real issue identifiers in the "Blocked by" field. 57 + 58 + <issue-template> 59 + ## Parent 60 + 61 + A reference to the parent issue on the issue tracker (if the source was an existing issue, otherwise omit this section). 62 + 63 + ## What to build 64 + 65 + A concise description of this vertical slice. Describe the end-to-end behavior, not layer-by-layer implementation. 66 + 67 + ## Acceptance criteria 68 + 69 + - [ ] Criterion 1 70 + - [ ] Criterion 2 71 + - [ ] Criterion 3 72 + 73 + ## Blocked by 74 + 75 + - A reference to the blocking ticket (if any) 76 + 77 + Or "None - can start immediately" if no blockers. 78 + 79 + </issue-template> 80 + 81 + Do NOT close or modify any parent issue.
+74
.agents/skills/to-prd/SKILL.md
··· 1 + --- 2 + name: to-prd 3 + description: Turn the current conversation context into a PRD and publish it to the project issue tracker. Use when user wants to create a PRD from the current context. 4 + --- 5 + 6 + This skill takes the current conversation context and codebase understanding and produces a PRD. Do NOT interview the user — just synthesize what you already know. 7 + 8 + The issue tracker and triage label vocabulary should have been provided to you — run `/setup-matt-pocock-skills` if not. 9 + 10 + ## Process 11 + 12 + 1. Explore the repo to understand the current state of the codebase, if you haven't already. Use the project's domain glossary vocabulary throughout the PRD, and respect any ADRs in the area you're touching. 13 + 14 + 2. Sketch out the major modules you will need to build or modify to complete the implementation. Actively look for opportunities to extract deep modules that can be tested in isolation. 15 + 16 + A deep module (as opposed to a shallow module) is one which encapsulates a lot of functionality in a simple, testable interface which rarely changes. 17 + 18 + Check with the user that these modules match their expectations. Check with the user which modules they want tests written for. 19 + 20 + 3. Write the PRD using the template below, then publish it to the project issue tracker. Apply the `needs-triage` triage label so it enters the normal triage flow. 21 + 22 + <prd-template> 23 + 24 + ## Problem Statement 25 + 26 + The problem that the user is facing, from the user's perspective. 27 + 28 + ## Solution 29 + 30 + The solution to the problem, from the user's perspective. 31 + 32 + ## User Stories 33 + 34 + A LONG, numbered list of user stories. Each user story should be in the format of: 35 + 36 + 1. As an <actor>, I want a <feature>, so that <benefit> 37 + 38 + <user-story-example> 39 + 1. 
As a mobile bank customer, I want to see the balance on my accounts, so that I can make better-informed decisions about my spending 40 + </user-story-example> 41 + 42 + This list of user stories should be extremely extensive and cover all aspects of the feature. 43 + 44 + ## Implementation Decisions 45 + 46 + A list of implementation decisions that were made. This can include: 47 + 48 + - The modules that will be built/modified 49 + - The interfaces of those modules that will be modified 50 + - Technical clarifications from the developer 51 + - Architectural decisions 52 + - Schema changes 53 + - API contracts 54 + - Specific interactions 55 + 56 + Do NOT include specific file paths or code snippets. They may end up being outdated very quickly. 57 + 58 + ## Testing Decisions 59 + 60 + A list of testing decisions that were made. Include: 61 + 62 + - A description of what makes a good test (only test external behavior, not implementation details) 63 + - Which modules will be tested 64 + - Prior art for the tests (i.e. similar types of tests in the codebase) 65 + 66 + ## Out of Scope 67 + 68 + A description of the things that are out of scope for this PRD. 69 + 70 + ## Further Notes 71 + 72 + Any further notes about the feature. 73 + 74 + </prd-template>
+7
.agents/skills/zoom-out/SKILL.md
··· 1 + --- 2 + name: zoom-out 3 + description: Tell the agent to zoom out and give broader context or a higher-level perspective. Use when you're unfamiliar with a section of code or need to understand how it fits into the bigger picture. 4 + disable-model-invocation: true 5 + --- 6 + 7 + I don't know this area of code well. Go up a layer of abstraction. Give me a map of all the relevant modules and callers, using the project's domain glossary vocabulary.
+20 -3
AGENTS.md
··· 1 - # Vite+ Rules For This Repo 1 + # Rules For This Project 2 2 3 - This project uses Vite+ and the `vp` CLI. 3 + ## Script Instructions 4 4 5 - ## Repo-Specific Rules 5 + This project uses Vite+ and the `vp` CLI. 6 6 7 7 - Do not use `pnpm`, `npm`, or Yarn directly for installs, updates, or package execution. 8 8 - Do not use raw tool CLIs like `vite`, `vitest`, `oxlint`, or `oxfmt`; use the matching `vp` command instead. 9 9 - Use `vp run <script>` when you need a package script that shares a name with a built-in Vite+ command. 10 10 - Use `vpx` instead of `npx` for one-off package binaries. 11 11 - Import JavaScript modules from `vite-plus` rather than `vite` or `vitest`. 12 + - Never start a dev server (e.g. `vp dev` or `vp run dev`). Always use an existing server, e.g. localhost:3000. 12 13 13 14 ## CI Notes 14 15 ··· 28 29 29 30 ## Skill Loading 30 31 32 + This project uses both skills installed in `.agents/skills` as standard and the TanStack Intent CLI, which loads skills directly from packages. 33 + 31 34 Before substantial work: 32 35 33 36 - Skill check: run `vpx @tanstack/intent@latest list`, or use skills already listed in context. ··· 35 38 - Monorepos: when working across packages, run the skill check from the workspace root and prefer the local skill for the package being changed. 36 39 - Multiple matches: prefer the most specific local skill for the package or concern you are changing; load additional skills only when the task spans multiple packages or concerns. <!-- intent-skills:end --> 41 + 42 + ## Agent skills 43 + 44 + ### Issue tracker 45 + 46 + Issues and PRDs are tracked in GitHub Issues for `DogPawHat/preloading-example`. See `docs/agents/issue-tracker.md`. 47 + 48 + ### Triage labels 49 + 50 + This repo uses the default Matt Pocock skills triage label vocabulary. See `docs/agents/triage-labels.md`. 51 + 52 + ### Domain docs 53 + 54 + This repo uses a single-context domain docs layout. See `docs/agents/domain.md`.
+46
docs/agents/domain.md
··· 1 + # Domain Docs 2 + 3 + How the engineering skills should consume this repo's domain documentation when exploring the codebase. 4 + 5 + ## Before exploring, read these 6 + 7 + - **`CONTEXT.md`** at the repo root for project domain language and codebase concepts. 8 + - **`docs/adr/`** for ADRs that touch the area you're about to work in. 9 + - **`PRODUCT.md`** and **`DESIGN.md`** through the `impeccable` workflow when the task involves product, brand, UX, or UI design. 10 + 11 + If any of these files don't exist, **proceed silently**. Don't flag their absence; don't suggest creating them upfront. The producer skill (`/grill-with-docs`) creates domain docs lazily when terms or decisions actually get resolved. The `impeccable` skill owns product and design context. 12 + 13 + ## File structure 14 + 15 + Single-context repo: 16 + 17 + ``` 18 + / 19 + ├── CONTEXT.md 20 + ├── PRODUCT.md 21 + ├── DESIGN.md 22 + ├── docs/adr/ 23 + │ ├── 0001-example-decision.md 24 + │ └── 0002-example-decision.md 25 + └── src/ 26 + ``` 27 + 28 + ## Use each source for its job 29 + 30 + Use `CONTEXT.md` for domain terms, architectural vocabulary, invariants, and codebase concepts. Do not duplicate product strategy or visual design guidance into it. 31 + 32 + Use `PRODUCT.md` for product intent, users, brand, tone, anti-references, and strategic principles. 33 + 34 + Use `DESIGN.md` for visual system details, UI conventions, styling decisions, components, typography, color, and interaction patterns. 35 + 36 + ## Use the glossary's vocabulary 37 + 38 + When your output names a domain concept (in an issue title, a refactor proposal, a hypothesis, a test name), use the term as defined in `CONTEXT.md`. Don't drift to synonyms the glossary explicitly avoids. 39 + 40 + If the concept you need isn't in the glossary yet, that's a signal — either you're inventing language the project doesn't use (reconsider) or there's a real gap (note it for `/grill-with-docs`). 
41 + 42 + ## Flag ADR conflicts 43 + 44 + If your output contradicts an existing ADR, surface it explicitly rather than silently overriding: 45 + 46 + > _Contradicts ADR-0007 (event-sourced orders) — but worth reopening because..._
+22
docs/agents/issue-tracker.md
··· 1 + # Issue tracker: GitHub 2 + 3 + Issues and PRDs for this repo live as GitHub issues. Use the `gh` CLI for all operations. 4 + 5 + ## Conventions 6 + 7 + - **Create an issue**: `gh issue create --title "..." --body "..."`. Use a heredoc for multi-line bodies. 8 + - **Read an issue**: `gh issue view <number> --comments`; add `--json comments,labels` and filter with `jq` when you need specific comments or the labels. 9 + - **List issues**: `gh issue list --state open --json number,title,body,labels,comments --jq '[.[] | {number, title, body, labels: [.labels[].name], comments: [.comments[].body]}]'` with appropriate `--label` and `--state` filters. 10 + - **Comment on an issue**: `gh issue comment <number> --body "..."` 11 + - **Apply / remove labels**: `gh issue edit <number> --add-label "..."` / `--remove-label "..."` 12 + - **Close**: `gh issue close <number> --comment "..."` 13 + 14 + Infer the repo from `git remote -v` — `gh` does this automatically when run inside a clone. 15 + 16 + ## When a skill says "publish to the issue tracker" 17 + 18 + Create a GitHub issue. 19 + 20 + ## When a skill says "fetch the relevant ticket" 21 + 22 + Run `gh issue view <number> --comments`.
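The heredoc convention for multi-line bodies can be sketched as follows. The title, body text, and label are placeholders, and the `gh` call is shown commented out because it needs an authenticated session:

```shell
# Build a multi-line issue body with a quoted heredoc, so markdown
# (headings, checklists) passes through without shell expansion.
body=$(cat <<'EOF'
## What to build

A thin vertical slice through schema, API, and UI.

## Acceptance criteria

- [ ] Criterion 1
EOF
)

# gh issue create --title "Example slice" --body "$body" --label "needs-triage"
echo "$body"
```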
+15
docs/agents/triage-labels.md
··· 1 + # Triage Labels 2 + 3 + The skills speak in terms of five canonical triage roles. This file maps those roles to the actual label strings used in this repo's issue tracker. 4 + 5 + | Label in mattpocock/skills | Label in our tracker | Meaning | 6 + | -------------------------- | -------------------- | ---------------------------------------- | 7 + | `needs-triage` | `needs-triage` | Maintainer needs to evaluate this issue | 8 + | `needs-info` | `needs-info` | Waiting on reporter for more information | 9 + | `ready-for-agent` | `ready-for-agent` | Fully specified, ready for an AFK agent | 10 + | `ready-for-human` | `ready-for-human` | Requires human implementation | 11 + | `wontfix` | `wontfix` | Will not be actioned | 12 + 13 + When a skill mentions a role (e.g. "apply the AFK-ready triage label"), use the corresponding label string from this table. 14 + 15 + Edit the right-hand column to match whatever vocabulary you actually use.
+59
skills-lock.json
··· 1 + { 2 + "version": 1, 3 + "skills": { 4 + "diagnose": { 5 + "source": "mattpocock/skills", 6 + "sourceType": "github", 7 + "skillPath": "skills/engineering/diagnose/SKILL.md", 8 + "computedHash": "15939a26f86edec2d4862042b8564e5a062cb81d04e047a0cea6305c8830b5f5" 9 + }, 10 + "grill-me": { 11 + "source": "mattpocock/skills", 12 + "sourceType": "github", 13 + "skillPath": "skills/productivity/grill-me/SKILL.md", 14 + "computedHash": "784f0dbb7403b0f00324bce9a112f715342777a0daee7bbb7385f9c6f0a170ea" 15 + }, 16 + "grill-with-docs": { 17 + "source": "mattpocock/skills", 18 + "sourceType": "github", 19 + "skillPath": "skills/engineering/grill-with-docs/SKILL.md", 20 + "computedHash": "ea225e10406dbb6a18ba97af34e4e9313424438195cd16e657635f841b6e7a2f" 21 + }, 22 + "improve-codebase-architecture": { 23 + "source": "mattpocock/skills", 24 + "sourceType": "github", 25 + "skillPath": "skills/engineering/improve-codebase-architecture/SKILL.md", 26 + "computedHash": "c77b86b4332919499608f9af1880074e1fec65a59b95c70c27a9f39cd137865e" 27 + }, 28 + "setup-matt-pocock-skills": { 29 + "source": "mattpocock/skills", 30 + "sourceType": "github", 31 + "skillPath": "skills/engineering/setup-matt-pocock-skills/SKILL.md", 32 + "computedHash": "3a32f8f1ed8160c9d286a2aabe88ee9b884c6f3f88a7a6c47b7d5d552c959587" 33 + }, 34 + "tdd": { 35 + "source": "mattpocock/skills", 36 + "sourceType": "github", 37 + "skillPath": "skills/engineering/tdd/SKILL.md", 38 + "computedHash": "15a7b5e36383ebadb2dec5e586679e55e9663d292da418926b8da6fc0ef27d84" 39 + }, 40 + "to-issues": { 41 + "source": "mattpocock/skills", 42 + "sourceType": "github", 43 + "skillPath": "skills/engineering/to-issues/SKILL.md", 44 + "computedHash": "73a91f30784523aa59ec9b02769576ebfc738e2cd5ad8f6441076031f0a5d5ac" 45 + }, 46 + "to-prd": { 47 + "source": "mattpocock/skills", 48 + "sourceType": "github", 49 + "skillPath": "skills/engineering/to-prd/SKILL.md", 50 + "computedHash": 
"fd8c259f9c44eff08e29a1a2fc71a806a3568d279a55387a361f78620b10f2aa" 51 + }, 52 + "zoom-out": { 53 + "source": "mattpocock/skills", 54 + "sourceType": "github", 55 + "skillPath": "skills/engineering/zoom-out/SKILL.md", 56 + "computedHash": "8357aeaece3b709c442eab67e64b86844e05e2f1ea95b109565eba50b6def36e" 57 + } 58 + } 59 + }