.agents/skills/critique/SKILL.md
---
name: critique
description: Evaluate design from a UX perspective, assessing visual hierarchy, information architecture, emotional resonance, cognitive load, and overall quality with quantitative scoring, persona-based testing, automated anti-pattern detection, and actionable feedback. Use when the user asks to review, critique, evaluate, or give feedback on a design or component.
version: 2.1.1
user-invocable: true
argument-hint: "[area (feature, page, component...)]"
---

## STEPS

### Step 1: Preparation

Invoke /impeccable, which contains design principles, anti-patterns, and the **Context Gathering Protocol**. Follow the protocol before proceeding. If no design context exists yet, you MUST run /impeccable teach first. Additionally gather: what the interface is trying to accomplish.

### Step 2: Gather Assessments

Launch two independent assessments. **Neither may see the other's output**, to avoid bias.

You SHOULD delegate each assessment to a separate sub-agent for independence. Use your environment's agent-spawning mechanism (e.g., Claude Code's `Agent` tool, or Codex's subagent spawning). Sub-agents should return their findings as structured text. Do NOT output findings to the user yet.

If sub-agents are not available in the current environment, complete each assessment sequentially, writing findings to internal notes before proceeding.

**Tab isolation**: When browser automation is available, each assessment MUST create its own new tab. Never reuse an existing tab, even if one is already open at the correct URL. This prevents the two assessments from interfering with each other's page state.

#### Assessment A: LLM Design Review

Read the relevant source files (HTML, CSS, JS/TS) and, if browser automation is available, visually inspect the live page. **Create a new tab** for this; do not reuse existing tabs.
After navigation, label the tab by setting the document title:

```javascript
document.title = "[LLM] " + document.title;
```

Think like a design director. Evaluate:

**AI Slop Detection (CRITICAL)**: Does this look like every other AI-generated interface? Review against ALL **DON'T** guidelines in the impeccable skill. Check for AI color palette, gradient text, dark glows, glassmorphism, hero metric layouts, identical card grids, generic fonts, and all other tells. **The test**: If someone said "AI made this," would you believe them immediately?

**Holistic Design Review**: visual hierarchy (eye flow, primary action clarity), information architecture (structure, grouping, cognitive load), emotional resonance (does it match brand and audience?), discoverability (are interactive elements obvious?), composition (balance, whitespace, rhythm), typography (hierarchy, readability, font choices), color (purposeful use, cohesion, accessibility), states & edge cases (empty, loading, error, success), microcopy (clarity, tone, helpfulness).

**Cognitive Load** (consult [cognitive-load](reference/cognitive-load.md)):

- Run the 8-item cognitive load checklist. Report failure count: 0-1 = low (good), 2-3 = moderate, 4+ = critical.
- Count visible options at each decision point. If >4, flag it.
- Check for progressive disclosure: is complexity revealed only when needed?

**Emotional Journey**:

- What emotion does this interface evoke? Is that intentional?
- **Peak-end rule**: Is the most intense moment positive? Does the experience end well?
- **Emotional valleys**: Check for anxiety spikes at high-stakes moments (payment, delete, commit). Are there design interventions (progress indicators, reassurance copy, undo options)?

**Nielsen's Heuristics** (consult [heuristics-scoring](reference/heuristics-scoring.md)):
Score each of the 10 heuristics 0-4.
This scoring will be presented in the report.

Return structured findings covering: AI slop verdict, heuristic scores, cognitive load assessment, what's working (2-3 items), priority issues (3-5 with what/why/fix), minor observations, and provocative questions.

#### Assessment B: Automated Detection

Run the bundled deterministic detector, which flags 25 specific patterns (AI slop tells + general design quality).

**CLI scan**:

```bash
npx impeccable --json [--fast] [target]
```

- Pass HTML/JSX/TSX/Vue/Svelte files or directories as `[target]` (anything with markup). Do not pass CSS-only files.
- For URLs, skip the CLI scan (it requires Puppeteer). Use browser visualization instead.
- For large directories (200+ scannable files), use `--fast` (regex-only, skips jsdom).
- For 500+ files, narrow the scope or ask the user.
- Exit code 0 = clean, 2 = findings.

**Browser visualization** (when browser automation tools are available AND the target is a viewable page):

The overlay is a **visual aid for the user**. It highlights issues directly in their browser. Do NOT scroll through the page to screenshot overlays. Instead, read the console output to get the results programmatically.

1. **Start the live detection server**:
   ```bash
   npx impeccable live &
   ```
   Note the port printed to stdout (auto-assigned). Use `--port=PORT` to pin a specific port.
2. **Create a new tab** and navigate to the page (use the dev server URL for local files, or the direct URL). Do not reuse existing tabs.
3. **Label the tab** via `javascript_tool` so the user can distinguish it:
   ```javascript
   document.title = "[Human] " + document.title;
   ```
4. **Scroll to top**: make sure the page is at the very top before injection.
5. 
**Inject** via `javascript_tool` (replace PORT with the port from step 1):
   ```javascript
   const s = document.createElement("script");
   s.src = "http://localhost:PORT/detect.js";
   document.head.appendChild(s);
   ```
6. Wait 2-3 seconds for the detector to render overlays.
7. **Read results from console** using `read_console_messages` with pattern `impeccable`. The detector logs all findings with the `[impeccable]` prefix. Do NOT scroll through the page to take screenshots of the overlays.
8. **Cleanup**: Stop the live server when done:
   ```bash
   npx impeccable live stop
   ```

For multi-view targets, inject on 3-5 representative pages. If injection fails, continue with CLI results only.

Return: CLI findings (JSON), browser console findings (if applicable), and any false positives noted.

### Step 3: Generate Combined Critique Report

Synthesize both assessments into a single report. Do NOT simply concatenate. Weave the findings together, noting where the LLM review and detector agree, where the detector caught issues the LLM missed, and where detector findings are false positives.

Structure your feedback as a design director would:

#### Design Health Score

> _Consult [heuristics-scoring](reference/heuristics-scoring.md)_

Present Nielsen's 10 heuristic scores as a table:

| #         | Heuristic                       | Score     | Key Issue                            |
| --------- | ------------------------------- | --------- | ------------------------------------ |
| 1         | Visibility of System Status     | ?         | [specific finding or "n/a" if solid] |
| 2         | Match System / Real World       | ?         |                                      |
| 3         | User Control and Freedom        | ?         |                                      |
| 4         | Consistency and Standards       | ?         |                                      |
| 5         | Error Prevention                | ?         |                                      |
| 6         | Recognition Rather Than Recall  | ?         |                                      |
| 7         | Flexibility and Efficiency      | ?         |                                      |
| 8         | Aesthetic and Minimalist Design | ?         |                                      |
| 9         | Error Recovery                  | ?         |                                      |
| 10        | Help and Documentation          | ?         |                                      |
| **Total** |                                 | **??/40** | **[Rating band]**                    |

Be honest with scores. A 4 means genuinely excellent. Most real interfaces score 20-32.

#### Anti-Patterns Verdict

**Start here.** Does this look AI-generated?

**LLM assessment**: Your own evaluation of AI slop tells. Cover overall aesthetic feel, layout sameness, generic composition, missed opportunities for personality.

**Deterministic scan**: Summarize what the automated detector found, with counts and file locations. Note any additional issues the detector caught that you missed, and flag any false positives.

**Visual overlays** (if browser was used): Tell the user that overlays are now visible in the **[Human]** tab in their browser, highlighting the detected issues. Summarize what the console output reported.

#### Overall Impression

A brief gut reaction: what works, what doesn't, and the single biggest opportunity.

#### What's Working

Highlight 2-3 things done well. Be specific about why they work.

#### Priority Issues

The 3-5 most impactful design problems, ordered by importance.

For each issue, tag with **P0-P3 severity** (consult [heuristics-scoring](reference/heuristics-scoring.md) for severity definitions):

- **[P?] What**: Name the problem clearly
- **Why it matters**: How this hurts users or undermines goals
- **Fix**: What to do about it (be concrete)
- **Suggested command**: Which command could address this (from: /animate, /quieter, /shape, /optimize, /adapt, /clarify, /layout, /distill, /delight, /audit, /harden, /polish, /bolder, /typeset, /critique, /colorize, /overdrive)

#### Persona Red Flags

> _Consult [personas](reference/personas.md)_

Auto-select 2-3 personas most relevant to this interface type (use the selection table in the reference). If `.github/copilot-instructions.md` contains a `## Design Context` section from `impeccable teach`, also generate 1-2 project-specific personas from the audience/brand info.

For each selected persona, walk through the primary user action and list specific red flags found:

**Alex (Power User)**: No keyboard shortcuts detected. Form requires 8 clicks for primary action. Forced modal onboarding. High abandonment risk.

**Jordan (First-Timer)**: Icon-only nav in sidebar. Technical jargon in error messages ("404 Not Found"). No visible help. Will abandon at step 2.

Be specific. Name the exact elements and interactions that fail each persona. Don't write generic persona descriptions; write what broke for them.

#### Minor Observations

Quick notes on smaller issues worth addressing.

#### Questions to Consider

Provocative questions that might unlock better solutions:

- "What if the primary action were more prominent?"
- "Does this need to feel this complex?"
- "What would a confident version of this look like?"

**Remember**:

- Be direct. Vague feedback wastes everyone's time.
- Be specific. "The submit button," not "some elements."
- Say what's wrong AND why it matters to users.
- Give concrete suggestions, not just "consider exploring..."
- Prioritize ruthlessly. If everything is important, nothing is.
- Don't soften criticism. Developers need honest feedback to ship great design.

### Step 4: Ask the User

**After presenting findings**, use targeted questions based on what was actually found. Ask the user directly to clarify what you cannot infer. These answers will shape the action plan.

Ask questions along these lines (adapt to the specific findings; do NOT ask generic questions):

1. **Priority direction**: Based on the issues found, ask which category matters most to the user right now. For example: "I found problems with visual hierarchy, color usage, and information overload. Which area should we tackle first?" Offer the top 2-3 issue categories as options.

2. **Design intent**: If the critique found a tonal mismatch, ask whether it was intentional. For example: "The interface feels clinical and corporate. Is that the intended tone, or should it feel warmer/bolder/more playful?" Offer 2-3 tonal directions as options based on what would fix the issues found.

3. **Scope**: Ask how much the user wants to take on. For example: "I found N issues. Want to address everything, or focus on the top 3?" Offer scope options like "Top 3 only", "All issues", "Critical issues only".

4. **Constraints** (optional; only ask if relevant): If the findings touch many areas, ask if anything is off-limits. For example: "Should any sections stay as-is?" This prevents the plan from touching things the user considers done.

**Rules for questions**:

- Every question must reference specific findings from the report. Never ask generic "who is your audience?" questions.
- Keep it to 2-4 questions maximum. Respect the user's time.
- Offer concrete options, not open-ended prompts.
- If findings are straightforward (e.g., only 1-2 clear issues), skip questions and go directly to Step 5.

### Step 5: Recommended Actions

**After receiving the user's answers**, present a prioritized action summary reflecting the user's priorities and scope from Step 4.

#### Action Summary

List recommended commands in priority order, based on the user's answers:

1. **`/command-name`**: Brief description of what to fix (specific context from critique findings)
2. **`/command-name`**: Brief description (specific context)
   ...

**Rules for recommendations**:

- Only recommend commands from: /animate, /quieter, /shape, /optimize, /adapt, /clarify, /layout, /distill, /delight, /audit, /harden, /polish, /bolder, /typeset, /critique, /colorize, /overdrive
- Order by the user's stated priorities first, then by impact
- Each item's description should carry enough context that the command knows what to focus on
- Map each Priority Issue to the appropriate command
- Skip commands that would address zero issues
- If the user chose a limited scope, only include items within that scope
- If the user marked areas as off-limits, exclude commands that would touch those areas
- End with `/polish` as the final step if any fixes were recommended

After presenting the summary, tell the user:

> You can ask me to run these one at a time, all at once, or in any order you prefer.
>
> Re-run `/critique` after fixes to see your score improve.
.agents/skills/critique/reference/cognitive-load.md
# Cognitive Load Assessment

Cognitive load is the total mental effort required to use an interface. Overloaded users make mistakes, get frustrated, and leave. This reference helps identify and fix cognitive overload.

---

## Three Types of Cognitive Load

### Intrinsic Load — The Task Itself

Complexity inherent to what the user is trying to do. You can't eliminate this, but you can structure it.

**Manage it by**:

- Breaking complex tasks into discrete steps
- Providing scaffolding (templates, defaults, examples)
- Progressive disclosure — show what's needed now, hide the rest
- Grouping related decisions together

### Extraneous Load — Bad Design

Mental effort caused by poor design choices. **Eliminate this ruthlessly** — it's pure waste.

**Common sources**:

- Confusing navigation that requires mental mapping
- Unclear labels that force users to guess meaning
- Visual clutter competing for attention
- Inconsistent patterns that prevent learning
- Unnecessary steps between user intent and result

### Germane Load — Learning Effort

Mental effort spent building understanding. This is _good_ cognitive load — it leads to mastery.

**Support it by**:

- Progressive disclosure that reveals complexity gradually
- Consistent patterns that reward learning
- Feedback that confirms correct understanding
- Onboarding that teaches through action, not walls of text

---

## Cognitive Load Checklist

Evaluate the interface against these 8 items:

- [ ] **Single focus**: Can the user complete their primary task without distraction from competing elements?
- [ ] **Chunking**: Is information presented in digestible groups (≤4 items per group)?
- [ ] **Grouping**: Are related items visually grouped together (proximity, borders, shared background)?
- [ ] **Visual hierarchy**: Is it immediately clear what's most important on the screen?
- [ ] **One thing at a time**: Can the user focus on a single decision before moving to the next?
- [ ] **Minimal choices**: Are decisions simplified (≤4 visible options at any decision point)?
- [ ] **Working memory**: Can the user act on the current screen without having to remember information from a previous one?
- [ ] **Progressive disclosure**: Is complexity revealed only when the user needs it?

**Scoring**: Count the failed items. 0–1 failures = low cognitive load (good). 2–3 = moderate (address soon). 4+ = high cognitive load (critical fix needed).

---

## The Working Memory Rule

**Humans can hold ≤4 items in working memory at once** (Miller's Law revised by Cowan, 2001).

At any decision point, count the number of distinct options, actions, or pieces of information a user must simultaneously consider:

- **≤4 items**: Within working memory limits — manageable
- **5–7 items**: Pushing the boundary — consider grouping or progressive disclosure
- **8+ items**: Overloaded — users will skip, misclick, or abandon

**Practical applications**:

- Navigation menus: ≤5 top-level items (group the rest under clear categories)
- Form sections: ≤4 fields visible per group before a visual break
- Action buttons: 1 primary, 1–2 secondary, group the rest in a menu
- Dashboard widgets: ≤4 key metrics visible without scrolling
- Pricing tiers: ≤3 options (more causes analysis paralysis)

---

## Common Cognitive Load Violations

### 1. The Wall of Options

**Problem**: Presenting 10+ choices at once with no hierarchy.
**Fix**: Group into categories, highlight recommended, use progressive disclosure.

### 2. The Memory Bridge

**Problem**: User must remember info from step 1 to complete step 3.
**Fix**: Keep relevant context visible, or repeat it where it's needed.

### 3. The Hidden Navigation

**Problem**: User must build a mental map of where things are.
**Fix**: Always show current location (breadcrumbs, active states, progress indicators).

### 4. The Jargon Barrier

**Problem**: Technical or domain language forces translation effort.
**Fix**: Use plain language. If domain terms are unavoidable, define them inline.

### 5. The Visual Noise Floor

**Problem**: Every element has the same visual weight — nothing stands out.
**Fix**: Establish clear hierarchy: one primary element, 2–3 secondary, everything else muted.

### 6. The Inconsistent Pattern

**Problem**: Similar actions work differently in different places.
**Fix**: Standardize interaction patterns. Same type of action = same type of UI.

### 7. The Multi-Task Demand

**Problem**: Interface requires processing multiple simultaneous inputs (reading + deciding + navigating).
**Fix**: Sequence the steps. Let the user do one thing at a time.

### 8. The Context Switch

**Problem**: User must jump between screens/tabs/modals to gather info for a single decision.
**Fix**: Co-locate the information needed for each decision. Reduce back-and-forth.
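The checklist's failure-count bands (0–1 low, 2–3 moderate, 4+ critical) can be sketched as a small helper. This is illustrative only; `cognitiveLoadBand` is a hypothetical name, not part of the detector or CLI:

```javascript
// Sketch: map the number of failed checklist items to the bands defined above.
// `cognitiveLoadBand` is a hypothetical helper name, not part of any shipped tool.
function cognitiveLoadBand(failures) {
  if (failures <= 1) return "low";      // 0-1 failures: good
  if (failures <= 3) return "moderate"; // 2-3: address soon
  return "critical";                    // 4+: critical fix needed
}

console.log(cognitiveLoadBand(1)); // "low"
console.log(cognitiveLoadBand(4)); // "critical"
```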
.agents/skills/critique/reference/heuristics-scoring.md
# Heuristics Scoring Guide

Score each of Nielsen's 10 Usability Heuristics on a 0–4 scale. Be honest — a 4 means genuinely excellent, not "good enough."

## Nielsen's 10 Heuristics

### 1. Visibility of System Status

Keep users informed about what's happening through timely, appropriate feedback.

**Check for**:

- Loading indicators during async operations
- Confirmation of user actions (save, submit, delete)
- Progress indicators for multi-step processes
- Current location in navigation (breadcrumbs, active states)
- Form validation feedback (inline, not just on submit)

**Scoring**:

| Score | Criteria |
|-------|----------|
| 0 | No feedback — user is guessing what happened |
| 1 | Rare feedback — most actions produce no visible response |
| 2 | Partial — some states communicated, major gaps remain |
| 3 | Good — most operations give clear feedback, minor gaps |
| 4 | Excellent — every action confirms, progress is always visible |

### 2. Match Between System and Real World

Speak the user's language. Follow real-world conventions. Information appears in natural, logical order.

**Check for**:

- Familiar terminology (no unexplained jargon)
- Logical information order matching user expectations
- Recognizable icons and metaphors
- Domain-appropriate language for the target audience
- Natural reading flow (left-to-right, top-to-bottom priority)

**Scoring**:

| Score | Criteria |
|-------|----------|
| 0 | Pure tech jargon, alien to users |
| 1 | Mostly confusing — requires domain expertise to navigate |
| 2 | Mixed — some plain language, some jargon leaks through |
| 3 | Mostly natural — occasional term needs context |
| 4 | Speaks the user's language fluently throughout |

### 3. User Control and Freedom

Users need a clear "emergency exit" from unwanted states without extended dialogue.

**Check for**:

- Undo/redo functionality
- Cancel buttons on forms and modals
- Clear navigation back to safety (home, previous)
- Easy way to clear filters, search, selections
- Escape from long or multi-step processes

**Scoring**:

| Score | Criteria |
|-------|----------|
| 0 | Users get trapped — no way out without refreshing |
| 1 | Difficult exits — must find obscure paths to escape |
| 2 | Some exits — main flows have escape, edge cases don't |
| 3 | Good control — users can exit and undo most actions |
| 4 | Full control — undo, cancel, back, and escape everywhere |

### 4. Consistency and Standards

Users shouldn't wonder whether different words, situations, or actions mean the same thing.

**Check for**:

- Consistent terminology throughout the interface
- Same actions produce same results everywhere
- Platform conventions followed (standard UI patterns)
- Visual consistency (colors, typography, spacing, components)
- Consistent interaction patterns (same gesture = same behavior)

**Scoring**:

| Score | Criteria |
|-------|----------|
| 0 | Inconsistent everywhere — feels like different products stitched together |
| 1 | Many inconsistencies — similar things look/behave differently |
| 2 | Partially consistent — main flows match, details diverge |
| 3 | Mostly consistent — occasional deviation, nothing confusing |
| 4 | Fully consistent — cohesive system, predictable behavior |

### 5. Error Prevention

Better than good error messages is a design that prevents problems in the first place.
**Check for**:

- Confirmation before destructive actions (delete, overwrite)
- Constraints preventing invalid input (date pickers, dropdowns)
- Smart defaults that reduce errors
- Clear labels that prevent misunderstanding
- Autosave and draft recovery

**Scoring**:

| Score | Criteria |
|-------|----------|
| 0 | Errors easy to make — no guardrails anywhere |
| 1 | Few safeguards — some inputs validated, most aren't |
| 2 | Partial prevention — common errors caught, edge cases slip |
| 3 | Good prevention — most error paths blocked proactively |
| 4 | Excellent — errors nearly impossible through smart constraints |

### 6. Recognition Rather Than Recall

Minimize memory load. Make objects, actions, and options visible or easily retrievable.

**Check for**:

- Visible options (not buried in hidden menus)
- Contextual help when needed (tooltips, inline hints)
- Recent items and history
- Autocomplete and suggestions
- Labels on icons (not icon-only navigation)

**Scoring**:

| Score | Criteria |
|-------|----------|
| 0 | Heavy memorization — users must remember paths and commands |
| 1 | Mostly recall — many hidden features, few visible cues |
| 2 | Some aids — main actions visible, secondary features hidden |
| 3 | Good recognition — most things discoverable, few memory demands |
| 4 | Everything discoverable — users never need to memorize |

### 7. Flexibility and Efficiency of Use

Accelerators — invisible to novices — speed up expert interaction.
**Check for**:

- Keyboard shortcuts for common actions
- Customizable interface elements
- Recent items and favorites
- Bulk/batch actions
- Power user features that don't complicate the basics

**Scoring**:

| Score | Criteria |
|-------|----------|
| 0 | One rigid path — no shortcuts or alternatives |
| 1 | Limited flexibility — few alternatives to the main path |
| 2 | Some shortcuts — basic keyboard support, limited bulk actions |
| 3 | Good accelerators — keyboard nav, some customization |
| 4 | Highly flexible — multiple paths, power features, customizable |

### 8. Aesthetic and Minimalist Design

Interfaces should not contain irrelevant or rarely needed information. Every element should serve a purpose.

**Check for**:

- Only necessary information visible at each step
- Clear visual hierarchy directing attention
- Purposeful use of color and emphasis
- No decorative clutter competing for attention
- Focused, uncluttered layouts

**Scoring**:

| Score | Criteria |
|-------|----------|
| 0 | Overwhelming — everything competes for attention equally |
| 1 | Cluttered — too much noise, hard to find what matters |
| 2 | Some clutter — main content clear, periphery noisy |
| 3 | Mostly clean — focused design, minor visual noise |
| 4 | Perfectly minimal — every element earns its pixel |

### 9. Help Users Recognize, Diagnose, and Recover from Errors

Error messages should use plain language, precisely indicate the problem, and constructively suggest a solution.
**Check for**:

- Plain language error messages (no error codes for users)
- Specific problem identification ("Email is missing @" not "Invalid input")
- Actionable recovery suggestions
- Errors displayed near the source of the problem
- Non-blocking error handling (don't wipe the form)

**Scoring**:

| Score | Criteria |
|-------|----------|
| 0 | Cryptic errors — codes, jargon, or no message at all |
| 1 | Vague errors — "Something went wrong" with no guidance |
| 2 | Clear but unhelpful — names the problem but not the fix |
| 3 | Clear with suggestions — identifies problem and offers next steps |
| 4 | Perfect recovery — pinpoints issue, suggests fix, preserves user work |

### 10. Help and Documentation

Even if the system is usable without docs, help should be easy to find, task-focused, and concise.

**Check for**:

- Searchable help or documentation
- Contextual help (tooltips, inline hints, guided tours)
- Task-focused organization (not feature-organized)
- Concise, scannable content
- Easy access without leaving current context

**Scoring**:

| Score | Criteria |
|-------|----------|
| 0 | No help available anywhere |
| 1 | Help exists but hard to find or irrelevant |
| 2 | Basic help — FAQ or docs exist, not contextual |
| 3 | Good documentation — searchable, mostly task-focused |
| 4 | Excellent contextual help — right info at the right moment |

---

## Score Summary

**Total possible**: 40 points (10 heuristics × 4 max)

| Score Range | Rating     | What It Means                                          |
| ----------- | ---------- | ------------------------------------------------------ |
| 36–40       | Excellent  | Minor polish only — ship it                            |
| 28–35       | Good       | Address weak areas, solid foundation                   |
| 20–27       | Acceptable | Significant improvements needed before users are happy |
| 12–19       | Poor       | Major UX overhaul required — core experience broken    |
| 0–11        | Critical   | Redesign needed — unusable in current state            |

---

## Issue Severity (P0–P3)

Tag each individual issue found during scoring with a priority level:

| Priority | Name     | Description                                | Action                                  |
| -------- | -------- | ------------------------------------------ | --------------------------------------- |
| **P0**   | Blocking | Prevents task completion entirely          | Fix immediately — this is a showstopper |
| **P1**   | Major    | Causes significant difficulty or confusion | Fix before release                      |
| **P2**   | Minor    | Annoyance, but workaround exists           | Fix in next pass                        |
| **P3**   | Polish   | Nice-to-fix, no real user impact           | Fix if time permits                     |

**Tip**: If you're unsure between two levels, ask: "Would a user contact support about this?" If yes, it's at least P1.
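The total-to-rating mapping in the Score Summary table can be sketched as a helper. This is illustrative only; `ratingBand` is a hypothetical name, not part of any shipped tool:

```javascript
// Sketch: map a 0-40 heuristics total to the rating bands in the Score Summary table.
// `ratingBand` is a hypothetical helper name, not part of any shipped tool.
function ratingBand(total) {
  if (total >= 36) return "Excellent";  // 36-40: minor polish only
  if (total >= 28) return "Good";       // 28-35: solid foundation
  if (total >= 20) return "Acceptable"; // 20-27: significant improvements needed
  if (total >= 12) return "Poor";       // 12-19: major UX overhaul required
  return "Critical";                    // 0-11: redesign needed
}

console.log(ratingBand(30)); // "Good"
```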
.agents/skills/critique/reference/personas.md
# Persona-Based Design Testing

Test the interface through the eyes of 5 distinct user archetypes. Each persona exposes different failure modes that a single "design director" perspective would miss.

**How to use**: Select 2–3 personas most relevant to the interface being critiqued. Walk through the primary user action as each persona. Report specific red flags — not generic concerns.

---

## 1. Impatient Power User — "Alex"

**Profile**: Expert with similar products. Expects efficiency, hates hand-holding. Will find shortcuts or leave.

**Behaviors**:

- Skips all onboarding and instructions
- Looks for keyboard shortcuts immediately
- Tries to bulk-select, batch-edit, and automate
- Gets frustrated by required steps that feel unnecessary
- Abandons if anything feels slow or patronizing

**Test Questions**:

- Can Alex complete the core task in under 60 seconds?
- Are there keyboard shortcuts for common actions?
- Can onboarding be skipped entirely?
- Do modals have keyboard dismiss (Esc)?
- Is there a "power user" path (shortcuts, bulk actions)?

**Red Flags** (report these specifically):

- Forced tutorials or unskippable onboarding
- No keyboard navigation for primary actions
- Slow animations that can't be skipped
- One-item-at-a-time workflows where batch would be natural
- Redundant confirmation steps for low-risk actions

---

## 2. Confused First-Timer — "Jordan"

**Profile**: Never used this type of product. Needs guidance at every step. Will abandon rather than figure it out.
42 + 43 + **Behaviors**: 44 + 45 + - Reads all instructions carefully 46 + - Hesitates before clicking anything unfamiliar 47 + - Looks for help or support constantly 48 + - Misunderstands jargon and abbreviations 49 + - Takes the most literal interpretation of any label 50 + 51 + **Test Questions**: 52 + 53 + - Is the first action obviously clear within 5 seconds? 54 + - Are all icons labeled with text? 55 + - Is there contextual help at decision points? 56 + - Does terminology assume prior knowledge? 57 + - Is there a clear "back" or "undo" at every step? 58 + 59 + **Red Flags** (report these specifically): 60 + 61 + - Icon-only navigation with no labels 62 + - Technical jargon without explanation 63 + - No visible help option or guidance 64 + - Ambiguous next steps after completing an action 65 + - No confirmation that an action succeeded 66 + 67 + --- 68 + 69 + ## 3. Accessibility-Dependent User — "Sam" 70 + 71 + **Profile**: Uses screen reader (VoiceOver/NVDA), keyboard-only navigation. May have low vision, motor impairment, or cognitive differences. 72 + 73 + **Behaviors**: 74 + 75 + - Tabs through the interface linearly 76 + - Relies on ARIA labels and heading structure 77 + - Cannot see hover states or visual-only indicators 78 + - Needs adequate color contrast (4.5:1 minimum) 79 + - May use browser zoom up to 200% 80 + 81 + **Test Questions**: 82 + 83 + - Can the entire primary flow be completed keyboard-only? 84 + - Are all interactive elements focusable with visible focus indicators? 85 + - Do images have meaningful alt text? 86 + - Is color contrast WCAG AA compliant (4.5:1 for text)? 87 + - Does the screen reader announce state changes (loading, success, errors)? 
88 + 89 + **Red Flags** (report these specifically): 90 + 91 + - Click-only interactions with no keyboard alternative 92 + - Missing or invisible focus indicators 93 + - Meaning conveyed by color alone (red = error, green = success) 94 + - Unlabeled form fields or buttons 95 + - Time-limited actions without extension option 96 + - Custom components that break screen reader flow 97 + 98 + --- 99 + 100 + ## 4. Deliberate Stress Tester — "Riley" 101 + 102 + **Profile**: Methodical user who pushes interfaces beyond the happy path. Tests edge cases, tries unexpected inputs, and probes for gaps in the experience. 103 + 104 + **Behaviors**: 105 + 106 + - Tests edge cases intentionally (empty states, long strings, special characters) 107 + - Submits forms with unexpected data (emoji, RTL text, very long values) 108 + - Tries to break workflows by navigating backwards, refreshing mid-flow, or opening in multiple tabs 109 + - Looks for inconsistencies between what the UI promises and what actually happens 110 + - Documents problems methodically 111 + 112 + **Test Questions**: 113 + 114 + - What happens at the edges (0 items, 1000 items, very long text)? 115 + - Do error states recover gracefully or leave the UI in a broken state? 116 + - What happens on refresh mid-workflow? Is state preserved? 117 + - Are there features that appear to work but produce broken results? 118 + - How does the UI handle unexpected input (emoji, special chars, paste from Excel)? 119 + 120 + **Red Flags** (report these specifically): 121 + 122 + - Features that appear to work but silently fail or produce wrong results 123 + - Error handling that exposes technical details or leaves UI in a broken state 124 + - Empty states that show nothing useful ("No results" with no guidance) 125 + - Workflows that lose user data on refresh or navigation 126 + - Inconsistent behavior between similar interactions in different parts of the UI 127 + 128 + --- 129 + 130 + ## 5. 
Distracted Mobile User — "Casey" 131 + 132 + **Profile**: Using phone one-handed on the go. Frequently interrupted. Possibly on a slow connection. 133 + 134 + **Behaviors**: 135 + 136 + - Uses thumb only — prefers bottom-of-screen actions 137 + - Gets interrupted mid-flow and returns later 138 + - Switches between apps frequently 139 + - Has limited attention span and low patience 140 + - Types as little as possible, prefers taps and selections 141 + 142 + **Test Questions**: 143 + 144 + - Are primary actions in the thumb zone (bottom half of screen)? 145 + - Is state preserved if the user leaves and returns? 146 + - Does it work on slow connections (3G)? 147 + - Can forms leverage autocomplete and smart defaults? 148 + - Are touch targets at least 44×44pt? 149 + 150 + **Red Flags** (report these specifically): 151 + 152 + - Important actions positioned at the top of the screen (unreachable by thumb) 153 + - No state persistence — progress lost on tab switch or interruption 154 + - Large text inputs required where selection would work 155 + - Heavy assets loading on every page (no lazy loading) 156 + - Tiny tap targets or targets too close together 157 + 158 + --- 159 + 160 + ## Selecting Personas 161 + 162 + Choose personas based on the interface type: 163 + 164 + | Interface Type | Primary Personas | Why | 165 + | ------------------------ | -------------------- | -------------------------------- | 166 + | Landing page / marketing | Jordan, Riley, Casey | First impressions, trust, mobile | 167 + | Dashboard / admin | Alex, Sam | Power users, accessibility | 168 + | E-commerce / checkout | Casey, Riley, Jordan | Mobile, edge cases, clarity | 169 + | Onboarding flow | Jordan, Casey | Confusion, interruption | 170 + | Data-heavy / analytics | Alex, Sam | Efficiency, keyboard nav | 171 + | Form-heavy / wizard | Jordan, Sam, Casey | Clarity, accessibility, mobile | 172 + 173 + --- 174 + 175 + ## Project-Specific Personas 176 + 177 + If `.github/copilot-instructions.md` 
contains a `## Design Context` section (generated by `impeccable teach`), derive 1–2 additional personas from the audience and brand information: 178 + 179 + 1. Read the target audience description 180 + 2. Identify the primary user archetype not covered by the 5 predefined personas 181 + 3. Create a persona following this template: 182 + 183 + ``` 184 + ### [Role] — "[Name]" 185 + 186 + **Profile**: [2-3 key characteristics derived from Design Context] 187 + 188 + **Behaviors**: [3-4 specific behaviors based on the described audience] 189 + 190 + **Red Flags**: [3-4 things that would alienate this specific user type] 191 + ``` 192 + 193 + Only generate project-specific personas when real Design Context data is available. Don't invent audience details — use the 5 predefined personas when no context exists.
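The 4.5:1 contrast floor cited for Sam can be checked mechanically rather than by eye. A minimal TypeScript sketch of the WCAG 2.x relative-luminance formula (function names here are illustrative, not part of the skill's API):

```typescript
// WCAG 2.x contrast ratio between two sRGB colors (0–255 channels).
// Helper names are illustrative; only the formula comes from the spec.
function channel(c: number): number {
  const s = c / 255;
  return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}

function luminance([r, g, b]: [number, number, number]): number {
  return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b);
}

function contrastRatio(
  fg: [number, number, number],
  bg: [number, number, number],
): number {
  const [hi, lo] = [luminance(fg), luminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}

// Black on white is the maximum possible ratio, 21:1.
console.log(contrastRatio([0, 0, 0], [255, 255, 255]).toFixed(1)); // "21.0"
// Mid-gray #767676 on white is ≈ 4.54:1 — just over the AA floor for body text.
console.log(contrastRatio([118, 118, 118], [255, 255, 255]) >= 4.5); // true
```

A check like this only covers solid foreground/background pairs; text over gradients or images still needs manual inspection.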
+5
skills-lock.json
··· 36 36 "sourceType": "github", 37 37 "computedHash": "4793ac377b2bc1a5831d9634ce882373350cc5d097fd631d192155266204ceb8" 38 38 }, 39 + "critique": { 40 + "source": "pbakaus/impeccable", 41 + "sourceType": "github", 42 + "computedHash": "977f6fc3aa1002ec095f649e1b7c4fa52ee08a447f19229062810a9323c5c342" 43 + }, 39 44 "delight": { 40 45 "source": "pbakaus/impeccable", 41 46 "sourceType": "github",
+44 -1
src/components/header.tsx
··· 1 1 import { Link, useRouterState } from "@tanstack/react-router"; 2 2 import { useQueryClient } from "@tanstack/react-query"; 3 + import { useEffect, useState } from "react"; 3 4 import { StatusDot } from "~/components/console/status-dot"; 4 5 5 6 interface NavItemProps { ··· 39 40 ); 40 41 } 41 42 43 + function ThemeToggle() { 44 + const [theme, setTheme] = useState<"system" | "light" | "dark">("system"); 45 + const [isClicked, setIsClicked] = useState(false); 46 + 47 + useEffect(() => { 48 + const saved = localStorage.getItem("theme") as "system" | "light" | "dark" | null; 49 + if (saved) setTheme(saved); 50 + }, []); 51 + 52 + useEffect(() => { 53 + const root = document.documentElement; 54 + if (theme === "system") { 55 + root.removeAttribute("data-theme"); 56 + localStorage.removeItem("theme"); 57 + } else { 58 + root.setAttribute("data-theme", theme); 59 + localStorage.setItem("theme", theme); 60 + } 61 + }, [theme]); 62 + 63 + const cycleTheme = () => { 64 + setIsClicked(true); 65 + setTimeout(() => setIsClicked(false), 150); 66 + setTheme((prev) => (prev === "system" ? "light" : prev === "light" ? "dark" : "system")); 67 + }; 68 + 69 + const icon = theme === "system" ? "◐" : theme === "light" ? "○" : "●"; 70 + 71 + return ( 72 + <button 73 + onClick={cycleTheme} 74 + className={`nav-link ml-auto ${isClicked ? "theme-toggle-clicked" : ""}`} 75 + title={`Theme: ${theme} (click to cycle)`} 76 + aria-label={`Current theme: ${theme}. 
Click to cycle.`} 77 + > 78 + <span className="text-(--accent-default) theme-toggle-icon">{icon}</span> 79 + <span className="hidden sm:inline">{theme}</span> 80 + </button> 81 + ); 82 + } 83 + 42 84 export default function Header() { 43 85 return ( 44 86 <header className="bg-(--bg-secondary) border-b border-(--border-default)"> ··· 49 91 <NavItem to="/intent-preloading" label="03_intent-preloading" preload="intent" /> 50 92 <NavItem to="/pagination" label="04_pagination" /> 51 93 <NavItem to="/filters" label="05_filters" /> 52 - <NavItem to="/debounced-preload-filters" label="06_debounced" /> 94 + <NavItem to="/debounced-preload-filters" label="06_debounced-filters" /> 95 + <ThemeToggle /> 53 96 </nav> 54 97 </header> 55 98 );
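One caveat with reading `localStorage` inside `useEffect` as `ThemeToggle` does: the first paint happens before the effect runs, so a saved non-system theme can briefly flash the default styling. A common mitigation (sketch only, not part of this diff) is to resolve and apply the saved theme in a tiny inline script in the document head, before the app bundle loads. The `resolveInitialTheme` helper below is hypothetical; the `"theme"` key matches what `ThemeToggle` writes:

```typescript
// Sketch: decide which data-theme attribute (if any) to apply before first
// paint, given the persisted localStorage value. "system" and unknown values
// fall through to null, meaning "leave the attribute off".
type Theme = "light" | "dark";

function resolveInitialTheme(saved: string | null): Theme | null {
  return saved === "light" || saved === "dark" ? saved : null;
}

// Intended use, inlined in the document <head>:
//   const t = resolveInitialTheme(localStorage.getItem("theme"));
//   if (t) document.documentElement.setAttribute("data-theme", t);
console.log(resolveInitialTheme("dark")); // "dark"
console.log(resolveInitialTheme("system")); // null
```

Validating the stored string here also guards against stale or corrupted values, which the `as`-cast in the component currently trusts.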
+25 -4
src/styles/global.css
··· 116 116 } 117 117 118 118 @layer components { 119 - /* Navigation link with left border accent */ 119 + /* Navigation link - terminal style */ 120 120 .nav-link { 121 121 display: flex; 122 122 align-items: center; 123 123 gap: 0.5rem; 124 124 padding: 0.5rem 1rem; 125 - border-left: 2px solid transparent; 126 125 font-size: var(--text-sm); 127 126 color: var(--text-secondary); 128 127 transition: all var(--duration-fast) var(--easing-default); 128 + position: relative; 129 + } 130 + 131 + .nav-link::before { 132 + content: ""; 133 + width: 0.75rem; 134 + text-align: right; 135 + color: var(--text-muted); 136 + font-weight: 400; 129 137 } 130 138 131 139 .nav-link:hover { 132 140 color: var(--accent-default); 133 - border-left-color: var(--accent-default); 134 141 background: var(--accent-subtle); 135 142 } 136 143 137 144 .nav-link-active { 138 145 color: var(--text-primary); 139 - border-left-color: var(--accent-default); 140 146 font-weight: 500; 147 + } 148 + 149 + .nav-link-active::before { 150 + content: ">"; 151 + color: var(--accent-default); 152 + } 153 + 154 + /* Theme toggle animation */ 155 + .theme-toggle-icon { 156 + display: inline-block; 157 + transition: transform var(--duration-fast) var(--easing-default); 158 + } 159 + 160 + .theme-toggle-clicked .theme-toggle-icon { 161 + transform: scale(1.3); 141 162 } 142 163 143 164 /* Example card for landing page */