The code and data behind xeiaso.net

feat: add ai-writing-tropes skill from tropes.fyi (#1164)

Comprehensive catalog of AI writing patterns to avoid, organized into
six categories: word choice, sentence structure, paragraph structure,
tone, formatting, and composition. Each category has its own reference
file with examples and fixes.

https://claude.ai/code/session_01S1R6eHV1ftixGbx4sWGg8N

Co-authored-by: Claude <noreply@anthropic.com>

Authored by Xe Iaso and Claude; committed by GitHub (be73ea4f, cbc53a50)

+612
+94
.claude/skills/ai-writing-tropes/SKILL.md
---
name: ai-writing-tropes
description:
  Detect and eliminate common AI writing tropes from prose. Use when drafting,
  editing, or reviewing text to avoid the predictable patterns that mark
  AI-generated writing. Source: tropes.fyi
metadata:
  trigger:
    Writing prose, editing drafts, reviewing content for AI tells, or when user
    mentions "tropes", "AI patterns", "slop", or "tropes.fyi"
  author: tropes.fyi
---

# AI Writing Tropes to Avoid

Comprehensive catalog of AI writing patterns that make text feel machine-generated. Any single pattern used once might be fine. The problem is when multiple tropes appear together or when one trope repeats throughout a piece.

## How to Use

When writing or editing prose:

1. Draft the content.
2. Check against the trope categories below.
3. If you spot a pattern, rewrite that passage.
4. Re-read the whole piece for pattern density — a few is tolerable, a cluster is not.

## Trope Categories

### Word Choice

Overused vocabulary and phrasing that AI defaults to. See [references/word-choice.md](references/word-choice.md).

Key offenders: "quietly", "delve", "tapestry", "landscape", "serves as", "leverage", "robust", "harness", "streamline".

### Sentence Structure

Formulaic sentence patterns that no human writes at scale. See [references/sentence-structure.md](references/sentence-structure.md).

Key offenders: negative parallelism ("not X — it's Y"), dramatic countdowns ("Not X. Not Y. Just Z."), self-posed rhetorical questions, anaphora abuse, tricolon abuse, gerund fragment litanies.

### Paragraph Structure

Layout and organization patterns that betray AI generation. See [references/paragraph-structure.md](references/paragraph-structure.md).

Key offenders: short punchy fragments as standalone paragraphs, listicles disguised as prose.

### Tone

Voice and framing habits that sound performative. See [references/tone.md](references/tone.md).

Key offenders: false suspense ("Here's the kicker"), patronizing analogies ("Think of it as..."), false vulnerability, stakes inflation, vague attributions, invented concept labels.

### Formatting

Visual and typographic tells. See [references/formatting.md](references/formatting.md).

Key offenders: em-dash addiction, bold-first bullets, unicode decoration.

### Composition

Document-level structural problems. See [references/composition.md](references/composition.md).

Key offenders: fractal summaries, dead metaphors beaten into the ground, historical analogy stacking, one-point dilution, signposted conclusions.

## Quick Self-Check

Before delivering prose, ask:

- Did I use the same sentence structure more than twice in a row?
- Did I use "not X — it's Y" or "Here's the thing" anywhere?
- Did I stack three or more historical examples back-to-back?
- Did I inflate the stakes beyond what the content warrants?
- Would a human actually write a first draft this way?
- Does any passage sound like it belongs on a motivational poster?

## The One Rule

Write like a human: varied, imperfect, specific.
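The skill itself ships no code, but the quick self-check it describes could be roughly automated. A minimal sketch, assuming nothing beyond the trope descriptions above — the `find_tropes` function, the regexes, and the sample text are all illustrative, not part of the skill:

```python
import re

# Illustrative patterns for a few of the tropes catalogued in the skill.
# These regexes are assumptions for demonstration, not an official checker.
TROPE_PATTERNS = {
    "negative parallelism": re.compile(
        r"\bnot\s+\w+(?:\s+\w+)?\s*[-\u2014]+\s*it'?s\b", re.IGNORECASE
    ),
    "false suspense": re.compile(r"\bhere'?s\s+the\s+(?:thing|kicker)\b", re.IGNORECASE),
    "signposted conclusion": re.compile(
        r"\b(?:in\s+conclusion|to\s+sum\s+up|in\s+summary)\b", re.IGNORECASE
    ),
    "overused word": re.compile(
        r"\b(?:delve|tapestry|leverage|robust|harness|streamline)\b", re.IGNORECASE
    ),
}

def find_tropes(text: str) -> list[tuple[str, str]]:
    """Return (trope name, matched snippet) pairs found in the text."""
    hits = []
    for name, pattern in TROPE_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits

sample = (
    "Here's the kicker: it's not a bug -- it's a robust feature. "
    "In conclusion, we must delve deeper."
)
for name, snippet in find_tropes(sample):
    print(f"{name}: {snippet!r}")
```

A scan like this only catches the mechanical tells; the density judgment in step 4 of "How to Use" still needs a human read.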
+107
.claude/skills/ai-writing-tropes/references/composition.md
# Composition Tropes

## Fractal Summaries

"What I'm going to tell you; what I'm telling you; what I just told you" — applied at every level of the document. Every subsection gets a summary. Every section gets a summary. The document itself gets a summary.

Bad:

- "In this section, we'll explore... [3000 words later] ...as we've seen in this section."
- "A conclusion that restates every point already made in the previous 3000 words"
- "And so we return to where we began."

Fix: Trust readers to remember what they just read. Summarize only when the audience genuinely needs it (executive summaries, abstracts).

## The Dead Metaphor

Latching onto a single metaphor and beating it into the ground across the entire piece. A human writer would introduce a metaphor, use it, then move on. AI will repeat the same metaphor 5-10 times.

Bad:

- "The ecosystem needs ecosystems to build ecosystem value."
- "Walls and doors used 30+ times in the same article"
- "Every paragraph finds a way to say 'primitives' again"

Fix: Use a metaphor once or twice, then let it go. If you need it again later, it should feel like a callback, not a crutch.

## Historical Analogy Stacking

Rapid-fire listing of historical companies or tech revolutions to build false authority. Especially common in technical writing.

Bad:

- "Apple didn't build Uber. Facebook didn't build Spotify. Stripe didn't build Shopify. AWS didn't build Airbnb."
- "Every major technological shift -- the web, mobile, social, cloud -- followed the same pattern."
- "Take Spotify... Or consider Uber... Airbnb followed a similar path... Shopify is another example... Even Discord..."

Fix: Pick one example and go deep. A single well-analyzed case is worth more than five name-drops.

## One-Point Dilution

Making a single argument and restating it in 10 different ways across thousands of words. The model pads a simple thesis to feel "comprehensive" by rephrasing the same idea with different metaphors, examples, and framings. An 800-word argument becomes 4000 words of circular repetition.

Bad:

- "The same point, restated eight ways across 4000 words."
- "Each section rephrases the thesis with a different metaphor but adds nothing new"

Fix: State the point. Support it. Move on. If a piece only has one idea, it should be short.

## Content Duplication

Repeating entire sections or paragraphs verbatim within the same piece. Happens when the model loses track of what it has already written, especially in longer pieces.

Bad:

- "The same section appeared twice, word-for-word identical."
- "Paragraph 3 and paragraph 17 are the same sentence reworded"

Fix: Re-read what you've written. Cut duplicates.

## The Signposted Conclusion

Explicitly announcing the conclusion with "In conclusion", "To sum up", or "In summary". Competent writing doesn't need to tell you it's concluding. The reader can feel it.

Bad:

- "In conclusion, the future of AI depends on..."
- "To sum up, we've explored three key themes..."
- "In summary, the evidence suggests..."

Fix: Just write your final thought. Drop the signpost.

## "Despite Its Challenges..."

The rigid formula where AI acknowledges problems only to immediately dismiss them. Always follows the same beat: "Despite its [positive words], [subject] faces challenges..." then ends with "Despite these challenges, [optimistic conclusion]."

Bad:

- "Despite these challenges, the initiative continues to thrive."
- "Despite its industrial and residential prosperity, Korattur faces challenges typical of urban areas."

Fix: If challenges matter, discuss them seriously. If they don't, skip them. Don't use them as a rhetorical speed bump on the way to optimism.
+42
.claude/skills/ai-writing-tropes/references/formatting.md
# Formatting Tropes

## Em-Dash Addiction

Compulsive overuse of em dashes for dramatic pauses, parenthetical asides, and pivot points. A human writer might use 2-3 per piece naturally; AI will use 20+.

Bad:

- "The problem -- and this is the part nobody talks about -- is systemic."
- "The tinkerer spirit didn't die of natural causes -- it was bought out."
- "Not recklessly, not completely -- but enough -- enough to matter."

Fix: Use commas, parentheses, or separate sentences. Reserve em dashes for the one or two places where they genuinely earn their keep.

## Bold-First Bullets

Every bullet point or list item starts with a bolded phrase or sentence. Common in Claude and ChatGPT markdown output. Almost nobody formats lists this way when writing by hand. A telltale sign of AI-generated documentation and blog posts.

Bad:

- "**Security**: Environment-based configuration with..."
- "**Performance**: Lazy loading of expensive resources..."

Fix: Write list items as normal sentences. If the list needs headings, use actual headings.

## Unicode Decoration

Use of unicode arrows, smart/curly quotes, and other special characters that can't be easily typed on a standard keyboard. Real writers typing in a text editor produce straight quotes and `->` or `=>`.

Bad:

- "Input -> Processing -> Output"
- "This leads to better outcomes -> which means higher engagement"

Fix: Use `->`, `=>`, or just write "leads to". Use straight quotes.
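The em-dash tell above is quantitative (2-3 per piece versus 20+), so it lends itself to a trivial counter. A sketch only; the `em_dash_count` name and the threshold are illustrative assumptions, not part of the skill:

```python
# Count em dashes whether typed as the unicode character or as '--'.
# The threshold of 3 echoes the "2-3 per piece" baseline from the trope
# description; it is an illustrative assumption, not a hard rule.
def em_dash_count(text: str) -> int:
    """Return the number of em-dash-like separators in the text."""
    return text.count("\u2014") + text.count("--")

draft = "The problem -- and this -- nobody talks about -- is systemic -- truly."
if em_dash_count(draft) > 3:
    print("warning: possible em-dash addiction")
```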
+34
.claude/skills/ai-writing-tropes/references/paragraph-structure.md
# Paragraph Structure Tropes

## Short Punchy Fragments

Excessive use of very short sentences or sentence fragments as standalone paragraphs for manufactured emphasis. RLHF training pushes models toward "writing for readability" aimed at the lowest common denominator: one thought per sentence, no mental state-keeping required. No real person writes first drafts this way.

Bad:

- "He published this. Openly. In a book. As a priest."
- "These weren't just products. And the software side matched. Then it professionalised. But I adapted."
- "Platforms do."

Fix: Combine related thoughts into real paragraphs. Let sentences build on each other. Trust readers to follow compound sentences.

## Listicle in a Trench Coat

Numbered or labeled points dressed up as continuous prose. The model writes a listicle but wraps each point in a paragraph that starts with "The first... The second... The third..." to disguise the format.

Bad:

- "The first wall is the absence of a free, scoped API... The second wall is the lack of delegated access... The third wall is the absence of scoped permissions..."
- "The second takeaway is that... The third takeaway is that..."

Fix: Either write actual prose that flows between ideas with real transitions, or admit it's a list and format it as one. The disguise fools nobody.
+132
.claude/skills/ai-writing-tropes/references/sentence-structure.md
# Sentence Structure Tropes

## Negative Parallelism

The "It's not X -- it's Y" pattern, often with an em dash. The single most commonly identified AI writing tell. AI uses this to create false profundity by framing everything as a surprising reframe. One in a piece can be effective; ten in a blog post is an insult to the reader.

Includes the causal variant "not because X, but because Y" where every explanation is framed as a surprise reveal.

Bad:

- "It's not bold. It's backwards."
- "Feeding isn't nutrition. It's dialysis."
- "Half the bugs you chase aren't in your code. They're in your head."

Fix: State the actual point directly. "This is backwards." Done.

## "Not X. Not Y. Just Z."

The dramatic countdown. AI builds tension by negating two or more things before revealing the actual point.

Bad:

- "Not a bug. Not a feature. A fundamental design flaw."
- "Not ten. Not fifty. Five hundred and twenty-three lint violations across 67 files."
- "not recklessly, not completely, but enough"

Fix: Lead with the point. "523 lint violations across 67 files."

## "The X? A Y."

Self-posed rhetorical questions answered immediately. The model asks a question nobody was asking, then answers it for dramatic effect.

Bad:

- "The result? Devastating."
- "The worst part? Nobody saw it coming."
- "The scary part? This attack vector is perfect for developers."

Fix: Merge into a single statement. "The result was devastating." Or better yet, show why it was devastating.

## Anaphora Abuse

Repeating the same sentence opening multiple times in quick succession.

Bad:

- "They assume that users will pay... They assume that developers will build... They assume that ecosystems will emerge..."
- "They could expose... They could offer... They could provide... They could create..."

Fix: Vary your sentence openings. Combine related points. Cut the weak ones.

## Tricolon Abuse

Overuse of the rule-of-three pattern, often extended to four or five. A single tricolon is elegant; three back-to-back tricolons are a pattern recognition failure.

Bad:

- "Products impress people; platforms empower them. Products solve problems; platforms create worlds. Products scale linearly; platforms scale exponentially."
- "workflows, decisions, and interactions"

Fix: Use two items, or five. Break the rhythm. Not everything needs three beats.

## "It's Worth Noting"

Filler transitions that signal nothing. AI uses these to introduce new points without connecting them to the previous argument.

Also includes: "It bears mentioning", "Importantly", "Interestingly", "Notably".

Bad:

- "It's worth noting that this approach has limitations."
- "Importantly, we must consider the broader implications."
- "Interestingly, this pattern repeats across industries."

Fix: Delete the filler and state the point. "This approach has limitations."

## Superficial Analyses

Tacking a present participle ("-ing") phrase onto the end of a sentence to inject shallow analysis. The model attaches significance to mundane facts using phrases like "highlighting its importance", "reflecting broader trends", or "contributing to the development of...".

Bad:

- "contributing to the region's rich cultural heritage"
- "underscoring its role as a dynamic hub of activity and culture"

Fix: If the analysis matters, give it its own sentence with actual substance. If it doesn't matter, cut it.

## False Ranges

Using "from X to Y" constructions where X and Y aren't on any real scale. Legitimate "from X to Y" implies a spectrum with a meaningful middle. AI uses it as a fancy way to list two loosely related things.

Bad:

- "From innovation to implementation to cultural transformation."
- "From the singularity of the Big Bang to the grand cosmic web."

Fix: Just list the things, or pick the one that matters most.

## Gerund Fragment Litany

After making a claim, AI illustrates it with a stream of verbless gerund fragments — standalone sentences with no grammatical subject. The first sentence already said everything. The fragments add nothing except word count.

Bad:

- "Fixing small bugs. Writing straightforward features. Implementing well-defined tickets."
- "Shipping faster. Moving quicker. Delivering more."

Fix: If examples help, use a real sentence. "They fix bugs and write straightforward features." Or just cut them — the claim already landed.
+138
.claude/skills/ai-writing-tropes/references/tone.md
# Tone Tropes

## "Here's the Kicker"

False suspense transitions that promise a revelation but deliver a point that did not need the buildup. The model manufactures drama before an otherwise unremarkable observation.

Also includes: "Here's the thing", "Here's where it gets interesting", "Here's what most people miss".

Bad:

- "Here's the kicker."
- "Here's the thing about AI adoption."
- "Here's where it gets interesting."

Fix: Drop the windup. State the point.

## "Think of It As..."

The patronizing analogy. AI defaults to teacher mode and assumes the reader needs a metaphor to understand anything. Often produces analogies less clear than the original concept.

Bad:

- "Think of it like a highway system for data."
- "Think of it as a Swiss Army knife for your workflow."

Fix: Explain the actual thing. If a metaphor genuinely helps, use it without the "think of it as" framing.

## "Imagine a World Where..."

The classic AI invitation to futurism. Begins with "Imagine" followed by a list of wonderful things that will happen if the reader agrees with the premise.

Bad:

- "Imagine a world where every tool you use has a quiet intelligence behind it..."
- "In that world, workflows stop being collections of manual steps and start becoming orchestrations."

Fix: Describe what exists or what you're proposing. Skip the hypothetical daydream.

## False Vulnerability

Simulated self-awareness or honesty that reads as performative. The model pretends to break the fourth wall or admit a bias, creating a false sense of authenticity. Real vulnerability is specific and uncomfortable; AI vulnerability is polished and risk-free.

Bad:

- "And yes, I'm openly in love with the platform model"
- "And yes, since we're being honest: I'm looking at you, OpenAI, Google, Anthropic, Meta"
- "This is not a rant; it's a diagnosis"

Fix: If you have a bias, show it through your arguments. Don't announce it for credibility points.

## "The Truth Is Simple"

Asserting that something is obvious, clear, or simple instead of proving it. If you have to tell the reader your point is clear, it probably isn't.

Bad:

- "The reality is simpler and less flattering"
- "History is unambiguous on this point"
- "History is clear, the metrics are clear, the examples are clear"

Fix: Present the evidence. Let the reader decide if it's clear.

## Grandiose Stakes Inflation

Everything is the most important thing ever. AI inflates the stakes of every argument to world-historical significance. A blog post about API pricing becomes a meditation on the fate of civilization.

Bad:

- "This will fundamentally reshape how we think about everything."
- "will define the next era of computing"
- "something entirely new"

Fix: Match your language to the actual stakes. Most things are incremental improvements, and that's fine.

## "Let's Break This Down"

The pedagogical voice that assumes the reader needs hand-holding. AI defaults to a teacher-student dynamic even when writing for expert audiences.

Also includes: "Let's unpack this", "Let's explore", "Let's dive in".

Bad:

- "Let's break this down step by step."
- "Let's unpack what this really means."
- "Let's explore this idea further."

Fix: Just present the analysis. Readers don't need permission to follow along.

## Vague Attributions

Attributing claims to unnamed authorities instead of being specific. AI invokes "experts", "observers", "industry reports" without naming anyone. It also inflates quantity — presenting what one person said as a widely held view.

Bad:

- "Experts argue that this approach has significant drawbacks."
- "Industry reports suggest that adoption is accelerating."
- "Observers have cited the initiative as a turning point."

Fix: Name the source or drop the attribution. "A 2024 Gartner report found..." or just state the claim directly.

## Invented Concept Labels

AI clusters invented compound labels that sound analytical without being grounded. It appends abstract problem-nouns (paradox, trap, creep, divide, vacuum, inversion) to domain words and uses them as if they're established terms. They function as rhetorical shorthand: name a thing, skip the argument. Multiple such labels in the same piece is a strong AI signal.

Bad:

- "the supervision paradox"
- "the acceleration trap"
- "workload creep"

Fix: Describe the phenomenon instead of labeling it. If the label isn't established in the field, don't pretend it is.
+65
.claude/skills/ai-writing-tropes/references/word-choice.md
# Word Choice Tropes

## "Quietly" and Other Magic Adverbs

Overuse of "quietly" and similar adverbs to convey subtle importance or understated power. AI reaches for these to make mundane descriptions feel significant.

Also includes: "deeply", "fundamentally", "remarkably", "arguably".

Bad:

- "quietly orchestrating workflows, decisions, and interactions"
- "the one that quietly suffocates everything else"
- "a quiet intelligence behind it"

Fix: Remove the adverb. If the thing is important, the facts will show it.

## "Delve" and Friends

"Delve" went from an uncommon English word to appearing in a huge percentage of AI-generated text. Part of a family of overused AI vocabulary.

Also includes: "certainly", "utilize", "leverage" (as a verb), "robust", "streamline", "harness".

Bad:

- "Let's delve into the details..."
- "Delving deeper into this topic..."
- "We certainly need to leverage these robust frameworks..."

Fix: Use plain words. "Look at", "use", "strong", "simplify".

## "Tapestry" and "Landscape"

Ornate or grandiose nouns where simpler words work. "Tapestry" for anything interconnected. "Landscape" for any field or domain.

Also includes: "paradigm", "synergy", "ecosystem", "framework".

Bad:

- "The rich tapestry of human experience..."
- "Navigating the complex landscape of modern AI..."
- "The ever-evolving landscape of technology..."

Fix: Say what you mean. "The field of AI" or just "AI". "How things connect" instead of "tapestry".

## The "Serves As" Dodge

Replacing simple "is" or "are" with pompous alternatives. AI avoids basic copulas because its repetition penalty pushes it toward fancier constructions.

Also includes: "stands as", "marks", "represents".

Bad:

- "The building serves as a reminder of the city's heritage."
- "Gallery 825 serves as LAAA's exhibition space for contemporary art."
- "The station marks a pivotal moment in the evolution of regional transit."

Fix: Use "is". "The building is a reminder of the city's heritage." Shorter, clearer, better.
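The "serves as" dodge is mechanical enough that a regex pass can illustrate the fix. A minimal sketch under stated assumptions — the `plain_copula` helper is invented for this example and only handles the singular case, so it is no substitute for an actual editing pass:

```python
import re

# Illustrative sketch: swap the "serves as" / "stands as" dodge for plain "is".
# Naive by design -- it ignores plural subjects and tense, which is why the
# skill recommends a human rewrite rather than a find-and-replace.
def plain_copula(sentence: str) -> str:
    """Replace 'serves as' or 'stands as' with 'is' (singular only)."""
    return re.sub(r"\b(?:serves|stands)\s+as\b", "is", sentence)

print(plain_copula("The building serves as a reminder of the city's heritage."))
```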