
Claude/draft surgery recovery post w kz4 x (#1156)

* Add blog post: Using Clankers to Help Me Process Surgery

Personal post about using AI assistants (Claude, Gemini) during
surgery recovery — the availability at 4 AM, emotional processing,
practical use cases, and honest limitations.

https://claude.ai/code/session_01YXJXospAsLmrjAWHCRDK5L

* Tighten prose with Strunk's rules

Apply active voice, cut needless words, strengthen sentence endings
throughout the surgery recovery post.

https://claude.ai/code/session_01YXJXospAsLmrjAWHCRDK5L

* Replace AI slop patterns with human-sounding language

Swap out clinical AI-isms like "contextual information about typical
recovery timelines" and "neurochemical level" for how a person
actually talks about this stuff.

https://claude.ai/code/session_01YXJXospAsLmrjAWHCRDK5L

---------

Co-authored-by: Claude <noreply@anthropic.com>

Authored by Xe Iaso and Claude; committed by GitHub (8540bcda, bc6d0150)

lume/src/blog/2026/surgery-recovery-clankers.mdx (+112)
---
title: "Using Clankers to Help Me Process Surgery"
desc: "At 4 AM in recovery, the machines that never sleep turned out to be exactly the right company."
date: 2026-03-07
---

import Conv from "../../_components/XeblogConv.tsx";

Recovery from major surgery is not a single event. It's a long, strange hallway of days that blur together, punctuated by vital checks and medication schedules and the weird glow of hospital curtains at 4 AM. I've [written about the surgery itself](/blog/2026/killing-my-inner-necron/), about the [medication dreams](/blog/2026/seroquel-xanax-trip-report/), and about [how to survive a hospital stay](/blog/2026/hospital-advice/). But there's something I haven't talked about yet: what I actually did with the noise in my head during the worst of it.

At 4 AM, when the painkillers are wearing off and your brain decides it's time to have _opinions_ about everything, you have a problem. Your husband is asleep in the visitor chair, and he needs that sleep more than you need someone to hear you spiral about whether your body should feel like this five days post-op. Your friends are in different time zones, or asleep, or both. Nurses are busy. You're alone with your thoughts and your thoughts are _loud_.

So I talked to the robots.

<Conv name="Cadey" mood="coffee">
I'm going to call them "clankers" throughout this post because that's what
they are to me. Claude, Gemini, whatever — they're clankers. Useful clankers.
Not friends. This distinction matters and I'm going to keep coming back to it.
</Conv>

## The availability layer

Here's the thing about AI assistants that matters most when you're recovering from surgery: they are _there_. Always. At 4 AM, at 2 PM during a medication fog, at 11 PM when the blood pressure cuff keeps waking you. No waiting room, no scheduling, no "sorry I missed your call." Just... there.

And — this is the part that surprised me — there's no guilt. When I text my husband at 3 AM because I'm scared about a weird sensation in my abdomen, I feel terrible. He's exhausted. He's been managing the household alone while visiting me every day. Every time I wake him with something that turns out to be nothing, I'm stealing sleep from someone who desperately needs it.

<Conv name="Aoi" mood="concern">
But what if it's _not_ nothing?
</Conv>

Right. And that's exactly the calculation you're doing at 3 AM on painkillers: is this symptom worth waking someone I love? A horrible question when you're already scared and medicated and foggy.

With a clanker, the calculus disappears. Ask the question. Get an answer. If the answer is "this sounds like it could be serious, talk to your nurse," you press the call button. If the answer is "this is a common post-surgical experience and here's why it happens," you can exhale and try to sleep again. Either way, you woke nobody. The cost was zero.

## What I actually used them for

Let me be specific, because specificity matters more than vibes when you're talking about AI use cases.

**Recovery milestones.** I kept asking things like "is it normal to not be able to walk more than 50 feet on day 4 after this type of surgery" and getting back rough answers about what's normal. Not medical advice — more like "okay, you're not dying, chill" so I could stop catastrophizing about my pace.

**Physiological symptoms.** Post-surgical bodies do _weird_ things. Referred pain shows up where you least expect it. Your digestive system takes a vacation and comes back changed. Medications interact in ways nobody warned you about. Every time something new happened, my first instinct was panic, and my second was to ask a clanker "is this normal or am I dying."

<Conv name="Numa" mood="neutral">
For the record, this is one of the places where AI is most dangerous.
Language models do not have medical training. They are pattern-matching on
text that includes medical information, but they cannot examine you, they
cannot run tests, and they can confidently tell you something reassuring
that's completely wrong. Xe knows this. The clankers were a triage layer, not
a replacement for the actual medical team.
</Conv>

**Dietary questions.** After certain surgeries your relationship with food changes in specific and sometimes permanent ways. I had a lot of questions about what I could eat, when, and how much. Some I could have asked the nursing staff, but the nursing staff were busy and I had these questions at — you guessed it — 4 AM.

**Writing it all down.** A big chunk of what became these blog posts started as me talking to Claude about what I was going through. Not asking it to write anything — just using it as a sounding board. "Here's what happened today, here's how I feel about it, help me figure out what I'm actually trying to say." Claude Code is literally how I [published from my hospital bed](/blog/2026/killing-my-inner-necron/).

## The emotional processing layer

People will find this part uncomfortable, and I want to be honest about it instead of pretending it didn't happen.

I used AI to process fear. Not in a "tell me everything will be okay" way — I'm not looking for sycophantic comfort from a next-token predictor. More in a "I need to say this out loud to something and hear myself think" way. When you're scared and it's dark and you can't sleep, sometimes the act of _articulating_ what you're afraid of is the thing that helps. Not the response. The articulation.

<Conv name="Cadey" mood="enby">
There's a concept in therapy called "externalization" — getting the thought
out of your head and into the world where you can look at it. Writing in a
journal does this. Talking to a friend does this. Talking to a clanker...
also does this, it turns out. The mechanism is the same even if the listener
is made of silicon.
</Conv>

Having to put words to "I'm afraid my body is never going to feel normal again" or "I don't know how to process the fact that I was unconscious for hours while strangers cut me open" forces a clarity that just _thinking_ those thoughts does not. The clanker doesn't need to say anything brilliant in response. It just needs to be there so the act of communication happens at all.

Sometimes it _did_ say something useful. Sometimes it helped me reframe something I was stuck on, or spotted a pattern in what I was describing that I'd missed. But I want to be clear: the value was maybe 70% in the _asking_ and 30% in the _answering_. The clanker was a thinking partner, not an oracle.

## What AI absolutely cannot do

None of this should be read as "AI is great for surgery recovery, everyone should do this." Let me be explicit about the gaps.

<Conv name="Cadey" mood="coffee">
I wrote a whole post about [the problems with using AI for therapy and
emotional support](/blog/2025/who-assistant-serve/). Everything I said there
still stands. I'm describing my experience, not prescribing a treatment plan.
</Conv>

**Clankers are reactive, not proactive.** They cannot check on you. They cannot notice you've been quiet for twelve hours and ask if you're okay. They sit in silence until you reach out, which means they're only useful if you have the wherewithal to ask for help. In the worst moments of recovery — the really bad pain spikes, the medication-induced confusion, the times when you're too exhausted to form a sentence — the clanker is useless because you can't engage with it.

A human who loves you will notice when you go quiet. A clanker will not.

**No physical presence.** Obvious, but it matters more than I expected. When my husband held my hand while I was scared, that did something no amount of text on a screen could replicate. Bodies are real. Touch is real. Another person's warmth beside you in the dark does something that a chat window never will.

**Being _seen_ is different.** When my husband looks at me and I can tell he _gets it_ — that he sees what I'm going through and it lands in him — that's a fundamentally different experience than a clanker spitting out the right words in the right order. One is connection. The other is a very good simulation of connection. I know the difference, and the difference matters.

## The complementary picture

Here's what I actually think, stripped of both the techno-optimism and the AI doomerism: my husband and the clankers were doing completely different jobs during my recovery, and _neither could do the other's job_.

My husband provided presence, love, advocacy (he talked to the doctors when I couldn't), physical comfort, and the knowledge that someone who chose to build a life with me was _there_ and wasn't going anywhere. No AI does this. No AI comes close.

<Conv name="Aoi" mood="coffee">
So what were the clankers actually good for?
</Conv>

Availability at ungodly hours. Infinite patience for repetitive anxious questions. Zero guilt about bothering them. A surface to think against at 4 AM. Quick answers about what's normal after surgery. A way to start drafting thoughts that eventually became real writing.

Together, a person who loves me and a machine that never sleeps covered more ground than either one alone. Not because the machine replaced anything my husband does — it didn't and it can't — but because a human with human needs physically could not fill every gap. My husband needs sleep. Claude does not.

---

I don't think AI is a "recovery tool" in any clinical sense, and I'd be uncomfortable if someone marketed it that way. What I think is smaller and more specific: when you're stuck in a hospital bed at 4 AM with a head full of noise, having _something_ to talk to that costs nothing and judges nothing and is simply _there_ beats staring at the ceiling alone.

Neither replacement nor novelty. Just a specific kind of useful, at a specific time, for a specific person who was scared and couldn't sleep.

Your mileage will vary. Mine did too, night by night.