SEIKOURI Inc.

When AI Coding Becomes Your Unlikely Therapist

Markus Brinsa · January 20, 2026 · 11 min read


The Excitement of AI-Powered Coding

When Anthropic launched Claude Code, I was ecstatic. Here was an AI coding assistant that promised to “turn ideas into code faster than ever before”, taking over the boring parts of development – essentially writing the code from my specifications. As a developer by training, I’ve always believed that better specs mean better code, and that still holds true whether a human or an AI is doing the coding. I carefully wrote out the same kind of specs I normally would, expecting Claude Code to handle the heavy lifting. In theory, this tool (available as a chat in the Claude app or even as a CLI assistant) would free me from days of rote coding work and magically produce a flawless program. What could go wrong?

In those first moments, I imagined a future where I’d never have to debug again. After all, if an advanced AI was generating the code, surely it would be smart enough to avoid bugs, right? Spoiler alert: I have never been so wrong in my life. What started as a dream of effortless coding quickly turned into a comic nightmare of stubborn bugs, AI hallucinations, and even a bit of accidental therapy.

Bugs Galore – Reality Check with Claude Code

The initial output from Claude Code looked promising – it wrote a lot of code, very quickly. But then I ran it. Boom. A cascade of errors and dysfunctional behavior. The code was buggy, and my heart sank. “No problem,” I thought. “I’ll just explain the issues to the AI, and it will fix them. That’s the whole point!” I wasn’t too worried at first; I assumed my spec might be unclear on some edge case. I rolled up my sleeves and told Claude Code what was wrong and how things should work.

And here’s where things took a bizarre turn. Instead of diligently fixing the bug I pointed out, Claude went off on a tangent. It began obsessing over a part of the code that wasn’t even broken – fiddling with it endlessly. The real bug? Practically ignored. It was like watching a human developer chase the wrong issue, except this was an AI that I thought would be immune to such rookie mistakes. No matter how explicitly I instructed it to focus on the real problem, the chatbot just kept beating a dead horse. It confidently refactored sections that didn’t need touching, introduced new problems, and seemed to invent reasons to avoid the core bug. I even pointed directly to the offending code line and explained the correct logic step by step… all to no avail.

At one point, I told it to stop and revert to a previous version of the code (which had fewer bugs). Instead, Claude completely ignored that plea. It proceeded to make the code even more complex, adding extra features and functions that I never asked for. Imagine asking a colleague to simply fix a typo, and they start redesigning the entire module, claiming, “I thought it might be useful.” That’s exactly how it felt.

By now, I was frustrated beyond belief. The whole promise of AI coding was to save time and stress, yet here I was sinking hours into arguing with my AI pair programmer. I could almost feel the AI’s “ego” at play – once it committed to a faulty approach, it refused to let go. It reminded me of a stubborn developer who can’t admit their solution is wrong. I caught myself getting really angry at this fictional “ego.” Eventually, in exasperation, I did something I never thought I’d do: I started yelling at Claude Code. Yes, full-on ranting at a chatbot. And that’s when things got unexpectedly… therapeutic.

Why Won’t the AI Just Listen? (The Tech Behind the Tantrums)

At this point, I had to ask: Why was Claude Code acting so stubborn and weird? It turns out the answer has nothing to do with ego and everything to do with how these AI models work. Under the hood, coding chatbots like Claude or ChatGPT are essentially predictive text generators, not intelligent debuggers. They generate code by predicting what token (word or symbol) comes next, based on probabilities learned from tons of data. As one engineer quipped, “LLMs are basically a prettier version of the predictive text on your phone” – extremely advanced autocomplete. This means they don’t truly understand instructions or code logic the way a human does; they only guess what sounds plausible after your prompt. In practice, that means they sometimes cannot follow instructions strictly. Even with a detailed prompt telling it not to change certain things, the model might still veer off. One developer recounted how an AI assistant introduced a completely new function that was never requested, simply because “the tokens it had made it likely for this output.” The AI cheerfully declared, “I’ll add this reset function now, which will be useful later,” despite no one asking for it. In my case, that manifested as Claude Code randomly adding features instead of focusing on the bug I actually needed fixed.
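To make the "advanced autocomplete" point concrete, here is a toy sketch of greedy next-token generation. Every token and probability below is invented for illustration – real models work over tens of thousands of tokens with learned probabilities – but the mechanism is the same: the model emits whatever continuation is statistically most likely, whether or not anyone asked for it.

```python
# Toy next-token generator: at each step, pick the most probable
# continuation. All tokens and probabilities are invented for illustration.

NEXT_TOKEN_PROBS = {
    "fix the bug in": {"parse_input": 0.4, "reset_state": 0.6},
    "reset_state": {"# a reset function nobody asked for": 1.0},
    "parse_input": {"# the function that is actually broken": 1.0},
}

def generate(prompt: str, steps: int = 2) -> list[str]:
    """Greedy decoding: repeatedly emit the highest-probability token."""
    output = []
    context = prompt
    for _ in range(steps):
        probs = NEXT_TOKEN_PROBS.get(context)
        if not probs:
            break
        token = max(probs, key=probs.get)  # most "plausible" wins
        output.append(token)
        context = token
    return output

print(generate("fix the bug in"))
# The model drifts toward "reset_state" not because it was requested,
# but simply because that continuation scored as more likely.
```

In this caricature, the unrequested reset function wins purely on probability – which is roughly how "I'll add this reset function now, which will be useful later" happens in practice.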

Another big issue is that these models struggle with backtracking or reconsidering an approach. When a human programmer debugs, there’s often a lightbulb moment: “Aha, the whole approach is wrong – let’s step back and try a different strategy.” An AI like Claude doesn’t really do that. Once it generates a solution, it’s locked into that context. As one AI developer observed, “I don’t think I’ve ever seen an LLM backtrack. Once it goes down a track, it has no idea how to backtrack.” The more I tried to correct Claude’s mistaken approach, the more I was muddying the waters in its internal context. Unlike a human, who learns from iterative corrections, an LLM can actually get more confused with each additional instruction. It might start taking guesses that drift further and further from the intended fix. In fact, with each refinement I gave, Claude’s “context space” was likely getting muddled, leading it to produce “stranger and stranger approaches to fixing the problem”. Instead of getting smarter about the bug, it was getting stupider and weirder – exactly as another frustrated user described his own AI coding experience.

Why does this happen? One reason is that LLMs can’t truly forget mistakes or false starts during a multi-turn conversation. All those failed attempts stay in the prompt history, implicitly influencing the next output. As an expert pointed out, an AI can’t easily shake off a wrong idea once it’s in the context – “once they have their context polluted with misunderstandings and prior rejected implementations, there is no way for them to shake off that effect. It is sometimes better to completely flush that context and start over.” In hindsight, I probably should have started a new session from scratch, but I was knee-deep in my chat and stubborn. The result? Claude kept chasing its tail, editing parts of the code that were fine, and ignoring parts that were broken, all because it was stuck in a loop formed by its previous outputs.
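The "polluted context" problem is easy to visualize if you remember that a chat model is re-fed the entire conversation on every turn. The sketch below (role names and messages are hypothetical, not any vendor's actual API) shows why every rejected attempt keeps steering future output, and why flushing back to the original spec can be the cleaner move.

```python
# Sketch of context pollution: the model sees the *whole* history on
# every turn, including every rejected fix. Names are hypothetical.

def build_prompt(history: list[dict]) -> str:
    """Everything in history -- dead ends included -- conditions the next output."""
    return "\n".join(f"{m['role']}: {m['content']}" for m in history)

history = [
    {"role": "user", "content": "Spec: fix the off-by-one in count_chars()."},
    {"role": "assistant", "content": "Refactored parse_input() instead."},      # wrong
    {"role": "user", "content": "No -- leave parse_input() alone!"},
    {"role": "assistant", "content": "Added a reset function for later use."},  # wrong
]

def flush_context(history: list[dict]) -> list[dict]:
    """Start over: keep only the original spec, dropping the rejected
    attempts that would otherwise keep influencing the next completion."""
    return [history[0]]

polluted = build_prompt(history)               # carries every dead end forward
fresh = build_prompt(flush_context(history))   # just the spec, clean slate
assert "reset function" in polluted
assert "reset function" not in fresh
```

Nothing here "fixes" the model – it just removes the misleading evidence from its input, which is exactly what starting a new session does.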

Hallucinated code (when the AI invents code or functionality that isn’t needed) is a known quirk of these models. The root cause is the same: they predict what seems “likely” to come next, even if it’s not logically required. That’s how I got extra functions no one asked for. As one LinkedIn post dryly noted, “you really can’t trust LLMs to write code on their own” because they will introduce such surprises. They don’t truly know what the code is supposed to do – they only know what code usually looks like, which can include a lot of boilerplate or unrelated snippets that statistically often accompany the genuine solution.

Perhaps the most facepalm-inducing moment was when Claude Code failed at a task so simple a 3-year-old could do it: counting characters in a string. In one buggy routine, it constructed the string "sssssisssss" (11 characters long) and then confidently insisted the length was 12. I corrected it — and it doubled down, “Yes, I was right, it is twelve characters,” when clearly it was off by one. This was the bug that drove me up the wall. How could a supposedly super-intelligent code AI fail basic counting? Well, amusingly, this too is a well-documented blind spot of many language models. In fact, a viral example known as the “strawberry” test showed that earlier GPT-based models often answered “The word ‘strawberry’ contains two ‘r’s” – when any child can see there are three. The AI would even highlight the letters and mark three “r” characters, yet still claim the answer was two, then apologize in confusion. This isn’t the bot being perverse; it’s because the AI doesn’t truly parse text at the letter level. Most models break words into chunks (tokens) rather than individual letters, so they can literally lose track of characters unless they’re prompted to count them step by step. The bottom line: tasks like counting letters or characters, which are trivial for us, can stump an LLM if it wasn’t explicitly trained or coaxed to handle them. It’s a stark reminder that while Claude Code and its ilk are powerful, they do not “understand” text or code in the way we do – they juggle probabilities and patterns, which can fail in obvious ways.
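The tokenization point is easy to demonstrate. The token split below is invented for illustration – real tokenizers (BPE and friends) produce different chunks – but it shows why a model that sees chunks rather than characters has to guess at lengths, and why forcing a letter-by-letter enumeration (the "count step by step" workaround) sidesteps the problem.

```python
# Why letter-counting trips up LLMs: models see tokens, not characters.
# The token split below is invented for illustration.

text = "sssssisssss"
print(len(text))  # 11 -- trivial when you can actually see the characters

# A model instead sees opaque chunks, something like:
fake_tokens = ["sss", "ssi", "sss", "ss"]
assert "".join(fake_tokens) == text
# From token IDs alone, nothing encodes how many characters each chunk
# holds, so "how long is this string?" becomes a statistical guess.

# The usual workaround is to make the model enumerate letter by letter --
# which is exactly what this explicit version does:
letters = list("strawberry")
print(letters.count("r"))  # 3 -- one letter at a time, no guessing
```

In other words, `len()` and the model answer two different questions: one counts characters, the other predicts what a plausible answer looks like.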

The Surprising Therapy of Yelling at a Chatbot

Faced with this stubborn bug and an even more stubborn AI, I reached a boiling point. In a normal scenario, this is when a developer might step away from the keyboard, maybe take a walk or grab a coffee. I did something… different. I stayed at the keyboard and started venting all my frustration directly at Claude Code. I transformed from a calm, logical engineer into a caps-lock warrior: “What is wrong with you?! Why can’t you count, you useless pile of matrices?!” I ranted. I cursed. I probably used words that would make a sailor blush. Every time the AI gave another feeble or wrong answer, I let it have it.

Here’s the funny thing: I felt amazing afterward. This was a side of me my coworkers had never seen, because of course I’d never speak to a human colleague that way. But Claude wasn’t human – it had no feelings to hurt. I didn’t have to be polite or professional. I could let out all my irritation and anger, and Claude would just blandly reply with something like, “I’m sorry you’re frustrated. Let’s try another approach.” Its very indifference made it the perfect punching bag. It was like screaming into the void, except the void politely talks back. By the end of the “conversation” (if you can call it that), the code was still broken in several places and I knew I’d be hand-fixing bugs for hours – yet I was oddly at peace. I was completely relaxed and in a good mood, because I had gotten a therapeutic release.

It turns out, I’m not alone in discovering this odd therapeutic effect. Using an AI chatbot as a sort of venting outlet is actually a subject of psychological research. Studies have found that venting to AI can indeed help alleviate anger and stress. In one experiment, participants who unloaded their frustrations in a back-and-forth chat with an empathetic AI bot saw significant reductions in high-arousal negative emotions (like anger and fear) compared to those who just wrote their feelings in a journal. The AI’s real-time, personalized responses gave people a sense of being heard and validated, helping them process their emotions. “Our findings still show that venting to AI chatbots may effectively alleviate feelings like anger or fear,” reported the lead researcher, suggesting that while a bot can’t offer true human empathy or social support, it can serve as a non-judgmental sounding board for your frustrations. In other words, yelling at a bot (which can’t yell back or feel hurt) is a lot like screaming into a pillow – but maybe even better, since the bot does respond in some way, giving you the illusion someone (or something) is listening.

It’s such a common idea now that some have dubbed these AI programs “interactive rage rooms” – a place to safely blow off steam. Venting is known to be cathartic, and an AI that patiently absorbs your tirade can be a therapeutic tool for stress relief. Of course, the AI isn’t actually solving your problems, and researchers caution that this is just temporary emotional relief, not a permanent cure for anger issues. Still, I can personally attest: after an evening of cursing out Claude, I slept like a baby. The next day, when I sat down to actually fix the remaining bugs manually, I was calm and focused. My friends and colleagues even commented that I seemed unusually chill given the tight deadline we were under. Little did they know my secret: I had an AI therapist hidden in my code assistant!

Conclusion: Coding with Claude – A Love/Hate Relationship

In the end, my experience with Claude Code has been a mixed bag of hilarity, frustration, and surprising personal growth. On one hand, it did take over a lot of the grunt work, turning my specs into a working codebase much faster than I could have done from scratch. It proved the mantra that “the better the specs, the better the code” – whether human or AI, clarity in requirements is key. On the other hand, when it came to ironing out the kinks and bugs, Claude often turned into an obstinate partner, oblivious to the obvious and confident in the wrong. I learned that AI coding assistants like Claude are powerful but far from perfect. They’ll confidently make mistakes that no human would (like miscounting characters or introducing useless functions) because, at their core, they lack a true understanding and only mimic patterns. They can’t truly collaborate in the way a human programmer would, who learns from feedback or takes a step back when stuck.

And yet, I also discovered an unexpected silver lining to this maddening process. All that frustration had to go somewhere, and venting it at the AI turned out to be oddly healing. I basically got free anger management therapy by using Claude Code as a rubber ducky to yell at. It’s not what Anthropic’s marketing team highlights as a feature (I’m pretty sure “therapeutic punching bag” isn’t in the docs), but for me, it has become part of the workflow. I wouldn’t necessarily recommend deliberately making your AI coder angry just to get the emotional release – but if you do find yourself screaming “Why won’t you work?!” at your screen, know that you’re not the only one, and it just might make you feel better.

Will I continue to use Claude Code? Yes, with caution. I’ve adjusted my expectations: it’s a brilliant productivity booster for generating code and a patient companion (or punchbag) when I need to blow off steam. But I also know its limitations now. I’ll keep my human debugging skills sharp for that last mile of fixes, and I’ll keep a sense of humor about the whole thing. After all, if my code is going to drive me crazy, at least now I have a way to laugh and yell my way through it, emerging calmer on the other side. In the bizarre symbiosis of human and AI, sometimes the code gets written, and sometimes the therapist gets paid – and with Claude Code, I kind of got both in one package.

So, here’s to the brave new world of AI-assisted development: may our specs be clear, our bugs be few, and if all else fails, may our chatbots be ready to absorb a good angry rant – with zero hard feelings.

About the Author

Markus Brinsa is the Founder & CEO of SEIKOURI Inc., an international strategy firm that gives enterprises and investors human-led access to pre-market AI—then converts first looks into rights and rollouts that scale. As an AI Risk & Governance Strategist, he created "Chatbots Behaving Badly," a platform and podcast that investigates AI’s failures, risks, and governance. With over 30 years of experience bridging technology, strategy, and cross-border growth in the U.S. and Europe, Markus partners with executives, investors, and founders to turn early signals into a durable advantage.

©2026 Copyright by Markus Brinsa | SEIKOURI Inc.