AI Fatigue and the Fight to Keep Thinking

GPT and I spent the weekend reflecting on AI fatigue. Yes, that sentence is ironic.

I work two developer roles that live in completely different worlds.

I start the day as a developer at a marketing agency. That means WordPress fires, custom Laravel systems, client calls, last-minute change requests, infrastructure issues, integrations, compliance work — the whole thing. Some days I’m building something from scratch. Other days I’m updating content, troubleshooting a plugin, or working one-on-one with a client to make sure their launch goes as smoothly as possible.

After my 9 to 5, I start my 5 to 9 at Donut Team — a modding community building multiplayer servers, mod launchers, and tooling around The Simpsons: Hit & Run. That’s C++, assembly, reverse engineering, hooking into memory, building systems that were never meant to exist. It’s engineering-heavy, technical, and very intentional.

Both of these jobs give me so much joy: seeing my clients, my community, and my team(s) light up the room, seeing their excitement about the work we're doing, and sharing victories together. That reaction, that feeling, is what motivates me in every facet of my life.

However, the recent AI boom (or bubble) has diminished some of those feelings, replacing them with fatigue and burnout. It feels like a never-ending chase to find meaning in the work, and at the same time it has hampered my own ability to think and build for myself.

The Tool Is Useful — But That’s Not the Problem

AI is genuinely useful for mundane tasks — that’s the honest part. It handles boilerplate, rewrites things for clarity, sanity-checks syntax, summarizes documentation, and helps with quick research on obscure issues. In theory, that’s ideal. Offload the repetitive work so you can focus on architecture, performance, creative systems — the parts of engineering that are actually rewarding. The fatigue begins when the line between “mundane” and “meaningful” starts to blur. Is writing this function mundane? Is designing this query mundane? Is scaffolding this integration mundane?

Sometimes AI produces something that looks correct. It compiles. It runs. It even passes a quick test. Then three weeks later it crumbles under real-world usage. And that’s when the uncomfortable realization sets in: I didn’t truly learn anything while building it. I didn’t reason through the trade-offs or wrestle with the edge cases. I skipped the friction that normally builds understanding, and now I’m debugging code I don’t fully own in my head. That’s where mental atrophy starts creeping in.

The Reward System Problem

There’s another layer to this that’s harder to talk about: the health of your brain. Solving a hard problem gives you a specific kind of reward. You wrestle with it. You trace edge cases. You refactor. You test. Then it works. That relief is earned. Psychologists have long studied this “effort justification” effect — the idea that we value outcomes more when we’ve invested effort into them. Neuroscience research on dopamine and reward pathways also shows that challenge and mastery activate motivation circuits more deeply than passive completion. The struggle is part of what makes the resolution meaningful.

When AI shortcuts that process, the reward can feel diluted. The friction is gone, but so is part of the satisfaction. Researchers studying “cognitive offloading” — the habit of outsourcing mental tasks to external tools — have found that while it improves short-term efficiency, it can reduce long-term retention and skill development. Over time, work starts to feel less like problem solving and more like orchestration. And sometimes that makes it heavier, not lighter. It’s as if the brain knows you didn’t lift the weight yourself. Sustained thinking feels harder because you’re less practiced at it. That’s not a lack of ability. It’s what happens when effort is consistently outsourced — the muscle weakens from disuse.

The Industry Context Doesn’t Help

At the same time, the industry is sprinting forward with AI in ways that feel reckless. Salesforce laid off 4,000 people after leaning heavily into AI-driven automation. Less than a year later, executives admitted they were more confident in the maturity of generative AI than they should have been.

I’m fortunate to work for a company that values me, my work, and my contributions as an individual. There’s no push to replace engineers with a model; however, we are encouraged to use AI in ways that benefit us, the work, and the client. Adoption is framed as augmentation, not substitution.

However, watching the broader industry charge ahead makes me uneasy. It’s hard not to wonder whether caution will eventually be labeled as hesitation — or worse, inefficiency. When the market rewards speed and cost-cutting, restraint can start to feel like a liability.

Across the industry, I’m seeing developers replaced before the tooling has actually proven itself. “AI-first” initiatives get announced before anyone has answered architectural questions about reliability, maintainability, or long-term ownership. Products are suddenly “AI-powered,” whether that meaningfully improves the experience or just satisfies a slide deck.

At the same time, hardware demand drives costs up. Layoffs get framed as efficiency. Engineers are expected to integrate AI into workflows, sometimes because it’s strategically useful — and sometimes because not doing so feels like falling behind.

None of this makes the tool inherently bad. Used carefully, it can absolutely remove friction from mundane work. But the pace at which it’s being adopted feels less like thoughtful evolution and more like a gold rush: everyone chasing the title of first company to change the world, when what we're actually chasing is a better way to predict the next word, the next line of code, the next frame.

Turning Off the Model

Working on Donut Team can be grounding. We don’t use AI to generate mods, write our content, respond to our community, or engineer our systems. We don’t outsource our creativity, and we definitely don’t outsource our understanding.

If we hook into a memory address, we make damn sure to understand what it is doing. If something crashes, we either know why — or we’re going to figure it out ourselves. No chatbot is debugging our assembly hooks for us.

And even if we tried, tools like ChatGPT, Copilot, Gemini, or Claude don’t have the context of our systems. Memory injection and reverse-engineering a 2003 game engine isn’t the same as centering a div on a webpage. These models are trained on broad, common patterns. What we’re doing is niche, stateful, and highly specific. Without deep context, they can only guess, and if they guess, we could jeopardize stability across the hundreds of thousands of machines we're installed on.

That friction forces learning and ownership. Our team asks each other for help, even when we're time zones apart. We trace problems and reason through edge cases. It reminds me that engineering is still about understanding systems deeply — not just assembling plausible solutions.

With that said, I don't think we're all AI doomers; I think we just share a similar sentiment. Some of us are more willing to use AI, though at least for now it is used largely on the business side of things, the things that do not affect our understanding of our product or our community, such as automating our ClickUp workflows or building a system to track our expenses.

So What Is AI Fatigue, Really?

I don’t hate AI. In some ways, it’s made parts of life and work smoother. I hate not knowing where the line is. I hate feeling like the work I enjoy — the thinking, the reasoning, the deep understanding — is being diluted.

If I outsource too much thought, I lose the part of engineering that made so many of us fall in love with it in the first place: being invaluable thinkers. Engineers are not orchestrators of probabilistic outputs; we are the builders who have connected people across the world. For better or for worse, that’s one thing I refuse to trade away.

I was recently asked what web development will look like in 24 months. I paused longer than I expected to before answering, “I’m not sure.” The more I think about it, the less predictable it feels. AI has gotten very good at replicating patterns — text, code, structure — but replication isn’t the same as understanding or building anything original. Replication without comprehension doesn’t move the craft forward; it just rearranges what already exists. And if we forget that distinction, we risk hollowing out the very skill set that built this industry.

The world needs to stop treating AI like a savior and start treating it like what it actually is — a tool. The printing press didn’t magically enlighten humanity; it amplified whatever ideas people chose to spread. AI doesn’t create wisdom or progress on its own. It scales human intent.

If there’s one message to take from this, it’s simple: don’t forget you’re human. And yes — I understand the irony that this blog post was AI-assisted. The difference is, the thinking behind it wasn’t.