Thinking Is Slower. That's the Point.

AI isn’t new. I’m not early to this. There are countless blog posts filled with smart takes on AI. This isn’t one of them. It’s just what I’ve been thinking lately, especially over the last year as I’ve started letting AI in more, mostly to avoid falling behind simply because I’ve been overly principled.

For me, it began with skepticism. It’ll never be useful. It’ll never come close to what I can do. Then came curiosity. It was impressive, almost eerie, how well it could predict text. How it could pretend to be a Linux shell and actually pull it off.

However, after a while, it started showing up differently. It crept in quietly. Not through some big moment, but through a slow, undeniable shift. A tool that once sat on the side now runs underneath everything.

It’s present when I make decisions. When I search. When I write, build, or try to think something through. Always suggesting: go faster, be smarter, skip the hard part.

The suggestion builds gradually. Even without fully trusting the system, I find myself using it consistently. I start pushing work into it, first the boring bits, then the real ones.

I noticed I started reaching for it instinctively. Not to work through a problem, but to get past it. Eventually, I realized my thinking wasn’t happening when it used to. That might be a problem.

Loops, Detours, and Accidents

I’ve always had a messy brain. ADHD does that. My thoughts loop, interrupt, derail, and double back. It’s not elegant, but it works in its own way.

Most of the useful things I’ve ever built or figured out came from that chaos. By trying something dumb, running into a dead end, or breaking it all, I often find something better along the way.

AI doesn’t always work like that. It mostly flattens the path. It gives you something clean, predictable, and useful.

Getting the right output, though, isn’t the same as understanding what I’m doing. And creating something is not the same as generating it.

Lazy Is the New Easy

Despite my hesitation, I still use it. For code, for writing, for thinking. It helps. The models speed things up, sometimes more than I’d like to admit.

But when I rely on it too much, my thinking starts to blur. The idea feels completed before I’ve even spent time with it. Like the answer has shown up before I’ve finished phrasing the full question.

There’s also an external pressure that, in my experience, isn’t acknowledged enough. Especially in IT, people expect work to be fast, polished, and automated. Choosing not to use AI can feel like being inefficient on purpose.

And for someone whose thoughts already loop, spiral, and derail, that pressure doesn’t help.

I worry we’re training ourselves to produce, not understand.

Talk. Quack. Think.

As someone with dyslexia, I often ask a model to rewrite sentences, mostly for spellcheck and sometimes to restructure a sentence entirely. While the spellcheck is always on point, I usually don’t go with the rewritten versions. They’re often better structured, maybe even objectively better. But they can feel stiff or uncanny. I prefer the ones that sound like me.

However, it still helps. It makes me clearer on what I’m trying to say. And I can fold that clarity back into my own words.

When I treat it like a vending machine for cleverness, I usually just end up recycling something I already knew. So I’ve stopped seeing it as just a source of answers and started seeing it as a source of questions too. Like a sparring partner, a personal interactive rubber duck that pushes back and keeps me thinking. Something to bounce off, not defer to.

Because of this, my detours are happening more often again. More infinite loops. More derailments. And that feels like a good sign. My input into a model determines what I get out of it, so when I stopped chasing the easiest answer and started rambling again, the model started joining me on the detours. It even reminds me what I was about to say... assuming one of us hasn’t hit the context limit first.

So I’m Keeping the Mess

I don’t hate AI. I’m impressed by it every day. It’s a super useful tool.

But I’m trying to be more deliberate with it.

Whenever I hand something off to a model, I ask myself whether it was mine to give away. Whether I skipped the part where I would’ve actually learned something. Whether I took a small short-term step forward, but a long-term step back.

Sometimes the answer’s fine. Other times, it isn’t.

There’s tension, always, between speed and depth, between ease and ownership.

Still, I know this much: If I stop making space for friction, I stop thinking clearly. And when that happens, I lose the part of me that actually makes things.

Yes, I’ll use the models. But I also care about how I get there, not just where I end up.

So I’m keeping the mess.

That’s where all the real shit happens.