#blogging #tech #ai

How to Actually Learn While Coding with AI (Not Just Ship Faster)

AI coding assistants are everywhere now. Claude, Copilot, Cursor: pick your flavor. They’re incredible at speeding things up, generating boilerplate, and solving problems you’d otherwise spend hours on.

But there’s a trap nobody talks about: you can ship faster while learning nothing.

I fell into it. You probably have too.

I’m going to share three things that changed how I use AI for coding. The third one is the real game-changer: a simple trick that forces you to think critically instead of just accepting the AI’s solutions. But first, let me explain the problem.

The Pattern We All Fall Into

Here’s how it usually goes:

You’re building a feature. You ask your AI assistant to implement something. It generates code using some algorithm you haven’t seen before, maybe imports a library you’re not familiar with, or introduces a pattern that’s new to you.

Natural next step? You ask: “Can you explain what you just did?”

The AI gives you a nice explanation. It makes sense. You think “cool, got it” and move on to the next thing.

Except… did you actually get it?

The Problem with Asking for Explanations

I realized something after doing this for a while: I was understanding the explanation, not the code.

There’s a big difference.

When AI explains its changes, it gives you a narrative. A story about why it made certain choices. “I used this algorithm because of X. I chose this library because of Y. This pattern handles Z edge case.”

Your brain processes that narrative. It makes sense. You get that little hit of comprehension. You feel like you learned something.

But you didn’t actually engage with the code. You didn’t struggle with the unfamiliar syntax. You didn’t trace through the logic yourself. You didn’t build the mental model that comes from wrestling with something new.

You outsourced the thinking along with the typing.

When This Actually Matters

Before we go further: this isn’t about every line of code AI writes.

If AI is generating a standard CRUD endpoint, writing basic validation, or refactoring something straightforward? Just ask for explanations. Move fast. That’s not where the learning opportunity is.

This matters when AI introduces something genuinely new to you:

  • An algorithm you haven’t seen before
  • A library or framework you’re unfamiliar with
  • A design pattern that’s outside your usual repertoire
  • A methodology or approach you don’t fully understand

These moments are gold. They’re where you can actually level up. But only if you engage with them differently.

What Works Better: Read First, Ask Second

Here’s what I do now when AI generates code with something new:

1. Read the diff yourself first

Don’t immediately ask for an explanation. Just look at the code changes.

Sit with it. Even when it’s confusing. Especially when it’s confusing.

Your brain will start asking questions:

  • “Why is it structured this way?”
  • “What’s this function actually doing?”
  • “Why use this approach instead of what I’m familiar with?”

That cognitive friction is valuable. It’s where learning happens.

2. Ask specific, pinpointed questions

Instead of “explain what you changed,” ask targeted questions about the parts that confuse you:

  • “Why use reduce here instead of a for loop?”
  • “What’s the advantage of this async pattern over Promise.all?”
  • “How does this error handling approach differ from standard try-catch?”

Specific questions lead to deeper understanding than general overviews.

You’re engaging with the actual implementation details, not just getting a high-level summary.
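
To make this concrete, here’s the kind of detail those questions target. The order-items data below is made up, but it mirrors the reduce-versus-loop situation from the first question:

```javascript
// Hypothetical diff detail: the AI replaced an explicit loop with reduce.
// Both versions compute the total quantity across a list of order items.

const lineItems = [
  { sku: "A-100", qty: 2 },
  { sku: "B-200", qty: 5 },
];

// The version you might have written yourself:
let total = 0;
for (const item of lineItems) {
  total += item.qty;
}

// The version the AI generated:
const totalViaReduce = lineItems.reduce((sum, item) => sum + item.qty, 0);

// A pinpointed question: "Is reduce here just a style choice,
// or does it buy us something the loop doesn't?"
```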

3. Open a fresh AI chat for new concepts (This is the game-changer)

When AI introduces something you’re unfamiliar with (a new library, algorithm, or methodology), open a completely separate AI conversation. Ask about it without any context from your project.

For example, if Claude uses a debouncing technique you don’t know well, open a new chat and ask: “Explain debouncing in JavaScript and common use cases.”
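
If you’ve never seen debouncing before, that fresh-chat answer will land on something like the sketch below. The search example is only an illustration; the point is that you get the concept on its own, not wrapped around your project:

```javascript
// Minimal debounce sketch: delay calling `fn` until `waitMs` milliseconds
// have passed without another call in between.
function debounce(fn, waitMs) {
  let timerId;
  return function (...args) {
    clearTimeout(timerId);
    timerId = setTimeout(() => fn.apply(this, args), waitMs);
  };
}

// Illustration: only run the search once the user stops typing for 300 ms.
const onSearchInput = debounce((query) => {
  console.log("searching for", query);
}, 300);
```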

Why the Fresh Chat Matters

This seems like a small thing, but it’s huge.

When you ask AI to explain changes in the original conversation, it’s explaining in the context of the solution it just gave you. There’s an implicit bias: it’s justifying its choices.

When you ask in a fresh conversation with zero context, AI just explains the concept generally. No project context. No justification of a specific implementation.

This forces YOU to do the critical thinking.

You have to bridge the gap between the general explanation and your specific code. You have to verify whether this approach actually fits your situation. You have to think about alternatives and trade-offs.

You’re not trapped in the bubble of AI’s original suggestion. You’re actively validating it.

The Hidden Benefit: Catching Mistakes

Here’s something that became obvious once I started reading code more carefully:

AI makes mistakes.

Not constantly, but regularly enough that you need to catch them. It uses suboptimal approaches. It over-engineers simple solutions. It introduces dependencies you don’t need. It misunderstands requirements.

When you just ask for explanations and accept them, you miss this. The explanation sounds good, so you assume the code is good.

When you read the code yourself (really read it, not just skim it), you start noticing things:

  • “Wait, this could be way simpler”
  • “This doesn’t handle the edge case we discussed”
  • “Why are we importing an entire library for this one function?”
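
To make that last bullet concrete: the lodash import below is a made-up example, but it’s the shape of thing a careful read catches.

```javascript
// Hypothetical over-engineering: pulling in a utility library
// for something the language already does.

// What the AI might generate:
//   import uniq from "lodash/uniq";
//   const uniqueTags = uniq(tags);

// What a careful read suggests instead (no dependency needed):
const tags = ["api", "auth", "api", "billing"];
const uniqueTags = [...new Set(tags)];

console.log(uniqueTags); // ["api", "auth", "billing"]
```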

You’re doing a code review. Just like you would with a human colleague.

That’s the right relationship with AI. Not as an oracle whose suggestions you accept blindly, but as a smart collaborator whose work you validate.

The Bigger Picture

This isn’t really about AI coding assistants. It’s about maintaining agency in your own learning.

These tools are incredibly powerful. They can make you 2-5x faster at shipping code. That’s amazing.

But if you’re not careful, they also make you passive. You stop thinking deeply about implementation details. You stop building the mental models that make you a better engineer.

The approach I’m describing is about staying active. Staying engaged. Using AI as a force multiplier for both productivity and learning, not just productivity.


The key insight: Read the code AI generates before asking for explanations. Use fresh AI conversations to learn new concepts without project context. This forces you to think critically about whether solutions actually fit your use case, helps you catch mistakes, and lets you learn while shipping fast.