Bloom’s Taxonomy Didn’t Fail. Our Use of It Did.

For years, Bloom’s Taxonomy has been treated like a staircase.

First, students remember.
Then they understand.
Then they apply.
Eventually—if we’re lucky—they analyze, evaluate, and create.

This model has shaped lesson plans, assessments, professional development, and entire curricula. It’s familiar. It’s comforting. And it’s deeply misunderstood.

The recent rise of AI didn’t break Bloom’s Taxonomy.
It simply exposed how thin our use of it had already become.

Bloom’s isn’t a ladder students climb. It’s a set of lenses they learn to use.

Bloom Was Never a Ladder

Bloom’s Taxonomy was never intended to be a linear progression or a checklist. It describes types of cognitive work, not a sequence students must climb in order.

Yet in practice, we’ve often flattened it into something like this:

  • Lower levels = easier
  • Higher levels = harder
  • Creation = proof of learning
  • More output = more rigor

This framing quietly encourages a dangerous shortcut:
If students produce something that looks complex, we assume complex thinking must have happened.

AI makes that assumption impossible to defend.

What AI Actually Changed (and What It Didn’t)

AI can now:

  • Recall information instantly
  • Summarize complex material fluently
  • Apply patterns at scale
  • Generate polished drafts on demand

This has led some to argue that the “lower levels” of Bloom are obsolete — that remembering, understanding, and applying no longer matter because machines can do them faster and better.

But that conclusion misses the point.

AI didn’t replace those forms of thinking.
It replaced our ability to pretend that visible output equals evidence of thinking.

When a tool can generate an essay, a lesson plan, a code snippet, or a design artifact in seconds, we’re forced to confront an uncomfortable truth:

Much of what we’ve been assessing was never thinking in the first place.
It was performance.

The Real Gap: Judgment

The most important cognitive work was never “at the top” of Bloom’s Taxonomy.

It was running through all of it.

Judgment looks like:

  • Noticing when something is unclear
  • Diagnosing gaps in reasoning
  • Evaluating quality against criteria
  • Deciding what matters and why
  • Recognizing when an answer is insufficient, even if it’s fluent

These are not optional “advanced skills.”
They are what make remembering meaningful, understanding accurate, and application appropriate.

And they are exactly what AI cannot do on a learner’s behalf.

AI can generate.
It cannot decide.

The Mistake We’ve Been Making

For a long time, many learning environments implicitly taught this sequence:

Information first. Judgment later.

Students were expected to absorb content, produce work, and receive feedback after the fact—often from an instructor or evaluator who performed the judgment for them.

In an AI-mediated world, that order collapses.

If generation comes first, learners can bypass the very thinking we claim to value most. The result is work that looks sophisticated but is cognitively hollow.

This isn’t an AI problem.
It’s a design problem.

Reframing Bloom for the AI Era

The question isn’t whether Bloom’s Taxonomy still matters.

The question is whether we are willing to teach judgment deliberately, rather than assuming it emerges automatically at the “higher levels.”

A more honest reframing looks like this:

  • Understanding is not a prerequisite for judgment; judgment shapes understanding
  • Application without evaluation is risk, not rigor
  • Creation without diagnosis is noise

Judgment doesn’t float above Bloom’s Taxonomy.
It’s what makes every level meaningful.

What This Means for Learning Design

Designing for judgment requires a shift in emphasis, not a new framework.

It means creating moments where learners must:

  • Assess quality before revising
  • Diagnose problems before generating solutions
  • Explain why an approach works or fails
  • Decide when a tool’s output should be trusted — and when it shouldn’t

In practice, this often looks like pauses before production:

  • Analyze an example before writing
  • Critique a solution before proposing one
  • Identify criteria before generating content
  • Name what’s missing before asking AI to help

These moments are slower.
They are less flashy.
And they are where learning actually happens.

Why This Matters Now

AI didn’t disrupt learning.
It disrupted our illusions about what our measures of learning actually captured.

We can respond by doubling down on output, surveillance, and increasingly elaborate restrictions — or we can take this moment seriously.

This is an opportunity to reclaim rigor, not abandon it.
To teach thinking explicitly, not assume it implicitly.
To design learning that values judgment as the core human contribution.

Bloom’s Taxonomy didn’t fail us.
But the way we used it left learners unprepared for this moment.

We can do better — if we’re willing to teach judgment before generation.
