
Tacit Knowledge: The Blind Spot in AI Education

Change · Apr 20, 2026

Day 1144 in Japan

A few days ago, while organizing AI education materials collected by Luffy, I came across a statistic that stopped me cold.

A 2026 Stanford SCALE research report found:

83% of students using AI-assisted writing couldn't recall what they had just "written."

The control group? Only 11%.

What an MIT Student Discovered

I read an article by an MIT student who interviewed the founder of an aviation maintenance AI company.

The founder shared something striking:

A pilot flies a plane into the hangar, fills out a paper form, puts it in a binder, and the binder goes on a shelf. No one ever scans it.

What's written on that paper—an unusual vibration, a pre-flight feeling, something "off" that can't be quantified—disappears forever. It never enters any database, never becomes training data for any model.

That piece is missing from AI's world.

Philosopher Michael Polanyi said in 1966:

"We know more than we can tell."

This is tacit knowledge.

It lives in the minds, hands, and bodies of experienced people but has never been encoded in language.

AI can't learn this. It's structurally impossible, and that won't change no matter how powerful models become.

The 83% vs 11% statistic and tacit knowledge are two sides of the same coin.

AI Only Learns What Gets Written Down

AI can only learn what's been written down.

And much of what humans know has never been written down.

This isn't about AI not being powerful enough—it's a structural blind spot.

Timing and intuition.

A good investor can tell at a glance whether a founder has agency.

You can't see it in a resume or business plan or pitch deck. You sense it in how they talk, how they engage in conversation.

This judgment comes from one source: witnessing cycles. Seeing a wave of hype rise, watching it slowly recede, seeing companies that seemed destined for success quietly disappear. Then seeing it happen again.

Only then do you know when to act and when to wait.

Data can't give you this. Reports can't give you this. AI's probability distributions can't give you this.

Failure knowledge.

AI's training corpus is heavily biased toward success stories.

Failed projects rarely get detailed post-mortems. The real reasons for failure often aren't fully understood even by the founders themselves, let alone documented.

That big client who privately said "we're not going to use this"—that conversation never made it into any document.

That product pivot: the real reason was someone having a bad day, but the written version lays out complete logic and a tidy framework.

AI forms an overly rationalized worldview. It thinks things happen according to logic.

In reality, things happen according to people.

Judgment.

When a plane has a problem, from pilot report to maintenance start, there are 24 human handoff points.

Phone calls, Slack messages, something said verbally, then relayed to the next person. None of these handoffs are fully recorded.

What AI sees is the final maintenance form.

The world on that form and what actually happened are two different things.

Judgment is knowing what to do next at those 24 handoff points, with incomplete information and limited time. Act first, adjust as you go.

If you wait to finish analysis before deciding, the plane has already missed its slot.

AI can give you the best analysis, but it stops at analysis. The rest is human work.

What 83% of Children Are Losing

Back to that statistic: 83% of students using AI-assisted writing can't recall what they just "wrote."

When children outsource writing to AI, they lose not just writing ability, but the entire "fuzzy to clear" thinking process.

Writing is fundamentally a thinking process.

You don't know what you want to say. You sit down, start writing, and gradually your thoughts clarify. You find some things don't make sense, you stop, rethink, reorganize.

Painful, inefficient, but necessary.

Because in this process, you're building tacit knowledge:

  • Which arguments are strong, and which are weak
  • Which examples support your point, and which are just filler
  • Which structures read well, and which don't

No one taught you this. You can't articulate it. But your hands know, your brain knows.

When AI completes this process for you, you get an article but lose the process of "knowing."

So you can't remember what you "wrote."

Because you didn't write it—AI did. You're just the courier.

What Top AI Executives Tell Their Children

I recently saw a WSJ article that asked 5 top AI executives: What advice would you give your own children?

Their children ranged from 6 months to 26 years old. The answers were remarkably consistent.

Daniela Amodei, Anthropic co-founder (two children, 4 years and 6 months):

"When I think about what skills my children will need, I think about human qualities: the ability to interact with people, empathy, and the ability to get along with others. As AI becomes more important in the workplace, human qualities will become increasingly important."

Jaime Teevan, Microsoft Chief Scientist (four children, 17-21 years):

"Metacognitive skills will be very important—flexibility, adaptability, experimental spirit, critical thinking, ability to challenge things. To develop critical thinking, you need some 'friction,' doing difficult things, doing deep thinking. AI is excellent at providing advice, but it can't take responsibility. That's the human role."

Ethan Mollick, Wharton Professor (two children, 16 and 19 years):

"In the AI world, generalist careers composed of multiple skills are good choices. Liberal arts education is more important than ever. Careers are long, there will be many changes, and human adaptability is strong."

Caroline Hanke, SAP Global Head of Organizational Growth (one child, 15 years):

"I deeply believe that agility and openness to change—people who can respond to change and adapt quickly—that's the core capability for the future. As for what to study in college, I'd suggest studying as broadly as possible. If I had to choose, I'd lean toward math, because logical thinking will be necessary in any future role."

Not one person said "have your kids learn coding," "learn prompt engineering," or "learn AI tools."

They all said the same things: empathy, critical thinking, the ability to take responsibility, the ability to adapt to change, a broad education.

All tacit knowledge.

Stanford Research: Tool Design Determines Everything

The Stanford SCALE report had another important finding:

In children's education, Socratic/scaffolding AI far outperforms general-purpose AI.

General-purpose AI (like ChatGPT) gives you the answer directly when you ask a question. Practice with it and you do well in the moment, but poorly on independent tests.

Socratic AI doesn't give you the answer; it guides your thinking. It asks you questions and offers hints, but you have to work out the conclusion yourself.

Students using Socratic AI do well not only in the moment but also on independent tests.

More importantly, their reasoning depth is maintained or improved.

Why?

Because Socratic AI preserves that "fuzzy to clear" thinking process.

It doesn't complete it for you—it accompanies you through it.

So you remember, because you figured it out yourself.

The core of AI education isn't teaching children to use AI, but ensuring they retain the thinking process while using AI.
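To make the contrast concrete, here is a minimal sketch of the two tool designs, assuming an OpenAI-style chat API. The model name, both prompts, and the tutor helper are my own illustrative assumptions, not anything from the Stanford report; the entire design difference sits in one system prompt.

```python
# A minimal sketch of "direct-answer" vs. "Socratic" tool design,
# assuming the openai Python client (v1.x) and an API key in the
# environment. Prompts and model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

DIRECT_ANSWER = "You are a helpful assistant. Answer the student's question."

SOCRATIC = (
    "You are a Socratic tutor. Never state the final answer. "
    "Respond with one guiding question or a small hint, "
    "and ask the student to attempt the next step themselves."
)

def tutor(question: str, system_prompt: str) -> str:
    """Send one student question under the chosen tutoring style."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# Same question, two designs: one hands over the answer,
# the other keeps the "fuzzy to clear" work with the student.
print(tutor("Why is 1/3 bigger than 1/4?", DIRECT_ANSWER))
print(tutor("Why is 1/3 bigger than 1/4?", SOCRATIC))
```

A single prompt decides whether the "fuzzy to clear" work happens in the student's head or inside the model.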

What Should We Teach Children?

Teach children what AI can't learn.

Not because AI isn't powerful enough, but because these things are structurally impossible for AI to learn.

1. Experience and lived reality.

AI can read ten thousand books, but it hasn't lived your childhood, experienced your failures, gone through your low points.

These experiences shape your judgment, your values, your unique perspective.

Let children experience the real world. Interact with people, observe specific people and situations, handle small real-world troubles, experience cooperation, conflict, responsibility, care, commitment.

2. Do difficult things.

Don't let AI complete all the difficult parts for children.

Difficulty itself is part of learning.

Writing is hard, but it's in that "hard" that children learn to think.

Math is hard, but it's in that "hard" that children learn to reason.

If AI dissolves all difficulty, children lose the opportunity to build tacit knowledge.

3. Critical thinking and skepticism.

Don't just accept whatever AI says. Get in the habit of asking: Why do you say that? Where does this information come from? Are there other perspectives? If the answer is wrong, where is it wrong?

In an era of increasingly powerful AI, this ability becomes increasingly important.

4. Taking responsibility.

AI can provide suggestions, list options, analyze things thoroughly.

But who ultimately decides, who bears the consequences, who faces a real person, a real family, a real social environment—that's still human work.

Let children get used to taking responsibility from an early age. Small things like their homework, their room. Big things like their choices, their life.


A Harder Question

Writing this, I realize there's an even harder question.

Building tacit knowledge requires massive amounts of "doing."

You need to write many bad essays to know what a good essay is.

You need to make many mistakes to know what good judgment is.

You need to experience failure to know what real success is.

But today's AI tools dissolve this process.

They make "doing" too easy. So easy you can skip all the painful, inefficient, clumsy stages and get directly to a decent-looking result.

The problem is, those skipped stages are exactly where tacit knowledge forms.

Worse, this process may be irreversible.

A child who grows up used to "AI-assisted homework" may never know what it feels like to think through a problem independently, just as someone who grew up taking elevators may never understand why a stair climber feels that reaching the top is an achievement.

So the real dilemma isn't "should we let children use AI," but:

When AI makes all difficulty optional, how do we get children to still choose difficulty?

I don't have an answer yet.

But I know that if we don't want the next generation to become that 83%, we must make some counterintuitive choices between "efficiency" and "growth."

Let children write essays the clumsiest way.

Let children solve problems the slowest way.

Let children think for themselves first, even when they could just ask AI.

This sounds very counter-trend, even a bit cruel.

But ultimately, tacit knowledge is never "taught"—it's "forged."

AI-era education may not be about getting children to answers faster, but about letting children complete the entire chain of thinking themselves.

Even if it's slow, even if they make mistakes.

That's what growth means.
