
When AI Becomes Infrastructure, What Should Children Really Learn?

Mar 12, 2026

A few days ago, in an AI discussion group a friend pulled me into, I saw a question that felt painfully real.

Parents want their children to start learning AI early.

At the same time, they do not want those same children to depend on it too much.

I understand that tension completely.

On one side, everyone can feel how fast this wave is moving. If you cannot use these tools today, you may already feel behind tomorrow. If a child starts too late, it is easy to feel as if they have missed the first window.

On the other side, the fear is immediate and instinctive.

What if children rely on AI so much that they stop thinking for themselves? What if they read less, write less, question less? What if every answer from AI starts to feel like the final answer?

Those worries are not irrational.

But the more I sit with this, the more I think the hardest part is not really about dependence.

It is this:

If AI slowly becomes infrastructure, what are we actually supposed to teach children?

Dependence is probably not something we can avoid

The moment people talk about AI and education, they tend to set up a simple opposition between exposure and dependence.

The ideal picture sounds something like this: children learn how to use AI well, but somehow remain fully independent, treating it as a helper and never as part of the fabric of daily life.

That sounds nice.

It is also probably not how technology works.

Every important technology arrives with the same kind of anxiety.

When cars spread, people worried that humans would lose the habit of walking.

When computers entered offices and homes, people worried that children would stop writing by hand.

When the internet became normal, people worried that memory, reading and concentration would fall apart.

Those fears were not entirely wrong.

Every technological shift does outsource some human ability.

But once a technology becomes infrastructure, the conversation changes. People stop asking whether we should depend on it at all. They start asking how to use it well, and which abilities still need to stay with us.

Almost nobody now says, in any serious way, “Do not depend too much on computers, or you might forget how to write.”

That is because computers are no longer optional gadgets. They are part of how the world runs. You would not refuse to let a child use electric light because they might become too dependent on electricity. You would not ban search engines because it would be better, in theory, to look up everything by hand.

I think AI is moving in the same direction.

It is still unstable in many ways. It still hallucinates. The product forms are still changing fast. But it is already starting to sink into the lower layers of life, work and learning in the way water, electricity, networks and smartphones once did.

So from that angle, dependence is probably not the real question.

It is going to happen sooner or later.

The real question is simpler and harder:

When dependence becomes normal, what should human beings still keep for themselves?

The awkward part is that adults do not know how much to trust it either

This, to me, is what makes the current moment so difficult for parents.

With many older education debates, adults at least felt they had solid ground under their feet.

Should children read more? Exercise more? Watch less television? Practise handwriting? Even if people disagreed in detail, most adults felt they knew the general direction.

With AI, many adults do not.

AI keeps getting stronger. It seems to know more. It speaks with growing confidence and fluency. Very often, its explanations, summaries and suggestions sound more complete and more organised than what an ordinary person could produce on their own.

And that is exactly the problem.

The more information AI produces, and the more fluently it presents it, the harder it becomes for ordinary people to tell what is true, what is false, and what only sounds true.

It gets worse when the source material itself is flawed, biased or out of date. AI can take those weaknesses and package them into something smoother and more persuasive. That often makes doubt harder, not easier.

So parents are stuck with a problem that is uncomfortable to admit:

If adults themselves do not know when AI should be trusted, how are they supposed to teach children?

That is why I do not think the heart of education can be “teaching children how to use AI.”

The tool layer changes too fast.

The most popular product this year may be replaced next year. A prompting trick that feels useful today may disappear into the default system behaviour six months later. The hard-earned “playbook” an adult writes down today may be outdated long before a child reaches adulthood.

So I do not think the real centre of gravity is there.

The stronger AI gets, the more education has to return to older things

Recently I read a strong summary of how some people at the front edge of the AI industry think about their own children.

What struck me was that they were not obsessed with teaching the newest technical skill first.

The things they seemed to value most looked, in some sense, older:

empathy, communication, the ability to solve real-world problems, logic, the ability to ask good questions, broad rather than narrow understanding, and the willingness to take responsibility.

I keep coming back to that.

Because it points to something important.

The closer people are to the frontier of technology, the easier it may be for them to see a basic truth:

Tools will keep getting stronger, cheaper and more embedded in everyday life. Human value may move in the opposite direction, toward qualities that do not feel tool-like at all.

Take empathy.

AI can imitate comfort. It can mimic tone. It can produce language that sounds caring. But it has no lived experience and does not bear the human consequences of a relationship. Trust, understanding and companionship do not appear just because a sentence sounds warm.

Take the ability to ask questions.

The better AI gets at supplying answers, the more valuable it becomes to know what question is worth asking in the first place.

Take responsibility.

AI can generate options. It can suggest a path. It can make an argument sound polished and convincing. It may even help automate investment decisions or operational choices. But in the end, someone still has to decide. Someone still has to bear the consequences. Someone still has to face a real family, a real loss, a real person, a real social situation.

And then there is judgment itself.

Children are almost certainly going to interact with AI more and more. They will ask it questions, use it for tutoring, use it to write, use it to search, use it to think through choices. That trend is difficult to imagine reversing.

Which means the more normal AI becomes, the more important it is that children build an inner frame of reference of their own.

What matters and what does not. What is right and what is wrong. What deserves long-term effort and what is only a short-term temptation. When to trust a tool and when to hesitate.

None of that grows automatically out of an AI answer.

It grows out of life. Family. Reading. Conversation. Observation. Mistakes. Reflection.

That is why I now think about education in the AI era this way:

It is not mainly about raising a child who is better at using tools.

It is about raising a child who is less likely to be pulled off course by them.

For ordinary families, the answer may be more practical than grand

If we bring this back down to everyday family life, I do not think the response has to be complicated.

First, do not make “knowing how to use AI” the educational goal in itself.

Let children use it, yes. But using AI should not mean turning it into a grade machine, an answer machine or a shortcut machine. It is better understood as part of a new environment, part of a new infrastructure. Children will use it anyway. The real question is not how early they learn to press the buttons. It is whether their mind is still present while they use it.

Second, protect their habit of questioning.

Not just accepting what AI says, but asking back: why do you say that? Where does that information come from? What is another angle? If this answer is wrong, what is wrong with it?

Third, keep them in contact with the real world.

Let them deal with actual people. Let them watch how real situations unfold. Let them handle small frustrations, conflicts, responsibilities, care and commitment. AI will become stronger in digital environments, but many real problems will still require a human being to be there in person.

Fourth, put even more weight on values and logic than before.

The real danger in the future may not be that a child cannot use AI. It may be that they can use it very well but have no judgment, absorb huge amounts of information but have no stable reasoning, speak more fluently while becoming thinner inside.


The longer I think about it, the more I feel that the deepest parental fear should probably not be, “Will my child rely on AI too much?”

That part is likely to happen anyway.

The harder question is this:

When that dependence arrives, will the child still have a standard of their own?

Will they be able to tell what is worth doing? Will they be able to judge whether an answer is reliable? Will they still be able to keep some direction of their own as tools become more powerful?

If I had to compress the whole argument into one sentence, it would be this:

Once AI becomes infrastructure, the central task of education is not stopping children from relying on it. It is making sure they still have judgment while they do.

That may not be the most comforting answer.

But I suspect it is the more honest one.


QiDi

Trusting the journey. From Beijing to Japan, I’ve traded one chapter for another to build a new life here. This is where I document my story of starting over. | Everything happens for the best. From drifting in Beijing to drifting in Japan, I am starting a new chapter of life and telling my own story.