Human–AI Co-Evolution Is Not Automatic. Education Must Change First.

27 January 2026

Lianhe Zaobao, 邱慧娟: "Human–Machine Co-Evolution: Education Must Change First"

By Prof Yow Wei Quin, Head of Humanities, Arts and Social Sciences

 

(Translation)

 

Parents and workers are asking an increasingly anxious question: If AI can already do what I — or my child — am learning, what is the point of learning it at all?

 

The instinctive response has been reassurance. We are told that humans have always adapted to new technologies — from calculators to computers — and that AI will be no different. This is broadly true. But it is also incomplete and increasingly risky.

 

Humans adapt only when education systems evolve fast enough to support that adaptation. When they do not, technology does not elevate human potential; it exposes gaps, deepens inequality, and leaves people competing with machines at precisely the wrong level.

 

This is the real challenge posed by artificial intelligence. Not simply that machines are becoming more capable, but whether we are preparing people for the kind of human contribution that remains valuable in an AI-rich world.

 

Co-evolution does not happen by accident

Throughout history, tools have changed what it means to be intelligent. Calculators shifted emphasis from arithmetic to problem-solving. Search engines reduced the need for memorisation and increased the value of interpretation. Writing transformed memory itself.

 

Psychologist Lev Vygotsky argued that learning happens within a Zone of Proximal Development — the space where people grow through guidance, social interaction, and cultural tools. AI now functions as one such tool, extending what learners can do with support. But it does not replace the human mentorship, judgment, and social context that give learning its depth and meaning.

 

Anthropology tells the same story at a civilisational scale. Each major advance in tools has led humans to reorganise their skills upward. Writing shifted memory from the brain to paper, enabling abstraction and complex reasoning. The industrial revolution automated physical labour and expanded managerial, creative, and relational work. Computers automated calculation, freeing people for strategy and systems thinking.

 

Every generation of tools — from stone axes to steam engines to computers — reshaped human cognition. New tools did not make us weaker; they made us more human by freeing cognitive space for creativity, empathy, imagination, and judgment.

 

But this co-evolution was never automatic. It depended on how societies redesigned learning, work, and responsibility around new capabilities. Societies that benefited were those that adapted their institutions accordingly; those that did not left many behind.

 

AI accelerates this dynamic dramatically. Large language models can already summarise information, generate code, and produce fluent text. The danger is not that humans will become obsolete, but that we will train people to compete with machines at the very tasks machines are best at — instead of preparing them to do what machines cannot.

 

The problem with “AI skills” alone

Much of today’s response to AI focuses on skills training: coding, prompt engineering, or tool proficiency. These matter. But on their own, they are insufficient.

 

Knowing how to use AI is not the same as knowing when to use it, why to trust it, or how to judge its outputs. These are not technical questions. They are human ones.

 

Judgment under uncertainty, ethical reasoning, social understanding, and the ability to make sense of complex, ambiguous situations are becoming more — not less — important. Yet these capabilities are often treated as “soft” or secondary, rather than as core competencies in an AI-rich world.

 

Ironically, as AI takes over more routine cognitive tasks, the uniquely human dimensions of work become harder, not easier.

 

Data scientists are becoming sense-makers: AI detects patterns; humans decide which questions matter and what the answers mean for society. Engineers are becoming strategic decision-makers: AI can write code; humans decide what to build, how people will use it, and what could go wrong. Teachers are becoming designers of thinking: AI can generate content; educators help students question, critique, and apply knowledge.

 

Delegating thinking to machines without strengthening human sense-making risks creating dependence, not augmentation.

 

Designing for co-evolution

If human adaptation is not automatic, then it must be designed. This is where a second generation of thinking about AI — what might be called designing AI for human intelligence — becomes necessary.

 

Here, we start from a simple premise: co-evolution does not happen by accident; it requires intentional redesign of education and work, not just better tools. It asks different questions. What kinds of thinking should humans retain? Where should machines lead — and where must humans remain accountable? How do we assess learning when AI can generate answers instantly?

 

We need to rethink curricula, assessment, and professional development so that AI augments human judgment rather than substitutes for it. Curricula that separate technical fluency from ethics and critical reasoning will produce graduates fluent in tools but weak in judgment. Treating AI as an "add-on" subject misses the deeper shift underway.

 

What is needed instead is an integration of AI literacy with social science, ethics, and critical reasoning — not to humanise machines, but to strengthen humans.

 

A conditional optimism

There is reason for optimism. Human intelligence has always evolved with its tools. AI can free people from routine cognition and elevate work toward creativity, leadership, and meaning.

 

But this outcome is conditional. It depends on whether we are willing to redesign how people learn — and what we value as intelligence — fast enough.

 

The comforting belief that “humans always adapt” should not lull us into complacency. Adaptation is not destiny. It is a choice, shaped by institutions, incentives, and courage.

 

The future of work will not be defined by what AI can do. It will be defined by whether our education systems are brave enough to evolve alongside it — and whether we prepare the next generation not just to use intelligent machines, but to remain intelligently human.

 

  • Yow Wei Quin is the Kwan Im Thong Hood Cho Temple Chair Professor of Healthcare Engineering, Professor of Psychology, Head of the Humanities, Arts and Social Sciences cluster, and Programme Director of Design and Artificial Intelligence at the Singapore University of Technology and Design. She examines how language, environment, and technology shape cognition and social interaction across the lifespan.