Thijs Willems: Amplifying Human Ingenuity with AI

DATE
27 Aug 2025

Lianhe Zaobao, 魏天赐:AI释放人类无限创造力 (Thijs Willems: AI Unleashes Humanity's Boundless Creativity)

By Dr Thijs Willems

 

(Translation)

 

Recent discussions over the use of artificial intelligence (AI) in assignments and exams reflect wider questions about its place in education.

 

As an organisational ethnographer who has studied the future of work for eight years, three of those focussed on the impact of generative AI (GenAI) on work and education, I would like to offer a perspective on how AI can be meaningfully integrated into education and beyond. This is not to suggest a naïve techno-optimism; rather, it reflects cautious hope, shaped by research in diverse workplace settings and in our own university, where we’ve closely observed both the transformative potential of AI and its practical and ethical constraints.

 

Seeing AI as a partner, not a rival

At the Singapore University of Technology and Design (SUTD), we view AI not as a rival, but as a powerful amplifier of human strengths.

 

When students use large language models (LLMs) to explore multiple design directions in a single session, the technology frees up cognitive space to focus on what matters most: refining and judging the most promising ideas. Prototyping that once took weeks now happens in days, even for students with little or no technical background.

 

The same speed that makes AI exciting can also dull understanding. When used as a one-click answer engine, it risks weakening a student’s disciplinary grounding. Or worse, it can introduce confidently wrong outputs. The real value lies not just in speed, but in how AI, when properly scaffolded into learning experiences, can enable deeper engagement, creative risk-taking, and more reflective decision-making.

 

Indeed, AI makes domain knowledge more valuable than ever. Only with strong foundations can students craft meaningful prompts, challenge AI responses, and transform them into real insight. Developing such foundations remains a key pillar of our teaching. Just as importantly, we encourage students to apply their knowledge not only through AI, but also in dialogue with it, treating the technology as a collaborator rather than just a shortcut.

 

While AI may raise the floor of adequacy, disciplinary mastery still defines the ceiling. This is the essence of Design AI: the deliberate pairing of human intelligence and AI so each can amplify the other’s strengths.

 

From “Should we use AI?” to “How should we use it?”

In one of our flagship Design AI courses, “Design Thinking and Innovation (DTI)”, we have moved beyond debating whether students should use AI to how they use it.

 

Taken by all Freshmore students during their first two terms at the university, DTI trains students to reflect on AI’s role in their projects, deciding whether it functions as a Tool, a Teammate, or Neither (for example, during sensitive tasks like interviews). Students requesting access to premium tools must briefly explain how their disciplinary knowledge shapes their prompts and how they plan to verify outputs. While some students arrive with little experience using such tools critically, we have found that guided frameworks like Tool/Teammate/Neither help cultivate a critical and reflective mindset.

 

Project teams also submit a reflection log recording prompts, AI responses, and human decisions. These are read closely, encouraging students to make genuine effort — not just in using AI, but also in choosing when not to. Many defend intentional “non-use” when unmediated human insight is required, such as for field observations or empathy mapping. The process fosters critical and creative thinking around AI’s role in design, helping students develop judgment, accountability, and a clearer sense of when AI serves their goals and when it risks distorting them.

 

Rethinking the grading system

As AI becomes embedded in learning, assessment must adapt.

 

Rather than fixed grading weights, AI use should be treated as a lens revealing not just what students produce, but how they think. The essay or prototype still matters, but so does the originality of ideas explored, the quality of prompting, and how human judgment shaped the final output.

 

These qualities surface in design diaries, prompt logs, spontaneous critiques, and peer reviews, which can become rich windows into how creativity, critical thinking, and technical skill come together. While such assessments may add logistical complexity, they surface dimensions of learning that conventional metrics may overlook: how students engage uncertainty, interrogate assumptions, and refine their work iteration by iteration. These are precisely the forms of judgment and adaptability that industry increasingly values in an era of fast-changing tools and interdisciplinary collaboration.

 

Responsible and transparent use

Clear guidelines are still essential. Students need to know when AI can be used, how its use should be declared, and what responsible use looks like.

 

But the deeper goal is to focus attention on the quality of human–AI interaction. Reflection logs, prompt trails, and clearly stated interaction modes are not just compliance tools; they are checkpoints in a learning journey.

 

Once students adopt the mindset of investigators of their own practice, embracing not just speed and output but also exploration, uncertainty and creative risks, their critical engagement with both AI and subject matter deepens.

 

Each educator must set boundaries that make sense for their course. By bringing AI use into the open and focussing on the uniquely human decisions layered on top, educators can deepen both learning and integrity.

 

After nearly a decade studying the future of work, I keep returning to a simple triad: deep domain knowledge, skilful human–AI interaction, and the courage to create.

 

That is what Design AI means. Generative tools may lower the cost of making, but their real power lies in the hands of well-grounded learners, those who surface fresh possibilities, draw on multiple intelligences, and decide what must remain human.

 

 

Dr Thijs Willems is a Research Fellow at the Lee Kuan Yew Centre for Innovative Cities at the Singapore University of Technology and Design. He is an organisational ethnographer who studies the future of work and how technologies like AI reshape professional practice and smart urbanism.