Artificial intelligence is transforming education.

At least, that’s the narrative.

We build systems that generate content, answer questions, and scale infinitely. We optimize for efficiency, automation, and performance. And yet, we overlook a fundamental question:

Why do people choose to engage with one system—and reject another?

The uncomfortable answer is this:
Functionality is not enough.

Learners do not interact with “systems.” They interact with perceived entities. They judge competence, trustworthiness, and relevance—not only based on output, but on how that output is presented.

And this is where most AI in education fails.

We design for capability.
But users decide based on perception.

Research has shown that even small differences in representation—appearance, tone, perceived personality—can significantly influence how an intelligent system is evaluated. In learning contexts, these perceptions directly affect motivation, trust, and ultimately learning outcomes.

Yet most systems still follow a one-size-fits-all approach.

The assumption behind this is rarely questioned:
If the system works, people will use it.

That assumption is wrong.

Learners are diverse—not just in knowledge, but in expectations, preferences, and implicit biases. A representation that motivates one person can discourage another. A system one learner perceives as competent may be rejected outright by the next.

So the real challenge is not building intelligent systems.

It is building systems that are accepted.

This is the starting point of my work.

I explore how generative AI can be used to create personalized representations of Intelligent Learning Assistants—systems that adapt not only in what they do, but in how they appear and interact.

This goes beyond simple customization.

It requires:

  • identifying both conscious and unconscious preferences
  • iteratively refining representations through feedback
  • systematically analyzing design features and their effects

Instead of static design decisions, we need adaptive systems that evolve with the learner.

However, personalization is not a silver bullet.

It introduces new risks:

  • reinforcing stereotypes
  • amplifying bias
  • creating misleading perceptions of competence

If we do not understand these dynamics, personalization may do more harm than good.

This blog is not about celebrating AI.

It is about questioning how we design it.

I will share ideas, experiments, and challenges from my work on personalized AI systems—especially where things do not work as expected.

Because the real problem is not whether AI can support learning.

The real problem is whether people are willing to learn with it.