“Synthetic Users” – an emerging risk to the integrity of user research

Elizabeth Rousseau, Lead UX Researcher, DFFRNT

Generative AI has created the possibility of using synthetic users in user research. However, synthetic users conflict with the core of Human-Centered Design (HCD): AI-generated personas and responses to testing are devoid of the humanity and empathy crucial to understanding real human experiences.

Real users surprise us: they contradict themselves, hesitate, and veer off-topic. Great ideas can come from the tangents, asides, and off-hand remarks of participants. People have nuance; they respond in more than words, through actions, hesitations, and questions. An AI has never experienced noise, running behind schedule, or worry; it will happily answer any question it is asked. AI lacks the lived human experience that makes real research meaningful. When researchers depend on generated responses, they trade rich exploration for surface-level simulation.

Human-Centered Design is about empathy and discernment: a willingness to step into another person’s shoes and understand their experience. Tools that simulate real people may be convenient for researchers, but that convenience is deceptive. A researcher can produce hundreds of personas using AI tools, and each one can superficially sound like a real person. Synthetic users are approximations of people; an AI will tell you they have exactly 1.5 children. IDEO says it most baldly: “AI is unable to provide the key ingredients of research that are critical to our work.”

Not only would abandoning HCD principles produce inferior results, it could also generate wrong and problematic recommendations. AI-generated personas and dialogue can look convincing but often reflect biases, clichés, or generic assumptions. UX Collective warns that tools like ChatGPT can generate “harmful or unrealistic personas”, leading teams to design for imagined users rather than actual needs.

Do we want research that examines the human condition, the real and often messy experiences of people and how they interact with the world, or do we prefer ‘research’ that only imitates that experience? Using synthetic users may offer speed and ease, but that comes at the cost of genuine understanding and ethical engagement, and it violates the values of HCD.

"If we let AI stand in for people, we’re no longer designing with or for humans."

As researchers, we want our work to be useful, to offer insights and recommendations that improve the tools and services we work on. The goal of research is to make things easier, more intuitive, and more satisfying for the people using the final tools. The principles and approach of HCD enable us to work with this goal in mind.

Whether during interviewing, user testing, or any other research activity, the researcher’s role is to learn from the participant through observation and questions. The researcher does not rely on what people say they do, nor does the researcher ask what participants think other people do.

Meanwhile, what data underpins these synthetic users? Reams of interviews about user behaviour? Or content scraped from the internet, with all nuance lost? Substituting an LLM for a human being is like substituting the statistical, internet-based average family for any real one. If your audience is actual people, you will need to research actual people in order to understand them.

Christopher Roosen provides vivid imagery when he writes, “Besides, do you remember what generative AI’s data is based on? It isn’t a well-organised database, it’s the crazy, acid-drenched, kaleidoscopic phantasmagoria that is the internet. I’ve started to hear a common idea, that these generative AI, based on large language models, are like a blurry image of the internet.”

Without a foundation of real research, without engagement with actual people, and without the work of recognising how individuals’ actions may share common traits with others’, these AI-produced personas cheapen the work of User Experience research. They are form over function.

To stay true to Human-Centered Design, we must resist the pull of quick results that align to our expectations, and instead be empathetic learners willing to put in the time to listen to and connect with others. We have to engage with real people, listen with humility, design in response to lived experience, and adapt our approach based on changing human needs. This is the kind of effort that produces valuable results.
