The Angry Emails About Synthetic Research

I sent a study to close to 2,000 newsletter subscribers. Got some furious emails back. Unsubscribes. Barely contained anger.

Not about the content of the study. About the fact that we’re using synthetic personas for market research.

The responses fell into two camps:

  • “I am a researcher and this is threatening my livelihood”
  • “I have tried ChatGPT, and the results are unreliable”

Based on everything I’m seeing, synthetic research (while still in its infancy) is inevitable. Like much in AI, it’s deeply misunderstood. That misunderstanding is often fueled by people who equate AI with “easy” and have been disappointed by subpar answers from ChatGPT.

These concerns are valid.

Researchers have spent careers building expertise that matters. That’s real. And yes, ChatGPT alone produces unreliable research. Also real.

But synthetic research done right (calibrated, validated, and designed by people who understand research methodology) is a different category.

I wouldn’t dismiss either group’s concerns. Researchers need to be part of this conversation. And people who got burned by ChatGPT need to understand that what we’re doing isn’t the same thing.

We need both groups to get this right.

The future isn’t researchers versus AI. It’s researchers using better tools to do better work, faster.

But we have to build that future together, not against each other.
