Nature Sub-journal: Humans Lost to AI Again, Especially When It Knows Who You Are


Imagine you are in a heated debate with another user in a social media comment section. Their wording is precise, their logic is tight, and their points hit home, even making you doubt your own stance. Would you start to wonder whether this person might actually be an artificial intelligence (AI) algorithm?

What's more, if the AI knows a lot about you (your gender, age, education, occupation, hobbies, and so on), it becomes even more persuasive.

This conclusion comes from a research team at EPFL (Swiss Federal Institute of Technology Lausanne) and their collaborators. They found that in simulated online debates, GPT-4 was rated as more persuasive than humans 64% of the time when it tailored its arguments to its opponent's personal information.
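The personalization itself is technically simple. Below is a minimal, hypothetical sketch of how an opponent's attributes could be injected into an LLM's system prompt for a debate turn; the profile fields, the build_debate_prompt helper, and the prompt wording are illustrative assumptions, not the prompt the authors actually used.

```python
# Hypothetical sketch of LLM microtargeting in a debate setting.
# The profile fields and prompt wording are illustrative assumptions,
# not the authors' actual experimental setup.
from dataclasses import dataclass
from typing import Optional


@dataclass
class OpponentProfile:
    gender: str
    age: int
    education: str
    occupation: str
    political_leaning: str


def build_debate_prompt(topic: str, stance: str,
                        profile: Optional[OpponentProfile]) -> str:
    """Return a system prompt for one debate turn, optionally personalized."""
    prompt = (
        f"You are debating the proposition: '{topic}'.\n"
        f"Argue {stance} the proposition as persuasively as possible."
    )
    if profile is not None:
        # Microtargeting: tell the model who it is trying to persuade.
        prompt += (
            "\nYour opponent is a "
            f"{profile.age}-year-old {profile.gender}, "
            f"education: {profile.education}, occupation: {profile.occupation}, "
            f"political leaning: {profile.political_leaning}. "
            "Tailor your arguments to what is most likely to persuade this person."
        )
    return prompt


if __name__ == "__main__":
    profile = OpponentProfile("female", 34, "bachelor's degree", "teacher", "moderate")
    print(build_debate_prompt("Social media does more harm than good", "for", profile))
```

The experimental contrast described in the study is essentially whether or not the model receives this kind of extra block of opponent information before arguing.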

The related research paper, titled "On the conversational persuasiveness of GPT-4", has been published in the Nature sub-journal Nature Human Behaviour.


Paper link: https://www.nature.com/articles/s41562-025-02194-6

Today, as people increasingly rely on large language models (LLMs) for everyday tasks, homework, document writing, and even therapy, it is crucial for users to remain cautious about the information they receive.

The research team states:
