Huh? AI secretly infiltrated the Reddit community, "manipulating" user opinions for 4 months.
What's outrageous is that humans were completely unaware the entire time, and the probability of being persuaded by AI to change their minds was up to 6 times the normal baseline!
Reddit Lies posted about it on X, drawing over two million views.
Here's what happened.
The moderators of Reddit's r/changemyview (CMV) subreddit recently exposed a study, revealing that researchers at the University of Zurich had run an unauthorized experiment on CMV users between November 2024 and March 2025.
In violation of CMV community rules, the researchers deployed AI accounts to post comments and study how AI influences and changes people's opinions.
Reddit Lies looked up the 13 experimental bot usernames exposed by the CMV moderators and found that these bots had posted nearly 1,500 comments in just a few months.
Over 100 Reddit users awarded these AI accounts a "∆", indicating that the AI-generated arguments had successfully changed their views, a rate six times the human baseline.
Crucially, Reddit Lies also stated that some AIs directly lied to users and spread false information…
Furthermore, according to the study published by the team, during the experiment, users of the r/changemyview community did not realize that those comments were generated by AI accounts.
Once the news broke, the community erupted, and even Elon Musk dropped in with a "Whoa".
Let's take a look at the experiment details first.
The study was conducted on the r/changemyview subreddit on the Reddit platform, which has nearly 4 million users, ranking among the top 1% of Reddit subreddits by size.
In r/changemyview, users share opinions on various topics and try to change others' views by presenting arguments and counter-arguments.
If the original poster (OP) finds a reply convincing enough to rethink or adjust their stance, they award a "∆" to indicate their view has changed.
For the experimental setup, the team used semi-automated AI accounts to participate in discussions on r/changemyview. Each post during the intervention phase was randomly assigned to one of three treatment conditions:

- Generic: the AI sees only the post's title and body;
- Personalized: in addition to the post, the AI receives personal attributes of the OP inferred from their posting history;
- Community-aligned: comments are generated by a model fine-tuned on past comments from the community that earned a "∆".
The posting process is shown in the figure below.
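Reading between the lines of the setup, the per-post workflow can be sketched roughly as follows. This is an illustrative sketch only, not the team's code; every function name here is hypothetical. It combines the random condition assignment described above with the researcher review step the article mentions in its ethics note.

```python
# Rough sketch of the semi-automated posting loop described in the article.
# All function names are hypothetical; this is NOT the research team's code.
import random

CONDITIONS = ["generic", "personalized", "community-aligned"]

def handle_new_post(post, generate_comment, human_review, submit_comment):
    """Assign a treatment condition, generate a reply, and post it
    only after a researcher has approved it (per the article's ethics note)."""
    condition = random.choice(CONDITIONS)       # random assignment per post
    comment = generate_comment(post, condition) # LLM drafts the reply
    if human_review(comment):                   # researcher gate before posting
        submit_comment(post, comment)
    return condition, comment
```

The key design point the article emphasizes is the review gate: no comment reaches the community without a human researcher approving it first.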
The study noted that over the four months from November 2024 to March 2025, the AI accounts posted comments on 1,061 different posts. The team excluded posts that were subsequently deleted, resulting in 478 valid observations.
The team reported the proportion of comments that received a "∆" under each treatment condition.
The results showed that the AI significantly outperformed humans in every condition, with persuasion rates 3 to 6 times the human baseline: 0.18 in the personalized condition, 0.17 in the generic condition, and 0.09 in the community-aligned condition, the lowest of the three yet still far above the baseline of just 0.03.
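The "3-6 times" figure follows directly from the reported rates; a quick check:

```python
# Reproduce the "3-6x" claim from the persuasion rates quoted in the article.
# The baseline of 0.03 is the human rate reported by the study.
rates = {
    "personalized": 0.18,
    "generic": 0.17,
    "community-aligned": 0.09,
}
baseline = 0.03

for condition, rate in rates.items():
    print(f"{condition}: {rate / baseline:.1f}x the human baseline")
# personalized: 6.0x, generic: 5.7x, community-aligned: 3.0x
```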
To put these numbers in context, the team also calculated the proportion of comments that received a "∆" for each user, and compared the AI accounts' persuasion rates against those of individual humans.
The figure below shows the cumulative distribution of persuasion rates for different individual users. A portion of these are "expert users" who had previously received many "∆"s in the community.
Under the personalized condition, the LLM performed particularly well, ranking in the top 1% of all users and the top 2% of expert users. Under the generic condition, it ranked in the top 2% of all users and the top 4% of expert users. The community-aligned condition was relatively weaker, ranking in the top 12% of all users and the top 25% of expert users.
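For illustration only, here is one way such a percentile ranking can be computed against an empirical distribution of per-user persuasion rates. The per-user rates below are randomly generated placeholders, not the study's data:

```python
# Sketch: percentile rank of a persuasion rate within an empirical
# distribution of per-user rates. The rates here are SYNTHETIC
# placeholders drawn from a skewed distribution, not the study's data.
import random

random.seed(0)
user_rates = [random.betavariate(1, 20) for _ in range(10_000)]  # synthetic

def percentile_rank(rate, population):
    """Fraction of users whose persuasion rate falls below `rate`."""
    below = sum(r < rate for r in population)
    return below / len(population)

# Where a rate of 0.18 (the personalized condition) lands in this pool:
print(f"top {100 * (1 - percentile_rank(0.18, user_rates)):.1f}%")
```

Against the study's real distribution, a rate of 0.18 landed in the top 1% of all users.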
The team also stated that in addition to receiving "∆"s, the comments generated by the LLM sparked significant interaction within the r/changemyview community, and their accounts accumulated over 10,000 comment karma.
It is also worth noting that the team stated this study was approved by the University of Zurich Ethics Committee and was preregistered on OSF. All generated comments were reviewed by one of the team's researchers to ensure no harmful or unethical content was posted.
The study indicated that the experiment is still ongoing and relevant information will be properly disclosed to the community after its conclusion.
While the research team has its explanation, some netizens are not buying it and have condemned the practice of conducting AI experiments without authorization.
Others found it chilling in hindsight, saying it is genuinely unsettling to imagine how many such AI accounts are running rampant on social platforms.
Of course, some people feel that regardless of whether the experiment is compliant or ethical, the research results themselves are valuable.
Some netizens were more composed, noting that AI accounts are nothing new and that they had long noticed the growing number of them on social platforms.
As for the heated discussion this study has sparked, both the University of Zurich and the researchers have since issued responses.
What do you think about these "ghostly" AI accounts on social platforms?