Deepfakes are more than Donald Trump’s foot fetish for Elon or Joe Biden playing video games with a hall of Presidents; they are way more than just some Internet novelty act. These AI-generated videos (the subject of this week’s study) mimic real people, making them incredibly powerful tools for misinformation.
This is obviously concerning in an era where misinformation spreads fast, and where many people’s only exposure to political information is via social media.
This study examines how prior exposure to deepfakes and social media news consumption interact to amplify the Illusory Truth Effect (ITE), the tendency to accept information as true simply because we have seen it before.
Think propaganda.
Using data from eight countries, the researchers assess whether reliance on social media for news consumption makes individuals more susceptible to believing deepfakes, regardless of their cognitive ability.


Title: The Power of Repetition: How Social Media Fuels Belief in Deepfakes
Link: Journal of Broadcasting & Electronic Media
Peer Review Status: Peer-reviewed
Citation: Ahmed, S., Bee, A. W. T., Ng, S. W. T., & Masood, M. (2024). Social Media News Use Amplifies the Illusory Truth Effects of Viral Deepfakes: A Cross-National Study of Eight Countries. Journal of Broadcasting & Electronic Media, 68(5), 778–805. https://doi.org/10.1080/08838151.2024.2410783

METHODOLOGY
The study surveyed 8,070 participants from the U.S., China, Singapore, Indonesia, Malaysia, the Philippines, Thailand, and Vietnam.
Participants were shown four viral deepfakes—both political (Putin) and non-political (Kardashian) examples—and asked to rate their accuracy. The researchers measured:
- Whether participants had previously seen the deepfakes
- Their level of engagement with news on social media
- Their cognitive ability, using a standard vocabulary test (Wordsum)
Control variables included age, gender, education, income, traditional media use, and political interest.
The goal was to determine whether repeated exposure to deepfakes led to increased belief in their authenticity and whether social media use amplified this effect. They also examined whether cognitive ability moderated these effects.
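To make the moderation logic concrete, here is a minimal sketch of the kind of interaction analysis described above, using simulated data. The variable names (`prior_exposure`, `sm_news`, `cog_ability`, `belief`) and the data itself are illustrative assumptions, not the authors’ actual model or dataset:

```python
# Hypothetical sketch of a moderation (interaction) analysis.
# All variable names and data are illustrative, not from the study.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "prior_exposure": rng.integers(0, 2, n),  # saw the deepfake before? (0/1)
    "sm_news": rng.uniform(1, 5, n),          # social media news use (scale)
    "cog_ability": rng.uniform(0, 10, n),     # e.g., a Wordsum-style score
})
# Simulated outcome: belief rises with prior exposure, and the exposure
# effect grows with social media news use (the hypothesized amplification).
df["belief"] = (
    1.0
    + 0.5 * df["prior_exposure"]
    + 0.2 * df["sm_news"]
    + 0.3 * df["prior_exposure"] * df["sm_news"]
    + rng.normal(0, 1, n)
)

# The interaction terms test whether social media news use (or cognitive
# ability) moderates the effect of prior exposure on rated accuracy.
model = smf.ols(
    "belief ~ prior_exposure * sm_news + prior_exposure * cog_ability",
    data=df,
).fit()
print(model.summary())
```

A positive, significant coefficient on the `prior_exposure:sm_news` term would indicate amplification: the more someone relies on social media for news, the larger the boost that prior exposure gives to belief.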
RESULTS AND FINDINGS
The study found strong evidence for the Illusory Truth Effect (ITE) across all eight countries.
- Prior Exposure Increases Belief: Those who had previously seen a deepfake were more likely to rate it as accurate than those seeing it for the first time.
- Social Media News Use Amplifies ITE: Heavy reliance on social media for news significantly increased the likelihood of believing deepfakes, even after controlling for cognitive ability. The effect held in six of the eight countries; China and Malaysia were the exceptions.
- Cognitive Ability Doesn’t Help Much: Higher cognitive ability had only a weak protective effect against belief in deepfakes. Even individuals with high cognitive ability were more likely to believe deepfakes if they frequently engaged with news on social media.
- Cross-National Differences: Participants from China were the most likely to believe deepfakes, possibly due to the country’s controlled media environment. In contrast, Singaporeans were the least likely to be deceived, potentially due to high digital literacy and government efforts to combat misinformation.
CRITIQUES AND AREAS FOR FUTURE STUDY
First, surveys don’t establish causality. Future research could use experimental designs to better understand the causal mechanisms behind ITE and deepfake susceptibility.
Second, the study relied on self-reported measures of social media news engagement and cognitive ability, which may introduce bias. It is my experience that my college-age children horribly underestimate the amount of time they spend swiping. Future studies could use behavioral data, such as actual social media usage patterns, to complement self-reports.
Third, the study used a single measure of cognitive ability (vocabulary), which may not capture the full construct. Future studies could use additional or broader measures.
Finally, while the study accounts for different political and social media environments, closer examination of specific national factors would be a fruitful direction for deeper research.
CONCLUSION
Setting aside ethical concerns of bombarding people with Kardashian videos, this research highlights a concerning trend: the more we see deepfakes, the more likely we are to believe them. Even the smartest among us are not immune. (We also know this from studies of motivated reasoning.)
These results are not surprising since we know that advertising and propaganda work through high-frequency repetition and familiarity.
As I tell my students, what and who you surround yourself with, you will likely become. The issue is most of them aren’t making a conscious choice; a black box algorithm is making it for them and shaping their perceptions.
This line of research underscores the need for better misinformation detection tools and education efforts to help individuals critically evaluate digital content.
Policymakers must begin to take these issues seriously. It’s more than just community notes and fact-checking; it is about reducing repeated exposure to misinformation. Yes, a social media platform’s “engagement” will be affected, but the picture emerging is that “engagement” is extremely harmful.
To do nothing and expect a better result is foolish.