As a side gig, I teach college politics. The minutes before class starts offer a glimpse into what shapes young minds today.
Recently, a group of young men discussed what their social media feeds were showing them.
The conversation concerned me enough that I created a test account posing as an 18-year-old male. Within a day, the algorithm buried this account in AI-generated hoaxes, conspiracy theories, and porn bots. I deleted it immediately.
We have discussed studies showing that people find obvious bots credible when exposed to them repeatedly. This observation raised a harder question: How much nonsense does it take to change actual behavior?
A study from Taiwan provides a hint.
The Matrix Choice
Politics presents daily choices between accepting comfortable narratives and investigating uncomfortable truths. The 1999 film The Matrix dramatized this as choosing between the red pill and the blue pill. During the COVID-19 pandemic, these choices moved beyond cinema into public health. Decisions to accept or reject vaccines depend on lifestyle, perceived risk, side effects, and information found online. Misinformation spreads like a plague and pushes individuals toward self-harming health decisions. A study from Taiwan quantifies this effect and identifies the moment when fake news begins to suppress vaccination rates.
Study Profile
- Title: The Prevalence and Impact of Fake News on COVID-19 Vaccination in Taiwan: Retrospective Study of Digital Media
- Link: https://www.jmir.org/2022/4/e36830
- Peer Review Status: Peer Reviewed
- Citation: Chen, Yen-Pin, Yi-Ying Chen, Kai-Chou Yang, Feipei Lai, Chien-Hua Huang, Yun-Nung Chen, and Yi-Chin Tu. 2022. “The Prevalence and Impact of Fake News on COVID-19 Vaccination in Taiwan: Retrospective Study of Digital Media.” Journal of Medical Internet Research 24 (4): e36830.
The Problem: A Flood of Information
Taiwan received its first batch of COVID-19 vaccines on March 3, 2021. Public vaccination began on June 12, 2021. This period coincided with the first large-scale community infection wave in the region. The internet filled with news about the virus and vaccines. Researchers observed that misinformation about side effects and disease severity lowered vaccination intent. Previous studies relied on surveys to track these attitudes. This study aimed to quantify the relationship between the reach of misinformation and actual vaccination decisions.
Methodology
Researchers conducted a retrospective study of Taiwan’s entire population: 23 million people. They analyzed digital media from March 1, 2021, to December 25, 2021.
The team used the Islander news analysis system to process the data. This system has three parts: a web crawler to collect news in real time, an analysis model to assess how objective each item is, and a user interface. The Islander system uses a deep learning language model called RoBERTa. Experts trained this model on the Chinese valence-arousal text data set (CVAT) to identify bias and subjective claims.
The study analyzed:
- 791,183 COVID-19 and vaccine news items.
- 26 different digital media sources.
- Google Trends scores to measure how many people searched for and encountered this news.
Researchers resampled daily data into weekly intervals. They used multivariable linear regression to find connections between available doses, news quality, search trends, and the following week’s vaccination numbers.
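The resampling and regression steps can be sketched in a few lines. This is an illustrative reconstruction with synthetic data, not the authors' code or dataset; the column names and coefficient labels are my assumptions, and only the model structure (availability, fake-news share, search trends, and a fake-news x trends interaction predicting the following week's doses) follows the study's description.

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for the study's daily media metrics (hypothetical
# column names and values; the real data came from the Islander system).
rng = np.random.default_rng(0)
days = pd.date_range("2021-03-01", "2021-12-25", freq="D")
daily = pd.DataFrame({
    "doses_available": rng.poisson(50_000, len(days)),
    "fake_news_pct": rng.uniform(30, 45, len(days)),
    "google_trends": rng.uniform(0, 100, len(days)),
    "doses_given": rng.poisson(40_000, len(days)),
}, index=days)

# Resample daily data into weekly means, as the researchers did,
# and predict the FOLLOWING week's doses from this week's environment.
weekly = daily.resample("W").mean()
weekly["doses_next_week"] = weekly["doses_given"].shift(-1)
weekly = weekly.dropna()

# Design matrix: intercept, availability, fake news, trends, interaction.
X = np.column_stack([
    np.ones(len(weekly)),
    weekly["doses_available"],
    weekly["fake_news_pct"],
    weekly["google_trends"],
    weekly["fake_news_pct"] * weekly["google_trends"],
])
beta, *_ = np.linalg.lstsq(X, weekly["doses_next_week"], rcond=None)
print(dict(zip(
    ["intercept", "availability", "fake_news", "trends", "interaction"],
    beta.round(3))))
```

The interaction column is the key design choice: it lets the effect of search activity on vaccinations depend on how polluted the news environment is that week.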
Key Findings: The 39.3% Threshold
The study revealed a clear link between the volume of fake news and the number of doses administered.
Fake News Spikes During Crisis: The proportion of fake news rose from 35.8% to 37.7% once public vaccination began. Eleven of the 26 media sources showed a significant increase in fake news during this stage.
Interaction Matters: Fake news alone does not stop a vaccine drive: people must find it. The interaction between high search volume (Google Trends) and high fake news percentages predicted a drop in vaccinations the following week.
The Tipping Point: The researchers used the Johnson-Neyman procedure to find a specific threshold. When the percentage of fake news in the digital environment exceeded 39.3%, high search activity had a significant negative effect on vaccination doses.
The regression model showed a positive coefficient for vaccine availability ($\beta = 0.98, P = .002$) but a negative coefficient for the interaction between fake news and search trends ($\beta = -3.21, P = .04$).
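The Johnson-Neyman logic can be sketched numerically. In a model with an interaction, the marginal effect of search volume is not a single number: it equals the search coefficient plus the interaction coefficient times the fake-news share, and the procedure finds the share at which that effect becomes significantly negative. In the sketch below, only the interaction coefficient ($\beta = -3.21$) comes from the study; the main-effect coefficient and the variance terms are hypothetical values chosen so the illustration lands near the paper's threshold.

```python
import numpy as np

# Hypothetical regression outputs; only b_int = -3.21 is from the study.
b_search, b_int = 1.09, -3.21          # main effect and interaction
var_s, var_i, cov_si = 0.04, 0.25, -0.09  # made-up (co)variances

f = np.arange(0.30, 0.45, 0.0001)      # fake-news proportion
effect = b_search + b_int * f          # conditional effect of search
se = np.sqrt(var_s + 2 * f * cov_si + f**2 * var_i)
upper = effect + 1.96 * se             # upper bound of the 95% CI

# Johnson-Neyman region: smallest f where even the upper bound is < 0,
# i.e., where search activity significantly suppresses next-week doses.
threshold = f[upper < 0].min()
print(f"significantly negative above {threshold:.1%}")
```

Below the threshold the confidence interval for the search effect still includes zero; above it, more searching reliably predicts fewer doses the following week.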
“Reducing the amount of fake news and increasing public immunity to misinformation will be critical to maintain public health in the internet age.”
Deep Dive: Behavioral Psychology and Search Trends
Why does 39.3% matter? The study suggests this point represents where public resistance to misinformation fails. When people search for vaccine information, they encounter associated links and recommendation systems. If nearly four out of every ten articles use subjective or inciting language, the collective weight of those stories changes minds.
The study emphasizes that misinformation does not act in a vacuum. It is the interaction between the prevalence of fake news and the magnitude of information propagation (measured by Google Trends) that drives behavior. When the percentage of fake news is low, searches can actually lead people to helpful information. However, once the environment is saturated with fake news, increased searching actually accelerates the decline in vaccination doses.
Media outlets often adopt attractive titles and sentimental discourse to increase click-through rates. This commercial preference undermines public trust. The internet has accelerated a shift toward biased and affective reporting. In August 2021, Taiwan saw a slow vaccination rate when fake news reached 38.5%, approaching the danger zone.
Practical Implications for Policy Makers
- Monitor Digital Quality: Use automated systems like Islander to track the “suspicion score” of the information environment.
- Address the Tipping Point: Take action before fake news reaches the 39% mark to prevent a decline in health program participation.
- Promote Objectivity: Encourage news outlets to return to the essence of journalism to maintain social credibility.
- Hold Platforms Accountable: Section 230 protection made sense when platforms were neutral conduits. Algorithmic curation is editorial choice. Platforms choose what to amplify. They profit from engagement metrics that reward sensationalism over accuracy. If a news outlet faces consequences for publishing lies, why do platforms escape liability for systematically promoting them? They choose profit over truth. Perhaps they should choose accountability instead.
Practical Implications for Public Affairs Officials
- Inoculation: Overcome the hesitation around psychological inoculation and use the tactic preemptively, exposing audiences to weakened forms of misinformation before they meet the real thing.
- Strengthen Public Immunity: Treat news analysis systems like an attenuated vaccine for the mind. These tools help the public identify media goals and think critically.
- Counter Search Trends: Search volume drives the spread of misinformation. Officials must populate search results with high-quality, objective data.
- Focus on Media Literacy: Develop programs that teach people to recognize “incitement scores” and “suspicion levels” in their daily news feeds.
Critiques and Future Research
This study provides a roadmap for understanding the infodemic, but it has limitations. Researchers lacked detailed demographic data on vaccine recipients. They could not determine if certain age groups or education levels were more susceptible to digital news.
The Islander system is currently restricted to Chinese news. Adapting these findings to Western societies or other languages requires further study and frankly raises privacy concerns. Future work should test inoculations against information flows to explore systems that can withstand floods of misinformation.
Why This Study Is Hard to Replicate in the United States
Taiwan’s centralized National Health Insurance database linked vaccination decisions to news exposure with precision. The United States has no equivalent. Medical records scatter across private insurers, state registries, and hospital networks. HIPAA blocks the real-time tracking that made the Taiwan study possible.
Taiwan’s digital surveillance relied on cultural acceptance of collective protection over individual privacy. American constitutional protections and public skepticism make that model politically impossible here. American researchers must work with voluntarily provided data and tools like Google Trends. These methods offer shadows of the full picture Taiwan achieved.
The question is not whether misinformation reaches a tipping point in America. The question is whether our privacy framework will allow us to see it coming.
Final Thoughts
The internet shortens the distance between people but lengthens the distance between truth and fiction. State actors and hucksters flood the zone with nonsense, and the platforms profit from the chaos.
The Taiwan study offers a number: 39.3%. Once misinformation crosses this threshold in a saturated information environment, behavior changes. Vaccination rates drop. Trust erodes. Public health suffers. The consequences are real.
Back in my classroom, those young men scrolling through AI slop and bots exist in an ecosystem designed to keep them engaged, not informed. The algorithm feeds them what drives clicks, not what serves truth. When I created that test account, the platform took less than a day to bury an 18-year-old male user in garbage.
Generative models and bots now package unverified information as attractive news and memes. My teaching gig keeps proving that media literacy is a necessary skill for everyone. If we can monitor the health of our information environment as closely as we monitor our physical health, we can protect the public from the next wave of digital misinformation.
Those young men in my class deserve better than an internet that treats them as marks.
We have a hint of the tipping point. We are beginning to grasp the damage.
The question is whether we have the will to act before the next national emergency arrives and the 39.3% threshold is crossed.