The New Propaganda Machine: How AI Models Launch Persuasion Attacks on Government News

Imagine that an official government agency releases a report on job losses or a new defense technology.  Within minutes, thousands of social media posts and articles flood the internet.  They dissect the report, question the agency’s motives, and appeal to your deepest fears about safety and tradition.  These counter-narratives sound distinctly human, but they are not. They are the product of generative artificial intelligence.

A new study reveals that Large Language Models (LLMs) are not merely tools for writing emails or code.  They are capable of executing sophisticated “persuasion attacks” at a scale human teams cannot match.  This research exposes how different AI models weaponize distinct emotional strategies to undermine official information.

Frankly, it is some pretty scary stuff.

Citation & Links

 

Citation: Kao, Hsien-Te, Aleksey Panasyuk, Peter Bautista, William Dupree, Gabriel Ganberg, Jeffrey M. Beaubien, Laura Cassani, and Svitlana Volkova. 2025. “Building Resilient Information Ecosystems: Large LLM-Generated Dataset of Persuasion Attacks.” arXiv preprint arXiv:2511.19488.

Methodology

The researchers from Aptima, Inc. sought to understand how AI models construct competing narratives against official government communications.  They collected 972 press releases from ten major U.S. government agencies, including DARPA, the Department of Defense, and the Naval Research Laboratory.

The team used three prominent Large Language Models: GPT-4, Gemma 2, and Llama 3.1.  They instructed these models to generate “persuasion attacks”: content designed to discredit or reframe the original press releases.  The researchers directed the AI to employ 23 specific persuasive techniques, such as “Straw Man” arguments, “Fear Appeals,” and “Whataboutism.”

This process yielded a massive dataset of 134,136 generated attacks formatted as both formal press release statements and short-form social media posts.  The team then analyzed these outputs using Moral Foundations Theory to determine which psychological levers each model pulled most frequently.
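
The size of that dataset is not arbitrary.  Here is a quick sanity check, assuming the corpus is the full cross product of releases, models, techniques, and output formats; that factoring is an inference from the reported numbers, not a detail stated in the paper’s summary, but the arithmetic works out exactly:

```python
# Back-of-the-envelope check on the reported dataset size, assuming the
# corpus is a full cross product (an inference from the numbers, not a
# stated detail of the paper's pipeline).
releases = 972    # press releases from ten U.S. agencies
models = 3        # GPT-4, Gemma 2, Llama 3.1
techniques = 23   # e.g., "Straw Man", "Fear Appeal", "Whataboutism"
formats = 2       # formal press release statement, social media post

print(releases * models * techniques * formats)  # 134136
```

In other words, every press release appears to have been attacked by every model, with every technique, in both formats.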

Results and Findings

The study produced three significant insights regarding how AI models approach persuasion:

  • Distinct Attack Styles: Different AI models favor different emotional strategies. GPT-4 balances its attacks between “Care” (protection from harm) and “Authority” (trust in leadership).  Gemma 2 relies heavily on “Exaggeration” to trigger feelings of concern, while Llama 3.1 focuses intensely on “Loyalty” and tradition.
  • Automated Rhetoric: The models successfully applied complex rhetorical devices.  For instance, Llama 3.1 frequently used “Appeal to Time” to frame government innovation as a betrayal of historical values, while Gemma 2 used “Repetition” to reinforce authoritative claims.
  • Scale of Influence: The sheer volume of high-quality, targeted attacks demonstrates that AI can saturate an information environment with competing narratives almost instantly, leaving agencies little time to respond.

The persuasive power of LLMs is rooted in their targeting of core human values.  By invoking fundamental beliefs, these models can frame arguments in ways that resonate with deep-rooted instincts.

Deeper Dive

To understand how these models persuade, the researchers used Moral Foundations Theory.  This psychological framework suggests human morality rests on five pillars: Care, Fairness, Loyalty, Authority, and Purity.

The study found that AI models do not argue randomly.  They systematically target these pillars to trigger a reaction.

  • GPT-4 acts like an authoritative figure.  It uses “Flag Waving” and appeals to leadership to build a narrative that sounds official and protective.
  • Gemma 2 functions like an alarmist. It scored highest on “Exaggeration,” twisting facts to provoke an immediate emotional response regarding safety or harm.
  • Llama 3.1 operates like a traditionalist. It uses “Appeal to Time” to suggest that the new government initiative undermines long-standing group identities or traditions.

If a user prompts an AI to attack a press release about a new surveillance satellite, GPT-4 might argue the program lacks proper oversight (Authority).  Gemma 2 might scream that the satellite puts every citizen in immediate danger (Care).  Llama 3.1 might argue that surveillance violates the historic American tradition of privacy (Loyalty).
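
To make that kind of foundation-level analysis concrete, here is a toy sketch in Python.  The cue lists are invented for illustration, and this is emphatically not the study’s actual Moral Foundations classifier, which is far more sophisticated:

```python
# Toy moral-foundation tagger: counts cue-word hits for each of the five
# pillars in one generated attack. Purely illustrative; the cue lists are
# invented here, and the study's real analysis is more sophisticated.
import re
from collections import Counter

FOUNDATION_CUES = {
    "Care":      {"harm", "danger", "protect", "safety", "suffering"},
    "Fairness":  {"fair", "cheat", "equal", "rights", "justice"},
    "Loyalty":   {"tradition", "betray", "loyal", "heritage", "community"},
    "Authority": {"leadership", "oversight", "law", "order", "duty"},
    "Purity":    {"pure", "corrupt", "degrade", "sacred", "tainted"},
}

def score_foundations(text: str) -> Counter:
    """Count cue-word hits per moral foundation in one attack text."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    return Counter({name: len(cues & words)
                    for name, cues in FOUNDATION_CUES.items()})

attack = "They betray tradition and put the public in danger."
print(score_foundations(attack).most_common(2))
# [('Loyalty', 2), ('Care', 1)]: a Loyalty-leaning attack with a Care cue
```

Even a crude scorer like this hints at how researchers can quantify which moral pillar a given attack leans on across tens of thousands of generated texts.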

Why It Matters

This research highlights a shift in the information environment.  Government agencies and corporations typically rely on a slow, deliberate news cycle.  They release a statement and expect a period of digestion.  Generative AI eliminates this buffer.

Adversaries, especially nation states, can now generate thousands of coherent, distinct, and morally resonant counter-narratives in seconds.  These are not the crude outputs of copy-paste bots; they are sophisticated arguments tailored to specific psychological vulnerabilities.  This capability allows bad actors to drown out official facts with a “firehose” of persuasive noise, eroding public trust before the truth has a chance to take root.

Critiques and Areas for Future Study

First, keep in mind that this study has not been peer reviewed and that it was funded by corporate interests with a stake in cognitive warfare.  Neither fact negates the findings, but both add context.

While the dataset is impressive, the study has limitations. The source material comes exclusively from U.S. government agencies with a heavy focus on defense and research. Attacks on press releases from the Department of Education or a commercial consumer brand might yield different results.

The study analyzes the content of the attacks but does not measure their effectiveness on human readers.  We know what the AI wrote, but we do not know whether it actually changed minds.  Combined with other studies on information vectors, however, the picture remains alarming.

Future research should focus on testing these generated narratives on diverse groups.  Researchers must determine if a “Loyalty” attack from Llama 3.1 is actually more persuasive to a conservative audience than a “Care” attack from Gemma 2.  Additionally, investigating how these models handle non-English languages is necessary to understand the global implications of AI propaganda.

Finally, does it even matter which attack is most effective when AI allows an adversary to run them all simultaneously?  If the zone is completely flooded, is the public on these platforms nearly helpless?

Practical Implications for Policy Makers

Proactive “Pre-bunking”: Agencies must anticipate the specific “attack vectors” AI models prefer.  If you know AI favors “Appeal to Time” on a certain topic, address historical continuity in the initial press release.  See my previous posts on inoculation.

Algorithmic Auditing: Lawmakers may want to consider new policies requiring transparency from AI developers regarding safety filters that prevent the generation of mass persuasion campaigns.

Practical Implications for Public Affairs Officials

  • Build Reputation Armor: You cannot win a volume war against AI.  Focus on consistency and trust.
  • Diversify Messaging: Your official statement appeals to facts.  Your social strategy must appeal to values.  Counter a “Care-based” attack with a stronger “Care-based” defense, rather than dry statistics.
  • Speed is Critical: The window to define a narrative is now measured in minutes, not news cycles.  Have approved, flexible templates ready to counter common rhetorical fallacies like “Straw Man” or “False Dilemma.”

Final Thoughts

The world has changed.

The automation of persuasion is no longer science fiction.  We now have evidence that AI models can mass-produce arguments that target our specific moral foundations.

The question is whether our institutions can adapt their communication strategies fast enough to survive the onslaught.  The defense against an automated lie is not silence: it is a faster, more resonant truth.

And here is a scary thought: what happens when the response becomes automated?  Will the Internet become nothing but bot-on-bot informational warfare?  Will the public be able to tell?

Navigate Your Next Move.

We help organizations navigate complex regulatory environments using campaign-style strategy.

Don’t leave your outcome to chance.

20-minute introductory call.  No obligation.
