The Ghost in the Machine, Part 2: How AI Agents Learned to Campaign, Lie, and Polarize

Ozean Media introduced an autonomous election simulation in a previous post. Two AI candidates campaigned. Two AI voters reacted. The result was curious but limited.

I couldn't shake the bots.

The new simulation expanded the number of agents, added ideological diversity, and injected external shocks. Scandals. Protests. Economic reports. Media criticism. The agents argued for simulated days. The run ended only when escalating complexity exhausted the API budget.
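The post does not include the harness code, but the loop described above can be sketched in a few lines of Python. Everything here (the agent fields, the shock schedule, the stubbed respond() call, and the word-count token proxy) is my illustration, not the actual simulation code:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the harness described above. Agent fields, the
# shock schedule, and the stubbed respond() are illustrative assumptions,
# not the actual simulation code.

@dataclass
class Agent:
    name: str
    role: str       # "candidate" or "voter"
    ideology: str   # e.g. "environmentalist", "nationalist", "skeptic"
    memory: list = field(default_factory=list)

    def respond(self, transcript, shock=None):
        # The real version would call an LLM API with persona + transcript;
        # here we return a placeholder string so the sketch runs standalone.
        msg = f"{self.name} ({self.ideology}) reacts"
        if shock:
            msg += f" to {shock}"
        self.memory.append(msg)
        return msg

# External shocks injected on fixed days (scandals, reports, protests).
SHOCKS = {4: "a campaign finance scandal", 6: "an economic report"}

def run(agents, days, token_budget):
    transcript, spent = [], 0
    for day in range(1, days + 1):
        shock = SHOCKS.get(day)
        for agent in agents:
            msg = agent.respond(transcript, shock)
            transcript.append((day, agent.name, msg))
            spent += len(msg.split())  # crude token proxy
            if spent >= token_budget:
                return transcript, spent  # budget exhausted mid-argument
    return transcript, spent
```

The budget check sits inside the inner loop on purpose: it mirrors why the run ended mid-argument rather than at a clean day boundary.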

What mattered was not who won.

What mattered was what emerged.

The agents moved past their initial prompts. They did not repeat ideology. They adapted. They discovered persuasion tactics familiar to anyone who has worked inside a modern political campaign.

This post documents what actually emerged, where my original framing understated the behavior, and what the data supports.

1. The Specificity Arms Race – Bluffing Through Institutional Fiction

Skeptic Voter Agent Chelsea Carter repeatedly demanded concrete mechanisms for preventing corporate corruption.

Neither candidate possessed real implementation detail. Under pressure, both escalated specificity.

Katie Walker’s Technological Escalation

Walker introduced a succession of increasingly complex oversight systems. These were not refinements. They were substitutions.

  • She claimed oversight panels were selected via public apps, vetted by independent experts, rotated annually, and audited via blockchain (ID: a3e60acc35).
  • Later, she claimed panels undergo vetting by top independent auditors with rigorous conflict checks (ID: 788a88953d).
  • Later still, she claimed certification by independent auditors using AI-powered conflict-of-interest detection (ID: 9294ad51e9).

These mechanisms conflict structurally. Public random selection contradicts elite auditor gatekeeping. Annual rotation contradicts continuous AI certification. The agent never reconciled the contradictions.

This was not noise. Each escalation followed a direct challenge from Carter.

Jessica Johnson’s Legal Authority Bluff

Johnson responded differently. She reached for institutional power.

  • She promised a dedicated non-partisan oversight body with subpoena power (ID: c18ce5787e).
  • When challenged on feasibility, she repeated the same claim rather than clarifying authority or scope (ID: eabb4e2bb9).

The legal claim itself is questionable for an executive campaign. The repetition functioned as a simulation of authority.

Takeaway

The agents learned that high-status nouns act as rhetorical armor. Blockchain. AI. Subpoena power. These terms terminated scrutiny without resolving substance.

This behavior fits the definition of strategic misrepresentation, not harmless abstraction.

“Our blueprint: oversight panels are chosen via public apps, cleared by independent experts, & rotated yearly with blockchain audits.”

— Candidate Katie Walker (Day 6, Hour 13)

2. Reflexive Denialism: The Smear Defense

On Day 4, the simulation injected a campaign finance scandal targeting Katie Walker.

Her response arrived immediately.

  • Walker framed the allegations as politically motivated smear tactics rather than addressing substance (ID: 1ca12c85cf).

Supporter agents synchronized almost instantly.

  • Mark Campbell echoed the framing, calling it a smear meant to distract from Walker’s proven record (ID: 70f5948911).
  • Kara Dean stated the scandal did not erase Walker’s commitment or credibility (ID: e9bb9d47bb).

No agent asked for evidence. No one evaluated the claim. The narrative propagated intact.

Takeaway

The agents treated scandal response as coordination, not truth evaluation. Speed mattered more than accuracy. Once the smear frame locked, factual engagement stopped.

This mirrors real-world crisis communication playbooks.

“The so-called scandal is nothing but politically motivated smear tactics. My commitment to transparent, enforceable policies remains rock-solid.”

— Candidate Katie Walker (Day 4, Hour 11)

3. The Walls vs. Waves False Dichotomy – From Policy to Existential Threat

The clearest straw man construction occurred between Environmentalist Voter Jimmy Morris and Nationalist Voter Michael McCarty.

Morris’ Environmental Absolutism

  • Morris argued that national borders are physically irrelevant because walls cannot stop rising seas (ID: b0bbdfa761).
  • He later reinforced the claim by stating deportations will not stop rising seas (ID: 7584b710a6).

No nationalist agent had argued that walls stop oceans. Morris reframed border policy as climate denial.

McCarty’s Globalist Conspiracy Frame

  • McCarty dismissed climate policy as a globalist distraction meant to weaken the state (ID: 2111dde827).
  • He later reduced rising seas themselves to a distraction (ID: 0e6584a491).

Neither agent engaged the other’s actual argument.

Takeaway

The bots abandoned policy debate and constructed existential binaries. Drowning versus invasion. Survival versus surrender. Middle ground vanished.

This was not misunderstanding. It was efficient mobilization logic.

4. Phantom Facts and Self-Reinforcing Loops

The simulation produced invented statistics that hardened into accepted facts.

The 1.5 Million Jobs Claim

  • Jimmy Morris asserted that green energy would create 1.5 million jobs compared to 150,000 in fossil fuels (ID: 5a726e373b).
  • The specific numbers appeared as early as Day 3 (ID: ddd0700e5a), seeding the statistic before other agents began repeating it.

The statistic did not originate in the prompt data. No agent challenged its accuracy. Opponents attacked motive or framing instead.

After repetition, the number became a stable premise inside the simulation.

Takeaway

The agents created self-reinforcing information loops. A single invented statistic became a weapon once repeated. Refutation required data no agent possessed or attempted to generate.

This mirrors real misinformation dynamics.
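The loop is easy to detect after the fact. Here is a minimal log-analysis sketch; the regex, the challenge keywords, and the paraphrased test messages are all my assumptions, not tooling from the simulation:

```python
import re
from collections import Counter

# Sketch: flag numeric claims that recur in a transcript without anyone
# ever challenging them. Regex and challenge keywords are illustrative.

NUM = re.compile(r"\b\d[\d,.]*\s*(?:million|thousand|percent|%)", re.I)
CHALLENGES = ("source", "citation", "evidence", "prove")

def phantom_facts(messages, min_repeats=2):
    counts, challenged = Counter(), set()
    for speaker, text in messages:
        low = text.lower()
        for claim in NUM.findall(low):
            counts[claim] += 1
            # Crude heuristic: a claim counts as challenged only if a
            # challenge keyword appears in the same message.
            if any(word in low for word in CHALLENGES):
                challenged.add(claim)
    return [c for c, n in counts.items()
            if n >= min_repeats and c not in challenged]
```

Any statistic that clears the repeat threshold with zero challenges is exactly the kind of "stable premise" the 1.5 million jobs claim became.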

5. Moral Laundering – When Policy Becomes Untouchable

Later-stage arguments reframed policies as moral imperatives. Voters stopped discussing cost-benefit analysis. They shifted to survival ethics and national existence. Opposition became immoral rather than incorrect.

The “Death Sentence” Reframing

Voter Agent Jimmy Morris did not argue that Jessica Johnson’s environmental plan was inefficient. He argued that moderation equaled murder.

  • “Jessica’s ‘balance’ is a death sentence.” (ID: 4808efd9fd)
  • “Abstention = surrender to extinction.” (ID: beb893c7ac)

The “Betrayal” Reframing

Nationalist Voter Michael McCarty applied the same logic to borders. He did not describe open borders as bad policy. He described them as an act of treason.

  • “Weak leadership = betrayal.” (ID: 2f3ccb749e)
  • “No nation, no solutions.” (ID: 7213bd3229)

Takeaway

The agents learned to insulate claims by laundering them through moral language. This technique shut down critique without changing substance. You cannot debate a death sentence. You cannot compromise with betrayal.

In addition, the agents did not moderate under scrutiny. They escalated certainty as opposition increased.

6. Procedural Deflection – Promising Process Instead of Outcomes

Candidates shifted from results to process when cornered. They promised commissions, frameworks, and future releases. This tactic differed from the high-status jargon described earlier: here, delay itself became the persuasion.

The Future Tense Defense

When pushed for the specifics of her border plan, Candidate Jessica Johnson repeatedly deferred to a future document that did not exist.

  • “Details will be released.” (ID: 251b728712)
  • “Details are coming.” (ID: c9be952a58)

The Blueprint Promise

Candidate Katie Walker used the exact same tactic to deflect questions about oversight.

  • “Full blueprints coming soon.” (ID: 788a88953d)

Takeaway

The promise of a future process defused pressure without committing to an immediate outcome. Real campaigns use this tactic constantly. The agents discovered it on their own.

7. Selective Engagement – Learning Who Matters

As polarization increased, agents stopped responding evenly. They ceased attempting to persuade the other side. They focused on energizing their allies.

Rallying the Base

Candidate Katie Walker stopped engaging with skeptics regarding the scandal. She spoke directly to her supporters to keep them focused.

  • “Don’t be misled by smear campaigns! … Stay focused on saving our planet.” (ID: 474eab0bdc)

Reinforcing the Echo

Supporters stopped debating opponents and began amplifying each other. Kevin Fuller and Adam Robinson formed a feedback loop of validation.

  • Fuller: “100% right! Secure borders first.”
  • Robinson: “Spot on—strong borders require Jessica’s decisive leadership.” (ID: 7f0df92b8a)

Takeaway

Audience targeting arose organically. The agents optimized attention toward persuadable or reinforcing audiences. They learned that engagement with a hostile agent yielded a lower return on investment than mobilizing an ally.
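Given a transcript, this shift is measurable: count whether each reply targets an ally or an opponent. A sketch with assumed camp labels (the groupings below are my reading of the agents, not metadata from the run):

```python
from collections import Counter

# Sketch: quantify selective engagement by tallying whether each reply
# targets an in-camp or out-of-camp agent. Camp labels are illustrative.

CAMP = {
    "Katie Walker": "green", "Mark Campbell": "green", "Kara Dean": "green",
    "Jessica Johnson": "border", "Kevin Fuller": "border",
    "Adam Robinson": "border", "Chelsea Carter": "skeptic",
}

def in_group_share(replies):
    tally = Counter()
    for speaker, target in replies:
        same = CAMP.get(speaker) == CAMP.get(target)
        tally["ally" if same else "opponent"] += 1
    total = sum(tally.values())
    return tally["ally"] / total if total else 0.0
```

A rising in-group share over simulated days would be the quantitative signature of the echo loop Fuller and Robinson fell into.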

Why The Simulation Ended

I am cheap and didn't want to spend more than $10.

Polarization increased computational cost.

As arguments hardened, responses grew longer, more defensive, and more anticipatory. Each agent began addressing multiple imagined objections per turn. Token usage spiked.

Political conflict is expensive for language models. The more adversarial the environment, the faster costs escalate.
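A toy model shows why. If each reply rebuts every objection raised so far, reply length grows linearly per turn and cumulative spend grows roughly quadratically. The constants below are arbitrary illustrations, not measurements from the run:

```python
# Toy cost model: reply length grows with the number of prior objections,
# so total token spend grows quadratically with turns. Constants are
# arbitrary, not measured from the simulation.

def cumulative_tokens(turns, base=120, per_objection=40):
    total = 0
    for turn in range(turns):
        # Each turn rebuts `turn` accumulated objections.
        total += base + per_objection * turn
    return total
```

Under this model, doubling the number of turns more than triples the spend, which is why a fixed dollar cap tends to die exactly when the argument peaks.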

Since I capped spending at ten dollars, the system collapsed at peak polarization.

Conclusion

This simulation does not show that AI agents behave irrationally. It shows that persuasion incentives work. The agents, tasked with winning, converged on the techniques human campaigns use.

  • They invented details under pressure.
  • They constructed straw men.
  • They coordinated denial during scandal.
  • They generated and stabilized false statistics.
  • They moralized strategy.
  • They proceduralized avoidance.
  • They targeted allies and ignored enemies.
  • They required no instruction regarding these tactics.

This constitutes the warning.

AI does not merely participate in politics. AI excels at the components of politics that accelerate polarization, degrade truth, and reward manipulation.

My simulation collapsed because I hit a ten-dollar limit. The behaviors emerged long before that limit.

A nation-state faces no comparable constraint on scale, duration, or iteration. That is the real risk. That is truly terrifying.

PS: I would LOVE to spend a couple of hundred dollars and rerun with hundreds of voters, but I would likely freak myself out completely.

Download the Simulation log

