Using AI to Simulate Congress – It’s a Whole New World

A recent discussion about AI and virtual agents led to an intriguing question: Could they be trained to predict public opinion?

There are companies attempting to train agents on census data, voter files, and other assorted data, then spinning them up and polling the agents with typical political polling questions. It's wild.
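
To make that concrete, here's a minimal, hypothetical sketch of the approach as I understand it: build a persona prompt from voter-file-style attributes and poll the resulting agent. The `ask_llm` function, the record fields, and the prompt wording are all stand-ins of my own, not any company's actual pipeline.

```python
# Hypothetical sketch: build a persona prompt from voter-file-style
# attributes and poll the resulting agent. `ask_llm` is a stand-in for
# whatever chat-completion API such a company might use.
from typing import Callable

def build_persona(record: dict) -> str:
    """Turn a voter-file-style record into a system prompt."""
    return (
        f"You are a {record['age']}-year-old {record['party']} voter "
        f"from {record['county']} who {record['turnout_history']}. "
        "Answer poll questions in character, concisely."
    )

def poll_agent(record: dict, question: str,
               ask_llm: Callable[[str, str], str]) -> str:
    return ask_llm(build_persona(record), question)

# Usage: poll one synthetic respondent on a standard ballot question.
voter = {"age": 54, "party": "independent", "county": "Duval County, FL",
         "turnout_history": "votes in most general elections"}
# response = poll_agent(voter, "Do you approve of the governor's job performance?", ask_llm)
```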

This, naturally, spiraled into jokes about living in a simulation. But the idea stuck with me, fueling my curiosity about AI’s role in politics.

I then came across a paper that explores exactly that question: whether large language models (LLMs) can simulate senatorial decision-making and, more importantly, whether they can identify conditions that encourage bipartisanship.

Title: Simulating The U.S. Senate: An LLM-Driven Agent Approach to Modeling Legislative Behavior and Bipartisanship

Link: https://arxiv.org/abs/2406.18702

Peer Review Status: Under Review

Citation: Baker, Z. R., & Azher, Z. L. (2024). Simulating The U.S. Senate: An LLM-Driven Agent Approach to Modeling Legislative Behavior and Bipartisanship. arXiv preprint arXiv:2406.18702.

Introduction

Political gridlock and polarization define modern legislative processes, with bipartisan cooperation often seeming elusive.

This study explores whether large language models (LLMs) can simulate senatorial decision-making and, more importantly, whether they can identify conditions that encourage bipartisanship.

Researchers Zachary Baker and Zarif Azher created AI-driven agents representing real U.S. Senators, placing them in a virtual simulation of the Senate Intelligence Committee.

The results suggest that under certain conditions, these agents exhibit realistic debate patterns and even compromise across party lines.

Methodology

The researchers designed virtual senators using GPT-3.5, equipping each agent with key traits, policy positions, and memory functions. The study focused on six senators from the 2024 Senate Intelligence Committee: Mark Warner (D), Marco Rubio (R), Susan Collins (R), John Cornyn (R), Ron Wyden (D), and Martin Heinrich (D). These AI senators engaged in structured debates on two key issues:
  • U.S. aid to Ukraine.
  • A general discussion on necessary legislative actions.
The simulation ran in multiple rounds, with agents engaging in discourse, recalling past statements, and summarizing their stances. To assess realism, a high school government teacher and a former congressional staffer evaluated the AI-driven debates, rating them on a 0-10 believability scale.
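
Based on the methodology as described, here's a minimal sketch of how such an agent loop might look. The `chat` function stands in for a GPT-3.5 API call; the `SenatorAgent` fields, prompt wording, and perturbation timing are my assumptions, not the authors' published code.

```python
# Minimal sketch of the paper's agent loop as I read it: each senator agent
# carries a profile and a running memory, speaks in turn for several rounds,
# then reflects. `chat` stands in for a GPT-3.5 chat-completion call.
from dataclasses import dataclass, field

@dataclass
class SenatorAgent:
    name: str
    party: str
    positions: str          # short summary of known policy stances
    memory: list[str] = field(default_factory=list)

    def speak(self, topic: str, transcript: list[str], chat) -> str:
        prompt = (
            f"You are Senator {self.name} ({self.party}). Positions: {self.positions}.\n"
            f"Your prior statements: {' '.join(self.memory[-3:])}\n"
            f"Committee transcript so far: {' '.join(transcript[-6:])}\n"
            f"Give your next remarks on: {topic}"
        )
        remark = chat(prompt)
        self.memory.append(remark)  # simple rolling memory of own statements
        return remark

def run_committee(agents, topic, rounds, chat, perturbation=None):
    transcript = []
    for r in range(rounds):
        if perturbation and r == rounds // 2:
            # External shock injected mid-simulation, e.g. new intelligence
            transcript.append(f"BREAKING: {perturbation}")
        for agent in agents:
            transcript.append(f"{agent.name}: {agent.speak(topic, transcript, chat)}")
    # Closing reflections; human evaluators rated output like this for believability
    return transcript + [agent.speak(f"Summarize your final stance on {topic}",
                                     transcript, chat)
                         for agent in agents]
```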

Results and Findings

AI Agents Could Engage in Realistic Debate

The agents demonstrated an ability to recall prior discussion points, build arguments, and reflect on their positions. Their reflections aligned with their initial stances, reinforcing the validity of the simulation.

For example:

Agent Rubio: “During the committee meeting, I strongly advocated for substantial military aid to Ukraine… I believe we can’t afford to wait, and our response needs to be swift and decisive.”

Agent Wyden: “I raised concerns about balancing domestic needs with the urgency of supporting Ukraine. While I understand the gravity of the situation, I stressed the importance of accountability.”

Expert Evaluations Showed Moderate to High Believability

The expert reviewers assigned mean believability scores above 5 across all tested scenarios. The funding-for-Ukraine debate received an average score of 7.45, suggesting that the AI agents’ discussions mirrored real-world legislative arguments convincingly.

Bipartisanship Emerged When External Factors Shifted

One of the study’s most intriguing findings was how agents reacted to external perturbations. When the simulation introduced new intelligence indicating an imminent Russian breakthrough in Ukraine, previously hesitant senators became more willing to compromise. This shift suggests that real-world bipartisanship may hinge on clear, immediate external threats.

Critiques and Areas for Further Study

Limited Scope and Sample Size

The study only included six senators from one committee, limiting its generalizability. Future research should expand to larger legislative bodies and different committees to test whether similar bipartisan trends emerge.

Lack of Real-World Verification

While the AI agents’ actions were rated as “believable,” the study did not compare their decisions to real-world legislative outcomes. A follow-up study could test whether historical simulations align with actual votes and policy developments.

Simplified Agent Memory and Interaction

The agents relied on relatively simple memory and interaction mechanisms. More sophisticated memory architectures and richer multi-agent interaction could enhance realism.

GPT-3.5 Was Used

The AI senators were created with GPT-3.5. Given the exponential improvement in later models, rerunning the simulation on newer models would likely yield more capable agents.

Conclusion

For me, this study is less about specific findings and more of a proof of concept—an early glimpse into what AI-driven simulations could mean for legislative analysis and decision-making. The possibilities are vast.

Imagine a public affairs specialist training an AI model on a government official. The technology is nearly there. Could one use Google's NotebookLM to upload hundreds of sources on a policymaker and their key issues, then query against it? Absolutely. Could one simulate meetings? Likely. Predict outcomes? Maybe not yet, but it's coming.

What if you trained agents on entire legislative bodies? Florida’s regular legislative session just started this week. Predicting legislative outcomes of floor votes is relatively straightforward—partisanship and leadership preferences dictate much of the process. But the real power isn’t in forecasting final votes; it’s in modeling committee member markups, where deals are made and policies are shaped. Could AI map those interactions? That’s where this research gets interesting.

My head is spinning with questions about what training data makes the most sense and what other factors to consider.

As AI agents and models improve, these simulations could become invaluable for political research, policy development, lobbyists, and public affairs officials.

The ability to test legislative scenarios before they unfold could transform how decisions are made and policies are shaped.

Is the Use of AI by Knowledge Workers Reducing Critical Thinking?

Introduction

Generative AI (GenAI) tools like ChatGPT and Microsoft Copilot are transforming how we work, how we study, and how we prepare for meetings. But what does this mean for our critical thinking skills?

This study explores how generative AI tools influence critical thinking among knowledge workers. As these tools become more common, they raise questions about how they affect cognitive effort and confidence. This research surveyed 319 knowledge workers, collecting 936 examples of how they used generative AI in their tasks.

The study examined two main questions:

  • When and how do users engage in critical thinking with AI?
  • When does AI make critical thinking easier or harder?

Title: The Impact of Generative AI on Critical Thinking: Reductions in Cognitive Effort and Confidence Effects

Link: The Impact of Generative AI on Critical Thinking

Peer Review Status: Peer Reviewed, presented at CHI Conference on Human Factors in Computing Systems

Citation:
Lee, H.-P., Sarkar, A., Tankelevitch, L., Drosos, I., Rintel, S., Banks, R., & Wilson, N. (2025). The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers. CHI Conference on Human Factors in Computing Systems, Yokohama, Japan.

Methodology

The researchers used an online survey (n=319) targeting knowledge workers who use generative AI at least once a week.

Participants provided detailed examples of using AI for three types of tasks: creation, information processing, and advice. They rated their confidence in completing the tasks with and without AI, as well as the cognitive effort required for six types of critical thinking activities based on Bloom’s taxonomy: recall, comprehension, application, analysis, synthesis, and evaluation.

The study also measured each participant’s general tendency to reflect on their work and their overall trust in AI.
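
To illustrate the kind of analysis this survey design supports, here's a toy sketch relating the confidence measures to perceived effort across Bloom's six activities. The column names and values are invented for illustration; this is not the authors' dataset or code.

```python
# Illustrative sketch (not the authors' code): how does perceived
# critical-thinking effort move with confidence in AI vs. self-confidence,
# per Bloom's-taxonomy activity? Column names and values are invented.
import pandas as pd

bloom = ["recall", "comprehension", "application",
         "analysis", "synthesis", "evaluation"]

df = pd.DataFrame({
    "confidence_in_ai": [4, 2, 5, 3],   # toy rows standing in for n=319
    "self_confidence":  [3, 5, 2, 4],
    **{f"effort_{b}": [2, 4, 1, 3] for b in bloom},
})

# For each activity, correlate perceived effort with both confidence measures
for b in bloom:
    print(b,
          round(df["confidence_in_ai"].corr(df[f"effort_{b}"]), 2),
          round(df["self_confidence"].corr(df[f"effort_{b}"]), 2))
```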

Results and Findings

The study found that knowledge workers perceive reduced cognitive effort when using AI, especially when they have high confidence in the tool’s capabilities.

Conversely, those with high self-confidence reported more effort in verifying and integrating AI outputs.

Key findings include:

Critical Thinking Shifts Toward Verification and Integration
When using GenAI, knowledge workers reported spending less time on information gathering and more time verifying the accuracy of AI outputs. For example, participants often cross-referenced AI-generated content with external sources or their own expertise to ensure reliability. This shift reflects a move from task execution to task stewardship, where workers focus on guiding and refining AI outputs rather than generating content from scratch.

Confidence in AI Reduces Critical Thinking Effort
The study found that higher confidence in GenAI’s capabilities was associated with less perceived effort in critical thinking. In other words, when workers trusted AI to handle tasks, they were less likely to critically evaluate its outputs. Conversely, workers with higher self-confidence in their own skills reported engaging in more critical thinking, even though they found it more effortful.

Motivators and Barriers to Critical Thinking
Participants cited several motivators for critical thinking, including the desire to improve work quality, avoid negative outcomes, and develop professional skills. However, barriers such as lack of time, limited awareness of the need for critical thinking, and difficulty improving AI responses in unfamiliar domains often prevented workers from engaging in reflective practices.

GenAI Reduces Effort in Some Areas, Increases It in Others
While GenAI tools reduced the effort required for tasks like information retrieval and content creation, they increased the effort needed for activities like verifying AI outputs and integrating them into workflows. This trade-off highlights the dual role of GenAI as both a facilitator and a complicator of critical thinking.

Critiques of the Research or Additional Areas of Potential Study

To start, I am always wary of research conducted by the industry itself, and Microsoft is a huge player in today's AI. So I read these studies extremely critically and treat them as preliminary work that begs for further study.

The study relies on self-reported data, which may be subject to bias or inaccuracies.

This study also lacks cross-cultural perspective: it was conducted in English only and focused on younger, tech-savvy workers.

Additionally, it does not account for long-term impacts on critical thinking skills.

Future research could explore:

  • Longitudinal studies to observe changes in critical thinking over time.
  • Frequently updated studies that keep pace with the rapid evolution of AI tools.
  • Experiments that measure critical thinking performance rather than self-perception.
  • Cross-cultural studies.
  • Studies across all age groups.
  • Studies with non-knowledge workers.
  • Studies with students.

Conclusion

I keep a folder of quotes from pundits lamenting the death of civil society with each new technological advancement for my class on “The Media.” The printing press, radio, television, cable TV, the Internet, and social media—all were predicted to destroy us.

Meh.

I do believe generative AI tools are massively disrupting workflows, effort,  outputs, and critical thinking. I see it firsthand in the college classroom.

As generative AI evolves, tools must support, not undermine, critical thinking. While AI can enhance efficiency and reduce effort, it also risks fostering overreliance and diminishing critical engagement.

By reducing the perceived effort of critical thinking, Generative AI may weaken independent problem-solving skills. As users shift from direct engagement to oversight, they must balance efficiency gains with maintaining cognitive skills.

AI designers should prioritize features that promote critical thinking while preserving efficiency. This research highlights the need for systems that encourage reflective thinking and help users critically assess AI-generated outputs.

Ultimately, the study underscores the importance of maintaining a critical mindset in an increasingly AI-driven world.

P.S.

Now that you’re here thinking about critical thinking, I would be remiss if I didn’t recommend one of the guides I keep within arm’s reach at my desk. This book contains over 60 structured techniques that enhance thinking processes, and I highly recommend it.

Structured Analytic Techniques for Intelligence Analysis

Fake News and the Sleeper Effect: Why Misinformation Lingers in Memory

Ever shared a post only to realize later it was fake news? You're not alone, and psychology explains why. The "sleeper effect," a phenomenon where a message's influence grows over time as its source fades from memory, has gained new relevance in the age of social media misinformation. A foundational 2004 meta-analysis by Kumkale and Albarracín unpacks the mechanics of sleeper effects in persuasion (and who doesn't love a meta-analysis?), while a 2023 study by Ruggieri et al. examines how this effect applies to fake news about COVID-19 workplace safety. Together, these studies reveal why false claims stick in our minds and what makes them so hard to correct.

Sources

Title: The Sleeper Effect in Persuasion: A Meta-Analytic Review

Link: NIH

Peer Review Status: Peer-reviewed

Citation: Kumkale, G. T., & Albarracín, D. (2004). The sleeper effect in persuasion: A meta-analytic review. Psychological Bulletin, 130(1), 143–172.

Title: Fake News and the Sleeper Effect in Social Media Posts: The Case of Perception of Safety in the Workplace

Link: Fake News and the Sleeper Effect in Social Media Posts: The Case of Perception of Safety in the Workplace

Peer Review Status: Peer-reviewed

Citation: Ruggieri, S., Bonfanti, R. C., Santoro, G., Passanisi, A., & Pace, U. (2023). Fake news and the sleeper effect in social media posts: The case of perception of safety in the workplace. Cyberpsychology, Behavior, and Social Networking, 26(7), 554–562.

Methodology

Kumkale and Albarracín (2004)

This meta-analysis compiled data from 72 experiments to examine the sleeper effect across multiple contexts. The study investigated conditions influencing delayed persuasion, including the timing of discounting cues and the audience’s ability and motivation to process messages. The researchers analyzed the persistence of message impact when the source’s credibility faded from memory, thus isolating key factors that contribute to the sleeper effect.
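
For readers unfamiliar with how a meta-analysis pools results, here's a generic random-effects (DerSimonian-Laird) sketch. The effect sizes and variances are invented for illustration; they are not the 72 experiments analyzed in the paper.

```python
# Generic sketch of random-effects meta-analytic pooling (DerSimonian-Laird).
# The per-study numbers below are invented for illustration only.
import numpy as np

effects = np.array([0.30, 0.12, 0.45, 0.08, 0.25])    # per-study effect sizes
variances = np.array([0.02, 0.03, 0.05, 0.01, 0.04])  # per-study sampling variances

w = 1 / variances                       # fixed-effect weights
fixed = np.sum(w * effects) / np.sum(w)
Q = np.sum(w * (effects - fixed) ** 2)  # heterogeneity statistic
df = len(effects) - 1
tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_re = 1 / (variances + tau2)           # random-effects weights
pooled = np.sum(w_re * effects) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))
print(f"pooled effect = {pooled:.3f} ± {1.96*se:.3f} (tau^2 = {tau2:.3f})")
```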

Ruggieri et al. (2023)

This study involved 324 Italian white-collar workers who viewed Facebook posts about COVID-19 workplace safety. Participants were exposed to three types of posts: real news, real news with a discounting cue, and fake news. Researchers measured participants’ perceptions immediately and one week later, focusing on memory recall and belief in the information. They categorized participants as either “believers” or “nonbelievers” of the fake news to analyze differences in perception and memory retention over time.
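
Here's a toy sketch of the design's core contrast as I read it: belief measured immediately and again a week later, broken out by condition and believer status. The numbers are invented; a sleeper effect would show up as belief holding or rising at the second measurement for fake news.

```python
# Sketch of the 3-condition, 2-timepoint contrast (my framing, not the
# authors' code). Group means are invented, on the study's rating scale.
import pandas as pd

data = pd.DataFrame({
    "condition": ["real", "real_discounted", "fake", "fake"],
    "believer":  [None, None, True, False],  # classified for fake-news viewers
    "belief_t1": [6.0, 5.5, 6.2, 2.1],       # immediately after exposure
    "belief_t2": [5.8, 5.0, 6.6, 2.7],       # one week later
})
# Positive delta for fake news a week later is the sleeper-effect signature
data["delta"] = data["belief_t2"] - data["belief_t1"]
print(data[["condition", "believer", "delta"]])
```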

Findings

Kumkale and Albarracín (2004)

The meta-analysis confirmed the sleeper effect’s occurrence under specific conditions: when discounting cues followed persuasive arguments and when recipients had high motivation or ability to process the message. Persuasion increased over time as memory of the noncredible source decayed. The review emphasized the importance of the timing of discounting cues and the cognitive engagement of the audience, suggesting that motivated audiences are more susceptible to the sleeper effect.

Ruggieri et al. (2023)

Participants remembered fake news better than real news, even when they initially recognized it as false. Fake news is often more emotionally provocative, novel, or sensational, making it more memorable. The study also posits that the narrative structure and vividness of fake news stories can enhance recall.  In the end, memory of the message persisted, but memory of the source diminished over time, suggesting a sleeper effect. Those who initially believed the fake news maintained or increased their positive impression of the content over time. Conversely, nonbelievers showed a slight increase in acceptance but to a lesser extent. The study highlights how fake news influences perception long after the source is forgotten.

Critiques of the Research or Additional Areas of Potential Study

Kumkale and Albarracín (2004)
The meta-analysis provides robust evidence for the sleeper effect but relies on aggregated data from diverse studies with varying methodologies. As with all meta studies, the lack of uniformity across experiments presents a challenge in isolating causal mechanisms. Further research should explore real-world applications (as in Ruggieri), such as political messaging or health communication, to test the sleeper effect outside controlled environments. Investigating long-term behavioral changes could also deepen understanding of its societal impact.

Additionally, the meta-analysis was published in 2004, predating social media's rise.

Ruggieri et al. (2023)
The study effectively demonstrates the sleeper effect in the context of workplace safety perceptions but is limited by its sample of educated white-collar workers. In addition, Ruggieri's study tested memory after only one week; what happens after months? Future research should explore different demographic groups to determine whether educational background affects susceptibility to misinformation, and should examine media diets to determine whether media mode affects the sleeper effect. Memes, and now deepfake pictures or video, are likely to stick around far longer than a text-based message. Additionally, the study focuses on COVID-19-related content, which may limit generalizability due to potential confounding factors. Examining other controversial topics could provide a broader understanding of the sleeper effect's impact. May I be so bold as to suggest UFOs?

Comparative Analysis
Kumkale and Albarracín offer a foundational, theoretical perspective on the sleeper effect, establishing cognitive mechanisms and general conditions for delayed persuasion.
In contrast, Ruggieri et al. apply these principles to a specific real-world context, highlighting how emotionally charged and vivid fake news influences memory. The former provides broad insights into persuasion dynamics, while the latter demonstrates practical implications in digital misinformation.  Future studies should integrate both approaches, combining theoretical rigor with real-world relevance to better understand and combat misinformation.

Neither study offers much on how to combat the sleeper effect, merely suggesting implications for countering misinformation. Kumkale and Albarracín (2004) highlight the importance of disrupting the dissociation process by ensuring the credibility of the source remains linked to the message. Ruggieri et al. (2023) imply that repeated corrections and reminders of the source's noncredibility could mitigate the sleeper effect. Future research should explore these mitigation strategies more systematically, particularly in digital environments where misinformation spreads rapidly.

“A lie can travel half way around the world while the truth is putting on its shoes.”

Mark Twain (attribution disputed)

Conclusion: Why This Matters

The sleeper effect isn’t just an academic curiosity—it’s a weapon in the misinformation playbook.

Kumkale and Albarracín (2004) provide a theoretical framework for the sleeper effect, showing its occurrence when discounting cues follow persuasive arguments and when audiences engage cognitively. Their meta-analysis emphasizes cognitive mechanisms and general conditions for delayed persuasion.

Ruggieri et al. (2023) apply this framework to real-world misinformation about COVID-19, revealing that fake news persists in memory even when initially identified as false. Their findings demonstrate how emotional and vivid content enhances recall, highlighting practical implications in the context of social media.

In a world barreling toward deepfakes, where misinformation spreads faster than facts, understanding the sleeper effect isn't just smart; it's survival!

Preparing the Political Environment Before Introducing A Policy Change

When I was a young, student leader at the University of Florida, I had an eye-opening experience during a meeting with a Congressperson in D.C. I had meticulously prepared my case for a legislative change affecting students and felt confident as the discussion progressed. The Congressperson was engaged, asking insightful questions, and I believed I was making real progress.

As the meeting ended, we walked to the door, and the Congressperson put a hand on my shoulder and said, “I agree with you, but I can’t help you—yet. It’s too soon. Your job is to build the pressure on me and my friends.”

That moment taught me a critical lesson: successful policy change requires understanding and preparing the political environment before moving forward.

“I agree with you, but I can’t help you—yet. It’s too soon. Your job is to build the pressure on me and my friends.”

Member (ret.), U.S. House of Representatives

Why Preparation Matters

Policy change is a complex, multifaceted process that demands a strategic approach. Without the right groundwork, even the most compelling proposals can falter. Decision-makers are influenced by political realities, public opinion, and stakeholder pressures. To succeed, businesses and advocacy groups must align their goals with the broader political landscape and create the conditions for policymakers to act.

The Outside Game: The Role of Political Affairs Experts

Political affairs experts work to understand and shape the political environment. Their work lays the foundation for successful advocacy efforts and creates momentum for change through key strategies:

Strategic Vision & Planning: They analyze the political landscape, identify the stakeholders, power players, and the relationships, and develop a comprehensive roadmap (to include processes) and budget for the entire policy campaign.

Stakeholder / Power Mapping & Engagement: They identify key stakeholders, including agencies, industry groups, and advocacy organizations, and develop engagement strategies to build coalitions and garner support.

Issue Identification & Research: They meticulously analyze the policy problem, gather data, and conduct comprehensive research to understand its impact and potential solutions. This may involve data analysis, exploring other jurisdictions' approaches, identifying knowledge gaps to be filled by experts, and conducting public opinion polling and focus groups.

Opposition Research & Counter-Messaging: They delve deep into the opposition’s arguments, funding sources, and potential weaknesses, developing counter-messaging strategies to neutralize their influence.

Narrative Development: They craft compelling narratives that resonate with lawmakers, stakeholders, and the public, framing the issue in a way that demands attention and action.    This often entails translating complex information into more digestible materials.

Policy Whitepaper Development: They synthesize research findings and stakeholder perspectives into persuasive policy whitepapers that provide evidence-based arguments for change.

Targeted Messaging & Channel Optimization: They tailor messaging for specific audiences, including lawmakers, the public, and media outlets, ensuring consistent and persuasive communication across all channels. This can range from collateral to digital programs to full-blown advertising campaigns.

Resource Allocation & Coordination: They strategically budget and allocate resources, including funding, personnel, consultants, and media outreach, to maximize impact and efficiency throughout the campaign.

The Inside Game: The Role of Lobbyists

Lobbyists are the boots on the ground, leveraging their relationships and expertise to navigate the legislative process. They work in tandem with political affairs experts to present the research and the proposal, and to push policy change through formal channels.

Legislative Champion Recruitment: They identify and engage lawmakers with influence and interest in the issue, building relationships and securing sponsors for the bill.

Pre-Filing & Legislative Advocacy: They conduct briefings with legislative committees, staff, and leadership and ensure buy-in from key committees before formal introduction of the bill.

Committee Process Management: They organize expert testimony for public hearings, negotiate amendments with stakeholders, and work behind the scenes to secure votes in committees.

Floor Vote & Passage: They orchestrate last-minute advocacy pushes, ensure cross-chamber coordination in bicameral systems, and engage the Governor’s office to secure support or negotiate potential veto overrides.

Governor & Executive Branch Engagement: They engage with the Governor’s office, the Governor’s staff, and relevant agencies to ensure smooth implementation and minimize potential roadblocks.

Post-Passage Implementation: They work closely with regulatory bodies during the rulemaking process, mobilize supporters to advocate for favorable implementation rules, and monitor compliance and performance metrics. (Note: the lobbyists handling implementation may change depending on the area of expertise.)

The Power of Synergy in Policy Change

Policy change is not a linear process. Political affairs experts and lobbyists must work simultaneously, both inside and outside the legislative arena, to build momentum and overcome resistance. Their collaboration ensures:

  • Policy initiatives are well-researched and strategically positioned.
  • Stakeholders are engaged and mobilized effectively.
  • Policymakers are equipped with the right information at the right time.

Key Takeaways for Businesses

Preparing the political environment requires a strategic, long-term approach. Businesses seeking significant policy changes should:

  1. Invest Early in Research: Understand the landscape and anticipate challenges.
  2. Build Relationships: Cultivate trust with key stakeholders and decision-makers.
  3. Craft Compelling Narratives: Simplify complex issues and align messaging with audience values.
  4. Mobilize Support: Leverage grassroots and grasstops advocacy to build momentum.
  5. Be Patient: Policy change is often slow; persistence is key.

Conclusion

Effective policy change doesn’t happen in a vacuum. By preparing the political environment, businesses can increase their chances of success and create lasting impact. Whether engaging political affairs experts, lobbyists, or both, the key lies in strategic planning and consistent effort.

Campaigns, Ads, and Experiments: How Political Science Meets Persuasion

The new edition of the American Political Science Review (Feb 2024) published a study that I was extremely excited to read. It explores political ads and persuasion—those annoying little things on which millions of dollars are spent, and which, for the most part, are written and produced based on rules of thumb passed down from mentors. But what really works and actually moves voters? In this polarized world, does persuasion even work, or are we just trying to mobilize voters?

Well, How Experiments Help Campaigns Persuade Voters: Evidence from a Large Archive of Campaigns' Own Experiments makes a valiant effort, but falls short in a couple of areas. While this literature does add to our knowledge, it has a significant blind spot: due to the research design, only ads from Democratic campaigns were studied—but more on that later. Additionally, we must critically evaluate the findings, noting that the ad testing company, Swayable, has researchers listed as authors on the paper, and the findings seem extremely favorable to the company's revenue goals.

Noting these two major concerns, let's explore this study to see how political campaigns (remember—Democratic campaigns only) use experiments to figure out which ads work best. Spoiler alert: It's more complicated than you think.

Title: How Experiments Help Campaigns Persuade Voters: Evidence from a Large Archive of Campaigns’ Own Experiments

Link: https://doi.org/10.1017/S0003055423001387

Peer Review Status: Peer-reviewed

Citation: Hewitt, L., Broockman, D., Coppock, A., Tappin, B. M., Slezak, J., Coffman, V., Lubin, N., & Hamidian, M. 2024. “How Experiments Help Campaigns Persuade Voters: Evidence from a Large Archive of Campaigns’ Own Experiments.” American Political Science Review, 118(4): 2021-2039. doi:10.1017/S0003055423001387.

Methodology

The study analyzed a treasure trove of data from 146 experiments run by the tech platform Swayable, which worked with Democratic and left-leaning campaigns. These experiments tested 617 ads on over 500,000 respondents. Here’s the gist:

  • Type of Study: Randomized survey experiments
  • Sample Size: Over 500,000 respondents, including diverse demographic and political groups
  • Experimental Design: Ads were tested on treatment groups, while control groups viewed neutral videos (e.g., public service announcements).
  • Measures: Respondents rated their likelihood to vote for a candidate and their favorability toward that candidate on a scale of 0 to 10. Results were adjusted for variables like gender, age, and partisanship to ensure accuracy.

Each experiment aimed to measure the persuasive impact of specific ad features, such as emotional tone, messenger characteristics, and informational content. Data collection spanned two election cycles, offering a comprehensive look at how ad effectiveness varies by context.
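
As a rough illustration of this design, here's a sketch of how one ad's persuasive effect might be estimated: regress the 0-10 outcome on a treatment indicator with covariate adjustment. The data are simulated and the model specification is my assumption, not Swayable's actual estimator.

```python
# Schematic version (assumptions mine) of estimating one ad's persuasive
# effect in a randomized survey experiment: outcome ~ treatment + covariates.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),        # saw the ad vs. a neutral PSA
    "age": rng.integers(18, 90, n),
    "female": rng.integers(0, 2, n),
    "party": rng.choice(["D", "R", "I"], n),
})
# Simulated outcome with a small true treatment effect (~0.2 on a 0-10 scale)
df["vote_support"] = (5 + 0.2 * df["treated"]
                      + 0.01 * (df["age"] - 50)
                      + rng.normal(0, 2, n)).clip(0, 10)

model = smf.ols("vote_support ~ treated + age + female + C(party)", data=df).fit()
print(model.params["treated"], model.bse["treated"])  # effect estimate and SE
```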

Factors Considered in Ad Effectiveness

Figure 3 of the study provides a detailed analysis of the features evaluated for their impact on ad effectiveness. The following factors were explored:

  • Tone of the Ad: Whether the ad was positive, negative, or contrast.
  • Message Content: Inclusion of new facts, emotional appeals (anger, enthusiasm, fear), or policy details.
  • Messenger Characteristics: The demographic attributes, partisanship, or credibility of the spokesperson.
  • Production Quality: The perceived professionalism or “polished” nature of the ad.
  • Pushiness: How assertive the ad was in delivering its message or call to action.

The results revealed a lack of consistency across contexts. For instance, ads emphasizing enthusiasm might perform well in one election but have no discernible effect in another.

Similarly, while emotional appeals were hypothesized to boost effectiveness, the actual impact was context-dependent and often minimal. This inconsistency underscores the challenge of predicting ad success without experimentation.

Results and Findings

  • Overall Effectiveness: Ads had small but significant persuasive effects. On average:
    • Vote choice shifted by 2.3 percentage points in 2018 down-ballot races.
    • Effects dropped to 1.2 points in 2020 down-ballot races and
    • Effects dropped to 0.8 points in the 2020 presidential race.
  • Variation in Effectiveness: Persuasion varied significantly among ads. Some were 50% more effective than the average, while others were 50% less effective.
  • Unpredictable Features: Conventional wisdom about what makes ads effective—like emotional appeals or testimonials—had limited and context-dependent predictive power. What worked in 2018 didn’t necessarily work in 2020.
  • Implications for Campaigns: Experimentation is invaluable. Simulations showed that campaigns investing in testing ads could dramatically improve their impact, especially in close elections.

Simulations and Their Impact

In what could almost be considered a side note, the researchers conducted simulations to explore the potential value of ad experimentation in campaign strategy. Here’s what they did and what they found:

  • Method: They modeled scenarios in which campaigns invested resources in testing ads to find the most persuasive ones. This allowed them to model/estimate how much more effective overall advertising efforts could become with such targeted approaches.
  • Findings: The simulations showed that even modest investments in experimentation could yield significant returns, particularly in close elections. Choosing a highly persuasive ad over a less effective one could shift the needle in tight races, demonstrating the importance of data-driven decision-making (a toy version of this logic is sketched below).
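
Here's a toy re-creation of that logic under my own assumptions (including the optimistic one that testing perfectly identifies the best ad): if ad effects vary around a small mean, airing the best of k tested ads beats airing a random one.

```python
# Toy Monte Carlo (my assumptions, not the authors' code): compare airing a
# random ad against testing k candidates and airing the winner. Assumes the
# test identifies the truly best ad without noise.
import numpy as np

rng = np.random.default_rng(1)
mean_effect, sd = 0.012, 0.006   # ~1.2pp average effect, with real variation
k, sims = 5, 10_000

random_ad = rng.normal(mean_effect, sd, sims)
best_of_k = rng.normal(mean_effect, sd, (sims, k)).max(axis=1)

print(f"random ad:   {random_ad.mean():.4f}")
print(f"best of {k}:   {best_of_k.mean():.4f}")
```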

Once again, it’s important to note that this research was published in conjunction with an ad testing platform.

Critiques of the Research

  • Generalizability: As discussed, the data came exclusively from Democratic campaigns using the Swayable platform. No conservative ads were included in the analysis. Given the significant personality differences often found between liberals and conservatives, these results may not be generalizable to right-leaning campaigns or voters.
  • Generalizability (cont'd): Additionally, the analysis did not include ballot initiatives or other types of influence campaigns, which are typically more nonpartisan.
  • Attrition and Sampling Issues: Some experiments lacked complete data on respondents who dropped out, potentially skewing results.
  • Timing Limitations: Findings are specific to the 2018 and 2020 elections and may not generalize to future elections with different political dynamics, particularly as campaigns continue to learn and improve.
  • Industry Sponsorship/Relationship: The research was conducted only with ads tested by the Swayable platform. Some of the findings support the use of ad testing. While I’m not accusing the researchers of anything, it’s important to note the relationship. It’s crucial to be critical when interpreting the results.

I will note the authors do a good job and are transparent in offering these caveats.

Additional Areas of Potential Study

  • Bipartisan Analysis: Expanding the dataset to include Republican campaigns could reveal whether persuasion techniques differ by political party / ideology.
  • Non-partisan Analysis: Expanding the dataset to include ballot initiatives and issue ads could reveal whether persuasion techniques differ from partisan activities.
  • Primary / General Election Analysis:  Expanding the ads studied to group intra-party (primary) versus inter-party (general) could reveal some significant differences.  This is especially critical given the substantial number of elections decided in the primary, influenced by sorting and gerrymandering.
  • Field Experiments: Testing ads in real-world settings—not just surveys—would enhance ecological validity.  Would the results be ‘replicated’ in focus groups?
  • Ad Context: Future research could explore how competing ads or media coverage influence ad effectiveness.

Conclusion

For me, this is one of those frustrating studies. The title promises answers, but the study raises more questions than it answers.

This research sheds light on how Democratic campaigns use experiments to refine their strategies, emphasizing the need for ad testing in a rapidly changing political environment. However, as Steve Schale, a Democratic operative from Florida, points out, ad-testing can become an obsession that overshadows the larger narrative of the campaign. He writes, "We (Democrats) were addicted to ad-testing, to the point that it drove decision-making more this cycle than the desire or need to tell a story."

The study concludes that ad effectiveness is deeply tied to the specific context of each election, meaning strategies that worked in 2018 might not have the same impact in 2020, let alone in 2026 or 2028. This highlights the challenge for campaigns: predicting what will work is difficult, and there are no guarantees. In fact, attempting to replicate past success is likely to be a futile endeavor.

A cynic would write: the authors conclude that persuasion is extremely context-driven, and what may work in one election may not work in another (whomp whomp); therefore, campaigns should test ads.

Nevertheless, the cumulative impact of even small shifts in ad effectiveness can influence election outcomes, particularly in tight races. The study underscores the growing importance of data-driven decision-making in modern campaigns, but it also leaves many questions unanswered—questions that future research will need to address to refine our understanding of political ad effectiveness.