Do You Have a Nose for Nonsense? Science Says Some People Fall for ‘Pseudo-Profound Bullshit’

Ever scroll through social media and come across a post that sounds incredibly profound, only to read it again and realize it makes absolutely no sense? You’re not alone. These are examples of what researchers call “pseudo-profound bullshit” (think of an inspirational quote floating over a fog-covered mountain).

It’s a topic that’s been on my mind lately, fueled by some recent online interactions (seriously, I have to remember to just stick to talking about my dog and the Gators!).

These interactions reminded me of a study by Pennycook and colleagues that I read some time ago, which investigates why certain people are more receptive to these seemingly wise but ultimately empty statements.

Citation & Links

Title: On the reception and detection of pseudo-profound bullshit

Link: download a pdf

Peer Review Status: Peer-reviewed

Citation: Pennycook, G., Cheyne, J. A., Barr, N., Koehler, D. J., & Fugelsang, J. A. (2015). On the reception and detection of pseudo-profound bullshit. Judgment and Decision Making, 10(6), 549–563.

Methodology

Researchers presented participants with statements that appeared meaningful but were actually random buzzwords strung together.  For example, one statement was, “Wholeness quiets infinite phenomena.” Participants rated how profound they found each statement.  Researchers then analyzed these ratings alongside measures of cognitive style, beliefs, and intelligence.
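
To make the construction concrete, here is a tiny illustrative sketch of the idea in Python. The study itself drew its items from online generators, and the vocabulary lists below are my own invention:

```python
import random

# Toy generator: string vague buzzwords into a grammatical but meaningless
# sentence, in the spirit of the study's stimuli. Word lists are invented.
SUBJECTS = ["Wholeness", "Hidden meaning", "Consciousness", "The universe"]
VERBS = ["quiets", "transforms", "transcends", "unfolds through"]
OBJECTS = ["infinite phenomena", "universal observations",
           "the barrier of the mind", "unparalleled abstract beauty"]

def pseudo_profound_statement() -> str:
    """Assemble a random subject-verb-object 'profound' statement."""
    return f"{random.choice(SUBJECTS)} {random.choice(VERBS)} {random.choice(OBJECTS)}."

if __name__ == "__main__":
    for _ in range(3):
        print(pseudo_profound_statement())
```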

Results and Findings

  • People vary in their ability to detect bullshit.
  • Those who easily fall for bullshit tend to be less reflective thinkers.
  • People who are more gullible also tend to have lower cognitive ability and believe in the paranormal.
  • A bias toward accepting statements as true may contribute to bullshit receptivity.

“The vagueness of the statements may imply that the intended meaning is so important or profound that it cannot be stated plainly.”

Deep Dive: Discerning Deceptive Vagueness

The study suggests that detecting bullshit involves more than just skepticism. It requires the ability to recognize vague language that seems impressive but lacks real substance. Some people have a stronger tendency to accept things at face value, and that tendency makes them more susceptible to bullshit.

The researchers also found that people who are more reflective and have higher cognitive abilities are better at spotting bullshit.  This suggests that critical thinking skills play a role in bullshit detection.

Why It Matters: Bullshit in the Real World

Pseudo-profound bullshit is common.

You likely encounter it in everyday conversations (especially those taking place on social media), political rhetoric, marketing, and even academia.

Robert F. Kennedy Jr., the current Secretary of Health and Human Services, has made pseudo-profound claims. For instance:

He said, “By September, we will know what has caused the autism epidemic and we will be able to eliminate those exposures.” 

This statement contains all the hallmarks of pseudo-profound language – it sounds momentous and offers a dramatic promise while being scientifically implausible.

Understanding claims like this, and how people perceive and process them, can help us become more discerning consumers of information.

Limitations/Caveats

The study focused on pseudo-profound bullshit, but there are other kinds. For example, “conversational bullshit” occurs when people speak without concern for the truth, such as in a casual bull session with friends. Also, partisanship and polarization are likely to affect how people rate the profundity of statements.

Final Thoughts / Conclusion

Bullshit is widespread (especially online, where there are no editors), but we can learn to identify and resist it.

Critical thinking skills should be mandatory so that we all can better navigate the sea of information and avoid falling for deceptive vagueness.

PS Recommended Reading

Want more? Check out Calling Bullshit, a book and course created by two university professors to address “what we see as a major need in higher education nationwide.”

The site has free lectures, case studies, and tools: callingbullshit.org

I have read their book, and I highly recommend it.

Mind Over Masses: Why Your CRAZY Uncle Won’t Change His Mind, But the Country Might Shift Opinions

I am told that the best political discussions come while sharing a beer.  Let’s test that theory.

I was having a beer with a political science student, and the topic was how difficult it is to change minds in today’s hyper-partisan atmosphere. We went through the psychological underpinnings of political decision-making and agreed that influence is extremely difficult, especially as political involvement increases.

Then the kicker: “If individual opinions are so difficult to change, then why does public opinion change quickly on some issues, for example, same-sex marriage and marijuana legalization?”

It feels like a paradox: individual rigidity, collective change.  It’s not.  It just means persuasion works differently at different levels.

The Puzzle: Individual Walls vs. Collective Waves

You’ve been there:

  • You argue politics with a friend, facts in hand. They dig in.
  • Meanwhile, public opinion on major issues swings dramatically in just a few years.
  • Even partisan groups sometimes pivot quickly.

So why does one person stay unmoved while millions change their minds?

Why Individuals Resist Change

Attempts to change individual minds fail most of the time; we are practically hardwired to resist. Several psychological defenses get in the way:

  • Motivated reasoning: People interpret facts to support what they already believe.
  • Confirmation bias: They seek out supporting evidence, avoid what challenges them.
  • Disconfirmation bias: They argue harder against facts that conflict with their views.
  • Cognitive dissonance: Contradictory information creates discomfort. Most reject it rather than reconsider.
  • Identity protection: Political beliefs often tie into group membership. Challenging them feels personal.
  • Reactance: Push too hard, and people resist to assert their independence.
  • Active resistance: They discredit sources, counter-argue, or double down on prior beliefs.

These defenses don’t just slow persuasion. They flip it. Attempts to persuade can actually reinforce opposition.

How Public Opinion Shifts Anyway

Even with all that resistance, public opinion moves. Big changes happen, just not the way most think:

  • Generational replacement: Older cohorts die. Younger cohorts with different views come of age.
  • Social norm cascades: Once enough people express a new view, others follow to avoid social costs.
  • Elite cues: Trusted leaders signal shifts, and partisans often follow without deep reflection.
  • Media framing: News outlets shape what facts people focus on and how they interpret them.
  • Major events: Crises, court rulings, or wars can jolt opinion in new directions.

None of these rely on changing each individual’s mind one-on-one. They shift the environment around the individual.

Not a Paradox, Just Different Layers

The seeming contradiction dissolves when you separate levels of influence:

Micro: Individuals defend their identities and beliefs. Persuasion is rare and hard.

Macro: Groups shift through cohort turnover, social pressure, elite signaling, media narratives, or events.

Think of a forest: each tree resists bending, but the whole forest can sway with the wind.

NOTE:

If this topic interests you, read Thomas Schelling’s Micromotives and Macrobehavior.
It explains how individual choices, even when rational and modest, can produce unexpected and sometimes extreme collective outcomes.
His Nobel Prize work helps make sense of why political opinion can be both stubborn at the individual level and fluid at the societal level.
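
To make the micro/macro point concrete, here is a minimal sketch of a Granovetter-style threshold cascade, offered in the spirit of Schelling’s argument (my illustration; the threshold distribution and numbers are invented):

```python
import random

random.seed(1)

# Personal "adoption thresholds": the share of the population that must
# already hold the new view before a given person will express it too.
thresholds = [min(max(random.gauss(0.35, 0.15), 0.0), 1.0) for _ in range(100_000)]

def final_share(seed_share: float) -> float:
    """Iterate the cascade to a fixed point from an initial share of adopters."""
    share = seed_share
    while True:
        new_share = sum(t <= share for t in thresholds) / len(thresholds)
        if new_share <= share:
            return share
        share = new_share

# Individually everyone is stubborn, yet a slightly larger seed tips everything:
# 10% and 20% seeds stall where they started; a 30% seed sweeps the population.
for seed in (0.10, 0.20, 0.30):
    print(f"early adopters {seed:.0%} -> final share {final_share(seed):.0%}")
```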

What This Means for Persuasion and Politics

Understanding the difference between individual resistance and collective change clears up what only looks like a contradiction.

Your crazy uncle may never budge in a political argument, but that doesn’t mean the electorate stands still.   That alone should provide hope to all those who feel like they are banging their heads against a wall.  

Public opinion does change.  It shifts when generational turnover, social norms, elite cues, media framing, or major events realign the context.  

Change rarely happens through argument alone.   It happens when the ground beneath our feet moves, and moving that ground is difficult.  

But move it, you can.

    Social Media, Deepfakes, Lies and the Perception of Truth: Why We Believe Deepfakes

    Deepfakes are more than Donald Trump’s foot fetish for Elon or Joe Biden playing video games with a hall of Presidents; they are far more than an Internet novelty act. These AI-generated videos (the subject of this week’s study) mimic real people, making them incredibly powerful tools for misinformation.

    This is obviously concerning in an era where misinformation spreads fast.

    This is obviously concerning when many people’s only exposure to political information is via social media.

    This study examines how prior exposure to deepfakes and social media news consumption interact to amplify the Illusory Truth Effect (ITE), the tendency to accept information as true simply because we have seen it before.

    Think propaganda.

    Using data from eight countries, the researchers assess whether reliance on social media for news consumption makes individuals more susceptible to believing deepfakes, regardless of their cognitive ability.

    Title: The Power of Repetition: How Social Media Fuels Belief in Deepfakes

    Link: Journal of Broadcasting & Electronic Media

    Peer Review Status: Peer-reviewed

    Citation: Ahmed, S., Bee, A. W. T., Ng, S. W. T., & Masood, M. (2024). Social Media News Use Amplifies the Illusory Truth Effects of Viral Deepfakes: A Cross-National Study of Eight Countries. Journal of Broadcasting & Electronic Media, 68(5), 778–805. https://doi.org/10.1080/08838151.2024.2410783

    METHODOLOGY

    The study surveyed 8,070 participants from the U.S., China, Singapore, Indonesia, Malaysia, the Philippines, Thailand, and Vietnam.

    Participants were shown four viral deepfakes—both political (Putin) and non-political (Kardashian) examples—and asked to rate their accuracy. The researchers measured:

        • Whether participants had previously seen the deepfakes
        • Their level of engagement with news on social media
        • Their cognitive ability, using a standard vocabulary test (Wordsum)

    Control variables included age, gender, education, income, traditional media use, and political interest.

    The goal was to determine whether repeated exposure to deepfakes led to increased belief in their authenticity and whether social media use amplified this effect. They also examined whether cognitive ability moderated these effects.
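
    For readers who want to see the shape of such a moderation analysis, here is a rough sketch using statsmodels. The file and column names are hypothetical, and this is a sketch of the general approach, not the authors’ actual model:

    ```python
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical survey file; column names are invented for illustration.
    df = pd.read_csv("deepfake_survey.csv")

    # Perceived accuracy of a deepfake as a function of prior exposure,
    # social media news use, and their interaction, plus the paper's controls.
    model = smf.ols(
        "accuracy_rating ~ prior_exposure * sm_news_use + cognitive_ability"
        " + age + gender + education + income + traditional_media + pol_interest",
        data=df,
    ).fit()
    print(model.summary())

    # The Illusory Truth Effect appears as a positive prior_exposure coefficient;
    # amplification by social media shows up as a positive interaction term.
    ```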

    RESULTS AND FINDINGS

    The study found strong evidence for the Illusory Truth Effect (ITE) across all eight countries.

    1. Prior Exposure Increases Belief: Across all eight countries, those who had previously seen a deepfake were more likely to rate it as accurate than those seeing it for the first time.

    2. Social Media News Use Amplifies ITE: Heavy reliance on social media for news significantly increased the likelihood of believing deepfakes, even after controlling for cognitive ability. The effect was consistent in six of the eight countries, the exceptions being China and Malaysia.

    3. Cognitive Ability Doesn’t Help Much: Higher cognitive ability had only a weak protective effect against belief in deepfakes. Even individuals with high cognitive ability were more likely to believe deepfakes if they frequently engaged with news on social media.

    4. Cross-National Differences: Participants from China were the most likely to believe deepfakes, possibly due to the country’s controlled media environment. In contrast, Singaporeans were the least likely to be deceived, potentially due to high digital literacy and government efforts to combat misinformation.

    CRITIQUES AND AREAS FOR FUTURE STUDY

    First, surveys don’t establish causality. Future research could use experimental designs to better understand the causal mechanisms behind ITE and deepfake susceptibility.

    Second, the study relied on self-reported measures of social media news engagement and cognitive ability, which may introduce bias. It is my experience that my college-age children horribly underestimate the amount of time they spend swiping. Future studies could use behavioral data, such as actual social media usage patterns, to complement self-reports.

    Third, the study used one measure of cognitive ability (vocabulary), and this may not fully capture cognitive ability. Future studies could use different measures.

    Finally, while the study accounts for different political and social media environments, additional study into specific national factors could be a fruitful area of deeper research.

    CONCLUSION

    Setting aside ethical concerns of bombarding people with Kardashian videos, this research highlights a concerning trend – the more we see deepfakes, the more likely we are to believe them. Even the smartest among us are not immune. (We also know this from studies of motivated reasoning.)

    These results are not surprising since we know that advertising and propaganda work through high-frequency repetition and familiarity.

    As I tell my students, what and who you surround yourself with, you will likely become. The issue is most of them aren’t making a conscious choice; a black box algorithm is making it for them and shaping their perceptions.

    This line of research underscores the need for better misinformation detection tools and education efforts to help individuals critically evaluate digital content.

    Policymakers must begin to take these issues seriously. It’s more than just community notes and fact-checking; it is reducing repeated exposure to misinformation. Yes, a social media platform’s “engagement” will be affected, but the picture emerging is that “engagement” is extremely harmful.

    To do nothing and expect a better result is foolish.

    Using AI to Simulate Congress – It’s a Whole New World

    A recent discussion about AI and virtual agents led to an intriguing question: Could they be trained to predict public opinion?

    There are companies attempting to train agents on census data, voter files, and other assorted data, then spinning them up and polling the agents with typical political polling questions. It’s wild.

    This, naturally, spiraled into jokes about living in a simulation. But the idea stuck with me, fueling my curiosity about AI’s role in politics.

    I then came across this paper that explores whether large language models (LLMs) can simulate senatorial decision-making and, more importantly, whether they can identify conditions that encourage bipartisanship.

    Researchers Zachary Baker and Zarif Azher created AI-driven agents representing real U.S. Senators, placing them in a virtual simulation of the Senate Intelligence Committee.

    The results suggest that under certain conditions, these agents exhibit realistic debate patterns and even compromise across party lines.

    Title: Simulating the U.S. Senate: Can AI-Powered Agents Model Bipartisanship?

    Link: https://arxiv.org/abs/2406.18702

    Peer Review Status: Under Review

    Citation: Baker, Z. R., & Azher, Z. L. (2024). Simulating The U.S. Senate: An LLM-Driven Agent Approach to Modeling Legislative Behavior and Bipartisanship. arXiv preprint arXiv:2406.18702.

    Introduction

    Political gridlock and polarization define modern legislative processes, with bipartisan cooperation often seeming elusive.

    Methodology

    The researchers designed virtual senators using GPT-3.5, equipping each agent with key traits, policy positions, and memory functions. The study focused on six senators from the 2024 Senate Intelligence Committee: Mark Warner (D), Marco Rubio (R), Susan Collins (R), John Cornyn (R), Ron Wyden (D), and Martin Heinrich (D). These AI senators engaged in structured debates on two key issues:

    • U.S. aid to Ukraine.
    • A general discussion on necessary legislative actions.

    The simulation ran in multiple rounds, with agents engaging in discourse, recalling past statements, and summarizing their stances. To assess realism, a high school government teacher and a former congressional staffer evaluated the AI-driven debates, rating them on a 0-10 believability scale.
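
    As a rough illustration of this setup, a minimal agent loop might look like the sketch below. This is my reconstruction of the general idea (persona prompts plus a rolling debate memory), not the authors’ code; the personas and prompts are invented:

    ```python
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Invented persona prompts standing in for the paper's trait/position profiles.
    SENATORS = {
        "Warner (D)": "You are Senator Mark Warner, a Democrat focused on intelligence policy.",
        "Rubio (R)": "You are Senator Marco Rubio, a Republican focused on national security.",
        "Wyden (D)": "You are Senator Ron Wyden, a Democrat focused on oversight and accountability.",
    }

    def debate(topic: str, rounds: int = 2) -> list[str]:
        """Run a round-robin debate; each agent sees a short rolling memory."""
        transcript: list[str] = []
        for _ in range(rounds):
            for name, persona in SENATORS.items():
                memory = "\n".join(transcript[-6:])  # recent statements only
                response = client.chat.completions.create(
                    model="gpt-3.5-turbo",  # the study used GPT-3.5
                    messages=[
                        {"role": "system", "content": persona},
                        {"role": "user", "content": (
                            f"Topic: {topic}\nDebate so far:\n{memory}\n"
                            "Give your next statement in two sentences."
                        )},
                    ],
                )
                transcript.append(f"{name}: {response.choices[0].message.content}")
        return transcript

    for line in debate("U.S. aid to Ukraine"):
        print(line)
    ```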

    Results and Findings

    AI Agents Could Engage in Realistic Debate

    The agents demonstrated an ability to recall prior discussion points, build arguments, and reflect on their positions. Their reflections aligned with their initial stances, reinforcing the validity of the simulation.

    For example:

    Agent Rubio: “During the committee meeting, I strongly advocated for substantial military aid to Ukraine… I believe we can’t afford to wait, and our response needs to be swift and decisive.”

    Agent Wyden: “I raised concerns about balancing domestic needs with the urgency of supporting Ukraine. While I understand the gravity of the situation, I stressed the importance of accountability.”

    Expert Evaluations Showed Moderate to High Believability

    The expert reviewers assigned mean believability scores above 5 across all tested scenarios. The funding-for-Ukraine debate received an average score of 7.45, suggesting that the AI agents’ discussions mirrored real-world legislative arguments convincingly.

    Bipartisanship Emerged When External Factors Shifted

    One of the study’s most intriguing findings was how agents reacted to external perturbations. When the simulation introduced new intelligence indicating an imminent Russian breakthrough in Ukraine, previously hesitant senators became more willing to compromise. This shift suggests that real-world bipartisanship may hinge on clear, immediate external threats.

    Critiques and Areas for Further Study

    Limited Scope and Sample Size

    The study only included six senators from one committee, limiting its generalizability. Future research should expand to larger legislative bodies and different committees to test whether similar bipartisan trends emerge.

    Lack of Real-World Verification

    While the AI agents’ actions were rated as “believable,” the study did not compare their decisions to real-world legislative outcomes. A follow-up study could test whether historical simulations align with actual votes and policy developments.

    Simplified Agent Memory and Interaction

    More sophisticated training structures and multi-agent training could enhance realism.

    GPT-3.5 Was Used

    The AI senators were created with GPT-3.5; it is worth noting the exponential improvement in later models.

    Conclusion

    For me, this study is less about specific findings and more of a proof of concept—an early glimpse into what AI-driven simulations could mean for legislative analysis and decision-making. The possibilities are vast.

    Imagine a public affairs specialist training an AI model on a government official. The technology is nearly there. Could one use Google’s NotebookLM to upload hundreds of sources on a policymaker and their key issues, then query against it? Absolutely. Could one simulate meetings? Likely. Predict outcomes? Maybe not yet, but it’s coming.

    What if you trained agents on entire legislative bodies? Florida’s regular legislative session just started this week. Predicting legislative outcomes of floor votes is relatively straightforward—partisanship and leadership preferences dictate much of the process. But the real power isn’t in forecasting final votes; it’s in modeling committee member markups, where deals are made and policies are shaped. Could AI map those interactions? That’s where this research gets interesting.

    My head is spinning with what makes the most sense in training data and what other factors to consider.  

    As AI agents and models improve, these simulations could become invaluable for political research, policy development, lobbyists, and public affairs officials.

    The ability to test legislative scenarios before they unfold could transform how decisions are made and policies are shaped.

    Is the Use of AI by Knowledge Workers Reducing Critical Thinking?

    Introduction

    Generative AI (GenAI) tools like ChatGPT and Microsoft Copilot are transforming how we work, how we study, and how we prepare for meetings. But what does this mean for our critical thinking skills?

    This study explores how generative AI tools influence critical thinking among knowledge workers. As these tools become more common, they raise questions about how they affect cognitive effort and confidence. This research surveyed 319 knowledge workers, collecting 936 examples of how they used generative AI in their tasks.

    The study examined two main questions:

    • When and how do users engage in critical thinking with AI?
    • When does AI make critical thinking easier or harder?

    Title: The Impact of Generative AI on Critical Thinking: Reductions in Cognitive Effort and Confidence Effects

    Link: The Impact of Generative AI on Critical Thinking

    Peer Review Status: Peer Reviewed, presented at CHI Conference on Human Factors in Computing Systems

    Citation:
    Lee, H.-P., Sarkar, A., Tankelevitch, L., Drosos, I., Rintel, S., Banks, R., & Wilson, N. (2025). The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers. CHI Conference on Human Factors in Computing Systems, Yokohama, Japan.

    Methodology

    The researchers used an online survey (n=319) targeting knowledge workers who use generative AI at least once a week.

    Participants provided detailed examples of using AI for three types of tasks: creation, information processing, and advice. They rated their confidence in completing the tasks with and without AI, as well as the cognitive effort required for six types of critical thinking activities based on Bloom’s taxonomy: recall, comprehension, application, analysis, synthesis, and evaluation.

    The study also measured each participant’s general tendency to reflect on their work and their overall trust in AI.
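
    To give a feel for how such survey data might be summarized, here is a small pandas sketch. The file and column names are hypothetical; this is not the authors’ analysis code:

    ```python
    import pandas as pd

    # Hypothetical survey export; column names are invented for illustration.
    df = pd.read_csv("genai_survey.csv")

    # Mean self-reported effort for each Bloom's-taxonomy activity,
    # split by whether the task was done with or without GenAI.
    activities = ["recall", "comprehension", "application",
                  "analysis", "synthesis", "evaluation"]
    print(df.groupby("used_genai")[activities].mean())

    # Does trust in AI track with lower perceived critical-thinking effort,
    # and self-confidence with higher effort, as the paper reports?
    df["mean_effort"] = df[activities].mean(axis=1)
    print(df[["trust_in_ai", "self_confidence", "mean_effort"]].corr())
    ```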

    Results and Findings

    The study found that knowledge workers perceive reduced cognitive effort when using AI, especially when they have high confidence in the tool’s capabilities.

    Conversely, those with high self-confidence reported more effort in verifying and integrating AI outputs.

    Key findings include:

    Critical Thinking Shifts Toward Verification and Integration
    When using GenAI, knowledge workers reported spending less time on information gathering and more time verifying the accuracy of AI outputs. For example, participants often cross-referenced AI-generated content with external sources or their own expertise to ensure reliability. This shift reflects a move from task execution to task stewardship, where workers focus on guiding and refining AI outputs rather than generating content from scratch.

    Confidence in AI Reduces Critical Thinking Effort
    The study found that higher confidence in GenAI’s capabilities was associated with less perceived effort in critical thinking. In other words, when workers trusted AI to handle tasks, they were less likely to critically evaluate its outputs. Conversely, workers with higher self-confidence in their own skills reported engaging in more critical thinking, even though they found it more effortful.

    Motivators and Barriers to Critical Thinking
    Participants cited several motivators for critical thinking, including the desire to improve work quality, avoid negative outcomes, and develop professional skills. However, barriers such as lack of time, limited awareness of the need for critical thinking, and difficulty improving AI responses in unfamiliar domains often prevented workers from engaging in reflective practices.

    GenAI Reduces Effort in Some Areas, Increases It in Others
    While GenAI tools reduced the effort required for tasks like information retrieval and content creation, they increased the effort needed for activities like verifying AI outputs and integrating them into workflows. This trade-off highlights the dual role of GenAI as both a facilitator and a complicator of critical thinking.

    Critiques of the Research or Additional Areas of Potential Study

    To start, I am always wary of research conducted by the industry itself, and Microsoft is a huge player in today’s AI. So I read these studies extremely critically and treat them as begging for further study.

    The study relies on self-reported data, which may be subject to bias or inaccuracies.

    This study also lacks cross-cultural perspective: it was conducted in English only and focused on youngish, tech-savvy workers.

    Additionally, it does not account for long-term impacts on critical thinking skills.

    Future research could explore:

    • Longitudinal studies to observe changes in critical thinking over time.
    • Studies that evolve as rapidly as the AI tools themselves.
    • Experiments that measure critical thinking performance rather than self-perception.
    • Cross-cultural studies.
    • Studies across all age groups.
    • Studies with non-knowledge workers.
    • Studies with students.

    Conclusion

    I keep a folder of quotes from pundits lamenting the death of civil society with each new technological advancement for my class on “The Media.” The printing press, radio, television, cable TV, the Internet, and social media—all were predicted to destroy us.

    Meh.

    I do believe generative AI tools are massively disrupting workflows, effort,  outputs, and critical thinking. I see it firsthand in the college classroom.

    As Generative AI evolves, tools must support, not undermine, critical thinking. While AI can enhance efficiency and reduce effort, it also risks fostering overreliance and diminishing critical engagement.

    By reducing the perceived effort of critical thinking, Generative AI may weaken independent problem-solving skills. As users shift from direct engagement to oversight, they must balance efficiency gains with maintaining cognitive skills.

    AI designers should prioritize features that promote critical thinking while preserving efficiency. This research highlights the need for systems that encourage reflective thinking and help users critically assess AI-generated outputs.

    Ultimately, the study underscores the importance of maintaining a critical mindset in an increasingly AI-driven world.

    P.S.

    Now that you’re here thinking about critical thinking, I would be remiss if I didn’t recommend one of the guides I keep within arm’s reach at my desk. This book contains over 60 structured techniques that enhance thinking processes, and I highly recommend it.

    Structured Analytic Techniques for Intelligence Analysis