As always, some of the best questions come from readers of the blog. “Why don’t third parties win US presidential elections?” came to us via email.
It is also timely, given the recent discussion from Kristol and the polling from Data Targeting.
Polling Third Parties
In a traditional poll, pollsters may ask a question like, “Would you consider supporting a third party candidate? Yes or no?” Because the question is asked this way, support for third parties is overstated, and the question has proven to be a poor predictor of actual voting behavior.
Another way to ask is with a horse race question like, “If the election were held today, who would you vote for?” Because the answer has no ramifications, support for third parties is again overstated.
So, if people tell pollsters they support the idea of an independent candidate, why don’t they actually vote for one? Why do most people vote for their party’s nominee?
There are structural barriers facing third parties: ballot access, campaign finance laws, polarization of party elites preventing cross-over third party validators, sparse media coverage, debate exclusion, and the lack of belief that a third party can win. But there is a more fundamental way to look at this: the psychology of the voter.
Our political system is dominated by two parties. Voters operate primarily within this two party system, and many rely on shortcuts and cues provided by the parties to simplify their decision making. Stepping outside the two party system takes considerable effort.
Rosenstone developed a “Four Part Test for Third Party Support” and I believe it provides a framework to answer the question.
For a voter to consider voting for a third party candidate, they have to reject four things. Not two of four, but all four. They have to reject both parties’ candidates and both parties themselves; then, and only then, will they consider a third party. The voter then has to find a third party candidate they consider legitimate and, finally, vote for them. A tall task.
Additional Polling
We can see Rosenstone’s framework at work in a CNN poll from May 2016. When respondents were asked whether their support for a candidate was a vote of support or a vote of opposition, we observe the following. (Editor’s note: quite a sad commentary on our current choices.)
Conclusion
In part, this explains why Donald Trump’s numbers have improved in the past 4 weeks. His support is growing from Republicans who rejected his primary campaign but are “coming home” to support the party and/or to reject Hillary Clinton, the likely Democratic nominee.
This also explains, in part, HRC’s soft numbers. The Democrats are still split between HRC and Sanders and have yet to coalesce around a candidate. We should expect the Democratic nominee’s numbers to improve after those voters move through Rosenstone’s framework.
As you can see, third parties have a difficult road ahead of them; the most likely path is not to win, but to play the “spoiler” and prevent someone else from winning.
The sheer number of polls this political cycle is amazing. The sheer number of bad polls this political cycle is stunning.
Regardless, the manner in which the press reports on polling is just God-awful.
Biggest Polling Complaint
My largest complaint is that the amount of uncertainty in a poll is not reported, is under-reported, or is misunderstood.
A poll is only a sample (hopefully a random, well-constructed one) of a population. When any pollster moves from describing the sample to inferring meaning about the population, a known amount of uncertainty is introduced into the results.
Neither the press nor most poll consumers are explicitly considering or communicating the amount of uncertainty in ANY poll.
So here is a quick primer on “How to read a damn political poll”:
Don’t forget the inherent uncertainty in polling
Margin of error – This is the stated uncertainty involved when moving from a description of the sample to an inference about the population. Often you will see the margin of error expressed as ±x%.
Many often forget that this margin of error supplies a range of correct answers, not a single number.
For example, let’s say candidate Ben Smith is polling with high negatives.
52% of the voters in our sample think Ben Smith is a full-on jerk. The margin of error in this survey is the standard ±5%.
This margin of error is primarily a function of the size of the sample: the larger the sample, the smaller the margin of error.
NOTE: The margin of error does NOT take into consideration or capture errors associated with question order, question format, sampling frame coverage, or other factors that could systematically bias a poll.
This means that when we move from describing the sample to the population as a whole, ANYWHERE between 47% and 57% of the population may think Ben Smith is a jerk. Statistics tell us the correct answer is likely within this entire range.
[Figure: visualizing uncertainty in a poll]
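To make the arithmetic concrete, here is a minimal sketch in Python using the standard normal-approximation formula for a proportion’s margin of error. The sample size of 384 is my assumption; it is roughly the size that produces the ±5% margin in the Ben Smith example.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Normal-approximation margin of error for a sample proportion.

    p: observed proportion (e.g., 0.52)
    n: sample size
    z: critical value (1.96 corresponds to a 95% confidence level)
    """
    return z * math.sqrt(p * (1 - p) / n)

p, n = 0.52, 384                 # 52% think Ben Smith is a jerk; n is assumed
moe = margin_of_error(p, n)
print(f"Margin of error: +/-{moe:.1%}")          # ~ +/-5.0%
print(f"Range: {p - moe:.1%} to {p + moe:.1%}")  # ~ 47% to 57%

# And the larger the sample, the smaller the margin of error:
for size in (100, 384, 1000, 2500):
    print(size, f"+/-{margin_of_error(0.5, size):.1%}")
```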
We say “likely” because of the seldom-reported confidence level.
Confidence level – most of the time, poll statistics are analyzed at a 95% confidence level.
Roughly, this means that if we repeated this exact poll 100 times, we would expect the real number to fall within the range indicated by the margin of error in 95 of the 100 repetitions.
HOWEVER, you must also notice that 5 out of 100 times the correct answer could fall OUTSIDE that range.
Combine the margin of error and the confidence level, even before humans enter the process, and you see there is uncertainty built into the very framework of polling.
It’s math.
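If you want to see the “95 out of 100” claim in action, here is a quick simulation sketch. It assumes a true population value of 52% and repeatedly draws samples of 384 (the same assumed size as above), counting how often the reported range captures the truth.

```python
import math
import random

TRUE_P, N, Z = 0.52, 384, 1.96   # assumed true value, sample size, 95% critical value

def run_poll() -> float:
    """One simulated poll: N respondents, each saying 'yes' with probability TRUE_P."""
    yes = sum(random.random() < TRUE_P for _ in range(N))
    return yes / N

trials, covered = 10_000, 0
for _ in range(trials):
    p_hat = run_poll()
    moe = Z * math.sqrt(p_hat * (1 - p_hat) / N)
    if p_hat - moe <= TRUE_P <= p_hat + moe:
        covered += 1

print(f"Coverage: {covered / trials:.1%}")  # typically right around 95%
```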
Don’t forget bad polls
Now, let’s consider the human factors.
On top of all the inherent uncertainty in ‘perfect’ polling, some political and media players are taking shortcuts that compound the errors.
One of the basic prerequisites of good polling is that everyone within your sample frame has an equal chance of being randomly selected.
There are two current factors acting on the polling industry:
Response Rate Declines – the polling industry is seeing declining response rates, meaning you’re not answering your phone.
The research on the effect of declining response rates is mixed, with Pew finding in 2012 that, “despite declining response rates, telephone surveys that include landlines and cell phones and are weighted to match the demographic composition of the population continue to provide accurate data on most political, social and economic measures.” (source: Pew Research)
The Rise of Cell Phones
In 2014, the Centers for Disease Control and Prevention estimated that 39.1% of adults and 47.1% of children lived in wireless-only households, 2.8 percentage points higher than in the same period of 2012.
This change disproportionately affects young people (nearly two-thirds of 25- to 29-year-olds) and minorities (Hispanics are the most likely to be without a landline). (source: Pew Research)
The rise of cell phone-only homes is problematic for pollsters because the law forbids them from using automated dialing software to call cell phones. This prohibition directly increases the cost of polling.
Combination of Response Rates and Cell Phones
We can safely assume these trends, declining response rates and growing numbers of cell phone-only voters, will continue.
In summary, voters with landlines aren’t answering and reaching cell phone-only voters is expensive. It is this combination that makes good, quality research difficult and expensive.
The media and polling
I am loath to blame the media, but in this case there may be some justification: the media’s treatment and use of polling results is awful.
Explaining polling is nerdy. The media may report on a poll’s margin of error, but they never explain it.
I often ask people what they think ‘margin of error’ means. A sample of replies:
5% of the results are wrong
The results can be off by as much as 5%
Anything within 5% is a statistical tie
We are confident the answer is within 5% of the result
NONE of these is correct. It seems people often forget the ‘plus or minus’ part of the stated error.
The media hardly ever states the confidence levels.
The media, due to the economics of the news industry, wants polling done cheaply. They will use samples that exclude the costly cell phone-only homes.
The media, due to the nature of news (‘if it bleeds, it leads’), wants the horse race. They need the excitement. They simply can’t run a headline or a report that says “the race may or may not be tied; we don’t really know, due to uncertainty.”
The bottom line is that some media outlets aren’t concerned with accuracy or nuance. Others are cheap and include only landline homes, excluding a significant part of the population.
In summary, the media presents results with a level of certainty that doesn’t exist. For example, a presidential candidate is shown to have a 2-point move from last week’s polling numbers, and the poll has a ±5% margin of error. In this scenario, the number is bumping around within the range we would expect; it is nothing more than expected sampling error.
However, some media outlets would run the headline “Candidate X Surges 2% in Newest Poll.”
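To show how cheaply sampling noise alone manufactures that headline, here is a toy simulation. The numbers are assumptions: a candidate whose true support is frozen at 50%, and weekly polls of 1,000 respondents. Every “move” it prints is pure sampling error.

```python
import random

TRUE_SUPPORT, N = 0.50, 1000     # assumed: static true support, weekly polls of 1,000

def weekly_poll() -> float:
    """One weekly poll: the share of N random respondents backing the candidate."""
    return sum(random.random() < TRUE_SUPPORT for _ in range(N)) / N

last = weekly_poll()
for week in range(1, 9):
    this = weekly_poll()
    print(f"Week {week}: {this:.1%} ({(this - last) * 100:+.1f} pts vs last week)")
    last = this
# 'Surges' of 1-3 points routinely appear even though nothing changed.
```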
Lastly, don’t forget purposeful manipulation.
Let’s face it, there are a lot of political operatives and media outlets playing games with polls.
Only selected results are released
Questions are purposely written to shade responses
Awful samples (self-selected) are used and passed off as actual research
Results are presented with a certainty that doesn’t exist
These political manipulators understand the powerful effect other people’s behavior has on voters. Everyone loves a winner, and the herd mentality takes over.
“Candidate X Surges 2% in Newest Poll” is most likely spin from a political operative.
How to read a poll
So as a consumer of a poll, there are some things you can do to increase your understanding of what a poll says and doesn’t say.
Step 1 – Understand Poll Methodology
Before you read a poll’s results, read the methodology.
The methodology should tell you how the pollster conducted the poll.
Who sponsored the survey and who conducted the survey,
Exact wording of questions and response items,
Definition of population under study,
Dates of data collection,
Description of sampling frames, including mention of any segment not covered by design,
Name of sample supplier,
Methods used to recruit the panel or participants, if sample from pre-recruited panel or pool,
Description of the Sample Design,
Methods or modes used to administer survey and languages used,
Sample Sizes,
A description of how weights were calculated,
Procedures for managing membership, participation and attrition of panel, if panel used,
Methods of interviewer training, supervision and monitoring if interviews used,
Details about screening procedures,
Any relevant stimuli, such as visual or sensory exhibits or show cards,
Details of any strategies used to help gain cooperation (e.g., advance contact, compensation or incentives, refusal conversion contacts),
Procedures undertaken to ensure data quality, if any (e.g., re-contacts to verify information),
Summaries of the disposition of study-specific sample records so that response rates for probability samples and participation rates for non-probability samples can be computed,
The unweighted sample size on which one or more reported subgroup estimates are based, and
Specifications adequate for replication of indices or statistical modeling included in research reports.
Make no mistake, seldom does any political pollster release all of this information (academic and government surveys often will), but there are some minimal, critical things to consider:
Sample Size – sample size drives margin of error. The larger the sample, the smaller the MoE.
Definition of the people being studied – registered voters versus likely voters?
Sample Frame – how is the pollster defining who has a chance to be polled? Is past voting behavior a prerequisite? Is it simply registered voters? Landline only? A combination? Does the sample frame match the definition of the population as closely as possible? Who is left out?
Mode(s) used to administer the survey – telephone (and what type), Internet, door to door? A combination, and if so, in what proportions?
Is the poll weighted? If so, weighted to what model/universe?
What is the Margin of Error?
Step 2 – Understand the Polling Demographics
After finding this information, the next thing to look at is the demographics of the poll. Do things look correct and in proportion?
If you don’t know what the proportions should be for the population you are studying, results presented could be biased.
Due to the importance of partisanship in our political system, take a close look at the partisan breakdown.
If things look off in the demographics, be cautious and skeptical.
Are you looking at weighted numbers? If so, what assumptions are built into the weights?
Step 3 – Look at the Polling Questions
Are the polling questions clear? Are they loaded with explosive or leading language?
Are the polling questions not disclosed?
Step 4 – Look at the Polling Credibility Items
What is disclosed?
If NOTHING is disclosed, stop reading the poll or press story. You’ll likely get the same information from reddit.
Does the poll pass the smell test?
Step 5 – Read All Polling Results with Skepticism
Finally, if everything so far passes the smell test, look at the polling results and UNDERSTAND that any number presented as a finding has a range of correct answers. I find it helpful to restate a result to remind myself of the uncertainty inherent in polling.
If a candidate has 40% hard name ID with a 5% MoE –
You can state this mentally: “Candidate X is known by 35%-45% of the population studied, AND 55%-65% of the population studied doesn’t recognize him/her. PS – THERE IS STILL A CHANCE THIS IS WRONG.”
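If you want to make the restatement a habit, here is a tiny, hypothetical helper; the function name and wording are mine, not any standard tool’s.

```python
def restate(label: str, value: float, moe: float) -> str:
    """Restate a poll topline as the range implied by its margin of error."""
    lo, hi = value - moe, value + moe
    return (f"{label}: somewhere between {lo:.0f}% and {hi:.0f}% "
            f"(at a 95% confidence level, so there is still a chance this is wrong)")

print(restate("Candidate X hard name ID", 40, 5))
# Candidate X hard name ID: somewhere between 35% and 45% ...
```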
Here is the bottom line on Reading Polls:
Remember, the quality of the polling data will be no better than the most error-prone features of the survey.
All polling, when making inferences about a population, contains inherent uncertainty due to fundamental math.
Polling well is difficult and expensive.
Always ask yourself, is a political operative attempting to manipulate you?
Yes, the Politico article in some ways scooped a theme I have been working on for some time on this blog. In the past, I have been exploring the formation of political environments and asking, “Do Campaigns Really Matter?”
Time for Change Model – Predicting POTUS elections
There are many reasons for developing and using models. Often models are used to present a hypothesis in a clear manner. We argue about models, back test them, refine them, and then use them to predict outcomes.
One of the most interesting models of presidential elections is Alan Abramowitz’s Time for Change model, based on what are now referred to as the campaign fundamentals. Abramowitz has since revised his model, and we will look at both versions.
The first Abramowitz model was:
PV = 47.3 + (.107 * NETAPP) + (.541 * Q2GDP) + (4.4 * TERM1INC)
PV stands for the predicted share of the major party vote going to the incumbent party’s candidate
NETAPP stands for the incumbent president’s net approval rating (approval minus disapproval) in the final Gallup poll in June
Q2GDP stands for the annualized growth rate of real GDP in the second quarter of the election year, and
TERM1INC stands for the presence or absence of a first-term incumbent in the race
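The basic model is simple enough to code up as a sanity check. This sketch implements only the basic version printed above; the revised model discussed below adds a POLARIZATION term whose coefficient is not given in this post, so it is omitted here. The inputs are purely illustrative.

```python
def time_for_change_basic(netapp: float, q2gdp: float, term1inc: int) -> float:
    """Abramowitz's basic Time for Change model, as printed above.

    netapp:   incumbent president's net approval (approval - disapproval), late June
    q2gdp:    annualized real GDP growth rate in Q2 of the election year
    term1inc: 1 if a first-term incumbent is running, else 0
    """
    return 47.3 + 0.107 * netapp + 0.541 * q2gdp + 4.4 * term1inc

# Hypothetical inputs, not a forecast: net approval of +4, Q2 GDP growth of 2.0%,
# and a first-term incumbent on the ballot.
pv = time_for_change_basic(netapp=4, q2gdp=2.0, term1inc=1)
print(f"Predicted major party vote share: {pv:.1f}%")  # 53.2%
```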
“This basic model has correctly predicted the winner of the popular vote in the last 5 presidential elections with an average error of 2 percentage points.” -Abramowitz
So, why change the model?
Because in the last 4 presidential elections, the basic model overestimated the winning candidate’s share of the vote. “This suggests that the growing partisan polarization is resulting in a decreased advantage for candidates favored by election fundamentals including first term incumbents.” – Abramowitz
POLARIZATION – takes on the value of 1 when there is a first-term incumbent running, or in open-seat elections when the incumbent president has a net approval rating > 0; it takes on the value of -1 when there is no first-term incumbent and the incumbent president has a net approval rating < 0.
“Adding the Polarization correction to the model substantially improves its overall accuracy and explanatory power” – Abramowitz
FRIENDLY REMINDER: The outcome is determined by the electoral college, not the popular vote predicted by this model.
(What is surprising for most in conservative circles is just how often the President’s net approval rating has been positive during his two terms. His average approval rating from the beginning of his first term to the writing of this post is 47%.)
Q2GDP (the annualized growth rate of real GDP in the second quarter of the election year): the number obviously has not been released yet, but we can look at the trends.
GDP (% change from preceding period in real gross domestic product):

2013: Q1 2.7, Q2 1.8, Q3 4.5, Q4 3.5
2014: Q1 -2.1, Q2 4.6, Q3 5.0, Q4 2.2
2015: Q1 -0.2, Q2 TBD
TERM1INC – We know there is no incumbent in this election, so we KNOW this variable is 0.
POLARIZATION – We assume President Obama will have a net positive approval rating, so the variable is set to 1.
Thought Exercise
While the model specifically states the GDP needs to be from Q2 of the election year (2016), as nerds we can have some fun.
If we were to perform the calculation now simulating an election this year with the following variables:
using the current net approval rating of +3,
the average GDP change over Obama’s term, +2.175% (using the -0.2 would be cruel),
TERM1INC =0, and
POLARIZATION = 1,
we calculate the incumbent (Democratic) party’s predicted vote share to be 45.8%.
Variables:
NETAPP = 3.00
Q2GDP = 2.18
TERM1INC = 0 (presence or absence of a first-term incumbent: 1 = incumbent running, 0 = no incumbent)
POLARIZATION = 1 (1 if a first-term incumbent is running or, in an open seat, the incumbent’s net approval is > 0; -1 if there is no first-term incumbent and the incumbent’s net approval is < 0)
As you can see, the model weights GDP much more heavily than the net approval rating (by a factor of roughly 5x), but the power of incumbency is considerable.
The model provides the incumbent party with a base of 46.9%, then adjusts for Net Approval, then adjusts for GDP, then adjusts for incumbency advantage/incumbency fatigue.
For example, if President Obama is at +8 net approval, the GDP change in Q2 2016 would need to be +8% for the incumbent Democratic party to break 50% with no incumbent running. (Note: TERM1INC stays at 0.)
Admittedly, it is too early to use a model that explicitly calls for Q2 of the election year, but we can clearly remember Carville’s “It’s the economy, stupid!”
If you believe the model, the next 12 months’ events are critical in determining the 2016 outcome, and there is little the 40 people running for President can do about it, except drive down the net approval rating of the President. Get ready!
Ramifications for Local Elections
As we observed in my prior posts, Politico’s article, and this exploration of Alan Abramowitz’s forecasting model, the central thesis is this: the political environment is formed for success or failure well before any candidate announces a run for office. Campaigns do not change political environments; rather, they are a product of them.
These macro fundamentals are largely out of the control of Presidential candidates and even more so out of the control of state and local candidates.
However, local political environments can be shaped around local issues with a dedicated and consistent effort. Therefore, I’ll say it again: attention, all interest groups and political actors interested in the upcoming local elections: to be strategic in local elections, the time to form the political environment is a year before the election, not during the six weeks of campaigning before election day.
I’ve been thinking about the amount of effort required to have an opinion.
What drove this train of thought: I was having a conversation with a subject matter expert, and I voiced my opinion.
He replied, “That is certainly an opinion, but to have an accurate opinion, you need to do a lot more work.”
How true.
The amount of effort required to have an opinion is zero. My 11-year-old son will have an opinion about anything you ask him. Trust me: A.N.Y.T.H.I.N.G.
The amount of effort to have an accurate opinion is a lot more than you may realize.
“Sometimes it is the cursedly clear and unwelcomed set of answers provided by straight thinking that makes us mental slackers.” – Robert Cialdini
To be a successful analytical thinker, you must be willing to consider competing ideas. Sometimes you may not welcome those competing ideas. What if they are valid?
Analytical thinking is difficult and requires effort, but attempting to identify and mitigate biases may be more difficult.
Anyone can have an opinion, but to have an informed opinion, the effort is considerable.
Biggest Change
One of the biggest changes in how I approach a problem or analysis has been the effort to move from binary, black-and-white thinking to probabilistic thinking.
For example, I try to no longer say silly things such as “Candidate A will not win that campaign.” That is simply not a true, logical statement. Candidate A, by being on the ballot, has some probability of winning that race. The probability may be extremely low, but it is not zero.
“Candidate A’s chances of winning that race are below 10%” is a much better way of expressing my thinking.
When one starts thinking in probabilistic terms, one’s entire perspective changes. Things once dismissed as impossible become remotely possible. Things that are 51% certain are no longer “certain.”
Conclusion
One becomes a clearer thinker when one becomes aware of the probability of being wrong; that awareness forces one to put in the effort required to arrive at a better analysis or opinion.
Bottom line: I think the political scene needs more humility in its thinking.
An astute listener to the Ward Scott Files, a radio show that deals with political strategy and polling, asks, “What causes bumps in polling?”
Bumps in Polling
Often after a candidate announces they are running for President, or after a major party’s political convention or some other major event, we will observe a bump in polling numbers. In most cases, if we wait several weeks, the bump will disappear.
What are we actually observing with these “bumps” in polling numbers?
In an attempt to further answer the question, I came across a paper, “The Mythical Swing Voter” by Andrew Gelman, Sharad Goel, Douglas Rivers, and David Rothschild of Columbia University, Stanford University and Microsoft Research.
“Political scientists have debated whether swings in the polls are a response to campaign events or are merely reversions to predictable positions as voters become more informed about the candidates.” (page 2)
The paper follows an interesting methodology: the authors conducted 750,148 interviews with 345,858 unique respondents on the Xbox gaming platform during the 45 days preceding the 2012 presidential election, creating a large online panel for studying shifts.
What is interesting is that, with demographic adjustments, the data reproduced the swings found in media polls during the 2012 campaign. HOWEVER, if the adjustment uses not just demographics but also partisanship and ideology, “most of the apparent swing in voter intention disappear.”
What the authors found was selection bias playing a role in the “bumps” candidates receive after a big event: certain groups of people become much more (or less) likely to participate in a survey, and if we estimate voters using items other than demographics, most of the apparent swings turn out to be sample artifacts, not actual change.
Only a small share of individuals (3%) switched their support from one candidate to another.
“We estimate that in fact only 0.5% of individuals switched from Obama to Romney in the weeks around the first debate, with 0.2% switching from Romney to Obama.”
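Here is a toy simulation of the selection-bias mechanism the paper describes (my own sketch, not the paper’s actual method). True support never changes, but after a rough debate one party’s supporters become a bit less likely to answer polls: the raw topline swings, while re-weighting responses by party ID keeps it roughly stable.

```python
import random

random.seed(1)

# Assumed electorate (illustrative numbers): 45% Dem, 45% Rep, 10% independent.
PARTY_SHARE = {"D": 0.45, "R": 0.45, "I": 0.10}
SUPPORT_DEM = {"D": 0.95, "R": 0.05, "I": 0.50}   # P(backs the Democrat), held constant

def poll(response_rate: dict, n: int = 5000) -> tuple:
    """Return (raw Dem share, party-weighted Dem share) for one simulated poll."""
    responses = {"D": [], "R": [], "I": []}
    for _ in range(n):
        party = random.choices(list(PARTY_SHARE), weights=list(PARTY_SHARE.values()))[0]
        if random.random() < response_rate[party]:        # does this person answer?
            responses[party].append(random.random() < SUPPORT_DEM[party])
    raw = sum(map(sum, responses.values())) / sum(map(len, responses.values()))
    weighted = sum(PARTY_SHARE[p] * (sum(v) / len(v)) for p, v in responses.items())
    return raw, weighted

before = poll({"D": 0.50, "R": 0.50, "I": 0.50})  # equal response rates
after = poll({"D": 0.45, "R": 0.55, "I": 0.50})   # demoralized Dems answer less often
print(f"raw:      {before[0]:.1%} -> {after[0]:.1%}   (phantom swing)")
print(f"weighted: {before[1]:.1%} -> {after[1]:.1%}   (mostly stable)")
```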
Conclusion(s)
The paper’s authors conclude “that vote swings in 2012 were mostly sample artifacts and that real swings were quite small.”
In today’s highly polarized partisan politics, the percentage of mythical swing voters is much smaller than polling would indicate.
Meaning, there is little to no actual bump and few people switching sides; there is mostly selection bias in the polls. “The polls do indeed swing—but it is hard to find people who have actually switched sides.” (page 2)
As the authors put it: “The temptation to over-interpret bumps in election polls can be difficult to resist, so our findings provide a cautionary tale. The existence of a pivotal set of voters, attentively listening to the presidential debates and switching sides is a much more satisfying narrative, both to pollsters and survey researchers, than a small, but persistent, set of sample selection biases.”