52 Reasons Why You Should Not Believe Polling Data For The US Presidential Elections

Photo by W. Eugene Smith

Skewed Polls

November 8, 2016

Allan Stevo

When running for office, practically anyone who speaks to a candidate is trying to convince that candidate of something. I had that experience while running for the US House of Representatives in 2008.

A veteran political consultant who worked on my campaign at the time said to me once: “When someone is trying to convince you of something, always ask yourself, what does this person have to gain by convincing me he is right? What does this person stand to gain by getting me to believe in something I wouldn’t otherwise, on my own, believe in? If you don’t know the answer, don’t trust what you are hearing from that person.”

This suggestion from that consultant has been a useful mantra in encouraging healthy skepticism and disbelief as a default option unless someone proves that they and their information are otherwise trustworthy. Public life should always tend to the default of skepticism. Private life, where you are surrounded by those who have earned your trust, is best treated as quite the opposite, with a generosity of trust.

In the public polling industry – the rather imprecise polls released to the public, as opposed to the quite precise polls collected, studied, and maintained by a campaign in complete secrecy as proprietary information – someone is often putting a spin on the information the public is receiving.

While you may not always be certain what that spin is, I write this to ask you to take all public polling with a grain of salt. Never would the low quality polling data you read about in newspapers be presented to a paying candidate by a polling company. A paid customer would receive much more reliable information, with the biases of the polling clearly shared. Using a few examples from the tremendous amount of public polling data I’ve come across this presidential election cycle, I’d like to encourage you not to take any of it at face value.

1)There Is Too Much Of A Discrepancy Among The National Polls For Them To Be Meaningfully Compared

As of the start of this writing a week ago, there was a discrepancy of roughly 24 points among major polls in how Clinton and Trump each fare with Independents. In polls with a stated margin of error of 3% or 4%, and no other inconsistency identified in the methodology, a gap that large means something is amiss. This is a huge red flag.

Polls that diverge this widely must not be compared to each other as if they measured the same thing. And this data on how Independents rate the candidates is vitally important in an election, for the moderate and independent voters who remain undecided at this late hour in a long election cycle are the ones who decide elections. There is one list of polls that indicates Hillary Clinton is doing well and another list that indicates Donald Trump is doing well.

In terms of sheer quantity, more polls favor Hillary, but as I will point out later, that figure does not help in determining popular opinion and, quite to the contrary, can be rather deceptive. Since I did not collect the data myself, I tend to believe neither the polls that show a Trump win nor the polls that show a Hillary win. I would have to agree with Trump's insistence that there is a media bias when I see that the polls showing Hillary Clinton in a more favorable light are widely reported in the corporate media, while the polls showing Donald Trump in a favorable light are mentioned lightly or not at all.

Additionally, when the methodology of a poll is called into question in the corporate media, it tends to be only the polls that favor Trump that have their methodologies challenged. Instead, the methodology of every poll should be called into question every time its data is used. That should be standard practice any time data is utilized by journalists.

I recognize the challenge of being circumspect toward data. It can be hard to read through a sometimes lengthy methodology report and simultaneously keep in mind that all such data is inaccurate until proven otherwise. It can be even harder to read that way when the data shares the perspective of the writer seeking to utilize it. That, however, is when it is most needed. It is harder yet when a journalist is under a tight deadline, as is always the case, harder still when an editor is not likely to challenge the data used, as is often the case, and even harder when a reader is not likely to challenge the use of data significantly enough to cause the writer any discomfort, which is almost always the case.

While the extent to which such data is laxly used by professionals in the media can be understandable, it is not forgivable. What journalists tend to do with polling data is to tell lies on a grand scale. I ask that you, the consumer of their content, no longer fall into the habit of believing those lies. That intentional or careless misuse of data is truly sinister when one considers the impact that mass movements and collective opinion have in the world around us.

With great power comes great responsibility. We each have a responsibility to be better than that, to be better than letting figures in the corporate media effectively tell lies through the careless use of public polling results and other data.

2)The Most Accurate National Poll Is Not Being Reported On

The Investor's Business Daily/TIPP Presidential Tracking Poll, which has been the most accurate national poll over the last three election cycles (12 years), has Donald Trump ahead of Hillary Clinton. The result should really be considered a statistical tie, because it falls within the margin of error. Nate Silver of the well-regarded polling analysis website FiveThirtyEight called it "the most accurate" of 23 national polls in 2012. Yet the IBD/TIPP poll is not even mentioned in the corporate media.

The IBD poll was so accurate in 2008 that it predicted the exact margin by which Barack Obama would beat John McCain: 7.2%. If an accurate presentation of public opinion were what a journalist were seeking, it would baffle the mind for that journalist to pass over this repeatedly accurate poll (which shows Trump ahead) and instead repeatedly cite less accurate polls (like the ABC News/Washington Post Poll) with very different results (a double-digit victory for Clinton).

The likely reason journalists are not using the IBD poll is that it has consistently shown Trump beating Hillary. I don't see why that is a bad thing to report. Perhaps the journalists or editors are letting personal preference get in the way of unbiased reporting, or perhaps it's just the usual difficulty of sticking one's neck out when one's colleagues all insist on something else.

I'm having a hard time coming up with a good reason a journalist seeking to portray public opinion as accurately as possible would avoid mentioning the IBD/TIPP poll. To be fair, once again, a greater number of national polls favor Hillary. That fact is an unhelpful and largely distracting detail when attempting to get to the heart of popular opinion. That a dollar store sells more off-brand toothpastes than the one major national brand it carries does not make the more numerous off-brands the better toothpaste. Nor does averaging their recipes make for a good toothpaste recipe.

That the polls run according to bad methodologies happen to support Hillary doesn't excuse those pollsters, or the journalists quoting them, for using those bad methodologies.

3)Though Plentiful In Number, Most Public Political Polls Are Garbage

Some polls have Hillary up. Some have Trump up. The L.A. Times and Rasmussen polls have generally had Trump up over the past several weeks. The majority of national polls, such as the ABC News/Washington Post, Economist, CBS/New York Times, and Reuters polls, have Hillary up. National polls in which the candidates fall within the margin of error of one another include ABC News, the Economist, and CBS/New York Times.

These polls are generally reported not as statistical ties but as "Clinton likely winner" or "Clinton wins," with other data identifying a Clinton advantage highlighted. All polls, of course, have limitations and are just approximations. That many polls agree in general does not make any individual one of them more accurate.

Popularity has never been enough to make something accurate, or even factual. If one had to be strictly black and white about the matter, it is more often the contrary: that which is popular tends to be wrong. Mark Twain famously wrote, "If you find yourself on the side of the majority, it is time to reform."

It is challenging to go against the current in an industry and push for exacting polling standards. It is even more challenging when doing so produces favorable results for a candidate like Donald Trump, who behaves like a loose cannon, is anti-establishment, and is widely disliked by many in your industry and by many of the journalists and editors who are the consumers of your data.

As in many other areas of life, it is vastly easier to go with the flow and do what is popular. More badly run polls do not make for more accurate polling, only the appearance of independent verification.

4)"Within The Margin Of Error" Often Means The Results Are Meaningless

If the margin of error is 4% and the results show a 3% difference, then it makes no difference who is up in the poll, because no reliable information has been conveyed. Many polls that show Clinton leading by less than the margin of error should be ignored. It can be hard, though, to look at a poll and ignore the bias created by the fact that someone appears to be "ahead." A 3.4% margin of error, for example, means the pollsters are 95% certain that their results are within 3.4 points of what the actual result would be at that moment. There is no "ahead" when results fall within the margin of error.
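
For readers who want to see where a number like 3.4% comes from, here is a minimal sketch in Python, assuming simple random sampling and the conventional 95% confidence level. The 1,000-person sample and the 48/45 split below are invented for illustration, not taken from any particular poll.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate margin of error for a proportion p estimated
    from a simple random sample of n respondents (95% confidence)."""
    return z * math.sqrt(p * (1 - p) / n)

# A hypothetical poll of 1,000 likely voters showing a 48% to 45% race.
n = 1000
moe = margin_of_error(0.48, n)   # roughly +/- 3.1 points
lead = 0.48 - 0.45               # a 3-point "lead"

print(f"margin of error: +/- {moe:.1%}")
# Rule of thumb used above: a lead smaller than the margin of error
# conveys no reliable information (the margin on the gap itself is
# roughly twice as wide as the margin on either candidate's number).
print("statistical tie" if lead < moe else "lead outside the margin of error")
```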

5)Outside The Margin Of Error Also May Not Mean A Lot

Like everything else in statistical estimation, there are models, estimates, and personal judgment built into each statistic. The margin of error is meaningful if it is correctly calculated; if not, it too becomes unreliable. There is no truly reliable way to assess how accurate the estimated margin of error is. It is simply, like the rest of this process, a continuous cycle of guess, check, rinse, and repeat.

6)Never Trust The Real Clear Politics Average Or Similar Aggregates Of Bad Data

The Real Clear Politics Average is treated as quite a meaningful number because it is easy to find, free, and easy to cite. It is additionally widely used by journalists, so almost no one is going to challenge a journalist for using a number that has become a standard. However, for that number to be statistically useful, one must be able to make an apples-to-apples comparison of the underlying data. There are Real Credibility Problems here.

Where methodologies are not similar, averaging the results provides little of statistical value. The problem with averaging the dissimilar is obvious. The problem with averaging the merely similar should also be kept in mind: similar is generally not good enough. If the polls are not identical, then the question becomes, what is the proper way to aggregate the results?

Some aggregates simply average polls with identical, similar, and dissimilar methodologies together and report them as one big number. That number has little real meaning, and its very existence is incredibly deceptive.

As a testament to how deceptive it is: how many times have you heard a journalist cite an aggregate number without even knowing how that number is generated? If you are a political junkie, during a contentious national election you probably come across these polling aggregates three to five times a day without seeking them out. This averaged polling number is essentially worthless as a barometer of voter sentiment.

Other aggregates don't just average the results, but go through a more sophisticated process intended to account for the many factors that the included polls do not share in common. Aggregates such as these will often have curators who weight different data differently. There are no "rules" or "truths" to this process of weighting data. From my perspective, this simply introduces a greater level of human error into the process. Many of the experts would disagree.
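
As a toy illustration of why the aggregation method matters, here is a minimal sketch; the three poll results and the curator's weights below are invented for the example and do not correspond to any real polls or to any real aggregator's formula.

```python
# Hypothetical poll results: (Clinton share, Trump share)
polls = [
    (0.47, 0.43),   # e.g. a live-caller poll
    (0.44, 0.45),   # e.g. an online panel
    (0.50, 0.39),   # e.g. a landline-only robocall poll
]

# Naive average: every poll counts equally, regardless of methodology.
naive = [sum(p[i] for p in polls) / len(polls) for i in (0, 1)]

# "Curated" average: an aggregator's judgment call expressed as weights.
# Change the weights and the headline number changes with them.
weights = [0.5, 0.3, 0.2]
curated = [sum(w * p[i] for w, p in zip(weights, polls)) for i in (0, 1)]

print(f"naive average:   Clinton {naive[0]:.1%}, Trump {naive[1]:.1%}")
print(f"curated average: Clinton {curated[0]:.1%}, Trump {curated[1]:.1%}")
```

Neither number is more "true" than the other; both inherit whatever flaws the underlying polls have, plus the aggregator's assumptions on top.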

The experts are all right until they are no longer right. Even after they are no longer right, they will still talk as if they had always been right. A great truth of running campaigns is that there is always guesswork in even the highest quality polling estimates, and if you put too much faith in that one aspect of a campaign, you are sure to regret not leveraging your resources more judiciously.

It's helpful to watch out for "professional biases," or "professional deformations" as some might call them: there are many moving parts to a campaign, and the pollsters always seem to encourage more polling as the way to secure a win, the ad buyers say the same about TV and radio, the direct mail vendors say the same about direct mail, and the robocallers pitch the robocalls the same way.

If you are a hammer, everything tends to look like a nail. None of it is a panacea. If you really must take the data into account, study the methodology thoroughly, read criticism and peer review of each methodology, seek out contrasting perspectives, and take a huge step back to consider the polling data as one item in your tool belt for understanding what's going on. Consider it a very flawed and limited item at that, hardly a substitute for a personal relationship with every voter, but an almost necessary evil that provides surprisingly little actionable data for the amount of hot air and ink spent presenting so much of it as unimpeachable truth.

If you aren't prepared to do all that, something I barely have time for myself, I would prefer you dismiss the polling data altogether. Used without care, it is an unfortunately effective way to end up with a distorted view of what is happening, and it is used that carelessly many times a day.

7)Many Sophisticated Campaigns Realize The Limitations Of Polling Data And Use It Accordingly

Leaked emails have been quite a big deal this election. Having worked many campaigns, I didn’t need to see any leaked emails to tell you the truth of this. It is absolutely common sense that a campaign of veteran politicians and strategists would do all they can to discourage the other side and bolster their own.

Of course such a campaign uses every resource at its disposal to win an election for its candidate. Anyone doubting that is being naive. You, the decided Hillary voter, you the decided Trump voter, you the undecided voter are being bombarded right now with messages that have been, in some cases, years in the planning. The bombardment with polling data right now is just one part of that strategy. Accuracy is not their goal. Winning an election is the goal. He who confuses accurate reporting with self-interested marketing is letting himself be deceived.

8)Intentionally Skewing Polls Can Be One Of Those Resources Relied On By A Campaign

If a campaign can push a pollster to change a few factors in the polling model, either through a polite one-on-one discussion or through public attack, that campaign can skew the data a little. Everyone is subject to public pressure. Some candidates would probably like to simply hand pollsters their own polling data that says what they want. Real Clear Politics, as an example, went so far as to include Republican US Senator Mitch McConnell's internal polling data in its Kentucky statewide average. I already think public polling data has a bias to it. If a candidate committee is paying for that data and cherry-picking what it hands over, there is likely to be even more bias in the process.

9)The Complete Methodology Can Be So Hard To Find

In print you never see even the most basic details of a poll's methodology reported. Online it is relatively easy to find the methodology reports, though you will seldom see them linked from the reported poll results. You can of course take matters into your own hands by Googling the name of the poll and the word "methodology," and you'll almost always find the report. The reports always leave me with unanswered questions.

If I am doing research for a media organization or have a professional reason to learn more, I can get further questions answered by emailing or telephoning the pollsters. That's not something that should be expected of the average voter. If I'm not researching the data for professional reasons, as would be the case for the average voter, it's less likely I'll get a response. The incompleteness of the methodology reports generally leads me to conclude that a polling organization is not interested in being entirely transparent about its methods.

It is almost always the case that the reports feel incomplete. Methodology reports leave much to be desired and they are certainly not complete enough to allow for the experiment to be repeatable, which is a necessary step in the scientific process. More complete methodology reports would go a long way in providing greater transparency in public polling and demonstrating greater trustworthiness.

10)Oversampling Is An Easy And Common Way To Skew A Poll

This is a nuts-and-bolts concept of polling here. Polling is an attempt to figure out what a large group of people are thinking. It is an estimate. Pollsters take a small group of people and figure out ways for that small group to be a fairly accurate representation of a larger group.

That small group is called a sample. Your poll results can only be as reliable as your sampling. If the electorate is about 30% college educated, then a more accurate poll will take that into account in the sampling, especially if that is a significant demographic issue that has a significant impact on how one votes.

It appears that women are more likely to vote for Hillary Clinton, so a more accurate poll would take that into account. But since polling is more art than science, it is a judgment call on the part of the pollster: how will the pollster estimate female turnout so that the sample is a true representation? It appears that those with college degrees are more likely to vote for Hillary. Those who own their own businesses are more likely to vote for Trump. The wealthy are more likely to vote for Hillary. Those in academia and journalism are more likely to vote for Hillary. Those in the middle class are more likely to vote for Trump.

All these aspects can be handled carelessly, making an outcome less accurate, or handled carefully, with attention to accurate sampling, making it more accurate. These same details can also be handled carefully with attention to intentionally inaccurate sampling, to skew an outcome on purpose. This is also known as dishonesty. The practice of changing a sample so that it reflects a higher proportion of a meaningful segment of a population is called oversampling.

There is significant, repeated oversampling, to the point that the intent of the public pollsters seems to be to make Trump appear less popular than he actually is. This, along with the other limiting factors of polling, should mean that no public poll mentioned in the corporate media should be trusted by anyone at present.

As a blatant example of oversampling, the Sunday, October 23, 2016 ABC News/Washington Post Poll indicates in its methodology: "Partisan divisions are 36-27-31 percentage, Democrats – Republicans – Independents." Democrats have never enjoyed a 9-point advantage at any time in the quarter century that the Pew Research Center has been recording party identification among American voters.

The Pew Research Center currently puts the split at 33-29-34: 33% of registered voters identifying as Democrats, 29% as Republicans, and 34% as Independents. In this poll, Democrats are oversampled by 3 points, Republicans undersampled by 2 points, and Independents undersampled by 3 points, widening the Democratic advantage over Republicans from 4 points to 9, with no mention in the methodology report of why the pollsters would use such a skewed partisan mix while presenting the data as an accurate sampling of likely American voters. Not mentioning that is dishonest. While the historically inaccurate ABC News/Washington Post Poll uses a 36-27-31 partisan division, the highly accurate IBD/TIPP poll uses 37.5-34.25-28.25. These are very different numbers and produce very different results. It's important for the consumer of the data to recognize how much conjecture goes into assembling polling methods.
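
To make concrete how much that one choice can move a headline number, here is a minimal sketch. Only the two partisan splits (36-27-31 and 33-29-34) come from the polls discussed above; the candidate-support-by-party figures are invented for illustration.

```python
# Hypothetical support for each candidate within each party group.
support = {
    "Democrat":    {"clinton": 0.88, "trump": 0.06},
    "Republican":  {"clinton": 0.07, "trump": 0.85},
    "Independent": {"clinton": 0.40, "trump": 0.42},
}

def topline(split):
    """Weight within-party support by an assumed partisan split (D, R, I)."""
    groups = dict(zip(["Democrat", "Republican", "Independent"], split))
    return {
        cand: sum(share * support[g][cand] for g, share in groups.items())
        for cand in ("clinton", "trump")
    }

splits = {
    "ABC/WaPo split (36-27-31)": (0.36, 0.27, 0.31),
    "Pew split (33-29-34)":      (0.33, 0.29, 0.34),
}

for name, split in splits.items():
    t = topline(split)
    print(f"{name}: Clinton {t['clinton']:.1%}, Trump {t['trump']:.1%}, "
          f"margin {t['clinton'] - t['trump']:+.1%}")
```

With these invented within-group numbers, the exact same electorate yields roughly a 7.8-point Clinton margin under the 36-27-31 split and roughly a 3.8-point margin under the 33-29-34 split; the choice of partisan mix alone moves the headline by about four points.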

That is only one data point that the pollsters clearly skewed: party affiliation. The methodology makes no mention of the potentially complicating factors of gender, race, age, income, or language in the sampling, though those questions were also asked in the ABC News/Washington Post poll, so the data was clearly known to the pollsters.

Paying attention to these factors in creating the sample would have meant considerably more work for the pollsters in order for the results to be accurate, and of course considerably more money. Mentioning the information in the methodology report without correcting for it would have made the poll look less accurate to the few people who read such reports. Oversampling is an easy way to skew poll results. Intentional or not, if the sample is inaccurate, the poll must also be inaccurate.

11)The Way To Know If A Poll Is Oversampled Is Through Conjecture And Opinion

There's a bit of a catch-22 here, not a large one, but a concern to the astute observer. You need to know what factors will affect your polling model in order to build a good polling model, and that knowledge comes from polling and from historical records created through polling. So to make the accurate assumptions needed for an accurate poll, you essentially need an accurate poll already. This is not as hard as it seems, but it is not a perfect process. There is conjecture and opinion involved. Like anything else involving conjecture and opinion, it probably should not be presented as certainty, as so much polling data is.

12)Pollsters With Some Level Of Experience May Forget That There Is Opinion Involved

Polling is accurate until it isn’t. Some pollsters who have some polling experience tend to believe their opinion is so well informed as to essentially be fact. This is an error made by many who use statistics frequently throughout the course of the day.

You can blind others with numbers and you can, if not careful, ultimately blind yourself with numbers. That's a professional risk. That you believe your own BS doesn't change the fact that it is still BS. Opinions are exactly that, opinions, and even the most well-researched, well-defended opinion can be wrong.

Allowing for that potential for doubt gets harder for many people the more expert they become in a field. Many pollsters, once they have skin in the game and enough on-the-job experience, see their models fall flat often enough that they regain their humility about the topic.

There is nothing like telling a numbers guy that his numbers aren't enough because the foundation of his model is merely his opinion. You inevitably look like the biggest knucklehead to him. But rather than speaking from a position of ignorance, as the data guy assumes, you are in fact being wise and proper in treating all data circumspectly. The more campaigns you are around, the more evident this becomes.

13)Beware Of The Data Blind

To someone who is certain of the great certainty of his data, the brightness of that data can be blinding. It can crowd out many other worthwhile ways of perceiving the world that may not be data driven, or that may be driven by other data built on entirely different models and assumptions. One finds illumination in the sun, but that doesn't make it sensible to stare into it. Data can provide some illumination, but that doesn't mean one should spend the entire day staring into these models seeking the ultimate source of truth in the world. Those who are blinded by the data will lead you astray if you let them.

14)Model Is A Synonym For Assumption(s)

“When you assume, you make an ass out of you and me,” pointed out British comedian Benny Hill, and likely many others before him. It’s a joking play on the word assumption of course but there’s some truth to it too. An over-reliance on assuming tends to produce a highly inaccurate view of the object of the assumption.

It's important to keep in mind that a model is just an assumption, or a collection of assumptions. Some of those assumptions may be better founded than others, but they ultimately remain assumptions. Also synonymous with assumption are the words hypothesis, theory, premise, supposition, and postulate, as well as the word algorithm. While all of these words should be mindfully watched for, the term algorithm deserves special mention.

Algorithm does not always mean assumption, but it is an especially tricky word because of the dual nature of its common usage. In both senses it means "a series of steps followed." There are algorithms like Euclid's algorithm for the greatest common divisor of two numbers, where the word means "well-established steps, tested as effective over centuries, for arriving at a mathematical solution." And there are algorithms that mean "some steps I came up with last night for a computer to follow in order to process some data and produce some other data that I theorize will somewhat accurately give greater meaning to the first set."

That both concepts are talked about with the same word and the same level of authority glosses over the fact that they are very different things, even though both still mean "a series of steps followed." It is generally safe to assume that when one hears the word "algorithm," one can replace it in one's head with "assumption" and lose very little meaning. Beware of those prone to using the word algorithm, for they are the most likely to be data blind. They are starting to believe their own BS. As sincere as their belief in that BS is, their sincerity should not be enough to get you to buy into it too.

The word algorithm, in this usage, means assumption. How foolish it would sound if stripped of the technician's jargon. You wouldn't be told, "I will put the data into our statistical algorithm and will be able to tell you with certainty who will win the election," but instead, "I will take the information that was imperfectly collected in interviews and then transcribed so this glorified calculator could read it, and now I will use the smoke-and-mirrors field of assumptions to provide you with a generally accurate fortune telling, so you can know how the future will look. All I have to do first is tell this glorified calculator to assume 50 assumptions that many assumers over the years have made up when they had nothing better to do, and another 35 assumptions that I just sort of felt were mostly right. But don't worry, our assumptions are something you can really trust."

Nope.

15)Some Polls Don’t Even Use Cell Phones

Some polls don't use cell phones at all, relying entirely on landlines, partly because of the added cost and difficulty involved. As you can imagine from your own life and the lives of many around you, that does not produce a reliable sampling of the voting public. The IBD/TIPP poll, for example, uses approximately 65% cell phone contact versus 35% landline contact, and only live callers. Some polls use in-person interviews. Some use robocalls. This is an important confounding factor that many polls do not carefully assess. Many people with cell phones are unwilling to pick up for an unknown or unrecognized number, unlike the way one would behave on a landline. Getting hold of a person to poll has become a challenge, and it's not clear what representation is lost from a sample as people whose cell phone is their primary or only phone become harder to contact. Though polling is not a nascent field, we are still in the middle of the transition toward cell phones, and their impact on national polling is still being worked out.
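
One way to see what is lost is to look at what happens when a pollster tries to weight a landline-heavy sample back into shape. Here is a minimal sketch using the Kish effective-sample-size approximation; the 700/300 raw sample is invented, and the 35/65 landline-to-cell target is simply borrowed from the IBD/TIPP mix mentioned above as a stand-in assumption.

```python
import math

def effective_sample_size(weights):
    """Kish approximation: weighting a sample shrinks its effective size."""
    return sum(weights) ** 2 / sum(w * w for w in weights)

# Hypothetical raw sample: 700 landline and 300 cell respondents,
# weighted to match an assumed 35% landline / 65% cell electorate.
n_landline, n_cell, n = 700, 300, 1000
w_landline = 0.35 / (n_landline / n)   # 0.5   (downweight landlines)
w_cell     = 0.65 / (n_cell / n)       # ~2.17 (upweight cell respondents)

weights = [w_landline] * n_landline + [w_cell] * n_cell
n_eff = effective_sample_size(weights)                    # ~632 of 1000

moe_nominal = 1.96 * math.sqrt(0.25 / n)
moe_actual  = 1.96 * math.sqrt(0.25 / n_eff)
print(f"effective sample size: {n_eff:.0f} of {n}")
print(f"margin of error: nominal +/- {moe_nominal:.1%}, "
      f"after weighting +/- {moe_actual:.1%}")
```

A poll's advertised margin of error is often quoted off the raw sample size; the weighting needed to patch over a bad phone mix quietly widens the real uncertainty.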

16)There Are Many Linguistic Variables To Polling

There are many other linguistic variables, unrelated to sampling, that must be dealt with if polling data is to be as accurate as possible. If I do not conduct a poll myself, many variables come into play: how a question was asked, how it was phrased, what tone was used, how the interviewer was coached, in what order the answers were offered and in what tone, and what the bias of the interviewer was. I have trained people for and conducted push polls (in which the pollster pursues a desired outcome and ultimately sways the voter using some of the techniques mentioned here), and I have trained people for and conducted polls in which the interviewers had close to no bias and attention was paid to making the results more scientific (though still far from perfect).

17)Some People Lie When Responding To Polls

There are many ways to handle a pollster’s questions. One of them is to be completely open and honest with a total stranger calling from an unknown number bothering you as you’re in the middle of something. The other option is to be less than truthful.

People are increasingly lying in polls as it becomes clear to many in society that databases of virtually everything are being collected and stored for later use. I call it the longevity disincentive: whatever you say to a pollster will live forever.

Polls that you answered one afternoon in July 2002 and have long forgotten about remain tied to your name and are part of the data that the two parties and their vendors maintain about you and sell to others. The data, if it was even entered into the computer correctly, is part of the profile that data scientists build of you and may include in models. Anyone from Facebook to your employer to your phone company to your bank may be buying and selling this data and building models to understand you better.

That poll more than a decade ago may be why you get certain phone calls from certain charities or why certain candidates send one issue or demographic specific mailers to you and another mailer to your neighbor. As people become more wary of data collection, and savvy about their privacy, pollsters are having a harder time working accurate data out of an interviewee during an interview. The longevity disincentive is a real effect of consumers growing more savvy about their personal data.

Identity theft has also become increasingly rampant, and the concern that personal data will be misused by a bad actor factors into the longevity disincentive, as does the voluntary posting of personal data on social networks. You do not know how any of this might come back to harm you.

It only takes one such experience coming back to bite a person for that person to realize that pollsters collect and sell the personal information that person helped them collect. This may cause a person to adjust their behavior in light of the longevity of their information, or it may cause them to shrug their shoulders and become ambivalent. The longevity disincentive and the common desire to simply lie to protect one's privacy are real issues that are hard to work into a model with any accuracy. The only way to measure the issue would be to interview people, which is itself the problem.

18)Social Bias: The Bradley Effect, The Shy Tory Effect, And The Shy Trump Effect

There is a strong cultural bias for and against each candidate, making privacy a higher priority for some voters this election and making inaccurate responses all the more likely. This has been shown and theorized about repeatedly in the past. You've surely experienced it in your own life, either as the subject of others' intolerance for your political opinions or as a source of intolerance when others attempted to share their political opinions with you.

It's a tendency some pollsters are keenly aware of and other pollsters tend to ignore, for it is a very difficult variable to measure.

The Bradley Effect is a term coined after the outcome of a California election was studied. Pollsters, instead of blaming their models, insisted that the electorate must have been more racist than they thought. In 1982, polls for the gubernatorial race showed Los Angeles Mayor Tom Bradley winning; he narrowly lost on Election Day. The theory behind the Bradley Effect is that respondents wanted to appear willing to vote for a black candidate, but when it came down to it, and voters were able to cast their votes in private, they chose not to vote for a black man.

Sounds like a case of pollsters taking the low road and insisting "it's always someone else's fault." The often useful 800-year-old axiom known as Occam's Razor suggests the simplest explanation: the polling models produced inaccurate results because the polling models were off. One of the many reasons they were off could have had to do with race. It could also have had to do with many other factors, like a bad sample.

Finger pointing is accepted by some parents from a child at a very young age, and that child sometimes has a hard time breaking such a reliable way of shedding blame. Though the Bradley Effect feels like a pollster, and perhaps an entire campaign staff, doing exactly that, it still has a valuable place in the discussion of polling as an example of social bias, something that must be watched for and accounted for in a model as best as possible.

Another example, talked about among pollsters in the UK, is the "Shy Tory Effect." Tory is another word for Conservative in UK politics. Some in the UK, aware that saying "I vote Tory" may be unpopular in their social circles, are inclined not to give a pollster that answer, yet they will enter the voting booth and vote Tory. This has a history of happening in much higher numbers than the polls reflect.

Like the Shy Tory Effect, the Shy Trump Effect is a likely concern that must be dealt with if an accurate result is sought. This is a challenge, considering how popular it is to attach negative superlatives in public to the supporters of a candidate who are obviously very widespread. A yard sign purportedly from Virginia, posted on social media, read "Vote Trump – No One Has To Know." It illustrates this social bias at work, addresses it, and builds it into the sign's persuasive technique.

By numerous measures he has run an outstanding campaign, breaking Republican records for total number of small-dollar donors and for primary votes (13.3 million), so clearly his supporters are out there and are numerous. The question is how to identify those supporters through polls when a supporter does not want to be identified as such.

There is a strong social bias against Donald Trump in some circles and against Hillary Clinton in others. The level of social bias against Donald Trump among the wealthy, the highly educated, and those with strong Democratic leanings has been shocking to me. The very way such people carry themselves at such moments is part of the reason there are so many Trump voters: a disconnected elitism holds disproportionate political power and leads to alienation and to the rise of a populist candidate like Trump or Sanders.

I am not even convinced that I will be voting this election, but the level of intolerance on the left this election cycle has astounded me. As I've written before in these pages, it's been a sad thing to watch. Consequently, something called the "Shy Trump Effect" is theorized to be occurring in polls. No one knows how extensive it may be, or even whether it is significant. Some percentage of Trump supporters, however, keep their mouths shut about Trump because they don't want to suffer the intolerance toward free speech and free expression that 2016 has been so full of.

19)Drama Distorts

There is unnecessary drama involved in political polling, and drama tends to distort what the numbers actually say. For YEARS before a national election, the media gives us horse-race-style reporting on who has polled ahead and who has no chance, in a presidential election that will literally be decided by undecided voters in the last 12 hours. Really, then, there isn't a need for a single poll until those last 12 hours, right? That "poll" in the last 12 hours is called an election, and its results are known when the ballots are counted.

The 24-hour news cycle needs something to attract eyeballs, and this near-meaningless horse-race reporting does the job. The ethics of it are questionable, but when have journalists as a whole been overly worried about ethics? A journalist will use a statistic that is entirely inaccurate and will blame the statistician if there are ever negative consequences.

"Big data" and its effect on elections are often talked about at present, but "big drama" would be a more accurate description of the situation. In this environment of big drama, the truth of the data takes a back seat. Statisticians and journalists and pundits and campaigns all gather together and collectively blame each other in a game of plausible deniability in which no one is held accountable. To the contrary, anyone who uses data should be held accountable.

I write this because I’m asking you to join me in holding all of the people using polling data or other data accountable, with the biggest emphasis on those who are using the data to try and convince you or others of something.

20)Polling Methodology Has Limits That Must Be Respected

Dear pollster, what are the limits of this poll? Dear journalist, what are the limits of this poll? Dear pundit, what are the limits of this poll? Dear political consultant, what are the limits of this poll? This question should be asked if one is trying to be responsible. The person might lie to you, but it is incumbent on you to at least ask this question that so few people ever ask about public polling.

Anyone who responds with "How dare you challenge my numbers?" should be dismissed, just as a crooked mechanic who gives you a quote and then that same attitude would be dismissed. A question about the limits of a poll is a very important and realistic question to expect a statistician to answer, and one that is often overlooked in this data-heavy environment.

Data is a lot of smoke and mirrors, and once you see how carelessly polling data is thrown around in support of an argument, it becomes easy to see how much other data is used just as carelessly, from economic data to sociological data to environmental data. All data deserves a circumspect look from everyone who uses it.

Of course the answer given by the statistician about the limits of their methodology is not an answer you have to take at face value. The answer given though can be a very insightful beginning to your inquiry about the matter.

21)Polls Have A Left-leaning Bias

A widespread left-leaning bias exists in polling, according to the scientific journal Nature. If I hadn't read it in Nature, the premier journal of science, I wouldn't write it here. The snarky response would be, "Of course polls have a left-leaning bias; that's because reality has a left-leaning bias." Nature concludes otherwise: public polling in at least 45 countries has been empirically shown, for whatever reason, to skew to the left. Whether you are on the left, the right, or neither, please keep this in mind as you evaluate polling data.

As Ramin Skibba points out in the pages of Nature, the Brexit polls were flawed, the 2014 US midterm election polls were flawed, and so were the 2013 British Columbia polls and the 2015 UK general election polls; all of them predicted the wrong winner.

The author goes on to state: "…pollsters have systematic biases in their samples. They tend to have too many Labour supporters at the expense of Conservative ones. They had applied weighting and adjustment procedures to the raw data, but this has not mitigated the bias problem.

“The bias in favour of left-leaning parties is not unique to the United Kingdom. The inquiry analysed more than 30,000 polls from 45 countries and found a similar, although smaller, bias. The report did not give an explanation for why, but some pollsters in the United States and Britain attribute the trend to inaccurate predictions of who will turn up to vote.” Because of the negative effects across the political spectrum in believing inaccurate polling data, this bias should be recognized and of concern to all.

22)The Accuracy Of A Poll Depends On How Well A Predicted Voter Is Modeled

Sam Wang has been predicting elections since 2004 as a hobby and has successfully called elections by building accurate voter turnout models. Any such model is based on a series of assumptions, and if the model is inaccurate, the outputs of the poll will be inaccurate. There is the constant question, for pollsters creating a model, of who will turn out. How many people will turn out? In what proportions? Which demographic measures will matter when they do? And of course, how will they vote? This speculation underlies all polling.
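
Here is a minimal sketch of how much the turnout assumption alone can move a forecast. Every number below is invented for illustration: two demographic groups, fixed candidate preferences, and two turnout scenarios that differ only in who is assumed to show up.

```python
# Hypothetical electorate: (share of registered voters, Clinton share, Trump share)
groups = {
    "college":     (0.40, 0.56, 0.37),
    "non_college": (0.60, 0.40, 0.52),
}

# Two turnout models over the same electorate with the same preferences.
scenarios = {
    "high non-college turnout": {"college": 0.55, "non_college": 0.62},
    "low non-college turnout":  {"college": 0.62, "non_college": 0.45},
}

for name, turnout in scenarios.items():
    clinton = trump = total = 0.0
    for group, (share, c, t) in groups.items():
        voters = share * turnout[group]   # fraction of registered voters who vote
        clinton += voters * c
        trump += voters * t
        total += voters
    print(f"{name}: Clinton {clinton / total:.1%}, Trump {trump / total:.1%}, "
          f"margin {(clinton - trump) / total:+.1%}")
```

With these invented numbers, one turnout guess gives Trump a sliver of a lead and the other gives Clinton a win of nearly three points; nothing about voter preference changed, only the assumption about who shows up.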

23)State Level Polling Data May Be Significantly Different From National Polling Data

This would of course encourage one to figure out which is more accurate, state polling data or national polling data, and to use whichever is more accurate. State polling data is the answer, according to Sam Wang of the Princeton Election Consortium, a polling prognosticator with a strong record going back almost four presidential elections. That four elections counts as a strong record demonstrates how ineffective polling numbers actually are at making predictions. Prediction models should be built off state-level polling data, according to Wang, for greater accuracy, accuracy that has been proven right in four whole elections! Most likely the last journalist who tried to use a poll to convince you of something did not specify whether it was state-level or national polling data.

24)Limited History Of Effectiveness

“Everyone has a plan until they get punched in the face,” said Mike Tyson the American heavyweight boxer. There are self-proclaimed polling gurus who talk a tough game and they often sound like jokes to me. If you don’t have lots of mistakes under your belt and a lot of humility, you are more likely to sound ivory tower to me than someone who’s gotten punched in the face a few times.

Polling, in many regards, especially the building of statistical models, is an ivory tower pursuit. Politics and winning elections is something that happens on the street. It’s dirty and grimy. There is room for both, ideally – the theoretical and carefully thought out aspects and the rugged and brusque aspects.

If I had to choose between the insight of the lowest guy on the totem pole in a party's official structure, the 40-year veteran precinct committeeman who is the foot soldier on the ground, and a PhD statistician whose model hadn't been wrong in at least 12 years of research, refining, and testing, I think you know whom I'd choose. Luckily I don't have to choose one over the other; I can draw on lots of different inputs, though the dingbat at the newspaper reporting on an election might want me to believe the election is as good as over just because the PhD said so. That's not enough for me, and I hope you'll agree it's not enough for you either.

25)Low Voter Turnout Makes For Harder-To-Predict Numbers

This aspect of polling would be a different issue if 29% voter participation and 43% voter participation ensured an identical outcome. It doesn’t. In general, lower turnout in US elections will benefit the Republicans and higher turnout will benefit the Democrats. This is the common trend and at the root of the partisan tension over voter laws.

The more likely anyone is to walk into a polling place to vote, the more it generally helps Democrats, so Democrats generally favor ultra-lax voting laws and perhaps non-enforcement of existing laws. The more burdens required to walk into a polling place, the more it generally helps Republicans, so you often end up with Republicans encouraging stiffer voter laws and perhaps extensive enforcement of existing laws.

I do not argue that partisan self-interest is the only aspect of this, but I’m often surprised by how clearly partisan this issue of poll access seems to be divided. Possibly, part of the mindset of one on the left may be “anything goes,” while one on the right may be “law and order,” which would make this more of a philosophical disagreement than a purely partisan one. In all likelihood, both the philosophical and the partisan play into the issue of the bitter divide on polling access.

Consequently, these perspectives affect the outcome of an election and the accuracy of polls. If voter fraud occurs and people vote who are not supposed to have access to the franchise, it dilutes all votes. That is less than ideal. I have literally been to the funeral of someone who voted two weeks later, and maybe for years after that. I have acquaintances who push the envelope with absentee ballots, probably beyond what is ethical, and ensure that a high number of favorable votes are cast in their presence by the home-bound, the disabled, and those who might not otherwise vote, essentially guaranteeing that their areas are safe for them and their candidates long before an election takes place.

If election judges do not check IDs and local authorities do not prosecute illegal voting, perhaps people who shouldn't vote will vote at higher rates. Alan Schulkin, a Democratic commissioner on the Manhattan Board of Elections, recently talked on camera with Project Veritas about the impact of campaigns illegally bussing people in to vote. He also talked about the city's poor decision to give a municipal ID to nearly anyone who asked for one, with little proof of who the person was. New York Mayor de Blasio asked him to step down for speaking about local party secrets with such loose lips.

Conversely, enforcing existing laws more stringently makes for lower voter turnout, which gets called "voter disenfranchisement" in political parlance. Adding an ID requirement would ostensibly have the same effect. The question then becomes: did the new, higher bar stop a legitimate voter who should have been able to vote, or did it stop an illegal voter whom a laxer system would have allowed? That's a hard question to answer. In response to his own Democratic Board of Elections commissioner referencing voter fraud, Mayor de Blasio took the party line: "Again, this is just urban legend that there is a [voter] fraud problem. There isn't. There's no proof of it whatsoever."

If an issue as unpredictable as voter turnout can render a polling model ineffective (and voter turnout is truly unpredictable), then it is fair for both of us to be quite hesitant to take polling as gospel truth. And as the next point shows, turnout numbers can be as fickle as the wind.

26)Even The Weather Is A Factor

When was the last time the weatherman got the weekly forecast right? Not often. In fact, a long-standing statistic about meteorologists is that they get the weather right only about a third of the time. It would help pollsters if they could predict the Election Day weather weeks out, because even that affects the outcome of an election. Alas, they can't, and neither can pollsters predict an election with certainty weeks out.

Michael Moore, while introducing his movie TrumpLand at the IFC Center in New York, recently joked that Trump voters are up at 5 a.m., and that you can be sure Trump voters will be up early that day voting. If you could vote from your Xbox controller, Hillary would win, he said. But you can't. Old working-class people and retired voters are out voting early, Moore commented.

November 8 is not November 1. A week deeper into winter-like weather may not mean much in Mississippi, Florida or Alabama, but it could mean quite a bit in places like Michigan, Wisconsin, Ohio, Pennsylvania, New Hampshire, and Maine – all potential swing states that may factor into the election on the national level.

The harsher the weather the lower the turnout. The lower the turnout the more likely the more dedicated voters are to disproportionately turn out. The change candidate – Trump this election – has those supporters. The status quo candidate – Hillary – has the less motivated supporters.

27)What Pundits Say Is Different Than What Raw Data Says

Of course raw data is worthless to most politically interested people who might just happen across it. Such users of data need some level of interpretation. Many data scientists do their fair share of interpreting and it’s rather alluring to cross into the area of punditry or journalism.

This certainly lends itself to more confusing data and a less than objective presentation of that data. Data scientists tend to blame pundits for incorrectly presenting their work. Pundits tend to blame data scientists for incorrectly collecting and processing the data.

No skin in the game, lots of finger pointing, and next to no accountability means public polling and its interpretation by the punditry is hardly a professional environment in which to place your faith. Almost everyone who talks about data is spinning it – even I’m spinning it. My spin is “please ignore what all the self-professed ‘experts’ around you are saying and think for yourself.”

28) No Skin In The Game And Lots Of Finger Pointing Makes For Weak Polling

The norm in campaigns is for spineless political hacks to point fingers at each other and constantly try to one up each other. When they get something wrong, they point fingers. This of course isn’t always the case, but is commonly the case, even in really awesome campaigns with really awesome candidates. Politics just seems to attract those types of people.

The areas around politics do the same, attracting this culture of people with no skin in the game and prone to finger pointing. Politicians often can be said to have skin in the game, especially if they are self-financing. If they are not self-financing it is still their own name on the line. Political hacks in the back office or those who hold business cards for candidates have no skin in the game. They easily hop from one campaign to another, one organization to another. Where there is no skin in the game, there are often poor results. The candidates of course are highly motivated to do well, but that does not mean that those around them will share that same level of motivation.

In public polling, there tends to be this spineless, “Don’t distinguish yourself from the crowd” kind of mentality. Does anyone really care if a pollster gets it wrong? Doubtful. The horse race media is still going to come calling next election looking for that pollster’s data. Few will remember the bad calls, even fewer will care, and there will be no loss to the pollster. As in the case of the Bradley Effect, it is very easy for a pollster to point the blame at someone or something else. Those with skin in the game have less incentive to do this, and to instead be focused on results.

29)It’s Not Even Clear How Many Voters There Are

I'm not even talking about estimates of who will show up; I think I've established that there's little hope in putting faith in those entirely unpredictable estimates, which depend on so many whims. I'm talking about the concrete number of people in the US who are registered to vote and can legally walk into a polling place on Election Day.

Even that very concrete number is unknown. Estimates (there's that word again, just in case you thought most of the data in the paper was concrete rather than estimated) of how many registered voters there are in the US vary widely.

30)Basic Demographics Are Not Known

For example, the New York Times article "There Are More White Voters Than You Think," from June 10, 2016, offers a peek under the hood of polling and shows how many estimates are required to create a model, using just one question: figuring out how many white voters without a college degree exist. While many pollsters talk about methodology and study each other's methodologies, to the point that models can effectively predict many outcomes, it is worth noting how difficult it is to predict anything accurately and how complicated that becomes.

31)Game Changers And Outliers Make Predictive Models Less Effective

With the Obama 2008 campaign, America saw a unique situation, an outlier of sorts. Never had there been a front-running black presidential candidate. At the same time, the candidate fully embraced the era's desire for hope and change, and a financial crisis was in full swing. It was obvious that simply relying on the models of the past would not be sufficient to understand the mood of the electorate and produce reliable results. An outlier was at work.

While the British inhabit islands unto themselves and are used to doing their own thing and taking some flak for it, in the 2016 Brexit vote Britain faced something it never had before: an internationally levied gloom-and-doom campaign during a referendum on leaving the European Union, in the midst of an immigration crisis it was being made a party to, with the constant blowback of interventionist Middle Eastern foreign policy and an active war on terror. The Brexit vote was an outlier. Merely relying on previous polling models was not going to be very effective at measuring the mood of voters and making an accurate prediction.

In the matter of Trump 2016, though many wealthy people have run for president, never had a billionaire TV star and media personality faced the long-hoped-for first female major party presidential nominee, a New York senator, Secretary of State, scandal-followed First Lady, and wife of an impeached president, with a divided Democratic Party and an even more divided Republican Party, where the middle class votes for the rich man and the rich vote for the less rich candidate, in a uniquely polarizing election with high unfavorable ratings for both candidates from the beginning. Merely relying on the polling models of the past in an outlier situation is another way to produce inaccurate results.

The book The Black Swan by Nassim Nicholas Taleb is about exactly these kinds of outlier moments. They inevitably come. The many prognosticators of a society do not see them coming until hindsight gives them the 20/20 advantage. Some prognosticators assure the world they saw it coming all along, while others insist "no one saw it coming," and few will say "I got it all wrong, but my careful research shows that those who followed any of 27 other models my research has identified got it more right than I did."

One wonders how the pollsters would even identify those 27 models, because while some pollsters are busy trying to make a name for themselves in the industry, the accurate pollsters are busy challenging their own ideas.

The 27 accurate models are probably all written by "no names" who don't get written up in the New York Times, Washington Post, or Huffington Post. Every industry has the self-promoters who are good at talk and the high achievers who are bad at talk because they spend so much time focused on achieving. It's not common for geek-level talent and game-show-host-level gab to converge in one person.

32)The Undecided Voters Control The Outcome Of An Election

As a matter of personal policy it makes a lot of sense to simply hold your decision until the end. It makes you a much more important part of the process. By the time the two major party nominees have been decided, the electorate has generally fallen in line.

Large swaths of the electorate have decided for whom they will vote, and the remainder of the election is largely about convincing the usually small percentage of undecided voters. The number of undecided voters often falls in the single or low double digits as a percentage of the electorate. Those who are guaranteed votes for a candidate are likely to be ignored by that candidate.

Black voters tend to be ignored by Democrats in the general election, as they are a reliable voting bloc. The religious right tends to be ignored by Republicans in the general election, as they tend to be a reliable voting bloc. The day reliable voting blocs might not merely stay home but vote for the opponent is the day they get taken more seriously. Those taken most seriously are those who remain undecided until the very end. As a matter of practice, I find that you magnify your influence in the system simply by remaining undecided.

33)Insisting You Stay Undecided Helps Maintain Objectivity

If at no time do you "join a team," then you are unlikely ever to find yourself cheering for a team. The act of cheering for a team makes one blind to the positives of the other team and the negatives of one's own team. It tends to make someone a fan more than a free thinker. Lots of negative information will come to light, specifically for the purpose of persuading the undecided voter.

You might want to do yourself the favor of encountering that information with as little team bias as you can. This lets you entertain only the information that is important to you. You do yourself a favor intellectually when you choose to ignore the polls, and when you take it a step further and reserve judgment until very late in the process, ideally waiting until you actually vote to decide.

Whether you have a small circle of influence or a large one, as someone who focuses on free thinking and reserves judgment until the very end, you will likely command an outsized share of influence in your circle as they cheer their team and you maintain an independent view. So much of what comes up in an election is blown out of proportion by cheerleaders who would otherwise not care about the issue being raised, other than the fact that it is being raised by their team against the other team.

34)The Exact Relationship Between A Poll And How A Person Votes Is Unclear

It is not certain how a single poll response equates to an individual vote. Polling is an attempt to create as clear a relationship between the two as possible, and it is not an exact science.

35)Oftentimes Someone Is Trying To Wow You

Maps and numbers and infographics get thrown around largely to wow people with their official-looking presentation. Presentation can mean something, but it can also be entirely divorced from meaningful content.

Data scientists may put a considerable amount of time into good presentation. Editors and graphic designers may do the same in a way that is not entirely true to the data. Degrees, titles, and third party verification will be thrown at you for the same purpose. The moment someone is trying to wow you is the perfect time to put up your defenses and do your best not to be wowed.

This often goes far beyond an interest in presenting data in a way that is digestible, usable, or understandable. So much effort goes into presenting data in a way intended to wow that it makes sense to be circumspect.

Why would anyone need to wow you if the data stands up to scrutiny? Part of the reason is an interest in standing out and communicating in a data-rich and competitive environment. I worry that it goes far beyond that, though. Where great effort is put into attempting to wow, I believe it wise to be circumspect about the underlying evidence or argument of those trying to shift the conversation from an appeal to reason toward excitement about the sparkly things.

36)All Polling Is Based On The Individual Judgment Of The People Running A Poll

I don't know the person running the poll (in most situations), and even when I do know them, I'm not necessarily more likely to trust the results of the poll. Individuals are individuals, and their processes have shortcomings. "The reason is that a polling result is not a pure number descended from heaven; it reflects the professional judgment of one pollster…and such judgments can vary," says poll aggregator Sam Wang. This insight and skepticism are important to keep in mind, even coming from a great believer in polling data. From a more critical perspective, Nassim Taleb practically denounces the idea of public polling: "Forecasting is the province of the charlatans."

37)The Social Desirability Bias. It’s Okay To Like Trump

It's okay to like the Green Party candidate. It's okay not to vote. It's okay to like anything or anyone you want. That's not permissiveness speaking; that's an understanding of human tendency and the realization that adults tend to know best what they want and to pursue it as much as they want. Those who do not believe their answer is socially desirable may be more likely to withhold information from a pollster without announcing that intention – a lie of omission or a lie of commission.

38)Probability Does Not Hold Up In “Extremistan”

How extreme is Trump? Some would say very extreme. Some would say he's a game changer. He is certainly not the mediocre Bush or the mediocre Clinton. I've never watched a person speak to a Bush the way Trump did for half of 2015 and part of 2016. I've never heard a person speak to a Clinton the way Trump did for part of 2016. As Nassim Taleb points out, "In Extremistan things are unpredictable. In Mediocristan everything is predictable." The election of 2016 will likely be the most difficult-to-predict election America has seen in my lifetime. I have no idea what will happen and am willing to admit that. It would be nice if more people covering the election felt the same way. It's good to be able to admit one's lack of certainty clearly and humbly.

39)Pollsters Have No Skin In The Game

This one is so important, I'm going to mention it twice. Where there is no skin in the game, there is no downside to failure. Where there is no skin in the game, there is an inferior motivator for seeking success; the incentive simply isn't there. Where a corporation is owned by its manager, as many companies are when they start out, that company tends to carry little or no debt.

Where a company is not owned by its manager, it looks very different. The incentive structure is different, and the company is likely to take on a great deal of debt because, deep down, the manager with no skin in the game doesn't really care. Pollsters have no skin in the game. There is nothing invested. There is nothing to lose by getting a public poll wrong, and little to win by getting a public poll right.

The memory of the event will be short-lived. There is much to win by sticking with the rest of the crowd and cheerleading the candidate or outcome popular in your immediate circle. The motivations of pollsters are varied, and I am having a hard time finding a single pollster who is motivated to deliver an honest perspective on the election when that means taking risk and sticking one's neck out to make it happen.

40)Innovative Pollsters Are Condemned

Throughout the autumn of 2016, I've heard the "Los Angeles Times Poll" referred to dismissively. It was dismissed for two big reasons: 1. it showed support for Trump, and 2. it didn't agree with the majority of the other polls. That translates to: 1. the results were unpopular in the polling industry and media, and 2. the results were unpopular in the polling industry and media. It is not by following consensus that innovation occurs. It comes from taking chances, testing, usually failing over and over again, and going back to the drawing board. The Los Angeles Times poll has some neat aspects to it and is a little experimental. It's an important part of the equation, because pollsters regularly prove themselves to be really bad at what they are doing and not worthy of trust. Maybe some innovation should be welcomed.

41)The Higher The Variability Of Inputs, The Less Accurate The Model

As a general rule, the more variable the inputs are – and many polls have seen considerable variability – the harder it is to use the poll accurately as a prediction tool. Nate Silver of the FiveThirtyEight blog was unfortunately calling the 2016 presidential election as an 80+% certainty for Hillary Clinton months before the election. Having long been involved in politics, I find that alone should be enough to discredit the model. Elections and the electorate are fickle things. No one knows who the winner is until after the votes are counted, and often enough you still don't really know after that. No one has any idea what obstacles stand between a candidate and election day. Forecasters are charlatans, especially when they say they can predict the future; humanity has a long history of people claiming exactly that. Skepticism is a good initial approach to them.
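To make the general point concrete, here is a minimal sketch with purely illustrative numbers and no relation to any pollster's actual model: the noisier the reading of the race that a forecast consumes, the less often the forecast even picks the right winner.

```python
import random

def called_correctly(true_margin, input_sd, trials=100_000):
    """Fraction of trials in which a noisy reading of the race has the right sign."""
    hits = 0
    for _ in range(trials):
        observed = random.gauss(true_margin, input_sd)  # one noisy poll-based reading
        if (observed > 0) == (true_margin > 0):
            hits += 1
    return hits / trials

true_margin = 2.0  # assume the "real" race is a 2-point lead (hypothetical)
for sd in (1, 3, 6, 10):
    share = called_correctly(true_margin, sd)
    print(f"input noise sd = {sd:>2} points -> right winner {share:.1%} of the time")
```

With quiet inputs the forecast is nearly always right; with inputs as variable as this cycle's polls, it is much closer to a coin flip, no matter how confident the headline probability looks.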

42)Nobody Knows Who Will Vote

Polling data is built around a model of who the pollster thinks will vote. "We really don't know who's going to vote yet and that makes it more volatile," said pollster John Zogby. "There are no likely scenarios here." Most likely, you honestly don't know whether you will even vote on election day. How can we imagine that a pollster knows you are going to vote? Pollsters often pretend to know whether you will vote, whether your less interested neighbor will vote, and whether your very excited 22-year-old cousin will vote. There's a lot of guesswork in polling presented like gospel truth.
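A hypothetical sketch of that guesswork: the same six raw responses produce a different headline number depending on the turnout probabilities the pollster assigns. Every preference and probability below is invented for illustration, not drawn from any actual likely-voter screen.

```python
# Each respondent: (stated preference, pollster's guess at their turnout probability).
respondents = [
    ("trump", 0.95), ("clinton", 0.90), ("trump", 0.40),
    ("clinton", 0.75), ("trump", 0.85), ("clinton", 0.30),
]

def share(candidate, weight_by_turnout):
    """Candidate's share, either counting everyone equally or weighting by turnout guesses."""
    weights = [p if weight_by_turnout else 1.0 for _, p in respondents]
    favor = sum(w for (c, _), w in zip(respondents, weights) if c == candidate)
    return favor / sum(weights)

print(f"all respondents counted equally: Trump {share('trump', False):.0%}")
print(f"weighted by turnout guesses:     Trump {share('trump', True):.0%}")
```

Change the turnout guesses and the headline number changes, even though nobody's stated preference moved.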

43)A High Level Of Variability In Results Calls Those Results Into Question

The Washington Post-ABC tracking poll went from Clinton +12 to Clinton +6 in under three days. That much movement is enough to draw scrutiny. While a six-point shift may not sound dramatic, it is a large swing relative to the poll's stated margin of error and hard to attribute to sampling noise alone. It's nonsense to present data like that without a serious explanation of the aberration and a careful re-examination of the methodology.
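For readers who want to check such a swing themselves, here is a minimal sketch. The sample sizes and candidate shares are assumptions, not the tracking poll's actual figures, and the sketch treats the two readings as independent samples; rolling tracking polls reuse much of their sample from day to day, which shrinks the expected noise and makes a swing of this size harder still to chalk up to chance.

```python
from math import sqrt

def margin_se(p_a, p_b, n):
    """Approximate standard error of the margin (p_a - p_b) in a single poll of size n."""
    return sqrt((p_a * (1 - p_a) + p_b * (1 - p_b) + 2 * p_a * p_b) / n)

# Wave 1: Clinton 0.50, Trump 0.38 (Clinton +12); Wave 2: Clinton 0.47, Trump 0.41 (Clinton +6).
# n = 1000 per wave is an assumption, not the tracking poll's actual sample size.
se1 = margin_se(0.50, 0.38, 1000)
se2 = margin_se(0.47, 0.41, 1000)
se_diff = sqrt(se1 ** 2 + se2 ** 2)  # treats the two waves as independent samples

swing = 0.12 - 0.06
print(f"swing = {swing:.2f}, SE of the swing = {se_diff:.3f}, z = {swing / se_diff:.2f}")
```

The point is not the exact z-score, which depends on assumptions a reader cannot verify, but that the arithmetic is simple enough to do yourself rather than taking the day's headline at face value.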

44)Overconfidence Can Affect Models Too

Overconfidence is a weakness. It clouds one's judgment and makes one less objective. On October 18, 2016, poll aggregator Sam Wang wrote, "It's totally over. If Trump wins more than 240 electoral votes, I will eat a bug." Simply listening to the certainty with which so many pollsters speak about something so uncertain reminds me of how undeservedly overconfident they can be as an industry, and how much that know-it-all attitude likely affects their models.

45)Overwhelming Preference For A Candidate May Steer A Pollster

According to FEC records, journalists have given $382,000 to Hillary Clinton, while the same group has given $14,000 to Trump. I find it hard to believe that the media and their public pollsters are able to keep that preference, and their know-it-all attitude, out of their models. Part of that attitude is the conviction that they also know, unquestionably, what is best for the rest of society.

46)Encouraging End Zone Dances On The 10 Yard Line Has Drawbacks

Pollsters like to encourage end zone dances before any victory has been won. When no election has yet been held, pollsters like to pretend that the election has already happened or is irrelevant. They lose themselves in their own models, in their own BS. Don't lose yourself in their models and BS along with them. Crossing the goal line and having it declared a touchdown is the only time a touchdown has occurred.

47)Science Does Not Proceed By Consensus

How common it is to say that because the majority of scientists agree on something, it must be accurate. The problem is that every scientist knows science does not proceed by consensus. Once upon a time the learned consensus held that the sun revolved around the earth. Holding that theory did not make its adherents correct, but it did make them popular. Someone with a more truthful model came along and proved them wrong. Polling, which is part science and a lot of fudging, should not be judged by consensus either.

48)Inconsistent Sampling Makes Data Hard To Compare

Inconsistent sampling over time makes it difficult to compare polls even from the same pollster. Even the composition of the sample, as broken down by political party, may change without any mention in the methodology.

The same ABC News-Washington Post tracking poll showed Hillary Clinton's numbers going into a tailspin after FBI Director James Comey sent a letter to about a dozen Democratic and Republican members of Congress saying that agents would be reviewing newly discovered emails related to the Clinton investigation he had testified about during the summer.

From one day to the next, the poll counted fewer Republicans. On October 30, 2016, the partisan breakdown of interviewees was 37% Democratic, 28% Republican, and 30% Independent; on October 31, 2016, it was 37% (D), 27% (R), and 30% (I). Consistency across waves matters if polls are to be compared effectively over time and to provide both snapshots and a coherent story.

If each day's results are entirely unrelated to, or cannot be compared with, the previous day's results, then the meaning of the numbers is significantly limited.
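Here is a minimal sketch of the mechanism, using hypothetical within-party support numbers: hold how each party's voters split constant and change only the partisan mix of the sample, and the headline margin moves even though no voter changed their mind.

```python
# Hypothetical within-party support; the remainder in each group is undecided/other
# and is ignored here. These splits are assumptions, not the poll's published crosstabs.
clinton_by_party = {"D": 0.90, "R": 0.06, "I": 0.42}
trump_by_party   = {"D": 0.07, "R": 0.88, "I": 0.44}

def margin(party_mix):
    """Clinton-minus-Trump margin implied by a given partisan mix of the sample."""
    c = sum(party_mix[p] * clinton_by_party[p] for p in party_mix)
    t = sum(party_mix[p] * trump_by_party[p] for p in party_mix)
    return c - t

oct30 = {"D": 0.37, "R": 0.28, "I": 0.30}  # partisan mix reported October 30
oct31 = {"D": 0.37, "R": 0.27, "I": 0.30}  # partisan mix reported October 31

print(f"October 30 mix: Clinton margin {margin(oct30):+.1%}")
print(f"October 31 mix: Clinton margin {margin(oct31):+.1%}")
```

A one-point change in the Republican share of the sample is enough to move the margin noticeably, which is why an unannounced shift in sample composition makes day-to-day comparison unreliable.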

49)The Truth Is “Proprietary”

If you dig deep enough into questions of how any public pollster is doing things, you are almost certain to hit a brick wall. When the questioning gets uncomfortable for the pollster for any reason, you'll probably hear the familiar answer: "that's proprietary."

Considering how important it is for methodology to receive independent scrutiny from peers and the public in order to verify the accuracy of results, the fact that a cop-out like this exists, and is so prevalent in the industry, should be enough for a careful observer to heavily discount the results.

50)Third Party Supporters Break By Election Day

Some people say they strongly support third party candidates, and sometimes that support reaches double digits. When it's time to cast the vote, however, the result is more often than not in the low single digits – almost always below 5% and usually only 1% or 2%.

Even these small percentages can be impactful, but supporters who end up effectively undecided by Election Day are a challenge to figure accurately into the equation. In 2016, third party candidates polled very well, especially Gary Johnson nationally and independent Evan McMullin in Utah. McMullin polls so well in Utah that he could turn it into a swing state.

This support adds real uncertainty, though, because third party support is notoriously hard to measure accurately. Effectively, Trump could win 26% of the vote in Utah if the support for McMullin holds, or 60% if that support goes the way of many third party candidacies.
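To see why that range is so wide, here is a minimal sketch with entirely hypothetical numbers: vary how much of the third party support actually holds on Election Day and where the faded portion lands, and the same poll supports very different outcomes.

```python
def election_day(poll, holds, break_to):
    """poll: polled shares; holds: fraction of third-party support that sticks;
    break_to: how the faded portion splits between the two major candidates."""
    faded = poll["mcmullin"] * (1 - holds)
    return {
        "trump":    poll["trump"] + faded * break_to["trump"],
        "clinton":  poll["clinton"] + faded * break_to["clinton"],
        "mcmullin": poll["mcmullin"] * holds,
    }

utah_poll = {"trump": 0.26, "clinton": 0.24, "mcmullin": 0.30}  # illustrative shares only
for holds in (1.0, 0.5, 0.1):
    r = election_day(utah_poll, holds, {"trump": 0.8, "clinton": 0.2})  # assumed 80/20 break
    print(f"{holds:.0%} of McMullin support holds -> Trump {r['trump']:.0%}, "
          f"Clinton {r['clinton']:.0%}, McMullin {r['mcmullin']:.0%}")
```

The poll itself cannot tell you which of those scenarios will happen; the "holds" and "break" numbers are judgment calls, which is exactly where the uncertainty lives.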

51)Even Outspoken Supporters Never Quite Decide

Walking into a polling place, it would not be strange for 20 to 40% of a candidate's stated supporters to be, at that moment, still open to changing their minds. Strength of support is an important indicator and a difficult one to measure reliably, because of how relative it is and how fickle it can be.

If you are running a well-funded poll with well-trained, diligent interviewers, then you know exactly what the 800 people polled told you, but you have no idea how many of them will do what they told you. I'm not even talking about extrapolating that data to represent the will of 120 million people, which pollsters and pundits assure you they can do as if it were gospel truth.

No, I'm not talking about that. I'm talking about how they can't even know how 20-40% of the 800 people interviewed will behave one minute after the question is asked and answered. The interviewees may be lying, or, in even more cases, they simply may not know yet. It is the nature of human existence to be dynamic and flexible in response to changing situations.

That adaptability is how the human race continues to exist. So sorry if humans do not want to dispense with the timeless survival instinct of adaptability in order to make a pollster's day easier. It is yet another good reason not to take anything a pollster says too seriously.
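For a sense of scale, here is a minimal sketch comparing the textbook sampling margin of error for 800 respondents with the far larger uncertainty that 20-40% persuadable supporters introduce. The 48% stated-support figure is an assumption for illustration.

```python
from math import sqrt

n = 800
sampling_moe = 1.96 * sqrt(0.25 / n)  # worst-case 95% sampling margin of error
print(f"sampling margin of error for n={n}: +/-{sampling_moe:.1%}")

stated_support = 0.48  # hypothetical share who say they back the candidate
for persuadable in (0.20, 0.40):
    floor = stated_support * (1 - persuadable)  # if every persuadable supporter defects
    print(f"{persuadable:.0%} persuadable -> support could land anywhere from "
          f"{floor:.0%} to {stated_support:.0%} (or higher), before any sampling error")
```

The familiar plus-or-minus three or four points only covers the sampling noise; the "will they actually do what they said" problem dwarfs it, and no margin of error in a press release accounts for that.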

52)Losers Average Losers

The perennially successful trader Paul Tudor Jones used this as one of his principles. Averaging down on a losing bet does not make that losing bet any better. Cut your losses. Go back to the drawing board. Start again from scratch when conditions are better. While the wording is harsher than I would prefer, the message at its core is true of polls as well.

The tendency among pollsters is, of course, to go back to the drawing board and constantly test and retest – a great tendency – but in that process they also tend to look around the industry and compare themselves to others. Trendiness and popularity play considerably into polling techniques, and trendiness infrequently precedes success.

Sticking one's neck out and going counter to an industry trend is what is likely to generate success. All else is merely a considerable degree of imitation and an attempt to be mediocre. The amount of discussion and groupthink among public pollsters puts me off. It is an industry in which the consumer would be better served by mavericks than by the spineless.

Groupthink is a disservice any time the critical mind needs to be engaged, and polling requires a constantly critical mind. As an example, Sam Wang even advises the novice, when in doubt about the results of a poll, to average in two other polls and use those results instead, to dampen the extremeness of the questionable poll. That this "let's throw everything into the cake batter and see how it turns out" attitude is the norm, and still works, demonstrates how unscientific polling actually is. These guys are playing with numbers, happen to be right sometimes, and still carry a lot of influence in the political process. I find that to be an absolute joke.
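Here is what that averaging habit looks like in a minimal sketch, with margins invented for illustration: blend a poll you distrust with two others and its influence shrinks, whether or not it was the one telling the truth.

```python
def average_margin(margins):
    """Simple unweighted average of poll margins, the 'cake batter' approach."""
    return sum(margins) / len(margins)

questionable = 12.0      # the "extreme" reading, Clinton +12 (illustrative)
two_others = [4.0, 5.0]  # two nearby polls (illustrative)

print(f"questionable poll alone:  Clinton +{questionable:.0f}")
print(f"averaged with two others: Clinton +{average_margin([questionable] + two_others):.1f}")
```

Averaging smooths the number without ever asking which poll had the better methodology; the outlier is diluted simply for being an outlier.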

The tendency toward averaging and groupthink is yet another reason that polls should be taken much less seriously than they are. It is also a reason why they are taken as seriously as they are. Get a group of people to think identically, and once you convince a few key people in that group of an idea, no matter how idiotic, the whole group becomes capable of believing the same idiocy.

One such example is the impression that pollsters have a gift of prognostication tantamount to the soothsayer who warns Caesar to beware the Ides of March. Of course, if you point out this group-wide identical "rhythm of thought" (ROT) to someone in the midst of that groupthink, prepare for vengeance just beyond the greatest degree you might consider appropriate for a violent attacker or home invader.

The truth hurts. And while no one has a monopoly on the truth, an outside perspective is not often welcome in the midst of groupthink. Beware of groupthink. It strengthens the communal bond and weakens the flexibility and independence of the communal members. Where independence is needed, groupthink is a malignancy rather than a benefit.

The mere tendency to seek averages rather than excellence is indicative of a mediocre mind, and in our era of excellence, a mediocre mindset is a losing mindset.

53)There’s A War For Your Mind

As joke-worthy as this statement might be to some, and as easily as this kind of thinking can be dismissed as "the stuff of conspiracy theorists," it is certainly true that in mass media and mass politics there is, for lack of a better term, a war for your mind.

A great deal of money is spent trying to get you to 1. do things you wouldn't otherwise do, and 2. do things that are not necessarily in your best interest. The more you accept that as your default operating assumption, the better.

Those who tend to recognize “if any kind of media is around me, then someone is trying to sell me something” tend to negotiate the realities of our contemporary world much better than those who don’t.

While the former understand that marketing exists and that marketers are well trained and well paid not to act in your best interest, the latter always seem to pass along wisdom from commercials or newspaper headlines as if it were truth, and then, when it is disproven, act hurt that the world is a complicated place where you can never know what to believe.

The person to believe is yourself. The safest default is to be highly skeptical of everyone not in your personal life. Billions a year are not spent on Fortune 500, Big Pharma, Big Green, Big Politics, and Big Government marketing budgets because "it's the right thing to do."

That money is spent because someone believes that if you can be convinced of whatever they are selling, there is a greater return to be had on that investment. There truly is a war for your mind.

54)Believing The Polls May Impact Your Behavior

Each election, minor party candidates are given an exciting opportunity to change the political discussion and impact elections. They sometimes even poll well. Predictably, they end up with a tiny portion of the vote, far below their polling numbers.

The people who say they support them don't end up being there on Election Day. There are many reasons for this; chief among them is the appearance of electability and how third party candidates lack it. (In that situation, a vote for a Green or a Libertarian risks being seen as a vote taken away from one major party candidate, or effectively a vote in favor of a major party candidate you really don't like.)

How does this appearance of electability, or the lack of it, come about? In many ways, the most significant probably being polling, though a variety of other systemic biases ensure that only Democrats or Republicans are elected, and among them most often only those the party establishment prefers. That is a matter for another time. The point to keep in mind is that listening to the polls can affect your behavior.

Not voting your conscience and not voting third party are examples of that. The same affects the major parties too, however. A candidate or referendum outcome widely seen as a shoo-in, as may have been the case with the vote for Britain to remain in the European Union, may see its advocates stay home, not realizing how important their individual votes might be.

The media, the punditry, the pollsters, and the Clinton campaign and its surrogates at present appear to be doing an end zone dance on the two yard line, with seething Trump supporters at their heels, more and more motivated to magnify their efforts with every mention of how a Clinton victory is a foregone conclusion.

Other Trump supporters may grow disappointed, certain he will not win, and therefore not vote. Those on the fence might vote for the person they feel has a greater chance of winning. There are many possible results of a person being fed misleading data and letting it affect their behavior. Please don't be such a person. Recognize the war for your mind, and declare your mind and your behavior sovereign – your own to determine without any outside interference.

Why This Kind of Writing Appears on 52 Weeks in Slovakia & Why This Fight Against Orthodoxy is Just As Relevant in Present-day America as in Communist Czechoslovakia 1954

From shortly after WWII until 1989, approximately 400 million people in Central and Eastern Europe and the USSR lived under the tyranny of communist governments. Communist party membership made up a small percentage of the population – often single digits and sometimes very low double digits – yet the party was capable of controlling virtually every aspect of entire societies without overwhelming public resistance; so little resistance, in fact, that two generations were born and came of age under the system knowing nothing other than totalitarian government as the only recognizable form of government.

How a single-digit percentage of society could control an entire society has long surprised me. How can the orthodoxy of a small group of people be the orthodoxy foisted on an entire society? It has naturally caused me to wonder when else this happens, and to turn the mirror on myself: though I do not live in communist Czechoslovakia circa 1954, is there an orthodoxy of a small minority that I too unthinkingly subscribe to, and that vast swaths of people subscribe to?

Pollster Scott Rasmussen regularly polls on how many people identify as members of the American political class. The number is often in the single digits and sometimes in the very low double digits. These are the American equivalents of card-carrying Czechoslovak Communist Party members. The outcome is different, and less vile, but these members of the political class – the American political establishment – exercise outsized dominance and influence over the rest of us, and their influence is to our detriment.

They constantly seek to grow their numbers, their influence, and their hold on society. It's not a conspiracy; it's just the natural way of things – members of the political class like to surround themselves with other members of the political class and to grow their own and their friends' influence in life. Chicago Cubs fans likewise like to surround themselves with other Chicago Cubs fans and to grow their influence by cheering more loudly, dressing in blue, attending games, and showing support for their team and their tribe. There's nothing malicious there. It is simply putting one's own interest, and the interest of one's tribe, first in life, above the interests of others.

The same naturally happens in the political class. The political class dresses a certain way, talks a certain way, goes to certain schools and holds certain degrees, reads certain periodicals and authors, goes to certain conferences, supports certain charities, and spends its professional and leisure time in certain activities. It's not hard to imagine that a certain groupthink would result in that environment, just as much as the groupthink that results among Chicago Cubs fans. It's a "go us" mentality.

One can understand how the groupthink might be even more significant: while being a Chicago Cubs fan can feel all-encompassing at a Cubs game, or when the Cubs are vying for the pennant, being a member of the political class really is all-encompassing in so many aspects of one's life – from the paper you read, to the job you have, to the people you associate with, to the thoughts you allow yourself to have and nurture. In that environment, and living through a time of such prevalent orthodoxy from the American political class, it is clear to me how a single-digit percentage of a population can effectively control an entire society, even to that society's detriment.

While I thankfully did not live through the communist era in Czechoslovakia, I recognize the fight that I too am invited to take up in my own life – a fight against an unthinking orthodoxy imposed on an entire society, and in favor of a thinking pursuit of truth and of individual liberties. That fight is as relevant today in America, and as pertinent in the lives of individuals, as it was during the communist era in Central and Eastern Europe.

While those brutal communist regimes held power, it was virtually impossible for most of society to see what made them so bad. It was all accepted by so many in their daily lives. Only in retrospect was the evil of communism truly comprehended across those societies. Even today, many people who lived through it don't actually realize what was so evil about it all, and sort of miss it. That same love of a harmful orthodoxy is as capable of existing in America today as it was in Czechoslovakia or Hungary or Poland or the DDR or Romania or Bulgaria or Yugoslavia or the USSR in the 1950s.

Every generation is invited to participate in that fight. When you look at an individual poll, an individual opinion, an individual eyewitness account, an individual observation, an individual article, and choose to accept it as fact or to approach it with skepticism, you are choosing what side of that fight you will stand on at that moment.

Allan Stevo writes on Slovak culture at www.52inSk.com. He is from Chicago and spends most of his time traveling Europe and writing. You can find more of his writing at www.AllanStevo.com. If you enjoyed this post, please use the buttons below to like it on Facebook or to share it with your friends by email. You can sign up for emails on Slovak culture from 52 Weeks in Slovakia by clicking here.
