Surveys are a great way to understand how people think, feel, and act, and they are used across industries such as business, marketing, academia, and government.
While surveys are a powerful market research tool, if there are errors in your survey design, you could end up with a major disaster on your hands.
Survey disasters come in all shapes and sizes, from minor inconveniences to major scandals. Let's learn from some of the most notable survey and market research disasters in history so you can avoid making the same mistakes:
The term "Hawthorne Effect" comes from a series of experiments conducted at the Western Electric Company's Hawthorne Works near Chicago in the 1920s and 1930s. The original purpose of the Hawthorne studies was to examine the impact of different working conditions on productivity.
The studies revealed an unexpected phenomenon: workers’ productivity increased whenever they were being observed by the researchers, regardless of the specific working conditions. This phenomenon became known as the Hawthorne Effect.
For example, in one of the studies, researchers found that workers’ productivity improved when they were exposed to more light. However, they also saw an increase in productivity when the lights were dimmed.
Today, the Hawthorne Effect is used to describe when people change their behavior in a study because they know they’re being observed.
The Hawthorne Effect has resulted in several modifications to the design and implementation of research.
Researchers now use blinding and similar techniques to minimize the Hawthorne Effect, and they consider how the research environment itself might affect participants’ behavior.
The Hawthorne Effect has also had a big impact on how managers motivate and engage their staff. Many managers now use positive reinforcement and a supportive work environment to make employees feel appreciated, respected, and cared for, which helps boost morale and productivity.
In the early 1900s, The Literary Digest was one of the most popular magazines in the US. It was famous for its presidential straw poll, which had correctly predicted the winner of every presidential election since 1916.
In 1936, the magazine ran its largest straw poll ever, mailing roughly 10 million ballots to a sample of voters drawn from telephone directories, car registrations, and club membership lists. More than 2 million ballots were returned, a response rate of about 24%, and the poll predicted a decisive victory for the Republican challenger, Alf Landon.
However, Roosevelt won the election in a historic landslide, defeating Landon by nearly 25 percentage points in the popular vote and winning every state except Maine and Vermont.
The Literary Digest survey disaster did not just happen out of thin air. It was largely caused by human errors, specifically biased sampling and methodological flaws.
Here’s how it happened:
The Literary Digest’s sample was not representative of the electorate. The magazine compiled its sample from telephone directories, car registrations, and club membership lists, which in 1936 were dominated by affluent, educated Americans who were more inclined to vote Republican.
The country was also going through the Great Depression in 1936; many people were struggling financially. As a result, they were more likely to vote for Roosevelt and support the New Deal.
There were also methodological issues with the Literary Digest poll. For example, the magazine did not carefully screen the returned ballots to confirm they came from legitimate voters, and it did not account for the fact that some groups were more likely to respond and to vote than others.
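To see why a huge but biased sample still gets the answer wrong, here is a minimal simulation sketch in Python. The population split, the skew of the sampling frame, and the response rate are hypothetical numbers chosen to mirror the 1936 situation, not the actual historical figures.

```python
import random

random.seed(42)

# Hypothetical electorate: 70% favor candidate A, 30% favor candidate B.
population = ["A"] * 700_000 + ["B"] * 300_000

# Biased sampling frame: households with telephones, cars, and club
# memberships skew heavily toward B voters (illustrative skew only).
frame = [v for v in population
         if (v == "B" and random.random() < 0.9)
         or (v == "A" and random.random() < 0.3)]

# Mail "ballots" to the frame and assume roughly a 24% return rate.
returned = [v for v in frame if random.random() < 0.24]

def share(votes, candidate):
    return 100 * votes.count(candidate) / len(votes)

print(f"True support for A:  {share(population, 'A'):.1f}%")
print(f"Poll estimate for A: {share(returned, 'A'):.1f}% (n = {len(returned):,})")
# Despite six figures' worth of returned ballots, the estimate lands far
# from the truth: the biased frame, not the sample size, drives the error.
```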
The Literary Digest was far from the last poll to get an election badly wrong. Here are more significant survey failures and what caused them:
On November 3rd, 1948, the Chicago Daily Tribune ran the headline “Dewey defeats Truman” on their front page. It’s one of the most famous mistakes in the history of journalism.
The headline declared that the Republican challenger, Thomas Dewey, had defeated the sitting Democratic President, Harry Truman. Truman’s victory came as a massive shock to the press and the pollsters.
Biases that Affected the Poll
The Chicago Daily Tribune’s poll for the 1948 election had a high nonresponse rate. This was most likely because much of the polling was conducted by telephone at a time when many households, particularly poorer ones, did not yet have telephones.
As a result, the poll’s results reflected the views of wealthier, better-educated respondents, who tended to vote Republican.
Another factor that may have contributed to the “Dewey Defeats Truman” headline was the use of quota sampling.
Quota sampling is a type of sampling in which the pollster sets quotas for different groups of people, such as men, women, and people of different ages and races. The pollster then interviews people until they have reached their quotas for each group.
While quota sampling can produce a demographically balanced sample, it is prone to bias. Interviewers have to pick and choose whom they interview to fill their quotas, and if they are not careful they end up selecting people who are easier to reach or more likely to hold certain opinions.
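As a rough sketch of how that selection bias can creep in, the toy Python example below fills gender quotas exactly but lets the "interviewer" approach easy-to-reach people first. All of the group sizes and probabilities are invented for illustration.

```python
import random

random.seed(1)

# Toy population: each person has a gender, an "easy to reach" flag, and an
# opinion. The link between reachability and opinion is made up.
def make_person():
    easy = random.random() < 0.4
    return {
        "gender": random.choice(["man", "woman"]),
        "easy": easy,
        "opinion": "yes" if random.random() < (0.7 if easy else 0.45) else "no",
    }

population = [make_person() for _ in range(100_000)]

# Quota: 500 men and 500 women. The quotas are met exactly, but the
# interviewer approaches easy-to-reach people first.
def quota_sample(pop, per_group=500):
    sample = []
    for gender in ("man", "woman"):
        group = [p for p in pop if p["gender"] == gender]
        group.sort(key=lambda p: not p["easy"])  # easy-to-reach people first
        sample.extend(group[:per_group])
    return sample

sample = quota_sample(population)
true_yes = sum(p["opinion"] == "yes" for p in population) / len(population)
poll_yes = sum(p["opinion"] == "yes" for p in sample) / len(sample)

print(f"True 'yes' share:       {true_yes:.1%}")
print(f"Quota-poll 'yes' share: {poll_yes:.1%}")
# The demographic quotas are satisfied, yet the estimate is off because
# selection within each quota was not random.
```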
The result of the 2016 Brexit referendum came as a shock to British politics: 51.9% of voters chose to leave the European Union, even though most polls had predicted a Remain win.
Here are some of the factors that led to this poll prediction inaccuracy:
Polls are based on samples of the population, and there is always a margin of error associated with any sample. In the Brexit campaign, some polls were also criticized for unrepresentative samples that under-represented older and less-educated voters, who voted strongly for Leave (see the quick margin-of-error calculation after these factors).
Pollsters also misjudged turnout. Older, Leave-leaning voters turned out at higher rates than many turnout models assumed, while younger, Remain-leaning voters turned out at lower rates, which helped swing the result toward Leave.
A significant number of voters were undecided until the very last minute. These voters were more likely to vote Leave, and they may have been less likely to respond to polls in the run-up to the referendum.
Also, voters who supported Leave may have been reluctant to admit this to pollsters, for fear of being judged or ostracized. This could have led to an underestimation of the Leave vote in the polls.
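On the margin-of-error point above, here is a quick back-of-the-envelope calculation in Python for a hypothetical poll of 1,000 respondents showing 52% support for Remain; the figures are illustrative and not taken from any specific Brexit poll.

```python
import math

# Hypothetical poll: n respondents, observed Remain share p.
n, p = 1000, 0.52

# Standard 95% margin of error for a simple random sample.
moe = 1.96 * math.sqrt(p * (1 - p) / n)

print(f"Remain estimate: {p:.0%} +/- {moe:.1%}")
# Roughly 52% +/- 3.1%: a 52-48 split sits inside the margin of error, so a
# poll like this cannot distinguish a narrow Remain win from a narrow Leave
# win, even before accounting for sampling and turnout problems.
```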
Market research helps you develop products your target audience actually wants, which is what makes those products profitable. Here are some examples of market research gone wrong and how it affected the companies involved:
1. Coca-Cola's New Coke (1985)
Coca-Cola launched New Coke in 1985, believing consumers wanted a sweeter soda. However, its market research did not account for the emotional connection consumers had with the original Coca-Cola formula. The backlash was so severe that the original formula was brought back about three months after the launch.
2. Microsoft's Windows 8 (2012)
Microsoft launched Windows 8 in 2012 with a new user interface designed for both touch and non-touch devices. The market research did not put enough weight on user-friendliness: many users found the new interface difficult to use, and Windows 8 struggled in the market.
3. PepsiCo's Tropicana Redesign (2009)
PepsiCo redesigned its Tropicana orange juice carton in 2009, replacing the iconic image of an orange with a straw with a more modern design. Consumers did not respond well to the new packaging; they found it unappealing and stopped buying it.
The sales of Tropicana orange juice declined so rapidly that PepsiCo had to bring back the original design within a few months.
4. Amazon's Fire Phone (2014)
In 2014, Amazon released its first smartphone, the Fire Phone, which offered a range of novel features, including a "Dynamic Perspective" display that tracked the user's head movements.
Unfortunately, customers found the features unnecessary and the phone expensive, and the Fire Phone was pulled from the market after about a year.
5. McDonald's Arch Deluxe (1996)
McDonald’s launched the Arch Deluxe burger in 1996 in an attempt to attract more adult customers. The burger was made with higher-quality ingredients, so it cost more than McDonald’s other burgers.
The market research did not take into account the fact that consumers did not want to pay higher prices for a McDonald’s burger. The Arch Deluxe burger failed and was discontinued after only two years.
6. Bic's Gendered Pens (2012)
In 2012, Bic introduced a range of gendered pens for men and women. The pens came in different colors, featured gendered branding, and were sold at different prices.
The pens were met with widespread criticism from customers, who accused Bic of sexism. The company had to issue an apology and recall the pens.
7. Google’s Google Glass (2013)
Google launched its Google Glass augmented reality glasses in 2013. Google Glass had mixed reviews; it was criticized for its high price, limited functionality, and privacy concerns. This led to Google Glass being discontinued two years after its launch.
8. Nokia’s N-Gage Gaming Phone (2003)
The N-Gage, released in 2003, was intended to be a dual-purpose phone and handheld gaming device.
The device was bulky, making it uncomfortable to use as a handheld gaming console, and it had a limited library of games. Unfortunately, Nokia’s market research did not reveal these problems, and the N-Gage was a commercial failure.
9. Harley-Davidson Perfume (1994)
Harley-Davidson launched a perfume in 1994 as an extension of the Harley-Davidson lifestyle brand. However, the target audience did not receive the product well and was perplexed as to why a motorcycle manufacturer was selling fragrances.
They also found the scent to be unpleasant. Harley-Davidson discontinued the perfume after just a few months.
10. Colgate Kitchen Entrees (1982)
Back in 1982, Colgate launched a line of frozen meals called “Colgate Kitchen Entrees”. Colgate was already well known for its toothpaste, so it assumed its brand name would give it an edge in the frozen food aisle.
Unfortunately, people weren’t interested in buying frozen food from a toothpaste company, and the line flopped.
Even the most well-designed surveys can be susceptible to pitfalls. Here are some common errors you should watch out for in your survey and market research:
Sampling bias occurs when a sample is not representative of the population it is intended to represent. This can happen when, for example, the sampling frame leaves out certain groups (as with the Literary Digest’s phone and car-owner lists) or when respondents self-select into the survey.
Nonresponse bias occurs when some members of the sample do not respond to the survey. This can skew the results, because the people who do respond may hold opinions that differ systematically from those of the people who do not.
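One partial remedy for nonresponse bias is to weight the responses you did receive so that the sample matches known population proportions (often called post-stratification). Below is a minimal Python sketch; the age groups, response counts, and answer rates are all made up for illustration.

```python
# Post-stratification sketch: reweight respondents so the sample's age mix
# matches the known population mix. Every number below is hypothetical.

# Known population shares by age group.
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

# Who actually responded (older people over-responded in this example) and
# the share of each group answering "yes" to some question.
respondents = {"18-34": 100, "35-54": 250, "55+": 650}
yes_rate = {"18-34": 0.60, "35-54": 0.50, "55+": 0.30}

total = sum(respondents.values())

# Unweighted estimate: dominated by the over-represented 55+ group.
unweighted = sum(respondents[g] * yes_rate[g] for g in respondents) / total

# Weight each group by (population share) / (sample share), then average.
weights = {g: population_share[g] / (respondents[g] / total) for g in respondents}
weighted = sum(weights[g] * respondents[g] * yes_rate[g] for g in respondents) / total

print(f"Unweighted 'yes' estimate: {unweighted:.1%}")  # about 38%
print(f"Weighted 'yes' estimate:   {weighted:.1%}")    # about 46%
# Weighting only helps if respondents within each group resemble the
# non-respondents in that group; it cannot fix all nonresponse bias.
```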
Examples of Surveys Prone to Sampling and Nonresponse Bias
The Literary Digest straw poll and the 1948 election polls discussed above are classic examples: both drew on sampling frames and response patterns that over-represented wealthier, Republican-leaning voters.
Leading questions are questions worded in a way that suggests a particular answer. This can influence the respondent’s response even if they do not actually agree with the suggestion.
For example, the question “Don’t you think the government should spend more money on education?” is leading because it nudges the respondent toward agreeing that spending should increase. A more neutral wording would be “Should the government spend more, less, or about the same on education?”
Surveys and market research are powerful tools for learning about your customers and making better decisions. However, if they are not conducted ethically, they can compromise the integrity of your data.
Here are some unethical methods to avoid when conducting a survey or market research:
Push polling is designed to influence public opinion rather than collect accurate data. It is mostly used in political campaigns to spread negative information about the opponent.
For example, a push pollster might ask voters questions like, “Do you still support Candidate Y after you’ve learned that she had a juvenile record for shoplifting?”
Push polling isn’t limited to asking leading questions; pollsters may also use loaded language or a negative tone of voice to steer the respondent’s answer.
Reporting only the data that supports your preconceived notions, and overlooking the data that does not, is a classic case of data misinterpretation.
Report your findings honestly and without bias, and if you ran into challenges with the survey, be transparent about your methods and the limitations of the study.
While survey disasters are common and happen to the best of us, they are avoidable. Here’s how to avoid survey and market research failures:
(1) Methodological Rigor:
A well-designed survey helps you avoid inaccurate results. Employ best practices such as random sampling, clearly worded questions, and controlled experiments where appropriate.
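As a small illustration of the random-sampling part, here is a Python sketch that draws a simple random sample of survey invitees from a customer list. The file name, column name, and sample size are placeholders, not a prescribed setup.

```python
import csv
import random

SAMPLE_SIZE = 400  # placeholder: base the real size on your margin-of-error target

# Load the full sampling frame from a hypothetical customer export that has
# an "email" column.
with open("customers.csv", newline="") as f:
    frame = [row["email"] for row in csv.DictReader(f)]

# Simple random sample: every member of the frame has the same chance of
# being invited, unlike convenience or quota-based selection.
random.seed(2024)  # fixed seed so the draw is reproducible and auditable
invitees = random.sample(frame, k=min(SAMPLE_SIZE, len(frame)))

print(f"Frame size: {len(frame)}, invited: {len(invitees)}")
```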
(2) Pilot Testing and Pre-Testing:
Pilot testing involves administering the survey to a small group of people to identify potential problems before the full launch. Pre-testing is similar, but it focuses on trying out individual questions and instructions rather than the whole survey process.
Pilot testing and pre-testing can help you catch problems such as confusing or leading questions, overly long questionnaires, and technical glitches before the survey reaches your full sample.
Here are some best practices to ensure your surveys don’t make the headlines for the wrong reasons:
Being transparent about your methods and findings builds your credibility and provides a benchmark for future research, so always report how the survey was conducted alongside the results.
Don’t just abandon failed surveys; learn from survey disasters and from feedback from other researchers and practitioners. Also, keep an eye on new developments in survey research and put them into practice.
Experience isn’t always the best teacher, but it certainly is the most memorable. We hope this guide helps you avoid common market research errors and implement the best practices to design better surveys in the future.