Quantifying Qualitative Data: How ResTech Can Lift the Credibility of Qualitative Market Research
I’ve been doing market research for almost 30 years. And in that time, I’ve done countless qualitative research projects. I’ve moderated hundreds of focus groups, individual interviews and ethnographic studies. And, especially over the last two years, I’ve conducted many live streaming groups and interviews.
I believe there’s nothing better than qualitative market research for developing a deep human understanding and inspiring creativity. I don’t believe you can gain an empathic understanding of your consumer without it.
But despite this, qualitative research has always played second fiddle to quantitative analysis in the world of market research. By most estimates, qualitative accounts for only about 25% of total global spending in market research. The majority of market research is custom quantitative, hybrid qual-quant studies and, increasingly, analytics. People often describe qualitative as “preliminary,” “directional” and “exploratory.”
Why is this?
I think there are two reasons, one cultural and the other statistical. As a culture, we trust numbers. And as an industry, we fear being wrong.
Trust in Numbers
We have a cultural bias toward quantification. By cultural, I don’t just mean the culture of marketing and market research. I mean our broader culture. Most of us have faith in the empirical legitimacy of numbers. As a way of knowing our world, quantification is truth. We see numbers as objective, beyond dispute. Numbers in our society have an authority that other forms of evidence simply do not. And because of that, numbers carry weight within our organizations. If you’re a brand manager looking to justify a decision to senior management, would you rather point to a number or to a quote? For most of us, the answer is clear.
I love this passage from UCLA Professor of History Theodore M. Porter.
“The appeal of numbers is especially compelling to bureaucratic officials who lack the mandate of a popular election, or divine right. Arbitrariness and bias are the most usual grounds upon which such officials are criticized. A decision made by the numbers… has at least the appearance of being fair and impersonal. Scientific objectivity thus provides an answer to a moral demand for impartiality and fairness. Quantification is a way of making decisions without seeming to decide. Objectivity lends authority to officials who have very little of their own.”
– Theodore M. Porter
“Trust in Numbers: The Pursuit of Objectivity in Science and Public Life”
Porter uses the term “bureaucrat” broadly to refer to anyone who has to justify their decisions to stakeholders or constituencies: anyone who isn’t endowed with some other form of authority. That applies to all of us, whether we serve clients, customers, bosses or boards of directors.
You can argue whether our trust in numbers is justified, moral or beneficial to our society. But that doesn’t change the fact that our cultural bias toward quantification is a reality of our professional lives.
Fear of Error
The second reason why qualitative analysis plays second fiddle to quant is that we’re afraid of being wrong.
Qualitative is prone to error in two ways: one widely known and accepted, and another that is known but widely ignored. I’ll call these two forms of error Representation Error and Interpretation Error.
Representation Error is a form of error that is usually associated with qualitative research. It happens when we mistakenly assume a small sample represents a larger population. This is sampling error. With qual, there is simply a greater chance of statistical error when we attempt to make assertions that go beyond our pool of respondents. We all get that and that is a big part of the reason qual insights are often treated as preliminary and in need of quantitative confirmation.
This is a widely accepted limitation of qualitative research. And it is one of the reasons hybrid studies are so popular. By combining qual and quant, we compensate for the limitations of each. Qual gives us exploratory insights and human depth. Quant gives us projectable confirmation.
But Representation Error is more than just sampling error. It also takes into account the fluid way in which data are often collected in qualitative studies. In many studies, interview and discussion flow changes from respondent to respondent. Qual fieldwork is often an evolving journey, making it hard to aggregate insights across respondents. This is the reason why many qualitative studies can’t be replicated. Each study exists within its own small universe. If the study were to be done again, it’s likely that the evolving journey of fieldwork would take a different course, and we’d end up in a different place.
If Representation Error is error in extrapolating qual results to populations outside of the study sample, Interpretation Error is error that occurs within the sample of respondents involved in the study. And this is true whether a qualitative study is stand-alone or combined with a quantitative phase.
I believe that the insights we derive from qualitative are often flat out wrong. This is because qualitative data collection and analysis are often loose and sloppy. That lack of rigor leaves qual open to a host of biases and noise that lessen its credibility. And this is particularly true for traditional qualitative methods like focus groups, whether conducted in person or streaming.
Qualitative Research: A Very Noisy System
If you haven’t read “Noise: A Flaw in Human Judgment” by Daniel Kahneman, Olivier Sibony and Cass Sunstein, you need to.
Had the authors included qualitative market research in their analysis, I think they would have found it to be a very noisy system.
Noise, they argue, is one source of error in human judgment. When it comes to judgments being made by organizations or systems that involve multiple decision-makers, noise is “random scatter.” It is unpredictable variation in individual judgments. Bias is the other source of error in human judgment. Unlike noise, bias is systematic and predictable. It is recurring deviation.
Kahneman and his co-authors review many sources of noise, and if you read the book it’s easy to see that many of them are present in traditional forms of qualitative research.
Focus groups, it appears, are particularly noisy.
“Groups can go in all sorts of directions, depending in part on factors that should be irrelevant. Who speaks first, who speaks last, who speaks with confidence, who is wearing black, who is seated next to whom, who smiles or frowns or gestures at the right moment – all these factors, and many more, affect outcomes.”
– Daniel Kahneman, Olivier Sibony & Cass R. Sunstein
“Noise: A Flaw in Human Judgment”
According to Kahneman and his co-authors, groups are plagued by many sources of noise, including “social influence” (the tendency for one influential group member to sway others with an early comment), “informational cascades” (the tendency for people to learn from others and adjust their opinions as the group discussion progresses) and “excessive coherence” (the tendency for a pre-existing or emerging explanatory story to cause people to distort or ignore information that does not fit).
What’s important to understand is that these sources of noise in qualitative research occur not just among respondents but also among observers. There is noise in both the data collection and the in-the-moment analysis that happens in most qualitative research projects.
I had a recent example of this that illustrates the problem but also points to the solution.
Qualitative Noise in Action
I was involved in a study last year that combined asynchronous qualitative data collection with follow-up webcam interviews. After each webcam interview, we held a quick debrief with the brand team. In the debrief following the third or fourth interview with an eloquent respondent, an equally eloquent brand team member latched onto a theme that the respondent had brought up.
Another team member mentioned that he had “heard” this theme come up in earlier interviews. Others agreed. Everyone started to get excited about the emerging story of what we were learning. In subsequent interviews, the moderator made sure to ask about this emerging theme. And in subsequent debriefs, this theme was sure to be mentioned. Someone always “heard it” in the course of each interview.
The team emerged from fieldwork excited about the direction. There was only one problem. It was wrong.
The theme the team had latched onto was not nearly as prevalent as they had thought. Back-room cascade effects, confirmation bias and their drive for a coherent narrative early in the process took over and produced an error in judgment. It produced noise.
How do I know this?
Because we also looked at the data another way. We quantified the qualitative data.
We combined AI text analytics and human coding to take a comprehensive, objective look at ALL the data – asynchronous and live interviews alike. While the theme the team had latched onto was present, it was by no means as prominent as they had thought.
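If you’re wondering what “quantifying qualitative data” can look like in practice, here’s a minimal sketch in Python. It assumes transcripts exported as plain text files and a small, hypothetical codebook of theme keywords that analysts have agreed on; a real project would use far richer text analytics and refine the codebook through human review.

```python
from pathlib import Path
import re

# Hypothetical codebook: theme label -> keywords/phrases human analysts have agreed on.
CODEBOOK = {
    "price_sensitivity": ["too expensive", "price", "cost", "afford"],
    "convenience": ["easy", "convenient", "saves time", "quick"],
    "trust": ["trust", "reliable", "depend on"],
}

def code_transcript(text: str) -> set[str]:
    """Return the set of themes whose keywords appear in one respondent's transcript."""
    lowered = text.lower()
    return {
        theme
        for theme, keywords in CODEBOOK.items()
        if any(re.search(r"\b" + re.escape(kw) + r"\b", lowered) for kw in keywords)
    }

def theme_prevalence(transcript_dir: str) -> dict[str, float]:
    """Share of respondents whose transcript contains each theme."""
    files = sorted(Path(transcript_dir).glob("*.txt"))
    counts = {theme: 0 for theme in CODEBOOK}
    for path in files:
        for theme in code_transcript(path.read_text(encoding="utf-8")):
            counts[theme] += 1
    n = len(files) or 1
    return {theme: count / n for theme, count in counts.items()}

if __name__ == "__main__":
    # "transcripts" is a placeholder folder name for this sketch.
    for theme, share in sorted(theme_prevalence("transcripts").items(), key=lambda kv: -kv[1]):
        print(f"{theme:20s} {share:.0%} of respondents")
```

Even a crude tally like this makes it much harder for one vivid quote from the backroom to pass itself off as a majority view.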
This story illustrates both a source of qual’s credibility problem and suggests the solution. By quantifying qualitative data, we can make qual analysis more robust, accurate and compelling, and enhance qual’s credibility. And I believe we can do this without sacrificing the deep human insights for which qual is known.
Qualitative 2.0: Where do we go from here?
Applying quantitative analysis to unstructured data doesn’t mean we have to disregard the rich verbatims, individual human stories and textural videos qual is known for. And it doesn’t mean we have to ditch the immersive experience of observing qualitative research. By combining quantification with traditional qual analysis, we can have both depth and breadth. And in so doing, overcome qual’s credibility problem compared to quant.
But to make this happen, qualitative researchers need to make some changes to how they approach data collection, analysis and visualization.
Data collection
Kahneman and his colleagues make an interesting point that is supported by empirical research. They find that aggregated individual judgments tend to be less noisy and less error-prone than judgments produced by groups.
What are the implications for us? Simple. We need to stop doing focus groups and start doing more individual interviews and way more asynchronous qualitative.
Focus groups – live and streaming – are not a serious form of data collection. Period. End of story. They are archaic. The noise present among respondents and observers is simply too hard to control and untangle, even with the best of moderators. As an industry, we stick with focus groups more out of laziness and tradition than anything else.
I’ve always said that focus groups are as much corporate theater as research, and I’m convinced that they are more the former than the latter. If your goal is to get your internal team away from their day-to-day tasks so they can focus on listening to the consumer, then fine. Hold some focus groups. Hire an entertaining moderator. Eat your fill of backroom M&Ms and have a ball. But for serious market research, we need to ditch groups.
Decades ago, focus groups were the most efficient qualitative option out there. That’s why market researchers started doing them back in the Mad Men days. Through groups, they could hear from 30 or 40 consumers in a day, all without having to leave a research facility. It was much more efficient than one-on-ones and certainly more efficient than in-homes or ethnographic methods.
But these days, asynchronous qualitative platforms allow us to capture reams of unstructured data – video, imagery, narrative text – and employ the same projective techniques we’d use in groups much more efficiently than either live or online focus groups.
And asynchronous qual platforms allow us to capture unstructured data on samples large enough to actually begin to have statistical validity. We can run qualitative studies with hundreds of respondents who create digital collages, write projective stories and provide in-depth answers to open-ended prompts.
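And once theme prevalence comes from a few hundred respondents rather than a few groups, you can even attach rough uncertainty estimates to it. Here’s a quick sketch, assuming a simple random sample and a standard normal-approximation interval; real sample designs may call for something more careful.

```python
import math

def prevalence_interval(hits: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% normal-approximation confidence interval for the share of respondents mentioning a theme."""
    p = hits / n
    margin = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - margin), min(1.0, p + margin)

# Hypothetical example: 120 of 400 respondents mention a theme -> roughly 25.5% to 34.5%.
low, high = prevalence_interval(120, 400)
print(f"{low:.1%} to {high:.1%}")
```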
Data Analysis
We need to apply more rigor to our analysis. For too long, qualitative analysis has been interpretative and loose. We need to open ourselves up to a new age of qualitative analysis.
Rigorous analysis of qualitative data used to have to be done by hand coding. But now with AI text analytics and more advanced coding software, we can efficiently analyze large amounts of unstructured data.
It’s important to keep in mind that software can’t do it all. In my experience, the perfect balance blends research technology with human analysis – artificial intelligence with human intelligence. Software can help us wrangle, sort, correlate and tabulate. But it can’t derive meaning. It can’t develop grounded theory. Only skilled human analysts and strategists can do that.
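Here’s one way that division of labor can look, sketched with scikit-learn and a handful of hypothetical open-ended responses: the software groups similar responses into candidate clusters, and the human analyst reads them and decides what, if anything, each cluster actually means.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical open-ended responses; in practice these would come from your qual platform export.
responses = [
    "It costs too much for what you get.",
    "I love how quick and easy the app is.",
    "Honestly the price keeps me from buying more often.",
    "Setup was simple and it just works.",
    "I never know if I can trust the reviews.",
    "Feels overpriced compared to the store brand.",
]

# Software's job: wrangle, sort, correlate -- here, group similar responses together.
vectors = TfidfVectorizer(stop_words="english").fit_transform(responses)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

# The human's job: read each cluster and decide what it means (or whether it means anything).
for cluster in range(3):
    print(f"\nCluster {cluster}:")
    for text, label in zip(responses, labels):
        if label == cluster:
            print(" -", text)
```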
Data Visualization
Finally, we need to use quantitative data visualization to lend authority and credibility to our analyses. A well-placed data graphic in a qualitative report can work wonders to build credibility with an audience. I’ve seen it happen. Earlier, I mentioned our cultural bias toward numbers. You can fight this if you want. But I opt to play the game. And I play it by supporting my insights and assertions with both the deep human texture of rich verbatims and video, and data visualizations that make the quantification of qualitative data engaging, compelling and hard to argue with.
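To make that concrete, here is a small matplotlib sketch of the kind of graphic I mean, using hypothetical theme-prevalence figures from a coded qualitative study.

```python
import matplotlib.pyplot as plt

# Hypothetical output from a theme-coding step: share of respondents mentioning each theme.
themes = ["Convenience", "Price sensitivity", "Trust", "Packaging"]
prevalence = [0.62, 0.41, 0.28, 0.09]

fig, ax = plt.subplots(figsize=(6, 3))
ax.barh(themes[::-1], [p * 100 for p in prevalence[::-1]])
ax.set_xlabel("% of respondents mentioning theme (n = 200, hypothetical)")
ax.set_title("Theme prevalence across qualitative interviews")
plt.tight_layout()
plt.savefig("theme_prevalence.png", dpi=200)
```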
We’ve entered a new, exciting time in market research. Today, technology allows us to capture qualitative data and analyze it more efficiently than ever before. In many ways, I believe the old distinction between qualitative and quantitative research is becoming irrelevant.
We can now capture and analyze unstructured data at scale. We can leverage technology to find signals in the data that couldn’t be detected with the human eye alone. And we can lift the credibility of qualitative and the deep human insights it reveals.
As always, let me know what you think.