All Research is Problem Solving

Sep 06 2010

“In our reasonings concerning matter of fact, there are all imaginable degrees of assurance, from the highest certainty to the lowest species of moral evidence.  A wise man, therefore, proportions his belief to the evidence.”  - David Hume

“Whenever a theory appears to you as the only possible one, take this as a sign that you have neither understood the theory nor the problem which it was intended to solve.”  - Karl Popper

Can market researchers be consultants?

It is a well-known truth that management consultants get paid considerably more than market researchers.  A friend and I discussed this at the weekend. We are both in research, have both worked on projects with consultancies like McKinsey and Bain, and agreed that this difference in pay is often deserved!  While market researchers are as technically capable as consultants, and often more so, most consultants provide a level of critical thinking which is missing in many (not all) market research projects.

Consultants I have worked with are not afraid to take a point of view (something many market researchers are shy to do).  They also use that point of view to create clear, testable hypotheses throughout the research process.  By this, I don’t mean that they run lots of statistical tests, something quantitative researchers do far too much of.  Rather, consultants will consider which research outcomes would support their point of view, and more importantly which outcomes would not.  And they do this repeatedly, with multiple hypotheses created by breaking down and understanding the client’s business problem and the different factors which play a role in it.

Solving problems

Researchers would do well to learn some basic frameworks for problem solving, and I can recommend Problem Solving 101 as a good place to start.  It offers clear explanations and examples of logic trees, yes/no trees, hypothesis pyramids and other tools used by the best consulting firms (the author is an ex-McKinsey consultant).
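
To make this concrete, here is a minimal sketch in Python of the kind of yes/no logic tree the book describes.  The business problem, the branch questions and the code structure are my own hypothetical illustration, not taken from the book.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    question: str                 # a yes/no question about the problem
    yes: Optional["Node"] = None  # branch to explore if the answer is yes
    no: Optional["Node"] = None   # branch to explore if the answer is no

# Breaking "Why are sales falling?" down into testable yes/no questions.
tree = Node(
    "Has the number of customers fallen?",
    yes=Node("Is the decline concentrated among new customers?",
             yes=Node("Is awareness of the brand falling?"),
             no=Node("Has a competitor taken our repeat customers?")),
    no=Node("Has spend per customer fallen?",
            yes=Node("Have customers switched to cheaper lines?")),
)

def next_question(node: Optional[Node], answers: dict) -> str:
    """Follow recorded yes/no answers down the tree to the live hypothesis."""
    while node is not None:
        if node.question not in answers:
            return "Next question to research: " + node.question
        node = node.yes if answers[node.question] else node.no
    return "Branch exhausted: break the problem down further."

print(next_question(tree, {"Has the number of customers fallen?": True}))
# -> Next question to research: Is the decline concentrated among new customers?
```

Each branch point is itself a hypothesis the research can support or knock down, which is exactly how the consultants described above work through a client’s business problem.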

The problem of how we learn from experience has troubled philosophers for thousands of years, and was most eloquently expressed by David Hume, who sought to disentangle the logic of causality from the psychology of how humans experience learning.  He pointed out the logical asymmetry of confirmation and falsification: a million cases cannot ‘prove’ something is certain, whereas a single contradictory case can ‘disprove’ it.  He also noted the human tendency to use confirmatory cases to ‘prove’ a theory, which then becomes an expectation (this is commonly known as induction).  Although induction has no logical validity (no number of cases can prove something with certainty), Hume argued that human psychology is designed to follow the inductive method, because in practice it works most of the time.

Recent findings in brain science support the view that our brains are in many ways fantastically designed to create and test hypotheses about the outside world (for example, read On Intelligence by Jeff Hawkins), and George Kelly developed a whole theory of personality around this idea (A Theory of Personality by George Kelly).  One of the things I find most comforting in Kelly’s theory, dated though it now is, is the importance he placed on reflexivity.  He observed (I paraphrase) that the trouble with psychologists is that they would not want to describe themselves in the same way that they describe other humans.  He saw humans as having the same thirst for knowledge, and many of the same mental processes, as scientists.

The trouble with David Hume

However, ‘Hume’s problem’ continued to trouble philosophers for the next two centuries.  In the last century, despite disagreements over the nature of the scientific process, there was increasing agreement that induction does not constitute a scientific or logical process by itself.  Instead, Karl Popper introduced the concept of falsifiability as the way to demarcate science (or logic) from other forms of knowledge.  Simply put, he believed that for us to have confidence in a particular piece of knowledge, it must be falsifiable: there must be a clear way to test the knowledge, by setting up a test in which both outcomes are definable and possible.

This is the way that most sciences now proceed, and whatever your opinion of the principle, I believe that the process of setting up hypotheses which can be proved or disproved is the most rigorous way to approach market research problems (across all methodologies).  That is, be clear on your assumptions and point of view, use them to create research questions (hypotheses) which have alternative outcomes, and test them to see if they give the answer ‘yes’ or ‘no’ to your questions.  If it’s only possible to get one answer, then you need to reformulate the question.
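
As a minimal illustration of this (the hypothesis, the 40% threshold and the survey figures below are all hypothetical), a well-framed research question is one the data can answer either way:

```python
def answers_hypothesis(successes: int, n: int, threshold: float) -> bool:
    """Return True ('yes') or False ('no'); both outcomes must be reachable."""
    return successes / n >= threshold

# Falsifiable: "At least 40% of triallists prefer the new formulation."
# The observed share can fall on either side of the threshold.
print(answers_hypothesis(successes=110, n=300, threshold=0.40))  # False: 'no'
print(answers_hypothesis(successes=150, n=300, threshold=0.40))  # True: 'yes'

# By contrast, "some customers somewhere like the product" can only ever
# come back 'yes', so as framed it needs to be reformulated before fieldwork.
```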

How does this work in research?

Prior knowledge is important in research (and I would include intuition as a form of prior knowledge), and should be used to help formulate hypotheses (that is, formal questions).  However, prior knowledge should never blind us to other potential explanations and solutions, and analysis must always be data driven.  Most market research problems follow the principles of Bayesian statistics.  Prior knowledge (and there is almost always prior knowledge in research) should inform analysis, but it is one piece of evidence (providing one estimate of probability) which can be combined with research evidence to produce a revised estimate, taking into account the additional knowledge we have gained from the research.
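
A minimal sketch of that Bayesian updating, with hypothetical figures for the prior and the likelihoods, might look like this:

```python
def posterior(prior: float, p_evidence_if_true: float,
              p_evidence_if_false: float) -> float:
    """Bayes' rule: revise P(hypothesis) after seeing a piece of evidence."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# Prior knowledge (past studies, intuition): a 30% chance the launch succeeds.
prior = 0.30
# The new survey came back positive.  Suppose positive results turn up in
# 80% of eventual successes but also in 30% of eventual failures.
revised = posterior(prior, p_evidence_if_true=0.80, p_evidence_if_false=0.30)
print(f"Revised probability of success: {revised:.2f}")  # -> 0.53
```

The research does not replace what we knew before; it moves the estimate from 30% to roughly 53%, in proportion to how strongly the evidence discriminates between the two outcomes.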

The best analysis is ‘data rich’ (as described in The Art and Science of Interpreting Market Research Evidence), which means that a good researcher will use every available piece of evidence in order to come to the best estimate of the truth.  This means looking beyond prior knowledge and the immediate study when possible, to any knowledge which directly or indirectly has a bearing on the hypotheses.  Many market research problems are not simple cause and effect, as they involve complex social systems.  Combining multiple sources of evidence can help here, but each piece of evidence should be evaluated for its relevance, usefulness, direction and importance.  Consistency of evidence provides great support for intuition.  Statistics are important, but they are overused in research, where patterns and data relationships can often provide clearer direction than statistical testing on its own.
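
Extending the sketch above, combining several pieces of evidence can be pictured as updating the estimate one piece at a time; again, all of the figures below are hypothetical illustrations:

```python
def update(belief: float, p_if_true: float, p_if_false: float) -> float:
    """One Bayesian update for a single observed piece of evidence."""
    numerator = belief * p_if_true
    return numerator / (numerator + (1 - belief) * p_if_false)

# Each pair: (chance of seeing this evidence if the hypothesis is true,
# chance if it is false).  Ambiguous evidence has the two chances close
# together, so it moves the estimate only a little.
evidence = [
    (0.80, 0.30),  # direct: a supportive survey result
    (0.60, 0.50),  # indirect: category trends pointing the same way
    (0.70, 0.40),  # qualitative: consistent themes from focus groups
]

belief = 0.30  # prior knowledge before the study
for p_true, p_false in evidence:
    belief = update(belief, p_true, p_false)
    print(f"Belief after this piece of evidence: {belief:.2f}")
```

No single source settles the question, but consistent evidence pointing in the same direction steadily strengthens the case, which is the sense in which consistency supports intuition.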

Research critical

Critical thinking helps all researchers, using any methodology, to reach robust evidence-based recommendations which give clear guidance to clients.  If market researchers aim to be valued in the same way as management consultants, we must have strong convictions, clearly set out as hypotheses which can be proved or disproved with well-designed research.

REFERENCES

The Art and Science of Interpreting Market Research Evidence by DVL Smith and JH Fletcher (2004)

All Life is Problem Solving by Karl Popper (1999)

Problem Solving 101: A Simple Guide for Smart People by Ken Watanabe (2007)

One response so far

  1. Great stuff Inspector!

    This post highlights an important problem in both research and the process of making marketing decisions. That problem is one of too little time being spent on understanding and structuring a clear decision process prior to executing a research project. This leads, in far too many cases, to research that simply continues to build confirmatory evidence for a pre-existing favoured alternative. Too little time and effort is given by either clients or research consultants to seeking non-confirmatory evidence. In other words, too little time trying to falsify the favoured alternative or dominant hypothesis.

    I believe clients need to be open to research consultants seeking such evidence, and researchers need to be more assertive in designing studies and seeking information that may disrupt any pre-conceived notions of the outcome.
