Consensus is overrated. Objectivity isn’t.

I want to start by summarizing a few recent discussions I’ve had on LinkedIn. Then I want to talk about what I think is the misguided goal of trying to reach consensus or agreement regarding design, methods, interpretation, or anything else related to research. Then I want to talk about the notion that objectivity isn’t possible, which is a notion I consider both true and kind of beside the point.

Some Stuff People Have Said Recently

Conversation 1. On the Network of Social Science Researchers group, in response to my post on the ways scale choice distorts survey results, I got into a discussion with a researcher from Italy about issues of survey validation and interpretation. We both had strong opinions and we voiced those opinions freely. After several rounds of obviously not making a dent in each other’s convictions, another person chimed in with:

Thanks you two for this wonderful conversation – I’ve learned a ton from both of you! If you’re interested in my two cents, it seems that you’ve bumped up against the limitations of this type of communication. My sense is, if you spent an hour in a coffee shop (face-to-face), you’d probably be able to bring closure to this discussion. Again, my thanks to the both of you!

I responded that I agreed that face-to-face communication was easier than virtual communication. Then another group member wrote that she agreed that face-to-face communication was good too, and added some comments about mixed methods. Then my original conversation partner joined in and said she agreed that face-to-face discussion was better, and agreed with the point about mixed methods. Then the conversation died.

Conversation 2. On the DIME/PMESII, HSCB, and IW group (people in government love acronyms), and in response to Paul’s recent post about the project we’re conducting to look at the relationship between suicide attacks and military suicides in Afghanistan and Iraq, Paul got into a discussion of whether doctrinal or ideological justifications for suicide attacks were really an informative way of trying to explain such attacks. The response to Paul’s comments included this:

…from a “big picture” perspective, reliance on behavioral indicators will be blindsided by invisible motivations as surely as reconstructing motivations from after-interviews can be skewed by any number of factors…What’s needed *overall* is a view that takes in behavioral and motivational, political and religious, contemporary and historical aspects …. At some point we will need a conceptual architecture that shows where and how these various facets belong in relation to one another. Until then, let’s keep talking.

And then everyone stopped talking.

Conversation 3. In the Social/Behavioral Science and Security group, in response to my post on my problematic relationship with theory, one commenter said (among many other things):

As if we could observe and describe objectively…the one who is describing is the one who is observing and studying. Objectivity is the observer’s illusion that observing can be done without him. (Who said that? I can’t remember). We need theories of observation, we need theories of classification etc. Not because it’s such fun to theorise, but out of an attitude of intellectual hygiene.

And a little later:

As a side step: Are you familiar with the work of Ernst von Glasersfeld and Paul Watzlawick? You may find it interesting.

In case you don’t want to look up those names, they are popular writers within the constructivist school of social theory. I replied:

Yes, I’m as familiar with both of the authors you mention as I am with any of the constructivist school of thought. I don’t find constructivism particularly useful – the idea that individual objectiv[ity] isn’t possible is a valid point, but it’s so basic, and now so widely recognized, that I don’t really see what continued focus on that criticism really offers in terms of practical research benefit.

To which the commenter replied:

…the closing paragraph of your last posting renders any further discussion pointless. Thank you, it’s been interesting.

Agreement Isn’t Always a Good Thing

I think these three experiences actually have quite a bit to do with one another.

The participants in the first conversation were right that some face-to-face interaction would have smoothed the dialogue: both sides did appear a bit agitated. It seemed to me that my conversation partner was aggravated by my assertions about scale choice that ran counter to her experience. I was aggravated that she provided no evidence to show that her account of her experiences was both accurate and generalizable. By saying that we would almost certainly agree with each other if we were to talk face to face, the commenter declared our disagreement to be primarily interpersonal, not analytical, in nature. But the disagreement was both interpersonal and analytical. Yes, the limitations of online communication made it difficult to find agreement, but aside from that we really did seem to have different assumptions about a method’s validity. Saying that our interpersonal differences were coffee-shop-able hid the fact that our analytic differences probably weren’t, and it created a situation where debating the analytic differences again would have seemed a bit rude.

I think the second conversation had a coffee-shop effect on the debate about causes of military suicides. The comment that “what’s needed overall is a view that takes in behavioral and motivational, political and religious, contemporary and historical aspects” boils down to “we need to look at everything.” That advice isn’t just impractical (it’s impossible to look at everything); it may not even be true. Why should we believe that, say, religious aspects of suicide terrorism are necessary parts of an explanation of that behavior? That statement implies either that we have a really solid logical argument for that aspect’s necessity, or that we have replicated findings from well-designed studies that demonstrate that necessity. Unless I’ve missed a hugely impressive portion of the literature on terrorism, neither the logical argument nor the evidence exists to support the assertion that we need to look at religion to understand suicide terrorism. The same goes for the other aspects of human experience that were promoted as essential to our understanding of the issue. Saying that we needed to look at a lot of different things obscured the discussion of the one particular thing that had prompted the conversation in the first place.

In both the first and the second conversation, commenters took some principles that are generally sound – a mode of communication can impede understanding; there can be many different approaches to doing the same thing – and applied them to debates over specific analytic issues. I think that might have been a mistake.

I honestly don’t think that first conversation would have turned out any differently if we had both been located somewhere other than a LinkedIn forum. For me, no amount of her personal experience could outweigh even the modest and initial findings from my simulation study, because my simulation was presented in a replicable format (to make it explicitly critique-able), and her experience wasn’t. And it may be that no number of computer simulations could outweigh what she had learned from her personal experience. I think we both would have run out of coffee before we ever gave ground on those positions. The interaction would probably have been more pleasant, but it wouldn’t have brought us any closer to figuring out how the use of different scales can bias survey results.
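To make the replicability point concrete, here is a minimal, hypothetical sketch of the kind of simulation I mean – it is not the actual study from the post, and the latent-attitude distribution and scale choices are my own assumptions for illustration. It asks whether the same underlying attitudes, forced onto 5-point versus 7-point response scales, yield the same aggregate estimate:

```python
# Hypothetical sketch (not the original study): discretizing the same
# latent attitudes onto different scale formats, then comparing means.
import random
import statistics

def discretize(value, points):
    """Map a latent score in [0, 1] to the nearest of `points` scale
    categories, then rescale back to [0, 1] so formats are comparable."""
    return round(value * (points - 1)) / (points - 1)

def simulate(n=10_000, seed=42):
    rng = random.Random(seed)
    # Assumed skewed attitude distribution (Beta(2, 5)); mean = 2/7.
    latent = [rng.betavariate(2, 5) for _ in range(n)]
    results = {"latent": statistics.mean(latent)}
    for points in (5, 7):
        results[f"{points}-point"] = statistics.mean(
            discretize(v, points) for v in latent
        )
    return results

if __name__ == "__main__":
    for label, mean in simulate().items():
        print(f"{label}: {mean:.4f}")
```

The point of writing it this way is the one made above: every choice (distribution, sample size, rounding rule) is explicit, so a critic can rerun it, swap in her own assumptions, and attack the result directly rather than trading anecdotes.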

In fact, the face-to-face interaction might have made it more difficult to make any progress at all, because the pressure to find points of superficial agreement can mask underlying differences of opinion. That’s usually a good thing when we’re talking about interpersonal relationships. It’s often not a good thing when we’re talking about research. Similarly, emphasizing that we ought to look at a problem from a lot of different angles can make it difficult to assess the appropriateness of the particular angle we happen to be discussing at any given point in time. At the very least, returning to the original point of contention after such a let-many-flowers-bloom admonition can make a person fear he’ll come across as dogmatic, or at least just stubborn. The search for agreement can actually inhibit good analysis.

Objectivity is an Emergent Property

My main reason for this belief is illustrated by the third conversation. I think it would be difficult for anyone who studied social sciences in any university during the last two decades to have not come away with a load of critiques of the idea of objectivity. The critiques come in all shapes and sizes and levels of sophistication, but a common component of such critiques was summed up by the commenter: “Objectivity is the observer’s illusion that observing can be done without him.” We researchers look at stuff, and what we decide to look at and how we decide to look at it is influenced by a whole load of things other than the physical reality that is the target of our observation. It really doesn’t matter how many precautions we take, and how self-critical we try to be. In the end, the entirety of each individual researcher’s research is going to be a biased mishmash of what that researcher saw and what he or she expected to see.

I accept that critique. I think it’s true, as long as I can include the caveat that some mishmashes are more fully made up of what the researchers actually saw than others are. In the end, individual objectivity isn’t a realistic expectation, because it requires a level of individual intellectual honesty that is impossible for nearly any human to attain.

Hugo Mercier and Dan Sperber published an illuminating piece in Behavioral and Brain Sciences this last April. I think their abstract deserves quoting:

Reasoning is generally seen as a means to improve knowledge and make better decisions. However, much evidence shows that reasoning often leads to epistemic distortions and poor decisions. This suggests that the function of reasoning should be rethought. Our hypothesis is that the function of reasoning is… to devise and evaluate arguments intended to persuade…. A wide range of evidence in the psychology of reasoning and decision making can be reinterpreted and better explained in the light of this hypothesis. Poor performance in standard reasoning tasks is explained by the lack of argumentative context. When the same problems are placed in a proper argumentative setting, people turn out to be skilled arguers. Skilled arguers, however, are not after the truth but after arguments supporting their views. This explains the notorious confirmation bias. This bias is apparent not only when people are actually arguing but also when they are reasoning proactively from the perspective of having to defend their opinions. Reasoning so motivated can distort evaluations and attitudes and allow erroneous beliefs to persist. Proactively used reasoning also favors decisions that are easy to justify but not necessarily better. In all these instances traditionally described as failures or flaws, reasoning does exactly what can be expected of an argumentative device: Look for arguments that support a given conclusion, and, ceteris paribus, favor conclusions for which arguments can be found.

As the authors state, this way of thinking about reasoning makes sense of a lot of findings from across the cognitive sciences. The pursuit of truth or fact seems to play a very small role in the way we make arguments about truth and fact. There’s no reason to believe that scientists are immune to these limitations.

And I think all of those limitations are almost entirely irrelevant to the issue of doing good research.

The winning-is-better-than-truth type of reasoning, despite its flaws, has managed to produce a wide array of practical successes in our ability to build and maintain more realistic expectations about, say, the way the physical world works. (I really like Brian Silver’s account of this progress-in-spite-of-ourselves). Despite what seems to be an almost complete lack of individual objectivity, we humans have more realistic expectations about our physical world than we used to. Pointing out repeatedly that we don’t view the world dispassionately doesn’t really do anything to explain that.

The explanation I prefer is that individual objectivity is not a necessary condition for the production of objective knowledge, so long as claims to such knowledge are continuously subjected to scrutiny, criticism, and outright attacks. Consistent susceptibility to attack makes more biased claims less able to endure for long periods of time. We don’t even need to assume that the attacks are based on anything other than the desire to prove the other guy wrong. Depictions of atoms or chemicals or people or societies that more accurately approximate the way those things actually operate should be able to withstand criticism longer because those depictions’ defenders will always be able to answer any criticism with “but the model works.”

Less accurate depictions can withstand criticism for a long time when their proponents remove themselves from their critics. Anyone who has tried to publish anything in the social sciences has probably had manuscripts rejected not on charges of poor reasoning or insufficient evidence, but because the manuscript didn’t sufficiently show that the author agreed with the same people the reviewer agreed with. (In graduate school, I nearly lost a National Science Foundation grant for this very reason. Luckily, I was given the chance to revise my proposal. I dropped in the names of a couple authors I had previously failed to mention – and changed nothing else – and suddenly the reviewer decided I knew what I was talking about.) The lack of individual objectivity is a damning indictment of scientific claims to objective knowledge only when those claims have been conceived, examined, and promoted within a small, insular community where conflict is seen as trouble to be avoided. Objective knowledge emerges from friction.

Emergent objectivity does not require each individual to be individually objective, any more than flocks of birds or schools of fish require their individual members to know or care about how the whole collective moves. Objectivity comes from an environment of debate, not from the debaters themselves. Because of that, we need less consensus, not more. Our goal as researchers should not be to build a core body of knowledge about a topic – none of us has the ability to do something like that. Our goal should be to promote our own ideas in forums and formats that open those ideas to as much criticism as possible. If we do that, and if we do not settle for superficial agreement so long as substantive issues remain to be addressed, that core body of knowledge can be built as a side effect of our individual debates.


14 thoughts on “Consensus is overrated. Objectivity isn’t.”

  1. Thanks Schaun – this made a lot of sense to me – surely all research should include a section describing the author’s relevant background/perspective/prejudice – the best one can do is to expose it for others to judge the effects of lack of objectivity

  2. Heather,

    Thanks for your comment. I’m afraid I’m really skeptical about the idea that a here-are-my-biases section in a publication would do much good. For one thing, I think individual researchers are the last people we should trust to openly and honestly assess their own biases. I think there are a lot of researchers who would openly and honestly try to make such an assessment, but I don’t think there are many human beings who are capable of succeeding at such an endeavor.

    Because we’re not good at really understanding and laying out our own perspectives, I suspect most people would do what is already common: they would identify their work with various philosophical, theoretical, and methodological labels as a shorthand way of packaging their perspectives. (To some extent, there is no other way to do that – research journals only have so many pages). That would then result in readers either recognizing familiar labels being used in familiar ways, and therefore agreeing with the research, or in readers seeing either unfamiliar labels or familiar labels used in unfamiliar ways, and therefore disagreeing with the research. Which would then bring us right back to the consensus-as-a-measure-of-quality yardstick that is already so problematic.

    I personally would rather see a whole lot less discussion of perspectives and a whole lot more discussion of research design and methods. I don’t really care if a researcher incorporated so-and-so’s theoretical framework into his or her study. I care that he or she laid out the steps of the study explicitly enough that I could conduct the exact same study without needing that researcher to hold my hand every step of the way. The requirement to really make a study replicable seems so basic, and yet I see it met so infrequently in the social sciences.

  3. Chinese strategy classics indirectly talk about the importance of understanding the objective and the tactic. … Think about it. … Read the classics- focus on the big picture not on the action. … The answer is there.

  4. Pingback: The qualitative/quantitative divide is sort of useless. Focus on replicability instead. « House of Stones

  5. Really good read, well done. Is it that you don’t believe that sharing bias would contribute, or do you believe that it’s important but it can’t be done? If it’s the latter, and while you may think there are very few who could honestly do it (Goleman’s research in emotional intelligence is a place to look), it’s been my experience that the few times researchers are willing to honestly engage in this discussion, it leads to needed conversations. From an interpersonal point of view, openness and transparency are extremely important goals to aspire to in how and why we conduct our research the way we do.

    Thanks for posting.

  6. Murray,

    Thanks for posting. I think the practice of sharing biases has too varied an effect upon a debate to really contribute to it consistently. It’s not just that we’re not very good at recognizing our own biases. We’re not very good at articulating them either – I think almost everyone has had the experience of witnessing a conversation where the participants thought they saw eye-to-eye even though an outside observer could easily see that they were talking about entirely different things. Maybe that’s just what happens when we try to translate our thoughts into words. But even if we all got really good at articulating our biases, articulating our bias might just signal to like-minded people that we are willing to confirm their biases, and to different-minded people that they should avoid us in order to avoid unpleasant debates.

    In other words, even if articulating bias can help, I don’t see any reason to believe that helps any more consistently than it hurts, and I certainly don’t see any reason to believe, even in the cases where it clearly helps, that it helps enough to really facilitate the creation of an objective knowledge base.

    I agree that articulating one’s bias can be very important from the standpoint of facilitating good interpersonal relations, and that benefit shouldn’t be minimized. I just don’t see much benefit in terms of separating findings that deserve our belief from findings that don’t.

  7. Interesting perspective Schaun. I’m not sure you can subtract out of the equation the impact human interaction (in this case, researchers sharing their biases) has on the way research findings are deciphered. To your point about people’s inability to recognize their own biases, that’s difficult to generalize. I believe we are all over the board when it comes to individual levels of self- and social awareness. It would be an interesting study to examine levels of emotional intelligence among the different research communities.

  8. Oh, I don’t think it’s possible to subtract out those interpersonal factors, either. I just don’t think those factors, and the ways they affect the interpretation of research, are consistent enough for us to be able to rely on them to build a core understanding of a particular subject matter.

  9. If I’m understanding your position correctly, to obtain an authentic/reliable understanding of a subject matter through research, we need to have interpersonal factors as our control item to modify its influence. But what more do we have at our disposal other than human interaction when collecting, interpreting and reporting our research to build an understanding?

  10. I’m not sure I understand what you mean when you say “we need to have interpersonal factors as our control item to modify its influence.” My position is that we ought to make all aspects of our research so explicit that a person could do our exact same research project without ever needing to talk to us personally.

  11. Pingback: We don’t need better research. We need more research (with search options). « House of Stones

  12. How about: reality is socially constructed, and thus science is a social construction – like the construction of a building, it consists of whatever materials we choose according to our taste. Then science is very close to our taste.

  13. Bayu,

    I don’t put much stock in the whole reality-is-socially-constructed thing. It can’t be strictly true – that would mean that rocks aren’t hard unless there is social consensus regarding their hardness, which seems kind of absurd. So the statement really raises the question of what it means for something to be socially constructed, and what things are so constructed. And I haven’t seen a satisfactory answer to those questions.

    Actually, your analogy illustrates the problem that I have with the argument. We can’t construct a building with any material we choose according to our taste. You can’t build a stable building out of gelatin, for example, no matter how well suited it is to your taste. Successful buildings require certain materials. So if we use your analogy we must conclude that successful science requires certain components or characteristics, and the appropriateness of those components or characteristics to the scientific endeavor exists to some degree independently of the preferences of individual researchers.

    To look at it from another perspective, take the quote from Henri Poincaré from which we took the name of this blog: “Science is facts; just as houses are made of stones, so is science made of facts; but a pile of stones is not a house and a collection of facts is not necessarily science.” So maybe science is “socially constructed,” whatever that means. The manner of that construction is much more important than the fact of the construction itself. Saying it’s socially constructed just says that people made it happen. Everyone already knows that. But how they make it happen – especially how particular people have made particularly successful attempts to build better expectations about some aspect of our world – is the really important and more interesting question.
