I want to start by summarizing a few recent discussions I’ve had on LinkedIn. Then I want to talk about what I think is the misguided goal of trying to reach consensus or agreement regarding design, methods, interpretation, or anything else related to research. Then I want to talk about the notion that objectivity isn’t possible, which is a notion I consider both true and kind of beside the point.
Some Stuff People Have Said Recently
Conversation 1. On the Network of Social Science Researchers group, in response to my post on the ways scale choice distorts survey results, I got into a discussion with a researcher from Italy about issues of survey validation and interpretation. We both had strong opinions and we voiced those opinions freely. After several rounds of obviously not making a dent in each other’s convictions, another person chimed in with:
Thanks you two for this wonderful conversation – I’ve learned a ton from both of you! If you’re interested in my two cents, it seems that you’ve bumped up against the limitations of this type of communication. My sense is, if you spent an hour in a coffee shop (face-to-face), you’d probably be able to bring closure to this discussion. Again, my thanks to the both of you!
I responded that I agreed that face-to-face communication was easier than virtual communication. Then another group member wrote that she agreed that face-to-face communication was good too, and added some comments about mixed methods. Then my original conversation partner joined in and said she agreed that face-to-face discussion was better, and agreed with the point about mixed methods. Then the conversation died.
Conversation 2. On the DIME/PMESII, HSCB, and IW group (people in government love acronyms), and in response to Paul’s recent post about the project we’re conducting to look at the relationship between suicide attacks and military suicides in Afghanistan and Iraq, Paul got into a discussion of whether doctrinal or ideological justifications for suicide attacks were really an informative way of trying to explain such attacks. The response to Paul’s comments included this:
…from a “big picture” perspective, reliance on behavioral indicators will be blindsided by invisible motivations as surely as reconstructing motivations from after-interviews can be skewed by any number of factors…What’s needed *overall* is a view that takes in behavioral and motivational, political and religious, contemporary and historical aspects …. At some point we will need a conceptual architecture that shows where and how these various facets belong in relation to one another. Until then, let’s keep talking.
And then everyone stopped talking.
Conversation 3. In the Social/Behavioral Science and Security group, in response to my post on my problematic relationship with theory, one commenter said (among many other things):
As if we could observe and describe objectively…the one who is describing is the one who is observing and studying. Objectivity is the observer’s illusion that observing can be done without him. (Who said that? I can’t remember). We need theories of observation, we need theories of classification etc. Not because it’s such fun to theorise, but out of an attitude of intellectual hygiene.
And a little later:
As a side step: Are you familiar with the work of Ernst von Glasersfeld and Paul Watzlawick? You may find it interesting.
In case you don’t want to look up those names, they are popular writers within the constructivist school of social theory. I replied:
Yes, I’m as familiar with both of the authors you mention as I am with any of the constructivist school of thought. I don’t find constructivism particularly useful – the idea that individual objectiv[ity] isn’t possible is a valid point, but it’s so basic, and now so widely recognized, that I don’t really see what continued focus on that criticism really offers in terms of practical research benefit.
To which the commenter replied:
…the closing paragraph of your last posting renders any further discussion pointless. Thank you, it’s been interesting.
Agreement Isn’t Always a Good Thing
I think these three experiences actually have quite a bit to do with one another.
The participants in the first conversation were right that some face-to-face interaction would have smoothed the dialogue: both sides did appear a bit agitated. It seemed to me that my conversation partner was aggravated by my assertions about scale choice that ran counter to her experience. I was aggravated that she provided no evidence to show that her account of her experiences was both accurate and generalizable. By saying that we would almost certainly agree with each other if we were to talk face to face, the commenter declared our disagreement to be primarily interpersonal, not analytical, in nature. But the disagreement was both interpersonal and analytical. Yes, the limitations of online communication made it difficult to find agreement, but beyond that we really did have different assumptions about the method’s validity. Saying that our interpersonal differences were coffee-shop-able hid the fact that our analytic differences probably weren’t, and it created a situation where debating those analytic differences again would have seemed a bit rude.
I think the second conversation had a coffee-shop effect on the debate about causes of military suicides. The comment that “what’s needed overall is a view that takes in behavioral and motivational, political and religious, contemporary and historical aspects” boils down to “we need to look at everything.” That advice isn’t very practical (it’s impossible to look at everything), and it may not even be true. Why should we believe that, say, religious aspects of suicide terrorism are necessary parts of an explanation of that behavior? That statement implies either that we have a really solid logical argument for that aspect’s necessity, or that we have replicated findings from well-designed studies that demonstrate that necessity. Unless I’ve missed a hugely impressive portion of the literature on terrorism, neither the logical argument nor the evidence exists to support the assertion that we need to look at religion to understand suicide terrorism. The same goes for the other aspects of human experience that were promoted as essential to our understanding of the issue. Saying that we needed to look at a lot of different things obscured the discussion of the one particular thing that had prompted the conversation in the first place.
In both the first and the second conversation, commenters took some principles that are generally sound – a mode of communication can impede understanding; there can be many different approaches to doing the same thing – and applied them to the debate of specific analytic issues. I think that might have been a mistake.
I honestly don’t think that first conversation would have turned out any differently if we had both been located somewhere other than a LinkedIn forum. For me, no amount of her personal experience could outweigh even the modest, initial findings from my simulation study, because my simulation was presented in a replicable format (to make it explicitly critique-able), and her experience wasn’t. And it may be that no number of computer simulations could outweigh what she had learned from her personal experience. I think we both would have run out of coffee before we ever gave ground on those positions. The interaction would probably have been more pleasant, but it wouldn’t have brought us any closer to figuring out how the use of different scales can bias survey results.
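My simulation isn’t reproduced in this post, but the general mechanism is easy to sketch. The toy model below is entirely my own illustration – the skewed latent-attitude distribution and the nearest-point response rule are assumptions for the sake of the example, not the study’s actual design – and it shows how forcing attitudes onto a coarse scale can shift the measured mean:

```python
import random

def respond(latent, n_points):
    # Map a latent attitude in [0, 1] to the nearest point on an
    # n-point scale, then rescale the answer back to [0, 1].
    idx = round(latent * (n_points - 1))
    return idx / (n_points - 1)

def mean_response(latents, n_points):
    return sum(respond(x, n_points) for x in latents) / len(latents)

random.seed(0)
# A skewed latent attitude distribution (purely hypothetical).
latents = [random.betavariate(5, 2) for _ in range(10_000)]
true_mean = sum(latents) / len(latents)

for pts in (2, 5, 7, 101):
    bias = mean_response(latents, pts) - true_mean
    print(f"{pts}-point scale: bias = {bias:+.3f}")
```

Under these assumptions, the coarser the scale, the further the measured mean drifts from the latent mean. The substantive point is only that scale choice is a design decision whose effects can be checked in a replicable, critique-able way.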
In fact, the face-to-face interaction might have made it more difficult to make any progress at all, because the pressure to find points of superficial agreement can mask underlying differences of opinion. That’s usually a good thing when we’re talking about interpersonal relationships. It’s often not a good thing when we’re talking about research. Similarly, emphasizing that we ought to look at a problem from a lot of different angles can make it difficult to assess the appropriateness of the particular angle we happen to be discussing at any given point in time. At the very least, returning to the original point of contention after such a let-many-flowers-bloom admonition can make a person fear he’ll come across as dogmatic, or at least just stubborn. The search for agreement can actually inhibit good analysis.
Objectivity is an Emergent Property
My main reason for this belief is illustrated by the third conversation. I think it would be difficult for anyone who studied social sciences in any university during the last two decades not to have come away with a load of critiques of the idea of objectivity. The critiques come in all shapes and sizes and levels of sophistication, but a common component of such critiques was summed up by the commenter: “Objectivity is the observer’s illusion that observing can be done without him.” We researchers look at stuff, and what we decide to look at and how we decide to look at it are influenced by a whole load of things other than the physical reality that is the target of our observation. It really doesn’t matter how many precautions we take, and how self-critical we try to be. In the end, the entirety of each individual researcher’s research is going to be a biased mishmash of what that researcher saw and what he or she expected to see.
I accept that critique. I think it’s true, as long as I can include the caveat that some mishmashes are more fully made up of what the researchers actually saw than others are. In the end, individual objectivity isn’t a realistic expectation, because it requires a level of individual intellectual honesty that is impossible for nearly any human to attain.
Hugo Mercier and Dan Sperber published an illuminating piece in Behavioral and Brain Sciences this last April. I think their abstract deserves quoting:
Reasoning is generally seen as a means to improve knowledge and make better decisions. However, much evidence shows that reasoning often leads to epistemic distortions and poor decisions. This suggests that the function of reasoning should be rethought. Our hypothesis is that the function of reasoning is… to devise and evaluate arguments intended to persuade…. A wide range of evidence in the psychology of reasoning and decision making can be reinterpreted and better explained in the light of this hypothesis. Poor performance in standard reasoning tasks is explained by the lack of argumentative context. When the same problems are placed in a proper argumentative setting, people turn out to be skilled arguers. Skilled arguers, however, are not after the truth but after arguments supporting their views. This explains the notorious confirmation bias. This bias is apparent not only when people are actually arguing but also when they are reasoning proactively from the perspective of having to defend their opinions. Reasoning so motivated can distort evaluations and attitudes and allow erroneous beliefs to persist. Proactively used reasoning also favors decisions that are easy to justify but not necessarily better. In all these instances traditionally described as failures or flaws, reasoning does exactly what can be expected of an argumentative device: Look for arguments that support a given conclusion, and, ceteris paribus, favor conclusions for which arguments can be found.
As the authors state, this way of thinking about reasoning makes sense of a lot of findings from across the cognitive sciences. The pursuit of truth or fact seems to play a very small role in the way we make arguments about truth and fact. There’s no reason to believe that scientists are immune to these limitations.
And I think all of those limitations are almost entirely irrelevant to the issue of doing good research.
The winning-is-better-than-truth type of reasoning, despite its flaws, has managed to produce a wide array of practical successes in our ability to build and maintain more realistic expectations about, say, the way the physical world works. (I really like Brian Silver’s account of this progress-in-spite-of-ourselves). Despite what seems to be an almost complete lack of individual objectivity, we humans have more realistic expectations about our physical world than we used to. Pointing out repeatedly that we don’t view the world dispassionately doesn’t really do anything to explain that.
The explanation I prefer is that individual objectivity is not a necessary condition for the production of objective knowledge, so long as claims to such knowledge are continuously subjected to scrutiny, criticism, and outright attacks. Consistent susceptibility to attack makes more biased claims less able to endure for long periods of time. We don’t even need to assume that the attacks are based on anything other than the desire to prove the other guy wrong. Depictions of atoms or chemicals or people or societies that more accurately approximate the way those things actually operate should be able to withstand criticism longer because those depictions’ defenders will always be able to answer any criticism with “but the model works.”
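The selection argument can be made concrete with a toy model (again, entirely my own illustration, with made-up numbers): give each claim a hidden accuracy, let critics attack every claim purely out of the desire to win, and let an attack fail with probability equal to the claim’s accuracy – that is, more accurate claims are simply harder to knock down. No individual in the model cares about truth, yet the surviving population of claims becomes more accurate:

```python
import random

random.seed(1)

# Each "claim" has a hidden accuracy in [0, 1].
claims = [random.random() for _ in range(1000)]

def survivors(claims, rounds):
    # Every round, every surviving claim is attacked once; the attack
    # succeeds (kills the claim) with probability 1 - accuracy.
    alive = list(claims)
    for _ in range(rounds):
        alive = [a for a in alive if random.random() < a]
    return alive

start = sum(claims) / len(claims)
after = survivors(claims, 10)
print(f"mean accuracy before: {start:.2f}, after 10 rounds: {sum(after) / len(after):.2f}")
```

The mean accuracy of surviving claims rises with each round even though no attacker is objective, which is the flock-of-birds point: the filtering is a property of the environment of criticism, not of any participant.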
Less accurate depictions can withstand criticism for a long time when their proponents remove themselves from their critics. Anyone who has tried to publish anything in the social sciences has probably had manuscripts rejected not on charges of poor reasoning or insufficient evidence, but because the manuscript didn’t sufficiently show that the author agreed with the same people the reviewer agreed with. (In graduate school, I nearly lost a National Science Foundation grant for this very reason. Luckily, I was given the chance to revise my proposal. I dropped in the names of a couple of authors I had previously failed to mention – and changed nothing else – and suddenly the reviewer decided I knew what I was talking about.) The lack of individual objectivity is a damning indictment of scientific claims to objective knowledge only when those claims have been conceived, examined, and promoted within a small, insular community where conflict is seen as trouble to be avoided. Objective knowledge emerges from friction.
Emergent objectivity does not require each individual to be individually objective, any more than flocks of birds or schools of fish require their individual members to know or care about how the whole collective moves. Objectivity comes from an environment of debate, not from the debaters themselves. Because of that, we need less consensus, not more. Our goal as researchers should not be to build a core body of knowledge about a topic – none of us has the ability to do something like that. Our goal should be to promote our own ideas in forums and formats that open those ideas to as much criticism as possible. If we do that, and if we do not settle for superficial agreement so long as substantive issues remain to be addressed, that core body of knowledge can be built as a side effect of our individual debates.