Why defenses of unsystematic methods often undermine those methods instead

A while ago I wrote about the seeming tendency for social/behavioral researchers to consider themselves focused either on “qualitative” or “quantitative” research, and how I didn’t think that kind of distinction was all that useful. I was struck by how defensive the self-proclaimed “quals” seemed to be in these discussions. My post wasn’t attacking “qualitative” work. At least I didn’t mean it to. I use a lot of the sorts of methods that people often consider qualitative, so it wouldn’t make any sense for me to disparage those methods. Rather, I was arguing that a method’s qualitative-ness or quantitative-ness focuses attention on issues that don’t help researchers evaluate the soundness of findings derived from those methods.

Maybe I’m particularly sensitive to the don’t-tell-me-qualitative-methods-aren’t-as-good-as-quantitative-methods sorts of feelings because I used to have those feelings myself. My early education in the social sciences consisted almost entirely of the narrative accounts that make up most of the ethnographic literature. I took only one statistics class as an undergraduate, and that was only because it was required. I didn’t see the usefulness of systematic methods until the beginning of graduate school, didn’t really start using those methods until the end of graduate school, and didn’t really learn how those methods worked – and didn’t start to drastically expand my statistical toolbox – until my first post-graduate-school job.

So I cringe when I read things like Maria Konnikova’s recent piece in Scientific American. Part of it is the smug tone. I feel like she’s patting her “quantitative” readers on the head each time she repeats her assertion that if you just focus on measurable behaviors you’re going to miss a big part of the picture…as if she had anything but her own bald assertions to back that up. But mostly I cringe because I remember making those same bald assertions and even adopting that same condescending tone towards systematic research that criticized or questioned “qualitative” findings.

So I found it interesting, in the discussion that followed my post, that there didn’t seem to be a single instance of a “quant” talking about how qualitative research missed an important part of the picture, while there seemed to be many instances of “quals” defending their approaches through exactly that line of reasoning. The but-your-methods-miss-a-big-part-of-the-picture argument bothers me for several reasons. For one thing, that’s true of ALL methods. Also, as I already stated, I have no reason to believe such assertions other than the fact of the assertion itself. But even setting those two points aside, it seems to me that if someone’s preferred methods really are valid and relevant, they shouldn’t have to spend so much time shouting about how valid and relevant they are.

More than anything else, I dislike that line of reasoning because I think it undervalues non-systematic methods.

It seems the main goal of systematic methods (or “quantitative” or “statistical” methods – although I think it’s a mistake to equate all those descriptors) is the identification/discovery/generation/formulation of evidence. I think Peter Achinstein’s definition of evidence is quite useful. Based on my probably overly-simplistic understanding of what he writes, Achinstein argues that a set of observations counts as evidence in support of an explanation if (1) there is a plausible logical relationship between the observations and the explanation, (2) the observations are consistent with the explanation, and (3) the observations are not equally consistent with the absence of the explanation – in other words, the observations fit x but not not-x. (That last point is one that I think researchers often overlook – if observations fit a given hypothesis but also fit practically every other hypothesis out there, then it’s not really fair to say the observations support the hypothesis.)
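A rough way to put ingredients (2) and (3) in symbols – my own shorthand, not Achinstein’s actual formalism – is that the observations O should be reasonably probable under the explanation X and noticeably less probable in its absence:

```latex
% Simplified gloss (my notation, not Achinstein's): observations O count as
% evidence for explanation X only if they discriminate X from not-X, i.e.
% they are likely if X holds and noticeably less likely if it does not.
\[
  P(O \mid X) \gg P(O \mid \lnot X)
\]
```

If the two sides are roughly equal, the observations fit x and not-x about equally well, which is exactly the overlooked case described above.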

Systematic methods create evidence by addressing parts (2) and (3) of Achinstein’s recipe. If we don’t collect and analyze our data systematically enough – and my measure of “enough” is “so explicitly that it would be entirely possible for someone else to come along and repeat our study if he or she wanted to” – then it’s hard to really argue that our observations didn’t just fit our preferred explanation because we designed our study to make that the only likely outcome. I think it’s a mistake to argue that unsystematic methods supply evidence in the same sense that systematic methods do – they just don’t. They leave too many alternative explanations unaddressed.

Instead, in defending my use of unsystematic methods, I much prefer to focus on the first ingredient in Achinstein’s recipe. The ability to address the second and third ingredients depends upon having the first ingredient squared away (how can you know which observations to map to which explanations if you don’t even know what the possibilities are?). That means the accumulation of evidence, especially the accumulation of a core body of evidence that can be used as a foundation for exploring new issues or explanations, depends upon logic just as much as it depends upon method. It is in that area that unsystematic methods can really shine.

For example, a couple of years ago Nature published a very good piece of work on patterns in the timing and size of insurgent attacks. The authors used the best data available to them, looked at a dozen or so different conflicts, compared their findings from insurgencies to a couple of conflicts that are not commonly considered insurgencies to make sure they weren’t just talking about conflict in general, and created a model that was quite good at predicting the timing and size of attacks. I was working for the U.S. Department of the Army when this article came out, and copies of it circulated around my office. I was excited about it. Then I started to talk with colleagues about it. Of those who had read it, nearly all dismissed it.

Some just dropped it when they saw the math, and some brought up legitimate concerns about the study’s sources and things like that – although I never quite understood how it made sense to dismiss a study’s findings just because the best sources available weren’t ideal. Until better data become available, that seems to warrant nothing more than a somewhat stronger caveat on the findings. But a lot of the dismissal I encountered didn’t have anything to do with sources. It actually focused on one little piece of the paper that talked about a “Global Signal”:

‘Global signal’ is an input that each group has access to… This can be understood as traditional news being broadcast (i.e. CNN, AJ etc) and the competition is then a competition for people’s attention. This attention acts to increase the supply of resources to the successful groups – more people join this insurgent group, more money is directed towards it, etc.

The actual math used in the Nature paper didn’t demand that this input be a news broadcast. As far as I understand it, all the math said was that an attack changes the chances of there being another attack. But the authors decided that the most plausible form for that signal was a news broadcast, and people who knew something about the conflicts examined in the study, or even people who had just seen a decent documentary on insurgencies, instantly recognized that that was sort of silly – not because it was entirely implausible, but rather because there were so many other plausible explanations. The methods used in the study were about as sound as anything else people have used to study insurgent conflicts, but the methods by themselves were not all that went into determining the evidence that the study ultimately claimed to produce. The logical relationships between observations and explanations were part of that evidence, and as far as the Nature paper is concerned, that part of the recipe was pretty weak.
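To see why that interpretive leap matters, here is a toy self-exciting process – my own illustration, not the model from the Nature paper, with made-up parameter names and values. All the arithmetic says is that each attack temporarily raises the probability of another one; nothing in it identifies the mechanism as news coverage rather than retaliation, rumor, resupply, or anything else.

```python
import random

def simulate_attacks(days=365, base_rate=0.02, boost=0.3, decay=0.7, seed=1):
    """Toy discrete-time self-exciting process (illustrative only).

    Each attack adds to a shared 'signal' that temporarily raises the
    probability of further attacks and then fades. The math is agnostic
    about what the signal actually is -- a news broadcast is one story
    among many.
    """
    random.seed(seed)
    signal = 0.0
    attack_days = []
    for day in range(days):
        p = min(1.0, base_rate + boost * signal)  # today's attack probability
        if random.random() < p:
            attack_days.append(day)
            signal += 1.0   # an attack feeds the shared signal...
        signal *= decay     # ...and the signal decays each day
    return attack_days

if __name__ == "__main__":
    print(simulate_attacks()[:10])  # first ten simulated attack days
```

The same clustered bursts come out of the simulation whether you label the signal “CNN coverage” or “word of mouth,” which is the point: the burstiness lives in the math; the broadcast story does not.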

We don’t need systematic methods to discover plausible connections between observations and explanations. We only need an n of 1 to see if an explanation looks totally dumb when examined at the unit-of-observation level. That doesn’t mean analytic explanations need to make complete sense to the people to whom the explanations are supposed to apply, but it does mean that we can’t just assume that the logical component of our evidence is unproblematic.

This is why I like unsystematic methods, and also why I get so frustrated when people take unsystematic findings and bury them so deep within theory-specific jargon that it’s practically impossible to separate what the researcher thinks he or she saw from what the researcher thinks the theory said he or she should see. As a researcher, I want to be able to generalize my findings even if I have to heavily caveat those generalizations. That ultimately requires systematic methods, but systematic methods in turn require a decent intuition about what is probably unknowable: the full scope of plausible observations, plausible explanations, and plausible relationships between the two. I’m not aware of anyone who seriously argues that those plausibilities can be identified through systematic research alone. So we ought to be using unsystematic research for that part of our investigations – it’s much better than uncritically relying on assumptions.

Maybe that’s why I always seem to get a little confused when people respond to discussion of methods by saying “well, that’s why I like mixed methods – I think that’s really the way to go nowadays.” No matter how earnest or well-intentioned, that always seems like such an empty statement to me. There are cases where we have a good enough understanding of a system to reasonably assume that we know what most of the parts of the system are and know many of the ways different parts interact with one another and know at least a few of the means by which those interactions take place. In those cases it doesn’t make much sense to devote a lot of resources to unsystematic research.

If, on the other hand, we have only a very tenuous grasp of what the system is or how it works – and I think that is a pretty fair description of the current state of knowledge regarding many behavioral issues and most social issues – it makes the most sense to use as many different methods as we can as often as we can. It doesn’t make much sense to me to call that eclecticism a particular methodological approach like “mixed methods”. It just seems like good, sensible research practice in general.

People who try to argue that unsystematic methods can produce evidence comparable to systematic methods lack a leg to stand on – they don’t have anything to support their assertions except the assertions themselves. Moreover, those arguments, I think, actually undermine the credibility of those methods by holding them to a standard they were never capable of meeting. Unsystematic methods help us explore the logic of our explanations, including but not limited to providing much-needed sanity checks on big-scale hypotheses. They’re good at that. Let them be good at that. They don’t need to beat systematic methods at their own game. They have their own game – one very much worth playing.

5 thoughts on “Why defenses of unsystematic methods often undermine those methods instead”

  1. Schaun,

    I agree. I would argue that good science requires a plausible hypothesis, generated through theory, logic, and some kind of evidence, often “qualitative.” That hypothesis (and the null) can then be tested using systematic, often “quantitative” analysis if the goal is a generalizable explanation. Statistical support of course never proves a hypothesis is true, only suggests the probability (hopefully low) that you got the support by chance.

  2. Interesting. I agree the separation between ‘qualitative’ and ‘quantitative’ is kept somewhat artificially wide by different research disciplines using different variations of them, so that in practice there is little overlap. It results in a kind of ‘that’s not real science’ attitude which is frustrating, not least because I too used to think that in my early study days…

    I’ve recently returned to research, and it does surprise me how little introspection there is in the academic research communities into working practice. i.e., are the methods we use to research science actually scientific? Is hypothesis testing – especially significance testing – scientific? When and why? Can we be systematic in our qualitative data gathering? When does qualitative become quantitative? What qualities are needed of our quantities? When I have the answers I’ll let you know…

  3. Grace,

    Yes, I think “plausible hypothesis” is a term often thrown around by researchers without really giving much consideration to what makes a hypothesis plausible. (I’m not saying that’s what you’re doing here – I’m just spring-boarding off your comment). That’s one of the things I was trying to get at in the original post – we tend to talk about differences in research approaches in terms of evidence. I don’t think that’s a bad idea, but it downplays the fact that evidence first requires a consideration of what the plausibilities are – and they are usually much more numerous than just one hypothesis and the null.

  4. Martin,

    Yup. I don’t think academics are all that different from the rest of the human race – we make decisions about what to do based on what has worked for us in the past and based on what is familiar and available to us at the moment. Most researchers I know – in or out of academia – use a particular set of tools justified by a particular set of theories, and rarely explore other options.

    That’s actually one of the things I like about being a researcher outside of academia. I seem to remember a LinkedIn post not too long ago asking what the costs and benefits are of being a non-university researcher. My first thought was of the cost of enduring a professionally isolating environment – I’m the only researcher in my entire company, and one of only a handful of people who do any analysis of any kind, and that gets frustrating. When I have an idea, I can’t pop over and ask the guy in the office next door, because the guy in the office next door is a marketer who doesn’t really want to talk about research stuff.

    But at the same time, one thing I really love about non-academic (that seems a weird way to characterize it) research is that my tools, approaches, and theories are largely determined by the demands of my job, not by inertia left over from graduate school. When I left grad school, I knew some basics of frequentist regression and a few dimensionality-reduction techniques such as principal components analysis. Working for the Army, I picked up generalized linear models, multiple imputation, a little bit of hierarchical modeling, and vector autoregression because I came across projects that required those tools. In my current job, I’ve learned a variety of Bayesian methods and tons of text-mining/regular-expression tools, and have improved my hierarchical modeling toolbox, again because my job required it. I have a feeling that if I had stayed in academia, I’d still be running PCA and OLS models.

    Same thing with theory: nothing damages a theory like reality does. That might be why I’m such a minimalist when it comes to theory (https://houseofstones.wordpress.com/2012/02/27/my-problematic-relationship-with-theory/): so little has survived my attempts to apply it to real-world problems.

  5. Pingback: Trying to figure out why I don’t want to call myself a data scientist « House of Stones
