Social scientists sometimes have kind of a weird view of their own relevance

I came across this piece in the Chronicle of Higher Education a little while ago. The author’s opening caught my attention – a vignette about someone asking for his advice and then asking how much she owed him for his time – because I had that same experience for the first time not too long ago. I work in the private sector, not academia, and the offer still caught me off guard. It never occurred to me that I might charge someone for my advice. I guess that means I should be careful about pursuing a consulting career.

As I read further into the article, at first I thought I was seeing arguments similar to the ones I’ve made previously about academic publishing (see here and here), but it soon became clear that the author was only minimally concerned with the effects the current publishing model has on the quality of academic work:

Publishers can assure the quality of their products only if highly trained experts examine the articles on the academic production line and pick out the 10 percent to 20 percent that meet the highest standards for excellence. Without this free labor, the publishing companies’ entire enterprise would collapse.

When I referee an article for a journal, it usually takes three to four hours of my time. Recently, two Taylor & Francis journals asked me to review article submissions for them. In each case, I was probably one of 20 to 30 people in the world with the expert knowledge to judge whether the articles cited the relevant literature, represented it accurately, addressed important issues in the field, and made an original contribution to knowledge.

If you wanted to know whether that spot on your lung in the X-ray required an operation, whether the deed to the house you were purchasing had been recorded properly, or whether the chimney on your house was in danger of collapsing, you would be willing to pay a hefty fee to specialists who had spent many years acquiring the relevant expertise. Taylor & Francis, however, thinks I should be paid nothing for my expert judgment and for four hours of my time.

So why not try this: If academic work is to be commodified and turned into a source of profit for shareholders and for the 1 percent of the publishing world, then we should give up our archaic notions of unpaid craft labor and insist on professional compensation for our expertise, just as doctors, lawyers, and accountants do.

I think the author’s point about the monetary value of access to expertise is perfectly sound: in most cases, medical doctors can do things I can’t do myself, so if I find myself needing those things done, I’m willing to pay a doctor to do them. The money and time I spend on access to their expertise cost less than what I would spend acquiring that expertise myself. The same goes for chimney inspections and property deed documentation.

Where he loses me is when he claims – well, when he assumes, really – that social researchers like anthropologists have clearly demonstrated that access to their expertise provides value people could not otherwise get, in the same way that doctors, lawyers, and accountants have.

He lists four areas where social researchers have “expert knowledge” that adds value. In each case, he’s saying social researchers, and presumably not other people, have the ability to judge the extent to which a piece of research:

  1. Cited the relevant literature.
  2. Represented that literature accurately.
  3. Addressed important issues in the field.
  4. Made an original contribution to knowledge in the field.

The first area of “expert knowledge” seems ludicrous to me. Given the number of researchers who align themselves with any particular academic discipline, the number of journals that cater to researchers who align themselves with that discipline, the number of articles published in those journals each year, and the number of researchers and journals that do not align themselves with that particular discipline but nevertheless publish research related to or even directly engaging with that discipline’s research, the task of even identifying the bounds of relevance seems quite impossible. When you consider the historical inability of any of the social science disciplines to develop a cohesive, core set of principles about how their subject matter works (see here and here), well, the task becomes almost laughable.

The second area – making sure the literature is represented accurately – is just as difficult as making sure all the relevant literature is cited, and carries the additional difficulty of relying on individual reviewers to decide what is or is not an accurate representation of an issue when the extent of any one person’s knowledge of that issue is questionable.

The third and fourth areas seem to me to cover the same issue. Both say, in effect, that social researchers have the training and experience to identify which new findings are interesting – which ones deserve the attention of people who are interested in the issue at hand. That proposition seems both unrealistic and egotistical.

A doctor is an expert if he or she can do things that other people can’t readily do, and if doing those things tends to create outcomes that are generally better than what would have happened if nothing had been done in the first place. I’m hard pressed to find instances of social scientists doing things that other people can’t already do.

This is especially true of researchers who steadfastly refuse to consider any research method that looks like it might involve anything that resembles a number. I generally think the qualitative vs. quantitative distinction in social research is pretty useless, but I make an exception to that rule when I find researchers who loudly proclaim themselves to be “qualitative” researchers. In those cases, that self-identification pretty reliably marks researchers whose work differs little in quality, tone, or assumptions from a standard piece of investigative journalism. I look at what those researchers do, and I just don’t see that they’ve done anything to clearly demonstrate that their understanding of their field consistently produces real-world outcomes that are better than what would have happened otherwise.

While I tend to prefer research that uses more systematic collection and analysis tools, it seems social and behavioral researchers who use those tools are no less tempted to claim more wins for social science than the field really deserves. For example, Gary King is a researcher at Harvard whose work I greatly respect. I really like his MatchIt program for pre-processing data to facilitate causal inference, and I use his Amelia II program for imputing missing data all the time (a minimal sketch of how I use both packages follows the list below). On slide 2 of this presentation he gave at the University of Virginia, King attributes the following to social science:

  • “transformed most Fortune 500 firms”
  • “established new industries”
  • “altered friendship networks (Facebook)”
  • “increased human expressive capacity (social media)”
  • “changed political campaigns”
  • “transformed public health”
  • “changed legal analysis”
  • “impacted crime and policing”
  • “reinvented economics”
  • “transformed sports (seen MoneyBall?)”
  • “set standards for evaluating public policy”
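
As an aside, for readers who haven’t used the two King packages I mentioned above, here is a minimal sketch of how I typically use them, run against the example datasets that ship with each package (lalonde with MatchIt, freetrade with Amelia). The model formulas are illustrative only, not recommendations.

```r
# Minimal sketch of typical MatchIt and Amelia II usage. The datasets
# (lalonde, freetrade) ship with the packages; the formulas are illustrative.
library(MatchIt)  # matching to pre-process data for causal inference
library(Amelia)   # multiple imputation of missing data

# --- MatchIt: balance treated and control cases before estimating effects ---
data("lalonde", package = "MatchIt")
m_out <- matchit(treat ~ age + educ + re74 + re75,
                 data = lalonde, method = "nearest")
summary(m_out)                # covariate balance before vs. after matching
matched <- match.data(m_out)  # matched sample for the outcome analysis

# --- Amelia II: multiply impute missing values in panel data ---
data("freetrade", package = "Amelia")
a_out <- amelia(freetrade, m = 5, ts = "year", cs = "country")
# a_out$imputations holds five completed data sets to analyze separately
# and then combine.
```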

I can see King’s point more easily on some of these items than on others. MoneyBall seems to be a pretty clear example of how systematic analysis of past behavior can improve expectations about future behavior. There also seem to be cases of large-scale data analysis improving law enforcement activities. There are certainly tons of cases of statistical analyses being used to inform political campaigns, and tons of promises that such analysis could inform industries like health care, but I have difficulty seeing how we can draw a clear line of causation from social science to any real or promised outcomes here.

Take any of the MoneyBall-type cases where people clearly did something useful with a systematic analysis. Should we attribute that useful outcome to the tool that accomplished it or to the affiliation of the people who used the tool? If I get an MRI and it catches some problem, and fixing that problem saves my life, I partially attribute that very good outcome to the MRI tool itself, but mostly attribute it to a doctor who not only knew how to read the tool’s output but also knew enough about how the human brain works to connect the tool’s output to my particular needs.

I’m not clearly seeing how that sort of thing is happening, even in the MoneyBall sorts of situations. It’s definitely useful to break observations down into data points and then systematically analyze those data points in various ways that generate probabilistic statements about what to expect in the future. I’m totally cool with that. But that’s mostly dealing with the properties of the tools used to accomplish the analyses, not the discipline of the researchers using those tools. Yes, I know that a person needs to know some things about baseball or law enforcement or political campaigns to be able to do an analysis of those things, but do you need to know as much about those things as a doctor needs to know about the brain to be able to expertly interpret and act on an MRI output? It seems like a stretch to attribute those successes to “social science” instead of to “statistics.”
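
To make that concrete, the kind of analysis I have in mind is as generic as the following sketch. The data are entirely simulated and the variable names are hypothetical; the point is that nothing in the procedure itself is specific to baseball, policing, or any social-science discipline.

```r
# A generic probabilistic forecast. All data are simulated and all names
# are hypothetical; nothing here depends on any social-science discipline.
set.seed(42)
n <- 500
past <- data.frame(
  on_base_rate = runif(n, 0.25, 0.45),  # made-up player statistic
  slugging     = runif(n, 0.30, 0.60)   # made-up player statistic
)
# Simulate game outcomes from made-up "true" coefficients.
p_win    <- plogis(-6 + 8 * past$on_base_rate + 6 * past$slugging)
past$won <- rbinom(n, 1, p_win)

# Break observations into data points, fit a model, and generate a
# probabilistic statement about a future case.
fit <- glm(won ~ on_base_rate + slugging, data = past, family = binomial)
predict(fit,
        newdata = data.frame(on_base_rate = 0.40, slugging = 0.55),
        type = "response")  # estimated probability of a future win
```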

And the causal arrow between these outcomes and social science could plausibly be entirely reversed in some of the cases, such as social media. Social media is a technology. When people use that technology, they leave behind traces of that use. Those traces can be very useful sources of data for researchers. If anything, it seems that much of current “quantitative social science” should be attributed to those social media technologies, not the other way around.

For me to accept that social science as a discipline deserves the same kind of regard and, potentially, remuneration as medical science or engineering or any of the other fields that do command a decent amount of respect and money, I would need to see examples of outcomes created as a result of social science where the outcome was clearly better than the alternative, and where the outcome couldn’t just as easily be attributed to the use of a particular technology, or simply to the fact that people threw time, money, and attention at a problem.


16 thoughts on “Social scientists sometimes have kind of a weird view of their own relevance”

  1. Schaun. Send in a bill with your evaluation. Nothing outrageous but an honorarium. 50% of the time you get paid.

  2. I took something different from the piece you’ve critiqued. In general, it seems sad, in the current world, that three people decide whether a report is worthy of a “good science” stamp, and that one person (the editor) is responsible for selecting those three. Hugh Gusterson’s most interesting idea in the article (which certainly isn’t only his) is to open peer review to _everyone_ in the scientific community. Thus, the value of a work becomes a function of the knowledge of the whole field rather than of three people willing to give their time (granted, there would still be somewhat of a selection bias). Fakers, plagiarizers, and generally shoddy scientists will find it more difficult to get by, and really good work will get its deserved reputation along with the critique to advance the next project.

    Incidentally, it seemed that Gary King endorsed the article you’re critiquing in a Twitter post. But I might be reading too far into what he said in that post.

  3. Nathan,

    That’s interesting – I didn’t see the idea of opening up peer review to a wider population as a major theme in Gusterson’s piece. In fact, he seemed to be arguing that, for most topics, there were only a select few with the expertise to judge a piece properly.

    But at any rate, my intent wasn’t to launch a wholesale critique of his article. I was limiting my comments to his particular assertions about social scientists’ supposed expertise.

  4. I think what you’re saying here is that social science is useless. It doesn’t seem “weird” at all to me that a social scientist might disagree. Moreover, your assertion is irrelevant to the argument that this particular social scientist is making. The publishers who sell his writing and use him as a reviewer clearly think that he is adding value. He is literally putting cash in their pockets because someone pays them for his work (not you, clearly, but other people are paying for articles and subscriptions or the publishers wouldn’t publish them). Finally, the argument he’s making goes for all sciences, not for social sciences in particular.

  5. I’m not saying social science is useless. (I am a social scientist, after all). I’m saying social science doesn’t have the same track record of improving people’s expectations regarding social/behavioral phenomena as, say, the biological sciences have of improving people’s expectations regarding biological phenomena.

    Also, I’m not criticizing the author’s entire article. I’m only criticizing the specific part where he compares social science expertise to the expertise of doctors and other professions that have demonstrated their value added in a way that the larger population tends to recognize.

    I understand that the author is adding value to the particular journals for which he reviews manuscripts, and I have no problem with him expecting payment for that. I have a problem with his jump from “my value added is apparent to a specific sub-population that has a financial interest in selling academic research” to “my value added is apparent to the larger population as a whole (in the same way that doctors’ value added is apparent).”

  6. We all have faith that social sciences will eventually provide something very useful (certainly they have already, but maybe not to the extent that physics and medicine have) or we wouldn’t be here. We all put in just as much effort as those in the medical field, just perhaps on more-difficult-to-study and less-studied problems. I think saying we don’t deserve equal credit as experts in our field is selling yourself and the rest of us short. There’s a difference between being an expert and being able to actually do something with your expertise that the public deems worthwhile. Our heyday may be yet to come, which would mean that the social science experts of today are extremely valuable; they are the ones building the structure that will bear the fruit.

    It’s sort of like saying guys like Eli Whitney deserve less credit than Macy’s because Macy’s are the ones putting clothes on people’s backs. In reality, it’s two totally different ball games, and Eli and Macy’s are both winners.

  7. futureofscipub,

    I think if Schaun thought social science was useless, he’d hardly write blog posts about social science or do things like take the time to create R packages for text analysis. :-)

    It’s more a matter of asking: what are the measures by which we evaluate social science? Are journal articles, or the ability to edit journal articles, the best measure? To me that’s an important and interesting question.

    Paul

  8. Schaun and Nathan,

    I think the discussion of “getting credit” isn’t really the interesting point. To me the important question for evaluating science (social or otherwise) is whether a contribution can be said to have allowed us to do something (in some cases “say” something) that we couldn’t do before, or to do it better. That’s the “hard walls” metric that Schaun suggested in one of the first posts here. The cotton gin seems to meet that criterion. A journal article doesn’t necessarily. And that’s the interesting question for me. Can we surpass journal articles as the high standard of social science? Are there other measures that might be better, or at least need far more attention than they’re receiving?

    Here’s my suggestion for one alternative: a piece of code, or an R (or some other language) package. An R package that allows us to do something we couldn’t do before is, for me, pretty clearly an important (scientific) contribution. On this note I found this recent blog post on “Anecdotal Science” to be very persuasive: http://ivory.idyll.org/blog/anecdotal-science.html

  9. Nathan,

    I’m not sure I follow. Processing cotton is something that people can recognize as valuable in and of itself, so it doesn’t seem to make sense to compare Eli Whitney (who demonstrated his value added regardless of what may or may not have happened in the future) with social science (which has still, for the most part, not demonstrated its value added without resorting to vague gestures toward as-yet-unseen breakthroughs). In other words, it looks to me like your analogy depends upon a false equivalence.

    But even if we ignore that, no one knew that processing cotton (Whitney) would be able to lead to mass production of clothing (Macy’s) until that mass production happened. At that point, it became ok to credit people like Whitney with laying the foundation for the later breakthrough. You can’t take credit for laying the foundation for a breakthrough until that breakthrough has actually been made. The social science of today may very well be laying a foundation for future advances, but that doesn’t mean we, as social scientists, have any right to claim to be laying that foundation. That claim is only warranted once the breakthrough has been made.

    I don’t think it’s selling anyone short to say that experts need to demonstrate their value added. I also think it makes sense to hold scientists to a different standard than, say, corporate consultants, lawyers, or self-help gurus, who demonstrate their value added in terms of the niche goals of their clients. Scientists’ concern with validity and reliability issues indicates that scientific expertise requires demonstration of value added to a larger audience than any particular client – it requires that value to be recognized generally by the population at large (although, of course, not by every single individual member of the population). These aren’t new ideas, of course. Many researchers have talked about the need for practical demonstration of the soundness of a field of research – I like James Rule’s book on the subject: http://www.amazon.com/Theory-Progress-Social-Science-James/dp/0521574943

    Scientific value added requires the demonstration that the science in question leads to the ability to do things that wouldn’t be possible without the science, and that demonstration needs to be accepted by the population at large, not just by clients. I’ve seen very few examples where social scientists can claim to have made such a demonstration. That’s why I don’t think we can make the same claims about expertise as can our colleagues in other disciplines. I sure hope that changes, but in the meantime I don’t think it’s right to rest claims of expertise upon as-yet-unrealized futures.

  10. To return to the subject. Refereed articles in professional journals definitely have added value for the author — that’s why he might list those accepted in his CV (he does not list those rejected). The author generally gets paid nothing for his article, but has career gains. The reader is spared time wasted on poorly researched work. Professional journals have a reputation to consider, and that reputation will not be improved by accepting sub-standard material. All gain. The referee, on the other hand, gets nothing, and his report is and must be anonymous. This leads to a situation where ‘the’ expert in a certain area is approached to provide a professional evaluation. He might refuse for many reasons, particularly if the review is going to be time-consuming and unproductive to him in terms of ‘advanced info’. He is likely to send his regrets to the journal and suggest another person to make the evaluation. Thus the evaluation very often gets kicked down the line.

    Having made many such reviews, I know that a good one takes time and thought and a reconsideration of one’s own knowledge — particularly if commenting on something ‘original’. A good review should be more than just ticking boxes. Very often I see a good piece that could be much better had the author taken xxx into account or dropped some inconsequential points. In that case I say so and request the info be passed to the author — and I believe in most cases it is (anonymously). In doing this I participate in building my discipline and helping a colleague rather than simply rejecting him or her. It would be nice to get a token gesture of appreciation e.g. a year’s free subscription, but payment is not and should not be the motive.

    It is perhaps a tangent to point out that highly refereed journals and their ‘advisors’ can be wrong. I am now retired but remember my first published article in such a journal four decades ago. Having shown it informally to several colleagues in anthro, I sent it in confidently to MAN, the journal of the Royal Anthropological Institute, UK (of which I was already a Fellow). I received the reply that my article would not be published because it ‘contains nothing factually or academically new’. That hurt! I sent exactly the same article off to Ethnology in the US, which refereed it and wrote back thanking me for an excellent, well-researched, and thoroughly innovative contribution. The piece was published without change.

    I am now an old man, retired after a life in anthropology, but I remember the publication of that first article in a refereed journal. Nobody should doubt the value of an expert opinion…but if condemned to death, a second opinion may well be the reverse of the first. That’s the way academia works. Nothing is 100%. As anthropologists we should know that nothing is black and white and that viewpoint can be everything. If we care enough to read such journals and care about our discipline, we should participate appropriately in the referee process. I doubt any professional journal makes a profit — to insist on payment would simply raise the price of such publications and do nothing to improve quality.

  11. Robert,

    I’ve been trying to set aside some time to give your comments the full response they deserve, but I don’t seem to have much luck finding that time. So just briefly:

    1. Yes, the journal system has benefits for individual authors in terms of career incentives. It seems to me that an open publication system would have many more benefits.

    2. I don’t see how the current journal system spares the reader from wasting time reading poorly-researched work. Journals are full of poorly-researched work. In fact, some of the highest-prestige journals have developed a reputation for publishing bad work just because it looks exciting (http://andrewgelman.com/2012/05/the-tabloids-and-the-tabloids-why-nature-ran-that-john-edwards-story/). I know journals are supposed to weed out bad work, but in practice they just don’t.

    3. I don’t see why a referee’s review needs to be anonymous. People attach their real names to their comments all the time in all sorts of real-life venues. If the authors are required to do it, the reviewers should be required to do it.

    4. Most of the examples you give of how a good review ought to happen seem to concern cosmetic changes: clarify a point, drop irrelevant material, etc. I think those things matter, but I don’t see why they should be at the forefront of our concerns when reviewing a piece. Cosmetic changes can be made at any time if we drop the assumption that publication needs to take place in a print journal. Clarification of ideas can be an ongoing process – it never has to stop.

    5. The comment you got back from MAN exemplifies much of what I see wrong with the current publication system: no one is qualified to decide what is “new”. No one is qualified to decide what is interesting. Those are only legitimate concerns if you have very limited publication space and therefore have to choose only a few manuscripts from many qualified ones. That simply doesn’t need to happen any more.

  12. Pingback: Science is more than its methods (but social science currently isn’t) | House of Stones
