Science is more than its methods (but social science currently isn’t)

Paul called my attention to this piece (behind a pay wall…of course), titled “The science in social science” and written by anthropologist H. Russell Bernard. When I was doing my graduate work at UCONN, we commonly referred to Bernard’s Research Methods in Anthropology as our “methods Bible,” so I went into his article with favorable expectations. Unfortunately, I think he made some logical leaps that I just can’t follow. From his abstract: Continue reading

Social scientists sometimes have kind of a weird view of their own relevance

I came across this piece in the Chronicle of Higher Education a little while ago. The author’s opening caught my attention – a vignette about someone asking for his advice and then asking how much she owed him for his time – because I had the same experience for the first time not too long ago. I work in the private sector, not academia, and the offer still caught me off guard. It never occurred to me that I might charge someone for my advice. I guess that means I should be careful about pursuing a consulting career. Continue reading

Surveys measure what people do, not what people think

In my previous post, I wrote about how scale choice can distort the way survey results portray the things they are supposed to measure. This certainly isn’t a new issue – researchers who use surveys often go to great lengths to ensure that their surveys are valid and reliable, where “reliable” in this context usually means “consistent.” Questions about reliability are a whole lot easier to answer than questions about validity. A survey is reliable if you can have the same people take it multiple times, or put the questions in a different order, and still get roughly the same answers, or if people who consistently rate certain items high also consistently rate certain other items low. There are all kinds of research designs and methods for assessing reliability. Continue reading
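(The original post doesn’t include any formulas, but to make the reliability checks mentioned above concrete, here is a minimal sketch in Python of two common ones: test-retest correlation and Cronbach’s alpha for internal consistency. The data, respondent counts, and function names are illustrative assumptions, not anything from the post itself.)

```python
import numpy as np

# Hypothetical data: rows are respondents, columns are survey items on a
# 1-5 Likert scale. wave2 simulates the same people retaking the survey.
rng = np.random.default_rng(0)
wave1 = rng.integers(1, 6, size=(50, 10)).astype(float)
wave2 = np.clip(wave1 + rng.integers(-1, 2, size=wave1.shape), 1, 5)

def test_retest_reliability(a, b):
    """Correlate each respondent's total score across two administrations."""
    return np.corrcoef(a.sum(axis=1), b.sum(axis=1))[0, 1]

def cronbach_alpha(scores):
    """Internal consistency: how strongly the items 'hang together'."""
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

print(f"test-retest r:    {test_retest_reliability(wave1, wave2):.2f}")
print(f"Cronbach's alpha: {cronbach_alpha(wave1):.2f}")
```

Both numbers can be high for a survey that is nonetheless measuring the wrong thing, which is the post’s point: reliability is easy to check, validity is not.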

Analytic modesty in the face of poor performance

I put up links to some of my posts on various LinkedIn groups in hopes that people will stumble upon them and join the discussion. Here’s a recent comment (from the Social/Behavioral Science & Security group), in response to my post on my problematic relationship with theory:

“Human intuition just isn’t all that good when it comes to understanding problems that have lots of moving parts”, well, neither is science. It took several hundred years for a mathematician (Poincaré) to solve the two-body problem, and all of our joint computing powers haven’t yet solved the three-body problem. And that’s a relatively straightforward sort of problem compared to solving the forces involved in human behaviour. Continue reading

My problematic relationship with theory

I received an email a few days ago asking about a couple of assessments I wrote while working for the Army. In those assessments, I laid out an approach to analyzing shared patterns of behavior that I eventually called “Social Terrain Analysis.” When I started working for the government, I saw a lot of analysts going all gaga over things like culture and ideology. I used to study those things too, but eventually decided they’re more trouble than they’re worth: too much folk psychology, too little conceptual coherence. So I developed the assessments mostly to justify my not analyzing those kinds of things, but also to lay out another iteration of a theoretical framework I’d been working on ever since my dissertation fieldwork. Continue reading