The Value of Reproducible Research: Sometimes the response matters more than the results

Yesterday I followed a tweet to a post by Jason Lyall responding to apparently widespread criticism of a new survey in Afghanistan done by the Asia Foundation. The post was the first I’d heard of the survey or the response to it, so I don’t know much more about the criticism, its nature or its arguments, than what Jason wrote. But the post did link to one criticism in particular, from Sarah Chayes, a journalist turned NGO founder and regular ISAF-hired expert on Afghanistan. The general approach she takes in her critique illustrates something I find very valuable about systematic and reproducible research and analysis: it facilitates productive and progressive (though perhaps not always intentionally so) responses. Continue reading

Bottom-up creation of data-driven capabilities: show don’t tell

I’ve been writing lately about what to do when an organization’s decision makers say they want data-driven capabilities but then ignore or attack the results of data-driven analysis for not saying what they think the data ought to say. Some of the most productive things you can do in that situation include automating your work, so you can devote more time and attention to more important (and labor-intensive) projects, and building support among the organization’s weak actors as a way to earn positive attention from more powerful stakeholders. Continue reading

Big Data of all sizes: how to turn a regular organization into a data-driven organization

Everyone’s talking about Big Data lately. It’s being touted as a “revolution” for organizational decision making. I generally think more reliance on data is a very good thing, and I’m glad that people who traditionally haven’t thought much about data are now thinking about it more. That being said, I’ve been struck by the difference between how practitioners actually use the term Big Data and how it is used by the executives and managers who supposedly want Big Data to work for them. Continue reading

Some help for running and evaluating Bayesian regression models

Around two years ago, I suddenly realized my statistical training had a great big Bayes-shaped hole in it. My formal education in statistics was pretty narrow – I got my degree in anthropology, a discipline not exactly known for its rigorously systematic analytic methods. I learned the basics of linear models and principal components analysis and was mercifully spared too much emphasis on ANOVAs and chi-squares and other “tests.” I developed a large portion of my statistical skills while working for the Department of the Army…not because the Army is really into rigorous analysis (see here and here and here), but because a co-worker introduced me to R. (I’m convinced the best way to learn statistics is to get a minimalist introduction – just enough to avoid being intimidated by the jargon – and then devote a few months to doing two or three projects in R.) During all of this, I kind of knew there was a thing called Bayesian statistics out there, but I’d never really looked into it and didn’t have strong opinions about it. Continue reading
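To make the post’s subject a little more concrete, here is a minimal sketch of fitting and evaluating a Bayesian linear regression in R. The package choice (rstanarm), the simulated data, and the variable names are my own illustration, not something the excerpt specifies; treat it as one possible route rather than the post’s method.

```r
# A minimal sketch of fitting and evaluating a Bayesian linear regression in R.
# Assumptions: the rstanarm package (not named in the post) and simulated data.
library(rstanarm)

# Simulate a small dataset: y depends linearly on x, plus noise.
set.seed(42)
df <- data.frame(x = rnorm(100))
df$y <- 1.5 + 2 * df$x + rnorm(100, sd = 0.5)

# Fit the model with rstanarm's default weakly informative priors,
# running four MCMC chains.
fit <- stan_glm(y ~ x, data = df, family = gaussian(),
                chains = 4, iter = 2000, seed = 42)

# Evaluate: posterior summaries, credible intervals, and a quick
# posterior predictive check against the observed outcomes.
print(summary(fit))
print(posterior_interval(fit, prob = 0.9))
pp_check(fit)
```

Here summary() reports posterior estimates along with convergence diagnostics (Rhat, effective sample size), and pp_check() compares simulated outcomes against the observed data, which is roughly the “running and evaluating” workflow the post’s title points to.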

Why do Jihadi Clerics become Jihadi?

I don’t spend a lot of time thinking about Jihadi terrorism these days. I do still pay attention to the conflict in Afghanistan, and off and on I’ve been able to help with some projects being undertaken by other researchers. But I don’t have much time to think about terrorism outside of a conflict zone. However, yesterday I saw a flyer in the elevator for a talk on “Jihadi Clerics” and my interest was piqued enough that I attended. Continue reading