A Closer Look at Reporting Bias in Conflict Event Data

The forthcoming article “A Closer Look at Reporting Bias in Conflict Event Data” by Nils B. Weidmann is summarized by the author here:

According to the Feb 28, 2015, edition of The Economist, smartphones have become the fastest-selling technical device ever. What effect does the increasing availability of information technology have on political mobilization or violent conflict? In my forthcoming article “A Closer Look at Reporting Bias in Conflict Event Data”, I address a particular methodological problem that can occur when analyzing these effects using media-based event data. What if the technology whose effect we study is also used to produce the media reports our coding is based on? For example, what if cellphone coverage not only affects the emergence of insurgent violence, but also whether this violence is actually reported?

The methodological problem, then, is as follows. Imagine that we find insurgent violence to be more frequent in cellphone-covered areas. Does this indicate that cellphones foster violence? Not necessarily. If cellphones only make the reporting of violence more likely, the same pattern should emerge, since violence in regions without coverage would simply be underreported. What we have here is an instance of measurement error in the dependent variable: for some of the locations in our sample, we see ‘peace’ where in reality there was violence. This error would be unproblematic if it were random. However, when studying the effects of information and communication technology (ICT), we have reason to assume that the error is systematically related to our independent variable: the erroneous coding of ‘peace’ for locations with actual violence is more likely in locations without mobile coverage.
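To make the mechanism concrete, here is a minimal simulation sketch. All variable names and probabilities are hypothetical illustrations, not values from the paper: coverage has no true effect on violence by construction, yet because reporting is more likely under coverage, the observed data show a spurious positive association.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical setup: coverage has NO true effect on violence.
coverage = rng.binomial(1, 0.5, n)   # 1 = location has cellphone coverage
violence = rng.binomial(1, 0.3, n)   # true incidence, independent of coverage

# Reporting depends on coverage: incidents in covered locations are
# reported with probability 0.9, uncovered ones with only 0.4.
report_prob = np.where(coverage == 1, 0.9, 0.4)
observed = violence * rng.binomial(1, report_prob)

# A naive comparison on the *observed* data suggests a positive effect...
print("observed violence | coverage=1:", observed[coverage == 1].mean())
print("observed violence | coverage=0:", observed[coverage == 0].mean())
# ...even though the *true* rates are identical by construction.
print("true violence     | coverage=1:", violence[coverage == 1].mean())
print("true violence     | coverage=0:", violence[coverage == 0].mean())
```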

In the paper, I present a number of findings to shed light on this problem. Most importantly, I test whether cellphone coverage increases the probability that a violent incident is reported in the international media. To this end, I use a military dataset for Afghanistan that contains almost all insurgent attacks, and code which ones actually appeared in the international news. It turns out that cellphone coverage is a robust predictor of reporting. Next, I test what this means for estimating the effect of cellphone coverage on insurgent violence. The analysis reveals that a positive effect is driven by selectively reported violence: when using the military data (which should not suffer from this problem), we would conclude that there is no effect.
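The reporting analysis is, in essence, a model of whether each incident appears in the news as a function of coverage. Here is a minimal sketch of that kind of model on simulated stand-in data; the variable names, probabilities, and bare-bones specification are assumptions for illustration, not the paper's actual setup.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical stand-in for the incident-level data: one row per attack,
# 'reported' = 1 if the attack appeared in the international news.
rng = np.random.default_rng(1)
n = 2_000
coverage = rng.binomial(1, 0.6, n)   # 1 = attack site has cellphone coverage
reported = rng.binomial(1, np.where(coverage == 1, 0.7, 0.35))

# Logistic regression of reporting on coverage (a real analysis would add
# controls such as terrain, population, and accessibility).
X = sm.add_constant(coverage)
fit = sm.Logit(reported, X).fit(disp=0)
print(fit.params)   # a positive coverage coefficient: coverage predicts reporting
```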

Finally, I implement a simple diagnostic test for reporting bias in estimated effects. This test is based on the assumption that event severity reduces reporting bias: for example, a major attack that leaves 20 people dead will be reported regardless of whether the area has cellphone coverage. Now, if selectively reported violence is driving a positive effect of cellphones on violence, we should see this positive effect decline as we move from low- to high-severity incidents (the latter of which suffer less from reporting bias). Applied to a recent analysis of cellphones and violence in Africa, this pattern indeed emerges. My article therefore concludes with a word of caution regarding the use of media-based event data: researchers need to be aware of potential biases in these data, which, depending on the research question, can crucially affect their results.
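The logic of the diagnostic can be sketched as re-estimating the coverage effect on increasingly severe subsets of incidents. In the simulated data below, where only low-severity reporting depends on coverage by assumption, the estimated coefficient shrinks as the severity cutoff rises; all numbers are hypothetical, and this is the diagnostic's logic rather than the paper's actual estimation.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 5_000
coverage = rng.binomial(1, 0.5, n)
severity = rng.integers(1, 21, n)   # e.g., casualties per incident

# Hypothetical reporting process: high-severity events are almost always
# reported; low-severity events depend heavily on coverage.
p_report = np.clip(0.2 + 0.04 * severity + 0.4 * coverage * (severity < 10), 0, 1)
observed = rng.binomial(1, p_report)

# Diagnostic: re-estimate the 'coverage effect' at rising severity cutoffs.
for cutoff in (1, 5, 10, 15):
    mask = severity >= cutoff
    X = sm.add_constant(coverage[mask])
    fit = sm.Logit(observed[mask], X).fit(disp=0)
    print(f"severity >= {cutoff:2d}: coverage coef = {fit.params[1]:+.2f}")
# A coefficient that shrinks toward zero at higher cutoffs is consistent
# with reporting bias rather than a genuine effect of coverage.
```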

This article is part of the AJPS Virtual Issue: Most Cited, 2015-16.


