

Do facts matter in crime research?
A paper published in 2020 in the journal Environmental Research studied the effect of temperature on crime. The authors used FBI National Incident-Based Reporting System (NIBRS) data as their crime outcome, and they looked at county-level data without any controls for victim or offender demographics because “The NIBRS database does not include any information about the offender or victim, and reports are aggregated data at the county-level without characteristics, such as age, sex, or race.” Both of these issues - having county-level data and having no demographic information - were among the limitations discussed at the end of the paper. Specifically, they stated, “The NIBRS restricts the availability of individual data, including information about perpetrators, victims, ages, or exact locations of crimes ... County-level crime counts further prohibits the evaluation of environmental characteristics with small-scale neighborhood heterogeneities, including the potential role of greenspace (Bogar and Beyer, 2016).” But these are imaginary problems.
Not only is NIBRS data available at the agency level (which is still fairly wide in relation to weather but is much better than a county), but there is information on every victim’s and every offender’s age, sex, and race (many offenders have unknown demographics, as do some victims).
The entire point of the “incident-based” part of NIBRS is that there is detailed information about every incident, victim, and offender. Apparently, though, none of the three authors, at least one editor, and the 2-3 peer reviewers knew enough about the data to notice this major factual error - and these fictions about the data have been published for two full years and counting. These errors almost certainly affect their results if, as they argue, the geographic unit and victim/offender demographics matter. This paper, “Hot under the collar: A 14-year association between temperature and violent behavior across 436 U.S. counties,” written by Drs. Berman, Bayham, and Burkhardt, is emblematic of a trend across published crime research: factual errors about even basic information and a failure to correct them.
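To make that concrete, here is a minimal sketch of what checking this looks like in practice. The file name and column names below are hypothetical stand-ins for whatever NIBRS victim-segment extract you are working with; the point is only that victim-level age, sex, and race exist in the raw data, tied to a reporting agency rather than a county:

```python
import pandas as pd

# Hypothetical file and column names for a NIBRS victim segment -
# adjust to match however your extract is structured.
victims = pd.read_csv("nibrs_victim_segment_2019.csv")

# Each row is one victim in one incident, reported by a specific
# agency (identified by its ORI code) - not a county aggregate.
print(victims[["ori", "incident_number", "age_of_victim",
               "sex_of_victim", "race_of_victim"]].head())

# Victim demographics can be tabulated directly, unknowns included.
print(victims["sex_of_victim"].value_counts(dropna=False))
```

The offender segment is structured the same way, with age, sex, and race recorded for each offender.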
One of the purposes of peer review is to catch errors and problems in papers. This ranges from relatively benign issues like unclear writing or missing some relevant citations to severe issues like incorrectly describing the data or having illogical analysis methods. Actually checking if the data was cleaned correctly or the analysis ran properly falls outside the realm of peer review - at least in most fields. Some journals, such as Science, require that data and code be made available (with exceptions for privacy) to readers after the paper is published though usually not to reviewers.
Without knowing how many issues are caught during peer review, it’s hard to judge how good it is. But there’s clear evidence that it’s not perfect, as demonstrated above with the clear factual error missed by everyone involved in the paper. We should never expect peer review to be perfect. Reviewers won’t be experts on every aspect of the paper, and they shouldn’t be expected to fact-check every single claim. So we expect that some mistakes will get through peer review. In return, we should also expect that when readers find errors in published papers, those errors are quickly corrected. Are they? In my experience, the answer is a strong “absolutely not.”
Plenty has already been written about how editors respond to allegations of intentional fraud. In this post, I won’t be rehashing what happened in those cases - in fact, I’m not going to be talking about fraud at all. Instead, we’ll examine how editors respond to allegations of much more minor problems. This is, in some ways, more informative about how a field responds to factual errors in papers. It’s like judging a prosecutor by how they handle a case where the defendant confesses on the stand. If you can’t score a conviction on a slam-dunk case like that, when can you?
When a paper has an error and someone notifies the editor, the paper should be corrected. I think that’s a relatively uncontroversial statement. And let me be clear on what I mean by “an error.” I am talking about factual errors that are entirely indisputable. There are many subjective parts of papers, and different people with the same data and the same topic may write wildly different papers. So I’m not talking about decisions on what analysis to use, how to clean the data, or even how to interpret results. No, I’m talking about something extremely simple. I’m talking about my own name.
Yes, there’s actually a paper that cites me in a table but spells my name as “Jack” instead of “Jacob.” This paper, “Do sanctuary policies increase crime? Contrary evidence from a county-level investigation in the United States,” by Ascherio, was published in 2022 in the journal Social Science Research and has a very helpful table describing the variables used and their sources. It uses some of my FBI data and says my name is Jack Kaplan. This is an extremely simple mistake, and I’m sure Dr. Ascherio did it accidentally. I’ve certainly spelled people’s names wrong, even in presentations. So there’s an error in the paper, and it’s indisputable that it’s wrong and needs to be corrected. And it’s an easy fix: just replace the “k” with “ob,” and my name is spelled correctly. What better opportunity could there be to demonstrate editors fixing mistakes when they’re alerted to them?
So I emailed the editor, Dr. Cao, about it, expecting it to be promptly corrected. I did this on August 9th, over eight months ago. It still hasn’t been updated, and I haven’t heard anything from him since his response the day after my email, which promised to “discuss it at our next meeting.”
So it’s been over eight months, and this - the most trivial error possible - still hasn’t been fixed. This example is so trivial that it almost feels made up. It isn’t challenging the paper’s findings or alleging that any fraud or anything unethical happened. It’s not saying the author, reviewers, or editor failed in any way. There’s no reputational risk to the journal in admitting this issue. It’s just a simple typo. But it’s still been over eight months - and counting! - without a correction.
Okay, I know what you may be thinking. This is so trivial that waiting a long time to fix it is fine. It may be incorrect, but it isn’t misleading to people reading the paper. They’re not going to walk away thinking something is wrong with the research. Let’s move to my second example, where readers of the paper will not only get misleading information about the data but will completely misinterpret one of the results.
This paper, “Can Victim, Offender, and Situational Characteristics Differentiate Between Lethal and Non-Lethal Incidents of Intimate Partner Violence Occurring Among Adults?” by Overstreet, McNeeley, and Lapsey, published in Homicide Studies in 2020, uses FBI NIBRS data to examine which factors predict intimate partner homicide relative to aggravated assault. One of the variables in NIBRS is the victim’s residence status. The authors explain that this variable is “a binary variable indicating whether the victim was a legal resident of the United States (coded as 0) or a non-legal resident (coded as 1).” They go on to say that “The majority of victims were legal residents of the United States (91%).” The effect of this variable in their analysis was not significant and was never mentioned in either the results or discussion section.
The problem is that this variable has nothing to do with whether the victim “was a legal resident of the United States.” It is just about whether the victim lived in the jurisdiction where the crime occurred. In the FBI’s manual for this data, they say explicitly that it’s wrong to think about this variable in terms of legal residency: “Resident status does not refer to the immigration or national citizenship status of the individual. Instead, it identifies whether individuals are residents or nonresidents of the jurisdiction that the incident occurred.”
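For anyone cleaning this variable themselves, the correct recode is trivial. Here’s a minimal sketch, assuming the common R/N/U coding for the victim segment’s resident status field - the file name, column name, and codes are illustrative, so check the codebook for your extract:

```python
import pandas as pd

# Illustrative codes - verify against the codebook for your NIBRS extract.
resident_status_labels = {
    "R": "Resident of the jurisdiction where the incident occurred",
    "N": "Nonresident of that jurisdiction",
    "U": "Unknown",
}

# Hypothetical file and column names, as in the earlier sketch.
victims = pd.read_csv("nibrs_victim_segment_2019.csv")
victims["resident_status_label"] = victims["resident_status_of_victim"].map(
    resident_status_labels
)
print(victims["resident_status_label"].value_counts(dropna=False))
```

Note that nothing in the variable refers to immigration or citizenship status; it is purely about living inside or outside the reporting agency’s jurisdiction.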
So first, this is a failure by the authors, who were completely wrong in understanding and explaining one of their variables. It is also a failure by the editor(s) and reviewers who missed this fundamental error, though unless you are quite familiar with NIBRS data, it is an easy thing to miss. And we shouldn’t expect editors and reviewers to check every definition provided in a paper - it’s up to the authors to do this checking. This is a reasonably substantial mistake, as readers of the paper will misunderstand what the variable means and be misled on two fronts by the analysis. They’ll think they know the effect of legal residency when they don’t, and think they don’t know the effect of living in the jurisdiction when they do.
While alarming, this is still a mistake that’s very easy to fix. Again, it’s indisputable that the definition is wrong - we even have the FBI themselves saying so! There’s again no allegation of fraud or misconduct by the authors. They should have been more careful and checked the data manual, but there’s nothing to suggest that they did anything other than make a mistake. Every day it’s published - and it’s been available since late September 2020 - it misleads readers. So, easy fix? The image below shows my email to the journal’s editor and her response. I emailed in late May of 2022 and, as with the email to the Social Science Research editor, I got a quick response promising that the editor would look into it. As before, I never got a follow-up message, but this time there was a correction to the paper. Or at least a partial correction.
On the journal’s page for the article, there is a link that says “View correction.” Click that link, and it will show a two-paragraph block of text (following a citation for the paper) dated July 19th, 2022 - about two months after my email - explaining the incorrect text and what it should be. I’ve reproduced that page in the image below. The corrected text now accurately describes the variable. But that’s all there is to the correction - just that one brief page. Look at the article’s main page or the PDF; the incorrect text remains.
This is an inadequate correction. The vast majority of readers, I believe, aren’t going to check the page to see if there is a correction note. They’re just going to read the paper on the site or download the PDF (which has no indication that there was a correction). Some people, such as those without direct journal access, will have the paper sent to them over email, so they wouldn’t even see the page that says there’s a correction. It’s the bare minimum to say, “We made a correction,” while not changing the actual paper at all.
I think these two examples are enough to demonstrate that the current process of correcting factual errors in papers is insufficient - you could argue that it barely even exists. These two examples are both extremely easy mistakes to fix. And yet only one was fixed, and that was only a partial fix two months later. Mistakes happen. While everyone involved in research - authors, reviewers, editors - should be more careful in their work, mistakes will still occur. And when they do, editors need to fix them as quickly as possible. If research is to be helpful in the real world beyond just accumulating citations - and that’s a debatable assumption - then it’s entirely unacceptable to keep factual errors in papers.
Failure to correct errors in papers is also actively harmful to the discovery of future mistakes. Why should anyone take the time to inform an editor of an error if they know it won’t be corrected - or will take many months? Reporting an error can take substantial time: confirming that it is, in fact, incorrect, and gathering the documentation to show the editor why it’s wrong and what it should be. This is true for these minor factual errors; it is even more true when you try to inform an editor of a significant error in a paper, such as incorrect data or analyses.
It would be unfair of me to end this post without talking about some reasons why editors are slow or reluctant to make changes to published papers. I could talk about how science should be a careful and deliberate process. How checking mistakes can take a lot of time. Or that some editors may think that the way to improve research is through better new papers, not correcting old ones - spending time looking at published papers means there is less time to spend on new ones. And, of course, editors’ time is precious, so they can’t be expected to fix an issue immediately. Maybe I’d end with how journals compete with each other for submissions and prestige, so being more willing to admit errors puts you at a competitive disadvantage. Not being an editor myself, I’m sure there are reasons I can’t even think of.
But ultimately, I don’t think it matters. The examples I’ve given are indisputable factual errors. Each can easily be shown to be wrong, and the correct answer is a very simple fix to the paper. The Jack Kaplan example is, of course, laughably trivial and makes absolutely no difference to the paper. The resident status example is much more serious, as a variable is defined completely incorrectly and, even after the paper was corrected, the vast majority of readers will still read the incorrect text. If the resident status variable had been statistically significant, it might have ended up as an even larger part of the paper - as it was, the variable was not statistically significant and was not mentioned in the results or discussion section. Neither of these examples requires a lengthy investigation, so they could be fixed immediately. To be clear, I don’t think they should literally be fixed immediately. Instead, to be more efficient and not rush things, I think fixes should be made at regular - and frequent - intervals, such as once a month.
I’ll end this post with a thought on how the current system effectively deters people from reporting errors. As criminologists, we should be aware that deterrence requires punishment that is swift, certain, and fair.
I’m not referring to the authors whose paper has the issue as the ones who should be punished. Except in cases of intentional fraud, we should certainly not be treating the authors like they’re the bad guys. That’s counterproductive and, well, just mean. Instead, I’m referring to deterrence in the sense that the failure to have swift, certain, and fair consequences when a paper has factual errors is in fact deterring people from reporting those errors. Since people informing editors about errors are never (at least from what I’ve seen) publicly acknowledged, the only benefit to the person reporting is that published research is a little better.

There are real costs to this reporting. Some editors may not be happy to be told that there are issues in a paper and that it needs to be corrected. This could be especially true if they were the editor on that paper. Similarly, authors may be offended that you say their paper has errors, possibly leading to it being corrected (or, in serious cases, retracted). Given the immense power that editors have, and how interconnected criminology is - you never know who is connected to whom - I think a lot of people would prefer to just ignore an issue rather than risk even a small chance of hurting their publication chances at that journal. Put together, why should someone take even a 1% risk of damaging their career for an outcome that may take months or years (swift), if it comes at all (certain), and may not even be an appropriate response to the issue (fair)?
You may note that I didn’t say whether I emailed the editor of Environmental Research to report that paper’s major issues with its data. I didn’t. My experience dealing with these issues - both from the examples in this post and others - has effectively deterred me from continuing. The Environmental Research paper is also a more serious case than the other examples in this post, as the authors were dead wrong about their primary dataset, likely requiring a completely new analysis and changes to the data, results, and discussion sections. So it’s a much larger ask to correct than just rewriting a word or a few sentences.
Science is supposed to be the self-correcting process of understanding the world. If our journals don’t promptly (or at least within eight months) fix indisputable factual errors in papers, is our field really a science?
Ethnicity is available for victims and for arrestees. Arrestee information is in a separate table from offender information, and an offender who is arrested will appear in both tables.
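For anyone working with these files, a rough sketch of that relationship (file and column names are again hypothetical):

```python
import pandas as pd

# Hypothetical file and column names for the offender and arrestee segments.
offenders = pd.read_csv("nibrs_offender_segment_2019.csv")
arrestees = pd.read_csv("nibrs_arrestee_segment_2019.csv")

# An offender who was arrested appears in both tables. Merging on the
# incident identifiers links arrest records back to that incident's
# offender records (tying a specific arrestee to a specific offender
# within a multi-offender incident takes more care than this).
arrested = offenders.merge(
    arrestees,
    on=["ori", "incident_number"],
    how="inner",
    suffixes=("_offender", "_arrestee"),
)
print(arrested.head())
```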
I take no stance on the quality of the paper as a whole, so I’m just taking these results at face value.
Sometimes written as swift, certain, and severe, or with fancier words than swift or certain.