

Discover more from Crime Thoughts
The Covid Kings of Salami
I recently attended the American Political Science Association (APSA) conference, and while that’s the only political science conference I’ve attended and I watched very few talks there, I was struck by how different it was from the criminology conferences I’ve been to.
In all the talks I attended at APSA, the authors were interested in big, important questions and collected their own data to answer them. Contrast this with the American Society of Criminology (ASC) conference and it is night and day. At ASC, you're lucky if all the presenters show up and if they all have something more to present than a lit review and maybe some descriptive statistics. At ASC and in criminology more broadly, they ask small questions and give even smaller answers. Why the difference? Plenty of potential reasons exist, but I think the big one is laziness. Or, put another way, an acceptance of lazy work. In criminology, you can get away with doing much less - with doing much lower quality research - and still attain good career achievements. There are few better examples of this than the papers that study the Covid and crime relationship.

The Covid pandemic brought a flood of papers assessing how Covid affected crime. This is an important topic worthy of the highest quality research we can offer. And yet criminology did what it does best: disappoint. Many of these papers were characteristic of what I consider the greatest rot in criminology: an acceptance of doing the least possible work for a paper. These papers looked at only a single city; in some cases, the same authors wrote multiple papers on that city.
Now, you might argue that the moment Covid hit (in an election year, no less!) was exactly the time to publish papers quickly. Crime may be changing, so this is the moment criminology is built for. Our moment to help the country (and the world, as many papers covered non-American countries) understand what is happening and how to respond. We don't have the time to wait for national data or even to collect a representative sample of agencies.
I agree with the first part. Covid's arrival was a moment for criminology to shine, with its knowledge of how to measure crime changes and the proper ways to respond to keep people safe. Instead, as it almost always does, the field took the easy way out.
There is a balance between waiting for as much data as possible - both in terms of long-term data and having many agencies - and publishing quickly to address a public need for information. Criminology (and other fields to a more minor degree) decided that speed trumps having more data. And this is a choice. Let’s be clear on that.
Many researchers use the FBI's Uniform Crime Reporting (UCR) Program data to study crime, but that data is always released a calendar year after the crime happened. So crime data for 2020 was released in the fall of 2021, meaning people would have to wait until then to write anything about 2020 using FBI data. Instead, these authors used data directly from local police agencies. There are data available from dozens of police agencies, including most of the largest agencies in the country (see here for a good, though incomplete, list). There is no requirement to get this data - you go to the link and click download. For most of those agencies, the data is up-to-date as of that day or the day before. In addition to these agencies, people (everyone, not just researchers) can ask or FOIA agencies directly to get their data. In my experience, when FOIAing agencies for basic crime data (just crime type, date, and time), almost all of them will share the data, and will do so within a couple of weeks.
So using only a single city because that’s the only data available is incorrect. When people started writing their single-city papers, data from dozens of other cities was equally available. Why then are there so many papers looking at only a single city? Let’s start with the most obvious reason: they do it because they can. In criminology, if you publish a lot, you will get rewarded for it - you’ll get a job, tenure, and prestige. Sure, publishing in high-ranking journals and having impactful research is better. But quantity beats quality 99 times out of 100. And when that happens, the incentive to salami slice grows. Why write one paper that answers a question well when you can write several papers that all bite at that question? You’re better off writing more papers.
The next reason I've seen people argue is that single-city papers are okay because some cities are unique, and it's vital in its own right to know what's going on in a particular city. This is a decent argument. But again, if data are available for multiple cities, why choose only one? If it's important to look at individual cities, you can still do that in a single paper that presents each city's results separately (preferably in an appendix to avoid having a ton of figures in the main text). You may say this is just p-fishing by running tests for many cities. As I discuss below, I don't think significance matters much for Covid papers, as everything is just correlations anyway. I don't see much difference between one paper looking at ten cities individually and ten papers each looking at one city. Ultimately, it's the same number of tests run. And while talking about the uniqueness of certain cities is beneficial, that's not what I see done in most Covid papers. Sure, they all talk about the specifics of the city or its response to Covid, but almost all papers (not just Covid ones) argue that their sample is important and novel. And in the Covid papers I've read, almost nothing stands out to suggest that a given city is a unique case that needs to be studied specifically.
As a part of this, I've seen people argue that it's important to look at individual cities to help with policy in those cities. Crime is exceptionally local, so it's much more important for a local government to know what's going on in its own community than what's happening on average in some other area or nationwide. There's a lot to like about this argument. Research should be policy-oriented, especially in times of crime change, as we saw during Covid. That said, the research quickly becomes less relevant if it takes a long time to publish - and academic papers can take months to years to publish. If you're trying to help mayors and police chiefs (and others) understand what's happening in March or May of 2020, what good is a paper that won't be released until 2021 or later? Not very much. Working papers - which seem to be a growing trend in criminology - will help with this, but it's also unlikely that many local officials will read a complete academic paper. If authors want a shorter, modified version of a paper for local officials, that's fine, but why should we lower academic publishing to that standard?
This argument also tends to fall apart when you read the papers. The intro talks about how the city is unique; the discussion says how the city can be generalized to other cities. As an example, here’s how the paper A Case Study of Family Violence During COVID-19 in San Antonio, published in Crime & Delinquency by Drs. Leal, Piquero, Kurland, Piquero & Gloyd in 2021, motivated their choice of the city: “[The] City of San Antonio has yet to be examined but is a noteworthy city for this type of research for two reasons.” These reasons are that San Antonio is a large and majority Hispanic city and has an atypically high - and increasing - number of domestic violence murders. The discussion section also notes its novelty by calling San Antonio “a unique context due to its demographic composition and long-term high domestic violence/homicide problem.” Nonetheless, it ends by providing broad policy implications that seemingly apply far beyond the context studied:
“[It] is important that social service and social welfare agencies consider and plan for how future pandemics or other major disasters will affect the incidence of family violence and take appropriate steps now to bolster resources and scale up for the future. Local politicians and local influencers can also use their platforms to educate the public about the problem of family violence, where to get resources, and how exerting physical and emotional forms of violence are unacceptable. Lastly, police officers need to obtain specialized training on how to respond to domestic calls and, in turn, how to provide necessary services to domestic violence victims. Current efforts in policing today vis-à-vis mental health calls has some police officers responding to such calls in concert with mental health or social service professionals. To the extent feasible, some consideration should be given to the use of these types of approaches for domestic violence calls as well.”
The third reason, which I don't think many people will like to admit, is that researchers are bad programmers. In my experience working with people from multiple fields, this is far more of a problem in criminology than in other fields.
If you're not using FBI data - which itself has challenges - then you'll need to spend the time cleaning each city individually. I've cleaned data from nearly every agency that makes their data available, and several agencies that I've FOIAed. It's quite a bit of work to deal with this data. Even seemingly simple things like standardizing the crime categories can be an arduous and risky (in terms of making an error) process. Some agencies are simple, using only UCR-standard crime category names; others have such detailed crime categories that over 1,000 different ones are available. So it's quite a lot of work to deal with this data, and the work snowballs if you aren't a strong programmer and have many cities to deal with. So I think many authors just took the easy route of using a single city, the bare minimum they had to do since they don't have the skills, ambition, or incentives to do more.

A potential fourth reason is that authors write papers about where they live. This is pretty common in both Covid and non-Covid papers where authors study the city they live in or work in. Maybe they focus on these cities because it's just what they're interested in, they have relationships with local government or the police so they have special data access, they want to help them, or it's simply easier to study somewhere you already know.
There's a benefit of working with data and in a context they're familiar with. It helps them avoid mistakes with the data and understand how policies are enacted. The tradeoff is that we're still left with many single-city papers for the field at large, even if they were written with good intentions.

Now, let's get into my actual analysis. I wanted to find the "Covid Kings of Salami," the people who wrote many single-city Covid papers. Ultimately, though, remember who the real King of Salami is: the entire field of criminology. There would be no salami slicing if the field didn't encourage quantity over quality, if editors didn't accept these kinds of papers, if reviewers demanded more of authors, and if researchers had the ambition to answer big questions with big answers. Will that ever happen? I doubt it. To find these kings, I did a semi-systematic review of papers based on simple search criteria and found all the ones that looked at only a single city. The authors on the largest number of papers would be the "kings."
I searched "covid and crime" in Google Scholar and checked every paper in the first 20 pages of results. Each page has ten results, so I looked at 200 papers. For this post, I'm only including papers that quantitatively measure the relationship between Covid and crime. I read the title, abstract, and data section to see which cities were studied. If any of them indicated that only a single city was studied, I stopped and didn't read the following parts. If a paper studied only a single city, I included it in the table at the end of this post. I'm also including Queensland, Australia, even though it is a state and not a city, for a reason I'll explain below. I include both working and published papers; in cases where I found both the working and published form of the same paper, I only include the published version. Some results were introductions to special issues, papers that talked about crime as a metaphor for Covid harming people, qualitative papers, papers that used survey data and not crime data, or papers that talked about crime increases abstractly without including any data.
I keep only academic papers, excluding the small number of government reports.

From this search, I found 56 single-city Covid and crime papers written by 121 authors and published in 32 journals and as working papers. Considering that I looked at 200 papers, this is a considerable share. At a minimum, this is 28% of papers. And that 200 included quite a few papers that weren't quantitative Covid and crime papers. While I didn't record the exact number of these papers, I'd ballpark the actual Covid and crime papers at about 100-125 of the 200. So the 56 single-city Covid papers are about 45-55% of papers. This isn't the whole set of single-city Covid and crime papers, since the search term I used doesn't always capture the right results and 20 pages do not contain every possible paper. But I think it is an adequately large sample.
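These shares are simple arithmetic and can be sanity-checked in a few lines (the 100-125 figure is the post's own ballpark, not a recorded count):

```python
single_city = 56   # single-city Covid and crime papers found
screened = 200     # 20 pages of Google Scholar results, 10 per page

# Share of everything screened
print(round(single_city / screened * 100))   # → 28

# Share of the ballparked 100-125 quantitative Covid and crime papers
print(round(single_city / 125 * 100))        # → 45
print(round(single_city / 100 * 100))        # → 56
```

The upper bound rounds to 56%, so "about 45-55%" is in the right range under these assumptions.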
Table 1 shows, for each of the 23 authors who wrote more than one paper, how many papers they wrote. Since, by definition, you can't salami slice with a single paper, I exclude authors with only one paper from this table. At the top is Alex Piquero, now the Bureau of Justice Statistics director. He was an author on nine papers, 16% of the total. He's followed by Jason Kurland and Jason Payne, each with five papers (9%). Five authors - Andresen, Chen, Hodgkinson, Kim, and Nicole Piquero - each have four papers (7%). Single-city Covid papers peaked in 2021 and have since declined, with nine published in 2020, 31 in 2021, and only 16 in 2022. Some cities are far more common than others: there are five papers about Chicago and four each for an unnamed city in China, Rio de Janeiro, New York City, and Vancouver. Queensland, Australia (which is a state, not a city) also has four papers published about it. All other cities have two or fewer papers.
A pattern begins to emerge if you look at Table 2 at the bottom of this post, which has the authors, journal, year published, and article title for every paper. The people with the most papers often write with each other and write multiple papers on nearly identical topics, as I’ll demonstrate more below. For example, four of Kurland’s five papers were coauthored with Alex Piquero; all of Nicole Piquero’s were with Alex Piquero; Andresen and Hodgkinson wrote all their papers together (one paper also had two additional authors).
So why do I care about salami slicing? It’s pretty standard in criminology, so the field's norms accept it. I care because salami slicing is harmful to the reputation of the field of criminology and to everyone who cares about good research.
First, consider the field-wide reputation damage. I've worked a great deal with non-criminologists during and since grad school, especially since I moved to a political science department about 18 months ago. I used to think that these fields looked down on criminology - as evidence, consider how econ almost never cites criminology papers. I was wrong, though. In my experience (which is certainly not representative), these fields don't think about criminology at all. And for good reason.
Consider, for example, the paper COVID and crime: An early empirical look by David Abrams, an economist at the University of Pennsylvania. The paper was published in January 2021 in the Journal of Public Economics. His paper was a simple descriptive analysis of 13 crime categories for 25 cities. This paper is not much different from many single-city Covid papers except that it's more. More crimes are studied, more cities are studied, there are more robustness checks, and there is more transparency about data sources and methods. It's just better in nearly every conceivable way. Whereas criminology gets similar results through dozens of individual papers, he does it in one 16-page paper. Of course, this isn't a universal truth. There are plenty of good criminology papers (even good Covid papers) and bad econ (and political science and other fields) papers. But I think it's true overall.
Now, just throwing the kitchen sink of data at a question and including everything can quickly get into the realm of p-fishing or p-hacking. Though I don’t think that really matters for Covid papers since they’re all just correlational (since you can’t isolate the effect of Covid beyond that it caused a ton of things to change simultaneously), so the descriptive results matter way more than statistical significance. And splitting up results into dozens of papers, even if written by different authors, doesn’t solve the problems of multiple hypothesis tests. So if you want to know why so many fields - and politicians - ignore criminology, look no further than Abrams’ paper against these 56 single-city papers. Abrams asked a big question and gave a big, satisfying answer. It doesn’t wholly answer how Covid affected crime but goes a considerable distance. With all the single-city papers, you get the same big question but now just a nibble on the answer. If we think about research as illuminating the darkness, Abrams gave us a floodlight, and these single-city papers each give us a wet match.
You might say that we can illuminate more than a single floodlight with enough wet matches. But consider the second harm of salami slicing: it's much more work for everyone involved - except the authors - to collect matches than floodlights. Every single one of the single-city papers (except for the three working papers) has gone through peer review. Peer review is flawed, but just think about the time it took to review these 53 papers.
We'll assume that each paper had one editor and two reviewers and that each spent two hours on the paper. That's probably an undercount, and it assumes that no paper was peer-reviewed and then rejected and therefore reviewed at multiple journals. For all 53 published papers, this is 318 hours of reviewer and editor time, using this likely very low estimate of time spent. That's quite a lot of time. Just consider the opportunity cost. Peer review takes forever, partially because so many papers are being submitted. When editor and reviewer time is spent on these wet-match papers instead of floodlight papers, that's a misuse of resources in most cases. It's likely easier to review - as an editor and a reviewer - and to write - as an author - these short wet-match papers than floodlight papers. But I doubt it's so much easier that it balances out all the extra papers that need to be read, reviewed, and edited. This is all time that could otherwise be spent on work such as research or teaching.

And now, let's think about what happens once these papers are published. Are we supposed to read all of them individually? Assuming that each paper is 20 pages, these 56 papers are 1,120 pages, and you could easily combine many of them. That would make things easier for the reader and for everyone involved in the review process. And it would make the research more meaningful outside of academia, where people aren't going to read dozens of different papers that all try to answer the same question. A running joke in criminology is that reviewers don't actually read the papers - at least Reviewer 2 doesn't. Yet the field continues to publish so many papers - and these all tend to have the same length intro and lit review, even if their data and methods sections are shorter - that it's impossible to keep up, even for researchers who want to.
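The time and page estimates above are just multiplication; here they are spelled out, using the deliberately low assumptions stated in the text:

```python
published = 53        # 56 single-city papers minus the 3 working papers
people_per_paper = 3  # one editor plus two reviewers
hours_each = 2        # per person, likely an undercount

review_hours = published * people_per_paper * hours_each
print(review_hours)   # → 318

papers = 56
pages_per_paper = 20
print(papers * pages_per_paper)   # → 1120
```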
A common trend in my posts is talking about criminology's reputation and how it's crucial to do things that benefit the field's reputation and avoid things that damage it. You might think that I'm totally off base by caring about reputation - that what matters is the actual science, and that it's unimportant (or perhaps a waste of effort) to care about the whole field's reputation. But researchers are essentially just messengers of facts. We do our research, yes, and the facts we produce are often heavily caveated or may be actually wrong, but then we become messengers by writing a paper with our results. Every paper we read effectively requires a massive leap of faith that the paper is actually correct. We can't check the code or the data in almost all cases; even if it were available, nobody has time for that. So we need to read it and hope it's right.
And we do care about reputation, which is why some journals are "highly ranked" and others are not; some researchers are considered "superstars" and others are not. It's shorthand for who to trust. When that reputation is damaged, that trust breaks. And since criminology doesn't have the safeguards that some other fields have - such as open code and data requirements - to maintain trust in the field even when individuals cause problems (such as making up data for multiple papers and having journal editors cover it up), reputation is really all this field has.
To finish this post, let me explain why I call these people the Kings of Salami. Being a salami slicer is (at least supposed to be) bad, so it’d be rude to accuse people of doing that. And, you might say, writing a lot of papers is just being productive, and there are entirely innocent ways to write similar papers that may appear to a cynical viewer to be salami sliced. But I have what I think is pretty good evidence that at least some people at the top of the list of authors intentionally wrote these nearly identical papers. And to be clear, I believe that the tacit - if not explicit - support for salami slicing that criminology has encouraged the unethical behavior I’ll discuss below.
When I say nearly identical papers, I don't mean that the papers have a similar topic or outcome studied. They're all Covid and crime papers, so they will have to be similar to some degree. If you look through the papers in the table below, you'd be forgiven for thinking that some are the same. Some of them really are that similar. But I'm talking about papers that are so incredibly similar to each other that they have the same text, figures, and tables. Papers that massively self-plagiarize.
For example, the papers Exploring regional variability in the short-term impact of COVID-19 on property crime in Queensland, Australia, and COVID-19 and social distancing measures in Queensland, Australia, are associated with short-term decreases in recorded violent crime, both written by Drs. Payne, Morgan & Piquero. These papers were published in 2021 in Crime Science and in 2022 in the Journal of Experimental Criminology, respectively. It seems the authors were also confused about which paper they were writing as they have a considerable amount of identical text in both papers.
Here’s how the property paper describes the study (I’ve bolded the identical text): “In this study, we use officially recorded police data from Queensland, Australia, to explore whether property crime has changed in the context of the COVID-19 pandemic.” And now the violence study: “In this study, we use officially recorded police data from Queensland, Australia, to look for early signs that violent crime has changed in the context of the COVID-19 pandemic.”
The authors write identical or nearly identical text at various other points in the papers. I’ll provide a couple of examples. As you read the examples below, please note the parts that aren’t identical. To me, this is strong evidence of intent. It’s not just accidentally copying over text and forgetting to change it - though even that is a problem. It’s changing a word or phrase so that it isn’t precisely identical, which says to me, “I know this was wrong, so I’m going to change it so it seems not copied.”
Here’s how the papers (the property paper first and then the violence paper) describe Covid containment measures.
In Australia, containment measures to prevent the spread of COVID-19 were introduced incrementally. The entry of foreign nationals from mainland China was banned on 1 February, before further travel bans on Iran, South Korea, and Italy in early March. This was followed by self-isolation requirements on all travelers arriving in Australia introduced on 16 March. Large, non-essential, organized public gatherings of more than 500 people were also restricted from this date, as were indoor gatherings of more than 100 people. Social distancing requirements were also introduced at this time, which requires individuals to maintain a distance of 1.5 m (or about 5 feet) from one another. Australian borders were closed to all non-Australian citizens and non-residents effective 20 March. The following day, the requirement that there be 4 m 2 per person in any enclosed space was introduced. On 23 March large-scale closures of on-premise licensed premises, restaurants and cafes (except for takeaway), entertainment venues and places of worship came into effect. Further restrictions were imposed on a range of other venues, including indoor and outdoor markets, on 26 March, while limits were placed on the number of people who can attend weddings and funerals. Public gatherings were limited to two people (non-family members) from 30 March, and Australians were advised that they were only allowed to leave home for essential shopping, medical needs, exercise, or for work or education. Queensland was the first Australian state or territory to declare a public health emergency under the Public Health Act 2005 on 29 January, 4 days after the first Australian confirmed case, although containment measures were not introduced until the national restriction on large gatherings in mid-March. 
Since the non-essential business, activity and undertaking closure direction was first released on 23 March a series of revisions have been made in line with national requirements, imposing further limits on which venues and businesses may continue to operate. School closure—which have varied from state to state—came into effect on 30 March, remaining open to the children of essential service workers. Queensland borders were closed effective 26 March, with entry limited to Queensland residents, residents of border communities undertaking essential activities and other exempt persons. Non-residents were initially required to self-isolate for 14 days after crossing the border; however, as of early April—the time period for our analyses, restrictions were tightened further and only Queensland residents could cross the border. These restrictions were enforceable by law.
Containment measures have been introduced incrementally (see Ting and Palmer 2020 for a detailed timeline). In terms of a national response, the entry of foreign nationals from mainland China was banned on February 1, before incremental travel bans on Iran, South Korea, and Italy in early March. This was followed by self-isolation requirements on all travelers arriving in Australia introduced on March 16. Large, non-essential, and organized public gatherings of more than 500 people were also restricted from this date, as were indoor gatherings of more than 100 people. At the same time, social distancing requirements were introduced, which required individuals to maintain a distance of 1.5 m (almost 5 ft) from one another. The Biosecurity (Human Biosecurity Emergency) (Human Coronavirus with Pandemic Potential) Declaration 2020 was announced on March 18, followed by a further announcement that Australian borders were closed to all non-Australian citizens and non-residents effective March 20. The following day, the requirement that there be 4 m 2 per person in any enclosed space was introduced. On the 22nd of March, the Prime Minister announced large-scale closures of take-away liquor outlets, licensed premises, restaurants and cafes (except for takeaway food), entertainment venues, and places of worship, which took effect the following day. Further restrictions were imposed on a range of other venues, including indoor and outdoor markets, on March 26, while limits were placed on the number of people who are allowed to attend weddings and funerals. Public gatherings were limited to two people (non-family members) from March 30, and Australians were advised that they were only allowed to leave home for essential shopping, medical needs, exercise, or for work or education. 
Queensland became the first Australian state or territory to declare a public health emergency under the Public Health Act 2005 on January 29, 2020, providing the Chief Health Officer with broad powers to make directions regarding the types of restrictions that may be imposed to limit the transmission of COVID-19. Since the non-essential business, activity, and undertaking closure direction was first released on March 23, a series of revisions have been made in line with national requirements, imposing further limits on which venues and businesses may continue to operate. School closures came into effect on March 30, remaining open to the children of essential service workers. Queensland borders were closed effective March 26, with entry limited to Queensland residents, residents of border communities undertaking essential activities, and other exempt persons. Non-residents were initially required to self-isolate for 14 days after crossing the border; however, as of early April, restrictions were tightened further and only Queensland residents could cross the border. These restrictions are enforceable by law.
And here is how the two papers explain why crime should be affected by Covid (again, the property paper first, then the violence paper).
Location data reported by Google on community mobility has tracked how often and for how long people travel to different location types, compared with a baseline value (the median value for the same day of the week in January and early February). Figure 1 shows these changes over time in Queensland, and demonstrates that there have been significant reductions in visits to public spaces, including parks (down by an average of 33% over the month of April), retail and recreation premises (down 37%), workplaces (down 36%) and transit stations (down 59%). Conversely, the time spent in residential locations increased by an average of 15 percent.
Location data reported by Google on community mobility has tracked how often and for how long people travel to different location types, compared with a baseline value (the median value for the same day of the week in January and early February). Figure 2 shows these changes over time and demonstrates that there have been significant reductions in visits to public spaces, including parks (down 43% as of April 30), retail and recreation premises (down 39%), workplaces (down 34%), and transit stations (down 60%). Not surprisingly, the time spent in residential locations has increased by 18%, which may help to increase the opportunity for interpersonal violence.
Okay, you may be saying. This looks bad, but self-plagiarism is not that serious. They didn’t steal from other people, after all. And it’s only one example. Do you have anything else? I do. Another pair of papers - written by the same two authors, using the same data for the same city, and looking at nearly the same outcome - self-plagiarizes a considerable amount. This pair is Drug offence detection during the pandemic: A spatiotemporal study of drug markets, published in the Journal of Criminal Justice in 2021, and Drug markets and COVID-19: A spatiotemporal study of drug offence detection rates in Brisbane, Australia, published in the International Journal of Drug Policy in 2022. Drs. Payne and Langfield authored both papers.
Not only do these papers have the same text in parts, but they have, to my eye, an identical image.
And an identical Table 1. First is the 2021 paper, and then the 2022 paper.
I’ll provide only one example for this pair of papers: the bulk of both papers’ methods sections, which also gets into some of their results. For each quoted chunk, the 2021 paper comes first, then the 2022 paper. Again, the identical text is bolded.
GAM models are a flexible method for capturing non-linear relationships and have recently been used to model the spatiotemporal nature of SARS-CoV-2 infection rates in the UK, for example (see Wood, 2021). Specifically, geospatial correlation (i.e. the predictable statistical relationship between neighbouring locations) is captured through a spatial thin-plate spline (see Fig. 2) and the long term trends and seasonality of the series (i.e. the predictable patterns across the 12 months of each year) are captured through a cubic regression spline. In this case, it is easiest to think of GAM tensor products and splines as ‘smoothed’ parameters representing non-linear effects (see also Wood, 2017).'
GAM models are a flexible method for capturing non-linear relationships and have recently been used to model SARS-CoV-2 infection rates in the UK (see Wood, 2021) as well as the spatiotemporal patterns of such things as mean global temperatures (see, for example, Peristeraki et al., 2019). Specifically, geospatial correlation (i.e. the predictable statistical relationship between neighbouring locations) is captured through a spatial thin-plate spline (see Fig. 2) while the long term trends and seasonality of the series are captured through a cubic regression spline. In this case, it is easiest to think of GAM tensor products and splines as ‘smoothed’ parameters representing non-linear effects (see also Wood, 2017). For spatial correlation the smoothed effect is captured through the cross-product of longitude and latitude (measured at the centre-most point of each SA2) and each point in the spatial field is permitted to vary in overall trend and seasonality (for more information on GAM models see Wood, 2017).
Five location-specific covariates were included to explain any residual variance not captured in the geospatial and temporal correlation. These covariates were: (1) the relative preponderance of businesses providing food, accommodation and retail services; (2) the relative economic prosperity of local residents (coded such that higher values indicate lower levels of disadvantage); (3) the relative education and occupation status of local residents; (4) the proportion of local residents that are aged between 15 and 24 (i.e. young adults); and (5) the proportion of local residents that were born overseas. Our choices here were informed by our reading of the prior literature which has demonstrated that drug markets tend to establish in or nearby to areas of low collective efficacy (Forsyth, Hammersley, Lavelle, & Murray, 1992; McCord & Ratcliffe, 2007), high pedestrian and business activity (see, for instance, Barnum, Campbell, Trocchio, Caplan, & Kennedy, 2017; Bernasco & Jacques, 2015; Eck, 1995; Haracopos & Hough, 2005; St Jean, 2007; Willits, Broidy, & Denman, 2015) and where local residents are disproportionately young people or of low socioeconomic status (Willits et al., 2015).
Five location-specific covariates were included to explain any residual variance not captured in the geospatial and temporal correlation. These covariates were: (1) the relative preponderance of businesses providing food, accommodation and retail services; (2) the relative economic prosperity of local residents; (3) the relative education and occupation status of local residents; (4) the proportion of local residents that are aged between 15 and 24 (i.e. young adults); and (5) the proportion of local residents that were born overseas. Our choices here were informed by our reading of the prior literature which has demonstrated that drug markets tend to establish in or nearby to areas of low collective efficacy (Forsyth et al., 1992; McCord & Ratcliffe, 2007), high pedestrian and business activity (see, for instance, Barnum et al., 2017; Bernasco & Jacques, 2015; Eck, 1995; Haracopos & Hough, 2005; St. Jean, 2007; Willits et al., 2015) and where local residents are disproportionately young people or of low socioeconomic status (Rengert, 1996).
As for the local business data, we have used SA2 level estimates reported by the Australian Bureau of Statistics (ABS) and scaled these estimates into a relative measure for the Brisbane LGA. Where the relevant measure for an SA2 is coded as zero, this means that the number of businesses in that SA2 is equal to the average of all SA2s in the Brisbane LGA. A value of one (1) indicates an SA2 where the number of businesses is one standard deviation above the average for the region. For economic, education and occupation indicators we use the Index of Economic Resources (IER) and the Index of Education and Occupation (IER) derived by the ABS from the most recent Census of the Population (Australian Bureau of Statistics, 2016). As with the local business data, we have standardised this to the average of all SA2s in the Brisbane LGA such that zero indicates an area where the IER or IEO score was equal to the average score across the region. Finally, we use Census data to measure the number of young adults (aged 15–24) and the number of residents born overseas as percentages of the local population.
As for the local business data, we have used SA2 level estimates reported by the Australian Bureau of Statistics (ABS) and scaled these estimates into a relative measure for the Brisbane LGA. Where the relevant measure for an SA2 is coded as zero, this means that the number of businesses in that SA2 is equal to the average of all SA2s in the Brisbane LGA. A value of one (1) indicates an SA2 where the number of businesses is one standard deviation above the average for the region. For economic, education and occupation indicators we use the Index of Economic Resources (IER) and the Index of Education and Occupation (IER) derived by the ABS from the most recent Census of the Population (Australian Bureau of Statistics, 2016a). As with the local business data, we have standardised this to the average of all SA2s in the Brisbane LGA such that zero indicates an area where the IER or IEO score was equal to the average score across the region. Finally, we use Census data to measure the number of young adults (aged 15-24) and the number of residents born overseas as percentages of the local population.
For modelling purposes, we execute the analysis on data from April 2016 to March 2020 (48 months), holding out data for the three months when COVID-19 restrictions were in place (April–June 2020). We then use the model parameters to predict the crime rate for each location and each month of the COVID-19 lockdown. Recognising the inevitable presence of prediction uncertainty and the likelihood that the degree of this error will vary from location to location, we use a Bayesian simulation procedure to compute 10,000 simulations of the model predictions at each location. From this we generate 95% prediction intervals to reflect the range of values within which we should confidently expect the true value to fall had the COVID-19 lockdown not occured. Unlike the ARIMA methods which have so far dominated the COVID-19 and crime literature (see, for example, Ashby, 2020b; Langfield et al., 2021; Payne et al., 2020, 2021), a key advantage of the spatiotemporal GAM procedure is its capacity to simultaneously model spatial and temporal covariances.
For modelling purposes, we execute the analysis on data from April 2016 to March 2020 (48 months), holding out data for the three months when COVID-19 restrictions were in place (April-June 2020). We then use the model parameters to predict the crime rate for each location and each month of the COVID-19 lockdown. Recognising the inevitable presence of prediction uncertainty and the likelihood that the degree of this error will vary from location to location, we use a Bayesian simulation procedure to compute 10,000 simulations of the model predictions at each location. From this we generate 95% prediction intervals to reflect the range of values within which we should confidently expect the true value to fall had the COVID-19 lockdown not occured.
The statistical model parameters and model diagnostics are provided in Table 1. Overall, the model explains approximately 85% of the variance and, when tested, the residuals were found to be randomly distributed, meaning that there was no spatial (p = 0.96) or temporal autocorrelation (0.98) in the error term. In the final model, both spatiotemporal smooth terms are statistically significant (EDF(Spatial/Trend) = 221, p = 0.00 and EDF(Spatial/Seasonality) = 25, p = 0.00). This tells us that there is strong spatial correlation which changes both in the long term (trend) and in a predictable seasonal pattern. Of the location-specific covariates, the model shows that pre-COVID drug offence detections were higher in locations where there are more food and accommodation businesses (b = 0.37, p = 0.00). This means that in areas where the number of food and accommodation businesses was above average (one standard deviation above the mean), the drug offence detection rate was approximately 40% higher. Similarly, detections were higher in locations with higher levels of economic disadvantage (b = -0.68, p = 0.00, i.e. 97% higher in areas where disadvantage was one standard deviation below the mean), but lower in areas where there was a higher proportion of young adults (b = -0.04, p = 0.00, i.e. 4% fewer detections for each percentage increase in the young adults population) and/or residents born overseas (b = -0.02, p = 0.02, i.e. 2% lower for each percentage increase in the proportion of overseas born residents).
The statistical model parameters and model diagnostics are provided in Table 1. Overall, the model explains approximately 85% of the variance and, when tested, the residuals were found to be randomly distributed, meaning that there was no spatial (p = 0.96) or temporal autocorrelation (p = 0.98) in the error term. In the final model, both spatiotemporal smooth terms are statistically significant (EDF(Spatial/Trend) = 221, p = 0.00 and EDF(Spatial/Seasonality) = 25, p = 0.00). This tells us that there is strong spatial correlation which changes both in the long term (trend) and in a predictable seasonal pattern. Of the location-specific covariates, the model shows that pre-COVID drug offence detections were higher in locations where there are more food and accommodation businesses (b=0.37, p = 0.00). This means that in areas where the number of food and accommodation businesses was above average (one standard deviation above the mean), the drug offence detection rate was approximately 40 percent higher. Similarly, detections were higher in locations with higher levels of economic disadvantage (b=-0.68, p = 0.00, i.e. 97% higher in areas where disadvantage was one standard deviation below the mean), but lower in areas where there was a higher proportion of young adults (b=-0.04, p = 0.00, i.e. 4% fewer detections for each percentage increase in the young adults population) and/or residents born overseas (b=-0.02, p = 0.02, i.e. 2% lower for each percentage increase in the proportion of overseas born residents).
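For readers unfamiliar with the approach the quoted passages describe, the counterfactual logic is straightforward: fit a model on the 48 pre-COVID months, hold out the lockdown months, simulate thousands of predictions, and flag observed values that fall outside a 95% interval. Here is a deliberately stripped-down sketch of that logic: a plain linear trend stands in for their spatiotemporal GAM, and every number is fabricated for illustration.

```python
import random
import statistics

random.seed(1)

# Fabricated monthly offence counts for illustration: 48 pre-COVID
# months (April 2016 to March 2020) with a mild upward trend plus noise.
train = [100 + 0.5 * t + random.gauss(0, 5) for t in range(48)]

# Fit a least-squares linear trend, standing in for the papers'
# spatiotemporal GAM smooths.
n = len(train)
t_mean = (n - 1) / 2
y_mean = statistics.fmean(train)
slope = (sum((t - t_mean) * (y - y_mean) for t, y in enumerate(train))
         / sum((t - t_mean) ** 2 for t in range(n)))
intercept = y_mean - slope * t_mean
resid_sd = statistics.stdev(
    y - (intercept + slope * t) for t, y in enumerate(train))

def prediction_interval(t, sims=10_000, level=0.95):
    """Simulate `sims` draws of the model prediction at month t and
    return the central `level` interval."""
    draws = sorted(intercept + slope * t + random.gauss(0, resid_sd)
                   for _ in range(sims))
    lo = draws[int(sims * (1 - level) / 2)]
    hi = draws[int(sims * (1 + level) / 2) - 1]
    return lo, hi

# Held-out lockdown months (t = 48, 49, 50) with fabricated observed
# counts: flag any month falling below its counterfactual interval.
observed = {48: 95, 49: 80, 50: 85}
flags = {t: ("below" if obs < prediction_interval(t)[0] else "within")
         for t, obs in observed.items()}
print(flags)
```

The real papers layer spatial thin-plate splines and seasonality on top of this, but the hold-out-and-simulate skeleton is the same, which is part of why writing a second nearly identical paper takes so little extra work.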
While I won’t show it, to conserve space, another paper authored by Payne - Drug offence detection during the pandemic: An ARIMA analysis of rates and regional differences in Queensland, Australia, by Langfield, Payne & Makkai, published in the Journal of Criminology in 2021 - has a methods section nearly identical to that of the property paper discussed above.
Remember who I said was the true King of Salami: the entire field of criminology. Yes, it’s bad that authors publish salami-sliced papers; they shouldn’t do it. But there are strong incentives to salami slice, and let’s not forget that some papers that appear to be salami are written without that intent. For example, an author may write a paper on a topic using specific data because that’s the only data available, then get access to new data and write a second, similar paper that goes further than the first. To an outsider, this looks like salami slicing even when, in fact, it isn’t. But the effect is the same: many very similar papers requiring writing, reviewing, editing, and reading. It substantially increases the work of research without a corresponding increase in the quality of evidence on the subject.
I say that the field is responsible because the prioritization of quantity over quality encourages both of these problems: intentional salami slicing and honestly writing papers so small and similar that they appear to be salami. The only way to address both is to raise the standard for what gets published. Editors and reviewers should reject papers that use only a tiny subset of the available data. Hiring and tenure committees shouldn’t reward people with many small, salami-ed papers over those with fewer but bigger papers. Editors should run submissions through plagiarism software and reject papers - and potentially ban the authors - when they are plagiarized or self-plagiarized. Editors should also check that the authors don’t already have a very similar paper published elsewhere or as a working paper. This shouldn’t be an editor’s job - editors should focus on judging the quality of submissions, not policing them - but I think it’s unfortunately necessary at this point.
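To be clear about how low the bar is for this kind of policing: a crude self-plagiarism check - shingle two manuscripts into overlapping five-word phrases and compute their Jaccard overlap, which is roughly the idea underlying commercial plagiarism detectors - takes a dozen lines. This is a toy sketch, not any journal’s actual pipeline; the first snippet mirrors a sentence quoted above, the others are made up.

```python
def shingles(text, n=5):
    """Overlapping n-word shingles, lowercased and stripped of common
    punctuation, so trivial edits don't hide the overlap."""
    words = [w.strip(".,;:()'\"").lower() for w in text.split()]
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b, n=5):
    """Jaccard similarity of the two texts' shingle sets (0 to 1)."""
    sa, sb = shingles(a, n), shingles(b, n)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

# The "revision" drops four words from an otherwise identical sentence;
# the third text is an unrelated, made-up methods sentence.
paper_2021 = ("GAM models are a flexible method for capturing non-linear "
              "relationships and have recently been used to model the "
              "spatiotemporal nature of SARS-CoV-2 infection rates in the UK.")
paper_2022 = ("GAM models are a flexible method for capturing non-linear "
              "relationships and have recently been used to model "
              "SARS-CoV-2 infection rates in the UK.")
unrelated = ("We estimate a difference-in-differences model using monthly "
             "agency-level UCR data from 2015 through 2020.")

print(round(jaccard(paper_2021, paper_2022), 2))  # substantial overlap
print(round(jaccard(paper_2021, unrelated), 2))   # 0.0
```

Real detection tools add normalization and database lookups on top of this, but even a toy version like this would flag the paper pairs quoted above.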
I’m asking a lot more people to do a lot more work - not just researchers themselves, but editors, reviewers, and people on various committees. All this individual work will be for the intangible reward of the field improving. And the work will fall disproportionately on those who care about salami slicing - which is to say, those who already do not do it. If you’re offended by me saying that you’re part of the problem simply because you’re a member of the field, good. Do something about it.
From my discussions with grad students and professors, neither salami slicing nor self-plagiarism is considered a big deal - or really a problem at all. If anything, they may be regarded as good: self-plagiarism saves time that could otherwise be spent on research, and salami slicing is just the cost of getting a lot of publications. Clearly, I disagree. But if I’m alone in considering salami slicing and self-plagiarism to be wrong and ultimately extremely damaging to criminology’s already pathetic reputation, then so be it. I’ll be alone, but I’m still right.
I’ve also been to APPAM, a public policy conference, which is far more like APSA than any criminology conference I’ve attended.
I have generally, though not exclusively, FOIAed large agencies, so this may be different for smaller agencies.
I won’t even consider ranking psychology here, as clicking buttons on SPSS is not programming.
An exception to this work is using Matt Ashby’s excellent Crime Open DataBase (CODE), which standardizes data from dozens of agencies, making it quick and easy to work with. However, data is not updated in real-time, meaning you likely need to clean the data yourself if you want the most recent information, especially for papers written shortly after Covid started.
This includes people who have paid relationships with local organizations.
There’s even a “paper” about how Covid vaccines are a “Global Crime Against Humanity.”
In my experience submitting to econ and political science journals, this is a far worse problem in criminology than in those fields.