Take me some credit-ville

On Wednesday the 16th, I attended an event at the Institute on Public Safety and Social Justice at the Adler School of Professional Psychology. The title of the event was “Attention Felons: Chicago’s Project Safe Neighborhood” (PSN). One of the speakers at the event was Dr. Andrew Papachristos, who co-authored the evaluation of the project during his time in Chicago. The evaluation showed that PSN was successful at reducing gun crime in the west side neighborhoods where it operated. The funny thing about social science research in Chicago is that you never really hear about a project evaluation showing that a project was not successful. Weird, isn’t it? PSN might be a great project, but with all the glowing evaluations of the other projects running in Chicago, it is hard to trust the claimed results of any of them.

When the presentation was over, I asked how the evaluators of the program were able to control for other possible factors, such as differing policing strategies in the districts they observed versus their control districts. To his credit, Dr. Papachristos basically said that they could not control for most of the factors I mentioned. That got me thinking that all these programs, from criminal justice agencies, community agencies, and now the feds, might just be taking credit for the same occurrence, the reduction in Chicago’s homicide rate, when in reality most of them likely had little or nothing to do with it.

It is important for academics, politicians, and the public to understand that identifying the precise cause of a complex social phenomenon such as crime is really more art than science. Unfortunately for Chicagoans, most of the data that would allow anyone to judge the art/science is available only to the people doing the evaluations. Even Papachristos, in his answer to my question, said that he had tried to obtain a few other types of data in an effort to consider other external effects, and he could not because that data was being examined by other academics. I guess that means a particular data set is off limits to everyone else while one academic is using it.

Key to understanding the limits of social science research, and especially of project or program evaluations, is that researchers can almost never truly account for all the possibilities that could have been responsible for the outcome. In social science research these possibilities are called external effects. Here is just a short list of possible external effects that could account for the drop in violent crime in Chicago over the last 19 years or so (crime has been dropping in Chicago since the very early 1990s):

  • The Chicago Alternative Policing Strategy (CAPS)
  • Gentrification
  • The economic changes during this period that greatly improved employment opportunities for the youth of Chicago
  • New programs put into place by the Cook County State’s Attorney’s Office
  • Increased incarceration
  • Better re-entry programs that help ex-offenders reenter society
  • Improvement in schooling by the Chicago Public Schools that has prepared Chicago’s youth to enter the job market
  • Tougher sentencing policies in the Cook County Circuit Court
  • Redevelopment of the Chicago Housing Authority
  • Better work by Chicago Police Officers independent of any specific program

Now any of these ten could possibly account for some of the drop in crime and violence in Chicago (although I seriously doubt most had any effect). The reality is we will never really know. We must be careful when an individual academic evaluates a program and finds that it is working. Public policy is too often driven by this research without the benefit of additional independent researchers corroborating the results. Look at most major programs working on crime and violence in Chicago and you will find some academic who has done an evaluation of the project with only-too-glowing results. In Chicago there seems to be an epidemic of this type of research.

I am not saying that no project evaluation is worthy of some degree of trust; some are actually valid to one degree or another. But we need to be cognizant of the role restricted access to data plays in anyone’s ability to evaluate how any agency or project within the criminal justice sphere is working. Restrictions are in place to keep prying eyes off data that would detail the patterns and practices of criminal justice agencies. The same goes for evaluating some community-based organizations, because most of the data you need to evaluate their work comes from criminal justice agencies. The agencies typically do not provide access unless they are fairly sure of a study’s outcome before it even takes place. This gatekeeper function allows criminal justice officials to regulate what data is used to evaluate their own practices and the practices of the agencies they work with. All project evaluations need to be taken with at least a teaspoon of salt, sometimes a gallon.