Individualized Justice and Statistical Reality
-- Posted by Neil H. Buchanan
Last week, as Professor Dorf and I were (like everyone else) puzzling over the events in Ferguson, Missouri, I continued to think about the problem of isolated incidents. Even though it is plainly true that the U.S. criminal justice system -- to say nothing of society more generally -- is stacked against the poor and people of color, that obviously does not mean that every incident of white-on-black violence happened because of the victim's race.
Indeed, we cannot even assume that the victim of a supposed assault was actually assaulted, as the infamous Tawana Brawley case reminds us. Although it seems more than plausible, given all of the facts that are currently known about the incident earlier this year in Ferguson, that this was yet another example of race-infused tragedy and police overreaction (in a situation where a similarly situated white man would almost certainly not have been killed), we cannot be sure. And that was why I found myself reading about the grand jury's decision not to indict in the Ferguson case and thinking, "Well, they have more facts than I have. Maybe this was an instance in which there really was no crime." I doubted it, but (unlike the O.J. Simpson verdict, for example) this does not strike me as a case in which a clear, specific injustice has occurred. A tragedy? Yes.
And that is about as much as I had to say about the matter. Professor Dorf was then able to make a sustained argument in his post this past Friday, comparing the Ferguson matter to the infamous Supreme Court decision in McCleskey v. Kemp. McCleskey was the 1987 case in which a majority of the Court refused to act on an exhaustive statistical study showing that the race of the victim (but, interestingly, not the race of the perpetrator) was a determining factor in the imposition of the death penalty. Stripped of statistical nuance, the "Baldus study" said: Killers of white victims get the death penalty, while killers of black victims don't.
As Professor Dorf's post discusses, the possible use of statistical inference in the two cases is quite different, essentially because of the consequences for individuals of taking the statistical inference seriously. In the Ferguson case, the concern was that an actually innocent white police officer would be indicted (and possibly convicted) because of broad evidence that white officers shoot young black men much more frequently than can plausibly be explained by African Americans' supposed "personal irresponsibility," or whatever code words people use to vilify young black men as a group. Wherever one comes down on the question of how likely it is that the white cop/black man encounter turned tragic because of race, I suspect that everyone would admit that we should be hesitant to convict people of crimes by relying on known biases in the overall system.
By contrast, the stakes in the McCleskey case were essentially the opposite of those facing the Ferguson grand jury. For Mr. McCleskey, the question was whether a jury had sprinted past a life sentence and imposed the "ultimate penalty" because of race. Even if we are not absolutely certain that this particular jury was driven by racial animus, the statistical evidence strongly suggested that it could have been. Had the Supreme Court acted on that inference, the result would simply have been that Mr. McCleskey's death sentence would have been set aside. He would still be a convicted murderer, and his conviction and fall-back sentence would stand. Using statistical evidence in this way, then, would simply be part of a process that makes it appropriately difficult to impose the death penalty. To be clear, I oppose the death penalty in all instances. Even people who disagree with me about that, however, generally agree that it should be imposed only with great reluctance. "Maybe this is partly the result of systemic racism" seems like something that ought to contribute to society's reluctance to kill.
Turning from freedom-or-imprisonment and life-and-death matters, I now eagerly change the subject to one that is much less fraught, but that still involves the question of how to use statistical facts in the search for individualized justice. A few weeks ago, in a post here on Dorf on Law, I noted that a law-and-economics idea called the "punitive multiple" had failed to take hold in U.S. civil courts. In the context of that earlier post, I was simply using that example to show that the great influence that law-and-econ scholars have had within law schools has not at all translated into similar influence in the real world. Here, the "punitive multiple" (which I'll refer to as PM, and which I will explain below) raises a different issue.
Frequent readers of this blog are well aware that I am most definitely not a fan of orthodox economics, and especially of its offshoot in the legal academy. Even so, that does not mean that I categorically disagree with everything that comes out of the law-and-econ trenches. PM always struck me as an appealing theory, because it is a plausible response to the problem of large corporations (and other repeat players) gaming the civil justice system. The idea is simple: If a company is, as a matter of course, likely to injure a large number of victims, then that company will be sued by those victims and, where it is appropriately found to be liable, forced to pay compensation to those victims. No victim should be overcompensated, the theory goes, because that would not correctly set the incentives of the company in relation to the likelihood of its injurious behaviors actually harming people.
So, if my company injures 1,000 people each year, and each injury causes $20,000 in damages, then I should pay each of my victims $20,000, no more and no less. If I know that I will pay for any damages that I cause, then I will take appropriate care. But in the real world, we know that there are large numbers of people who will never sue in the first place (perhaps because they are uncomfortable with the "litigious culture" of America and choose not to contribute to it, or because reliving the events would be too painful, or whatever). We also know that some people will lose cases that they should win, because of bad lawyering, or disappearing witnesses, or biased judges, or jurors who are unsympathetic to the "culture of victimization," and on and on. (There is some evidence of non-meritorious cases actually winning in court, but even if taken seriously, that evidence merely changes the degree of the overall under-enforcement problem, not its direction.)
Imagine that only 1% of the people with meritorious claims actually bring and win cases each year. If limited to compensatory damages only, the company is able to inflict $20,000,000 worth of harm while paying only $200,000 in total damages to the ten people who made it through court. The punitive multiple is the reciprocal of the probability that a meritorious claim is actually brought and won, which in this case means PM = 1/0.01 = 100. If each winner's total damages are set by multiplying PM times the compensatory damages, then each victim will be paid $20,000 in compensatories and $1,980,000 in punitives, for a total of $2,000,000 -- so the ten winners together collect $20,000,000, exactly the harm the company caused.
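To make the arithmetic concrete, here is a minimal sketch in Python. The numbers are just the hypothetical ones above, and the punitive_multiple helper is my own illustrative label, not a formula drawn from the law-and-econ literature or from any court's methodology:

```python
# Illustrative sketch of the punitive-multiple (PM) arithmetic.
# All figures are the hypothetical ones from the example above;
# nothing here reflects any actual court's damages methodology.

def punitive_multiple(win_probability: float) -> float:
    """PM is the reciprocal of the probability that a meritorious
    claim is actually brought and won."""
    return 1.0 / win_probability

victims_per_year = 1_000
harm_per_victim = 20_000          # compensatory damages per injury, in dollars
win_probability = 0.01            # share of meritorious claims actually won

pm = punitive_multiple(win_probability)              # 1 / 0.01 = 100
winners = round(victims_per_year * win_probability)  # 10 successful plaintiffs

total_per_winner = pm * harm_per_victim                    # $2,000,000 each
punitives_per_winner = total_per_winner - harm_per_victim  # $1,980,000

total_harm = victims_per_year * harm_per_victim      # $20,000,000
total_paid = winners * total_per_winner              # $20,000,000

print(f"PM = {pm:.0f}")
print(f"Each winner: ${harm_per_victim:,} compensatory "
      f"+ ${punitives_per_winner:,.0f} punitive")
print(f"Total harm: ${total_harm:,} vs. total paid: ${total_paid:,.0f}")
```

The point of the last line is the whole argument: under these assumptions, the tortfeasor's total payout equals the total harm inflicted, which is exactly the deterrence condition the theory is after.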
The logic and simplicity of this theory make the little economist who is still hiding inside my chest swell with pride. The tortfeasor cannot complain, because he is being made to pay for the harms that he has caused, and all is right with the world. Sure, 990 victims receive nothing, and 10 get one hundred times more than they should have received, but that is better than the alternative.
We know, as I noted above, that courts and legislatures have roundly rejected this approach. Punitive damage rules (statutory and judicially created) require some especially vile or depraved act before punitives can be imposed, whereas the PM approach says nothing about the tortfeasor's morality. Under-enforcement of tort claims is thus tolerated in the real world, because our legal rules require the money to be paid on an individualized basis, from the specific wrongdoer to the specific victim, in amounts that reflect the specific bad act (possibly enhanced by even worse acts or motives).
We thus treat tortfeasors the way we treated the police officer in Ferguson, rather than the way we should have treated the jury's sentence in McCleskey's case. The same motive that apparently underlies the desire for individualized justice in a criminal context -- "We cannot take away a man's freedom based on mere statistical inference" -- carries over into the civil context -- "We cannot force a company to pay more to a victim than the harm caused to that victim, unless we have a video of the CEO acting like Mr. Burns from The Simpsons, inflicting harm and saying, 'Excellent!'"
Of course, there are many practical problems with the computation of punitive multiples. And there are instances in which we do multiply compensatory damages, such as the treble-damages provisions in antitrust cases. Still, the broad pattern is to treat civil damages as a matter of individualized justice, ignoring the obvious evidence that this allows a lot of harms to go uncompensated. This seems far too close to the Ferguson end, and too far from the McCleskey end, of the continuum.