The economist who wants to work on legal questions needs to know something about law, but he does not need a J.D.1

Analysis of jury selection, in cases where some identifiable group of the population allegedly has been excluded, historically established the statistical approach to discrimination in selection of all kinds. The leading article, by Michael Finkelstein, to which analysts still pay homage today, was published in 1966; it provided the basis for the Supreme Court's 1977 finding of jury discrimination in Castaneda.2 As this essay explains, Finkelstein's analytic approach was inappropriate then and remains so.

What I call Finkelstein's "analytic approach" is the comparison of a number describing the outcome of a process with a number describing the standard that process is expected to meet. When this "standard" is a characteristic of a starting population (say, the percent Hispanic in a population), and is compared with the value of that characteristic among those selected (say, the percent Hispanic on the jury), it is called a "bottom-line" analysis. Analysts and lay people alike draw an inference from this comparison about the neutrality of the process with respect to this characteristic. Finkelstein's specific method, statistical decision theory, was developed to control the quality of manufacturing output. He erroneously applied it to social science statistics.3 Here I refer to a concept, not a calculation. Experts and attorneys still conceptualize "proof" of discrimination as such a comparison, albeit statistically mediated by reference to other factors. I will explain the errors in this "comparative outcome" approach, regardless of what statistical technique is used to effectuate it.
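To make the "comparative outcome" concept concrete, a minimal sketch of the kind of calculation courts have relied on since Castaneda follows: the observed minority count among those selected is measured against the count the population share would project, in binomial standard deviations. The function name and the numbers below are hypothetical and purely illustrative; this is the calculation the essay criticizes, not an endorsement of it.

```python
import math

def bottom_line_z(n_selected: int, n_minority: int, p_population: float) -> float:
    """How far, in binomial standard deviations, does the observed minority
    count among those selected fall from the count projected by the
    population share? This is the 'bottom-line' comparison in sketch form."""
    expected = n_selected * p_population
    sd = math.sqrt(n_selected * p_population * (1 - p_population))
    return (n_minority - expected) / sd

# Hypothetical numbers: 500 jurors drawn from a population that is 40%
# Hispanic, of whom 150 are Hispanic. Expected count is 200.
z = bottom_line_z(500, 150, 0.40)
print(round(z, 2))  # about -4.56
```

A gap of two or three standard deviations is conventionally treated as suspect under this framework. Note what the sketch makes plain: the number says nothing about how any individual juror was chosen, which is precisely the essay's objection.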

#### What To Compare?

The Supreme Court calls for a jury, or some other collection in the jury formation process (such as the venire), to be "a representative cross section of the community."4 The key word is "community." The Court has never asked that juries represent the population. Even when accepting population data as evidence, the Court has been careful to say that juries should be representative of the "community."

If the community is not the same thing as the population, what is it? Hans Zeisel and David Kaye, in accepting the Finkelstein framework, describe a difference:5

> The population of eligible jurors will almost never be identical to the adult community as a whole. Valid statutory disqualifications for jury duty, such as conviction of a felony, usually fall more heavily on some identifiable groups than others. Consequently, an analysis that rests on general population figures could be misleading.

Juries, they say, should be analyzed as having been drawn from persons eligible to be jurors. Using the word "community" as part of the wrong measure ("adult community") is confusing. It is impossible to know what this community (persons eligible to be jurors) looks like, although it must look different from the population (by age, race, or ethnicity); otherwise, why have a separate name for it? Zeisel and Kaye, like everyone else who has ever written in this field, seek to compare juries to some standard they cannot measure. This cannot be the right approach.

Equivalent comparisons are also made in other areas of civil rights law, such as when a company's work force is compared with the "labor force" to assess the hiring process. Would every worker in that "labor force" accept a position with the hiring firm? Or be able to perform it? If not, such a hiring standard makes little sense. In public discourse, some people call for public service occupations, such as the police, to "look like" the population they serve, also without concern for qualifications or interest, and without considering how openings occur or how they may be fairly filled.

Some of the jury selection literature has been about what aspect of the jury or workforce is to be measured for this comparison (the master wheel, qualified wheel, venire, or jury). Some has been about how to derive the standard one sets, the model of what the picture "should" look like. Some effort has been expended by labor economists to determine the appropriate occupational classification for a given workforce position, from which the characteristics of the "available" labor force can be measured. Although "availability" analysis compares hires with this "available" workforce, no one thinks that the persons actually available for current job openings are well measured this way. For each kind of selection, if one follows the conventional analysis, a community from which that selection is to be made must be defined, and its components measured, in order to compare actual selections with those the model would project: the standard. Finally, in each field, there is a literature about ways to generate an outcome that meets that standard; that is, like Procrustes, ways to mold the system to the faulty mathematics.

We cannot infer the failure of a social process from a comparison of situations. Such a comparison does not tell us how an outcome was achieved. Not knowing who did what, we cannot distinguish improper from proper selection. Inference of discrimination from comparing situations should not be accepted as "proof" in litigation. There must be a better way. There is. I will describe it.

Statistical analyses of discrimination have been misconceived not because the data may be insufficient, and not because we lack appropriate statistical methods, but because analysts do not adequately translate the legal question into a statistical question. Mesmerized by outcome comparison methodology, they lack a sufficient understanding of what they should be analyzing, and therefore how they should analyze it.

#### The Failure of Comparative Situation Analysis

________________________
* President of Longbranch Research Associates (www.LongbranchResearch.com). Although I am responsible for all content, I acknowledge and appreciate help from Mark Michelson, Michael O'Hare, Marc Rosenblum and Harry Weller.
1 Posner, "The Present Situation In Legal Scholarship," 90 Yale L. J. 1113, 1130 (1981).
2 Finkelstein, "The Application of Statistical Decision Theory to the Jury Discrimination Cases," 80 Harv. L. Rev. 338 (1966); Castaneda v. Partida, 430 U.S. 482 (1977).
3 The method described by Finkelstein has several other names, including Statistical Process Control and Acceptance Sampling. It was developed in the 1920s by Walter A. Shewhart and others, and was used in U.S. factories during World War II to detect batches of faulty products based on a sample. It became widely practiced in Japan after the war, due largely to the influence of W. Edwards Deming. The mathematics requires that there be a known, certain standard (a specification) and then a production process. A parameter measured in a sample of the goods is compared with its manufacturing specification. The size of the difference measures the accuracy of the process. The analogy to selection of jurors or workers fails, as there is no such certain standard, and the institution does not have absolute control of the inputs to the process.
4 Taylor v. Louisiana, 419 U.S. 522, 528 (1975). Earlier: "It is part of the established tradition in the use of juries as instruments of public justice that the jury be a body truly representative of the community." Smith v. Texas, 311 U.S. 128, 130 (1940). See also Duren v. Missouri, 439 U.S. 357 (1979).
5 Zeisel and Kaye, Prove It With Figures: Empirical Methods in Law and Litigation, 183 (1997).

Stephan Michelson, PhD, has spent over 45 years working in the field of statistics, analyzing complex legal issues, from developing data, to analysis, to presentation as an expert witness. President of Longbranch Research Associates (LRA), Dr. Michelson is responsible for staff and facilities administration and quality of product output. He is also the director of litigation analyses and research projects. He spent many years serving on the faculty at Reed, Harvard, the Brookings Institution, and the Urban Institute.