With all their knowledge, training and experience, expert witnesses can sometimes have difficulty explaining scientific research findings to judges and juries, who often lack any formal training in research methodologies. For many jurors, topics such as “epidemiology,” “control groups,” “confounds,” and “confidence intervals” are completely foreign. This can be problematic when cases hinge on these concepts to prove causation or a lack thereof. Social science research suggests that even judges struggle with these notions, which may affect whether the jury ever gets to hear an expert’s opinion on the matter.
After a brief review of the legal standards governing scientific testimony and the research on judges’ and jurors’ abilities to evaluate it, we provide a few suggestions and examples for aiding the triers of fact in understanding this critical information.
What Are the Evidentiary Standards for Admitting Scientific Evidence at Trial?
The introduction of scientific research to a trial typically comes through the testimony of an expert witness. Because experts are persuasive and have the potential to greatly influence juries, the courts established evidentiary standards for the admissibility of expert witness testimony in an attempt to rid trials of unqualified witnesses and invalid or unreliable science. In 1923, the federal courts adopted a general acceptance standard for judging the admissibility of scientific evidence (Frye v. United States). If the scientific method or technique on which the evidence is based is generally accepted within the scientific community, the expert testimony should be admitted. Most states adopted this standard, and it continues to be the law in a few states even today. However, with the adoption of the Federal Rules of Evidence in 1975, federal courts also required that the witness be “qualified as an expert by knowledge, skill, experience, training, or education” (FRE 702). In 1993, the Supreme Court declared in Daubert v. Merrell Dow Pharmaceuticals, Inc. that FRE 702 superseded Frye. The decision also placed responsibility for determining the reliability of expert evidence in the hands of federal trial court judges while providing these judges with a non-exhaustive list of criteria by which to judge the admissibility of that evidence.
Daubert directed judges to conduct a “preliminary assessment of whether the reasoning or methodology underlying the testimony is scientifically valid and of whether that reasoning or methodology properly can be applied to the facts in issue.” The Court expressly declined to create a definitive list of factors that judges must consider when making admissibility decisions, but recommended specific factors that judges may choose to use when evaluating scientific evidence.
- First, judges may consider whether the theory or technique used by the expert is falsifiable and whether proper methods of hypothesis testing were used.
- Second, they may consider the technique’s error rate and the standards controlling the technique’s operation.
- Third, judges may consider whether the theory or technique has been peer-reviewed or published.
- Finally, they may consider the degree to which the theory or technique is generally accepted.
Daubert is currently the evidentiary standard required in the federal courts and is followed in the majority of state jurisdictions. To reflect this ruling, the 2000 amendment to Federal Rule of Evidence 702 adds that experts may offer opinions only if: “(1) the testimony is based on sufficient facts or data, (2) the testimony is the product of reliable principles and methods, and (3) the witness has applied the principles and methods reliably to the facts of the case.”
How Well Do Judges Understand Scientific Research?
A major criticism of Daubert is the requirement that judges act as gatekeepers for scientific evidence. This necessarily imposes a responsibility on trial judges to become familiar with statistics, research methods and other areas outside of their expertise. The majority in Daubert declared that it was “confident that federal judges possess the capacity to undertake this review.” However, the dissenters questioned whether judges would be competent to take on such a task when they have little training in these arenas (Rehnquist, J., dissenting).
Alarmingly, surveys of judges have not yielded comforting results. In one study, 96% of judges indicated that they had not received instruction about general scientific methods and principles, and fewer than 10% of judges exhibited a clear understanding of the reliability factors outlined in Daubert (Gatowski et al., 2001). Similarly, in a survey of state court trial judges applying the Daubert standard, only 4% of judges exhibited a sufficient understanding of falsifiability and error rates (Vidmar & Seidman-Diamond, 2001). Even though a majority of surveyed judges had received some continuing legal education (CLE) training in scientific methodology, few were able to demonstrate a clear understanding of the topics that often arise when discussing scientific validity and reliability (e.g., control groups, confounds, double-blinds). Judges who lack adequate training in the sciences are likely to make the same common mistakes in evaluating evidence as the general public, rejecting evidence based on reliable scientific theories and techniques and allowing junk science into their courtrooms. In fact, many critics question whether judges are any more capable than juries of sifting through scientific evidence and sorting the reliable from the junk.
How Well Do Jurors Understand Scientific Research?
Research conducted with laypersons on methodological and statistical reasoning skills suggests that they have a great deal of difficulty differentiating between valid and flawed research. For instance, most people fail to understand the importance of a control group and have difficulty applying statistical concepts to human behavior. In addition, laypersons fail to recognize that larger sample sizes tend to produce more reliable results than smaller sample sizes. In a trial setting, McAuliff & Kovera (2001) studied the reactions of jurors to variations in the methodological quality of expert evidence; that is, they presented jurors with scientific research that was well-founded, along with research that contained several scientific flaws. The quality of the expert evidence did not appear to influence jurors’ liability decisions or their evaluations of that evidence. Taken together, these studies suggest that in the absence of additional guidance, most jurors may be unable to recognize flaws in scientific research presented in the context of expert testimony.
How Can We Educate Judges and Juries about Scientific Research?
What this means for trial attorneys is that you have two opportunities to prevent opposing counsel from using flawed research or “junk science” to prove their case. First, we recommend challenging admission of the expert testimony through Daubert and Frye motions. If there is a hearing on the motion, counsel should have the opportunity to educate the judge further about the flaws in the opposing expert’s research. Education can come in the form of oral argument and the presentation of an opposing expert to critique the science. A wealth of research shows that people learn best when instruction is paired with visuals or graphics – and yes, judges are people, too. Graphic depictions of concepts such as control groups, error rates and confidence intervals can help judges become better gatekeepers and may prevent your opponent from presenting their “junk science” to your jury. Second, even if the judge admits the “junk science” evidence, similar graphics can be used to argue to the jury why they should reject your opponent’s scientific findings and accept your expert’s opinions instead.
When competing experts present contradicting scientific research, we’ve heard many jurors say that the findings “cancel each other out”; they then either reject the expert testimony in its entirety or rely on whichever side they “feel” fits best with the story they want to believe. This dismissal of the experts’ evidence is likely the result of jurors’ inability to fully understand and critically evaluate the research to determine which side has the most reliable and valid scientific evidence.
To gain an edge at trial, we strongly suggest using graphics such as the samples below with animation to present small bits of information at a time. This approach will allow you to convey important concepts to your jurors to help them determine that your expert’s opinions should be trusted while those of your opposing expert should be rejected. Even better – graphics like the following could be used in pre-trial hearings to convince judges that your opponent’s science should never be admitted to the jury to begin with.
By: Christina Marinakis, J.D., Psy.D. – Director, Jury Research