By B. Rose Kelly, Woodrow Wilson School of Public and International Affairs
Governments around the world rely on scientific assessments to guide environmental policy and action. Yet these assessments, like those produced by the U.N. Intergovernmental Panel on Climate Change and other organizations, can exhibit limitations, particularly scientific bias and errors of omission, in their search for the facts.
A new book co-authored by Michael Oppenheimer, the Albert G. Milbank Professor of Geosciences and International Affairs and the Princeton Environmental Institute, uncovers the systemic bias and errors of omission within scientific assessments, presenting a roadmap for how the process can be improved. The book, “Discerning Experts: The Practices of Scientific Assessment for Environmental Policy,” will be published in March by the University of Chicago Press.
In addition to Oppenheimer, the book was co-authored by Naomi Oreskes of Harvard University, Dale Jamieson of New York University, Keynyn Brysse of the University of Alberta, Jessica O’Reilly of Indiana University Bloomington, Matthew Shindell of the Smithsonian’s National Air and Space Museum, and Milena Wazeck of the Royal Society/British Academy. Oppenheimer is the director of the Center for Policy Research on Energy and the Environment (C-PREE) at the Woodrow Wilson School and a faculty associate of the Atmospheric and Ocean Sciences Program and the Princeton Institute for International and Regional Studies.
Below, Oppenheimer answers some questions about his new book.
Q. Why did you write this book?
Oppenheimer: After having been an author on many assessments over several decades, particularly Intergovernmental Panel on Climate Change (IPCC) reports, and witnessing the way experts decide what the scientific facts surrounding an environmental problem are, I came to appreciate the efficacy of the assessment process but also its imperfections. Specifically, on key questions characterized by large uncertainty, I felt the answers delivered to governments might vary considerably depending on how the assessment was set up (i.e., institutional factors), who participates (who the expert authors are), and how deep the uncertainty is on particular questions.
So, nine years ago, colleagues now at Harvard and New York University and I established a research project to investigate the nature of environmental assessments, with a view toward making recommendations that could improve their performance. Others have examined what makes any particular assessment successful in being accepted by governments and people.
Our study was the first to examine how the experts actually make their decisions as a group and what factors influence those decisions beyond the science itself. The overall method of the project is ethnographic, combining interviews, archival research, and observation of expert deliberations. We studied three sets of assessments from the 1970s up to the past decade: a series of national and international assessments of ozone depletion beginning in the 1970s, the U.S. National Acid Deposition Assessment Program of the 1980s, and a series of assessments since the 1970s regarding the stability of the West Antarctic ice sheet (including the IPCC’s five assessments).
Q. What are the biggest takeaways?
Oppenheimer: These are the biggest takeaways:
Q. What are the policy implications?
Oppenheimer: Those sponsoring assessments, like the IPCC, should take a close look at the way their institutional setups and the demand for consensus limit the value of assessments and sometimes lead to products that are not sufficiently helpful to policy makers. These arrangements truncate the information developed and stifle creativity by expert-authors. In many cases, the problems being assessed are sufficiently important and threatening that sponsors need to be much more creative about the assessment process, willing to experiment and take risks, and willing to free experts within assessments from unnecessary constraints. At the same time, all involved need to work to ensure that science aimed at influencing assessments, or generated by the needs of assessment, doesn’t crowd out other types of research.