
Don’t expect science to be free of mistakes

As a former Texan living abroad, I read with interest and dismay reports in 2016 that the state’s maternal mortality rate had doubled between 2010 and 2014. Texas women, it was said, faced the same risks in pregnancy as women in countries ravaged by war or natural disaster. I was a little quizzical — that result didn’t quite jibe with my wife’s experience giving birth to our daughter — but I figured that in a large, diverse state with areas of significant poverty, my anecdata were unrepresentative.

Now a second study, using different methods, has cut estimates of Texas’ 2012 maternal mortality rate in half. The previous study, we’re told, relied on an electronic death registration system which was too prone to clerical errors. The number of deaths in pregnancy is small, so any mistake in counting leads to significant deviations. Systematic mistakes — like those afforded by a dropdown menu in which the “pregnant” and “not pregnant” options are next to each other — can lead to very large deviations.

What to make of this story? Here I can draw not just on personal experience, but also on my own and colleagues’ research in the history and sociology of science. And from that perspective, most of this episode just resembles science as I think it ought to work.

Consider what scientists do. Nature — from quasars to atmospheric carbon dioxide concentrations to maternal mortality rates — doesn’t speak directly to us. Scientists must isolate, purify, categorize, discard and interpret bits of the world before and while they make new knowledge of those bits of the world they themselves have created. That’s a messy process: too much or not enough of a sample might be discarded, inevitable in-between cases could go in more than one category, tools and materials obtained from colleagues or vendors might not behave as claimed. Even in a strictly technical sense, determining pregnancy isn’t entirely straightforward — as many women who’ve taken a home pregnancy test can affirm.

In public health research that messiness is magnified. People, it turns out, aren’t always reliable: they make mistakes, they fail to complete tasks, sometimes they even lie. Yet public health researchers have to rely on other people to create data about yet other people. It’s a recipe for uncertainty. That’s why the original 2016 study reported its results cautiously: “the Texas data are puzzling” and therefore “a future paper will examine Texas data … to better understand this unusual finding.” Today, we have that “future paper” and, as predicted, it helps us better understand the earlier anomaly. That’s a routine part of science: something doesn’t look right so you take a second glance. The new answer isn’t the final answer, because the world is still messy and people are still unreliable. But, all things considered, it sure seems like a better answer.

The story would end there if science were just by and for scientists. But it’s not. We’re all, however indirectly, both consumers and producers of research. Everyone — through the products they buy, donations they give, taxes and tuition they pay, etc. — contributes resources for science. Even more so in public health research, where nothing can be done without citizens’ willingness to offer information about (and samples of) themselves.

Because their work relies on the public, public health researchers should be held accountable for their work. The phrase “held accountable” is sometimes used in sinister ways but, taken literally, it means scientists should do exactly what they’ve done in this case: the second study accounts for the first’s anomalous result. In return, researchers should receive support commensurate with the value we place on their work. The second study was more robust because it was more careful, laborious and expensive (per data point) than the first study. All studies could be as reliable as the second one, but they would cost more. They would also take longer.

One of the quandaries of public health research is that speed is both necessary and risky. If maternal mortality rates are rising, then those in a position to do something need to know sooner rather than later. But making results public quickly means they become fodder for debate before anomalies can be checked. Why did I come across those 2016 reports despite living an ocean away from Texas? Because the original result made a dramatic, attention-grabbing point in widely circulating arguments against Texas’ defunding of women’s health clinics. Those arguments could’ve been more circumspect in promoting the first study’s result — though if you wait for research to be done before making a point, you’ll never say anything at all. Conversely, those who believe the second study vindicates Texas’ public health system might want to read it all the way through: Even with more careful recordkeeping, the maternal mortality rate for black women in Texas is still close to the level the first study reported as typical of war-torn countries.

Science is a human endeavor. It’s as prone to error as anything humans do. It will — and should — feed into political debate as much as any other activity that has implications for how society is organized. Criticisms that encourage scientists to do better are fair and necessary, particularly if they recognize that doing better often costs money. But critics are acting in bad faith if they hold scientists to a standard of error-free research, or if they claim results should only be announced and policy decided after “final” results are reached.

Mody is professor and chair in the History of Science, Technology and Innovation at Maastricht University in the Netherlands.