Sounds good in theory: scientists check each other with peer review, and knowledge advances. In reality, scientists are only human.

Schools often present a rosy picture of science as the most reliable generator of knowledge. Science, we are told, uses a special scientific method, something like a secret sauce nobody else has. It employs mathematical proofs. Peer review confers additional reliability. Science marches on. Nevertheless, we have to ask some probing questions about the word “science” before it gets reified as something entirely new and different from any previous or contemporary method of inquiry. For instance, how did the ancient Egyptians build the pyramids, the Maya create accurate calendars, and the Incas build Machu Picchu without modern science, peer review, p-values, or the “scientific method”? To what extent does “science” differ from other fields in the academy, such as history, economics, or even music? What subjects belong, or don’t belong, under the big tent we call “science”? How much of scientific activity involves plain old common sense and logic? What social, economic, and cultural influences perturb the idealistic aspirations of science? As the articles below reveal, science cannot pretend to be any more reliable than the people who practice it.

A litany of problems with p-values (Statistical Thinking blog). Frank Harrell is a biostatistician at Vanderbilt University. In this blog entry from Feb. 5, he lists numerous problems with a highly trusted mathematical method for measuring the “significance” of a given factor as a cause of some effect. His work-in-progress gives nine reasons so far to distrust p-values. “In my opinion,” he begins, “null hypothesis testing and p-values have done significant harm to science.” How many tens of thousands of research papers are in jeopardy of irrelevance if Harrell is correct? (See Statistics in the Baloney Detector.)
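To see why critics worry, consider one commonly cited pitfall of null-hypothesis significance testing (illustrative of the general concern, not necessarily one of Harrell’s nine reasons): when many comparisons are made, p-values below 0.05 turn up routinely by chance alone. The following is a minimal Python sketch with simulated noise and made-up sample sizes, not a reconstruction of anything in Harrell’s post:

```python
# Simulate the multiple-comparisons problem: test many "factors" that
# have no real effect and count how many reach p < 0.05 anyway.
# Purely illustrative; the data are random noise, not a real study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_factors = 100    # hypothetical candidate "causes"
n_subjects = 50    # hypothetical sample size per group

false_positives = 0
for _ in range(n_factors):
    group_a = rng.normal(0, 1, n_subjects)  # no true difference
    group_b = rng.normal(0, 1, n_subjects)  # between the groups
    result = stats.ttest_ind(group_a, group_b)
    if result.pvalue < 0.05:
        false_positives += 1

print(f"{false_positives} of {n_factors} null factors reached p < 0.05")
# Expect roughly 5 "significant" findings from pure noise.
```

On a run like this, about five of the hundred non-effects come out “statistically significant,” which is one way a literature built on the p < 0.05 threshold can fill up with findings that will not replicate.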

Certainty in complex scientific research an unachievable goal (University of Toronto). Donald Trump’s election and the Patriots’ Super Bowl win are two recent examples of expert predictions gone awry. A new study published by the Royal Society “suggests that research in some of the more complex scientific disciplines, such as medicine or particle physics, often doesn’t eliminate uncertainties to the extent we might expect.” There’s always a “long tail of uncertainty” and a human tendency to underestimate the effect of small errors, especially as Big Data grows. Is this a problem only for the soft sciences? No; “Physics studies did not fare significantly better than the medical and other research observed.”
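One reason the “long tail of uncertainty” persists even as datasets grow is that a larger sample shrinks random error but does nothing about a small systematic error, so the reported uncertainty collapses around a slightly wrong answer. The sketch below is a hedged illustration of that dynamic; the bias, noise level, and sample sizes are invented for the example and are not drawn from the Royal Society study:

```python
# Illustrate how a small systematic bias dominates as sample size grows:
# the standard error shrinks, so the (biased) estimate looks ever more
# "certain" even though it never converges on the true value.
# All numbers here are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
true_value = 10.0
bias = 0.2          # small, unmodeled systematic error
noise_sd = 2.0      # random measurement noise

for n in (100, 10_000, 1_000_000):
    sample = rng.normal(true_value + bias, noise_sd, n)
    estimate = sample.mean()
    std_error = sample.std(ddof=1) / np.sqrt(n)
    print(f"n={n:>9,}: estimate={estimate:.3f} +/- {std_error:.4f} "
          f"(true value = {true_value})")
# With a million observations the reported uncertainty is tiny,
# yet the estimate is still off by roughly the size of the bias.
```

The larger the dataset, the more precise, and the more misleadingly confident, the biased result looks.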

Continue reading at CREV
