Every year since 1998, Edge.org editor John Brockman has been posing one thought-provoking question to some of the world’s greatest thinkers across a variety of disciplines, and then compiling the responses into an annual anthology. Last year, he published This Explains Everything: Deep, Beautiful, and Elegant Theories of How the World Works, which collects the responses to one of these questions in a single volume.
For 2014, the annual Edge.org question is: What Scientific Idea is Ready for Retirement? I’ll be reading responses over the next few weeks, but for now I wanted to link to the main page and highlight a few notable contributions:
1) Nassim Taleb, one of my all-time favourite thinkers and authors, who argues for throwing out standard deviation as a measure:
The notion of standard deviation has confused hordes of scientists; it is time to retire it from common use and replace it with the more effective one of mean deviation. Standard deviation, STD, should be left to mathematicians, physicists and mathematical statisticians deriving limit theorems. There is no scientific reason to use it in statistical investigations in the age of the computer, as it does more harm than good—particularly with the growing class of people in social science mechanistically applying statistical tools to scientific problems.
Say someone just asked you to measure the “average daily variations” for the temperature of your town (or for the stock price of a company, or the blood pressure of your uncle) over the past five days. The five changes are: (-23, 7, -3, 20, -1). How do you do it?
Do you take every observation: square it, average the total, then take the square root? Or do you remove the sign and calculate the average? For there are serious differences between the two methods. The first produces an average of 15.7, the second 10.8. The first is technically called the root mean square deviation. The second is the mean absolute deviation, MAD. It corresponds to “real life” much better than the first—and to reality. In fact, whenever people make decisions after being supplied with the standard deviation number, they act as if it were the expected mean deviation.
It is all due to a historical accident: in 1893, the great Karl Pearson introduced the term “standard deviation” for what had been known as “root mean square error”. The confusion started then: people thought it meant mean deviation. The idea stuck: every time a newspaper has attempted to clarify the concept of market “volatility”, it defined it verbally as mean deviation yet produced the numerical measure of the (higher) standard deviation.
But it is not just journalists who fall for the mistake: I recall seeing official documents from the department of commerce and the Federal Reserve partaking of the conflation, even regulators in statements on market volatility. What is worse, Goldstein and I found that a high number of data scientists (many with PhDs) also get confused in real life.
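Taleb’s two figures are easy to reproduce. Here is a minimal Python sketch (standard library only; note that the quoted 15.7 follows the sample convention of dividing by n − 1, whereas dividing by n would give about 14.1):

```python
from math import sqrt

# Taleb's five daily changes, with deviations measured from zero as in his example.
changes = [-23, 7, -3, 20, -1]
n = len(changes)

# Root mean square deviation ("standard deviation"), sample convention (n - 1).
rms = sqrt(sum(x ** 2 for x in changes) / (n - 1))

# Mean absolute deviation (MAD): strip the sign, then average.
mad = sum(abs(x) for x in changes) / n

print(f"root mean square deviation: {rms:.1f}")  # 15.7
print(f"mean absolute deviation:    {mad:.1f}")  # 10.8
```

The gap between the two numbers is the whole point: hand someone the 15.7 and, as Taleb says, they will act as if it were the 10.8.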
2) Jay Rosen, who argues we should retire the concept of “information overload”:
Here’s the best definition of information that I know of: information is a measure of uncertainty reduced. It’s deceptively simple. In order to have information, you need two things: an uncertainty that matters to us (we’re having a picnic tomorrow, will it rain?) and something that resolves it (a weather report). But some reports create the uncertainty that is later to be resolved.
Suppose we learn from news reports that the National Security Agency “broke” encryption on the Internet. That’s information! It reduces uncertainty about how far the U.S. government was willing to go. (All the way.) But the same report increases uncertainty about whether there will continue to be a single Internet, setting us up for more information when that larger picture becomes clearer. So information is a measure of uncertainty reduced, but also of uncertainty created. Which is probably what we mean when we say: “well, that raises more questions than it answers.”
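Rosen’s definition is Shannon’s, and it is easy to make concrete. A minimal sketch using the picnic example; the prior and posterior probabilities below are illustrative assumptions, not anything from the essay:

```python
from math import log2

def entropy(probs):
    """Shannon entropy, in bits, of a discrete distribution."""
    return -sum(p * log2(p) for p in probs if p > 0)

# Assumed numbers: before the weather report, rain is a coin flip.
prior = [0.5, 0.5]        # P(rain), P(no rain)
posterior = [0.05, 0.95]  # after a confident forecast of clear skies

gained = entropy(prior) - entropy(posterior)
print(f"uncertainty reduced: {gained:.2f} bits")  # about 0.71 bits
```

By the same arithmetic, a report that pushes your beliefs back toward 50/50 has created uncertainty rather than reduced it, which is Rosen’s second half.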
3) Richard Dawkins thinks “essentialism” should be retired:
Essentialism—what I’ve called “the tyranny of the discontinuous mind”—stems from Plato, with his characteristically Greek geometer’s view of things. For Plato, a circle, or a right triangle, were ideal forms, definable mathematically but never realised in practice. A circle drawn in the sand was an imperfect approximation to the ideal Platonic circle hanging in some abstract space. That works for geometric shapes like circles, but essentialism has been applied to living things and Ernst Mayr blamed this for humanity’s late discovery of evolution—as late as the nineteenth century. If, like Aristotle, you treat all flesh-and-blood rabbits as imperfect approximations to an ideal Platonic rabbit, it won’t occur to you that rabbits might have evolved from a non-rabbit ancestor, and might evolve into a non-rabbit descendant. If you think, following the dictionary definition of essentialism, that the essence of rabbitness is “prior to” the existence of rabbits (whatever “prior to” might mean, and that’s a nonsense in itself) evolution is not an idea that will spring readily to your mind, and you may resist when somebody else suggests it.
Paleontologists will argue passionately about whether a particular fossil is, say, Australopithecus or Homo. But any evolutionist knows there must have existed individuals who were exactly intermediate. It’s essentialist folly to insist on the necessity of shoehorning your fossil into one genus or the other. There never was an Australopithecus mother who gave birth to a Homo child, for every child ever born belonged to the same species as its mother. The whole system of labelling species with discontinuous names is geared to a time slice, the present, in which ancestors have been conveniently expunged from our awareness (and “ring species” tactfully ignored). If by some miracle every ancestor were preserved as a fossil, discontinuous naming would be impossible. Creationists are misguidedly fond of citing “gaps” as embarrassing for evolutionists, but gaps are a fortuitous boon for taxonomists who, with good reason, want to give species discrete names. Quarrelling about whether a fossil is “really” Australopithecus or Homo is like quarrelling over whether George should be called “tall”. He’s five foot ten, doesn’t that tell you what you need to know?
4) Kevin Kelly, who argues that “fully random mutations” should be retired (something I’ve known for a while, having taken a number of courses in molecular biology):
What is commonly called “random mutation” does not in fact occur in a mathematically random pattern. The process of genetic mutation is extremely complex, with multiple pathways, involving more than one system. Current research suggests most spontaneous mutations occur as errors in the repair process for damaged DNA. Neither the damage nor the errors in repair have been shown to be random in where they occur, how they occur, or when they occur. Rather, the idea that mutations are random is simply a widely held assumption by non-specialists and even many teachers of biology. There is no direct evidence for it.
On the contrary, there’s much evidence that genetic mutations vary in patterns. For instance, it is pretty much accepted that mutation rates increase or decrease as stress on the cells increases or decreases. These variable rates of mutation include mutations induced by stress from an organism’s predators and competition, as well as increased mutations brought on by environmental and epigenetic factors. Mutations have also been shown to have a higher chance of occurring near a place in DNA where mutations have already occurred, creating mutation hotspot clusters—a non-random pattern.
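Kelly’s hotspot claim lends itself to a toy illustration. The sketch below is not a biological model; it just contrasts uniformly random mutation positions with a self-reinforcing process in which new mutations preferentially land near earlier ones (the genome length, mutation count, and 0.7 clustering tendency are all arbitrary assumptions):

```python
import random

random.seed(0)
GENOME = 10_000  # number of positions, purely illustrative
N_MUT = 200

# "Mathematically random": every position equally likely.
uniform = [random.randrange(GENOME) for _ in range(N_MUT)]

# Toy hotspot process: a new mutation sometimes lands near a previous one.
clustered = [random.randrange(GENOME)]
for _ in range(N_MUT - 1):
    if random.random() < 0.7:  # assumed clustering tendency
        centre = random.choice(clustered)
        pos = min(max(centre + random.randint(-50, 50), 0), GENOME - 1)
    else:
        pos = random.randrange(GENOME)
    clustered.append(pos)

def mean_nearest_gap(positions):
    # Average distance to the nearest other mutation; smaller means clumpier.
    s = sorted(positions)
    gaps = []
    for i, p in enumerate(s):
        neighbours = []
        if i > 0:
            neighbours.append(p - s[i - 1])
        if i < len(s) - 1:
            neighbours.append(s[i + 1] - p)
        gaps.append(min(neighbours))
    return sum(gaps) / len(gaps)

print("uniform   mean nearest-neighbour gap:", round(mean_nearest_gap(uniform), 1))
print("clustered mean nearest-neighbour gap:", round(mean_nearest_gap(clustered), 1))
```

The clustered run produces a noticeably smaller nearest-neighbour gap, which is the statistical signature of the hotspot pattern Kelly describes.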
5) Ian Bogost, professor at my alma mater of Georgia Tech, who thinks “science” should be retired:
Beyond encouraging people to see science as the only direction for human knowledge and absconding with the subject of materiality, the rhetoric of science also does a disservice to science itself. It makes science look simple, easy, and fun, when science is mostly complex, difficult, and monotonous.
A case in point: the popular Facebook page “I f*cking love science” posts quick-take variations on the “science of x” theme, mostly images and short descriptions of unfamiliar creatures like the pink fairy armadillo, or illustrated birthday wishes to famous scientists like Stephen Hawking. But as the science fiction writer John Skylar rightly insisted in a fiery takedown of the practice last year, most people don’t f*cking love science, they f*cking love photography—pretty images of fairy armadillos and renowned physicists. The pleasure derived from these pictures obviates the public’s need to understand how science actually gets done—slowly and methodically, with little acknowledgement and modest pay in unseen laboratories and research facilities.
The rhetoric of science has consequences. Things that have no particular relation to scientific practice must increasingly frame their work in scientific terms to earn any attention or support. The sociology of Internet use suddenly transformed into “web science.” Long accepted practices of statistical analysis have become “data science.” Thanks to shifting educational and research funding priorities, anything that can’t claim that it is a member of a STEM (science, technology, engineering, and math) field will be left out in the cold. Unfortunately, the rhetoric of science offers the most tactical response to such new challenges. Unless humanists reframe their work as “literary science,” they risk getting marginalized, defunded and forgotten.
When you’re selling ideas, you have to sell the ideas that will sell. But in a secular age in which the abstraction of “science” risks replacing all other abstractions, a watered-down, bland, homogeneous version of science is all that will remain if the rhetoric of science is allowed to prosper.
We need not choose between God and man, science and philosophy, interpretation and evidence. But ironically, in its quest to prove itself as the supreme form of secular knowledge, science has inadvertently elevated itself into a theology. Science is not a practice so much as it is an ideology. We don’t need to destroy science. But we do need to bring it down to earth again, and the first step in doing so is to abandon the rhetoric of science that has become its most popular devotional practice.
If you want to get smarter today, go here and spend a few hours reading through the contributions.