An Obituary for Mae Young, Unladylike Wrestler

We’re not even a month into 2014, but this obituary for the unladylike wrestler Mae Young is surely going to be one of the most interesting of the year:

Mae Young — make that the Great Mae Young — who pulled hair and took cheap shots, who preferred actually fighting to pretending, who was, by her own account and that of many other female wrestlers, the greatest and dirtiest of them all, died on Tuesday in Columbia, S.C. She was 90, and her last round in the ring was in 2010.

Mae Young, on the right, doing her thing.

You have to love her bravado:

“Anybody can be a baby face, what we call a clean wrestler,” she said in “Lipstick & Dynamite: The First Ladies of Wrestling,” a 2004 documentary. “They don’t have to do nothing. It’s the heel that carries the whole show. I’ve always been a heel, and I wouldn’t be anything else but.”

“This is a business that you have to love, and if you love it you live it.”  —Mae Young, RIP.

The Human Element in Quantification

I enjoyed Felix Salmon’s piece in Wired, “Why Quants Don’t Know Everything.” His premise is that while what quants do is important, the human element cannot be ignored.

The reason the quants win is that they’re almost always right—at least at first. They find numerical patterns or invent ingenious algorithms that increase profits or solve problems in ways that no amount of subjective experience can match. But what happens after the quants win is not always the data-driven paradise that they and their boosters expected. The more a field is run by a system, the more that system creates incentives for everyone (employees, customers, competitors) to change their behavior in perverse ways—providing more of whatever the system is designed to measure and produce, whether that actually creates any value or not. It’s a problem that can’t be solved until the quants learn a little bit from the old-fashioned ways of thinking they’ve displaced.

Felix discusses the four stages in the rise of the quants: 1) pre-disruption, 2) disruption, 3) overshoot, and 4) synthesis. The fourth stage is described below:

It’s increasingly clear that for smart organizations, living by numbers alone simply won’t work. That’s why they arrive at stage four: synthesis—the practice of marrying quantitative insights with old-fashioned subjective experience. Nate Silver himself has written thoughtfully about examples of this in his book, The Signal and the Noise. He cites baseball, which in the post-Moneyball era adopted a “fusion approach” that leans on both statistics and scouting. Silver credits it with delivering the Boston Red Sox’s first World Series title in 86 years. Or consider weather forecasting: The National Weather Service employs meteorologists who, understanding the dynamics of weather systems, can improve forecasts by as much as 25 percent compared with computers alone. A similar synthesis holds in economic forecasting: Adding human judgment to statistical methods makes results roughly 15 percent more accurate. And it’s even true in chess: While the best computers can now easily beat the best humans, they can in turn be beaten by humans aided by computers.

Very interesting throughout, and highly recommended.

Google X Lab is Working on Smart Contact Lenses for Diabetic Patients

Google has just announced an interesting product they are working on in their secretive Google X lab: contact lenses that can be used to detect changes in blood glucose levels:

We’re now testing a smart contact lens that’s built to measure glucose levels in tears using a tiny wireless chip and miniaturized glucose sensor that are embedded between two layers of soft contact lens material. We’re testing prototypes that can generate a reading once per second. We’re also investigating the potential for this to serve as an early warning for the wearer, so we’re exploring integrating tiny LED lights that could light up to indicate that glucose levels have crossed above or below certain thresholds. It’s still early days for this technology, but we’ve completed multiple clinical research studies which are helping to refine our prototype. We hope this could someday lead to a new way for people with diabetes to manage their disease.

This is very cool if slightly uncomfortable to think about.
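To make the alert idea concrete, here is a minimal sketch of the kind of threshold check the announcement describes. The function name, the 70 and 180 mg/dL bounds, and the sample readings are all hypothetical illustrations on my part, not anything Google has published.

```python
# Hypothetical sketch of a threshold alert like the one described above.
# The bounds and sample readings are illustrative assumptions only.

LOW_MG_DL = 70    # assumed lower alert bound
HIGH_MG_DL = 180  # assumed upper alert bound

def check_reading(glucose_mg_dl: float) -> str:
    """Return which alert, if any, a single sensor reading should trigger."""
    if glucose_mg_dl < LOW_MG_DL:
        return "low"   # e.g. light the warning LED for low glucose
    if glucose_mg_dl > HIGH_MG_DL:
        return "high"  # e.g. light the warning LED for high glucose
    return "ok"

# With one reading per second, a check like this would run on each sample:
for reading in (95.0, 182.5, 64.0):
    print(reading, check_reading(reading))
```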

On the Morality and Self-Awareness of Cards Against Humanity

This is an excellent post that categorizes the infamous Cards Against Humanity not as a game that is “morally corrosive” (as argued in this post) but as one that is simply distasteful and provocative:

Cards Against Humanity is a type of humor-oriented carnival space in which norms about appropriate discussion, and appropriate topics of humor, are reversed. It may be acceptable to relax the rules within this space, but there is little danger of what Leah fears is a “leakage” of these rules into everyday life, just as there is little danger that a jester would seriously try to become a pope in everyday life. The fact that a theology school would defend such orgies is a testament to the fact that they serve to uphold the establishment.

It is key that Cards Against Humanity is a highly self-aware game. This is apparent in the tagline (“A free party game for horrible people”) and descriptions: “Unlike most of the party games you’ve played before, Cards Against Humanity is as despicable and awkward as you and your friends.” By pairing the game and its brand of humor with words like “horrible,” “despicable,” and “awkward,” it shows, again, that these are things we should not laugh about, despite doing so anyway. This self-awareness is at the heart of every “I know I shouldn’t find this funny, but…” statement. “Virginia Tech Massacre” is funny in this “Opposite Day” world. It’s really not funny in other contexts or in the “real world.” This is also why it’s generally OK for Jews to make Holocaust jokes when it is more frowned upon for others to do the same—it is far more likely that the non-Jew would have less awareness of the consequences of the Holocaust than the Jew, and therefore the lack of self-awareness makes the attempt at humor far less palatable.

I welcomed 2014 with a game of Cards Against Humanity. While certain cards make me uncomfortable, as argued in the post, I don’t take the view that the game has corrupted me or is able to.

The 2014 Edge Question: What Scientific Idea is Ready for Retirement?

Every year since 1998, Edge.org editor John Brockman has been posing one thought-provoking question to some of the world’s greatest thinkers across a variety of disciplines, and then assimilating the responses in an annual anthology. Last year, he published a book, This Explains Everything: Deep, Beautiful, and Elegant Theories of How the World Works, which collects the responses to one of these questions in a single volume.

For 2014, the annual Edge.org question is: What Scientific Idea is Ready for Retirement? I’ll be reading responses for the next few weeks, but for now I wanted to link to the main page and highlight a few notable ones:

1) Nassim Taleb, one of my all-time favourite thinkers and authors, who argues for throwing out standard deviation as a measure:

The notion of standard deviation has confused hordes of scientists; it is time to retire it from common use and replace it with the more effective one of mean deviation. Standard deviation, STD, should be left to mathematicians, physicists and mathematical statisticians deriving limit theorems. There is no scientific reason to use it in statistical investigations in the age of the computer, as it does more harm than good—particularly with the growing class of people in social science mechanistically applying statistical tools to scientific problems.

Say someone just asked you to measure the “average daily variations” for the temperature of your town (or for the stock price of a company, or the blood pressure of your uncle) over the past five days. The five changes are: (-23, 7, -3, 20, -1). How do you do it?

Do you take every observation: square it, average the total, then take the square root? Or do you remove the sign and calculate the average? For there are serious differences between the two methods. The first produces an average of 15.7, the second 10.8. The first is technically called the root mean square deviation. The second is the mean absolute deviation, MAD. It corresponds to “real life” much better than the first—and to reality. In fact, whenever people make decisions after being supplied with the standard deviation number, they act as if it were the expected mean deviation.

It is all due to a historical accident: in 1893, the great Karl Pearson introduced the term “standard deviation” for what had been known as “root mean square error”. The confusion started then: people thought it meant mean deviation. The idea stuck: every time a newspaper has attempted to clarify the concept of market “volatility”, it defined it verbally as mean deviation yet produced the numerical measure of the (higher) standard deviation.

But it is not just journalists who fall for the mistake: I recall seeing official documents from the department of commerce and the Federal Reserve partaking of the conflation, even regulators in statements on market volatility. What is worse, Goldstein and I found that a high number of data scientists (many with PhDs) also get confused in real life.
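As a quick check on the arithmetic in Taleb’s example, here is a minimal Python sketch (standard library only) computing both measures for the five changes he lists. The 15.7 quoted above corresponds to the sample standard deviation, which divides by n − 1; dividing by n gives roughly 14.1.

```python
# A quick check of the arithmetic in Taleb's example (standard library only).
changes = [-23, 7, -3, 20, -1]
n = len(changes)

# Mean absolute deviation (MAD): drop the sign, then average.
mad = sum(abs(x) for x in changes) / n                       # 10.8

# Root mean square deviation, dividing by n:
rms = (sum(x * x for x in changes) / n) ** 0.5               # ~14.06

# The 15.7 quoted above matches the sample standard deviation, which divides
# by n - 1 (the mean of these changes happens to be zero, so deviations from
# the mean equal the raw values):
sample_std = (sum(x * x for x in changes) / (n - 1)) ** 0.5  # ~15.72

print(f"MAD = {mad:.1f}, RMS = {rms:.1f}, sample std = {sample_std:.1f}")
```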

2) Jay Rosen, who argues we should retire the concept of “information overload”:

Here’s the best definition of information that I know of: information is a measure of uncertainty reduced. It’s deceptively simple. In order to have information, you need two things: an uncertainty that matters to us (we’re having a picnic tomorrow, will it rain?) and something that resolves it (weather report.) But some reports create the uncertainty that is later to be solved.

Suppose we learn from news reports that the National Security Agency “broke” encryption on the Internet. That’s information! It reduces uncertainty about how far the U.S. government was willing to go. (All the way.) But the same report increases uncertainty about whether there will continue to be a single Internet, setting us up for more information when that larger picture becomes clearer. So information is a measure of uncertainty reduced, but also of uncertainty created. Which is probably what we mean when we say: “well, that raises more questions than it answers.”
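Rosen’s definition has a precise information-theoretic reading: the information carried by a report is the entropy of the question before the report minus the entropy after it. Here is a minimal sketch of his picnic example; the 50/50 prior and the perfectly decisive forecast are made-up numbers purely for illustration.

```python
# Rosen's "uncertainty reduced" read information-theoretically: entropy of
# the question before the report minus entropy after it. The probabilities
# below are made up purely for illustration.
from math import log2

def entropy(probs):
    """Shannon entropy, in bits, of a discrete distribution."""
    return -sum(p * log2(p) for p in probs if p > 0)

# Will it rain on the picnic tomorrow? Say we start out at 50/50.
before = entropy([0.5, 0.5])   # 1.0 bit of uncertainty

# A (hypothetical) forecast that settles the question completely:
after = entropy([1.0, 0.0])    # 0 bits left

print(f"information gained: {before - after:.2f} bits")
```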

3) Richard Dawkins, who thinks “essentialism” should be retired:

Essentialism—what I’ve called “the tyranny of the discontinuous mind”—stems from Plato, with his characteristically Greek geometer’s view of things. For Plato, a circle, or a right triangle, were ideal forms, definable mathematically but never realised in practice. A circle drawn in the sand was an imperfect approximation to the ideal Platonic circle hanging in some abstract space. That works for geometric shapes like circles, but essentialism has been applied to living things and Ernst Mayr blamed this for humanity’s late discovery of evolution—as late as the nineteenth century. If, like Aristotle, you treat all flesh-and-blood rabbits as imperfect approximations to an ideal Platonic rabbit, it won’t occur to you that rabbits might have evolved from a non-rabbit ancestor, and might evolve into a non-rabbit descendant. If you think, following the dictionary definition of essentialism, that the essence of rabbitness is “prior to” the existence of rabbits (whatever “prior to” might mean, and that’s a nonsense in itself) evolution is not an idea that will spring readily to your mind, and you may resist when somebody else suggests it.

Paleontologists will argue passionately about whether a particular fossil is, say, Australopithecus or Homo. But any evolutionist knows there must have existed individuals who were exactly intermediate. It’s essentialist folly to insist on the necessity of shoehorning your fossil into one genus or the other. There never was an Australopithecus mother who gave birth to a Homo child, for every child ever born belonged to the same species as its mother. The whole system of labelling species with discontinuous names is geared to a time slice, the present, in which ancestors have been conveniently expunged from our awareness (and “ring species” tactfully ignored). If by some miracle every ancestor were preserved as a fossil, discontinuous naming would be impossible. Creationists are misguidedly fond of citing “gaps” as embarrassing for evolutionists, but gaps are a fortuitous boon for taxonomists who, with good reason, want to give species discrete names. Quarrelling about whether a fossil is “really” Australopithecus or Homo is like quarrelling over whether George should be called “tall”. He’s five foot ten, doesn’t that tell you what you need to know?

4) Kevin Kelly, who argues that the idea of “fully random mutations” should be retired (something I’ve known for a while, having taken a number of courses in molecular biology):

What is commonly called “random mutation” does not in fact occur in a mathematically random pattern. The process of genetic mutation is extremely complex, with multiple pathways, involving more than one system. Current research suggests most spontaneous mutations occur as errors in the repair process for damaged DNA. Neither the damage nor the errors in repair have been shown to be random in where they occur, how they occur, or when they occur. Rather, the idea that mutations are random is simply a widely held assumption by non-specialists and even many teachers of biology. There is no direct evidence for it.

On the contrary, there’s much evidence that genetic mutations vary in patterns. For instance it is pretty much accepted that mutation rates increase or decrease as stress on the cells increases or decreases. These variable rates of mutation include mutations induced by stress from an organism’s predators and competition, as well as increased mutations brought on by environmental and epigenetic factors. Mutations have also been shown to have a higher chance of occurring near a place in DNA where mutations have already occurred, creating mutation hotspot clusters—a non-random pattern.

5) Ian Bogost, a professor at my alma mater, Georgia Tech, who thinks “science” should be retired:

Beyond encouraging people to see science as the only direction for human knowledge and absconding with the subject of materiality, the rhetoric of science also does a disservice to science itself. It makes science look simple, easy, and fun, when science is mostly complex, difficult, and monotonous.

A case in point: the popular Facebook page “I f*cking love science” posts quick-take variations on the “science of x” theme, mostly images and short descriptions of unfamiliar creatures like the pink fairy armadillo, or illustrated birthday wishes to famous scientists like Stephen Hawking. But as the science fiction writer John Skylar rightly insisted in a fiery takedown of the practice last year, most people don’t f*cking love science, they f*cking love photography—pretty images of fairy armadillos and renowned physicists. The pleasure derived from these pictures obviates the public’s need to understand how science actually gets done—slowly and methodically, with little acknowledgement and modest pay in unseen laboratories and research facilities.

The rhetoric of science has consequences. Things that have no particular relation to scientific practice must increasingly frame their work in scientific terms to earn any attention or support. The sociology of Internet use suddenly transformed into “web science.” Long accepted practices of statistical analysis have become “data science.” Thanks to shifting educational and research funding priorities, anything that can’t claim that it is a member of a STEM (science, technology, engineering, and math) field will be left out in the cold. Unfortunately, the rhetoric of science offers the most tactical response to such new challenges. Unless humanists reframe their work as “literary science,” they risk getting marginalized, defunded and forgotten.

When you’re selling ideas, you have to sell the ideas that will sell. But in a secular age in which the abstraction of “science” risks replacing all other abstractions, a watered-down, bland, homogeneous version of science is all that will remain if the rhetoric of science is allowed to prosper.

We need not choose between God and man, science and philosophy, interpretation and evidence. But ironically, in its quest to prove itself as the supreme form of secular knowledge, science has inadvertently elevated itself into a theology. Science is not a practice so much as it is an ideology. We don’t need to destroy science in order to bring it down to earth. But we do need to bring it down to earth again, and the first step in doing so is to abandon the rhetoric of science that has become its most popular devotional practice.

If you want to get smarter today, go here and spend a few hours reading through the contributions.

The New York Times Treatment of Bistro at Villard Michel Richard

Food critic Pete Wells at The New York Times has just come out with a scathing review of the Bistro at Villard Michel Richard, the fancy new restaurant at the newly renovated New York Palace in Midtown Manhattan. It’s worth reading in its entirety, but these two paragraphs are the best:

Think of everything that’s great about fried chicken. Now take it all away. In its place, right between dried-out strands of gray meat and a shell of fried bread crumbs, imagine a gummy white paste about a quarter-inch deep. This unidentifiable paste coats your mouth until you can’t perceive textures or flavors. It is like edible Novocain.

What Villard Michel Richard’s $28 fried chicken does to Southern cooking, its $40 veal cheek blanquette does to French. A classic blanquette is a gentle, reassuring white stew of sublimely tender veal. In this version, the veal cheeks had the dense, rubbery consistency of overcooked liver. Slithering around the meat was a terrifying sauce the color of jarred turkey gravy mixed with cigar ashes. If soldiers had killed Escoffier’s family in front of him and then forced him to make dinner, this is what he would have cooked.

Mmm, delicious.

The Secrets of Snow-Diving Foxes

This is a super interesting article by NPR’s Robert Krulwich, who summarizes research on why snow-diving foxes jump the way they do when hunting for prey:

When they looked at each other’s notes, the researchers saw a pattern: For some reason, Czech foxes prefer to jump in a particular direction — toward the northeast. (To be more precise, it’s about 20 degrees off “magnetic north” — the “N” on your compass.) As the video above says, most of the time, most foxes miss their targets and emerge covered in snow and (one presumes) a little embarrassed. But when they pointed in that particular northeasterly direction, Ed writes, “they killed on 73 percent of their attacks.” If they reversed direction, and jumped exactly the opposite way, they killed 60 percent of the time. But in all other directions — east, south, west, whatever — they sucked. Only 18 percent of those jumps were successful.

Here’s a video of a hunting fox in action: