On the Morality and Self-Awareness of Cards Against Humanity

This is an excellent post that categorizes the infamous Cards Against Humanity not as a game that is “morally corrosive” (as argued in the post it responds to) but simply as distasteful and provocative:

Cards Against Humanity is a type of humor-oriented carnival space in which norms about appropriate discussion, and appropriate topics of humor, are reversed. It may be acceptable to relax the rules within this space, but there is little danger of what Leah fears is a “leakage” of these rules into everyday life, just as there is little danger that a jester would seriously try to become a pope in everyday life. The fact that a theology school would defend such orgies is a testament to the fact that they serve to uphold the establishment.

It is key that Cards Against Humanity is a highly self-aware game. This is apparent in the tagline (“A free party game for horrible people”) and descriptions: “Unlike most of the party games you’ve played before, Cards Against Humanity is as despicable and awkward as you and your friends.” By pairing the game and its brand of humor with words like “horrible,” “despicable,” and “awkward,” it shows, again, that these are things we should not laugh about, despite doing so anyway. This self-awareness is at the heart of every “I know I shouldn’t find this funny, but…” statement. “Virginia Tech Massacre” is funny in this “Opposite Day” world; it’s really not funny in other contexts or in the “real world.” This is also why it’s generally OK for Jews to make Holocaust jokes when it is more frowned upon for others to do the same—a non-Jew is far more likely to have less awareness of the consequences of the Holocaust, and that lack of self-awareness makes the attempt at humor far less palatable.

I welcomed 2014 with a game of Cards Against Humanity. While certain cards make me uncomfortable, as argued in the post, I don’t take the view that the game has corrupted me or is able to.

The 2014 Edge Question: What Scientific Idea is Ready for Retirement?

Every year since 1998, Edge.org editor John Brockman has been posing one thought-provoking question to some of the world’s greatest thinkers across a variety of disciplines, and then assimilating the responses in an annual anthology. Last year, he published a book, This Explains Everything: Deep, Beautiful, and Elegant Theories of How the World Works, which collects responses to one of these questions in a single volume.

For 2014, the annual Edge.org question is: What Scientific Idea is Ready for Retirement? I’ll be reading responses for the next few weeks, but for now, I wanted to link to the main page and highlight a few notable responses:

1) Nassim Taleb, one of my all-time favourite thinkers and authors, who argues for throwing out standard deviation as a measure:

The notion of standard deviation has confused hordes of scientists; it is time to retire it from common use and replace it with the more effective one of mean deviation. Standard deviation, STD, should be left to mathematicians, physicists and mathematical statisticians deriving limit theorems. There is no scientific reason to use it in statistical investigations in the age of the computer, as it does more harm than good—particularly with the growing class of people in social science mechanistically applying statistical tools to scientific problems.

Say someone just asked you to measure the “average daily variations” for the temperature of your town (or for the stock price of a company, or the blood pressure of your uncle) over the past five days. The five changes are: (-23, 7, -3, 20, -1). How do you do it?

Do you take every observation: square it, average the total, then take the square root? Or do you remove the sign and calculate the average? For there are serious differences between the two methods. The first produces an average of 15.7, the second 10.8. The first is technically called the root mean square deviation. The second is the mean absolute deviation, MAD. It corresponds to “real life” much better than the first—and to reality. In fact, whenever people make decisions after being supplied with the standard deviation number, they act as if it were the expected mean deviation.

It is all due to a historical accident: in 1893, the great Karl Pearson introduced the term “standard deviation” for what had been known as “root mean square error”. The confusion started then: people thought it meant mean deviation. The idea stuck: every time a newspaper has attempted to clarify the concept of market “volatility”, it defined it verbally as mean deviation yet produced the numerical measure of the (higher) standard deviation.

But it is not just journalists who fall for the mistake: I recall seeing official documents from the department of commerce and the Federal Reserve partaking of the conflation, even regulators in statements on market volatility. What is worse, Goldstein and I found that a high number of data scientists (many with PhDs) also get confused in real life.
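Taleb’s two measures are easy to compare directly on his own five numbers. A minimal sketch (note that the quote’s 15.7 figure corresponds to the usual sample convention of dividing by n − 1):

```python
# Compare root mean square deviation (the basis of standard deviation)
# with mean absolute deviation (MAD) for the five changes in the quote.
changes = [-23, 7, -3, 20, -1]
n = len(changes)

# Root mean square deviation, using the n - 1 (sample) convention,
# which reproduces the 15.7 figure in the quote.
rms = (sum(x**2 for x in changes) / (n - 1)) ** 0.5

# Mean absolute deviation: strip the sign, then average.
mad = sum(abs(x) for x in changes) / n

print(round(rms, 1))  # 15.7
print(round(mad, 1))  # 10.8
```

The gap between the two numbers is exactly Taleb’s point: outliers like −23 get squared, so the RMS-based figure is pulled well above the “average variation” most people have in mind.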

2) Jay Rosen, who argues we should retire the concept of “information overload”:

Here’s the best definition of information that I know of: information is a measure of uncertainty reduced. It’s deceptively simple. In order to have information, you need two things: an uncertainty that matters to us (we’re having a picnic tomorrow, will it rain?) and something that resolves it (weather report.) But some reports create the uncertainty that is later to be solved.

Suppose we learn from news reports that the National Security Agency “broke” encryption on the Internet. That’s information! It reduces uncertainty about how far the U.S. government was willing to go. (All the way.) But the same report increases uncertainty about whether there will continue to be a single Internet, setting us up for more information when that larger picture becomes clearer. So information is a measure of uncertainty reduced, but also of uncertainty created. Which is probably what we mean when we say: “well, that raises more questions than it answers.”
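Rosen’s definition—information as a measure of uncertainty reduced—is essentially Shannon’s. A toy sketch of his picnic example, measuring uncertainty in bits before and after a (hypothetical) weather report:

```python
import math

def entropy(probs):
    """Shannon entropy in bits: the uncertainty of a distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Before the report: to us, rain tomorrow is a coin flip.
before = entropy([0.5, 0.5])          # 1.0 bit of uncertainty

# After the report: we're 95% sure it won't rain.
after = entropy([0.95, 0.05])         # ~0.29 bits remain

information_gained = before - after   # uncertainty reduced, in bits
print(round(information_gained, 2))   # 0.71
```

A report that instead split our picnic question into several new open questions would, in Rosen’s framing, leave us with more total uncertainty than we started with.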

3) Richard Dawkins thinks “essentialism” should be retired:

Essentialism—what I’ve called “the tyranny of the discontinuous mind”—stems from Plato, with his characteristically Greek geometer’s view of things. For Plato, a circle, or a right triangle, were ideal forms, definable mathematically but never realised in practice. A circle drawn in the sand was an imperfect approximation to the ideal Platonic circle hanging in some abstract space. That works for geometric shapes like circles, but essentialism has been applied to living things and Ernst Mayr blamed this for humanity’s late discovery of evolution—as late as the nineteenth century. If, like Aristotle, you treat all flesh-and-blood rabbits as imperfect approximations to an ideal Platonic rabbit, it won’t occur to you that rabbits might have evolved from a non-rabbit ancestor, and might evolve into a non-rabbit descendant. If you think, following the dictionary definition of essentialism, that the essence of rabbitness is “prior to” the existence of rabbits (whatever “prior to” might mean, and that’s a nonsense in itself) evolution is not an idea that will spring readily to your mind, and you may resist when somebody else suggests it.

Paleontologists will argue passionately about whether a particular fossil is, say, Australopithecus or Homo. But any evolutionist knows there must have existed individuals who were exactly intermediate. It’s essentialist folly to insist on the necessity of shoehorning your fossil into one genus or the other. There never was an Australopithecus mother who gave birth to a Homo child, for every child ever born belonged to the same species as its mother. The whole system of labelling species with discontinuous names is geared to a time slice, the present, in which ancestors have been conveniently expunged from our awareness (and “ring species” tactfully ignored). If by some miracle every ancestor were preserved as a fossil, discontinuous naming would be impossible. Creationists are misguidedly fond of citing “gaps” as embarrassing for evolutionists, but gaps are a fortuitous boon for taxonomists who, with good reason, want to give species discrete names. Quarrelling about whether a fossil is “really” Australopithecus or Homo is like quarrelling over whether George should be called “tall”. He’s five foot ten, doesn’t that tell you what you need to know?

4) Kevin Kelly, who argues that “fully random mutations” should be retired from thought (this is something that I’ve known for a while, as I have taken a number of courses in molecular biology):

What is commonly called “random mutation” does not in fact occur in a mathematically random pattern. The process of genetic mutation is extremely complex, with multiple pathways, involving more than one system. Current research suggests most spontaneous mutations occur as errors in the repair process for damaged DNA. Neither the damage nor the errors in repair have been shown to be random in where they occur, how they occur, or when they occur. Rather, the idea that mutations are random is simply a widely held assumption by non-specialists and even many teachers of biology. There is no direct evidence for it.

On the contrary, there’s much evidence that genetic mutations vary in patterns. For instance, it is pretty much accepted that mutation rates increase or decrease as stress on the cells increases or decreases. These variable rates of mutation include mutations induced by stress from an organism’s predators and competition, as well as increased mutations brought on by environmental and epigenetic factors. Mutations have also been shown to have a higher chance of occurring near a place in DNA where mutations have already occurred, creating mutation hotspot clusters—a non-random pattern.

5) Ian Bogost, professor at my alma mater of Georgia Tech, who thinks “science” should be retired:

Beyond encouraging people to see science as the only direction for human knowledge and absconding with the subject of materiality, the rhetoric of science also does a disservice to science itself. It makes science look simple, easy, and fun, when science is mostly complex, difficult, and monotonous.

A case in point: the popular Facebook page “I f*cking love science” posts quick-take variations on the “science of x” theme, mostly images and short descriptions of unfamiliar creatures like the pink fairy armadillo, or illustrated birthday wishes to famous scientists like Stephen Hawking. But as the science fiction writer John Skylar rightly insisted in a fiery takedown of the practice last year, most people don’t f*cking love science, they f*cking love photography—pretty images of fairy armadillos and renowned physicists. The pleasure derived from these pictures obviates the public’s need to understand how science actually gets done—slowly and methodically, with little acknowledgement and modest pay in unseen laboratories and research facilities.

The rhetoric of science has consequences. Things that have no particular relation to scientific practice must increasingly frame their work in scientific terms to earn any attention or support. The sociology of Internet use suddenly transformed into “web science.” Long accepted practices of statistical analysis have become “data science.” Thanks to shifting educational and research funding priorities, anything that can’t claim that it is a member of a STEM (science, technology, engineering, and math) field will be left out in the cold. Unfortunately, the rhetoric of science offers the most tactical response to such new challenges. Unless humanists reframe their work as “literary science,” they risk getting marginalized, defunded and forgotten.

When you’re selling ideas, you have to sell the ideas that will sell. But in a secular age in which the abstraction of “science” risks replacing all other abstractions, a watered-down, bland, homogeneous version of science is all that will remain if the rhetoric of science is allowed to prosper.

We need not choose between God and man, science and philosophy, interpretation and evidence. But ironically, in its quest to prove itself as the supreme form of secular knowledge, science has inadvertently elevated itself into a theology. Science is not a practice so much as it is an ideology. We don’t need to destroy science in order to bring it down to earth. But we do need to bring it down to earth again, and the first step in doing so is to abandon the rhetoric of science that has become its most popular devotional practice.

If you want to get smarter today, go here and spend a few hours reading through the contributions.

The New York Times Treatment of Bistro at Villard Michel Richard

Food critic Pete Wells at The New York Times has just come out with a scathing review of the Bistro at Villard Michel Richard, the fancy new restaurant at the newly renovated New York Palace in Midtown Manhattan. It’s worth reading in its entirety, but these two paragraphs are the best:

Think of everything that’s great about fried chicken. Now take it all away. In its place, right between dried-out strands of gray meat and a shell of fried bread crumbs, imagine a gummy white paste about a quarter-inch deep. This unidentifiable paste coats your mouth until you can’t perceive textures or flavors. It is like edible Novocain.

What Villard Michel Richard’s $28 fried chicken does to Southern cooking, its $40 veal cheek blanquette does to French. A classic blanquette is a gentle, reassuring white stew of sublimely tender veal. In this version, the veal cheeks had the dense, rubbery consistency of overcooked liver. Slithering around the meat was a terrifying sauce the color of jarred turkey gravy mixed with cigar ashes. If soldiers had killed Escoffier’s family in front of him and then forced him to make dinner, this is what he would have cooked.

Mmm, delicious.

The Secrets of Snow-Diving Foxes

This is a super interesting article by NPR’s Robert Krulwich, who summarizes research on why snow foxes jump the way they do in hunting for prey:

When they looked at each other’s notes, the researchers saw a pattern: For some reason, Czech foxes prefer to jump in a particular direction — toward the northeast. (To be more precise, it’s about 20 degrees off “magnetic north” — the “N” on your compass.) As the video above says, most of the time, most foxes miss their targets and emerge covered in snow and (one presumes) a little embarrassed. But when they pointed in that particular northeasterly direction, Ed writes, “they killed on 73 percent of their attacks.” If they reversed direction, and jumped exactly the opposite way, they killed 60 percent of the time. But in all other directions — east, south, west, whatever — they sucked. Only 18 percent of those jumps were successful.

Here’s a video of a hunting fox in action:

Statistical Stylometry: Quantifying Elements of Writing Style that Differentiate Successful Fiction

Can good writing be differentiated from bad writing through some kind of algorithm? Many have tried to answer this research question. The latest news in this realm comes from Stony Brook University, where a group of researchers:

…[T]ook 1000 sentences from the beginning of each book. They performed systematic analyses based on lexical and syntactic features that have been proven effective in Natural Language Processing (NLP) tasks such as authorship attribution, genre detection, gender identification, and native language detection.

“To the best of our knowledge, our work is the first that provides quantitative insights into the connection between the writing style and the success of literary works,” Choi says. “Previous work has attempted to gain insights into the ‘secret recipe’ of successful books. But most of these studies were qualitative, based on a dozen books, and focused primarily on high-level content—the personalities of protagonists and antagonists and the plots. Our work examines a considerably larger collection—800 books—over multiple genres, providing insights into lexical, syntactic, and discourse patterns that characterize the writing styles commonly shared among the successful literature.”

I had no idea there was a name for this kind of research. Statistical stylometry is the statistical analysis of variations in literary style between one writer or genre and another. This study reports, for the first time, that the discipline can be effective in distinguishing highly successful literature from its less successful counterpart, achieving accuracy rates as high as 84%.
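To make “lexical features” concrete, here is a minimal, hypothetical sketch (not the Stony Brook pipeline) computing two simple stylometric features for a passage; real studies feed hundreds of such features into a classifier:

```python
import re

def lexical_features(text):
    """Two simple stylometric features: type-token ratio
    (vocabulary diversity) and average word length."""
    words = re.findall(r"[a-z']+", text.lower())
    ttr = len(set(words)) / len(words)            # unique words / total words
    avg_len = sum(len(w) for w in words) / len(words)
    return ttr, avg_len

sample = "The cat sat on the mat, and the cat slept."
ttr, avg_len = lexical_features(sample)
print(round(ttr, 2), round(avg_len, 2))  # 0.7 3.1
```

Features like these are crude on their own; the interesting result is that combining many of them can separate successful from unsuccessful books with the reported accuracy.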

The best book on writing that I’ve read is Stephen King’s On Writing, in which he makes a point about descriptive writing that the researchers’ findings echo:

[T]he less successful books also rely on verbs that explicitly describe actions and emotions (“wanted”, “took”, “promised”, “cried”, “cheered”), while more successful books favor verbs that describe thought-processing (“recognized”, “remembered”) and verbs that simply serve the purpose of quotes (“say”).

The Reader/Author Transaction Model

A very interesting reading tip from Bret Victor, compiled on this page in the sidebar:

Carver Mead describes a physical theory in which atoms exchange energy by resonating with each other. Before the energy transaction can happen, the two atoms must be phase-matched, oscillating in almost perfect synchrony with each other.

I sometimes think about resonant transactions as a metaphor for getting something out of a piece of writing. Before the material can resonate, before energy can be exchanged between the author and reader, the reader must already have available a mode of vibration at the author’s frequency. (This doesn’t mean that the reader is already thinking the author’s thought; it means the reader is capable of thinking it.)

People often describe written communication in terms of transmission (the author explained the concept well, or poorly) and/or absorption (the reader does or doesn’t have the background or skill to understand the concept). But I think of it more like a transaction — the author and the reader must be matched with each other. The author and reader must share a close-enough worldview, viewpoint, vocabulary, set of mental models, sense of aesthetics, and set of goals. For any particular concept in the material, if not enough of these are sufficiently matched, no resonance will occur and no energy will be exchanged.

Perhaps, as a reader, one way to get more out of more material is to collect and cultivate a diverse set of resonators, to increase the probability of a phase-match.

My brief thought: I wouldn’t be so pessimistic as to say that no energy is ever exchanged in a reader-author relationship. I think there’s always a chance to learn something new.

###

(via Jason Kottke)

The Photography of Adam Magyar

In a long piece titled “Einstein’s Camera,” Joshua Hammer profiles the photography of Adam Magyar:

In a growing body of photographic and video art done over the past decade, Magyar bends conventional representations of time and space, stretching milliseconds into minutes, freezing moments with a resolution that the naked eye could never have perceived. His art evokes such variegated sources as Albert Einstein, Zen Buddhism, even the 1960s TV series The Twilight Zone. The images—sleek silver subway cars, solemn commuters lost in private worlds—are beautiful and elegant, but also produce feelings of disquiet. “These moments I capture are meaningless, there is no story in them, and if you can catch the core, the essence of being, you capture probably everything,” Magyar says in one of the many cryptic comments about his work that reflect both their hypnotic appeal and their elusiveness. There is a sense of stepping into a different dimension, of inhabiting a space between stillness and movement, a time-warp world where the rules of physics don’t apply.

Magyar’s work represents a fruitful cross-fertilization of technology and art, two disciplines—one objective and mathematical, the other entirely subjective—that weren’t always regarded as harmonious or compatible. Yet the two are intertwined, and breakthroughs in technology have often made new forms of art possible. Five thousand years ago, Egyptian technicians heated desert sand, limestone, potash, and copper carbonate in kilns to make a synthetic pigment known as “Egyptian blue,” which contributed to the highly realistic yet stylized portraiture of the Second and Third Dynasties.


Fascinating profile. Definitely worth clicking over to see Adam’s photography, especially the Stainless photography and video features.

Why Everyone Seems to Have Cancer

A thoughtful take on comparing heart disease and cancer by George Johnson in The New York Times.

Half a century ago, the story goes, a person was far more likely to die from heart disease. Now cancer is on the verge of overtaking it as the No. 1 cause of death.

Troubling as this sounds, the comparison is unfair. Cancer is, by far, the harder problem — a condition deeply ingrained in the nature of evolution and multicellular life. Given that obstacle, cancer researchers are fighting and even winning smaller battles: reducing the death toll from childhood cancers and preventing — and sometimes curing — cancers that strike people in their prime. But when it comes to diseases of the elderly, there can be no decisive victory. This is, in the end, a zero-sum game.

As people age their cells amass more potentially cancerous mutations. Given a long enough life, cancer will eventually kill you — unless you die first of something else. That would be true even in a world free from carcinogens and equipped with the most powerful medical technology.

The author is keen on pointing out that the future of medicine will be focused on prevention rather than treatment.

On Hard-Won Lessons in Your Single, Solitary Years

Happy New Year!

I’ve cast this blog aside for the first few days of the year, spending some vacation time in Florida and working on building new relationship(s). To that end, a lot of what I have been reading online almost seems to come my way as something that I was meant to read, confirming my beliefs/values. Perhaps the best example is this Modern Love story in The New York Times, on what being single for many years teaches you:

Being a single person searching for love teaches you that not everything is under your control. You can’t control whether the person you’ve fallen for will call. You can’t force yourself to have feelings for the nice guy your best friend fixed you up with. You have no way to know whether attending this or that event — a co-worker’s art opening, a neighbor’s housewarming — will lead to the chance encounter that will forever alter your life. You simply learn to do your best, and leave it at that.

Ringing endorsement here:

Relationships are work, but so is being single, and I became pretty good at it.

The perspective in the story comes from someone older than me, but I sympathize with this:

Most important, I’ve realized I never needed a long boyfriend résumé for the experience. In the 20 years before I met Mark, I learned a lot of hard lessons: how to be a self-respecting adult in a world that often treats single people like feckless teenagers; how to stand at cocktail parties while my friends’ in-laws asked me if I had a boyfriend; how to have warm, friendly dinners with strangers I had met online as we delicately tried to determine whether we could possibly share our lives together; and how to come home to an empty apartment after a rotten day at work.

A wonderful way to start 2014.

###

(hat tip: @jennydeluxe)