How Dogs are Like Humans

A thought-provoking piece in The New York Times by Gregory Berns, a professor of neuroeconomics at Emory University, on how dogs are like humans in their thought processes. By teaching dogs to sit still in MRI machines, Berns and his team were able to find neurobiological evidence of emotions in dogs akin to the ones we experience:

By looking directly at their brains and bypassing the constraints of behaviorism, M.R.I.’s can tell us about dogs’ internal states. M.R.I.’s are conducted in loud, confined spaces. People don’t like them, and you have to hold absolutely still during the procedure. Conventional veterinary practice says you have to anesthetize animals so they don’t move during a scan. But you can’t study brain function in an anesthetized animal. At least not anything interesting like perception or emotion.

From the beginning, we treated the dogs as persons. We had a consent form, which was modeled after a child’s consent form but signed by the dog’s owner. We emphasized that participation was voluntary, and that the dog had the right to quit the study. We used only positive training methods. No sedation. No restraints. If the dogs didn’t want to be in the M.R.I. scanner, they could leave. Same as any human volunteer.

My dog Callie was the first. Rescued from a shelter, Callie was a skinny black terrier mix, what is called a feist in the southern Appalachians, from where she came. True to her roots, she preferred hunting squirrels and rabbits in the backyard to curling up in my lap. She had a natural inquisitiveness, which probably landed her in the shelter in the first place, but also made training a breeze.

With the help of my friend Mark Spivak, a dog trainer, we started teaching Callie to go into an M.R.I. simulator that I built in my living room. She learned to walk up steps into a tube, place her head in a custom-fitted chin rest, and hold rock-still for periods of up to 30 seconds. Oh, and she had to learn to wear earmuffs to protect her sensitive hearing from the 95 decibels of noise the scanner makes.

After months of training and some trial-and-error at the real M.R.I. scanner, we were rewarded with the first maps of brain activity. For our first tests, we measured Callie’s brain response to two hand signals in the scanner. In later experiments, not yet published, we determined which parts of her brain distinguished the scents of familiar and unfamiliar dogs and humans.

This is truly fascinating.

I have placed Gregory Berns’s upcoming book, How Dogs Love Us: A Neuroscientist and His Adopted Dog Decode the Canine Brain, into my Amazon queue.

Debunked: “Right-Brain” vs. “Left-Brain” Personalities

For years in popular culture, the terms “left-brained” and “right-brained” have come to signify disparate personality types, with an assumption that some people use the right side of their brain more (those who are supposedly more creative/artistic) while others use the left side more (those who are more logical/analytical). But newly released research findings from University of Utah neuroscientists assert that brain imaging shows no evidence that some people are right-brained or left-brained:

Following a two-year study, University of Utah researchers have debunked that myth through identifying specific networks in the left and right brain that process lateralized functions. Lateralization of brain function means that there are certain mental processes that are mainly specialized to one of the brain’s left or right hemispheres. During the course of the study, researchers analyzed resting brain scans of 1,011 people between the ages of seven and 29. In each person, they studied functional lateralization of the brain measured for thousands of brain regions — finding no relationship that individuals preferentially use their left-brain network or right-brain network more often.


“It’s absolutely true that some brain functions occur in one or the other side of the brain. Language tends to be on the left, attention more on the right. But people don’t tend to have a stronger left- or right-sided brain network. It seems to be determined more connection by connection,” said Jeff Anderson, M.D., Ph.D., lead author of the study, which is formally titled “An Evaluation of the Left-Brain vs. Right-Brain Hypothesis with Resting State Functional Connectivity Magnetic Resonance Imaging.” It is published in the journal PLOS ONE this month.

From the paper’s abstract:

Lateralized brain regions subserve functions such as language and visuospatial processing. It has been conjectured that individuals may be left-brain dominant or right-brain dominant based on personality and cognitive style, but neuroimaging data has not provided clear evidence whether such phenotypic differences in the strength of left-dominant or right-dominant networks exist. We evaluated whether strongly lateralized connections covaried within the same individuals. Data were analyzed from publicly available resting state scans for 1011 individuals between the ages of 7 and 29. For each subject, functional lateralization was measured for each pair of 7266 regions covering the gray matter at 5-mm resolution as a difference in correlation before and after inverting images across the midsagittal plane. The difference in gray matter density between homotopic coordinates was used as a regressor to reduce the effect of structural asymmetries on functional lateralization. Nine left- and 11 right-lateralized hubs were identified as peaks in the degree map from the graph of significantly lateralized connections. The left-lateralized hubs included regions from the default mode network (medial prefrontal cortex, posterior cingulate cortex, and temporoparietal junction) and language regions (e.g., Broca Area and Wernicke Area), whereas the right-lateralized hubs included regions from the attention control network (e.g., lateral intraparietal sulcus, anterior insula, area MT, and frontal eye fields). Left- and right-lateralized hubs formed two separable networks of mutually lateralized regions. Connections involving only left- or only right-lateralized hubs showed positive correlation across subjects, but only for connections sharing a node. 
Lateralization of brain connections appears to be a local rather than global property of brain networks, and our data are not consistent with a whole-brain phenotype of greater “left-brained” or greater “right-brained” network strength across individuals. Small increases in lateralization with age were seen, but no differences in gender were observed.
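The lateralization measure the abstract describes — correlating a pair of regions before and after flipping the image across the midsagittal plane — can be sketched with synthetic data. This is only a toy illustration (the time series, noise levels, and region names below are made up for the demo, not the study’s actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
n_timepoints = 200

# Synthetic resting-state signals: two left-hemisphere "language" regions
# share a common driving signal, while their homotopic (mirror-image)
# counterparts on the right are mostly independent noise.
shared_left = rng.standard_normal(n_timepoints)
broca = shared_left + 0.5 * rng.standard_normal(n_timepoints)
wernicke = shared_left + 0.5 * rng.standard_normal(n_timepoints)
broca_mirror = rng.standard_normal(n_timepoints)
wernicke_mirror = rng.standard_normal(n_timepoints)

def corr(a, b):
    """Pearson correlation between two time series."""
    return float(np.corrcoef(a, b)[0, 1])

# Lateralization of the connection: correlation of the pair minus the
# correlation of the same pair after "flipping" across the midline.
laterality = corr(broca, wernicke) - corr(broca_mirror, wernicke_mirror)
print(round(laterality, 2))  # strongly positive => left-lateralized connection
```

A strongly positive difference means the connection is stronger between the left-sided pair than between their right-hemisphere mirror images — i.e., the connection is left-lateralized, which is exactly the local, connection-by-connection property Anderson describes.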

So while there certainly are more creative/artistic people and more logical/analytical people in the world, this study suggests that overall left- or right-brain dominance does not account for those personality traits. You learn something new every day, right?

Implanting False Memories in the Mouse Brain

A fascinating new paper coming out of MIT details how researchers were able to implant false memories in mice. From the abstract:

Memories can be unreliable. We created a false memory in mice by optogenetically manipulating memory engram–bearing cells in the hippocampus. Dentate gyrus (DG) or CA1 neurons activated by exposure to a particular context were labeled with channelrhodopsin-2. These neurons were later optically reactivated during fear conditioning in a different context. The DG experimental group showed increased freezing in the original context, in which a foot shock was never delivered. The recall of this false memory was context-specific, activated similar downstream regions engaged during natural fear memory recall, and was also capable of driving an active fear response. Our data demonstrate that it is possible to generate an internally represented and behaviorally expressed fear memory via artificial means.

In their research, neuroscientist Susumu Tonegawa and his team used a technique known as optogenetics, which allows the fine control of individual brain cells. They engineered brain cells in the mouse hippocampus, a part of the brain known to be involved in forming memories, to express the gene for a protein called channelrhodopsin. When cells that contain channelrhodopsin are exposed to blue light, they become activated. The researchers also modified the hippocampus cells so that the channelrhodopsin protein would be produced in whichever brain cells the mouse was using to encode its memory engrams.

The Guardian summarizes:

In the experiment, Tonegawa’s team placed the mice in a chamber and allowed them to explore it. As they did so, relevant memory-encoding brain cells were producing the channelrhodopsin protein. The next day, the same mice were placed in a second chamber and given a small electric shock, to encode a fear response. At the same time, the researchers shone light into the mouse brains to activate their memories of the first chamber. That way, the mice learned to associate fear of the electric shock with the memory of the first chamber.

In the final part of the experiment, the team placed the mice back in the first chamber. The mice froze, demonstrating a typical fear response, even though they had never been shocked while there. “We call this ‘incepting’ or implanting false memories in a mouse brain,” Tonegawa told Science.

Why is this fascinating? Because a similar process may occur when powerful false memories are created in humans, even if the process is much more complicated in the human brain.


On Hearing vs. Listening

Seth Horowitz, an auditory neuroscientist and author of The Universal Sense: How Hearing Shapes the Mind, explains the difference between hearing something and actively listening:

But when you actually pay attention to something you’re listening to, whether it is your favorite song or the cat meowing at dinnertime, a separate “top-down” pathway comes into play. Here, the signals are conveyed through a dorsal pathway in your cortex, part of the brain that does more computation, which lets you actively focus on what you’re hearing and tune out sights and sounds that aren’t as immediately important.

In this case, your brain works like a set of noise-suppressing headphones, with the bottom-up pathways acting as a switch to interrupt if something more urgent — say, an airplane engine dropping through your bathroom ceiling — grabs your attention.

Hearing, in short, is easy. You and every other vertebrate that hasn’t suffered some genetic, developmental or environmental accident have been doing it for hundreds of millions of years. It’s your life line, your alarm system, your way to escape danger and pass on your genes. But listening, really listening, is hard when potential distractions are leaping into your ears every fifty-thousandth of a second — and pathways in your brain are just waiting to interrupt your focus to warn you of any potential dangers.

Listening is a skill that we’re in danger of losing in a world of digital distraction and information overload.

Are you listening?

On Memory Distortion and Invention

A new study by Brent Strickland and Frank Keil at Yale has shown that people’s memory may become distorted in just a few seconds:

Fifty-eight uni students watched three types of 30-second video clip, each featuring a person kicking, throwing, putting or hitting a ball or shuttlecock. All videos were silent. One type of video ended with the consequences of the athletic action implied in the clip – for example, a football flying off into the distance. Another type lacked that final scene and ended instead with an irrelevant shot, for example of a linesman jogging down the line. The final video type was scrambled, with events unfolding in a jumbled order. Crucially, regardless of the video type, sometimes the moment of contact – for example, the kicker actually striking the ball – was shown and sometimes it wasn’t. 

After watching each video clip, the participants were shown a series of stills and asked to say if each one had or hadn’t featured in the video they’d just watched. Here’s the main finding. Participants who watched the video type that climaxed with the ball (or shuttlecock etc) flying off into the distance were prone to saying they’d seen the causal moment of contact in the video, even when that particular image had in fact been missing.

In other words, because seeing the ball fly off implied that the kicker (or other protagonist) had struck the ball, the participants tended to invent a memory for having seen that causal action happen, even when they hadn’t. This memory distortion happened within seconds, sometimes as soon as a second after the relevant part of the video had been seen.

This memory invention didn’t happen for the videos that had an irrelevant ending, or that were scrambled. So memory invention was specifically triggered by observing a consequence (e.g. a ball flying off into the distance) that implied an earlier causal action had happened and had been seen. In this case, the participants appeared to have “filled in” the missing moment of contact from the video, thus creating a causally coherent episode package for their memories. A similar level of memory invention didn’t occur for other missing screen shots that had nothing to do with the implied causal action in the clip.

This isn’t the first such study, but it is further evidence that the way we process memories may be easily manipulated…


(via Research Digest)


On Winning

The notion of winning, as presented in this Newsweek piece:

Defined that way, winning becomes translatable into areas beyond the physical: chess, spelling bees, the corporate world, even combat. You can’t go forever down that road, of course. The breadth of our colloquial definition for winning—the fact that we use the same word for being handed an Oscar as for successfully prosecuting a war—means that there is no single gene for victory across all fields, no cerebral on-off switch that turns also-rans into champions. But neuroscientists, psychologists, and other researchers are beginning to better understand the highly interdisciplinary concept of winning, finding surprising links between brain chemistry, social theory, and even economics, which together give new insight into why some people come out on top again and again.

I am not sure I agree with this, however. I would think it depends on one’s personality (do you have more or less empathy than the average individual?):

What’s better than winning? Doing it while someone else loses. An economist at the University of Bonn has shown that test subjects who receive a given reward for a task enjoy it significantly more if other subjects fail or do worse—a finding that upends traditional economic theories that absolute reward is a person’s central motivation. It’s one of several new inroads into the social dynamics of winning yielded by neuroeconomics, a trendy new field that mixes elements of neuroscience, economics, and cognitive psychology to determine why people make the choices they do—even, or especially, the irrational ones.

Also, the case study revolving around Agassi in the piece is interesting… Why do Americans love a winner? The last sentence provides the stimulating answer.

David Eagleman and The Brain on Trial

Consider, for a second, everything you know about the motivations behind criminal activity. For most of us, myself included, our assessment of burglars, murderers, and other deviants is that they have made a choice to act this way (to break the law).

In a remarkable, provocative piece by David Eagleman, he suggests that criminal activity is ingrained in our brains. In no uncertain terms, Eagleman argues that how the human brain is wired ultimately determines how people will act. There is no such thing as free will.

The piece is long (but a must-read in its entirety). I pull a few significant quotes below.

The piece begins with Charles Whitman, a student at the University of Texas at Austin and a former Marine who killed 16 people and wounded 32 others during a shooting rampage on and around the university’s campus on August 1, 1966. The question was: why? Eagleman begins to make his argument here, after Whitman’s death:

Whitman’s body was taken to the morgue, his skull was put under the bone saw, and the medical examiner lifted the brain from its vault. He discovered that Whitman’s brain harbored a tumor the diameter of a nickel. This tumor, called a glioblastoma, had blossomed from beneath a structure called the thalamus, impinged on the hypothalamus, and compressed a third region called the amygdala. The amygdala is involved in emotional regulation, especially of fear and aggression. By the late 1800s, researchers had discovered that damage to the amygdala caused emotional and social disturbances. In the 1930s, the researchers Heinrich Klüver and Paul Bucy demonstrated that damage to the amygdala in monkeys led to a constellation of symptoms, including lack of fear, blunting of emotion, and overreaction.

Perhaps the paragraph that tells the whole story of the piece:

When your biology changes, so can your decision-making and your desires. The drives you take for granted (“I’m a heterosexual/homosexual,” “I’m attracted to children/adults,” “I’m aggressive/not aggressive,” and so on) depend on the intricate details of your neural machinery. Although acting on such drives is popularly thought to be a free choice, the most cursory examination of the evidence demonstrates the limits of that assumption.

It is fascinating to learn how changing brain chemistry affects our moods, emotions, and behaviors. A classic example:

Changes in the balance of brain chemistry, even small ones, can also cause large and unexpected changes in behavior. Victims of Parkinson’s disease offer an example. In 2001, families and caretakers of Parkinson’s patients began to notice something strange. When patients were given a drug called pramipexole, some of them turned into gamblers. And not just casual gamblers, but pathological gamblers. These were people who had never gambled much before, and now they were flying off to Vegas. One 68-year-old man amassed losses of more than $200,000 in six months at a series of casinos.

Through the mini-stories he provides in the piece, Eagleman explains the lesson: there is no such thing as free will. Human behavior cannot be separated from our brain chemistry:

The lesson from all these stories is the same: human behavior cannot be separated from human biology. If we like to believe that people make free choices about their behavior (as in, “I don’t gamble, because I’m strong-willed”), cases like Alex the pedophile, the frontotemporal shoplifters, and the gambling Parkinson’s patients may encourage us to examine our views more carefully. Perhaps not everyone is equally “free” to make socially appropriate choices.

Now, it’s a little hard to digest that paragraph above. Cleverly, Eagleman begins to question you, the reader, on how you feel about this hypothesis:

Does the discovery of Charles Whitman’s brain tumor modify your feelings about the senseless murders he committed? Does it affect the sentence you would find appropriate for him, had he survived that day? Does the tumor change the degree to which you consider the killings “his fault”? Couldn’t you just as easily be unlucky enough to develop a tumor and lose control of your behavior?

On the other hand, wouldn’t it be dangerous to conclude that people with a tumor are free of guilt, and that they should be let off the hook for their crimes?

As our understanding of the human brain improves, juries are increasingly challenged with these sorts of questions. When a criminal stands in front of the judge’s bench today, the legal system wants to know whether he is blameworthy. Was it his fault, or his biology’s fault?

At this point, Eagleman perhaps worries that he is going to lose readers. These ideas are crazy, you might think. But please read on, as Eagleman suggests:

If I seem to be heading in an uncomfortable direction—toward letting criminals off the hook—please read on, because I’m going to show the logic of a new argument, piece by piece. The upshot is that we can build a legal system more deeply informed by science, in which we will continue to take criminals off the streets, but we will customize sentencing, leverage new opportunities for rehabilitation, and structure better incentives for good behavior. 

Some overwhelming statistics about criminal behavior:

Who you even have the possibility to be starts at conception. If you think genes don’t affect how people behave, consider this fact: if you are a carrier of a particular set of genes, the probability that you will commit a violent crime is four times as high as it would be if you lacked those genes. You’re three times as likely to commit robbery, five times as likely to commit aggravated assault, eight times as likely to be arrested for murder, and 13 times as likely to be arrested for a sexual offense. The overwhelming majority of prisoners carry these genes; 98.1 percent of death-row inmates do. These statistics alone indicate that we cannot presume that everyone is coming to the table equally equipped in terms of drives and behaviors.

But what about environmental effects? Surely someone growing up on the mean streets of Detroit would be more predisposed to crime than someone growing up in the quiet suburbs of Wichita, Kansas.

When it comes to nature and nurture, the important point is that we choose neither one. We are each constructed from a genetic blueprint, and then born into a world of circumstances that we cannot control in our most formative years. The complex interactions of genes and environment mean that all citizens—equal before the law—possess different perspectives, dissimilar personalities, and varied capacities for decision-making. The unique patterns of neurobiology inside each of our heads cannot qualify as choices; these are the cards we’re dealt.

Eagleman further expounds on free will, explaining that it doesn’t exist, with a striking example involving Tourette’s syndrome:

The legal system rests on the assumption that we are “practical reasoners,” a term of art that presumes, at bottom, the existence of free will. The idea is that we use conscious deliberation when deciding how to act—that is, in the absence of external duress, we make free decisions. This concept of the practical reasoner is intuitive but problematic.

The existence of free will in human behavior is the subject of an ancient debate. Arguments in support of free will are typically based on direct subjective experience (“I feel like I made the decision to lift my finger just now”). But evaluating free will requires some nuance beyond our immediate intuitions.

Consider a decision to move or speak. It feels as though free will leads you to stick out your tongue, or scrunch up your face, or call someone a name. But free will is not required to play any role in these acts. People with Tourette’s syndrome, for instance, suffer from involuntary movements and vocalizations. A typical Touretter may stick out his tongue, scrunch up his face, or call someone a name—all without choosing to do so.

So what’s the purpose of this essay? What can we conclude? Comparatively speaking, we know so little about our brains that the field of neuroscience can be said to be in its infancy.

Today, neuroimaging [editor’s note: I studied medical imaging both in undergrad at Georgia Tech and at the Brain Imaging Center at California Institute of Technology; I am familiar with the subject matter and for what it’s worth, agree with Eagleman’s assessment] is a crude technology, unable to explain the details of individual behavior. We can detect only large-scale problems, but within the coming decades, we will be able to detect patterns at unimaginably small levels of the microcircuitry that correlate with behavioral problems. Neuroscience will be better able to say why people are predisposed to act the way they do. As we become more skilled at specifying how behavior results from the microscopic details of the brain, more defense lawyers will point to biological mitigators of guilt, and more juries will place defendants on the not-blameworthy side of the line.

Further conclusions from Eagleman. The wrong question to ask: how can we assign a blameworthiness scale in our legal system? Eagleman explains:

Blameworthiness should be removed from the legal argot. It is a backward-looking concept that demands the impossible task of untangling the hopelessly complex web of genetics and environment that constructs the trajectory of a human life.

Instead of debating culpability, we should focus on what to do, moving forward, with an accused lawbreaker. I suggest that the legal system has to become forward-looking, primarily because it can no longer hope to do otherwise. As science complicates the question of culpability, our legal and social policy will need to shift toward a different set of questions: How is a person likely to behave in the future? Are criminal actions likely to be repeated? Can this person be helped toward pro-social behavior? How can incentives be realistically structured to deter crime?

Speaking of wrong questions to ask, Eagleman argues brilliantly:

The important change will be in the way we respond to the vast range of criminal acts. Biological explanation will not exculpate criminals; we will still remove from the streets lawbreakers who prove overaggressive, underempathetic, and poor at controlling their impulses. Consider, for example, that the majority of known serial killers were abused as children. Does this make them less blameworthy? Who cares? It’s the wrong question. The knowledge that they were abused encourages us to support social programs to prevent child abuse, but it does nothing to change the way we deal with the particular serial murderer standing in front of the bench. We still need to keep him off the streets, irrespective of his past misfortunes. The child abuse cannot serve as an excuse to let him go; the judge must keep society safe.

And then we come to the meat of the essay, where Eagleman gives us an idea of a forward-looking legal system:

Beyond customized sentencing, a forward-thinking legal system informed by scientific insights into the brain will enable us to stop treating prison as a one-size-fits-all solution. To be clear, I’m not opposed to incarceration, and its purpose is not limited to the removal of dangerous people from the streets. The prospect of incarceration deters many crimes, and time actually spent in prison can steer some people away from further criminal acts upon their release. But that works only for those whose brains function normally. The problem is that prisons have become our de facto mental-health-care institutions—and inflicting punishment on the mentally ill usually has little influence on their future behavior. An encouraging trend is the establishment of mental-health courts around the nation: through such courts, people with mental illnesses can be helped while confined in a tailored environment. Cities such as Richmond, Virginia, are moving in this direction, for reasons of justice as well as cost-effectiveness. Sheriff C. T. Woody, who estimates that nearly 20 percent of Richmond’s prisoners are mentally ill, told CBS News, “The jail isn’t a place for them. They should be in a mental-health facility.” Similarly, many jurisdictions are opening drug courts and developing alternative sentences; they have realized that prisons are not as useful for solving addictions as are meaningful drug-rehabilitation programs.

A forward-thinking legal system will also parlay biological understanding into customized rehabilitation, viewing criminal behavior the way we understand other medical conditions such as epilepsy, schizophrenia, and depression—conditions that now allow the seeking and giving of help. These and other brain disorders find themselves on the not-blameworthy side of the fault line, where they are now recognized as biological, not demonic, issues.

But Eagleman closes spectacularly:

As brain science improves, we will better understand that people exist along continua of capabilities, rather than in simplistic categories. And we will be better able to tailor sentencing and rehabilitation for the individual, rather than maintain the pretense that all brains respond identically to complex challenges and that all people therefore deserve the same punishments. Some people wonder whether it’s unfair to take a scientific approach to sentencing—after all, where’s the humanity in that? But what’s the alternative? As it stands now, ugly people receive longer sentences than attractive people; psychiatrists have no capacity to guess which sex offenders will reoffend; and our prisons are overcrowded with drug addicts and the mentally ill, both of whom could be better helped by rehabilitation. So is current sentencing really superior to a scientifically informed approach?

Neuroscience is beginning to touch on questions that were once only in the domain of philosophers and psychologists, questions about how people make decisions and the degree to which those decisions are truly “free.” These are not idle questions. Ultimately, they will shape the future of legal theory and create a more biologically informed jurisprudence.


I’ve highlighted the major sections of the essay, but of course, I encourage you to read the whole thing. It will change how you view and think about criminality and our legal system. If by some chance it didn’t change your thinking, why not? Sound off in the comments.

The Top Ten Wired Articles of 2010

I subscribed to Wired Magazine (print edition) in December of 2009. I’ve read almost all of the feature articles over the last twelve months. The following is my list of the top ten Wired articles that appeared in print from January through December of this year. I highlight notable passages from each piece as well.

(1) “The Neuroscience of Screwing Up” (January 2010). Jonah Lehrer is one of my favorite science writers (do subscribe to his excellent blog, The Frontal Cortex), and his piece in the January edition of Wired is a good way to begin this list. The piece challenges our preconceptions of the scientific process and how we make mistakes in the scientific quest for answers:

The reason we’re so resistant to anomalous information — the real reason researchers automatically assume that every unexpected result is a stupid mistake — is rooted in the way the human brain works. Over the past few decades, psychologists have dismantled the myth of objectivity. The fact is, we carefully edit our reality, searching for evidence that confirms what we already believe. Although we pretend we’re empiricists — our views dictated by nothing but the facts — we’re actually blinkered, especially when it comes to information that contradicts our theories. The problem with science, then, isn’t that most experiments fail — it’s that most failures are ignored.

(2) “Fill in the Blanks: Using Math to Turn Lo-Res Datasets into High-Res Samples” (March 2010). I highlighted this piece in this entry, and it’s still definitely one of the most interesting articles I’ve read this year, not least because the entire concept of compressed sensing was totally new to me:

Compressed sensing works something like this: You’ve got a picture — of a kidney, of the president, doesn’t matter. The picture is made of 1 million pixels. In traditional imaging, that’s a million measurements you have to make. In compressed sensing, you measure only a small fraction — say, 100,000 pixels randomly selected from various parts of the image. From that starting point there is a gigantic, effectively infinite number of ways the remaining 900,000 pixels could be filled in.
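The idea in the quote — recover a full signal from far fewer random measurements by exploiting sparsity — can be sketched in a few lines. The following is a minimal, hypothetical Python example (the sizes, parameters, and helper name are my own, not from the article): it recovers a sparse 1,000-sample signal from only 100 random measurements using iterative soft-thresholding (ISTA), one standard algorithm for this kind of ℓ1 reconstruction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a length-1000 signal with only 10 nonzero entries,
# observed through 100 random linear measurements -- far fewer
# measurements than the signal has samples.
n, m, k = 1000, 100, 10
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)  # random measurement matrix
y = A @ x_true                                # the m measurements we keep

def ista(A, y, lam=0.02, steps=3000):
    """Iterative soft-thresholding: seeks a sparse x with A @ x ≈ y."""
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
    for _ in range(steps):
        x = x - (A.T @ (A @ x - y)) / L                      # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0)  # sparsify
    return x

x_hat = ista(A, y)
rel_error = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print("relative recovery error:", rel_error)
```

Among the “gigantic, effectively infinite number of ways” to fill in the missing values, the soft-thresholding step steers the solver toward the sparsest signal consistent with the measurements — that preference for sparsity is what makes the recovery well posed.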

(3) “Art of the Steal: On the Trail of the World’s Most Ingenious Thief” (April 2010). A fascinating piece about Gerald Blanchard, who has been described as “cunning, clever, conniving, and creative.” It’s incredible what he was able to accomplish during his criminal career:

Over the years, Blanchard procured and stockpiled IDs and uniforms from various security companies and even law enforcement agencies. Sometimes, just for fun and to see whether it would work, he pretended to be a reporter so he could hang out with celebrities. He created VIP passes and applied for press cards so he could go to NHL playoff games or take a spin around the Indianapolis Motor Speedway with racing legend Mario Andretti. He met the prince of Monaco at a yacht race in Monte Carlo and interviewed Christina Aguilera at one of her concerts.

(4) “Getting LOST” (May 2010). LOST is my favorite show on television (by far), so it’s with some bias that I include this piece in the top ten. It has outstanding trivia about the show, an interview with executive producers Carlton Cuse and Damon Lindelof, and really excellent infographics (my favorite is this one).

(5) “The Man Who Could Unsnarl Manhattan Traffic” (June 2010). Felix Salmon (whose finance blog I follow at Reuters; unrelated, but I also recommend Salmon’s excellent take on bicycling in New York City) reports on Charles Komanoff, the man whose goal is to alleviate traffic in New York City.

[It is] the most ambitious effort yet to impose mathematical rigor and predictability on an inherently chaotic phenomenon. Despite decades of attempts to curb delays—adding lanes to highways, synchronizing traffic lights—planners haven’t had much success at unsnarling gridlock. A study by the Texas Transportation Institute found that in 2007, metropolitan-area drivers in the US spent an average of 36 hours stuck in traffic—up from 14 hours in 1982.

Komanoff tracks all of this data in a massive spreadsheet, dubbed the Balanced Transportation Analyzer (warning: 5.5MB .xls link):

Over the course of about 50 worksheets, the BTA breaks down every aspect of New York City transportation—subway revenues, traffic jams, noise pollution—in an attempt to discover which mix of tolls and surcharges would create the greatest benefit for the largest number of people.

(6) “Secret of AA” (July 2010). Some 1.2 million people belong to one of Alcoholics Anonymous’s 55,000 meeting groups in the United States. But after 75 years, we still don’t know how it works. Fascinating:

There’s no doubt that when AA works, it can be transformative. But what aspect of the program deserves most of the credit? Is it the act of surrendering to a higher power? The making of amends to people a drinker has wronged? The simple admission that you have a problem? Stunningly, even the most highly regarded AA experts have no idea.

(7) “The News Factory” (September 2010). You’ve probably seen those videos from Taiwan recounting events of the moment through hilarious animations (see The iPhone Antennagate; Chilean Miners). What’s fascinating is that there’s an entire company devoted to creating them. Next Media Animation (NMA) is a factory churning out these videos:

The team at Next Media Animation cranks out about 20 short clips a day, most involving crimes and scandals in Hong Kong and Taiwan. But a few are focused on tabloid staples in the US—from Tiger Woods’ marital troubles to Michael Jackson’s death. Seeing them filtered through the Next Media lens is as disorienting as it is entertaining.

How can they create such impressive (relatively speaking) videos in such a short period of time?

It takes Pixar up to seven hours to render a single frame of footage—that is, to convert the computer data into video. NMA needed to create an animated clip in a third of that time and render more than a thousand frames of animation in just a few minutes. A team spent two years wrestling with the problem, experimenting with one digital tool after another—Poser, 3ds Max, Maya. “It didn’t look good, and it took too long,” says Eric Ryder, a Next art director. “But Jimmy doesn’t want excuses.”

(8) “The Nerd Superstore” (October 2010). An excellent look into ThinkGeek, a site for nerds. ThinkGeek is a profitable company that carries an assortment of products:

Today ThinkGeek has 51 employees. Single-day orders occasionally top out at $1 million, and an astonishing amount of that product is caffeine. You can purchase it online or from the mail-order catalog in the form of mints, candy, gum, jerky, sprays, capsules, chews, cookies, and powders, as well as in lip balms, brownie mix, and soaps (liquid and solid). The company has thus far pushed more than 1 billion milligrams of the stimulant.

Where else could you purchase awesome sauce, brain freeze ice cubes, and an 8-bit tie all in one place?

(9) “The Quantified City” (November 2010). What can a hundred million calls to 311 reveal about a city? Steven Johnson uses New York City as a case study in what all that collected data can show:

As useful as 311 is to ordinary New Yorkers, the most intriguing thing about the service is all the information it supplies back to the city. Each complaint is logged, tagged, and mapped to make it available for subsequent analysis. In some cases, 311 simply helps New York respond more intelligently to needs that were obvious to begin with. Holidays, for example, spark reliable surges in call volume, with questions about government closings and parking regulations. On snow days, call volume spikes precipitously, which 311 anticipates with recorded messages about school closings and parking rules.

Also worth a look: the 311 complaints for one week in September, visualized in an infographic (a question for the reader: do you think population density matters here?).
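At its core, the workflow the quote describes — each complaint logged and tagged so patterns like holiday call spikes become visible — is simple aggregation. A minimal Python sketch, using made-up sample records (the categories and dates are hypothetical, not the city’s actual data):

```python
from collections import Counter
from datetime import date

# Hypothetical sample of logged 311 complaints: (date, category).
# Real 311 records also carry geographic tags for mapping.
calls = [
    (date(2010, 9, 6), "parking rules"),       # Labor Day
    (date(2010, 9, 6), "government closings"),
    (date(2010, 9, 6), "parking rules"),
    (date(2010, 9, 7), "noise"),
    (date(2010, 9, 8), "noise"),
    (date(2010, 9, 8), "heat/hot water"),
]

# Tally complaints by category and by day -- the kind of basic
# aggregation that makes a holiday surge in call volume stand out.
by_category = Counter(cat for _, cat in calls)
by_day = Counter(d for d, _ in calls)

print(by_category.most_common(3))
print(by_day.most_common(1))
```

Even this toy tally surfaces the holiday pattern the article mentions: the Labor Day date dominates the per-day counts, driven by parking and closing questions.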

(10) “Teen Mathletes Do Battle at Algorithm Olympics” (December 2010). Excellent piece by Jason Fagone about kids competing at the International Olympiad in Informatics (IOI). While the piece focuses on two students, it’s important to note how elite this event is:

China’s approach to IOI is proof of just how serious the contest has become and how tied up it is in notions of national prestige and economic competitiveness. To earn a spot on the Chinese team, a coder has to beat 80,000 of his compatriots in a series of provincial elimination rounds that last an entire year.

But what’s the downside of such intense training and competition? I ponder the possibilities with some personal reflections in this post.



1) For some of the titles above, I’ve used the titles presented in the print edition of Wired (the titles are usually longer on the Web).

2) If you’re a fan of Wired, what’s your favorite article from 2010? Feel free to comment below.

Readings: Camera Head, Brain on Metaphors

Here are two excellent reads from this week:

1) “Sir, There’s a Camera in Your Head” [Wall Street Journal] – An Iraqi assistant professor in the photography and imaging department of NYU’s Tisch School of the Arts, Wafaa Bilal, intends to undergo surgery in the coming weeks to install a camera on the back of his head. Why? It’s a commission by a museum in Qatar:

For one year, Mr. Bilal’s camera will take still pictures at one-minute intervals, then feed the photos to monitors at the museum. The thumbnail-sized camera will be affixed to his head through a piercing-like attachment.

Mr. Bilal’s camera-based work will be overseen by the Qatar Museums Authority.

It remains to be seen whether this project will see the light of day, as NYU administrators have raised privacy concerns (students being filmed without their consent or knowledge). Of course, Mr. Bilal isn’t new to controversial projects. In a 2008 project, Virtual Jihadi, Mr. Bilal hacked a video game to insert an avatar of himself as a suicide bomber hunting President George W. Bush. In his 2007 work, Domestic Tension, Mr. Bilal trapped himself in a Chicago museum for a month, inviting the public to visit a website where they could “shoot” the artist remotely by firing a paintball gun at him. His other projects are interesting as well: Mona Lisa (an exploration of that enigmatic smile) and One Chair, based on Leonardo da Vinci’s The Last Supper.

2) “This is Your Brain on Metaphors” [New York Times] – a brilliant piece by Robert Sapolsky, a professor of biology, neurology, and neurosurgery at Stanford University. In it, Sapolsky explains how surprisingly well human brains are wired to understand metaphors:

Symbols, metaphors, analogies, parables, synecdoche, figures of speech: we understand them. We understand that a captain wants more than just hands when he orders all of them on deck. We understand that Kafka’s “Metamorphosis” isn’t really about a cockroach. If we are of a certain theological ilk, we see bread and wine intertwined with body and blood. We grasp that the right piece of cloth can represent a nation and its values, and that setting fire to such a flag is a highly charged act.

It’s interesting how our brains can be primed with sensory inputs, such as touch. For instance, I found this remarkable:

Volunteers were asked to evaluate the resumes of supposed job applicants where, as the critical variable, the resume was attached to a clipboard of one of two different weights. Subjects who evaluated the candidate while holding the heavier clipboard tended to judge candidates to be more serious, with the weight of the clipboard having no effect on how congenial the applicant was judged. After all, we say things like “weighty matter” or “gravity of a situation.”

The question is: knowing this information, how can you use it to your advantage in daily life? Next time you want someone to consider your question or idea, perhaps give them a cup of coffee or some item to hold while explaining yourself. Of course, now that you’ve read about this effect, you may be more attuned to it, so that it doesn’t play as large a role in your future decisions (I hope).

Perhaps the most interesting study profiled is that on cleanliness:

Another truly interesting domain in which the brain confuses the literal and metaphorical is cleanliness. In a remarkable study, Chen-Bo Zhong of the University of Toronto and Katie Liljenquist of Northwestern University demonstrated how the brain has trouble distinguishing between being a dirty scoundrel and being in need of a bath. Volunteers were asked to recall either a moral or immoral act in their past. Afterward, as a token of appreciation, Zhong and Liljenquist offered the volunteers a choice between the gift of a pencil or of a package of antiseptic wipes. And the folks who had just wallowed in their ethical failures were more likely to go for the wipes.

Sapolsky’s piece is one of the best short expositions I’ve read explaining how our brains are wired; the references to everyday situations are particularly interesting. If you’re into neuroscience and want to learn more about the forces in our lives that shape our decisions, I cannot recommend Dan Ariely’s Predictably Irrational enough. It’s one of the best books I’ve read this year.