The Difference between Affluent, Rich, and Super-Rich

One of the best things I’ve read this week is Ben Casnocha’s blog post “The Goldilocks Theory of Being Rich,” on what it means to be rich. In it, Ben correctly posits that today there’s a very small difference between the rich and the American middle class in terms of quality of life, and he differentiates among the affluent, the rich, and the super-rich…

The actual best part about being super rich, as far as I can tell, is this: You’re more likely to feel like you led a life of meaning. You might not be happy all the time or most of the time, but you will feel like your time on this earth counted for something. One way to distinguish happiness from meaning is that happiness is the day to day bounce of emotions while meaning is what you feel when you step back, take a minute, and reflect on what will go in your obituary. (Here’s my post on meaning vs. happiness.)

How so? The feeling of meaning and making a difference manifests in real, concrete ways. Someone like Meg Whitman can walk the HP campus and see thousands of employees who support their families thanks to employment at HP; she can read stories about the millions of people who use HP products every day to be better at their job. That imbues her life with a sense that her life matters. If you don’t have a corporate campus to walk around—if, for example, you’re an options trader and not a builder of things—fear not. With a supple bank account, you can still take actions that generate meaning. Write big checks to charity and you’ll get thank you notes from the children at the public school you helped. You’ll get enough feel-good ooze from your charitable giving to last you a lifetime. Entrepreneur and billionaire Marc Benioff has said, “Nothing is going to make you feel better. Philanthropy is absolutely the best drug I’ve ever taken.”

I liked this analogy posited by Tim O’Reilly:

…money is like gasoline while driving. You never want to run out, but the point of life is not to go on a tour of gas stations.

The distinction between affluent, rich, and super-rich:

Maybe wealth needs its own Goldilocks porridge story: you want not too much, not too little. And I think that ideal middle ground is the “Rich” category in the hierarchy I opened with. More crudely, this ideal amount of money is termed “fuck-you money.” With fuck-you money, you can’t fly around the world on a private jet (so you’re not as rich as the Super Rich), but you do have the power to say fuck you to essentially anyone or anything that doesn’t interest you (which means you’re richer than the merely affluent).

Put another way, if you work on stuff that doesn’t excite you for more than one day a week, in my estimation you do not have fuck-you money. You’re still working for the man. At the other end of the spectrum, if you find yourself being invited to more than a few charity galas a year, worrying about physical and cyber security at your home, and asking a PR person to review your public statements, you have a lot more than fuck-you money and all the corresponding drawbacks.

Definitely worth reading this thought-provoking post in its entirety.

The 2014 Edge Question: What Scientific Idea is Ready for Retirement?

Every year since 1998, Edge.org editor John Brockman has been posing one thought-provoking question to some of the world’s greatest thinkers across a variety of disciplines, and then assimilating the responses in an annual anthology. Last year, he published This Explains Everything: Deep, Beautiful, and Elegant Theories of How the World Works, a book that collects a number of these responses in a single volume.

For 2014, the annual Edge.org question is: What Scientific Idea is Ready for Retirement? I’ll be reading responses for the next few weeks, but for now I wanted to link to the main page and highlight a few notable ones:

1) Nassim Taleb, one of my all-time favourite thinkers and authors, who argues for retiring standard deviation as a measure:

The notion of standard deviation has confused hordes of scientists; it is time to retire it from common use and replace it with the more effective one of mean deviation. Standard deviation, STD, should be left to mathematicians, physicists and mathematical statisticians deriving limit theorems. There is no scientific reason to use it in statistical investigations in the age of the computer, as it does more harm than good—particularly with the growing class of people in social science mechanistically applying statistical tools to scientific problems.

Say someone just asked you to measure the “average daily variations” for the temperature of your town (or for the stock price of a company, or the blood pressure of your uncle) over the past five days. The five changes are: (-23, 7, -3, 20, -1). How do you do it?

Do you take every observation: square it, average the total, then take the square root? Or do you remove the sign and calculate the average? For there are serious differences between the two methods. The first produces an average of 15.7, the second 10.8. The first is technically called the root mean square deviation. The second is the mean absolute deviation, MAD. It corresponds to “real life” much better than the first—and to reality. In fact, whenever people make decisions after being supplied with the standard deviation number, they act as if it were the expected mean deviation.

It is all due to a historical accident: in 1893, the great Karl Pearson introduced the term “standard deviation” for what had been known as “root mean square error”. The confusion started then: people thought it meant mean deviation. The idea stuck: every time a newspaper has attempted to clarify the concept of market “volatility”, it defined it verbally as mean deviation yet produced the numerical measure of the (higher) standard deviation.

But it is not just journalists who fall for the mistake: I recall seeing official documents from the department of commerce and the Federal Reserve partaking of the conflation, even regulators in statements on market volatility. What is worse, Goldstein and I found that a high number of data scientists (many with PhDs) also get confused in real life.
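To make the two measures concrete, here is a minimal Python sketch of the calculation Taleb describes, applied to the five changes quoted above (my own illustration, not code from the essay):

```python
import math

# Daily changes from the excerpt above (temperature, stock price, etc.)
changes = [-23, 7, -3, 20, -1]

# Root mean square deviation: square each change, average, take the square root.
rms = math.sqrt(sum(x ** 2 for x in changes) / len(changes))

# Mean absolute deviation (MAD): drop the sign, then average.
mad = sum(abs(x) for x in changes) / len(changes)

print(f"root mean square deviation: {rms:.1f}")
print(f"mean absolute deviation:    {mad:.1f}")

# For normally distributed data the mean absolute deviation is only about
# 0.8x the standard deviation, so quoting an STD figure as if it were the
# "average variation" systematically overstates the typical move.
```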

2) Jay Rosen, who argues we should retire the concept of “information overload”:

Here’s the best definition of information that I know of: information is a measure of uncertainty reduced. It’s deceptively simple. In order to have information, you need two things: an uncertainty that matters to us (we’re having a picnic tomorrow, will it rain?) and something that resolves it (weather report.) But some reports create the uncertainty that is later to be solved.

Suppose we learn from news reports that the National Security Agency “broke” encryption on the Internet. That’s information! It reduces uncertainty about how far the U.S. government was willing to go. (All the way.) But the same report increases uncertainty about whether there will continue to be a single Internet, setting us up for more information when that larger picture becomes clearer. So information is a measure of uncertainty reduced, but also of uncertainty created. Which is probably what we mean when we say: “well, that raises more questions than it answers.”
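Rosen’s verbal definition lines up with Shannon’s formal one, where information is the drop in entropy between what you believed before a report and after it. A minimal sketch of that reading, with made-up picnic numbers (my connection, not Rosen’s):

```python
import math

def entropy(probs):
    """Shannon entropy in bits: how much uncertainty a distribution holds."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Before the weather report: rain or no rain, equally likely -- one full bit.
prior = [0.5, 0.5]

# After the report: 95% sure it will stay dry (hypothetical figure).
posterior = [0.95, 0.05]

print(f"uncertainty before: {entropy(prior):.2f} bits")
print(f"uncertainty after:  {entropy(posterior):.2f} bits")
print(f"information gained: {entropy(prior) - entropy(posterior):.2f} bits")
```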

3) Richard Dawkins thinks “essentialism” should be retired:

Essentialism—what I’ve called “the tyranny of the discontinuous mind”—stems from Plato, with his characteristically Greek geometer’s view of things. For Plato, a circle, or a right triangle, were ideal forms, definable mathematically but never realised in practice. A circle drawn in the sand was an imperfect approximation to the ideal Platonic circle hanging in some abstract space. That works for geometric shapes like circles, but essentialism has been applied to living things and Ernst Mayr blamed this for humanity’s late discovery of evolution—as late as the nineteenth century. If, like Aristotle, you treat all flesh-and-blood rabbits as imperfect approximations to an ideal Platonic rabbit, it won’t occur to you that rabbits might have evolved from a non-rabbit ancestor, and might evolve into a non-rabbit descendant. If you think, following the dictionary definition of essentialism, that the essence of rabbitness is “prior to” the existence of rabbits (whatever “prior to” might mean, and that’s a nonsense in itself) evolution is not an idea that will spring readily to your mind, and you may resist when somebody else suggests it.

Paleontologists will argue passionately about whether a particular fossil is, say, Australopithecus or Homo. But any evolutionist knows there must have existed individuals who were exactly intermediate. It’s essentialist folly to insist on the necessity of shoehorning your fossil into one genus or the other. There never was an Australopithecus mother who gave birth to a Homo child, for every child ever born belonged to the same species as its mother. The whole system of labelling species with discontinuous names is geared to a time slice, the present, in which ancestors have been conveniently expunged from our awareness (and “ring species” tactfully ignored). If by some miracle every ancestor were preserved as a fossil, discontinuous naming would be impossible. Creationists are misguidedly fond of citing “gaps” as embarrassing for evolutionists, but gaps are a fortuitous boon for taxonomists who, with good reason, want to give species discrete names. Quarrelling about whether a fossil is “really” Australopithecus or Homo is like quarrelling over whether George should be called “tall”. He’s five foot ten, doesn’t that tell you what you need to know?

4) Kevin Kelly, who argues that “fully random mutations” should be retired from thought (this is something that I’ve known for a while, as I have taken a number of courses in molecular biology):

What is commonly called “random mutation” does not in fact occur in a mathematically random pattern. The process of genetic mutation is extremely complex, with multiple pathways, involving more than one system. Current research suggests most spontaneous mutations occur as errors in the repair process for damaged DNA. Neither the damage nor the errors in repair have been shown to be random in where they occur, how they occur, or when they occur. Rather, the idea that mutations are random is simply a widely held assumption by non-specialists and even many teachers of biology. There is no direct evidence for it.

On the contrary, there’s much evidence that genetic mutations vary in patterns. For instance it is pretty much accepted that mutation rates increase or decrease as stress on the cells increases or decreases. These variable rates of mutation include mutations induced by stress from an organism’s predators and competition, as well as increased mutations brought on by environmental and epigenetic factors. Mutations have also been shown to have a higher chance of occurring near a place in DNA where mutations have already occurred, creating mutation hotspot clusters—a non-random pattern.

5) Ian Bogost, a professor at my alma mater, Georgia Tech, who thinks “science” should be retired:

Beyond encouraging people to see science as the only direction for human knowledge and absconding with the subject of materiality, the rhetoric of science also does a disservice to science itself. It makes science look simple, easy, and fun, when science is mostly complex, difficult, and monotonous.

A case in point: the popular Facebook page “I f*cking love science” posts quick-take variations on the “science of x” theme, mostly images and short descriptions of unfamiliar creatures like the pink fairy armadillo, or illustrated birthday wishes to famous scientists like Stephen Hawking. But as the science fiction writer John Skylar rightly insisted in a fiery takedown of the practice last year, most people don’t f*cking love science, they f*cking love photography—pretty images of fairy armadillos and renowned physicists. The pleasure derived from these pictures obviates the public’s need to understand how science actually gets done—slowly and methodically, with little acknowledgement and modest pay in unseen laboratories and research facilities.

The rhetoric of science has consequences. Things that have no particular relation to scientific practice must increasingly frame their work in scientific terms to earn any attention or support. The sociology of Internet use suddenly transformed into “web science.” Long accepted practices of statistical analysis have become “data science.” Thanks to shifting educational and research funding priorities, anything that can’t claim that it is a member of a STEM (science, technology, engineering, and math) field will be left out in the cold. Unfortunately, the rhetoric of science offers the most tactical response to such new challenges. Unless humanists reframe their work as “literary science,” they risk getting marginalized, defunded and forgotten.

When you’re selling ideas, you have to sell the ideas that will sell. But in a secular age in which the abstraction of “science” risks replacing all other abstractions, a watered-down, bland, homogeneous version of science is all that will remain if the rhetoric of science is allowed to prosper.

We need not choose between God and man, science and philosophy, interpretation and evidence. But ironically, in its quest to prove itself as the supreme form of secular knowledge, science has inadvertently elevated itself into a theology. Science is not a practice so much as it is an ideology. We don’t need to destroy science in order to bring it down to earth. But we do need to bring it down to earth again, and the first step in doing so is to abandon the rhetoric of science that has become its most popular devotional practice.

If you want to get smarter today, go here and spend a few hours reading through the contributions.

On Luck, J.K. Rowling, and the Chamber of Literary Fame

I had a conversation at lunch today with a lady about the role of luck in her career. We both agreed that we shouldn’t underestimate chance encounters and how certain circumstances bring us opportunities. Too often we attribute success to diligence and/or hard work, while we (strongly) discount the role that luck played in our successes.

In this spirit, I thought this was an excellent piece by Duncan J. Watts on the discovery of J.K. Rowling’s pseudonymously published novel, The Cuckoo’s Calling:

In the real world, of course, it’s impossible to travel back in time and start over, so it’s much harder to argue that someone who is incredibly successful may owe their success to a combination of luck and cumulative advantage rather than superior talent. But by writing under the pseudonym of Robert Galbraith, an otherwise anonymous name, Rowling came pretty close to recreating our experiment, starting over again as an unknown author and publishing a book that would have to succeed or fail on its own merits, just as Harry Potter had to 16 years ago — before anyone knew who Rowling was.

Rowling made a bold move and, no doubt, is feeling vindicated by the critical acclaim the book has received.

But there’s a catch: Until the news leaked about the author’s real identity, this critically acclaimed book had sold somewhere between 500 and 1,500 copies, depending on which report you read. As they say in the U.K., that’s rubbish! What’s more, had the author actually been Robert Galbraith, the book would almost certainly have continued to languish in obscurity, probably forever.

“The Cuckoo’s Calling” will now have a happy ending, and its success will only perpetuate the myth that talent is ultimately rewarded with success. What Rowling’s little experiment has actually demonstrated, however, is that quality and success are even more unrelated than we found in our experiment. It might be hard for a book to become a runaway bestseller if it’s unreadably bad (although one might argue that the Twilight series and “Fifty Shades of Grey” challenge this constraint), but it is also clear that being good, or even excellent, isn’t enough. As one of the hapless editors who turned down the Galbraith manuscript put it, “When the book came in, I thought it was perfectly good — it was certainly well-written — but it didn’t stand out.”

I highly recommend reading the entire piece, in which the author discusses a social experiment on the discovery of music by unknown artists.

###

Recommended related reading: Nassim Taleb on the role of luck.

A Reddit “Ask Me Anything” with Nassim Nicholas Taleb

I really enjoyed this Reddit “Ask Me Anything” with Nassim Nicholas Taleb. A few of my favorite Q&A exchanges below:

Q: How should a person use and not use the internet to make his life better?

Taleb: Bring email down to 15 a day. Meet internet friends in person.

His thoughts on cancer:

Q: Can you tell us more about your brush with cancer?

Taleb: I despise (that is have a moral revulsion against) cancer survivors like Armstrong who trade on it (and I got shellacked for saying it before his demise). And I hate the idea of boasting “winning” the war on cancer: radiation rooms are full of people who are “losing” for no fault of theirs.

On the obesity epidemic in the United States:

Q: According to your principles, how would you deal with the obesity epidemic hitting the U.S.?

Taleb: The general problem is that we are not made to control our environment, and we are designed for a degree of variability: in energy, temperature, food composition, sleep duration, exercise (by Jensen’s inequality). Depriving anyone of variations is silly. So we need to force periods of starvation/fasts, sleep deprivation, protein deprivation, etc. Religions force shabbats, fasts, etc. but we are no longer under the sway of religions… The solution is rules…
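The Jensen’s-inequality point is easy to see numerically: if the response to an input is convex, the same average input delivered with variation produces a higher average response than the steady version. A toy sketch (the quadratic “response” curve is a made-up stand-in, not Taleb’s model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical convex response curve: benefit grows faster than linearly
# with the stimulus (illustration only, not a biological model).
def response(x):
    return x ** 2

steady   = np.full(10_000, 10.0)           # the same moderate input every day
variable = rng.uniform(0.0, 20.0, 10_000)  # same average input, but it varies

# Jensen's inequality: for a convex f, E[f(X)] >= f(E[X]), so the variable
# regime yields the higher average response.
print("steady regime:  ", response(steady).mean())
print("variable regime:", response(variable).mean())
```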

On the most important thing in the world:

Q: What is the most important skill or trait a human being can have in the modern world?

Taleb: A sense of honor. It puts you above everything else.

On having “skin in the game:”

Q: Should judges, jurors, and prosecutors have skin in the game?

Taleb: Skin in the game is about being harmed by an error if it harms others. Managers of large corporations can be forced to lose money beyond their compensation should the firm suffer. As to judges, I don’t know, but hopefully they have sufficient eye contact to suffer shame.

A lot more here.

Nassim Taleb’s The Black Swan is one of the best books I’ve ever read and has significantly informed my view of the world.

Nassim Taleb on Big Data

This is a strange article from Nassim Taleb, in which he cautions us about big data:

[B]ig data means anyone can find fake statistical relationships, since the spurious rises to the surface. This is because in large data sets, large deviations are vastly more attributable to variance (or noise) than to information (or signal). It’s a property of sampling: In real life there is no cherry-picking, but on the researcher’s computer, there is. Large deviations are likely to be bogus.

I had to re-read that sentence a few times. It still doesn’t make sense to me when I think of “big data.” As the sample size increases, large variations due to chance actually decrease. This is a good comment on the article that captures my thoughts:

This article is misleading. When the media/public talk about big data, they almost always mean big N data. Taleb is talking about data where P is “big” (i.e., many many columns but relatively few rows, like genetic microarray data where you observe P = millions of genes for about N = 100 people), but he makes it sound like the issues he discusses apply to big N data as well. Big N data has the OPPOSITE properties of big P data—spurious correlations due to random noise are LESS likely with big N. Of course, the more important issue of causation versus correlation is an important problem when analyzing big data, but one that was not discussed in this article.
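The big-N versus big-P distinction is easy to check with a quick simulation on pure noise (a rough sketch of my own, not anything from Taleb’s article):

```python
import numpy as np

rng = np.random.default_rng(42)

def max_spurious_correlation(n_rows, n_cols):
    """Largest |correlation| between a random target and pure-noise columns."""
    target = rng.standard_normal(n_rows)
    noise = rng.standard_normal((n_rows, n_cols))
    return max(abs(np.corrcoef(target, noise[:, j])[0, 1]) for j in range(n_cols))

# "Big P": few observations, many variables -- spurious relationships abound.
print("N=100, P=10,000:", max_spurious_correlation(100, 10_000))

# "Big N": many observations, few variables -- chance correlations shrink.
print("N=100,000, P=10:", max_spurious_correlation(100_000, 10))
```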

So I think Nassim Taleb should offer an explanation of what he means by BIG DATA.

Nassim Nicholas Taleb on the Role of Luck

Nassim Nicholas Taleb has a new short paper titled “Why It is No Longer a Good Idea to Be in The Investment Industry” (PDF link). The concluding argument is:

To conclude, if you are starting a career, move away from investment management and performance related lotteries as you will be competing with a swelling future spurious tail. Pick a less commoditized business or a niche where there is a small number of direct competitors. Or, if you stay in trading, become a market-maker.
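Taleb’s “swelling spurious tail” is straightforward to illustrate with a toy simulation (mine, not the paper’s): give thousands of traders zero true edge and the best track record still looks like skill.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulate skill-free traders: every yearly return is pure noise with a
# 0% true edge and 15% volatility (hypothetical figures for illustration).
n_traders, n_years = 10_000, 10
returns = rng.normal(loc=0.0, scale=0.15, size=(n_traders, n_years))

track_records = returns.mean(axis=1)   # each trader's average annual return

print(f"best 10-year track record: {track_records.max():.1%} per year")
print("true edge of every trader:  0.0% per year")
# The more entrants, the further the lucky tail extends -- records that
# look like talent are produced by the sheer size of the field.
```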

Felix Salmon weighs in and argues the opposite:

The professions you really want to avoid, after reading Taleb’s paper, are not financial but rather creative. Where do you find millions of people all trying to succeed against the odds? Just look at how many bands there are, how many aspiring novelists, how many struggling artists. Nearly all of them think that if they create something great, that will improve their chances of success in their field. But given the sheer number of people they’re competing against, and given the fact that the number of breakout stars in each field is shrinking rather than growing, the fact is that just about everybody with massive success will have got there by sheer luck.

Sometimes, the luck is obvious: EL James, by all accounts, is an absolutely dreadful writer, but has still somehow managed to become a multimillionaire best-selling author. Carly Rae Jepsen has a catchy pop tune, but is only really successful because she happened to be in the right place at the right time. Dan Colen might be a fantastic self-publicist, but not particularly more so than many other, much less successful artists. And so on.

Salmon is strong in his conviction that every successful musician, artist, or novelist became successful mainly because of luck. I don’t agree with that premise entirely: I believe there are things you can do to sway the chances of luck helping you along the way, and hard work, confidence, and talent shouldn’t be discounted.