Caroline Leavitt writes a beautiful Modern Love story about her beloved pet tortoise, Minnie, and how her adult relationships were cemented through Minnie. It is a story of finding happiness through uncommon love:
Because Minnie was so important to me, I began to measure my dates by how they treated him. If dates gave Minnie the stink eye, that was that. If they expressed interest or wanted to hold him, it made me warm to them. But sooner or later a date would ask, “Do we have to eat with the tortoise on the table?” or “This is a pet?” and my heart would shutter.
When I met Jeff, a smart, funny journalist who took me to a toy store for our first date, I was anxious about how much I liked him. I invited him to dinner, which I admit was more a dare than a meal. Minnie was on the table in a glass tank with us.
We were having spaghetti. Minnie was having live worms.
Jeff cautiously sat down. He looked from me to the tortoise tank and didn’t say a word. When Minnie lunged for a worm, Jeff flinched. But he didn’t get up and leave, and at the end of the evening, he asked for another date. He didn’t object weeks later when I told him I wanted us to take Minnie to Central Park, and he came with a picnic basket and a little wrapped gift. I opened it and inside was a little red rubber squid toy.
Read it for the ending. So touching.
Dustin Curtis begins his latest blog post with a question:
A question that inevitably comes up very early in the process of designing a new app is this: should the interface refer to the user as “your” or “my” when talking about the user’s stuff, as in “my profile” or “your settings”? For a long time, this question ate at my soul. Which is right?
It’s not something I thought about until reading his entry. I like his conclusion:
If we think about interfaces as literal “interfaces” to tasks (like how people are interfaces to their ideas), instead of as tools themselves, it makes sense for the interface to take on a personality, and to become a “you” to the user. Thus, it would make sense for the interface to refer to a user’s stuff as “your stuff,” because the interface is just a medium between the user and what she wants to accomplish or find. In a way, the interface takes on a social characteristic, and becomes a humanoid assistant by utilizing existing functions of the human brain’s social systems.
After thinking about this stuff for a very long time, I’ve settled pretty firmly in the camp of thinking that interfaces should mimic social creatures, that they should have personalities, and that I should be communicating with the interface rather than the interface being an extension of myself. Tools have almost always been physical objects that are manipulated tactually. Interfaces are much more abstract, and much more intelligent; they far more closely resemble social interactions than physical tools.
The answer for me, then, is that you’re having a conversation with the interface. It’s “Your stuff.”
Bill Gates took to Reddit this afternoon to do an “Ask Me Anything.” Here is a selection of my favorite questions and answers.
What do you give a man who can buy almost everything?
Q: What do people give you for your birthday, given that you can buy anything you want?
A: Free software. Just kidding.
Q: Windows 7 or Windows 8? Be honest Bill.
A: Higher is better.
And one more:
Q: Since becoming wealthy, what’s the cheapest thing that gives you the most pleasure?
A: Kids. Cheap cheeseburgers. Open Course Ware courses…
Cheap kids? Where is he acquiring them from? Bill’s answer is hilarious.
Bill Gates does a great job reviewing books on his Web site. Here are his favorite books from 2012, which I recommend perusing.
This is a strange article from Nassim Taleb, in which he cautions us about big data:
[B]ig data means anyone can find fake statistical relationships, since the spurious rises to the surface. This is because in large data sets, large deviations are vastly more attributable to variance (or noise) than to information (or signal). It’s a property of sampling: In real life there is no cherry-picking, but on the researcher’s computer, there is. Large deviations are likely to be bogus.
I had to re-read that sentence a few times. It still doesn’t make sense to me when I think of “big data”: as the sample size increases, large deviations due to chance actually become less likely. Here is a good comment on the article that captures my thoughts:
This article is misleading. When the media/public talk about big data, they almost always mean big N data. Taleb is talking about data where P is “big” (i.e., many many columns but relative few rows, like genetic microarray data where you observe P = millions of genes for about N = 100 people), but he makes it sound like the issues he discuss apply to big N data as well. Big N data has the OPPOSITE properties of big P data—spurious correlations due to random noise are LESS likely with big N. Of course, the more important issue of causation versus correlation is an important problem when analyzing big data, but one that was not discussed in this article.
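The commenter’s distinction between big N and big P is easy to demonstrate with a quick simulation (a sketch, not from either article): correlate a pure-noise outcome against pure-noise predictors and look at the largest correlation that turns up by chance.

```python
import numpy as np

rng = np.random.default_rng(0)

def max_abs_corr(n, p):
    """Largest |Pearson correlation| between a noise outcome and p noise columns."""
    y = rng.standard_normal(n)
    X = rng.standard_normal((n, p))
    yc = (y - y.mean()) / y.std()
    Xc = (X - X.mean(axis=0)) / X.std(axis=0)
    corrs = yc @ Xc / n          # correlation of y with each column
    return np.abs(corrs).max()

# Big P, small N: with 10,000 candidate variables and only 100 rows,
# some "relationship" always looks strong -- Taleb's spurious cherry.
print(max_abs_corr(n=100, p=10_000))     # typically ~0.4

# Big N, small P: with 100,000 rows and 10 variables,
# chance correlations shrink toward zero.
print(max_abs_corr(n=100_000, p=10))     # typically ~0.01
```

The first number would clear many naive significance thresholds despite being pure noise; the second would not, which is the commenter’s point that big N has the opposite property of big P.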
So I think Nassim Taleb should explain what he means by BIG DATA.
Esquire Magazine details how the man who shot Osama bin Laden is left with no pension and no health insurance. The Shooter, as he is described in the piece, is struggling:
But the Shooter will discover soon enough that when he leaves after sixteen years in the Navy, his body filled with scar tissue, arthritis, tendonitis, eye damage, and blown disks, here is what he gets from his employer and a grateful nation:
Nothing. No pension, no health care, and no protection for himself or his family.
Since Abbottabad, he has trained his children to hide in their bathtub at the first sign of a problem as the safest, most fortified place in their house. His wife is familiar enough with the shotgun on their armoire to use it. She knows to sit on the bed, the weapon’s butt braced against the wall, and precisely what angle to shoot out through the bedroom door, if necessary. A knife is also on the dresser should she need a backup.
Then there is the “bolt” bag of clothes, food, and other provisions for the family meant to last them two weeks in hiding.
“Personally,” his wife told me recently, “I feel more threatened by a potential retaliatory terror attack on our community than I did eight years ago,” when her husband joined ST6.
The text accompanying the headline: “A startling failure of the United States government to help its most experienced and skilled warriors carry on with their lives.” Depressing.
During my last visit to New York City, I avoided going to the “Top of the Rock” observation deck of the GE Building in favor of this view. In the process, I saved $25 and hours waiting in line.
The Economist published an interesting chart comparing the price of admission to the height of public viewing platforms at the most popular destinations around the world. Topping the list is the new building in London dubbed “The Shard”:
THE SHARD, the latest big skyscraper to pierce London’s skyline and the tallest building in Europe, recently opened for business—and to the general public. Some visitors have marvelled at the view from the top. Others have complained at the hefty entrance fee of £29.95 ($47) for an adult paying on the door. At a mere 244m (800 feet) high, the Shard is poor value for money when measured against its height.
The Empire State Building ranks third on this list. I think they are using the $42 adult admission price that includes both the 86th- and 102nd-floor viewings. Using the top deck height of 1,250 ft (381.0 m), the price per meter of viewing height comes to 11.02 cents.
Missing from that chart is the price/height figure for “Top of the Rock,” which I calculate to be 9.65 cents per meter (850 ft = 259.1 m at a $25 admission price). That would make “Top of the Rock” the sixth most expensive observation deck on the list, which isn’t too bad.
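The arithmetic above is simple enough to check in a few lines (prices and heights are the ones quoted in this post and The Economist’s piece):

```python
# Admission price per metre of viewing height, using the figures in the post.
decks = {
    "The Shard":       (47.00, 244.0),   # $47 adult on the door, 244 m
    "Empire State":    (42.00, 381.0),   # $42 (86th + 102nd floors), 1,250 ft
    "Top of the Rock": (25.00, 259.1),   # $25, 850 ft
}

for name, (price_usd, height_m) in decks.items():
    cents_per_m = 100 * price_usd / height_m
    print(f"{name}: {cents_per_m:.2f} cents per metre")
```

This reproduces the 11.02 cents/m for the Empire State Building and 9.65 cents/m for “Top of the Rock,” and shows just how far out of line the Shard’s roughly 19 cents/m is.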
What other observation towers are you familiar with that The Economist didn’t incorporate on their chart?
Question: When, if ever, will the bandwidth of the Internet surpass that of FedEx?
That’s the question that Randall Munroe tackles in his latest “what-if” blog post. His conclusion? 2040. That answer hinges on a huge assumption: that Internet transfer rates grow much faster than the storage densities of hard drives, SD cards, and the like:
Those thumbnail-sized flakes have a storage density of up to 160 terabytes per kilogram, which means a FedEx fleet loaded with MicroSD cards could transfer about 177 petabits per second, or two zettabytes per day—a thousand times the internet’s current traffic level. (The infrastructure would be interesting—Google would need to build huge warehouses to hold a massive card-processing operation.)
Cisco estimates internet traffic is growing at about 29% annually. At that rate, we’d hit the FedEx point in 2040. Of course, the amount of data we can fit on a drive will have gone up by then, too. The only way to actually reach the FedEx point is if transfer rates grow much faster than storage rates. In an intuitive sense, this seems unlikely, since storage and transfer are fundamentally linked—all that data is coming from somewhere and going somewhere—but there’s no way to predict usage patterns for sure.
While FedEx is big enough to keep up with the next few decades of actual usage, there’s no technological reason we can’t build a connection that beats them on bandwidth. There are experimental fiber clusters that can handle over a petabit per second. A cluster of 200 of those would beat FedEx.
If you recruited the entire US freight industry to move SD cards for you, the throughput would be on the order of 500 exabits—half a zettabit—per second. To match that transfer rate digitally, you’d need to take half a million of those petabit cables.
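The 2040 figure falls out of straightforward compound growth: the quote puts the FedEx fleet at a thousand times the Internet’s current traffic, and traffic grows about 29% a year. A quick back-of-the-envelope check (assuming the “current” baseline is roughly 2013, when the what-if was written):

```python
import math

fedex_pbps = 177.0                 # petabits/s, from Munroe's estimate
traffic_pbps = fedex_pbps / 1000   # "a thousand times the internet's current traffic"
growth = 1.29                      # Cisco's ~29% annual traffic growth

# Years until traffic catches the FedEx fleet, at constant growth.
years = math.log(fedex_pbps / traffic_pbps) / math.log(growth)
print(f"~{years:.0f} years")       # about 27 years: roughly 2013 + 27 = 2040
```

That matches Munroe’s date, with the caveat he states himself: it only holds if storage density doesn’t keep pace, which it almost certainly will.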