The Wi-Fi Blocking Wallpaper

Worried about your neighbor stealing your Wi-Fi signal? You could go the encryption route, or you could wait for the Wi-Fi blocking wallpaper to come to market:

French researchers have developed wallpaper that’s designed to trap Wi-Fi signals without interfering with radio or cellphone signals. It uses conductive ink containing silver crystals to block a Wi-Fi router’s operating frequencies: your router should work as expected, but the signal won’t travel beyond the wallpaper’s boundaries. While the wallpaper is currently only a prototype, researchers at the Grenoble Institute of Technology hope to make it commercially available early next year.

No word on how much it will cost, but I am guessing it won’t be cheap. The other downside? For full protection, you’d have to cover your ceiling with it as well.

The Six-Year-Old in the National Spelling Bee

This is a great story about Lori Anne Madison, a six-year-old girl. She’s the youngest person ever to compete in the National Spelling Bee:

She is blonde and adorable and talks at 100 mph. In the last few weeks, she has won major awards in both swimming and math, but one accomplishment above all has made her an overnight national celebrity: This week, the precocious girl from Lake Ridge, Va., will be onstage with youngsters more than twice her age and twice her size as one of 278 spellers who have qualified for the Scripps National Spelling Bee.

Sounds like she’s destined to do great things:

She hit all her milestones early, walking and talking well before others in her playgroup. She was reading before she was 2. She swims four times a week, keeping pace with 10-year-old boys, and wants to be in the Olympics. When her mother tried to enroll her in a private school for the gifted, the headmaster said Lori Anne was just way too smart to accommodate and needed to be home-schooled.

But as far as interviewing goes? Lori had this to say: “I want to go back to being a kid and playing with my friends.”

On Apple’s Exponential Growth

Blogger Horace Dediu was recently asked the following question for MacUser Magazine UK:

The exponential growth of Apple products has to end some time soon doesn’t it? How many high-income buyers for expensive products can there be left for Apple to target?

And this was Horace’s intelligent response:

Trying to calculate the limits to growth is futile. There are limits but they are not calculable and inaccurate estimates don’t offer any useful information.

In 1939 a total of 921 military aircraft were built in the United States. Five years later, in 1944, annual production was 96,318. A question could have been asked by an aviation analyst in 1939 about whether the American aircraft industry could grow. Aircraft, especially multi-engined ones used by the military, were _extremely_ expensive. The reason an industry grew exponentially making extraordinarily expensive products was not because of organic demand but because the primary buyers engaged in a cataclysmic global war. In other words, there was a will to buy and hence there came a way to build. Therefore the answer to the question of sustainable growth comes not from an analysis of demand but from an analysis of the consequences of not growing. Not growing would have meant the end of many nation states.

Your question was framed by an implied market categorization: that buyers are either high-income or, presumably, low-income. This is a false dichotomy. Buyers are either needful of a job to be done or not. If the job is important enough, money will be found to hire a product. Sellers of products will also find ways to meet the demand through lower prices and increased capacity. Every product Apple makes used to be out of the reach of all consumers. Whether computers (portable or not), music players and professional grade software, voice recognizing personal assistant cellular phones and tablets are all luxuries or necessities is only a question of timing.
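
The aircraft figures Dediu cites make the point vividly. As a quick back-of-the-envelope check (a Python sketch using only the two production numbers from the quote):

```python
# US military aircraft production, per the figures cited above:
# 921 built in 1939, 96,318 built in 1944.
start, end, years = 921, 96_318, 5

# Compound annual growth rate over the five-year span
cagr = (end / start) ** (1 / years) - 1
print(f"Total growth: {end / start:.0f}x")
print(f"Compound annual growth rate: {cagr:.0%}")  # about 153% per year
```

That works out to roughly a 105x increase in five years, sustained at about 153 percent growth per year: the kind of rate no demand-side analysis in 1939 would have predicted.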

Highly recommend reading the full interview here.

The End of Moore’s Law

Theoretical physicist Michio Kaku argues that the end of Moore’s Law is coming sooner rather than later:

Years ago, we physicists predicted the end of Moore’s Law, which says that computer power doubles every 18 months. But we also, on the other hand, proposed a positive program. Perhaps molecular computers, quantum computers can take over when silicon power is exhausted. But then the question is, what’s the timeframe? What is a realistic scenario for the coming years?

Well, first of all, in about ten years or so, we will see the collapse of Moore’s Law. In fact, already we see a slowing down of Moore’s Law. Computer power simply cannot maintain its rapid exponential rise using standard silicon technology. Intel Corporation has admitted this. In fact, Intel Corporation is now going to three-dimensional chips, chips that compute not just flatly in two dimensions but in the third dimension. But there are problems with that. The two basic problems are heat and leakage. That’s the reason why the age of silicon will eventually come to a close. No one knows when, but as I mentioned we already now can see the slowing down of Moore’s Law, and in ten years it could flatten out completely. So what is the problem? The problem is that a Pentium chip today has a layer almost down to 20 atoms across. When that layer gets down to about 5 atoms across, it’s all over. You have two effects. Heat–the heat generated will be so intense that the chip will melt. You can literally fry an egg on top of the chip, and the chip itself begins to disintegrate. And second of all, leakage–you don’t know where the electron is anymore. The quantum theory takes over. The Heisenberg Uncertainty Principle says you don’t know where that electron is anymore, meaning it could be outside the wire, outside the Pentium chip, or inside the Pentium chip. So there is an ultimate limit set by the laws of thermodynamics and set by the laws of quantum mechanics as to how much computing power you can do with silicon.
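
Kaku’s numbers are easy to sanity-check with a little arithmetic. The sketch below treats each process shrink as a straight halving of the layer width, which is a simplification, but it shows why a 20-atom layer sits uncomfortably close to the 5-atom limit, and how much growth ten more years of 18-month doublings would imply:

```python
# Moore's Law as Kaku states it: computer power doubles every 18 months.
doubling_period_months = 18
years = 10

doublings = years * 12 / doubling_period_months  # about 6.7 doublings
power_multiple = 2 ** doublings
print(f"{doublings:.1f} doublings -> about {power_multiple:.0f}x the computing power")

# Feature-size side of the same trend (simplified: one halving per shrink):
# a layer ~20 atoms across reaches the ~5-atom limit very quickly.
width = 20
halvings = 0
while width > 5:
    width /= 2
    halvings += 1
print(f"{halvings} halvings from 20 atoms to the ~5-atom limit")  # prints 2
```

Ten years of doubling would mean roughly a hundredfold increase in computing power, yet only two straight halvings separate today’s layer from the quantum limit Kaku describes. That asymmetry is the whole argument in miniature.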

You can watch the video here.

Soccer: World’s Most Corrupt Game

A very good ESPN Magazine piece on the world’s most corrupt game, football (or soccer):

Here’s a mere sampling of events since the beginning of last year: Operation Last Bet rocked the Italian Football Federation, with 22 clubs and 52 players awaiting trial for fixing matches; the Zimbabwe Football Association banned 80 players from its national-team selection due to similar accusations; Lu Jun, the first Chinese referee of a World Cup match, was sentenced to five and a half years in prison for taking more than $128,000 in bribes to fix outcomes in the Chinese Super League; prosecutors charged 57 people with match fixing in the South Korean K-League, four of whom later died in suspected suicides; the team director of second-division Hungarian club REAC Budapest jumped off a building after six of his players were arrested for fixing games; and in an under-21 friendly, Turkmenistan reportedly beat Maldives 3-2 in a “ghost match” — neither country knew about the contest because it never actually happened, yet bookmakers still took action and fixers still profited.

Soccer match fixing has become a massive worldwide crime, on par with drug trafficking, prostitution and the trade in illegal weapons. As in those criminal enterprises, the match-fixing industry has been driven by opportunistic greed. According to Interpol figures, sports betting has ballooned into a $1 trillion industry, 70 percent of which is gambled on soccer. 

A lot more facts and figures here.

Facebook’s Business Model

Before you go out and buy that Facebook stock when it IPOs today, consider the warnings. There are plenty of opinions out there. But the best consideration of the whole matter I have read in the last two weeks comes courtesy of Chris Dixon, who considers Facebook’s business model. Namely: display ads. Display ads generally hurt the user experience, and are also not very efficient at producing revenues. The crux of the matter:

The key question when trying to value Facebook’s stock is: can they find another business model that generates significantly more revenue per user without hurting the user experience? (And can they do that in an increasingly mobile world where display ads have been even less effective?) Perhaps that business model is sponsored feed entries, as Facebook seems to be hoping (along with Twitter and perhaps Tumblr). The jury is still out on that model. Personally, I have trouble seeing how insertions into the feeds aren’t just more prominent display ads. You still have to stoke demand and convert people from non-purchasing to purchasing intents. A more likely outcome is that Facebook uses their assets – a vast number of extremely engaged users, its social graph, Facebook Connect – to monetize through another business model. If they do that, the company is probably worth a lot more than the expected $100B IPO valuation. If they don’t, it’s probably worth a lot less.

Chris’s short post is worth reading in its entirety.

The Decline of the Public Company

As Facebook debuts its IPO today, The Economist offers a good reminder about the decline of the public company:

The number of public companies has fallen dramatically over the past decade—by 38% in America since 1997 and 48% in Britain. The number of initial public offerings (IPOs) in America has declined from an average of 311 a year in 1980-2000 to 99 a year in 2001-11. Small companies—those with annual sales of less than $50m before their IPOs—have been hardest hit. In 1980-2000 an average of 165 small companies undertook IPOs in America each year. In 2001-09 that number fell to 30. Facebook will probably give the IPO market a temporary boost—several other companies are queuing up to follow its lead—but they will do little to offset the long-term decline.

So why is Facebook going public, anyway? It’s not like it needs to raise the cash.

Mark Zuckerberg has resisted going public for as long as he could, not least because so many heads of listed companies advised him to. He is taking the plunge only because American law requires any firm with more than a certain number of shareholders to publish quarterly accounts just as if it were listed.


The Origin of Food Criticism

On May 18, 1962, Craig Claiborne prefaced an article he wrote with a short note: “The following is a listing of New York restaurants that are recommended on the basis of varying merits. Such a listing will be published every Friday in The New York Times.” And so, on that day, the food critic was born (or at least, the contemporary version of it). This New York Times article tells the story: Claiborne’s Directory to Dining, which marks its 50th anniversary this month, is the column that got the country paying attention to restaurant reviews in the newspaper.

The column’s most easily recognized field mark, the starred ranking, made its debut on May 24, 1963, with a three-star scale. A fourth star, still the newspaper’s top grade, was placed on the top of the tree a year later. The arguments about what it all means have been going on ever since.

Claiborne’s dedication to his job, I would argue, is unrivaled to this day:

Most influential of all were the rules Claiborne set for himself, which became the industry ideal. He was independent of advertising, tried to dine anonymously, and before passing judgment would eat at least two meals (later three) that were paid for by The Times, not the restaurants. Claiborne’s guidelines sent a message that he wasn’t an overprivileged and overfed man about town. He was a critic with a job to do.

Definitely some great trivia for all the foodies out there.

Aaron Sorkin’s Commencement Speech at Syracuse University

Aaron Sorkin gave the commencement speech to the 2012 graduates of Syracuse University. It is excellent:

I’d like to say to the parents that I realized something while I was writing this speech: the last teacher your kids will have in college will be me.  And that thought scared the hell out of me. Frankly, you should feel exactly the same way.  But I am the father of an 11-year-old daughter, so I do know how proud you are today, how proud your daughters and your sons make you every day, and that they did just learn how to walk last week, that you’ll never not be there for them, that you love them more than they’ll ever know and that it doesn’t matter how many degrees get put in their hand, they will always be dumber than you are.

And make no mistake about it, you are dumb. You’re a group of incredibly well-educated dumb people. I was there. We all were there. You’re barely functional. There are some screw-ups headed your way. I wish I could tell you that there was a trick to avoiding the screw-ups, but the screw-ups, they’re a-coming for ya. It’s a combination of life being unpredictable, and you being super dumb.

An example of how a failure in college served as motivation for Sorkin:

As a freshman drama student—and this story is now becoming famous—I had a play analysis class—it was part of my requirement…The play analysis class met for 90 minutes twice a week. We read two plays a week and we took a 20-question true or false quiz at the beginning of the session that tested little more than whether or not we’d read the play. The problem was that the class was at 8:30 in the morning, it met all the way down on East Genesee, I lived all the way up at Brewster/Boland, and I don’t know if you’ve noticed, but from time to time the city of Syracuse experiences inclement weather. All this going to class and reading and walking through snow, wind chill that’s apparently powered by jet engines, was having a negative effect on my social life in general and my sleeping in particular. At one point, being quizzed on “Death of a Salesman,” a play I had not read, I gave an answer that indicated that I wasn’t aware that at the end of the play the salesman dies. And I failed the class. I had to repeat it my sophomore year; it was depressing, frustrating and deeply embarrassing. And it was without a doubt the single most significant event that occurred in my evolution as a writer. I showed up my sophomore year and I went to class, and I paid attention, and we read plays and I paid attention, and we discussed structure and tempo and intention and obstacle, possible improbabilities, improbable impossibilities, and I paid attention, and by God when I got my grades at the end of the year, I’d turned that F into a D. I’m joking: it was pass/fail.

And I think this is the best part of the speech:

Don’t ever forget that you’re a citizen of this world, and there are things you can do to lift the human spirit, things that are easy, things that are free, things that you can do every day. Civility, respect, kindness, character. You’re too good for schadenfreude, you’re too good for gossip and snark, you’re too good for intolerance—and since you’re walking into the middle of a presidential election, it’s worth mentioning that you’re too good to think people who disagree with you are your enemy.


Should You Learn to Code?

There’s a meme going around this year about learning to code (promoted by sites like this). Jeff Atwood is against the idea, and he has a great post at Coding Horror where he elaborates:

To those who argue programming is an essential skill we should be teaching our children, right up there with reading, writing, and arithmetic: can you explain to me how Michael Bloomberg would be better at his day to day job of leading the largest city in the USA if he woke up one morning as a crack Java coder? It is obvious to me how being a skilled reader, a skilled writer, and at least high school level math are fundamental to performing the job of a politician. Or at any job, for that matter. But understanding variables and functions, pointers and recursion? I can’t see it.

Look, I love programming. I also believe programming is important … in the right context, for some people. But so are a lot of skills. I would no more urge everyone to learn programming than I would urge everyone to learn plumbing. That’d be ridiculous, right?

He continues with an excellent bullet list:

The “everyone should learn to code” movement isn’t just wrong because it falsely equates coding with essential life skills like reading, writing, and math. I wish. It is wrong in so many other ways.

  • It assumes that more code in the world is an inherently desirable thing. In my thirty year career as a programmer, I have found this … not to be the case. Should you learn to write code? No, I can’t get behind that. You should be learning to write as little code as possible. Ideally none.
  • It assumes that coding is the goal. Software developers tend to be software addicts who think their job is to write code. But it’s not. Their job is to solve problems. Don’t celebrate the creation of code, celebrate the creation of solutions. We have way too many coders addicted to doing just one more line of code already.

What are your thoughts on programming? Have you ever taken the initiative to learn to code?