Researchers Find a Way to Unboil an Egg

I considered this one of the fundamental truths while growing up and taking a number of courses in basic biology (and later, microbiology and chemistry): once you boil an egg, there is no way to unboil that egg. Proteins denature once subjected to heat, and they do not refold back into their original shape or structure.

It turns out, based on recent research, that this may not necessarily be the case. According to findings published in the journal ChemBioChem:

University of California Irvine and Australian chemists have figured out how to unboil egg whites – an innovation that could dramatically reduce costs for cancer treatments, food production and other segments of the $160 billion global biotechnology industry.

To re-create a clear protein known as lysozyme once an egg has been boiled, the researchers add a urea substance that chews away at the whites, liquefying the solid material. That’s half the process; at the molecular level, protein bits are still balled up into unusable masses. The scientists then employ a vortex fluid device, a high-powered machine designed in Professor Colin Raston’s laboratory at South Australia’s Flinders University. Shear stress within thin, microfluidic films is applied to those tiny pieces, forcing them back into untangled, proper form.

The paper is titled “Shear-Stress-Mediated Refolding of Proteins from Aggregates and Inclusion Bodies.” Here is the abstract:

Recombinant protein overexpression of large proteins in bacteria often results in insoluble and misfolded proteins directed to inclusion bodies. We report the application of shear stress in micrometer-wide, thin fluid films to refold boiled hen egg white lysozyme, recombinant hen egg white lysozyme, and recombinant caveolin-1. Furthermore, the approach allowed refolding of a much larger protein, cAMP-dependent protein kinase A (PKA). The reported methods require only minutes, which is more than 100 times faster than conventional overnight dialysis. This rapid refolding technique could significantly shorten times, lower costs, and reduce waste streams associated with protein expression for a wide range of industrial and research applications.

Obviously, this is tremendous news, and I expect to see other labs trying to replicate the study.

IBM’s SyNAPSE Chip Moves Closer to Brain-Like Computing

This week, scientists at IBM Research unveiled a brain-inspired computer chip and ecosystem. From their press release on the so-called SyNAPSE chip:

Scientists from IBM unveiled the first neurosynaptic computer chip to achieve an unprecedented scale of one million programmable neurons, 256 million programmable synapses and 46 billion synaptic operations per second per watt. At 5.4 billion transistors, this fully functional and production-scale chip is currently one of the largest CMOS chips ever built, yet, while running at biological real time, it consumes a minuscule 70mW—orders of magnitude less power than a modern microprocessor.
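Those figures imply a total throughput worth making explicit. Here is the quick arithmetic, using only the numbers quoted above:

```python
# Back-of-the-envelope using only the figures from IBM's press release:
# 46 billion synaptic operations per second per watt, at 70 mW.
sops_per_watt = 46e9   # synaptic operations / second / watt
power_watts = 0.070    # quoted power draw: 70 mW

total_sops = sops_per_watt * power_watts
print(f"~{total_sops:.2e} synaptic operations per second")  # ~3.22e+09
```

In other words, roughly three billion synaptic events per second on a 70 mW power budget.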

MIT Technology Review has a good summary as well:

IBM’s SyNapse chip processes information using a network of just over one million “neurons,” which communicate with one another using electrical spikes—as actual neurons do. The chip uses the same basic components as today’s commercial chips—silicon transistors. But its transistors are configured to mimic the behavior of both neurons and the connections—synapses—between them.

The SyNapse chip breaks with a design known as the Von Neumann architecture that has underpinned computer chips for decades. Although researchers have been experimenting with chips modeled on brains—known as neuromorphic chips—since the late 1980s, until now all have been many times less complex, and not powerful enough to be practical (see “Thinking in Silicon”). Details of the chip were published today in the journal Science.

The new chip is not yet a product, but it is powerful enough to work on real-world problems. In a demonstration at IBM’s Almaden research center, MIT Technology Review saw one recognize cars, people, and bicycles in video of a road intersection. A nearby laptop that had been programmed to do the same task processed the footage 100 times slower than real time, and it consumed 100,000 times as much power as the IBM chip. IBM researchers are now experimenting with connecting multiple SyNapse chips together, and they hope to build a supercomputer using thousands.

I think this kind of experimentation is fascinating. You can read more at Science Magazine (subscription required to view full text).
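For intuition about what it means to configure transistors to “mimic the behavior of neurons,” here is a minimal sketch of a leaky integrate-and-fire neuron, the classic textbook spiking model. To be clear, this illustrates the general spike-on-threshold behavior the articles describe, not IBM’s actual neuron circuit:

```python
# Toy leaky integrate-and-fire neuron: an illustration of spiking
# behavior in general, NOT IBM's actual SyNAPSE neuron model.
def simulate_lif(input_current, leak=0.9, threshold=1.0):
    """Integrate input each timestep, leak charge, spike on threshold."""
    potential = 0.0
    spike_times = []
    for t, current in enumerate(input_current):
        potential = potential * leak + current  # leak, then integrate
        if potential >= threshold:
            spike_times.append(t)  # emit a spike...
            potential = 0.0        # ...and reset the membrane potential
    return spike_times

# A steady weak input accumulates until the neuron fires periodically.
print(simulate_lif([0.3] * 20))  # -> [3, 7, 11, 15, 19]
```

The appeal of this event-driven style is that a neuron does work (and burns power) only when spikes arrive, which is part of why the chip’s power draw is so low.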


Implanting False Memories in the Mouse Brain

A fascinating new paper coming out of MIT details how researchers were able to implant false memories in mice. From the abstract:

Memories can be unreliable. We created a false memory in mice by optogenetically manipulating memory engram–bearing cells in the hippocampus. Dentate gyrus (DG) or CA1 neurons activated by exposure to a particular context were labeled with channelrhodopsin-2. These neurons were later optically reactivated during fear conditioning in a different context. The DG experimental group showed increased freezing in the original context, in which a foot shock was never delivered. The recall of this false memory was context-specific, activated similar downstream regions engaged during natural fear memory recall, and was also capable of driving an active fear response. Our data demonstrate that it is possible to generate an internally represented and behaviorally expressed fear memory via artificial means.

In their research, Susumu Tonegawa and his team used a technique known as optogenetics, which allows fine control of individual brain cells. They engineered cells in the mouse hippocampus, a part of the brain known to be involved in forming memories, to express the gene for a protein called channelrhodopsin. When cells that contain channelrhodopsin are exposed to blue light, they become activated. The researchers also modified the hippocampal cells so that channelrhodopsin would be produced in whichever brain cells the mouse was using to encode its memory engrams.

The Guardian summarizes:

In the experiment, Tonegawa’s team placed the mice in a chamber and allowed them to explore it. As they did so, relevant memory-encoding brain cells were producing the channelrhodopsin protein. The next day, the same mice were placed in a second chamber and given a small electric shock, to encode a fear response. At the same time, the researchers shone light into the mouse brains to activate their memories of the first chamber. That way, the mice learned to associate fear of the electric shock with the memory of the first chamber.

In the final part of the experiment, the team placed the mice back in the first chamber. The mice froze, demonstrating a typical fear response, even though they had never been shocked while there. “We call this ‘incepting’ or implanting false memories in a mouse brain,” Tonegawa told Science.

Why is this fascinating? Because a similar process may occur when powerful false memories are created in humans, even if the process is much more complicated in the human brain.


We Are All Addicted to the Internet

Jared B. Keller summarizes some research on Internet addiction:

The cognitive-reward structure offered by services like email and social media are similar to those of a casino slot machine: “Most of it is junk, but every so often, you hit the jackpot.” This is a symptom of low-risk/high-reward activities like lotteries in general. As researchers found in a 2001 article in International Gambling Studies, systems that offer a low-cost chance of winning a very large prize are more likely to attract repetitive participation and, in turn, stimulate excessive (and potentially problematic) play. Although the stimuli are different (the payoff on the Internet being juicy morsels of information and entertainment rather than money), Stafford says that the immediacy and ubiquity of Internet “play”—i.e. being able to check your tweets or emails on your phone with no major transaction cost—only increases the likelihood that someone will get sucked into a continuous cycle.

If you answer yes to five or more of the questions below, you may be addicted to the Internet:

01. Do you feel preoccupied with the Internet (think about previous online activity or anticipate next online session)?

02. Do you feel the need to use the Internet with increasing amounts of time in order to achieve satisfaction?

03. Have you repeatedly made unsuccessful efforts to control, cut back, or stop Internet use?

04. Do you feel restless, moody, depressed, or irritable when attempting to cut down or stop Internet use?

05. Do you stay online longer than originally intended?

06. Have you jeopardized or risked the loss of a significant relationship, job, educational, or career opportunity because of the Internet?

07. Have you lied to family members, therapists, or others to conceal the extent of involvement with the Internet?

08. Do you use the Internet as a way of escaping from problems or of relieving a dysphoric mood (e.g., feelings of helplessness, guilt, anxiety, depression)?

Oh come on, #5? I am sure that has happened to everyone. Every day.
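In fairness, the scoring rule behind the questionnaire is as simple as it sounds: count the yes answers and compare against five. A toy sketch (the sample answers here are invented for illustration):

```python
# Toy scoring for the eight screening questions above. The suggested
# cutoff is five or more "yes" answers; these sample answers are made up.
answers = [True, False, True, True, False, True, True, False]  # hypothetical

yes_count = sum(answers)  # True counts as 1
if yes_count >= 5:
    print(f"{yes_count}/8 yes answers: meets the screening threshold")
else:
    print(f"{yes_count}/8 yes answers: below the screening threshold")
```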

###

(hat tip: Andrew Sullivan)

The Girl Who Doesn’t Feel Pain

Justin Heckert, writing for The New York Times Magazine, spent some time with Ashlyn Blocker and her parents, Tara and John. Ashlyn suffers from a rare condition called congenital insensitivity to pain:

Tara and John weren’t completely comfortable leaving Ashlyn alone in the kitchen, but it was something they felt they had to do, a concession to her growing independence. They made a point of telling stories about how responsible she is, but every one came with a companion anecdote that was painful to hear. There was the time she burned the flesh off the palms of her hands when she was 2. John was using a pressure-washer in the driveway and left its motor running; in the moments that they took their eyes off her, Ashlyn walked over and put her hands on the muffler. When she lifted them up, the skin was seared away. There was the one about the fire ants that swarmed her in the backyard, biting her over a hundred times while she looked at them and yelled: “Bugs! Bugs!” There was the time she broke her ankle and ran around on it for two days before her parents realized something was wrong…

The article goes a bit into the genetic reason for Ashlyn’s insensitivity to pain, namely a mutated SCN9A gene. Interestingly, SCN9A.com lists an older NYT article about the gene.

Is Our Universe a Giant Simulation?

The physics paper of the week is “Constraints on the Universe as a Numerical Simulation” by Silas Beane and colleagues at the University of Bonn in Germany. Their fundamental question: is the universe just a giant simulation, in which we are all puppets? From their abstract:

Observable consequences of the hypothesis that the observed universe is a numerical simulation performed on a cubic space-time lattice or grid are explored. The simulation scenario is first motivated by extrapolating current trends in computational resource requirements for lattice QCD into the future. Using the historical development of lattice gauge theory technology as a guide, we assume that our universe is an early numerical simulation with unimproved Wilson fermion discretization and investigate potentially-observable consequences. Among the observables that are considered are the muon g-2 and the current differences between determinations of alpha, but the most stringent bound on the inverse lattice spacing of the universe, b^(-1) >~ 10^(11) GeV, is derived from the high-energy cut off of the cosmic ray spectrum. The numerical simulation scenario could reveal itself in the distributions of the highest energy cosmic rays exhibiting a degree of rotational symmetry breaking that reflects the structure of the underlying lattice.

Could you fathom the possibility that our entire cosmos is running on a vastly powerful computer? I cannot.
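Setting metaphysics aside, the paper’s bound is easy to put into everyday units. Using the standard conversion ħc ≈ 0.1973 GeV·fm, an inverse lattice spacing of about 10^11 GeV corresponds to a lattice spacing on the order of 10^-27 meters. A quick check of that arithmetic:

```python
# Convert the paper's bound on the inverse lattice spacing,
# b^(-1) >~ 10^11 GeV, into meters via hbar*c ≈ 0.1973 GeV*fm.
hbar_c_gev_fm = 0.1973      # GeV * fm (standard value of hbar*c)
inverse_spacing_gev = 1e11  # the paper's bound on b^(-1)

b_fm = hbar_c_gev_fm / inverse_spacing_gev  # lattice spacing in fm
b_m = b_fm * 1e-15                          # 1 fm = 1e-15 m
print(f"b <~ {b_m:.2e} meters")             # ~1.97e-27 m
```

That is far below any distance scale experiments can probe directly, which is why the paper looks to the highest-energy cosmic rays for a signature.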

###

(via Technology Review)

A Brief History of Sleep

From a very interesting Wall Street Journal piece on sleep, we learn that humans once slept in two distinct chunks each night:

So why is sleep, which seems so simple, becoming so problematic? Much of the problem can be traced to the revolutionary device that’s probably hanging above your head right now: the light bulb. Before this electrically illuminated age, our ancestors slept in two distinct chunks each night. The so-called first sleep took place not long after the sun went down and lasted until a little after midnight. A person would then wake up for an hour or so before heading back to the so-called second sleep.

It was a fact of life that was once as common as breakfast—and one which might have remained forgotten had it not been for the research of a Virginia Tech history professor named A. Roger Ekirch, who spent nearly 20 years in the 1980s and ’90s investigating the history of the night. As Prof. Ekirch leafed through documents ranging from property records to primers on how to spot a ghost, he kept noticing strange references to sleep. In “The Canterbury Tales,” for instance, one of the characters in “The Squire’s Tale” wakes up in the early morning following her “first sleep” and then goes back to bed. A 15th-century medical book, meanwhile, advised readers to spend their “first sleep” on the right side and after that to lie on their left. A cleric in England wrote that the time between the first and second sleep was the best time for serious study.

The time between the two bouts of sleep was a natural and expected part of the night, and depending on your needs, was spent praying, reading, contemplating your dreams or having sex. The last one was perhaps the most popular. A noted 16th-century French physician named Laurent Joubert concluded that plowmen, artisans and others who worked with their hands were able to conceive more children because they waited until after their first sleep, when their energy was replenished, to make love.

The term for this is “segmented sleep,” and the pattern can be reproduced:

Studies show that this type of sleep is so ingrained in our nature that it will reappear if given a chance. Experimental subjects sequestered from artificial lights have tended to ease into this rhythm. What’s more, cultures without artificial light still sleep this way. In the 1960s, anthropologists studying the Tiv culture in central Nigeria found that group members not only practiced segmented sleep, but also used roughly the same terms to describe it.

Fascinating.

15-Year-Old Improves Pancreatic Cancer Test

Maryland teenager Jack Andraka isn’t old enough to drive yet, but he has already pioneered a new, improved test for diagnosing pancreatic cancer that is 90% accurate, 400 times more sensitive, and 26,000 times less expensive than existing methods.

When Andraka had solidified ideas for his novel paper sensor, he wrote out his procedure, timeline, and budget, and emailed 200 professors at research institutes. He got 199 rejections and one acceptance from Johns Hopkins: “If you send out enough emails, someone’s going to say yes.” Andraka was recently awarded the grand prize at the Intel International Science and Engineering Fair for his groundbreaking discoveries.

Persistence is the key.

###

(via Make)

Why Healthcare in America Is So Expensive

In his latest post for The Washington Post, Ezra Klein compares the cost of healthcare procedures in the United States with those in France, Britain, Canada, and India. His answer to why healthcare is so expensive in America turns out to be simple: the prices are higher.

On Friday, the International Federation of Health Plans — a global insurance trade association that includes more than 100 insurers in 25 countries — released more direct evidence. It surveyed its members on the prices paid for 23 medical services and products in different countries, asking after everything from a routine doctor’s visit to a dose of Lipitor to coronary bypass surgery. And in 22 of 23 cases, Americans are paying higher prices than residents of other developed countries. Usually, we’re paying quite a bit more. The exception is cataract surgery, which appears to be costlier in Switzerland, though cheaper everywhere else.

Prices don’t explain all of the difference between America and other countries. But they do explain a big chunk of it. The question, of course, is why Americans pay such high prices — and why we haven’t done anything about it.

“Other countries negotiate very aggressively with the providers and set rates that are much lower than we do,” Anderson says. They do this in one of two ways. In countries such as Canada and Britain, prices are set by the government. In others, such as Germany and Japan, they’re set by providers and insurers sitting in a room and coming to an agreement, with the government stepping in to set prices if they fail.

In America, Medicare and Medicaid negotiate prices on behalf of their tens of millions of members and, not coincidentally, purchase care at a substantial markdown from the commercial average. But outside that, it’s a free-for-all. Providers largely charge what they can get away with, often offering different prices to different insurers, and an even higher price to the uninsured.

Some specific examples:

In 2009, Americans spent $7,960 per person on health care. Our neighbors in Canada spent $4,808. The Germans spent $4,218. The French, $3,978. If we had the per-person costs of any of those countries, America’s deficits would vanish. Workers would have much more money in their pockets. Our economy would grow more quickly, as our exports would be more competitive.
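Those per-person figures make the gap easy to put a total on. A rough calculation (note that the 2009 U.S. population of roughly 307 million is my own added assumption, not a number from Klein’s piece):

```python
# Rough scale of the gap, using the 2009 per-person figures Klein cites.
# The U.S. population (~307 million in 2009) is an assumption added here
# for illustration; it does not appear in the article.
us_per_person = 7960       # dollars per person, 2009
canada_per_person = 4808   # dollars per person, 2009
us_population = 307e6      # assumed approximate 2009 population

excess = (us_per_person - canada_per_person) * us_population
print(f"~${excess / 1e12:.2f} trillion per year above Canadian rates")
```

Roughly a trillion dollars a year, which gives a sense of why Klein says the deficits would vanish.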

Klein also quotes Tom Sackville, who served in Margaret Thatcher’s government. Sackville explains that in America, healthcare is much more of a routine business (“very much something people make money out of”) than anywhere else, where there may be embarrassment at making so much money from patients.

Something Klein neglects to mention, but which a commenter named “blert” picks up, is how Americans subsidize the cost of medicine for everyone else by investing so much money in research and development:

[S]ome of the development of MRI technology happened in Britain. Most was performed in the U.S. Who is paying for the cost of development of this and other new technology and drugs? It’s often not the people in places like Canada and France, where government controls hold down prices. Most of the cost of research and development is paid for by Americans. We pay perhaps five times more in the U.S. for some procedures than people in France pay, but the technology might not exist in the first place if we didn’t pay this disproportionate share. Once the technology exists, companies keep charging as much as they can in the U.S. to recoup costs and to fund development of the next big thing in medicine, and meanwhile other countries in the world adopt the technology, gaining benefits from it without actually paying the costs. This is Canada, France, and much of Europe. Plenty of medical research goes on in these countries, but American consumers ultimately bear most of the cost. It’s an unfair system in many respects, but it’s what has kept medical research moving ahead for the last several decades.

Healthcare is such a complex topic that nothing I (or Klein) can write in a blog post can begin to fully explain the difference between healthcare costs in America and in Europe. But hopefully the quotes above provide food for thought.

How Alzheimer’s Disease Spreads

Researchers at Columbia and Harvard performed an experiment with genetically engineered mice that produce abnormal human tau proteins, and they found a path for the spread of Alzheimer’s disease:

Alzheimer’s researchers have long known that dying, tau-filled cells first emerge in a small area of the brain where memories are made and stored. The disease then slowly moves outward to larger areas that involve remembering and reasoning.

But for more than a quarter-century, researchers have been unable to decide between two explanations. One is that the spread may mean that the disease is transmitted from neuron to neuron, perhaps along the paths that nerve cells use to communicate with one another. Or it could simply mean that some brain areas are more resilient than others and resist the disease longer.

The new studies provide an answer. And they indicate it may be possible to bring Alzheimer’s disease to an abrupt halt early on by preventing cell-to-cell transmission, perhaps with an antibody that blocks tau.

According to Wikipedia, there are more than 25 million sufferers of Alzheimer’s worldwide. This is a disease that is predicted to affect 1 in 85 people globally by 2050. It’s encouraging to see progress being made in this field, even if we are many years away from a cure.