Researchers Find a Way to Unboil an Egg

I considered this one of the fundamental truths as I was growing up and taking a number of courses in basic biology (and later, microbiology and chemistry): once you boil an egg, there is no way to unboil that egg. Proteins denature once subjected to heat, and do not fold back into their original shape and structure.

It turns out, based on recent research, that this may not necessarily be the case. According to findings published in the journal ChemBioChem:

University of California Irvine and Australian chemists have figured out how to unboil egg whites – an innovation that could dramatically reduce costs for cancer treatments, food production and other segments of the $160 billion global biotechnology industry.

To re-create a clear protein known as lysozyme once an egg has been boiled, he [UCI chemistry professor Gregory Weiss] and his colleagues add a urea substance that chews away at the whites, liquefying the solid material. That’s half the process; at the molecular level, protein bits are still balled up into unusable masses. The scientists then employ a vortex fluid device, a high-powered machine designed by Professor Colin Raston’s laboratory at South Australia’s Flinders University. Shear stress within thin, microfluidic films is applied to those tiny pieces, forcing them back into untangled, proper form.

Here is the abstract of the paper, titled “Shear-Stress-Mediated Refolding of Proteins from Aggregates and Inclusion Bodies”:

Recombinant protein overexpression of large proteins in bacteria often results in insoluble and misfolded proteins directed to inclusion bodies. We report the application of shear stress in micrometer-wide, thin fluid films to refold boiled hen egg white lysozyme, recombinant hen egg white lysozyme, and recombinant caveolin-1. Furthermore, the approach allowed refolding of a much larger protein, cAMP-dependent protein kinase A (PKA). The reported methods require only minutes, which is more than 100 times faster than conventional overnight dialysis. This rapid refolding technique could significantly shorten times, lower costs, and reduce waste streams associated with protein expression for a wide range of industrial and research applications.

Obviously, this is tremendous news, and it will no doubt see other labs trying to replicate the study.

IBM’s SyNAPSE Chip Moves Closer to Brain-Like Computing

This week, scientists at IBM Research unveiled a brain-inspired computer chip and ecosystem. From their press release on the so-called SyNAPSE chip:

Scientists from IBM unveiled the first neurosynaptic computer chip to achieve an unprecedented scale of one million programmable neurons, 256 million programmable synapses and 46 billion synaptic operations per second per watt. At 5.4 billion transistors, this fully functional and production-scale chip is currently one of the largest CMOS chips ever built, yet, while running at biological real time, it consumes a minuscule 70mW—orders of magnitude less power than a modern microprocessor.

MIT Technology Review has a good summary as well:

IBM’s SyNapse chip processes information using a network of just over one million “neurons,” which communicate with one another using electrical spikes—as actual neurons do. The chip uses the same basic components as today’s commercial chips—silicon transistors. But its transistors are configured to mimic the behavior of both neurons and the connections—synapses—between them.

The SyNapse chip breaks with a design known as the von Neumann architecture that has underpinned computer chips for decades. Although researchers have been experimenting with chips modeled on brains—known as neuromorphic chips—since the late 1980s, until now all have been many times less complex, and not powerful enough to be practical (see “Thinking in Silicon”). Details of the chip were published today in the journal Science.

The new chip is not yet a product, but it is powerful enough to work on real-world problems. In a demonstration at IBM’s Almaden research center, MIT Technology Review saw one recognize cars, people, and bicycles in video of a road intersection. A nearby laptop that had been programmed to do the same task processed the footage 100 times slower than real time, and it consumed 100,000 times as much power as the IBM chip. IBM researchers are now experimenting with connecting multiple SyNapse chips together, and they hope to build a supercomputer using thousands.

I think this kind of experimentation is fascinating. You can read more at Science Magazine (subscription required to view full text).

 

A Zircon Crystal on Earth Dated to 4.4 Billion Years Old

From a recently published paper in Nature Geoscience, we learn that the oldest known piece of Earth’s crust has been dated to 4.4 billion years old. It is a piece of zircon crystal measuring just 400 micrometers in its longest dimension, a bit larger than a house dust mite, or about the width of four human hairs:

The only physical evidence from the earliest phases of Earth’s evolution comes from zircons, ancient mineral grains that can be dated using the U–Th–Pb geochronometer. Oxygen isotope ratios from such zircons have been used to infer when the hydrosphere and conditions habitable to life were established. Chemical homogenization of Earth’s crust and the existence of a magma ocean have not been dated directly, but must have occurred earlier. However, the accuracy of the U–Pb zircon ages can plausibly be biased by poorly understood processes of intracrystalline Pb mobility. Here we use atom-probe tomography to identify and map individual atoms in the oldest concordant grain from Earth, a 4.4-Gyr-old Hadean zircon with a high-temperature overgrowth that formed about 1 Gyr after the mineral’s core. Isolated nanoclusters, measuring about 10 nm and spaced 10–50 nm apart, are enriched in incompatible elements including radiogenic Pb with unusually high 207Pb/206Pb ratios. We demonstrate that the length scales of these clusters make U–Pb age biasing impossible, and that they formed during the later reheating event. Our tomography data thereby confirm that any mixing event of the silicate Earth must have occurred before 4.4 Gyr ago, consistent with magma ocean formation by an early moon-forming impact about 4.5 Gyr ago.
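At its core, the U–Th–Pb geochronometer the abstract leans on is simple decay arithmetic: the age follows from the measured ratio of radiogenic 206Pb to remaining 238U. Here is a minimal sketch, assuming the standard 238U decay constant; the real method cross-checks several decay chains and must rule out Pb mobility, which is exactly the question this paper addresses.

```python
import math

# Decay constant of 238U in inverse years (half-life ~4.468 Gyr).
LAMBDA_238 = 1.55125e-10

def u_pb_age(pb206_u238_ratio):
    """Age in years from the measured radiogenic 206Pb/238U ratio.

    Radiogenic Pb accumulates as N_Pb/N_U = exp(lambda * t) - 1;
    solving for t gives the age.
    """
    return math.log(1 + pb206_u238_ratio) / LAMBDA_238

# A measured ratio of ~0.979 corresponds to roughly 4.4 billion years.
print(round(u_pb_age(0.979) / 1e9, 2))  # 4.4
```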


(via CNN)

The 2014 Edge Question: What Scientific Idea is Ready for Retirement?

Every year since 1998, Edge.org editor John Brockman has posed one thought-provoking question to some of the world’s greatest thinkers across a variety of disciplines, and then assembled the responses in an annual anthology. Last year, he published a book, This Explains Everything: Deep, Beautiful, and Elegant Theories of How the World Works, which collects a number of these responses in a single volume.

For 2014, the annual Edge.org question is: What Scientific Idea is Ready for Retirement? I’ll be reading the responses over the next few weeks, but for now, I wanted to link to the main page and highlight a few notable contributions:

1) Nassim Taleb, one of my all-time favourite thinkers and authors, who argues for throwing out standard deviation as a measure:

The notion of standard deviation has confused hordes of scientists; it is time to retire it from common use and replace it with the more effective one of mean deviation. Standard deviation, STD, should be left to mathematicians, physicists and mathematical statisticians deriving limit theorems. There is no scientific reason to use it in statistical investigations in the age of the computer, as it does more harm than good—particularly with the growing class of people in social science mechanistically applying statistical tools to scientific problems.

Say someone just asked you to measure the “average daily variations” for the temperature of your town (or for the stock price of a company, or the blood pressure of your uncle) over the past five days. The five changes are: (-23, 7, -3, 20, -1). How do you do it?

Do you take every observation: square it, average the total, then take the square root? Or do you remove the sign and calculate the average? For there are serious differences between the two methods. The first produces an average of 15.7, the second 10.8. The first is technically called the root mean square deviation. The second is the mean absolute deviation, MAD. It corresponds to “real life” much better than the first—and to reality. In fact, whenever people make decisions after being supplied with the standard deviation number, they act as if it were the expected mean deviation.

It is all due to a historical accident: in 1893, the great Karl Pearson introduced the term “standard deviation” for what had been known as “root mean square error”. The confusion started then: people thought it meant mean deviation. The idea stuck: every time a newspaper has attempted to clarify the concept of market “volatility”, it defined it verbally as mean deviation yet produced the numerical measure of the (higher) standard deviation.

But it is not just journalists who fall for the mistake: I recall seeing official documents from the Department of Commerce and the Federal Reserve partaking of the conflation, even regulators in statements on market volatility. What is worse, Goldstein and I found that a high number of data scientists (many with PhDs) also get confused in real life.
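Taleb’s two numbers are easy to reproduce. Note that his 15.7 is the sample standard deviation (dividing by n-1 rather than n), which is what Python’s `statistics.stdev` returns:

```python
import statistics

changes = [-23, 7, -3, 20, -1]

# Root mean square deviation (sample standard deviation, n-1 denominator)
std = statistics.stdev(changes)

# Mean absolute deviation: drop the sign, then average
mad = statistics.mean(abs(x) for x in changes)

print(round(std, 1))  # 15.7
print(round(mad, 1))  # 10.8
```

The gap between the two widens whenever the data contain large outliers, since squaring weights them more heavily; that is the heart of Taleb’s objection.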

2) Jay Rosen, who argues we should retire the concept of “information overload”:

Here’s the best definition of information that I know of: information is a measure of uncertainty reduced. It’s deceptively simple. In order to have information, you need two things: an uncertainty that matters to us (we’re having a picnic tomorrow, will it rain?) and something that resolves it (weather report.) But some reports create the uncertainty that is later to be solved.

Suppose we learn from news reports that the National Security Agency “broke” encryption on the Internet. That’s information! It reduces uncertainty about how far the U.S. government was willing to go. (All the way.) But the same report increases uncertainty about whether there will continue to be a single Internet, setting us up for more information when that larger picture becomes clearer. So information is a measure of uncertainty reduced, but also of uncertainty created. Which is probably what we mean when we say: “well, that raises more questions than it answers.”
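Rosen’s definition is essentially Claude Shannon’s: information is the amount by which a report reduces entropy (uncertainty). A minimal sketch of the picnic example, where the 95% post-report confidence is an illustrative assumption:

```python
import math

def entropy(probs):
    """Shannon entropy in bits: the uncertainty in a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Before the weather report, rain tomorrow is a coin flip: 1 bit of uncertainty.
before = entropy([0.5, 0.5])

# After the report we are (say) 95% sure it won't rain: ~0.29 bits remain.
after = entropy([0.95, 0.05])

information_gained = before - after
print(round(information_gained, 2))  # 0.71 bits of uncertainty reduced
```

Rosen’s caveat about reports that create uncertainty corresponds to a report opening a new question whose entropy was not in the books before, so the net accounting can go either way.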

3) Richard Dawkins thinks “essentialism” should be retired:

Essentialism—what I’ve called “the tyranny of the discontinuous mind”—stems from Plato, with his characteristically Greek geometer’s view of things. For Plato, a circle, or a right triangle, were ideal forms, definable mathematically but never realised in practice. A circle drawn in the sand was an imperfect approximation to the ideal Platonic circle hanging in some abstract space. That works for geometric shapes like circles, but essentialism has been applied to living things and Ernst Mayr blamed this for humanity’s late discovery of evolution—as late as the nineteenth century. If, like Aristotle, you treat all flesh-and-blood rabbits as imperfect approximations to an ideal Platonic rabbit, it won’t occur to you that rabbits might have evolved from a non-rabbit ancestor, and might evolve into a non-rabbit descendant. If you think, following the dictionary definition of essentialism, that the essence of rabbitness is “prior to” the existence of rabbits (whatever “prior to” might mean, and that’s a nonsense in itself) evolution is not an idea that will spring readily to your mind, and you may resist when somebody else suggests it.

Paleontologists will argue passionately about whether a particular fossil is, say, Australopithecus or Homo. But any evolutionist knows there must have existed individuals who were exactly intermediate. It’s essentialist folly to insist on the necessity of shoehorning your fossil into one genus or the other. There never was an Australopithecus mother who gave birth to a Homo child, for every child ever born belonged to the same species as its mother. The whole system of labelling species with discontinuous names is geared to a time slice, the present, in which ancestors have been conveniently expunged from our awareness (and “ring species” tactfully ignored). If by some miracle every ancestor were preserved as a fossil, discontinuous naming would be impossible. Creationists are misguidedly fond of citing “gaps” as embarrassing for evolutionists, but gaps are a fortuitous boon for taxonomists who, with good reason, want to give species discrete names. Quarrelling about whether a fossil is “really” Australopithecus or Homo is like quarrelling over whether George should be called “tall”. He’s five foot ten, doesn’t that tell you what you need to know?

4) Kevin Kelly, who argues that the idea of “fully random mutations” should be retired (this is something I’ve known for a while, having taken a number of courses in molecular biology):

What is commonly called “random mutation” does not in fact occur in a mathematically random pattern. The process of genetic mutation is extremely complex, with multiple pathways, involving more than one system. Current research suggests most spontaneous mutations occur as errors in the repair process for damaged DNA. Neither the damage nor the errors in repair have been shown to be random in where they occur, how they occur, or when they occur. Rather, the idea that mutations are random is simply a widely held assumption by non-specialists and even many teachers of biology. There is no direct evidence for it.

On the contrary, there’s much evidence that genetic mutations vary in patterns. For instance, it is pretty much accepted that mutation rates increase or decrease as stress on the cells increases or decreases. These variable rates of mutation include mutations induced by stress from an organism’s predators and competition, as well as increased mutations brought on by environmental and epigenetic factors. Mutations have also been shown to have a higher chance of occurring near a place in DNA where mutations have already occurred, creating mutation hotspot clusters—a non-random pattern.

5) Ian Bogost, professor at my alma mater, Georgia Tech, who thinks “science” should be retired:

Beyond encouraging people to see science as the only direction for human knowledge and absconding with the subject of materiality, the rhetoric of science also does a disservice to science itself. It makes science look simple, easy, and fun, when science is mostly complex, difficult, and monotonous.

A case in point: the popular Facebook page “I f*cking love science” posts quick-take variations on the “science of x” theme, mostly images and short descriptions of unfamiliar creatures like the pink fairy armadillo, or illustrated birthday wishes to famous scientists like Stephen Hawking. But as the science fiction writer John Skylar rightly insisted in a fiery takedown of the practice last year, most people don’t f*cking love science, they f*cking love photography—pretty images of fairy armadillos and renowned physicists. The pleasure derived from these pictures obviates the public’s need to understand how science actually gets done—slowly and methodically, with little acknowledgement and modest pay in unseen laboratories and research facilities.

The rhetoric of science has consequences. Things that have no particular relation to scientific practice must increasingly frame their work in scientific terms to earn any attention or support. The sociology of Internet use suddenly transformed into “web science.” Long accepted practices of statistical analysis have become “data science.” Thanks to shifting educational and research funding priorities, anything that can’t claim that it is a member of a STEM (science, technology, engineering, and math) field will be left out in the cold. Unfortunately, the rhetoric of science offers the most tactical response to such new challenges. Unless humanists reframe their work as “literary science,” they risk getting marginalized, defunded and forgotten.

When you’re selling ideas, you have to sell the ideas that will sell. But in a secular age in which the abstraction of “science” risks replacing all other abstractions, a watered-down, bland, homogeneous version of science is all that will remain if the rhetoric of science is allowed to prosper.

We need not choose between God and man, science and philosophy, interpretation and evidence. But ironically, in its quest to prove itself as the supreme form of secular knowledge, science has inadvertently elevated itself into a theology. Science is not a practice so much as it is an ideology. We don’t need to destroy science in order to bring it down to earth. But we do need to bring it down to earth again, and the first step in doing so is to abandon the rhetoric of science that has become its most popular devotional practice.

If you want to get smarter today, go here and spend a few hours reading through the contributions.

Why Everyone Seems to Have Cancer

A thoughtful take comparing heart disease and cancer, by George Johnson in The New York Times:

Half a century ago, the story goes, a person was far more likely to die from heart disease. Now cancer is on the verge of overtaking it as the No. 1 cause of death.

Troubling as this sounds, the comparison is unfair. Cancer is, by far, the harder problem — a condition deeply ingrained in the nature of evolution and multicellular life. Given that obstacle, cancer researchers are fighting and even winning smaller battles: reducing the death toll from childhood cancers and preventing — and sometimes curing — cancers that strike people in their prime. But when it comes to diseases of the elderly, there can be no decisive victory. This is, in the end, a zero-sum game.

As people age their cells amass more potentially cancerous mutations. Given a long enough life, cancer will eventually kill you — unless you die first of something else. That would be true even in a world free from carcinogens and equipped with the most powerful medical technology.

The author is keen to point out that the future of medicine will focus on prevention rather than treatment.

Remembering Carl Sagan: “We Are the Custodians of Life’s Meaning”

We lost Carl Sagan on this day, seventeen years ago. It was only in the last few years that I discovered his voice and his wisdom. I wanted to share one of the best tributes to his memory, compiled by Reid Gower and simply titled The Sagan Series. It’s a series of ten YouTube videos with Sagan narrating the wonder of our planet, space exploration, and our life’s purpose.

My favourite is probably the first video, which, to this day, is still the best encapsulation of why man should and will venture out into space.

But my favourite quote probably comes from the third video, titled “A Reassuring Fable,” in which Sagan reflects on the meaning of life:

We long to be here for a purpose. Even though, despite much self-deception, none is evident. The significance of our lives and our fragile planet is then determined only by our own wisdom and courage. We are the custodians of life’s meaning.

He goes on to say:

We long for a Parent to care for us, to forgive us our errors, to save us from our childish mistakes. But knowledge is preferable to ignorance. Better, by far, to embrace the hard truth than a reassuring fable…If we crave some cosmic purpose, then let us find ourselves a worthy goal.

Amen.

Regardless of where you stand on the religion/science spectrum, The Sagan Series is the best thing you can watch today.

The Technologies That Read Your Facial Expressions

An interesting, if somewhat disconcerting, overview of the rising technologies/algorithms that can interpret the emotions on your face:

Ever since Darwin, scientists have systematically analyzed facial expressions, finding that many of them are universal. Humans are remarkably consistent in the way their noses wrinkle, say, or their eyebrows move as they experience certain emotions. People can be trained to note tiny changes in facial muscles, learning to distinguish common expressions by studying photographs and video. Now computers can be programmed to make those distinctions, too.

Companies in this field include Affectiva, based in Waltham, Mass., and Emotient, based in San Diego. Affectiva used webcams over two and a half years to accumulate and classify about 1.5 billion emotional reactions from people who gave permission to be recorded as they watched streaming video, said Rana el-Kaliouby, the company’s co-founder and chief science officer. These recordings served as a database to create the company’s face-reading software, which it will offer to mobile software developers starting in mid-January.

Face-reading technology may one day be paired with programs that have complementary ways of recognizing emotion, such as software that analyzes people’s voices, said Paul Saffo, a technology forecaster. If computers reach the point where they can combine facial coding, voice sensing, gesture tracking and gaze tracking, he said, a less stilted way of interacting with machines will ensue.

One book I recommend that is related to this topic is by Joe Navarro, an ex-FBI agent, titled What Every Body is Saying (a guide to speed-reading people, including when they are telling a lie, etc.).

On How Memories Pass Between Generations

The BBC highlights a recent study titled “Parental olfactory experience influences behavior and neural structure in subsequent generations,” in which researchers at Emory University in Atlanta, GA trained mice to avoid a smell; these mice subsequently passed the aversion on to their “grandchildren.” The results are important for phobia and anxiety research.

Both the mice’s offspring and their offspring’s offspring were “extremely sensitive” to cherry blossom and would avoid the scent, despite never having experienced it in their lives. Remarkably, the effect extended to changes in brain structure.

From the abstract:

Using olfactory molecular specificity, we examined the inheritance of parental traumatic exposure, a phenomenon that has been frequently observed, but not understood. We subjected F0 mice to odor fear conditioning before conception and found that subsequently conceived F1 and F2 generations had an increased behavioral sensitivity to the F0-conditioned odor, but not to other odors. When an odor (acetophenone) that activates a known odorant receptor (Olfr151) was used to condition F0 mice, the behavioral sensitivity of the F1 and F2 generations to acetophenone was complemented by an enhanced neuroanatomical representation of the Olfr151 pathway. Bisulfite sequencing of sperm DNA from conditioned F0 males and F1 naive offspring revealed CpG hypomethylation in the Olfr151 gene. In addition, in vitro fertilization, F2 inheritance and cross-fostering revealed that these transgenerational effects are inherited via parental gametes. Our findings provide a framework for addressing how environmental information may be inherited transgenerationally at behavioral, neuroanatomical and epigenetic levels.

Fascinating.

A New Species: The Clean Room Bacteria

A fascinating piece in Scientific American, summarizing how scientists discovered a new species of bacterium in two separate clean room facilities (one at the European Space Agency and the other at Kennedy Space Center):

The researchers named the bacterium Tersicoccus phoenicis. “Tersi” is Latin for clean, as in clean room, and “coccus” comes from Greek and describes the bacterium in this genus’s berrylike shape. “Phoenicis” as the species name pays homage to the Phoenix lander. The scientists determined that T. phoenicis shares less than 95 percent of its genetic sequence with its closest bacterial relative. That fact, combined with the unique molecular composition of its cell wall and other properties, was enough to classify Tersicoccus phoenicis as part of a new genus—the next taxonomic level up from species in the system used to classify biological organisms. The researchers are not sure yet if the bug lives only in clean rooms or survives elsewhere but has simply escaped detection so far, says Christine Moissl-Eichinger of the University of Regensburg in Germany, who identified the species at the ESA’s Guiana Space Center in Kourou, French Guiana. Some experts doubt that Tersicoccus phoenicis would fare well anywhere other than a clean room. “I think these bugs are less competitive, and they just don’t do so well in normal conditions,” says Cornell University astrobiologist Alberto Fairén, who was not involved in the analysis of the new genus. “But when you systematically eliminate almost all competition in the clean rooms, then this genus starts to be prevalent.”

Only the hardiest of microbes can survive inside a spacecraft clean room, where the air is stringently filtered, the floors are cleansed with certified cleaning agents, and surfaces are wiped with alcohol and hydrogen peroxide, then heated to temperatures high enough to kill almost any living thing. Any human who enters the room must be clad head to foot in a “bunny suit” with gloves, booties, a hat and a mask, so that the only exposed surface is the area around a person’s eyes. Even then, the technician can enter only after stomping on sticky tape on the floor to remove debris from the soles of her booties, and passing through an “air shower” to blow dust away from the rest of her. 

As always: life finds a way. Not only was this a discovery of a new species, it was a discovery of a new genus.
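The “less than 95 percent of its genetic sequence” figure is a statement of sequence identity between aligned gene sequences. Here is a toy sketch with made-up twenty-base sequences; real taxonomy compares full marker genes such as 16S rRNA, and the roughly 95 percent genus-level cutoff is a heuristic, not a hard rule:

```python
# Toy illustration of percent sequence identity (hypothetical sequences).
def percent_identity(seq1, seq2):
    """Percentage of aligned positions at which the two sequences match."""
    matches = sum(a == b for a, b in zip(seq1, seq2))
    return 100 * matches / min(len(seq1), len(seq2))

new_species    = "ACGTTGCAGCTAGGCTAACG"
known_relative = "ACGTAGCAGCTAGGATAACG"  # differs at two positions

print(percent_identity(new_species, known_relative))  # 90.0
```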

The full paper, for those of you interested, is here.

 

Do Humans Pick Friends Who Have Similar Genetic Makeup?

Nicholas A. Christakis and James H. Fowler, in a recent paper titled “Friendship and Natural Selection,” advance an interesting hypothesis: that we select friends whose genetic makeup is similar to our own. The dataset used was the famous Framingham Heart Study. From their abstract:

More than any other species, humans form social ties to individuals who are neither kin nor mates, and these ties tend to be with similar people. Here, we show that this similarity extends to genotypes. Across the whole genome, friends’ genotypes at the SNP level tend to be positively correlated (homophilic); however, certain genotypes are negatively correlated (heterophilic). A focused gene set analysis suggests that some of the overall correlation can be explained by specific systems; for example, an olfactory gene set is homophilic and an immune system gene set is heterophilic. Finally, homophilic genotypes exhibit significantly higher measures of positive selection, suggesting that, on average, they may yield a synergistic fitness advantage that has been helping to drive recent human evolution.
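The claim that friends’ genotypes “tend to be positively correlated” can be illustrated with a toy computation: code each genotype as a minor-allele count (0, 1, or 2) per SNP, then take the Pearson correlation across loci. The names and data below are made up for illustration; the actual study works across the whole genome, not ten loci:

```python
# Toy sketch of SNP-level genotype correlation (hypothetical data).
def pearson(x, y):
    """Pearson correlation coefficient between two equal-length vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Minor-allele counts (0, 1, or 2) at ten hypothetical SNPs.
friend_a = [0, 1, 2, 1, 0, 2, 1, 1, 0, 2]
friend_b = [0, 1, 2, 2, 0, 2, 1, 0, 0, 2]   # similar: homophilic
stranger = [2, 0, 0, 1, 2, 0, 1, 2, 2, 0]   # dissimilar

print(round(pearson(friend_a, friend_b), 2))  # 0.87
print(round(pearson(friend_a, stranger), 2))  # -0.87
```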

So the interesting question is: why would this happen? The arXiv blog goes into possible explanations:

Perhaps the genetic links are simply a reflection of this common background. Not so, say Christakis and Fowler. The correlation they have found exists only between friends but not between strangers. If this was a reflection of their common ancestry, then the genomes of strangers should be correlated just as strongly. “Pairs of (strictly unrelated) friends generally tend to be more genetically homophilic than pairs of strangers from the same population,” they point out.

There are certainly other processes that could lead to friends having similar genomes. One idea that dates back some 30 years is that a person’s genes cause them to seek out circumstances that are compatible with their phenotype. If that’s the case, then people with similar genes should end up in similar environments.

Personally, I don’t buy this:

There may be another mechanism at work. One idea is that humans can somehow identify people with similar genetic make up, perhaps with some kind of pheromone detector. Indeed, Christakis and Fowler say that some of the genes they found in common are related to olfaction, a discovery they describe as “intriguing and supportive”.

While interesting, I’m not entirely convinced of the overall findings and would be curious to see this study expand. What do you think?