Why are apes skinny and humans fat?

Scientists studied dead humans and bonobos in an effort to understand why humans became the fat primate. What happened when chimps and humans diverged? It's not clear, but the results millions of years later are.

...humans got fat. Chimps and bonobos are 13 percent skin, and we're only 6 percent skin, but we compensate for that by being up to 36 percent body fat on the high end of average, while bonobos average 4 percent. That's a wildly disproportionate fatness differential.
 

From an interview of one of the authors of the paper.

So what happened on the path from common ancestor to Homo sapiens?
One of the things is, you've gotta shift the body around and change the muscle from the forelimbs if you're a quadrupedal ape. Our ancestors—and most apes—can venture into open areas, but they live in forests. They're really tied to having tree cover available, because they get hot.
 
So we developed fat so we could get away from forests?
Compared to the apes, we have less muscle, which is an energy savings, because it's such an expensive tissue. Two important things about the way we store fat: We store it around our buttocks and thighs, but you want to make sure that you're storing fat so it doesn't interfere with locomotion. You don't want it on your feet, for instance. So you concentrate it around the center of gravity. And you also don't want it to interfere with being able to get rid of heat.
 
What was the benefit of having fat down low and weak arms?
If you're moving away from the forest and tree cover, you want to be able to exploit food in a more mosaic habitat that has areas of bush and a few forests around rivers. You want to be able to move into a lot of different areas. So you've gotta get rid of your hair, and really ramp up those sweat glands. Our skin has really been reorganized for a lot of different functions.
 
Do chimps and bonobos not have sweat glands? 
They have sweat glands. They're not really functioning. All primates have eccrine sweat glands in their hands and feet. Monkeys have them on their chests. [But] they're not stimulated by heat.

The biology of risk

That was the title of a fascinating opinion piece by John Coates from back in early June in the NYTimes.

Most of us tend to believe that stress is largely a psychological phenomenon, a state of being upset because something nasty has happened. But if you want to understand stress you must disabuse yourself of that view. The stress response is largely physical: It is your body priming itself for impending movement.

As such, most stress is not, well, stressful. For example, when you walk to the coffee room at work, your muscles need fuel, so the stress hormones adrenaline and cortisol recruit glucose from your liver and muscles; you need oxygen to burn this fuel, so your breathing increases ever so slightly; and you need to deliver this fuel and oxygen to cells throughout your body, so your heart gently speeds up and blood pressure increases. This suite of physical reactions forms the core of the stress response, and, as you can see, there is nothing nasty about it at all.

Far from it. Many forms of stress, like playing sports, trading the markets, even watching an action movie, are highly enjoyable. In moderate amounts, we get a rush from stress, we thrive on risk taking. In fact, the stress response is such a healthy part of our lives that we should stop calling it stress at all and call it, say, the challenge response.
 

Coates notes that the challenge response in humans is particularly sensitive to uncertainty and novelty, causing an elevation in cortisol which reduces our appetite for risk.

Based on that thesis, Coates argues that by reducing uncertainty about upcoming interest rate moves, Greenspan and Bernanke ensured that "one of the most powerful brakes on excessive risk taking in stocks was released," leading to much greater stock market volatility and more dramatic stock market booms and busts.

It may seem counterintuitive to use uncertainty to quell volatility. But a small amount of uncertainty surrounding short-term interest rates may act much like a vaccine immunizing the stock market against bubbles. More generally, if we view humans as embodied brains instead of disembodied minds, we can see that the risk-taking pathologies found in traders also lead chief executives, trial lawyers, oil executives and others to swing from excessive and ill-conceived risks to petrified risk aversion. It will also teach us to manage these risk takers, much as sport physiologists manage athletes, to stabilize their risk taking and to lower stress.
 

Because of the World Cup, I watched more soccer this summer than I had in my entire life up until now (I'd use football, but most posts tagged football in my blog will be about the American rendition, so for disambiguation I'm going to use what soccer fans would consider the profane nomenclature). People put on the games in the office, and everywhere I went it seemed some American TV was dialed to a match.

[This is a topic for another post, but I am curious about what drove this noticeable uptick in interest in soccer in the U.S. this summer. Was it the greater build out of social media? The success of the U.S. team? Increased coverage on ESPN? The fact that games were in the U.S. timezone this time, unlike the next World Cup or this past Winter Olympics? The rise of MLS? All or none of the above?]

While I'm far from a soccer expert, I did detect a noticeable tightening of game play in overtime. This could purely be because of fatigue, but it led to a less interesting form of play in those periods.

Coates' theory of the relationship between risk-taking and uncertainty reminded me of one of my pet peeves about many sports: the many mental traps that reduce risk-taking in athletes and coaches. From a sports design perspective, I'd argue that fans would prefer greater daring from players and teams more often. Volatility, with dramatic booms and busts, may not be desirable when it comes to your finances, but in sports and entertainment it's the building block for more compelling drama.

It's not just soccer. The reluctance of football coaches to go for it on fourth down more often is another example where reduced risk lowers entertainment value. The irony is that, mathematically, what feels like riskier behavior may be the more rational play. The math supports going for it on fourth down most if not all of the time. In soccer, ending overtime deadlocked leads to the randomness of penalty kicks. I'm not conversant with the statistics around soccer, but leaving your fate to penalty kicks feels less certain than just trying to win in overtime.
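To make the fourth-down point concrete, here's a toy expected-points comparison. Every number in it is a made-up round figure for illustration, not real NFL analytics:

```python
# Toy expected-points model of a fourth-and-short decision.
# All values are assumed round numbers, not measured data.
p_convert = 0.6     # assumed chance of converting fourth-and-short
ep_converted = 2.5  # assumed expected points if the drive continues
ep_turnover = -1.5  # assumed expected points after a failed attempt
ep_punt = -0.5      # assumed expected points after punting

# Expected value of going for it: weighted average of the two outcomes.
ep_go = p_convert * ep_converted + (1 - p_convert) * ep_turnover
print(f"go for it: {ep_go:+.2f}, punt: {ep_punt:+.2f}")

# Going for it beats punting whenever the conversion probability
# clears this break-even threshold.
breakeven = (ep_punt - ep_turnover) / (ep_converted - ep_turnover)
print(f"break-even conversion rate: {breakeven:.2f}")
```

With these particular assumptions, going for it is worth roughly a point more than punting, and any conversion rate above about 25 percent justifies the attempt. The real analytics are more nuanced, but the shape of the argument is the same: the "risky" play can carry the higher expected value.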

If the players and coaches won't behave rationally, however, they can have their hands forced by rule changes. What if the NFL just banned punting? Everyone would go for it on fourth down, and I'm confident that would be a more exciting game. What if soccer's overtime were sudden death?

My ideal sports design guiding philosophy: maximize entropy but still reward skilled play. That is, you can let the rough at the U.S. Open grow wild so golfers have to try to stay in the fairways or risk having to hack their way out of the weeds. You can shorten the first-round 7-game series in the NBA to 5 games to give the underdog a greater chance.

Don't design a game so skill-based that the outcome is never in doubt. There's a reason why people don't watch televised checkers. But also don't design a game so random that every contestant has an equal chance of winning regardless of skill. You might as well watch two teams flip a coin.

Cities are superlinear, companies are not

But unlike animals, cities do not slow down as they get bigger. They speed up with size! The bigger the city, the faster people walk and the faster they innovate. All the productivity-related numbers increase with size---wages, patents, colleges, crimes, AIDS cases---and their ratio is superlinear. It's 1.15/1. With each increase in size, cities get a value-added of 15 percent. Agglomerating people, evidently, increases their efficiency and productivity.

Does that go on forever? Cities create problems as they grow, but they create solutions to those problems even faster, so their growth and potential lifespan is in theory unbounded.

...

Are corporations more like animals or more like cities? They want to be like cities, with ever increasing productivity as they grow and potentially unbounded lifespans. Unfortunately, West et al.'s research on 22,000 companies shows that as they increase in size from 100 to 1,000,000 employees, their net income and assets (and 23 other metrics) per person increase only at a 4/5 ratio. Like animals and cities they do grow more efficient with size, but unlike cities, their innovation cannot keep pace as their systems gradually decay, requiring ever more costly repair until a fluctuation sinks them. Like animals, companies are sublinear and doomed to die.
 

From a Stewart Brand summary of research by Geoffrey West.
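The contrast in Brand's summary can be sketched as a pair of power laws, total output Y = Y0 · N^β. The 1.15 exponent for cities comes from the quote; the 0.85 I use for companies is an assumed sublinear value for contrast, since the summary gives only a per-person "4/5 ratio" rather than an exact total-output exponent:

```python
# Illustrative power-law scaling: Y = y0 * n**beta.
# beta=1.15 for cities is from the quote; beta=0.85 for companies
# is an assumed sublinear value chosen for contrast.

def scaled_output(n, beta, y0=1.0):
    """Total output for a population/headcount n under exponent beta."""
    return y0 * n ** beta

def per_capita(n, beta, y0=1.0):
    """Output per person at size n."""
    return scaled_output(n, beta, y0) / n

# Doubling a superlinear city's population raises per-capita output...
city_gain = per_capita(2_000_000, 1.15) / per_capita(1_000_000, 1.15)
print(f"city per-capita change per doubling: {city_gain:.3f}x")  # ~1.11x

# ...while doubling a sublinear company's headcount lowers it.
firm_gain = per_capita(2_000, 0.85) / per_capita(1_000, 0.85)
print(f"firm per-capita change per doubling: {firm_gain:.3f}x")  # ~0.90x
```

The sign of β − 1 is the whole story: above 1, every doubling compounds per-capita gains; below 1, every doubling erodes them, which is West's mathematical framing of why cities accelerate and companies stall.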

From a long conversation with West at Edge:

Let me tell you the interpretation. Again, this is still speculative.

The great thing about cities, the thing that is amazing about cities is that as they grow, so to speak, their dimensionality increases. That is, the space of opportunity, the space of functions, the space of jobs just continually increases. And the data shows that. If you look at job categories, it continually increases. I'll use the word "dimensionality."  It opens up. And in fact, one of the great things about cities is that it supports crazy people. You walk down Fifth Avenue, you see crazy people, and there are always crazy people. Well, that's good. It is tolerant of extraordinary diversity.

This is in complete contrast to companies, with the exception of companies maybe at the beginning (think of the image of the Google boys in the back garage, with ideas of the search engine no doubt promoting all kinds of crazy ideas and having maybe even crazy people around them).

Well, Google is a bit of an exception because it still tolerates some of that. But most companies start out probably with some of that buzz. But the data indicates that at about 50 employees to a hundred, that buzz starts to stop. And a company that was more multi dimensional, more evolved becomes one-dimensional. It closes down.

Indeed, if you go to General Motors or you go to American Airlines or you go to Goldman Sachs, you don't see crazy people. Crazy people are fired. Well, to speak of crazy people is taking the extreme. But maverick people are often fired.

It's not surprising to learn that when manufacturing companies are on a down turn, they decrease research and development, and in fact in some cases, do actually get rid of it, thinking "oh, we can get that back, in two years we'll be back on track."

Well, this kind of thinking kills them. This is part of the killing, and this is part of the change from super linear to sublinear, namely companies allow themselves to be dominated by bureaucracy and administration over creativity and innovation, and unfortunately, it's necessary. You cannot run a company without administrative. Someone has got to take care of the taxes and the bills and the cleaning the floors and the maintenance of the building and all the rest of that stuff. You need it. And the question is, “can you do it without it dominating the company?” The data suggests that you can't.
 

Lastly, from an article about West and his research in the NYTimes.

The mathematical equations that West and his colleagues devised were inspired by the earlier findings of Max Kleiber. In the early 1930s, when Kleiber was a biologist working in the animal-husbandry department at the University of California, Davis, he noticed that the sprawlingly diverse animal kingdom could be characterized by a simple mathematical relationship, in which the metabolic rate of a creature is equal to its mass taken to the three-fourths power. This ubiquitous principle had some significant implications, because it showed that larger species need less energy per pound of flesh than smaller ones. For instance, while an elephant is 10,000 times the size of a guinea pig, it needs only 1,000 times as much energy. Other scientists soon found more than 70 such related laws, defined by what are known as “sublinear” equations. It doesn’t matter what the animal looks like or where it lives or how it evolved — the math almost always works.

West’s insight was that these strange patterns are caused by our internal infrastructure — the plumbing that makes life possible. By translating these biological designs into mathematics, West and his co-authors were able to explain the existence of Kleiber’s scaling laws. “I can’t tell you how satisfying this was,” West says. “Sometimes, I look out at nature and I think, Everything here is obeying my conjecture. It’s a wonderfully narcissistic feeling.”
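Kleiber's elephant and guinea pig figure is easy to verify: with metabolic rate scaling as mass to the three-fourths power, a 10,000-fold mass ratio implies a 10,000^0.75 = 1,000-fold energy ratio, exactly as the article states.

```python
# Checking the elephant/guinea-pig arithmetic from Kleiber's law:
# metabolic rate scales as mass**(3/4), so a 10,000x mass ratio
# should yield a 10,000**0.75 = 1,000x energy ratio.
mass_ratio = 10_000
energy_ratio = mass_ratio ** 0.75
print(f"energy ratio: {energy_ratio:.0f}x")  # -> 1000x
```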
 

The pace of technology has already shifted some of the old company scaling constraints in the past two decades. When I first joined Amazon, one of the first analyses I performed was a study of the fastest growing companies in history. Perhaps it was Jeff, perhaps it was Joy (our brilliant CFO at the time), but someone had in their mind that we could be the fastest growing company in history as measured by revenue. Back in 1997, no search engine gave good results for the question "what is the fastest growing company in history."

Some clear candidates emerged, like Wal-Mart and Sam's Club or Costco. I looked at technology giants like IBM and Microsoft. Two things were clear. First, almost every company had some low-revenue childhood years when they were finding their footing before they achieved the exponential growth they became famous for. Second, and this was most interesting to us, many companies seemed to suffer some distress right around $1B in revenue.

This was very curious, and a deeper examination revealed that many companies went through some growing pains right around that milestone because smaller company processes, systems, and personnel that worked fine until that point broke down at that volume of business. This was a classic scaling problem, and around $1B or just before it, many companies hit that wall, like the fabled 20 mile wall in a marathon.

Being as competitive as we were, we quickly turned our gaze inward to see which of our own systems and processes might break down as we approached our first billion in revenue (by early 1998 it was already clear to us that we were going to hit that in 1999).

Among other things, it led us to the year of GOHIO. Reminiscent of how, in David Foster Wallace's Infinite Jest, each year in the future had a corporate sponsor, each year at Amazon we had a theme that tied our key company goals into a memorable saying or rubric. One year it was Get Big Fast Baby because we were trying to achieve scale ahead of our competitors. GOHIO stood for Getting Our House In Order.

In finance, we made projections for all aspects of our business at $1B+ in revenue: orders, customer service contacts, shipments out of our distribution centers, website traffic, everything. In the year of GOHIO, the job of each division was to examine their processes, systems, and people and ensure they could support those volumes. If they couldn't, they had to get them ready to do so within that year.

Just a decade later, the $1B scaling wall seems like a distant memory. Coincidentally, Amazon has helped to tear down that barrier with Amazon Web Services (AWS), which makes it much easier for technology companies to scale their costs and infrastructure linearly with customer and revenue growth. Groupon came along and vaulted to $1B in revenue faster than any company in history.

[Yes, I realize Groupon revenue is built off of what consumers pay for a deal and that Groupon only keeps a portion of that, but no company takes home 100% of its revenue. I also realize Groupon has since run into issues, but those are not ones of scaling as much as inherent business model problems.]

Companies like Instagram and WhatsApp can now routinely scale to hundreds of millions of users with hardly a hiccup, and with far fewer employees than companies in the past. Unlike biological constraints like the circulation of blood, oxygen, or nutrients, technology has pushed some of the business scaling constraints out.

Now we look to companies like Google, Amazon, and Facebook, companies that seem to want to compete in a multitude of businesses, to study what the new scaling constraints might be. Technology has not removed all of them: government regulation, bureaucracy or other forms of coordination costs, and employee churn or hiring problems remain some of the common scaling constraints that put the brakes on growth.

Plant intelligence and the Turing Test

The New Yorker unlocked Michael Pollan's latest piece for them and it's a good one. The Intelligent Plant offers much of interest to more than just plant lovers.

Researchers have observed plant behavior that looks like intelligence. Accompanying the article is a video of a bean plant that seems to sense a metal pole a few feet away that it can wrap itself around. Like Adam and God reaching out with their fingertips in Michelangelo's The Creation of Adam, the plant casts its outermost stalk to and fro like a fishing line, trying to make contact with the pole. Later in the video, we see two bean plants reaching for the same pole, and once one reaches it, the other bean plant seems to turn away, as if realizing it has to find another vertical column to call home.

Many scientists dispute the concept of plant intelligence because plants have no brain, but perhaps that's just a human-centric view of intelligence.

No one I spoke to in the loose, interdisciplinary group of scientists working on plant intelligence claims that plants have telekinetic powers or feel emotions. Nor does anyone believe that we will locate a walnut-shaped organ somewhere in plants which processes sensory data and directs plant behavior. More likely, in the scientists’ view, intelligence in plants resembles that exhibited in insect colonies, where it is thought to be an emergent property of a great many mindless individuals organized in a network. Much of the research on plant intelligence has been inspired by the new science of networks, distributed computing, and swarm behavior, which has demonstrated some of the ways in which remarkably brainy behavior can emerge in the absence of actual brains.

...

In Mancuso’s view, our “fetishization” of neurons, as well as our tendency to equate behavior with mobility, keeps us from appreciating what plants can do. For instance, since plants can’t run away and frequently get eaten, it serves them well not to have any irreplaceable organs. “A plant has a modular design, so it can lose up to ninety per cent of its body without being killed,” he said. “There’s nothing like that in the animal world. It creates a resilience.”

Indeed, many of the most impressive capabilities of plants can be traced to their unique existential predicament as beings rooted to the ground and therefore unable to pick up and move when they need something or when conditions turn unfavorable. The “sessile life style,” as plant biologists term it, calls for an extensive and nuanced understanding of one’s immediate environment, since the plant has to find everything it needs, and has to defend itself, while remaining fixed in place. A highly developed sensory apparatus is required to locate food and identify threats. Plants have evolved between fifteen and twenty distinct senses, including analogues of our five: smell and taste (they sense and respond to chemicals in the air or on their bodies); sight (they react differently to various wavelengths of light as well as to shadow); touch (a vine or a root “knows” when it encounters a solid object); and, it has been discovered, sound. In a recent experiment, Heidi Appel, a chemical ecologist at the University of Missouri, found that, when she played a recording of a caterpillar chomping a leaf for a plant that hadn’t been touched, the sound primed the plant’s genetic machinery to produce defense chemicals. Another experiment, done in Mancuso’s lab and not yet published, found that plant roots would seek out a buried pipe through which water was flowing even if the exterior of the pipe was dry, which suggested that plants somehow “hear” the sound of flowing water.

Given that Alan Turing has just been given a royal pardon, I couldn't help but think of the Turing Test while reading this piece. Recall that Turing found the question of whether machines are intelligent to be “too meaningless.” That is, it's not a question that offers any concrete goalpost or test to prove or disprove itself. Instead, he proposed a question that could be answered in the form of a test:

Suppose that we have a person, a machine, and an interrogator. The interrogator is in a room separated from the other person and the machine. The object of the game is for the interrogator to determine which of the other two is the person, and which is the machine. The interrogator knows the other person and the machine by the labels ‘X’ and ‘Y’—but, at least at the beginning of the game, does not know which of the other person and the machine is ‘X’—and at the end of the game says either ‘X is the person and Y is the machine’ or ‘X is the machine and Y is the person’. The interrogator is allowed to put questions to the person and the machine of the following kind: “Will X please tell me whether X plays chess?” Whichever of the machine and the other person is X must answer questions that are addressed to X. The object of the machine is to try to cause the interrogator to mistakenly conclude that the machine is the other person; the object of the other person is to try to help the interrogator to correctly identify the machine.

What's often missed is one of the most profound and generous aspects of the Turing Test. By proposing what's often referred to as an Imitation Game, Turing recognizes that there may be forms of intelligence that we don't recognize. It's difficult to read Turing's proposal for the test and not think of his own life history, including his persecution for his homosexuality, a vicious intolerance that many believe led to his suicide (though in the past year there has been some dissent). He spent much of his life trying to pass for straight, and that subtext always hangs in the room when contemplating the Imitation Game.

It's this broad interpretation of intelligence that I thought of while reading Pollan's article on plant intelligence. One of the central debates surrounding the Turing Test is the same one raised in Pollan's article: does intelligence require consciousness?

Perhaps the most troublesome and troubling word of all in thinking about plants is “consciousness.” If consciousness is defined as inward awareness of oneself experiencing reality—“the feeling of what happens,” in the words of the neuroscientist Antonio Damasio—then we can (probably) safely conclude that plants don’t possess it. But if we define the term simply as the state of being awake and aware of one’s environment—“online,” as the neuroscientists say—then plants may qualify as conscious beings, at least according to Mancuso and Baluška. “The bean knows exactly what is in the environment around it,” Mancuso said. “We don’t know how. But this is one of the features of consciousness: You know your position in the world. A stone does not.”

In support of their contention that plants are conscious of their environment, Mancuso and Baluška point out that plants can be rendered unconscious by the same anesthetics that put animals out: drugs can induce in plants an unresponsive state resembling sleep. (A snoozing Venus flytrap won’t notice an insect crossing its threshold.) What’s more, when plants are injured or stressed, they produce a chemical—ethylene—that works as an anesthetic on animals.

In the article I linked earlier on the Turing Test is this passage on the consciousness objection, from Sir Geoffrey Jefferson's Lister Oration (1949):

Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain—that is, not only write it but know that it had written it. No mechanism could feel (and not merely artificially signal, an easy contrivance) pleasure at its successes, grief when its valves fuse, be warmed by flattery, be made miserable by its mistakes, be charmed by sex, be angry or depressed when it cannot get what it wants.

Now hop back to Pollan's article:

The central issue dividing the plant neurobiologists from their critics would appear to be this: Do capabilities such as intelligence, pain perception, learning, and memory require the existence of a brain, as the critics contend, or can they be detached from their neurobiological moorings? The question is as much philosophical as it is scientific, since the answer depends on how these terms get defined. The proponents of plant intelligence argue that the traditional definitions of these terms are anthropocentric—a clever reply to the charges of anthropomorphism frequently thrown at them. Their attempt to broaden these definitions is made easier by the fact that the meanings of so many of these terms are up for grabs. At the same time, since these words were originally created to describe animal attributes, we shouldn’t be surprised at the awkward fit with plants. It seems likely that, if the plant neurobiologists were willing to add the prefix “plant-specific” to intelligence and learning and memory and consciousness (as Mancuso and Baluška are prepared to do in the case of pain), then at least some of this “scientific controversy” might evaporate.

Indeed, I found more consensus on the underlying science than I expected. Even Clifford Slayman, the Yale biologist who signed the 2007 letter dismissing plant neurobiology, is willing to acknowledge that, although he doesn’t think plants possess intelligence, he does believe they are capable of “intelligent behavior,” in the same way that bees and ants are. In an e-mail exchange, Slayman made a point of underlining this distinction: “We do not know what constitutes intelligence, only what we can observe and judge as intelligent behavior.” He defined “intelligent behavior” as “the ability to adapt to changing circumstances” and noted that it “must always be measured relative to a particular environment.” Humans may or may not be intrinsically more intelligent than cats, he wrote, but when a cat is confronted with a mouse its behavior is likely to be demonstrably more intelligent.

Slayman went on to acknowledge that “intelligent behavior could perfectly well develop without such a nerve center or headquarters or director or brain—whatever you want to call it. Instead of ‘brain,’ think ‘network.’ It seems to be that many higher organisms are internally networked in such a way that local changes,” such as the way that roots respond to a water gradient, “cause very local responses which benefit the entire organism.” Seen that way, he added, the outlook of Mancuso and Trewavas is “pretty much in line with my understanding of biochemical/biological networks.” He pointed out that while it is an understandable human prejudice to favor the “nerve center” model, we also have a second, autonomic nervous system governing our digestive processes, which “operates most of the time without instructions from higher up.” Brains are just one of nature’s ways of getting complex jobs done, for dealing intelligently with the challenges presented by the environment. But they are not the only way: “Yes, I would argue that intelligent behavior is a property of life.”

Emergent or network-based intelligence does have the advantage of not being dependent on some central brain. The concentration of human intelligence in a single organ has always been a core vulnerability of our species.

The same can be said of organizations. The more intelligence can be distributed throughout the organization, the less vulnerable it is to the departure of any one person. The larger the organization, the more critical it is to codify more of that intelligence in processes, culture, rituals, and habits. 

Constants in language, lifetimes

A study in the journal Language finds that even though different languages sound like they run at different speeds, the average information conveyed by each over a constant period of time is more or less equivalent.

I wonder if this constant is a result of the transmission limits of the speaker or of the processing capabilities of the listener. Or both?
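The study's finding amounts to a trade-off: information rate ≈ (information per syllable) × (syllables per second), with languages that pack less into each syllable compensating by speaking faster. A sketch of that trade-off, where the density/rate pairs are illustrative values I'm assuming rather than figures from the paper:

```python
# Sketch of the information-rate trade-off across languages.
# (info per syllable, syllables per second) pairs are assumed
# illustrative values, not data from the Language study.
languages = {
    "dense, slow-spoken":  (0.95, 5.2),
    "sparse, fast-spoken": (0.60, 8.0),
}

for label, (density, syllable_rate) in languages.items():
    # Information rate is the product of density and speed.
    print(f"{label}: information rate ~ {density * syllable_rate:.2f}")
```

Despite very different syllable rates, the products land within a few percent of each other, which is the shape of the result the study reports.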

This finding reminded me of the odd fact that the average lifespans of amphibians, birds, fish, mammals, reptiles, and humans all cluster around one constant: the total number of heartbeats in a lifetime. That is, while all those animals live different life spans in terms of years, all average about 1 billion heartbeats. Animals that live fewer years, on average, tend to have really high average heart rates, while animals that tend to outlive humans have slower heart rates. The mass of the animal seems to play a role: in the animal kingdom, larger species tend to have slower pulse rates and longer life spans.
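A rough sanity check of the billion-heartbeat figure: lifetime beats are just heart rate times minutes lived. The heart rates and lifespans below are ballpark numbers I'm assuming for illustration, not measurements from any particular study:

```python
# Rough check of the "~1 billion heartbeats per lifetime" constant.
# The (bpm, years) pairs are assumed ballpark figures for illustration.
MINUTES_PER_YEAR = 60 * 24 * 365

def lifetime_beats(bpm, years):
    """Total heartbeats over a lifetime at a given average rate."""
    return bpm * MINUTES_PER_YEAR * years

for animal, bpm, years in [("mouse", 500, 3),
                           ("cat", 150, 15),
                           ("elephant", 30, 65)]:
    print(f"{animal}: ~{lifetime_beats(bpm, years) / 1e9:.1f} billion beats")
```

Even with crude inputs, fast-heart/short-life and slow-heart/long-life animals land within the same order of magnitude, around a billion beats, which is what makes the clustering so striking.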

While there isn't complete consensus around why this is, one oft-cited explanation is Kleiber's law. The theory is that the internal networks needed to distribute nutrients across an animal's structure achieve certain economies of scale. Mathematical models have found the same scaling efficiency as has been measured in the animal kingdom.

Those interested in the topic should definitely read this article.