The Logic of Nuclear Exchange and the Refinement of American Deterrence Strategy

The most spectacular event of the past half century is one that did not occur. We have enjoyed sixty years without nuclear weapons exploded in anger.

What a stunning achievement—or, if not achievement, what stunning good fortune. In 1960 the British novelist C. P. Snow said on the front page of the New York Times that unless the nuclear powers drastically reduced their nuclear armaments thermonuclear warfare within the decade was a “mathematical certainty.” Nobody appeared to think Snow’s statement extravagant.

We now have that mathematical certainty compounded more than four times, and no nuclear war.

- Thomas Schelling, 2005 Nobel Prize in Economics Lecture

When Robert McNamara became Secretary of Defense in 1961, he brought to the Pentagon a group of aides who came to be known as the Whiz Kids. They were young, book-smart men eager to apply the latest in systems analysis, game theory, and operations research to military strategy.

It did not take them long to alienate senior officers. Once, to settle a particularly heated argument about nuclear plans, a 29-year-old Whiz Kid declared: “General, I have fought just as many nuclear wars as you have.”

The flip remark belies a fact that deserves great wonder: the world has now gone seven decades without nuclear destruction. The thermonuclear war that was once feared above all else, and regarded as a mathematical certainty, has not come to pass.

It’s also a startling display of the role that a group of civilians played in defining U.S. nuclear strategy. After a first draft by the military, American strategic objectives were subject to continuous refinement, much of it by civilian theorists, most of whom came from the RAND Corporation, and few of whom had seen war. One of the earliest nuclear intellectuals at RAND started out as a naval strategist; when he produced his most important work on naval strategy, he had never seen the ocean, let alone set foot on a ship. In seminar rooms, these strategists pondered the novel challenges of the nuclear world, working out ideas that concerned not the efficient application of force but the exploitation of potential force.

This essay is a short introduction to how nuclear weapons are created and deployed, and the ideas that strategists, policymakers, and the military implemented to reduce the risk of nuclear war.

(Published in prettier formatting on Medium.)

What Are Nuclear Weapons?


Thirty years after the bombing of Hiroshima, the world had produced enough nuclear weapons to yield the equivalent of about 3 tons of TNT for every man, woman, and child on earth. Here’s some context to put that figure into perspective.

Nuclear Explosions

What happens in a nuclear explosion? First, a huge blast drives air away, producing high winds and changes in air pressure that crush objects. Then comes radiation: direct radiation causes fatal illness within a few weeks, while thermal radiation causes first-degree burns miles away. Fires immediately follow; a strong blast can generate either a firestorm, which destroys everything in a concentrated area, or a conflagration, which is less intense but spreads along a front. Then there’s fallout: particles scooped up from the ground are irradiated by the explosion and spread according to wind conditions. Finally, at a sufficiently high altitude, a blast can produce electrons that interact with the earth’s magnetic field, setting off an electromagnetic pulse that can destroy electronics and electrical equipment.

The world has set off over 2,400 nuclear explosions, nearly all of them by America or the Soviet Union, and most of them underground. America tested most of its weapons in Nevada and New Mexico or on islands in the Pacific; the Soviet Union conducted most of its tests in Kazakhstan or on archipelagos in the Arctic Ocean.

Nuclear detonations have been set off underground, underwater, and in the atmosphere. Their effects on the earth have usually been minor, but sometimes permanent. As a dramatic example, America’s first hydrogen bomb, “Ivy Mike,” completely obliterated the small Pacific island on which it was tested.

The effects of nuclear explosions have always provoked anxiety. Before the first nuclear test in New Mexico, Enrico Fermi rounded up his fellow scientists to place a grim bet. Some of them speculated that an atomic bomb would ignite the atmosphere, and Fermi offered wagers on whether the Trinity test would do so, and if it did, whether it would destroy merely New Mexico or the planet as a whole. More recently, Carl Sagan wrote that instead of igniting the atmosphere, nuclear weapons might cool the world enough to produce a nuclear winter.

The effects of nuclear tests have not always been well controlled. Shortly after the Ivy Mike test, America detonated the most powerful thermonuclear bomb it would ever construct. “Castle Bravo” was expected to yield a blast of five or six megatons but instead produced one of 15 megatons. The blast carried fallout to the inhabitants of the Marshall Islands, some of whom ate the radioactive powder they believed to be snow. Hundreds were overexposed to radiation, and the crew of a nearby Japanese fishing boat suffered radiation poisoning. Fallout from the blast eventually spread 7,000 miles, reaching India, the United States, and Europe.

The Mechanics of Atomic and Hydrogen Bombs

There are two types of nuclear bombs. The atomic bomb creates temperatures equal to those on the surface of the sun; the much more powerful hydrogen bomb brings the equivalent of a small piece of the sun to earth.

The basic nuclear weapon is the atomic bomb, otherwise known as the fission bomb. Atomic bombs typically have yields measured in the thousands of tons of TNT, or kilotons. Their explosive force is generated by fission: a neutron enters the nucleus of an atom of nuclear material (either enriched uranium or plutonium) and splits it, releasing a large amount of energy along with a few more neutrons. In the presence of a critical mass, those neutrons go on to create a chain reaction. There are two bomb designs for initiating fission: the gun assembly technique, which brings together two subcritical masses to form a critical mass, and the implosion technique, which compresses a single subcritical mass to a critical density.
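To make the idea of criticality concrete, here is a minimal sketch in Python of the geometric growth at the heart of a chain reaction. It’s a toy illustration, not a physical model; the multiplication factor k and the generation count are the only inputs, and the 10-nanosecond generation time is a standard order-of-magnitude figure.

```python
# Toy illustration of a fission chain reaction -- not a weapons model.
# k is the effective multiplication factor: the average number of neutrons
# from one fission that go on to cause another fission.
# k < 1: subcritical (the reaction dies out); k > 1: supercritical (it grows).

def neutron_population(k: float, generations: int, n0: int = 1) -> float:
    """Neutron count after a given number of fission generations."""
    return n0 * k ** generations

for k in (0.9, 1.0, 2.0):
    print(f"k = {k}: after 80 generations, "
          f"{neutron_population(k, 80):.3g} neutrons")

# With k = 2, eighty generations yield ~1.2e24 neutrons -- on the order of
# the number of atoms in a kilogram of uranium. Each generation takes about
# 10 nanoseconds, which is why the energy release is so sudden.
```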

On August 6th, 1945, the U.S. Army Air Forces dropped the atomic bomb known as “Little Boy” on Hiroshima. Little Boy was a gun-type bomb with a core of 60 kilograms of uranium-235. Only about 700 grams of it fissioned (a little over 1%), generating a blast of 12.5 kilotons; about 60,000 to 80,000 people were killed by the blast, while up to twice that number were killed by burns and radiation. Three days later, the U.S. dropped an atomic bomb on Nagasaki. The Nagasaki bomb, known as “Fat Man,” was an implosion-style bomb carrying 8 kilograms of plutonium-239. About 10% of the material fissioned, producing a yield of about 22 kilotons and instantly killing about 40,000 people. The complete fission of its plutonium would have produced an explosion 10 times as large.
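As a rough check on those figures, here’s a back-of-the-envelope sketch. The one assumption is the standard rule of thumb that complete fission of a kilogram of uranium-235 releases roughly 17 kilotons of TNT equivalent; the core and fissioned masses are the ones given above.

```python
# Back-of-the-envelope check on Little Boy's yield and efficiency,
# assuming ~17 kilotons of TNT equivalent per kilogram of U-235 fissioned.
KT_PER_KG_U235 = 17.0   # approximate; sources quote roughly 17-18 kt/kg

core_kg = 60.0          # Little Boy's uranium core, as given above
fissioned_kg = 0.7      # the ~700 grams that actually fissioned

yield_kt = fissioned_kg * KT_PER_KG_U235
efficiency = fissioned_kg / core_kg

print(f"Estimated yield: {yield_kt:.1f} kt")   # ~11.9 kt, near the observed 12.5 kt
print(f"Efficiency: {efficiency:.1%}")         # ~1.2% of the core fissioned
```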

The more sophisticated and far more destructive kind of nuclear weapon is the hydrogen bomb, otherwise known as the thermonuclear bomb, the fusion bomb, or the H-bomb. In a hydrogen bomb, heavy isotopes of hydrogen are fused together to form helium, a reaction that releases far more energy than the chain reaction of a fission bomb. Hydrogen bombs are also far more difficult to construct than atomic bombs; nine countries possess nuclear weapons, but only five have definitely developed hydrogen bombs. A successful detonation requires the explosion of a fission bomb (the “primary”) to ignite the fusion stage (the “secondary”). The difficulty is that the primary can blow the device apart before the secondary ignites, producing a fraction of the intended yield in an event referred to as a “fizzle.”

Hydrogen bombs are hundreds or thousands of times more powerful than atomic bombs. The first hydrogen device, too unwieldy to be used as a weapon, was detonated by the United States in November 1952. America did not detonate a true hydrogen weapon until March 1954. That bomb, Castle Bravo, produced the most powerful nuclear explosion America would ever generate; at 15 megatons, it was roughly 700 times more powerful than the blast at Nagasaki. The Soviet Union detonated its first hydrogen bomb in November 1955. In 1961, it went on to detonate the largest nuclear weapon ever: the Tsar Bomba, with a yield of over 50 megatons, or more than 2,000 Nagasakis.

The value of weapons of these sizes is not immediately obvious. A 1-megaton bomb would kill most people within a radius of several miles, and a 10-megaton bomb would destroy the largest of cities.
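Part of the reason such enormous yields add less than you might think is that destructiveness grows much more slowly than yield. A standard rule of thumb holds that blast-damage radii scale roughly with the cube root of yield, so a thousandfold increase in yield enlarges the radius of severe destruction only about tenfold. Here’s a sketch; the reference radius for a Hiroshima-sized blast is an assumption chosen for illustration.

```python
# Cube-root scaling of blast damage -- a standard approximation, calibrated
# loosely here to a ~1.6 km severe-damage radius for a ~12.5 kt blast.
REF_YIELD_KT = 12.5
REF_RADIUS_KM = 1.6

def damage_radius_km(yield_kt: float) -> float:
    """Approximate severe blast-damage radius via cube-root scaling."""
    return REF_RADIUS_KM * (yield_kt / REF_YIELD_KT) ** (1 / 3)

for kt in (12.5, 1_000, 10_000, 50_000):  # Hiroshima, 1 MT, 10 MT, Tsar Bomba
    print(f"{kt:>8,} kt -> ~{damage_radius_km(kt):.1f} km radius")

# A 4,000-fold jump in yield (12.5 kt to 50 MT) widens the radius only ~16x.
```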

How Are Nuclear Weapons Delivered?

There are two types of nuclear deployments. Strategic weapons are launched against homelands, while tactical weapons are used on battlefields.

Strategic nuclear weapons are typically delivered in one of three ways. First, they may be dropped from bombers, either as free-fall gravity bombs or as air-launched cruise missiles (ALCMs). Second, they may be deployed on intercontinental ballistic missiles (ICBMs), which are launched from underground silos and are capable of reaching any target on earth. Finally, submarine-launched ballistic missiles (SLBMs) are carried by submarines, which can lie hidden at sea for months at a time. The majority of warheads are deployed on ballistic missiles, while a few hundred are located at bomber bases.

There has been a greater variety of tactical nuclear weapons, though most are no longer deployed. They were once a regular part of arsenals, in forms including torpedoes, mines, artillery shells, and rockets. A young Colin Powell was an officer stationed in West Germany in 1958, tasked with guarding against a Soviet invasion; if the enemy came over, he was to fire 280 mm atomic cannons, whose artillery shells had yields of 15 kilotons (or about the explosive force of Hiroshima). These tactical weapons were never actually put to use.

Stockpiles

The global nuclear stockpile peaked at 70,000 weapons in 1986. Most have been owned either by the Americans or the Soviets.

Both countries have vastly reduced their arsenals. Over the last 25 years, America has cut its stockpile from about 23,000 weapons to approximately 7,000 today. Meanwhile, Russia has brought its stockpile down to around 8,000 weapons, from a peak of 30,000 inherited from the Soviet Union.

There are six other countries with confirmed nuclear weapons: France, China, the U.K., Pakistan, India, and North Korea. Israel is widely believed, but not officially confirmed, to have nuclear weapons as well. Of all nine countries, five are confirmed to have hydrogen bombs: the U.S., Russia, France, China, and the U.K. India has claimed to have detonated a hydrogen bomb, but scientists debate whether it was a true two-stage thermonuclear device.

The vast majority of nuclear weapons are, and have always been, operated by the U.S. and the Soviet Union (now Russia); the stockpiles of other countries are minuscule in comparison. France currently has the next largest stockpile, at around 300 weapons, while North Korea has fewer than 10. Motivations for acquiring the bomb have varied for every country. China, for example, sought not to depend too heavily on protection from the Soviet Union, just as Britain decided that it wanted warheads not controlled by America. France was motivated by a similar concern not to depend too much on the United States, but it also developed nuclear weapons because it craved status: Charles de Gaulle believed that the bomb would “place France where she belonged, among the Great Powers.”

American Nuclear Strategy


As America demobilized after the Second World War, Eisenhower came to believe that nuclear weapons were a cheap substitute for maintaining a large army to deter Soviet aggression. With his Secretary of State, John Foster Dulles, he defined a policy called the “New Look,” which relied on nuclear forces, as opposed to conventional forces, to deter aggression. The United States would be “willing and able to respond vigorously at places and with means of its own choosing.”

What did that mean in practice? At the discretion of the president, the entirety of the American nuclear stockpile would be delivered to enemy targets, both military and civilian. It was a first-strike policy: the enemy faced vast destruction if the United States determined that it had crossed a line. Eisenhower and his staff considered it the ultimate deterrent.

It also attracted immediate skepticism from strategists, who considered the policy reckless and crude. First, it seemed practically an invitation for the Soviets to strike America first: before any major action, Soviet forces would do well to eliminate the American means of response. Second, Eisenhower drew no bright line whose crossing would incur nuclear attack. Was America ready to initiate a nuclear exchange, and guarantee the deaths of millions, in order to prevent a small country from turning Communist? What about Soviet meddling in the internal affairs of an allied country? In other words, the commitment to initiate an exchange was insufficiently credible.

Strategists who made it their living to think about nuclear exchange attempted improvements. Many of them were analysts at the RAND Corporation, a research institute set up by the Air Force to improve engineering and ponder novel scenarios of the modern world. These analysts tried to create options between official U.S. displeasure and full-scale thermonuclear exchange.

The rest of this essay is about the ideas they developed to reduce the likelihood of mutual destruction. It gives a broad overview of the evolution of American strategic thinking, which started from massive deterrence, moved through the rejection of elaborate methods of defense, and ended up relying once again on a robust system of deterrence.

Counterforce

William Kaufmann was a RAND analyst and political science professor who tried to create opportunities to wage limited war with weapons of unlimited power. He developed and championed a strategy that came to be known as “counterforce.”

There are two types of targets: military, which includes airbases, command stations, barracks, etc.; and civilian, which means cities and industrial sites. Early nuclear plans made no distinction between them. When authorized by the president, the stockpile would be launched against every target deemed to be valuable.

Kaufmann developed a different strategy: in case of conflict, not every warhead would be launched, and those that were launched would strike only military targets. The goal was to wipe out the enemy’s military capabilities while warheads held in reserve threatened enemy cities. In the ideal case, after suffering a (reduced) retaliatory strike, the United States would have eliminated Soviet military capabilities and could hold Soviet cities hostage to bargain for surrender.

What were the virtues of counterforce, as opposed to the cities-also “countervalue” strategy?

First, civilians would be spared the brunt of the force. In a full-scale nuclear exchange, defense scenarios anticipated hundreds of millions of Soviet and American deaths, no matter who launched first; a successful counterforce attack was projected to save over 100 million lives. A counterforce strike also gave the retaliating side an incentive to target only military sites in turn. Moreover, from a strategic standpoint, it created a chance for nuclear war to be limited: counterforce offered the enemy an opportunity to recognize defeat early and surrender with its population intact.

The Pentagon warmed to counterforce. By 1962, Secretary McNamara publicly declared counterforce to be official U.S. policy, and encouraged the Soviets to adopt it as well.

It also had its skeptics, who argued that there was no guarantee it would work as planned. When the enemy detected ICBMs, SLBMs, and strategic bombers racing toward its territory, it had no way to determine that it was subject to a “mere” counterforce strike. It was not clear that counterforce could really stave off escalation; perhaps the simplicity of massive deterrence was still the best strategy after all.

Curtis LeMay, head of the Strategic Air Command (SAC), thought counterforce meant going soft on the enemy; Thomas Schelling, who worked at RAND and consulted for the Pentagon, never fully embraced it; and even McNamara ended up skeptical of its usefulness. As a result, American nuclear strategy see-sawed between counterforce and massive deterrence; counterforce would be integrated into nuclear plans, then quickly stripped away, only to be reintroduced years later.

Conventional War

In addition to counterforce, Kaufmann advocated another way to keep war limited: building up conventional military forces.

This was precisely the strategy Eisenhower had rejected. The Soviets were far superior in troops and tanks, enough to overrun Europe. Instead of trying to match their forces, Eisenhower wanted to rely on the massively destructive and easily deployable nuclear bomb to stave off attack, or to deter aggression in the first place.

But massive deterrence was risky. The enemy would probe gray areas to test which actions were permissible; in each instance the American president had to decide whether to permit the action and lose face, or to launch the warheads, risking national suicide while guaranteeing the deaths of millions.

Kaufmann thought it reckless to use nuclear weapons at all, save in the gravest of circumstances. He observed that America’s most successful foreign actions had been carried out without nuclear weapons (as with the Berlin Airlift and the intervention in Korea), and he believed their use could continue to be spared.

But that would require the United States to invest in different means of response. He suggested building up conventional forces, which meant significant ground forces to beat back a Soviet invasion of Europe and smaller-scale teams that could be rapidly deployed to “hot spots.”

In the logic of deterrence, an investment in conventional warfare was a signal that nuclear arms were too dangerous to use. Building up conventional forces was advocated not only by Kaufmann but also by important figures like Bernard Brodie and Herman Kahn, two of the earliest nuclear strategists. The growth of conventional forces under the Kennedy Administration was an acknowledgment to the Soviets that conflicts could be met without compelling the use of nuclear arms.

Schelling, in his Nobel Prize lecture, considered conventional forces a form of arms control, as if both sides had signed a treaty not to engage in nuclear exchange: “The investment in restraints on the use of nuclear weapons was real as well as symbolic.” With more options available, going nuclear was pushed even further back as the path of last resort.

SIOP: Single Integrated Operational Plan

Until the end of the Eisenhower Administration, nuclear target planning was delegated to senior military commanders. No single group or person oversaw the selection of targets or organized the deployment of the nuclear force.

Take a second to imagine what that meant. The president had only the binary decision to strike or not to strike. If he decided to strike, it was up to the different services, each with its own stockpile, to deploy the weapons. The Air Force, the Navy, and the Army made their own war plans. Multiple, redundant warheads would be delivered to a target if it was selected by more than one branch. For lack of coordination, an attacking force might be wiped out by the detonation caused by another American strike. Everyone launched at their own pace; the Navy was found to have been planning strikes fifteen days after the start of war.

This was the state of American nuclear plans for over a decade. The Joint Chiefs of Staff resisted the idea of a single branch (most likely the Air Force) controlling all warheads. The Navy was loath to give up its prized nuclear-armed Polaris submarines, and regarded every move to centralize as a plot by the Air Force to monopolize nuclear weapons.

It was only towards the end of the Eisenhower Administration that military objections were overruled. In 1960, Eisenhower authorized the creation of the Single Integrated Operational Plan, or SIOP (pronounced SEYE-OP), to coordinate a contingency plan of nuclear strikes. Only then did the United States integrate target selection into a national plan led by a single organization: SAC.

SIOP went through different iterations. Some of them integrated the doctrine of counterforce, giving the president different options for launching strikes.

Still, none of this eliminated concerns about overkill. SAC made extremely pessimistic assumptions about the probability of a successful strike. It planned to lay down four thermonuclear warheads with a combined power of 7.8 megatons on a Russian city the size of Hiroshima; the successful detonation of all of them would generate an explosive force more than 600 times that of the 12.5-kiloton bomb which wiped out the Japanese city. Nor did the plan consider fallout damage, since fallout generates little military value. These assumptions gave SAC grounds to constantly demand more bombs and bombers.

Most iterations of SIOP continued to emphasize the launch of nearly the entire stockpile. Plan 1-A would have involved launching over 3,000 nuclear weapons, projected to kill nearly 300 million people, mostly in Russia and China. SIOP also targeted countries like Albania, where the presence of a single large air-defense radar was enough to justify a strike by a megaton bomb; no consideration was given to the political fact that the country had been drifting away from the Soviet bloc.

SIOP was refined by different administrations and different secretaries of defense, but it always suffered from two flaws: massive overkill and relative inflexibility in the severity of response. Reading the SIOP left presidents and generals “appalled” and “stunned”; Henry Kissinger referred to it as a “horror strategy.”


Herman Kahn and the Bomb

Herman Kahn was the kind of eccentric you no longer see in public life. As an analyst at the RAND Corporation, he made a vast effort to get the public to consider his ideas about what comes after a thermonuclear exchange.

The thought of nuclear war was mostly too grim to behold, and Kahn acknowledged that head-on by writing a book called Thinking About the Unthinkable. With morbid humor he challenged people to think about deterrence strategies, mineshaft shelters, and the hydrogen bomb. He loved debate, and he reached out to the public with a “twelve-hour lecture, split into three parts over two days, with no text but with plenty of charts and slides.”

Kahn was the main inspiration for Dr. Strangelove. He was reputed to have the highest I.Q. ever recorded, and he made real contributions in shaping U.S. nuclear strategy through his commentary and analysis. He was accused by his colleagues and by the public of treating the annihilation of millions with far more levity than the subject deserved. Some of the things he said were genuinely shocking, but it’s a shame that we no longer see brilliant oddballs like him in public, where we could listen to their ideas and then debate them.

Here are a few interesting facts about him, from two sources. First, Fred Kaplan’s book Wizards of Armageddon, which features him for a chapter:

Brodie and Kaufmann approached the business of first-use and counterforce strikes uneasily, as acts of desperation among a terrible set of choices. Kahn, on the other hand, dived in eagerly.

At one point, Kahn had calculations on bomb designs plugged into all twelve high-speed computers then operating in the United States.

Calculations suggested that even with a purely countermilitary attack, two million people would die, a horrifyingly high number… (But) as Kahn phrased it, only two million people would die. Alluding almost casually to “only” two million dead was part of the image that Kahn was fashioning for himself, the living portrait of the ultimate defense intellectual, cool and fearless… Kahn’s specialty was to express the RAND conventional wisdom in the most provocative and outrageous fashion possible.

Along with an engineer at RAND, Kahn figured out on paper that such a Doomsday Machine was technologically feasible.

In the early-to-mid 1960s, Kahn would work out an elaborate theory of “escalation,” conceiving 44 “rungs of escalation” from “Ostensible Crisis” to “Spasm of Insensate War,” with the rungs in between including “Harassing Acts of Violence,” “Barely Nuclear War,” “Justifiable Counterforce Attacks” and “Slow-Motion Countercity War.”

Kahn felt that having a good civil-defense system made the act of going to the nuclear brink an altogether salutary thing to do on occasion.

More than 5,000 people heard (his lectures) before Kahn finally compiled them into a 652-page tome called On Thermonuclear War. It was a massive, sweeping, disorganized volume, presented as if a giant vacuum cleaner had swept through the corridors of RAND, sucking up every idea, concept, metaphor, and calculation that anyone in the strategic community had conjured up over the previous decade. The book’s title was an allusion to Clausewitz’s On War… Published in 1960 by the Princeton University Press, it sold an astonishing 30,000 copies in hardcover, quickly became known simply as OTW among defense intellectuals, and touched off fierce controversy among nearly everyone who bore through it.

Strangelove, the character and the film, struck insiders as a parody of Herman Kahn, some of the dialogue virtually lifted from the pages of On Thermonuclear War. But the film was also a satire of the whole language and intellectual culture of the strategic intellectual set. Kahn’s main purpose in writing OTW was “to create a vocabulary” so that strategic issues can be “comfortably and easily” discussed, a vocabulary that reduces the emotions surrounding nuclear war to the dispassionate cool of scientific thought. To the extent that many people today talk about nuclear war in such a nonchalant, would-be scientific manner, their language is rooted in the work of Herman Kahn.

And from Louis Menand, writing in the New Yorker:

In his day, Kahn was the subject of many magazine stories, and most of them found it important to mention his girth—he was built, one journalist recorded, “like a prize-winning pear”—and his volubility.

He became involved in the development of the hydrogen bomb, and commuted to the Livermore Laboratory, near Berkeley, where he worked with Edward Teller, John von Neumann, and Hans Bethe. He also entered the circle of Albert Wohlstetter, a mathematician who had produced an influential critique of nuclear preparedness, and who was the most mandarin of the RAND intellectuals. And he became obsessed with the riddles of deterrence.

For many readers, this has seemed pathologically insensitive. But these readers are missing Kahn’s point. His point is that unless Americans really do believe that nuclear war is survivable, and survivable under conditions that, although hardly desirable, are acceptable and manageable, then deterrence has no meaning. You can’t advertise your readiness to initiate a nuclear exchange if you are unwilling to accept the consequences. If the enemy believes that you will not tolerate the deaths of, say, twenty million of your own citizens, then he has called your bluff. It’s the difference between saying, “You get one scratch on that car and I’ll kill you,” and saying, “You get one scratch on that car and you’re grounded for a week.”

Kubrick was steeped in “On Thermonuclear War”; he made his producer read it when they were planning the movie. Kubrick and Kahn met several times to discuss nuclear strategy, and it was from “On Thermonuclear War” that Kubrick got the term “Doomsday Machine.”

Kubrick’s plan to make a comedy about nuclear war didn’t bother Kahn. He thought that humor was a good way to get people thinking about a subject too frightening to contemplate otherwise, and although his colleagues rebuked him for it—“Levity is never legitimate,” Brodie told him—he used jokes in his lectures.

Kahn died, of a massive stroke, in 1983. That was the year a group headed by Carl Sagan released a report warning that the dust and smoke generated by a thermonuclear war would create a “nuclear winter,” blocking light from the sun and wiping out most of life on the planet. Kahn’s friends were confident that he would have had a rebuttal.

How growing and marketing apples turned industrial

It’s apple season. A few years ago John Seabrook wrote a great history of the growing, distribution, and marketing of apples. It features the new breed SweeTango, a nearly industrial product that enjoys many IP protections and a sophisticated multinational distribution strategy. The piece features lots of reporting and plenty of Red Delicious bashing, but the best part is about how new breeds are discovered. Here are some excerpts.

The history of apples in America…

Malus pumila, of the family Rosaceae and the tribe Pyreae, was domesticated some four thousand years ago, in the fruit forests of what is now southeastern Kazakhstan, near the city of Almaty. Frank Browning, the author of “Apples,” reports seeing apple trees growing up through cracks in the pavement there. The wild horses of the nearby steppe liked to eat apples, and could cover long distances, carrying the seeds in their guts. Apples travelled westward along trade routes, and show up in Persia around the time of Alexander the Great, and in Europe not long after; the Romans cultivated them widely. (The apple in the Garden of Eden was most likely a pomegranate, or possibly an orange.) The species came to the New World with the first European settlers, in the form of seeds, and the pioneers, as they pushed westward, took apples with them.

As the industry moved away from cider-making and toward table fruit, some of these apples were named, propagated by cloning—the method of grafting a piece of one tree onto the trunk of another, which produces fruit that is an exact genetic copy of the first tree’s—and promoted like pop stars. The Northeast had Jonathan, Esopus Spitzenburg, and Blue Pearmain (Thoreau’s favorite); the South claimed Winesap, Sally Gray, and Disharoon; the Midwest boasted Hawkeye and Detroit Red; and from the West came the Gravenstein and the Yellow Newtown Pippin. Their flavors were shaped by their respective climates—the shorter the growing season the tarter the apples tended to be.

In the twenties and thirties, refrigerated railcars allowed growers to transport apples over great distances, and, thanks to cold-storage warehouses, wholesalers and retailers could keep them for long periods of time. As regional markets gave way to supermarket chains, the number of available apple varieties shrank, and those which endured shed their regional associations. By the nineteen-sixties, most supermarkets carried three types of apple: McIntosh, a small, tart apple that John McIntosh had found growing on his farm in Ontario, Canada, in 1811; Red Delicious, originally the Hawkeye, a sweet apple discovered on a farm in Iowa in the eighteen-seventies; and Golden Delicious, found in a hay field in West Virginia in the eighteen-nineties. Apple breeders tweaked these apples, to enhance their industrial potential—they had to be durable, long-lasting, and attractive—generally at the expense of texture and taste (unlike many fruits, apples can look wonderful and taste terrible, and so they lend themselves to horticultural sleight of hand).

In the United States, apple production happens mainly in the shoulders of the nation—Washington State is the largest producer, and New York is the next largest. Not surprisingly, each of those states has a breeding program, at Washington State University and at Cornell. Minnesota is twenty-third among the twenty-nine apple-growing states, in volume of production; up through the eighteen-fifties almost no apples grew there, because it was too cold. Its breeding program was born not of abundance but of necessity. “I wouldn’t live in Minnesota,” Horace Greeley once said, while visiting the state, “because you can’t grow apples here.” That remark inspired a cantankerous apple breeder named Peter Gideon to prove Greeley wrong with an apple he named Wealthy, after his wife. The success of the Wealthy apple, introduced in 1861 and still grown in heritage orchards around the country, was the inspiration for the university’s apple-breeding program, in 1878, which was followed by the founding of the Minnesota Agricultural Experiment Station, where Bedford works. The station was built with funds authorized by the Hatch Act of 1887, which provided research-and-development money to land-grant universities for the promotion of agriculture.

How the apple market took off…

When Bedford assumed control of the apple-breeding program, in the early eighties, the U.S. apple industry was poised for a profound transformation. Something like the pre-industrial world of apples, where an apple lover had the choice of many varieties, was returning, not through heirlooms but through new breeds of super apples from other countries. Instead of standing mostly for places and people, the new apples would stand for images, sounds, and ideas—Royal Gala, Pink Lady, Jazz. This transformation had begun in 1975, when a Washington grower named Grady Auvil introduced a tart, green, hard-fleshed apple originally from Australia that Maria Ann Smith, a farmer’s wife, had discovered growing on the family’s compost pile in New South Wales, in the eighteen-sixties. The Granny Smith apple was widely propagated in New Zealand, became famous in the United Kingdom in the nineteen-sixties as the logo on the Beatles’ Apple Records, and, on arriving in the United States, expanded the pantheon of supermarket apples to four, demonstrating to apple breeders everywhere that U.S. consumers would respond favorably to a new apple. In the early seventies, President Nixon had imposed price freezes on all foods except fresh produce. Grocery retailers, looking to increase profits, expanded their produce sections. After controls were lifted, they continued to seek out new varieties of fruits and vegetables that could be marketed at a premium.

In the eighties, the Fuji, a large, sweet apple that was originally bred in Japan, was brought to the U.S., and quickly caught on. That decade, Braeburn and Gala apples, both from New Zealand, were also introduced to the U.S., to great acclaim; the Gala is now one of the most popular apples in many parts of the country. To Bedford these successes demonstrated that “if the consumer is given choices, and if they realize, by eating some of these apples, how good an apple can be, then the market can’t keep supplying lousy apples, because the consumer is not going to tolerate that.” The other thing the new apples proved, Bedford added, was that “an apple doesn’t have to look that good. The original Fuji was an ugly apple. It showed that if the flavor was pleasing, the customer could get past the appearance.” Meanwhile, Red Delicious began to decline. Washington produced roughly sixty million bushels in 1995; the state produces a little more than half that much now. In 2002, Congress spent ninety-two million dollars to assist struggling apple growers.

How new breeds are discovered…

Bedford’s apple laboratory, a thirty-acre parcel of rolling land about thirty miles west of Minneapolis, is planted with about twenty thousand apple trees. In May, during blossom time, Bedford and his student assistants make crosses between promising varieties: taking pollen from one variety and swabbing it onto the stamen of another, and then bagging those flowers to keep pollen from other trees out. Although the apple that grows on that branch will be true to the mother tree’s DNA, the seeds will be heterozygous, combining equal and unique parts of both parents’ genes so that every seed is distinct—another thing apple trees and humans have in common. Bedford hopes to get the best characteristics of both parents into the offspring, while producing an apple with an identity all its own. “Some apples look great but don’t pass those traits on,” he told me, “while others are not so great-looking but make good parents.” Each one of the three to five thousand seeds that result from a season of crosses will be unlike all the others and will produce a different tree. Bedford plants the seeds in a greenhouse, and grafts the budding trees onto outdoor rootstock the following summer. In about five years, he will have four thousand or so brand-new apples to taste.

In the fall, during the apple harvest, Bedford tastes apples from blossom times past, up to five hundred apples a day, in the hope of finding that one apple in ten thousand that will be released as a commercial variety. I spent an afternoon with him in early September, walking through long rows of young trees, and tasting apples of every imaginable size, shape, hue, and flavor, from musky melonlike apples to bright lemony apples and apples that tasted like licorice. “We don’t actually swallow, and we don’t really even have time to spit,” Bedford explained. “You just kind of hold a bit in your mouth for a while, until you get the flavor, and then let it fall out.”

If a tree produces exceptionally good apples for several years in a row, it achieves élite status and is awarded a number. Four clones are made from the mother tree’s wood, and those trees are grown in another orchard on the property, under commercial conditions. To evaluate the élite trees, Bedford carries a field notebook with twenty categories on a page, which, in addition to the “organoleptics”—all the sensory stuff, like flavor, texture, and color—include tree size, shape, and yield. He scores each category from one to nine. He generally continues these yearly evaluations for a decade or longer, in order to subject the trees to a representative range of extreme summers and winters and drought and flood, and in the hope of ferreting out all the quirks that apple trees are heir to. Some are wild in their youth but eventually settle down, while others bear fruit every other year; some bear smaller fruits as the trees age, while others drop their apples before they’re ripe.

Finally, a truly outstanding apple is named, the tree is patented, and clones are released to nurseries, where thousands of copies of the trees are made and sold to growers, for which the university collects a royalty of around a dollar per tree during the life of the patent. Large color posters of the five apples released during Bedford’s time at the agricultural station decorate his office, their swollen flesh glistening with beads of moisture, like centerfold pinups in a mechanic’s shop.

As we walked the rows, Bedford carried a can of orange spray paint. If an apple wasn’t reasonably tasty—and only two of the scores of varieties we tasted made the grade—and if he determined the apple to be fully ripe (which he did by cutting it open with a long-bladed knife and spraying iodine on the flesh; the starch in an unripe apple will turn black) then he coldly marked the tree for extermination by spraying orange paint on its trunk. That day, I watched him terminate dozens of unique hybrids whose like the world will never see again, and by the end of the day I had a newfound respect for the breeder as the godlike master of his domain, the ultimate arbiter of life and death in the orchard.


Peter Thiel on what it means to bet on China and Alibaba

Peter Thiel is fond of a rhetorical device: “What seems like X should really be understood as ~X.” An example: “You have to think of companies like Microsoft or Oracle or Hewlett-Packard as fundamentally bets against technology.”

Here’s the trick applied to understanding what it means to invest in Alibaba. Is Alibaba the next great technology company, or is it really something else?

Thiel touched on this in a talk at the American Enterprise Institute with James Glassman.

China thinks of the internet in a way that’s very different from the U.S. They think of information technology as something less important than we do economically, and more important politically.

And so these companies are fundamentally political investments. They’re protected by the government. They won’t face competition from Western companies.

So an investment in Alibaba is fundamentally a bet that Jack Ma will stay in the good graces of the Chinese Communist Party. I suspect that that’s a good bet…

Relatedly, here are his more general thoughts on China, via his essay published (in 2008) for Hoover’s Policy Review. Is a bet on China a bet on successful globalization, or is it really something else?

One intermediate possibility is that the China of 2008 will be like the internet of 1999 — much larger in 2014, but with winners very different from the ones that investors today expect. The largest New Economy business is Google, a company that scarcely registered in early 2000. Might it also turn out that the greatest Chinese companies of 2014 will be concerns that are private and tightly controlled businesses today, rather than the high-profile and money-losing companies that have been floated by the Chinese state?

At the very least, outsiders need to understand that China is controlled for the benefit of insiders. The insiders know when to sell, and so one would expect the businesses that have been made available to the outside world systematically to underperform those ventures still controlled by card-carrying members of the Chinese Communist Party. “China” will underperform China, and a “China” bubble exists to the extent that investors underestimate the degree of this underperformance.

This limitation also may be framed in terms of globalization. In important respects, “China” as a financial economy is sustained by the absence of globalization — in particular, by the enormous amounts of capital trapped within China’s borders that must either suffer slow death from inflation (now running higher than Chinese bank deposit rates) or brave the acute sense of vertigo of the elevated stock market. Because the free convertibility of the renminbi would dampen equity speculation, a long “China” position is not a forecast that financial globalization will succeed, but rather a bet that its internal contradictions will persist.

Michael Nielsen’s rules on editing

Michael Nielsen has a post called “Six Rules for Rewriting.” They’re good rules to keep in mind as you edit, but really they’re good principles that can be applied as you write.

These rules are as appropriate for emails as they are for longer pieces. Here they are:

  1. Every sentence should grab the reader and propel them forward.
  2. Every paragraph should contain a striking idea, originally expressed.
  3. The most significant ideas should be distilled into the most potent sentences possible.
  4. Use the strongest appropriate verb.
  5. Beware of nominalization. (Contrast the wishy-washy “I conducted an investigation of rules for rewriting” with the more direct “I investigated rules for rewriting”.)
  6. None of the above rules should be consciously applied while drafting material.

Once you’ve properly internalized the first five, you won’t need to worry about #6: it’ll all come naturally!

Here’s my own summary, with a little added interpretation, of the rules: Make sure that every sentence says something. A sentence is a success if the reader moves on to the next sentence; a paragraph is a success if the reader moves on to the next paragraph.

Why Is Peter Thiel Pessimistic About Technological Innovation?

We’ve all heard this quote from Peter Thiel: “We wanted flying cars, instead we got 140 characters.” It’s the introduction to his VC firm’s manifesto, “What Happened to the Future?”, and it neatly sums up his argument that we’re economically stagnant and no longer living in a technologically accelerating civilization.

Less well-known is a slightly longer quote from Thiel that also summarizes his views on the technological slowdown. This is from a recent debate with Marc Andreessen:

“You have as much computing power in your iPhone as was available at the time of the Apollo missions. But what is it being used for? It’s being used to throw angry birds at pigs; it’s being used to send pictures of your cat to people halfway around the world; it’s being used to check in as the virtual mayor of a virtual nowhere while you’re riding a subway from the nineteenth century.”

Why is Thiel pessimistic about the future of technology and of economic growth? Here’s a selection of his evidence that we’re no longer technologically accelerating, collected from his writings and public talks.

(Remarks from talks are lightly edited for clarity. Click here to see this article in slightly prettier formatting.)

Energy


Look at the Forbes list of the 92 people who are worth ten billion dollars or more in 2012. Where do they make money? 11 of them made it in technology, and all 11 were in computers. You’ve heard of all of them: It’s Bill Gates, it’s Larry Ellison, Jeff Bezos, Mark Zuckerberg, on and on. There are 25 people who made it in mining natural resources. You probably haven’t heard their names. And these are basically cases of technological failure, because commodities are inelastic goods, and farmers make a fortune when there’s a famine. People will pay way more for food if there’s not enough. 25 people in the last 40 years made their fortunes because of the lack of innovation; 11 people made them because of innovation. (Source: 39:30)

Real oil prices today exceed those of the Carter catastrophe of 1979–80. Nixon’s 1974 call for full energy independence by 1980 has given way to Obama’s 2011 call for one-third oil independence by 2020. (Source)

“Clean tech” has become a euphemism for “energy too expensive to afford,” and in Silicon Valley it has also become an increasingly toxic term for near-certain ways to lose money. (Source)

Warren Buffett is considered to be one of the smartest investors in the world. His single biggest investment is in the railroad industry, which I think is a bet against technological progress, both in transportation and energy. Most of what gets transported on railroads is coal, and Buffett is essentially betting that the 21st century will look more like the 19th century than the 20th. We’ll go back to rail, and back to coal; we’re going to run out of oil, and clean-tech is going to fail. (Source: 10:00.)

There was a famous bet in 1980 between Julian Simon, an economist, and Paul Ehrlich about whether a basket of commodity prices would go down over the next decade. Simon famously won the bet, and this was taken as evidence that we have tremendous technological progress and things are steadily getting better. But if you re-run the Simon-Ehrlich bet on a rolling-decade basis, Paul Ehrlich has been winning it every year since 1994: the basket of goods has been getting more expensive on a decade-by-decade basis. (Source: 8:30)

Transportation


Consider the most literal instance of non-acceleration: We are no longer moving faster. The centuries-long acceleration of travel speeds — from ever-faster sailing ships in the 16th through 18th centuries, to the advent of ever-faster railroads in the 19th century, and ever-faster cars and airplanes in the 20th century — reversed with the decommissioning of the Concorde in 2003, to say nothing of the nightmarish delays caused by strikingly low-tech post-9/11 airport-security systems. (Source)

Biotech


Today’s politicians would find it much harder to persuade a more skeptical public to start a comparably serious war on Alzheimer’s disease — even though nearly a third of America’s 85-year-olds suffer from some form of dementia. (Source)

The cruder measure of U.S. life expectancy continues to rise, but with some deceleration, from 67.1 years for men in 1970 to 71.8 years in 1990 to 75.6 years in 2010. (Source)

We have one-third the number of new drugs approved by the FDA as we had 20 years ago. (Source: 7:35)

Space


The reason that all the rocket scientists went to Wall Street was not only because they got paid more on Wall Street, but also because they were not allowed to build rockets and supersonic planes and so on down the line. (Source: 45:50.)

Space has always been the iconic vision of the future. But a lot has gone wrong over the past couple of decades. Costs escalated rapidly. The Space Shuttle program was oddly Pareto inferior: it cost more, did less, and was more dangerous than a Saturn V rocket. Its recent decommissioning felt like the closing of a frontier. (Source)

Agriculture


The fading of the true Green Revolution — which increased grain yields by 126 percent from 1950 to 1980, but has improved them by only 47 percent in the years since, barely keeping pace with global population growth — has encouraged another, more highly publicized “green revolution” of a more political and less certain character. We may embellish the 2011 Arab Spring as the hopeful by-product of the information age, but we should not downplay the primary role of runaway food prices and of the many desperate people who became more hungry than scared. (Source)

Finance


Think about what happens when someone in Silicon Valley builds a successful company and sells it. What do the founders do with that money? Under indefinite optimism, it unfolds like this:

  • Founder doesn’t know what to do with the money. Gives it to large bank.
  • Bank doesn’t know what to do with the money. Gives it to portfolio of institutional investors in order to diversify.
  • Institutional investors don’t know what to do with money. Give it to portfolio of stocks in order to diversify.
  • Companies are told that they are evaluated on whether they generate money. So they try to generate free cash flows. If and when they do, the money goes back to the investors at the top. And so on.

What’s odd about this dynamic is that, at all stages, no one ever knows what to do with the money. (Source)

10-year bonds are yielding about 2%. The expected inflation over the next decade is 2.6%. So if you invest in bonds then in real terms you’re expecting to lose 0.6% a year for a decade. This shouldn’t be surprising, because there’s no one in the system who has any idea what to do with the money. (Source: 27:35)

Science and Engineering


We have 100 times as many scientists as we did in 1920. If there’s less rapid progress now than in 1920 then the productivity per scientist is perhaps less than 1% of what it was in 1920. (Source: 50:20)

The Empire State Building was built in 15 months in 1932. It’s taken 12 years and counting to rebuild the World Trade Center. (Source: 36:00)

The Golden Gate Bridge was built in three-and-a-half years in the 1930s. It’s taken seven years to build an access road that costs more than the original bridge in real dollars. (Source: 36:10)

When people say that we need more engineers in the U.S., you have to start by acknowledging the fact that almost everybody who went into engineering did very badly in the last few decades with the exception of computer engineers. When I went to Stanford in the 1980s, it was a very bad idea for people to enter into mechanical engineering, chemical engineering, bioengineering, to say nothing of nuclear engineering, petroleum engineering, civil engineering, and aero/astro engineering. (Source: 45:20)

Computers


Even if you look at the computer industry, there are some things that aren’t as healthy as you might think. On a number of measurements, you saw a deceleration in the last decade in the industry. If you look at labor employment: It went up 100% in the 1990s, and up 17% in the years since 2000. (If you ignore the recession, it’s gone up about 38% since 2003.) So it’s slower absolute growth, and much lower percentage growth. (Source: 8:40)

If you measure market capitalizations, Google and Amazon (the two big computer companies created in the late nineties) are worth perhaps two or three times as much as all the computer companies created since the year 2000 combined. Whether you look at it through labor or through capital, there’s been some sort of strange deceleration. (Source: 9:10)

We have a large Computer Rust Belt that nobody likes to talk about: companies like Cisco, Dell, Hewlett-Packard, Oracle, and IBM. I think the pattern is that they will become commodities that no longer innovate. Many companies are on the cusp; Microsoft is probably close to the Computer Rust Belt. Shockingly, the company that’s probably also headed there is Apple. Is the iPhone 5, where you move the phone jack from the top of the phone to the bottom, really something that should make us scream Hallelujah? (Source: 9:40)

The Technologically-Accelerating Civilization


I sort of date the end of rapid technological progress to the late ’60s or early ’70s. At that point something more or less broke in this country, and in the Western world more generally, which has put us into a zone of much slower technological progress. (Source: 39:30)

If you look at 40-year periods: From 1932 to 1972 we saw average incomes in the United States go up by 350% after inflation, so we were making four-and-a-half times as much. And this was comparable to the progress in the forty years before that and so on going back in time. 1972 to 2012: It’s gone up by 22%. (Source: 14:50)

During the last quarter century, the world has seen more asset booms or bubbles than in all previous times put together: Japan; Asia (ex-Japan and ex-China) pre- 1997; the internet; real estate; China since 1997; Web 2.0; emerging markets more generally; private equity; and hedge funds, to name a few. Moreover, the magnitudes of the highs and lows have become greater than ever before: The Asia and Russia crisis, along with the collapse of Long-Term Capital Management, provoked an unprecedented 20-standard-deviation move in financial derivatives in 1998. (Source)

People are starting to expect less progress. Nixon declared the War on Cancer in 1970 and said that we would defeat cancer in 1976 by the bicentennial. Today, 42 years later we are by definition 42 years closer to the goal, but most people think that we’re further than six years away. (Source: 12:10)

How big is the tech industry? Is it enough to save all Western Civilization? Enough to save the United States? Enough to save the State of California? I think that it’s large enough to bail out the government workers’ unions in the city of San Francisco. (Source: 29:00)

The Conclusion


The first step is to understand where we are. We’ve spent 40 years wandering in the desert, and we think that it’s an enchanted forest. If we’re to find a way out of this desert and into the future, the first step is to see that we’ve been in a desert. (Source)


What are hedge funds, and what social functions do they serve?

(Published in prettier formatting on Medium.)

J. Pierpont Morgan died in 1913 with a fortune of about $1.5 billion in today’s dollars. For his sway over Wall Street he was nicknamed “Jupiter,” after the Roman king of the gods.

In 2013, four hedge fund managers each took home over $2 billion in income, with the top manager pocketing $3.5 billion. How did a few asset managers earn more money in a single year than Pierpont Morgan amassed in his whole life?

It’s not always easy to tell. Hedge funds are secretive firms that have long invited suspicion. Their activities provoked no less a figure than Bill Clinton, who bemoaned the undue power of “a bunch of fucking bond traders” whose whims determined the success of his policy programs.

What should you know about the industry? This essay discusses how hedge funds are structured and the role they play in the financial system.

What are hedge funds?

Hedge funds are pooled-investment vehicles that are relatively unconstrained in their methods of generating returns. They can be thought of as small mutual funds that face fewer regulatory burdens and invest in less conventional ways.

The hedge fund industry has about $4 trillion in assets under management, which is significant, but not so large that it can dictate to the rest of Wall Street. For comparison, BlackRock, a single asset management company, manages about $4.3 trillion on its own.

What makes a company a hedge fund?

Hedge funds are legally prohibited from advertising themselves to the public and are allowed to raise funds only from “accredited investors,” who must demonstrate a certain level of net worth or income to qualify.

In exchange for this limitation on raising capital, hedge funds face relatively little regulatory scrutiny, with few restrictions on the assets they can trade and the leverage they can employ.

The very first hedge funds distinguished themselves by employing leverage and short-selling. That means that some of their trades were made with borrowed capital, which magnified their returns; and that instead of buying a stock and waiting for it to rise, they borrowed shares and sold them, betting that the price would fall.
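
To make the mechanics concrete, here is a toy sketch with made-up numbers showing how each technique pays off; it ignores borrowing costs, margin requirements, and fees:

    # Leveraged long: trade with borrowed capital, magnifying returns.
    capital = 100.0    # the fund's own money
    borrowed = 100.0   # money borrowed from a broker (2x leverage)
    position = capital + borrowed
    gain = position * 0.10                                 # stock rises 10%
    print(f"Return on own capital: {gain / capital:.0%}")  # 20%, not 10%
    # (a 10% fall would likewise produce a -20% return: leverage cuts both ways)

    # Short sale: borrow shares, sell them, buy them back after a fall.
    proceeds = 100.0                  # sell borrowed shares for $100
    buyback = proceeds * (1 - 0.10)   # repurchase after a 10% drop
    print(f"Short-sale profit: ${proceeds - buyback:.0f}")  # $10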

These two practices, though, have long stopped being sufficient to distinguish hedge funds from other investment vehicles. Modern hedge funds trade all sorts of securities more exotic than standard stocks and bonds. And beyond long/short strategies, their styles have become more sophisticated by orders of magnitude; these include investing in distressed assets, merger arbitrage, quantitative investing, and much more.

2-and-20: The very high fees of hedge funds

Hedge funds are pioneers in many ways, not least in the very generous compensation scheme they set up for themselves.

Claiming inspiration from the Phoenician merchants who took for themselves a fifth of the profits of a successful sea voyage, the very first hedge fund kept 20% of the profits of a trade, as well as 2% of the total assets under management. That’s terrifically expensive given that passive index funds may charge you something like 0.2% of your assets, with zero extra charge for profits.
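
To see just how expensive, here is a rough comparison on a hypothetical $100 million invested for one year at a 10% gross return; the numbers are illustrative, and real funds add wrinkles like hurdle rates and high-water marks that this ignores:

    # Fees on a hypothetical $100m earning 10% gross in one year.
    assets = 100_000_000
    profit = assets * 0.10                      # $10m gross profit

    # Hedge fund, "2-and-20": 2% of assets plus 20% of profits.
    hedge_fee = 0.02 * assets + 0.20 * profit   # $2m + $2m
    # Passive index fund: roughly 0.2% of assets, nothing on profits.
    index_fee = 0.002 * assets

    print(f"Hedge fund fees: ${hedge_fee:,.0f}")  # $4,000,000
    print(f"Index fund fees: ${index_fee:,.0f}")  # $200,000

In this scenario the hedge fund investor pays twenty times what the index fund investor does, before asking whether the returns justify it.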

This “2-and-20” model is remarkably persistent across hedge funds, so much so that a law professor has argued that instead of as specialized investment vehicles, hedge funds should be understood as “a compensation scheme masquerading as an asset class.”

In addition to high fees, investors in hedge funds must tolerate another cost: hedge funds typically make it difficult to withdraw money on short notice. Investors have to agree not to touch their capital, locking it up for a period after they invest, and sometimes for further periods determined at the manager’s discretion. These contractual restrictions can have dramatic effects for managers and investors alike: when the lock-ups bind, investors may be stuck in a losing position; when they don’t, investors may pull out too early and contribute to the failure of a good trade.

How well do hedge funds perform?

It’s important to note that the term “hedge fund” should not connote “investment firm with market-beating returns,” just as the term “hedge fund manager” does not necessarily mean “asset manager with extraordinary insight.” A hedge fund is mostly a legal classification. Someone with little capital or experience in investing can set up his very own hedge fund: all he needs is a business license. There’s no particular reason to believe that the mere act of incorporation turns a newbie into a skilled investor.

Though there are some very high-performing firms that have generated astonishing returns, hedge funds as a class do not seem to be able to consistently beat the market, especially when fees are accounted for. There are no guarantees that buying into just any hedge fund will earn you very high returns.

Which hedge funds are notable, and who manages them?

One of the first investors who resembled the modern macro trader was the economist John Maynard Keynes. Keynes used leverage and went both long and short on currencies, bonds, and stocks while he managed the endowment for King’s College, Cambridge.


Michael Lewis on the difference between gambling and investing

The line between gambling and investing is artificial and thin. The soundest investment has the defining trait of a bet (you losing all of your money in hopes of making a bit more), and the wildest speculation has the salient characteristic of an investment (you might get your money back with interest). Maybe the best definition of “investing” is “gambling with the odds in your favor.”

- Michael Lewis, The Big Short

Thinking Differently: Tyler Cowen interviews Temple Grandin

Tyler Cowen has conducted excellent interviews with Peter Singer and Ralph Nader. Here’s a very short e-book that’s basically a raw transcript of his conversation with Temple Grandin, the slaughterhouse designer and autism researcher who is herself autistic.

Besides the overview of autism, what really stood out was the contrast between Cowen’s repeated attempts to draw general observations about the neurodiverse and Grandin’s reluctance to venture into the abstract.

Here are some excerpts:

On what autistic people tend to be good and bad at:

Cowen: In academia, where both of us reside, there are a lot of autistics. And there are other places in our economy where autistics are more likely to flourish than others: library science, the appraisal of paintings, work that requires pattern recognition or fine attention to detail.

Grandin: There are two things that autistics tend to be really bad at. And the [first] thing is, high-level jobs require multitasking, having to do two different things at once. The other thing that we’re very bad at is following long strings of verbal instructions. Those seem to be two things that are really quite universal.

Cowen: This notion that the people who do well are the mild cases and the people who don’t do well are the severe cases, I tend not to agree with that.

On autistics and paternalism:

Cowen: Let’s say you want to smoke marijuana – and that affects only you – that’s against the law. I think an autistic person is more likely to be suspicious of paternalism… But is it possible that autistic people are, in some sense, too suspicious of paternalism – that there are examples, maybe, where paternalism would do the world some good, but autistic people, because of their history and, maybe, basic inclination will resist that paternalism because that resistance has become almost ingrained?

Grandin: I have to sell my work and not myself. I can remember early in my career, going to an agricultural engineering meeting and everybody thought I was really, really super weird. And then I whip out a copy of my drawings that I had done, of a cattle-handling facility and they go, “Wow, you drew that?” And as soon as they found that I had drawn that, they started to give me some respect. You know, people respect ability.

Cowen: Maybe ten years ago, I would have thought that over time we’ll tinker with the genes of the human race and this is likely to be a good thing. But my attitude is changing and I fear if we tinker with genes or use selective abortion, that the result will be we’ll get a lot of kids who are easy to raise or, maybe they’re tall and blonde and captain of the football team, but we’ll lose a lot of diversity.

Cowen: As we go back to the Stone Age and ask, why did autism genes ever survive? That’s an unanswered question… I think one possibility is, during times of urbanization, these autistic people had fewer social contacts and maybe they were less prone to pandemics.
