In 1967, Mario Capecchi ventured to Harvard University, determined to study molecular biology under the great James Watson, co-discoverer of the structure of DNA. Not a man to hand out compliments easily, Watson once said Capecchi "accomplished more as a graduate student than most scientists accomplish in a lifetime." Watson had also advised the young Capecchi that he would be "fucking crazy" to pursue his studies anywhere other than in the cutting-edge intellectual atmosphere of Harvard. Still, after a few years, Capecchi had decided that Harvard was not for him. He felt that if he wanted to do great work, to change the world, he had to give himself space to breathe. Harvard, he thought, had become "a bastion of short-term gratification."
In 1980, Mario Capecchi applied for a grant from the U.S. National Institutes of Health, which use government money to fund potentially life-saving research. The sums are huge: The NIH are 20 times bigger than the American Cancer Society. Capecchi described three separate projects. Two of them were solid stuff with a clear track record and a step-by-step account of the project deliverables. Success was almost assured.
The third project was wildly speculative. Capecchi was trying to show that it was possible to make a specific, targeted change to a gene in a mouse's DNA. It is hard to overstate how ambitious this was, especially back in 1980: A mouse's DNA contains as much information as 70 or 80 large encyclopedia volumes. Capecchi wanted to perform the equivalent of finding and changing a single sentence in one of those volumes—but using a procedure performed on a molecular scale. His idea was to produce a sort of "doppelganger gene," a near-duplicate of the gene he wanted to change. He would inject the doppelganger into a mouse's cell and somehow get it to find its partner, kick it out of the DNA strand, and take its place. Success was not only uncertain but highly improbable.
The NIH decided that Capecchi’s plans sounded like science fiction. They downgraded his application and strongly advised him to drop the speculative third project. However, they did agree to fund his application on the basis of the other two solid, results-oriented projects.
What did Capecchi do? He took the NIH’s money and, ignoring their admonitions, poured almost all of it into his risky gene-targeting project. It was, he recalls, a big gamble. If he hadn’t been able to show strong enough initial results in the three-to-five-year time scale demanded by the NIH, they would have cut off his funding. And without their seal of approval, he would have found it difficult to find financial backing elsewhere. No funding would have been a severe setback to his career, forcing research assistants to look for other work, and potentially costing him his laboratory altogether.
In 2007, Mario Capecchi was awarded the Nobel Prize for Medicine for his work on mouse genes. As the NIH’s expert panel had earlier admitted when agreeing to renew his funding: “We are glad you didn’t follow our advice.”
The moral of Capecchi’s story is not that we should admire stubborn geniuses—although we should. It is that we shouldn’t require stubbornness as a quality in our geniuses. How many vital scientific or technological advances have foundered, not because their developers lacked insight, but because they simply didn’t have Mario Capecchi’s extraordinarily defiant character?
But before lambasting the NIH for their lack of imagination, suppose for a moment that you and I sat down with a blank sheet of paper and tried to design a system for doling out huge amounts of public money—taxpayers’ money—to scientific researchers. That’s quite a responsibility. We would want to see a clear project description, an expert opinion on the project, and preliminary research.
We would have just designed the sensible, rational system that tried to stop Mario Capecchi from working on mouse genes.
The NIH’s expert-led, results-based, rational evaluation of projects is a sensible way to produce a steady stream of high-quality, can’t-go-wrong scientific research. But it is exactly the wrong way to fund lottery-ticket projects that offer only a small probability of a revolutionary breakthrough. It is a funding system designed to avoid risks—one that puts more emphasis on forestalling failure than achieving success. Such an attitude to funding is understandable in any organization, especially one funded by taxpayers. But it takes too few risks. It isn’t right to expect a Mario Capecchi to risk his career on a life-saving idea because the rest of us don’t want to take a chance.
Fortunately, the NIH model isn’t the only approach to funding medical research. The Howard Hughes Medical Institute, a large charitable medical research organization set up by the eccentric billionaire, has an “investigator” program that explicitly urges “researchers to take risks, to explore unproven avenues, to embrace the unknown—even if it means uncertainty or the chance of failure.” Indeed, one of the main difficulties in attracting HHMI funding is convincing the institute that the research is sufficiently uncertain.
The HHMI also backs people rather than specific projects, which allows scientists the flexibility to adapt as new information becomes available and pursue whatever avenues of research open up, without having to justify themselves to a panel of experts. It does not demand a detailed research project—it prefers to see the sketch of an idea, alongside a recent example of the applicant's best research. From the sound of it, the funding is handed out with remarkably few strings attached.
The HHMI does ask for results, eventually, but allows much more flexibility around what “results” actually are. This sounds like a great approach when Mario Capecchi is the researcher receiving the funding. But is the HHMI system really superior? Maybe it leads to too many costly failures. Maybe it allows researchers to relax too much, safe in the knowledge that funding is all but assured.
Maybe. But three economists—Pierre Azoulay, Gustavo Manso, and Joshua Graff Zivin—have picked apart the data from the NIH and HHMI programs to provide a rigorous evaluation of how much important science emerges from the two contrasting approaches. Whichever way they sliced the data, Azoulay, Manso, and Zivin found evidence that the more open-ended, risky HHMI grants were funding the most important, unusual, and influential research. HHMI researchers, apparently no better qualified than their NIH-funded peers, were far more influential, producing twice as many highly cited research articles. They were also more original, producing research that introduced new “keywords” into the lexicon of their research field, changing research topics more often, and attracting more citations from outside their narrow field of expertise.
The HHMI researchers also produced more failures; a higher proportion of their research papers were cited by nobody at all. No wonder: The NIH program was designed to avoid failure, while the HHMI program embraced it. And in the quest for truly original research, some failure is inevitable.
Here’s the thing about failure in innovation: It’s a price worth paying. We don’t expect every lottery ticket to pay a prize, but if we want any chance of winning that prize, then we buy a ticket. In the statistical jargon, the pattern of innovative returns is heavily skewed to the upside; that means a lot of small failures and a few gigantic successes. The NIH’s more risk-averse approach misses out on many ideas that matter.
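The skewed pattern described above can be made concrete with a toy simulation. The numbers here are illustrative assumptions, not data from the NIH or HHMI programs: suppose each funded project costs one unit, almost all return nothing, and a rare few pay off hugely. Even with a 99 percent failure rate, the portfolio as a whole comes out well ahead.

```python
import random

random.seed(42)  # make the illustration reproducible

# Toy "lottery-ticket" model of innovation funding. All parameters
# below are hypothetical assumptions chosen for illustration only.
N = 10_000             # number of projects funded
COST = 1.0             # cost per project, in arbitrary units
P_BREAKTHROUGH = 0.01  # assumed chance any one project succeeds
PAYOFF = 500.0         # assumed payoff of a breakthrough

# Each project either fails (payoff 0) or succeeds (large payoff).
payoffs = [PAYOFF if random.random() < P_BREAKTHROUGH else 0.0
           for _ in range(N)]

failures = sum(1 for p in payoffs if p == 0.0)
mean_return = sum(payoffs) / N - COST  # average net return per project

print(f"{failures / N:.0%} of projects returned nothing")
print(f"average net return per project: {mean_return:+.2f}")
```

Under these assumptions, nearly every individual "ticket" is a loss, yet the average net return per project is strongly positive—exactly the upside-skewed distribution that a failure-avoiding funding system forgoes.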
It isn’t difficult to see why a bureaucracy, entrusted with spending billions of taxpayer dollars, is more concerned with minimizing losses than maximizing gains. And the NIH approach does have its place. The Santa Fe complexity theorists Stuart Kauffman and John Holland have shown that the ideal way to discover paths through a shifting landscape of possibilities is to combine baby steps with speculative leaps. The NIH is funding the baby steps. Who is funding the speculative leaps?