The Foresight of Crowds

Accurate predictions are finally within our grasp, as long as we ask the right questions of the right groups.



How do you predict the future? It’s a question as old—and as constantly evolving—as human history. Ancient augurs looked for answers in the flight patterns of birds and, if that didn’t work, their entrails. The Delphic Oracle had hallucinations induced by volcanic gas, and leaders across the Greek world took her gibberish as gospel. Shamans from the Arctic to the Amazon told the fortunes of indigenous tribes who believed that these magical men could see what’s to come, sometimes decades ahead. Some still do.

Even today, in this enlightened age of actuarial tables and algorithms, not much has changed. Whenever CEOs, politicians, and peacemakers want to predict the future, they turn to some sort of guru—although, these days, we prefer to call them “experts.” Researchers such as Philip Tetlock at UC Berkeley have shown that when an expert is asked a yes-or-no prediction question about his or her own field, the answer, on average, is no more accurate than chance. As individuals, you see, we just aren’t that smart. As groups, however, we can be.

I’ve spent most of my career using scenario planning, an essentially democratic tool that lets a group of people imagine a range of plausible stories for the future. You’re not supposed to ask an expert to pick one; you’re supposed to prepare for them all. Yet scenario-planning clients have a tendency to view the professional scenario planner as an augur and oracle. Pioneers of the field—such as Herman Kahn in the 1950s, Pierre Wack in the 1970s, and my colleague and friend Peter Schwartz in the 1990s—were routinely treated as if they had all the answers, even as they reminded us that no one does.

Now, at last, we’re entering the era of prediction tools that explicitly reject the expert in favor of the crowd. Unlike scenario planning, these tools can give us specific, quantifiable answers about the future. I’m talking about information markets, which let everyone in an organization, a government, or even an entire country contribute their nugget of knowledge, and social media analysis, which is not as dull as it sounds. If your question is about dates or numbers (e.g., Are house prices about to rise or fall? Will the Congressional budget pass in the next four months?), these tools boast high success rates. Organizations that can figure out how to ask the right questions are creating a kind of super-smart corporate brain, a neural network that can reveal things even the best market research can’t.

This will sound familiar to anyone who’s read the best-seller The Wisdom of Crowds, which popularized research showing that, tapped in the right way, informed groups are smarter than any solo expert. But the practice of crowdsourcing—how to put that idea into action—has grown by leaps and bounds since James Surowiecki’s seminal work was published in 2004. We didn’t have Twitter back then, for one. And Twitter, it turns out, isn’t just for status updates and celebrity snark. It’s a potential goldmine for trend hunters.

In 2009, Sitaram Asur and Bernardo Huberman, computer scientists at HP Labs, examined nearly 3 million Twitter updates over a period of three months. They were looking for tweets mentioning any of twenty-four movies released in that time, from Avatar to The Spy Next Door, to see whether the number of tweets said anything about each movie’s success. It turned out that the rate at which a movie was being discussed correlated almost exactly with how well it would do at the box office the following weekend; the tweet rate tracked revenue with 98 percent accuracy. And 94 percent of the time, the ratio of fans to haters on Twitter indicated how far a movie would rise or fall the following week.
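
For the technically curious, the core of that kind of analysis is simple enough to sketch. The Python snippet below uses invented tweet counts and box office figures, not the study’s data, and simply asks how strongly each film’s tweet rate (mentions per hour) tracks its opening-weekend revenue. It is a toy illustration of the idea, not a reconstruction of Asur and Huberman’s method.

```python
# A toy version of the tweet-rate idea; the figures below are invented
# for illustration and are not Asur and Huberman's data.

from statistics import correlation  # requires Python 3.10+

# Hypothetical counts: Twitter mentions in the week before release,
# and opening-weekend box office in millions of dollars.
movies = {
    "Film A": {"tweets": 120_000, "box_office": 77.0},
    "Film B": {"tweets": 55_000,  "box_office": 35.0},
    "Film C": {"tweets": 30_000,  "box_office": 20.0},
    "Film D": {"tweets": 9_000,   "box_office": 6.5},
}

HOURS_IN_WEEK = 168

# The predictor is the tweet *rate*: mentions per hour in the run-up week.
rates = [m["tweets"] / HOURS_IN_WEEK for m in movies.values()]
revenues = [m["box_office"] for m in movies.values()]

# How closely does the rate of chatter track the money?
print(f"correlation between tweet rate and revenue: {correlation(rates, revenues):.2f}")
```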

Asur and Huberman’s paper was the first serious study of Twitter’s predictive power. More are under way. Researchers are also uncovering a similar effect with search engines. In 2009, an MIT Sloan School of Management study found that the number of searches entered for “real estate agent” on Google predicted the rise and fall of home prices. Google itself got into the game in 2008, launching Google Flu Trends, which shows the number of searches for flu symptoms by location and now serves as an early-warning system for outbreaks.

If your question about the future has anything to do with the purchasing power of vast numbers of people or with popular sentiment, Twitter or Google analysis may also be a better bet than information markets. Huberman found that his movie tweets were one crucial percentage point more accurate than the Hollywood Stock Exchange (HSX), an online prediction market where users bet on upcoming movie news with play money—one long considered the benchmark for box office prediction.

But if your question has nothing to do with popular sentiment, you’re more likely to find the future in information markets. What do I mean by information markets? There are several kinds, ranging widely in size. At the big, brassy end are the online prediction markets—HSX, Intrade, the Iowa Electronic Markets—each of which boasts tens of thousands of users. Then there are small-scale information markets that can be created within a single company or a self-selecting interest group.

Anyone can participate in the popular online prediction markets. Users select a future event to bet on—say, whether the GOP will retake Congress in November—and then buy shares if they think it’s going to happen, or sell shares (short that stock, in other words) if they think it won’t. They do this as much as they like, putting their money where their virtual mouths are. Often it’s play money, but that doesn’t matter. Studies have shown there’s no discernible difference; as long as it’s called money, it reflects the confidence of the crowd. The resulting share prices are usually fairly reliable indicators of the future, as Barack Obama found out twice in 2008, when Clinton and McCain could rarely touch him on Intrade. Using big markets to answer a nagging specialist question, however, is a little like using a bazooka to shoot a rat. Intrade users are unlikely to know or care how many widgets your company will produce next year, unless your name happens to be Steve Jobs and your widgets are iPhones.
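
For readers who want to see how a pile of bets turns into a price that reads like a probability, here is a toy market in Python. It uses Robin Hanson’s logarithmic market scoring rule, a textbook market-maker mechanism chosen purely for illustration; it is not how Intrade, HSX, or the Iowa markets actually run their books.

```python
# A toy play-money market for a single yes/no question, using the
# logarithmic market scoring rule (LMSR). Chosen only to illustrate how
# bets move a price; real prediction markets use other mechanisms.

import math

class BinaryMarket:
    def __init__(self, liquidity: float = 100.0):
        self.b = liquidity      # higher values make the price move more slowly
        self.q_yes = 0.0        # "yes" shares sold so far
        self.q_no = 0.0         # "no" shares sold so far

    def _cost(self, q_yes: float, q_no: float) -> float:
        # LMSR cost function: C(q) = b * ln(e^(q_yes/b) + e^(q_no/b))
        return self.b * math.log(math.exp(q_yes / self.b) + math.exp(q_no / self.b))

    def price_yes(self) -> float:
        """Current price of a 'yes' share; it reads as the crowd's probability."""
        e_yes = math.exp(self.q_yes / self.b)
        e_no = math.exp(self.q_no / self.b)
        return e_yes / (e_yes + e_no)

    def buy(self, outcome: str, shares: float) -> float:
        """Buy shares of 'yes' or 'no'; returns what the bet costs in play money."""
        before = self._cost(self.q_yes, self.q_no)
        if outcome == "yes":
            self.q_yes += shares
        else:
            self.q_no += shares
        return self._cost(self.q_yes, self.q_no) - before

market = BinaryMarket()
market.buy("yes", 60)   # a believer bets the GOP retakes Congress in November
market.buy("no", 20)    # a skeptic bets against it
print(f"crowd's implied probability of 'yes': {market.price_yes():.0%}")
```

The price settles wherever the weight of the bets pushes it, and that number is the crowd’s confidence.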

A small-scale information market limits the number of participants and the questions they can bet on. And each participant has some insight into the questions, no matter how minor. Think of it as a kind of insider trading. You might ask your company, say, to bet on the likelihood of your product shipping by a certain date or on how many units you’ll actually sell. If you can measure it, you can build a market around it. Best of all, it costs next to nothing to run. These small-scale speculators always use play money. (The smaller the market, though, the more it helps to give participants an incentive at the outset: gift certificates for early bettors, for example, or some kind of special access to the decision-makers.)

An anonymous marketplace gets you the kind of answers the boss never hears face-to-face. It does an end-run around the kind of know-nothing consensus found in board meetings. You might discover, for instance, that the marketing department is generally bullish about your chances of hitting your ship date, but there are a lot of bears in engineering. The bad news is presented in a way that decision-makers can’t take personally. Nobody gets to shoot the messenger.

Information markets are booming, and not just in the business world. Crowdcast, a start-up that sets up information markets, already boasts a client list that includes GM, Electronic Arts, Harvard Business School, and technology and pharmaceutical giants in the United States and Europe. It has opened a market where energy companies and clean-tech VCs can make specific predictions about wind and solar projects and tackle broader questions like, “How much ice will be in the Arctic Ocean next summer?” Not bad for a three-year-old company.

International organizations are starting to get in on the game. The World Bank is interested in using information markets to glean knowledge about infrastructure projects in the world’s poorest countries. The United Nations, meanwhile, would like to see if information markets could help Africa—not necessarily by rolling them out for entire countries, but by using them to glean nuggets of knowledge about policy effects from local bureaucrats. Want to know where aid money is most needed and most wasted? An information market might point in the right direction.

My company, Monitor 360, wanted to see whether this tool could help us learn something vital about two of the most troubling and confounding nations on the planet: Afghanistan and Pakistan. Over the course of three months, my colleague Rita Parhad led an experimental information market that was open to people who had lived in or visited those countries in the past year. After a single month of recruiting, we got a couple hundred participants—enough people to yield some interesting insights. We gave them each $20,000 of play money, which they could use to bet on 15 questions about Afghanistan and Pakistan. There were some small prizes to get them going: Anyone who placed five bets in three days won 25 real dollars.

Most importantly, the questions would all have verifiable answers by the end of the experiment. This was during the Afghan election, so a lot of our questions had to do with poll numbers and the intentions of the candidates. When opposition leader Dr. Abdullah Abdullah dropped out, many experts in the United States predicted that it was a strategic move that he’d later reverse. We asked our information market, and they told us no, this position was permanent. They were right, on this score as on many others.

The real innovation in information markets, like Twitter and Google analyses, is just beginning. To be successful in this field, we have to pay careful attention to soliciting the right crowd, structuring the right questions, figuring out how those questions will connect to decisions that need to be made, and creating the right incentives for answers. The whole process, in short, is in its infancy. Monitor 360, along with everyone else in this space, still has a lot of learning to do.

But it’s not hard to see the potential applications, particularly in areas where confusion currently reigns. The US military is worried about the amorphous threat of cyber attacks; an information market drawn from IT forums might help us understand it better. Companies could directly involve their customers, and products could fight it out in the marketplace before they ever hit the shelves. The marriage of information markets to microfinance could be a very powerful one, offering investors a great deal more confidence in their bets.

Hopefully, the 21st century will see the end of the millennia-long monopoly of individual experts precipitating questionable decisions. Despite the uncertainty, ambiguity, and pace of change, these new tools bring us closer to seeing the real future than at any time in history. All we need to do is draw in the widest range of opinions available, down to the voices of the smallest stakeholders, and guide them gently in the direction of prediction. No bird entrails or volcanic vapors required.

Doug Randall is a writer and lecturer. He is also managing partner of Monitor 360, a part of the Monitor Group. www.monitor-360.com
