Design researchers deal with a ton of participant data, most of which is very personal, some of which is very private. At frog, we mostly work for profit-making companies, and the information we gather relates to consumer insights. This ranges from what people do in a typical day, to the contents of their bathroom closets—all information that could help shape future products and services. Yet even with corporate goals in place—and I’m being fully open here—our motto is always to consider the participants’ well-being (rather than just their data privacy) first, the needs of the design research team second, and the client third. Only then, we believe, will the client win.
That may seem counterintuitive. But when we put individuals’ interests first, the quality of the data improves. So we work hard to keep participants in control of the process. Data consent is tricky, and it is usually preceded by the adjective informed, as in “informed data consent,” which means just what it says: Participants know, and are informed of, what they are consenting to. We try to minimize any real or apparent pressure to sign the consent form, and we need to make the process work whether we’re with educated participants in Tokyo or with those who cannot read in a village near Mazar-e Sharif. When we collect data, we like to build in a degree of reciprocity. At its simplest, the participant reviews the data we have captured and picks a few photos for us to print out and hand over on a separate, later visit. For some participants we provide a copy of all the data we have on them.
We put the data to good use: Some is foundational (it gives us a basic understanding of consumers, or of those who might become consumers in the future), some is generative (it inspires what we do), and some is evaluative (it gives us feedback and metrics on the things we and our clients care about). Increasingly, we ask participants for the rights to use the information externally—which participants agree to when they sign the model-release portion of a data consent form. And once these forms are signed, we gain the freedom to use the data in new ways in the future.
Perhaps surprisingly, at the end of a research session we encourage participants to review and delete any information we have gathered that they would prefer we not have. Usually this results in a few deleted photos (and in over a decade of using this approach, no one has deleted all of their own data). More often than not, participants request copies of some of the research photos—part of their memory of that session. We work hard to earn their trust.
At frog, we complete well over 100 international design research projects a year, taking hundreds of thousands of photographs and recording hours upon hours of video and interviews. We’ve been thinking a lot about how to leverage the data we have—within the boundaries of what the participant has signed for and what is “right” by them. There are deeply held cultural, legal, and practical differences in how personal data is collected and processed across nations. Broadly speaking, European countries take a more mature and cautious approach to privacy than the United States.
A noticeable movement is arising within global, noncommercial organizations such as the United Nations and the World Economic Forum, which are proposing a “data philanthropy” movement. In addition, entire national governments are making more of their data available in the public domain (via sources such as the World Bank or even Google) to help inform efforts such as easing urban traffic congestion or tracking how epidemics spread.
The world’s largest database of individuals, lifestyles, events, and preferences—cross-referenced by relationships and location—is being created as you read: over a million photos have been added in the few minutes it has taken you to finish this piece.1 That database is, of course, Facebook. (LinkedIn, Baidu, Weibo, and other social networks also provide endless data.) Personal communication tools, the advent of the Internet of Things, and our ever more tracking and trackable “smart” cities will ensure that we remain awash in data, held for longer by data collectors and researchers, who will have more ways to cross-reference and mine it than ever before. We are in the era of truly Big Data.
At the same time, the relationship between “researchers” (or data collectors) and “participants” is changing, with participants more likely to reach out, connect, and, by some definition of the word, stay connected. This in turn is changing the types and granularity of the data being collected.
So now is the time to radically rethink how we collect this data and what we are able to do with it. I propose that participants own their own data for life, while we (the researchers) store it for them and borrow it as needed. Let me leave you with a checklist for this type of future, and the questions that can help guide us there ethically.
Challenges that are likely to result
The data/privacy space has been changing at a fair pace, but there’s one thing looming that, I think, will trigger enough social upheaval to significantly raise the temperature of the debate: the mainstreaming of facial recognition in the palm of your hand. What happens when you no longer effectively own the rights to your own image? Our discussion of personal data is much bigger than, and likely to outlast, our current conceptions of Big Data.