frogs on the road

Conference insights from Vancouver and Boston to Paris and Beijing.

How Honest Should Smart Devices Be?

Don't speak

Will devices of the future be just as moral (or immoral) as our friends, family, and coworkers? Will they aid us in upholding our own sense of honesty?

In her panel yesterday at South by Southwest, Genevieve Bell posed the following question: "What might we really want from our devices?" In her field research as a cultural anthropologist and Intel Fellow, she surfaced themes that might be familiar to those striving to create the next generation of interconnected devices. Adaptable, anticipatory, predictive: tick the box. However, what happens when our devices are sensitive, respectful, devout, and perhaps a bit secretive? Smart devices are "more than being context aware," Bell said. "It's being aware of consequences of context."

Our current devices are terrible at determining context, especially with regard to how we relate to other people via our existing social networks. Today's devices "blurt out the absolute truth as they know it. A smart device [in the future] might know when NOT to blurt out the truth." They would know when to withhold information.

This vision may seem attainable in the next decade, considering the research efforts that exist in this space. However, the limiting factor for consequence awareness is the human race, and our hardwired, tribal notions of social relations.

 

Taking Tips at the Dinner Party

We are anything but predictable, and we struggle with context all the time: at work, at home, in our romantic relationships, even in whether we're running late to a dinner party.

Ben McAllister and Kate Canales of frog design led a panel today called "Unwritten Rules: Brands, Social Psychology, and Social Media," which dug into how companies have adopted vehicles such as Twitter, but have struggled to understand how to communicate through those vehicles effectively.

The crux of their panel was two scenarios that inspire a gut reaction from most people:

Scenario 1: You go to a fancy restaurant, have a fantastic evening, and in thanks give your waiter a $100 tip.

How would your waiter react?

They'd say: "Wow, you're so generous."

Scenario 2: You're at a friend's house for a dinner party. The food is amazing, and the conversation is fabulous.

You tell them thank you, and hand them $100 for their trouble.

How would your friend react? Uh, awkward.

To explain why the second scenario is so awkward, they dug into research by Steven Pinker and Dan Ariely that outlines different types of fundamental human relationships. In the physical, a.k.a. "real," world, we have relationships based on authority, exchange, and communality. Ben and Kate's theory is that people constantly shift between these modes of relationship, often in a matter of moments. So people who run the marketing channels for brands need to understand these shifts in behavior, and move from promoting themselves (exchange) to listening and sharing (communality).

Designing the very small gestures offered through those channels can go a long way for a company. Compare it to showing up at someone's house for a dinner party with a bottle of wine. Whether it's $4 Chuck or a fancy Bordeaux, it will take you far, though it won't save you if you spend all evening talking about how great you are.

 

Giving Up on Being Honest

Can "smart devices" ever understand our intent in the range of ways with communicate with others? Can they understand when we are trying to be communal, rather than be an authority? And can they communicate in a manner that feels communal?

Genevieve noted in her talk that as human beings, we tell 2 to 200 lies a day. And while most of them are insignificant, the lies are often what smooth over friction in human relations.

But what kind of lies are these? Dan Ariely, in a somewhat unrehearsed session today with Sarah Szalavitz, walked the audience through his ongoing research into human dishonesty.

What he uncovered is that humans have a "fudge factor," a level of dishonesty we're willing to engage in while still considering ourselves honest. The insight isn't that the behavior exists, as we've all been caught in white lies (perhaps more often than we'd care to admit). It's that the fudge factor is rooted in what's considered acceptable based on context and consequence.

In one example, he ran an experiment where people were given a test with far more questions than anyone could answer in the five minutes allotted. When time was up, people would grade their own tests, run them through a shredder at the back of the room, then tell the facilitator how many answers they got right.

The shredder, however, was rigged not to shred the tests, so the researchers could compare what people reported with how many answers were actually right.

From this experiment, they saw that most people lied just a little: if they solved four problems, they'd say six. Makes sense, right?

In a separate experiment, Dan looked at whether people would cheat when it came to recalling the Ten Commandments. In that case, no one cheated. One finding from that research was that when we are reminded of our own morality, we become more honest. But the honor code must come before we engage in an activity, not after it. Otherwise, we will be tempted to cheat.

The third experiment he related was this: you see two empty boxes, and then a handful of dots flash on the screen within those boxes. You are asked, "Are there more dots on the right or the left?" You receive 10 cents if you say right and one dollar if you say left, in every case. This is repeated a hundred times with each research subject.

In the lab, they saw that people cheat a little bit through the process. But at some point, 80% of the people lose it, and they start cheating all the time. Different people switch at different points, depending on the context.

Dan called this the "what the hell" effect. In people's minds, they're saying: "I'm a cheat, I might as well enjoy it."

 

Creating Devices that Get Creative

This made me think about whether future devices will understand these nuances of human dishonesty, and ever be able to model them accurately.

Really, are we asking too much of smart devices? Can they ever be aware of intent, of consequence, of when we say "what the hell" and take part in potentially destructive behaviors? Can they let us fudge things without assuming we've made an error about whom we're meeting for dinner, or that next big meeting, or a terribly scandalous rendezvous?

Dan believes that confession "is very useful for curtailing the 'what the hell' effect," but can you imagine treating your device like a human being, a guidance counsellor, or a therapist? This is one of the major struggles we're seeing in designing systems for positive behavioral change. On the other end of any critical exchange, you'll usually find another human.

It should be obvious that in this new moral space for "smart devices," designers must be extraordinarily sensitive and aware of the behavioral context of what we create. What may be less obvious is how to design systems that shape, accommodate, or deflect the actions of people saying "what the hell," without turning us into robots.

This is not a technology problem, as technologies are just tools made by people (until the inevitable robot uprising). It means that "smart devices" are going to need to know when to hold their tongue. We're going to need to trust our devices to tell stories that aren't strictly truthful, but are instead a little creative.

Can smart devices really do that? Dan Ariely believes that creative people can tell better stories about flexing their morality, for better or for worse. But are we creative enough to make devices that understand how good we want to be as people?

Photo is "Don't speak" / image number 5355600681 by Sunny Z, shared via a Creative Commons license on Flickr.