Collection No 5
Humans learn through testing and refinement. Sensing technologies now offer this technique to products.
The addition of sensing and connectivity to products is rapidly changing what we learn from them, how we perceive them, and how we use them. Those same technologies are also feeding backwards, changing how we design products.
This idea is not new, to be sure. Nearly 20 years ago Mark Weiser, then the chief scientist at Xerox PARC, observed that we would in time live in a world where “we must dwell with computers, not just interact with them.”
Technology has almost caught up with Weiser’s vision. Connected products have evolved beyond simply offering remote access or collecting data. They are becoming more relevant actors in our daily lives. Increasingly, we interact with products that are “smart” in subtle ways, able to morph, act, and respond based on context. They have behaviors—a sort of agency. We have started to describe this new class of products with a term that is highly human, and suggestive of intelligent behavior: learning.
Learning is, after all, a very efficient way of responding to change. When we’re born, we don’t know how to walk. We experiment, fail at first, and eventually learn, adapting continuously to our surroundings and changing bodies. The rise of learning products is a step towards an era of innovation where those products will follow a similar developmental path—adapting and personalizing so well that, in time, they become intuitive enough to render them virtually invisible.
As learning is normalized into our products, our challenge is to perfect a process that successfully and reliably designs this emerging genus of self-adapting products.
The roots of machine learning go back half a century or more, to early work in artificial intelligence. Neural networks and statistical modeling have emerged as key tools in the pursuit of true machine learning. By collecting large amounts of data, algorithms could detect patterns that could be used to generate outputs—such as actions or analysis—that render computers somewhat intelligent.
So far, however, this approach has been slow to deliver results, in part because computer power was insufficient. What’s more, the “learning” process required human guidance to nudge the computer towards the right results.
Rapidly advancing technology is beginning to be able to meet this challenge. Our capacity to both collect data and process it has grown exponentially in recent years, opening the door to analytics on very big sets of data.
Indeed, sharp declines in the cost to acquire, transmit, store, and process data mean that today almost every interaction via an Internet-connected system is tracked, allowing the systems to be optimized in near real-time.
The trend was seeded by e-commerce systems; think of Amazon’s product recommendations. Today, it is embedded in practically all connected interfaces. Search engines, which dynamically suggest results in real time based on past queries, peer preferences, and regional cues, are another example.
These techniques have come a long way, but they also reveal the limit of the technology. For the most part, they are still passive. They deliver better results, but do not trigger real-world events, nor do they actively learn from trial-and-error.
Why is this important? To explain this, let’s take a look at how humans learn. In 1992, Chris Hughes and colleagues from the University of New South Wales published a new model of our learning process.
The model has since become influential in teaching theory, and is surprisingly similar to other formal learning processes. In the middle steps, Hughes’ approach closely resembles the scientific method, which moves from hypothesis, to testing, to evaluation, and back around to hypothesis.
In all of these approaches, the steps of testing and refinement are crucial to learning. There comes a point in the learning process when theory must be put into practice, to be tried out. For a long time, this step was ignored in the development of artificial intelligence. But lessons from key technologies—such as autonomous robots and the development of “deep learning” algorithms—have shown that the testing phase is fundamental to the process of active learning.
Design Process Implications
It’s early days yet, but learning products are already beginning to multiply in our homes. Today’s iterations learn in a variety of ways to deliver a mix of services (see examples below). As we push to advance how these products perform, we must explore a basic question: what does it mean to design a learning product?
We’ve identified a number of steps that are becoming a natural addition to the process of designing a learning product:
- Define the senses. Our senses define our ability to interact with the world, and this is also true for products. Sensing can be direct, through the use of physical sensors such as chip-based devices, or it can be indirect, intuited through data analysis.
- Acting out. To support the learning process, products must know how to decide when to act. This goes beyond traditional interaction design thinking, where products provide feedback to trigger human actions.
- Find the right evaluation. A learning product must decide right from wrong. A high-level goal is needed. For example, a learning thermostat must first judge when someone is home before it begins to intuit when they are most comfortable.
- Provide feedback. With learning products, the need to provide clear feedback is, if anything, more important and more challenging than with non-learning products. Providing feedback on learning enhances the product’s performance and thereby builds trust with the user.
- Communicate controls and errors. A product that learns and behaves autonomously raises the question: who is in control, the user or the product? To build users’ trust, and to correct faulty behaviors, users should be able to override what the product has learned.
- Draw a path. Over time, a learning product will evolve from empty and unaware into a state where it knows much about you. The path that the product takes from its initial learnings to consolidated understanding requires not only a narrative but also boundaries. Unlike conventional objects, learning products may evolve along unexpected paths not envisioned by their designers. This is both an opportunity for unanticipated delights and a risk for unpredictable disappointments.
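The steps above can be composed into a single loop. The sketch below is a minimal, hypothetical illustration—all class and method names, and the thermostat-style numbers, are invented for this example—of how sensing, acting, evaluating, feedback, and user override might fit together in one learning product.

```python
# A hypothetical learning product: it senses, acts, evaluates against a
# high-level goal, reports its learning as feedback, and lets the user
# override what it has learned.

class LearningProduct:
    def __init__(self, goal_temp=21.0):
        self.goal_temp = goal_temp          # high-level goal (comfort)
        self.learned_setpoint = goal_temp   # what the product has learned so far
        self.override = None                # user override, if any
        self.log = []                       # feedback: a visible trace of learning

    def sense(self, reading):
        """Direct sensing: ingest a temperature reading."""
        return reading

    def act(self, reading):
        """Decide when to act; a user override always wins."""
        target = self.override if self.override is not None else self.learned_setpoint
        return "heat" if reading < target else "idle"

    def evaluate(self, reading):
        """Judge right from wrong against the goal, then refine by a small step."""
        error = self.goal_temp - reading
        self.learned_setpoint += 0.1 * error
        self.log.append(f"setpoint -> {self.learned_setpoint:.2f}")

    def override_learning(self, setpoint):
        """Communicate controls: the user stays in charge."""
        self.override = setpoint


product = LearningProduct()
action = product.act(product.sense(18.0))   # cold room: the product acts
product.evaluate(18.0)                      # ...then refines its setpoint
product.override_learning(19.0)             # the user overrides the learning
```

The point of the sketch is the separation of concerns: sensing, acting, and evaluating are distinct steps, and the override sits outside the learned state rather than overwriting it, so the user can always see, and undo, what the product has taught itself.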
Emotional Bond Through Learning
From delight to disappointment, learning products have great potential to trigger strong emotional responses from their users. Thus, the greatest short-term challenge we face in designing products that learn is how to build trust.
Near-term, providing transparency through feedback, and the ability to override a product’s learnings, offer a way to do this. Further out, the power of learning products may well lie in the unpredictability of their learning process. This uncertainty changes what it means to design such products. Where we once thought of design in terms of perfectible, failure-free, and stable experiences, learning products require both success and failure to evolve. Used together, success and failure create attraction and affection.
Designing for learning is a step to something bigger: creating stronger emotional bonds with products. In the future we might not design smart products that learn, but instead design products that can grow relationships with people. The way we create learning products is moving from designing them towards breeding them.
Silent Observer, Stubborn Animal, and Asking Anthropologist
The home is fast becoming a breeding ground for learning products, each an experiment in remaking the relationship between product and user.
Perhaps the most advanced everyday example is Nest’s thermostat, which uses infrared sensors to passively monitor building temperature and human activity. Using this flow of data, the thermostat learns when people are at home and their preferred temperature at varying times. By actively testing hypotheses—turning heating up or down—the device learns how quickly the house loses heat in the winter, or grows warm in the summer. The Nest thermostat is a silent observer that adapts how it manages heating and cooling, all without asking the user a single question about their preferences. It tests, learns, and adjusts.
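One thing a thermostat can learn by experiment is how quickly the house loses heat. The sketch below is not Nest’s actual algorithm—the function name and numbers are hypothetical—but it shows the idea: assuming Newtonian cooling, two temperature readings taken after the heat switches off are enough to estimate the house’s thermal time constant.

```python
import math

def thermal_time_constant(start_temp, end_temp, outdoor_temp, hours):
    """Estimate the house's thermal time constant (in hours) from one
    cooling trial, assuming Newtonian cooling:
        T(t) = T_out + (T_start - T_out) * exp(-t / tau)
    """
    ratio = (end_temp - outdoor_temp) / (start_temp - outdoor_temp)
    return -hours / math.log(ratio)

# Trial: the house cools from 21 C to 18 C over 2 hours, with 10 C outside.
tau = thermal_time_constant(21.0, 18.0, 10.0, 2.0)  # roughly 6.3 hours
```

With a few such trials the device can predict how far ahead to start heating, which is exactly the kind of hypothesis-testing loop the text describes.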
iRobot’s Roomba is a household robot that uses sensors and algorithms to learn about, and clean, its environment. Using an algorithm based on how animals search for their food, the Roomba explores its environment by rolling the longest possible distances. Then, if triggered by its dirt sensor, it may stay in that area. If not, it will bump into an obstacle, rotate, and head off in a new direction. In so doing, it piles up many different paths that eventually cover the entire room. Removing toys, chairs, and pets from its path eases the Roomba’s travels, but it will continue regardless: it is a remarkably stubborn learner.
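The bump-and-rotate strategy can be simulated in a few lines. The toy model below is not iRobot’s firmware—it reduces the room to a grid, omits the dirt sensor, and picks rotations from eight fixed headings—but it shows why the stubborn strategy works: straight runs that end in a bump and a random turn eventually pile up into near-total coverage.

```python
import random

def coverage(width, height, steps, seed=0):
    """Fraction of grid cells visited: roll straight until a bump,
    then rotate to a random new heading and continue."""
    rng = random.Random(seed)
    headings = [(1, 0), (-1, 0), (0, 1), (0, -1),
                (1, 1), (1, -1), (-1, 1), (-1, -1)]
    x, y = 0, 0
    dx, dy = 1, 0
    visited = {(x, y)}
    for _ in range(steps):
        nx, ny = x + dx, y + dy
        if 0 <= nx < width and 0 <= ny < height:
            x, y = nx, ny                      # keep rolling straight
            visited.add((x, y))
        else:
            dx, dy = rng.choice(headings)      # bump: rotate, head off anew
    return len(visited) / (width * height)

frac = coverage(10, 10, 5000)  # most of a 10x10 room, despite no map
```

There is no floor plan anywhere in this loop; coverage emerges from repetition, which is what makes the real robot so tolerant of furniture, toys, and pets appearing in its path.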
Another example is Toshiba’s ApriPoco, a tabletop prototype that learns your habits as would an anthropologist. Shaped like a squat plastic bird, ApriPoco’s sensors detect any use of a standard infrared remote control, for TV, cable, air conditioner, or other gadgets. Its camera-eyes track humans in its environment. When it senses a novel infrared signal, ApriPoco asks aloud, “What are you doing?” The user might respond, “increasing the TV volume.” The device then stores the voice command in its database, along with the infrared signal for the related action. From then on, the user needn’t pick up the remote again. Rather, she can speak the action aloud by saying, “increase the TV volume”, and ApriPoco will relay the right signal to the right device.
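At its core, ApriPoco’s learn-by-asking loop is a growing dictionary from spoken phrases to infrared signals. The sketch below is a hypothetical reconstruction—the class, its methods, and the IR code are invented for illustration, not taken from Toshiba’s implementation—of that question-then-store pattern.

```python
class RemoteLearner:
    """Learn-by-asking: unknown infrared signals trigger a question,
    and the user's answer becomes the key for later voice replay."""

    QUESTION = "What are you doing?"

    def __init__(self):
        self.commands = {}  # spoken phrase -> raw infrared signal

    def observe(self, ir_signal):
        """A novel signal prompts the question; a known one passes silently."""
        if ir_signal not in self.commands.values():
            return self.QUESTION
        return None

    def learn(self, phrase, ir_signal):
        """Store the user's answer alongside the signal it explains."""
        self.commands[phrase] = ir_signal

    def speak(self, phrase):
        """Replay the learned signal for a spoken phrase, if known."""
        return self.commands.get(phrase)


poco = RemoteLearner()
question = poco.observe(0x20DF40BF)            # novel signal: ask the user
poco.learn("increase the TV volume", 0x20DF40BF)
signal = poco.speak("increase the TV volume")  # later: voice replays the code
```

Because the user supplies the labels, the same loop that teaches new behaviors also corrects errors: re-answering the question simply overwrites the stored signal.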
These three examples illuminate the variety of experiences linked to learning products. The Nest’s learning is passive, but uncannily intuitive, directly improving very tangible aspects of our lives: thermal comfort and energy cost savings. The Roomba’s actions are more visible, but we cannot really control what it learns, nor does its final outcome—a nicely swept room—vary. ApriPoco is more direct, revealing a different aspect of learning’s potential: with the help of the person, it can both learn new behaviors, and correct its errors.
- C. Hughes, S. Toohey, and S. Hatherley (1992), “Developing learning-centred trainers and tutors,” Studies in Continuing Education 14 (1), 14-27, http://goo.gl/Nvgdhh.
- G. Manaugh and N. Twilley, “The Philosophy of SimCity: An Interview With the Game’s Lead Designer,” The Atlantic, May 9, 2013, http://goo.gl/q59RjQ.
- S. Benford, C. Greenhalgh, G. Giannachi, B. Walker, J. Marshall, and T. Rodden, “Uncomfortable User Experience,” Communications of the ACM 56 (9), 66-73, September 2013, http://goo.gl/6tpcsF.