In the world of machine learning we see a fascinating design challenge: systems that need to make mistakes in order to learn. As a father of two daughters, I recognize this approach: clicking all the buttons on the remote, or maybe throwing it. Just like children, machine learning systems that are prevented from making mistakes never reach their potential. Engaging playfully with a system that is still learning sets better expectations, gives the technology more latitude to explore, and accelerates training.
Serendipity Watch
I was working with an engineer from Sony named Masahiro Shimohori. We were discussing the possibility of machine intelligence orchestrating a serendipitous experience between two people. He said to me, “The opportunity for serendipity is a half-step ahead of the present.” While I still don’t know exactly what he meant, the words inspired a number of projects exploring playfulness and the subconscious.
This is what I sketched when I heard the quote. I feel the idea gave me permission to play with time, space, and probabilistic logic.
The image below illustrates a machine learning lifecycle, starting with an initial period of learning: the "explore" phase. If all goes well, we reach an inflection point where the focus can shift to an "exploit" phase. The cost of learning initially exceeds the baseline value, but the effectiveness of the exploitation phase depends heavily on the success of the exploration phase. This concept uses both playfulness and user empowerment to improve machine learning performance and mitigate challenges such as cold starts, slow training, and difficult validation.
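The explore/exploit lifecycle above can be sketched with a classic multi-armed-bandit toy. This is a minimal epsilon-greedy illustration, not part of the original project: the arms, reward values, noise level, and decay rate are all assumptions. The decaying epsilon is the "inflection point," where behavior gradually shifts from exploring to exploiting.

```python
import random

def run_lifecycle(true_rewards, steps=1000, epsilon=1.0, decay=0.995, seed=0):
    """Explore heavily at first, then shift toward exploiting the best arm."""
    rng = random.Random(seed)
    estimates = [0.0] * len(true_rewards)  # learned value of each option
    counts = [0] * len(true_rewards)
    total = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:                      # explore: try anything
            arm = rng.randrange(len(true_rewards))
        else:                                           # exploit: use what we learned
            arm = max(range(len(true_rewards)), key=lambda a: estimates[a])
        reward = true_rewards[arm] + rng.gauss(0, 0.1)  # noisy observation
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean
        total += reward
        epsilon *= decay  # the inflection: exploration fades over time
    return estimates, total

estimates, total = run_lifecycle([0.2, 0.5, 0.8])
```

Early mistakes (pulling bad arms) are the cost of learning; the payoff only arrives once exploitation can lean on accurate estimates, which is why a weak exploration phase undermines everything after it.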
I call this experiment the “Serendipity Watch”. I chose this watch form factor because I like how the bezel provides a compelling interface for controlling time. The vignette below represents a view of the present time—the watch observes what’s happening around you.
When you scan into the future, the watch shows you possible futures based on probability. The probability can be manually overridden, allowing you to train the models. In this case, by setting the likelihood to zero, I've trained the model to lower the probability of my going to this location.
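The manual override described above can be sketched as pinning a predicted probability and renormalizing the rest. This is a hypothetical count-based model, not the watch's actual implementation; the location names and class design are illustrative assumptions.

```python
class LocationModel:
    def __init__(self, visit_counts):
        self.counts = dict(visit_counts)   # past visits per location
        self.overrides = {}                # user-pinned probabilities

    def set_likelihood(self, location, probability):
        """A manual override acts as a training signal: pin the value."""
        self.overrides[location] = probability

    def probabilities(self):
        total = sum(self.counts.values())
        probs = {loc: n / total for loc, n in self.counts.items()}
        # Honor user overrides, then rescale the remaining probability mass.
        fixed = sum(self.overrides.values())
        free = sum(p for loc, p in probs.items() if loc not in self.overrides)
        for loc in probs:
            if loc in self.overrides:
                probs[loc] = self.overrides[loc]
            elif free > 0:
                probs[loc] *= (1 - fixed) / free
        return probs

model = LocationModel({"cafe": 6, "park": 3, "gym": 1})
model.set_likelihood("cafe", 0.0)   # "setting the likelihood to zero"
probs = model.probabilities()
```

Setting the cafe to zero pushes its probability mass onto the other locations, which is the sense in which the user's playful override trains the model rather than merely hiding a prediction.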
Things get interesting when you invite others into your experience and possible intersections are found. In this case, a friend has a low probability of being at a local park at the same time as me. After I increase the likelihood that I'll be there, the friend receives an alert. In response, he also increases his likelihood, leading to a facilitated opportunity for serendipity.
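The intersection idea above can be sketched in a few lines. Assuming the two friends' plans are independent, the chance of meeting at a given place and time is the product of their individual probabilities; the threshold and probability values here are illustrative assumptions, not values from the project.

```python
def intersection_probability(p_me, p_friend):
    """Joint probability of both people being there, assuming independence."""
    return p_me * p_friend

def should_alert(p_me, p_friend, threshold=0.25):
    """Notify the friend once a shared opportunity becomes plausible."""
    return intersection_probability(p_me, p_friend) >= threshold

# Initially my likelihood of being at the park is low: no alert fires.
before = should_alert(0.2, 0.6)
# After I raise my likelihood, the joint probability crosses the threshold.
after = should_alert(0.9, 0.6)
```

The feedback loop in the vignette is exactly this: each person's manual adjustment raises the joint probability, which in turn nudges the other person to commit.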
The further into the future you go, the less probable and more unpredictable the predictions become. In this case, I might be inspired to act on a highly unlikely speculation, possibly leading to a curated trip to a faraway destination. Maybe it would even inspire a friend to join me.
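Why far-future predictions become less certain can be shown with a toy simulation, not drawn from the project itself: if each step into the future adds independent noise (a simple random walk, an assumption here), the spread of possible outcomes widens with the horizon.

```python
import random

def forecast_spread(horizon, trials=2000, step_noise=1.0, seed=1):
    """Standard deviation of final positions after `horizon` noisy steps."""
    rng = random.Random(seed)
    finals = []
    for _ in range(trials):
        x = 0.0
        for _ in range(horizon):
            x += rng.gauss(0, step_noise)  # each step compounds uncertainty
        finals.append(x)
    mean = sum(finals) / trials
    var = sum((f - mean) ** 2 for f in finals) / trials
    return var ** 0.5  # grows roughly with the square root of the horizon

near = forecast_spread(1)   # scanning a little ahead: tight spread
far = forecast_spread(25)   # scanning far ahead: much wider spread
```

The widening spread is what the watch surfaces as increasingly improbable, speculative futures.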
While a working prototype was never built, some ideas from this project were used in my Sixth Sense project. This project was also featured in my IxDA Interaction 19 and Interaction 19 // SF Redux talks.