January 30, 2017
Our studio had just joined forces with the Microsoft Research group. The company was making great strides in voice synthesis and recognition, image recognition and labeling, conversational interfaces with Cortana, and a whole lot more. With this momentum, our leadership wanted to explore new ways to leverage these advances.
So the whole studio broke into groups of five or so designers and researchers. Our group’s initial ideation phase stood out for its extra-wide reach: most of us had long lists of wild ideas, from augmenting education for kids to assisting elderly adults. I was chasing scenarios that involved creativity but could never quite focus on a particular use case, especially in a problem-solving sense.
After we aggregated everyone’s lists, an interesting pattern emerged: lifecycle-specific use cases kept coming up, whether it was helping Grandma remember to take her medicine or helping Timmy with his stutter. Then there was the question of having an array of robots with very specific functions versus a single do-it-all robot, much like Rosie from The Jetsons. Would the robot belong to the parents, the kids, or the whole family? Would it enable lazy parents to shirk raising their kids properly? Would the kids hack it for nefarious ends? The back-and-forth discussions were truly fascinating!
Possible areas of discovery:
We decided not to be the family pet or babysitter, nor the one-trick pony stuck in the kitchen. Instead, we began to think of a household A.I. entity as a bridge between all the knowledge of the internet and busy, multitasking parents. The robot would have intimate knowledge of everyone’s schedules and the age-related context of the children’s developmental sensitivities. It would also need a ubiquitous presence, much like a home’s electrical system, except better: accessible through several of the family’s rooms and devices.
When it came down to how we’d express this story, there were a few touchpoints we agreed to ignore in order to maintain focus. We chose not to show any mocked-up software or hardware, so that voice would remain the primary interface; we shied away from giving the agent a perceived gender; and finally, we avoided showing any of the children interacting with the robot, since this was primarily a tool for parents.
After putting together a few scenes into a narrative, recording audio commentary, and compiling it all into a gigantic, auto-playing PowerPoint presentation, we sent it off to our senior leadership. We were also able to present in person to a select audience of design, PM, and development partners.
I’ve attempted to capture that presentation below in a more browser-friendly manner. Each slide has audio commentary with the talking points. It’s a little janky; I hope you can make do.