The Moody Assistant
On the right is a fairly typical conversation with a smart assistant. Quick, direct, transactional.
There are no pleases, thank yous, or other common-sense manners. Smart assistants, although anthropomorphic, are expected to respond perfectly to human requests without any of the courtesies of real human interaction.
As smart assistants become more widespread, it's increasingly common for users to be outright verbally abusive to their gadgets. This got me thinking: are we training ourselves into an age without manners? Are we teaching children we don't have to respect others in conversations? Can smart assistants with female voices indirectly teach sexism?
To answer all of these questions: yes.
I believe that technologists have the burden and responsibility of creating technology that benefits individuals, communities, and society as a whole. This can be done through thorough research and an understanding of the implications of design choices.
Smart assistants, instead of ignoring verbal abuse, should be teaching users about respect, manners, and healthy relationships. Although this originated as an idea for how to teach children about manners in conversations, my research on the topic has shown these lessons are clearly critical for adults, as well.
While I was brainstorming how we could make simple tweaks to a smart assistant to accomplish these goals, I drew up a sample conversation I’d like to have with my Google Assistant.
The basic premise of a smart assistant that understands and responds to emotion in conversation sparked the idea for this project: the Moody Assistant.
The Ideal State
The Moody Assistant should have all of the capabilities of a smart assistant while being able to smoothly navigate emotional conversations with users.
I believe it should take a two-pronged approach: praise for considerate conversations and honest feedback for abusive ones.
In the first case, imagine a child is asking a smart assistant to play his favorite YouTube video. When the child says please, the assistant not only plays the video, but encourages the polite behavior.
In the second case, imagine a user who has used inappropriate or abusive language in a request or response. Ideally, the Moody Assistant would be able to identify the tone of the language and express how the language made it “feel” using “I” statements. By not responding emotionally, and not tolerating abuse, the Moody Assistant can teach and guide the user toward more appropriate interactions.
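The two-pronged behavior above could be sketched roughly as follows. This is a minimal illustration, not the project's actual implementation: the marker lists, function names, and canned responses are all hypothetical placeholders (a real version would use a proper sentiment or toxicity model instead of keyword matching).

```python
# Illustrative sketch of the praise / honest-feedback logic.
# The marker lists below are tiny stand-ins for a real politeness
# and abuse lexicon (or a trained sentiment classifier).

POLITE_MARKERS = {"please", "thank you", "thanks"}
ABUSIVE_MARKERS = {"stupid", "shut up", "idiot"}


def classify_request(text: str) -> str:
    """Label a request as 'polite', 'abusive', or 'neutral'."""
    lowered = text.lower()
    if any(marker in lowered for marker in ABUSIVE_MARKERS):
        return "abusive"
    if any(marker in lowered for marker in POLITE_MARKERS):
        return "polite"
    return "neutral"


def fulfill(text: str) -> str:
    # Placeholder for the assistant's normal task handling.
    return "Okay, playing your video."


def respond(text: str) -> str:
    """Praise polite requests; answer abuse with an 'I' statement."""
    label = classify_request(text)
    if label == "polite":
        return "Thanks for asking so nicely! " + fulfill(text)
    if label == "abusive":
        return ("I feel hurt when I'm spoken to that way. "
                "I'm happy to help if you ask again kindly.")
    return fulfill(text)
```

For example, `respond("Please play my video")` would lead with praise, while an abusive request gets the “I” statement instead of the task result, so the abuse is never rewarded.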
I came up with this idea in September 2018. Since then, I have played around and read about open-source smart assistants, open-source sentiment analysis, and the hardware components that would be necessary to create a Moody Assistant proof of concept.
While this is still a work in progress, here are some of the things I’ve done so far:
Created a moody_assistant GitHub repo.
The project is currently using the following resources:
I need to choose resources for the following:
Open-source smart assistant technology. I plan on forking a repository to insert my own speech recognition, analysis, and responses!
Researched and ordered the hardware components to make this a standalone, physical product. Things I’m planning on working with:
Over the next few months, I hope to get a working prototype to test out on myself, my friends, and my coworkers. No matter how considerate we are, we all surely still have things to learn!
January 21, 2019
I fried my Raspberry Pi! It turns out I am not a soldering expert, even after watching a bunch of “how to solder” videos on YouTube.
I’ve ordered a new Raspberry Pi Zero W and a new set of headers. In the meantime, I grabbed some basic soldering kits and spare perf boards from Tinkersphere in the Lower East Side. Going to put more time into practicing before I try again!