The most difficult aspect of artificial intelligence is that it comes with many caveats that set it apart from “real” conversation. AI and voice-assisted systems have evolved considerably, but they still have a long way to go before they are perfect.
One of the complications for the senior users of Family Connect was understanding how the voice assist feature worked and knowing when it is actually “listening.” Imagine staring at your wrist, looking at a blank screen, talking to it, and wondering whether it knows what to do.

Voice Assist Flow

There are three major components that I tied into the improved Voice Assist feature:
• The user should know when the device is listening to their commands. If there is no indication on the screen, how do users know when to speak and how long to speak for?
• If an error occurs, the user should be given feedback about what the actual error is. This can range from the device not picking up sound, to the system not being able to understand the user, to the user not knowing how to use the system properly (see the state sketch after this list).
• The user should be educated on how to use the system (especially important for senior users). Not all voice assist systems are the same; some are better than others, and there is no universal precedent for how to design the perfect AI system, since it varies by use case.
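
To make these components concrete, here is a minimal TypeScript sketch of how the watch screen could map explicit states to on-screen indications and specific error messages. All names (VoiceAssistState, VoiceAssistError) and the message copy are hypothetical illustrations, not the actual Family Connect implementation.

```typescript
// Hypothetical model of the voice assist flow; names and copy are
// illustrative, not taken from the actual Family Connect code.

type VoiceAssistError =
  | "no-sound-detected" // the device did not pick up any audio
  | "not-understood"    // audio was captured but could not be interpreted
  | "unknown-command";  // interpreted, but not a command the system supports

type VoiceAssistState =
  | { kind: "idle" }                          // blank screen, not listening
  | { kind: "listening"; startedAt: number }  // indicator must be visible
  | { kind: "processing" }                    // command received, working
  | { kind: "error"; error: VoiceAssistError };

// The screen is never ambiguous: every state maps to an explicit
// indication, so the user always knows when to speak.
function screenIndication(state: VoiceAssistState): string {
  switch (state.kind) {
    case "idle":
      return "Tap to talk";
    case "listening":
      return "Listening... speak now";
    case "processing":
      return "Got it, one moment";
    case "error":
      return errorMessage(state.error);
  }
}

// Error feedback names the actual problem instead of a generic failure.
function errorMessage(error: VoiceAssistError): string {
  switch (error) {
    case "no-sound-detected":
      return "I didn't hear anything. Try speaking a little louder.";
    case "not-understood":
      return "I heard you, but didn't understand. Try rephrasing.";
    case "unknown-command":
      return "I can't do that yet. Say 'Help' to hear what I can do.";
  }
}
```

Modeling the flow as explicit states means a “blank screen while listening” simply cannot be rendered, which addresses the first two components directly.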
Proper error handling is crucial in an evolving field like artificial intelligence. Every system built contributes to a collective social learning, as users adapt to higher-level technology. One of the fundamental philosophies of a connected system is that every task can loop back into a cycle of options. By giving users options for how to respond to a specific error, the system encourages self-troubleshooting, which in turn gives users a voluntary motive to learn the system.
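
As a rough sketch of that cycle of options (reusing the hypothetical VoiceAssistError type from the sketch above), each error could map to a small set of recovery choices, so an error is never a dead end but a branch back into the flow. The option labels are placeholders, not the product’s actual copy.

```typescript
// Hypothetical recovery options per error; labels are placeholders.
// VoiceAssistError is the type defined in the previous sketch.
const recoveryOptions: Record<VoiceAssistError, string[]> = {
  "no-sound-detected": ["Try again", "Check microphone", "Type instead"],
  "not-understood":    ["Repeat command", "Hear example commands", "Type instead"],
  "unknown-command":   ["Hear what I can do", "Get help from family", "Cancel"],
};

// Presenting choices after an error turns troubleshooting into a way
// to learn the system, rather than a failure the user is stuck on.
function optionsFor(error: VoiceAssistError): string[] {
  return recoveryOptions[error];
}
```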