
The growth of virtual assistants and smart speakers has ushered in a new era of voice-controlled devices. Just as graphical interfaces require a certain UX/UI design language, so does voice input.

Voice user interfaces (VUIs) require an understanding of natural language as well as context. Mobile app users expect to be able to provide an input that creates a relevant, appropriate response. (Think Grice’s maxims, but for your phone.)

Voice speaks its own UX/UI language

For mobile apps, listening is easy and understanding is hard.

The challenge is understanding what a user needs, means and expects. Unlike graphical inputs, voice is extremely open-ended. Context is also a problem, as words don’t always mean the same thing or refer to the same entity.

For example, what does “tomorrow” refer to? Is a “fork” a dining implement or something you find on a road? Is “Frank” also “Francis” and also “my brother”?

These are just some of the questions UX/UI designers have to think about when designing for voice input.

Voice is all about intent

We’ve mentioned that voice allows for complex, unbounded input. It also reduces the reliance on the buttons, links and graphics we’re used to handling. Visual interfaces usually guide users in their interactions and delimit possible choices and outcomes. Voice doesn’t.

So how do you design an effective voice user interface? The key is to shift your focus from parsing exact input to user intent. Figuring out what the user wants and is trying to do is crucial.

At the moment, we’re not limited so much by translating voice input as we are by the lack of context. Take the line “send my wife a gift for her birthday.” Easy enough to read, difficult to act on! Who is the user’s wife? When is her birthday? What kind of gifts does she like? Where should the gift be sent?

Until our devices are smart enough to be able to gather this context, we need to be able to provide some scaffolding.

Voice is the new frontier of mobile UX/UI.

This includes:

  • Training users to deliver commands in a set format (“Alexa, play my Spotify Travel playlist!”)
  • Having users define abstract nouns or connections during setup or on the fly (“Bob” is also “my husband” and “Jenny’s dad”)
  • Prompting users to provide essential information that is missing (e.g. a restaurant name or taxi destination)
  • Asking for clarification using yes/no questions
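As a rough illustration, here is how two of these scaffolds, alias definitions and prompting for missing details, might look in code. Every name, alias, and slot below is a hypothetical example, not a real assistant API:

```python
# Aliases a user might define during setup: "my husband" and
# "Jenny's dad" both resolve to the contact "Bob".
ALIASES = {"my husband": "Bob", "jenny's dad": "Bob"}

# Essential information each request type needs before the app can act.
REQUIRED_SLOTS = {"book_taxi": ["destination", "pickup_time"]}


def resolve_entity(phrase):
    """Map an alias like "my husband" to a canonical contact name."""
    return ALIASES.get(phrase.lower(), phrase)


def missing_slots(intent, provided):
    """Return the follow-up prompts needed to complete the request."""
    needed = REQUIRED_SLOTS.get(intent, [])
    return [f"What is the {slot.replace('_', ' ')}?"
            for slot in needed if slot not in provided]
```

So if a user says “book a taxi to DFW Airport,” `missing_slots("book_taxi", {"destination": "DFW Airport"})` tells the app to ask one follow-up question: “What is the pickup time?”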

Your mobile app should think in dialog

Just as we’d write flows for a graphical interface, we need to do the same for voice input. Unless your mobile app is a general-purpose smart assistant, your input will typically fall within a defined range of use cases.

Here’s how to develop a flow that helps meet user intent:

  • Identify keywords a user is likely to use (“take,” “go,” “travel,” “book,” “flight,” “taxi” and “hotel” would be some used by a travel app)
  • Define the conversational branches linked to each keyword (e.g. a conversation about booking a flight will always include “one-way” or “round trip”/“return”)
  • Map out example dialogs as “scripts” and test these with users to assess usability
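The first two steps above can be sketched as a simple keyword-to-intent lookup with a scripted opening branch per intent. The keywords, intents, and prompts here are hypothetical examples for a travel app, not a production approach to intent detection:

```python
# Keywords a travel-app user is likely to say, mapped to intents.
KEYWORD_INTENTS = {
    "flight": "book_flight", "fly": "book_flight",
    "taxi": "book_taxi", "cab": "book_taxi",
    "hotel": "book_hotel", "stay": "book_hotel",
}

# The scripted branch each intent opens with. A flight conversation,
# for example, always has to settle one-way vs. round trip.
BRANCHES = {
    "book_flight": "Is this one-way or a round trip?",
    "book_taxi": "Where would you like to go?",
    "book_hotel": "Which city are you staying in?",
}


def detect_intent(utterance):
    """Return the first intent whose keyword appears in the utterance."""
    for word in utterance.lower().split():
        if word in KEYWORD_INTENTS:
            return KEYWORD_INTENTS[word]
    return None


def opening_branch(utterance):
    """Return the scripted follow-up for the detected intent."""
    intent = detect_intent(utterance)
    if intent is None:
        return "Sorry, I didn't catch that. What would you like to do?"
    return BRANCHES[intent]
```

With this mapping, “book me a flight to Austin” lands on the flight branch and the app asks about one-way versus round trip; anything unrecognized falls back to a clarifying question.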

Need a mobile app development team in Dallas-Fort Worth to help with your voice user interface needs? Get in touch!

Touchtap is a digital agency specializing in mobile-first development. We can build your mobile app for you.
