Managing Intents and Utterances within the Assistant

Gabby Chan

While flowcharts can help designers map out the intended shape of a conversation, they often lack a systematic way of organizing intents and utterances. Whether the conversation design only accounts for simple, built-in user responses, or intent management is treated as out of scope for designers, flowcharts on their own don't provide an organizational system for user intentions.

Intents and Utterances

Here is a quick refresher on what intents and utterances are:

  • Intents: a conceptual representation of the user's aim or objective during a conversation, defined by a set of potential utterances that share the same meaning. For example, a user intent to "order a pizza" might be represented by utterances such as:
    • "I want to order a pizza"
    • "Add pizza to my order"
    • "I'd like to buy a pizza", and more
  • Utterances: the variations in user phrasing that map to the same intent. Generally, the more utterances linked to an intent, the better the Voiceflow assistant will be able to contextualize and recognize the user's intention. A minimal sketch of this structure follows below.
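
To make the relationship concrete, here is a minimal sketch of how an intent and its utterances could be modeled as data. The `Intent` shape below is illustrative only, not Voiceflow's internal schema:

```typescript
// A minimal sketch of an intent and its utterances as data.
// Illustrative shape only, not Voiceflow's internal schema.
interface Intent {
  name: string;         // e.g. "order_pizza"
  utterances: string[]; // sample phrasings that share the same meaning
}

const orderPizza: Intent = {
  name: "order_pizza",
  utterances: [
    "I want to order a pizza",
    "Add pizza to my order",
    "I'd like to buy a pizza",
  ],
};
```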

By using intents and utterances, users are no longer limited to buttons and simple responses like Yes/No when conversing with a voice or chat assistant. Users can instead speak or type their responses as if they were talking to a real person, and Voiceflow's NLU-powered assistant will use both LLM (Large Language Model) and NLU (Natural Language Understanding) capabilities to contextualize freeform text responses and map them to assistant-recognized user intents.
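
As a rough illustration of that contract (free text in, recognized intent out), here is a toy matcher that scores a message against each intent's utterances by word overlap. A real NLU/LLM pipeline learns far richer representations; only the input/output shape here is meant to mirror the real thing, and the 0.5 threshold is an arbitrary stand-in:

```typescript
// Illustrative only: a toy matcher that scores a user message against each
// intent's utterances by token overlap. Free text goes in; the best-matching
// intent and a rough confidence score come out.
type ToyIntent = { name: string; utterances: string[] };

function matchIntent(
  message: string,
  intents: ToyIntent[]
): { intent: string; score: number } | null {
  const messageTokens = new Set(message.toLowerCase().split(/\s+/));
  let best: { intent: string; score: number } | null = null;
  for (const intent of intents) {
    for (const utterance of intent.utterances) {
      const tokens = utterance.toLowerCase().split(/\s+/);
      // Fraction of the utterance's words that appear in the user's message.
      const overlap = tokens.filter((t) => messageTokens.has(t)).length / tokens.length;
      if (!best || overlap > best.score) {
        best = { intent: intent.name, score: overlap };
      }
    }
  }
  // Below an (arbitrary) threshold, treat the message as unrecognized.
  return best && best.score >= 0.5 ? best : null;
}

// Example: "can I buy a pizza please" shares "buy", "a", and "pizza" with
// "I'd like to buy a pizza", so it resolves to the order_pizza intent.
```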

In Voiceflow, you can use intents to: 

  • Start a topic: intents can be used to jump from topic to topic in an assistant. For example, if a food ordering assistant has an "order pizza" topic, you can start that topic with an intent step so that the assistant jumps to the pizza topic whenever the matching intent is detected.
  • Make a choice step: when creating a choice step, designers explicitly define the pathways stemming from it, with each pathway represented by an intent. For example, after the assistant says "What can I help you with today?", the user gets the opportunity to type a response. The assistant then processes that response, attempting to match it first against the intents in the choice step, and then against any global intents defined anywhere in the assistant (a sketch of this matching order follows below).
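
The matching order described in the choice-step item can be sketched as a simple fallback: score the reply against the choice step's own intents first, then against global intents. The `resolveReply` helper and its parameters are hypothetical, shown only to make the precedence explicit:

```typescript
// Hypothetical sketch of the matching order: the reply is scored against the
// active choice step's intents first, then against global intents. Names and
// signatures here are illustrative, not Voiceflow's API.
type NamedIntent = { name: string; utterances: string[] };
type Match = { intent: string; score: number } | null;

function resolveReply(
  message: string,
  choiceStepIntents: NamedIntent[],
  globalIntents: NamedIntent[],
  // Stand-in for whatever NLU matcher is in use (e.g. matchIntent above).
  score: (msg: string, intents: NamedIntent[]) => Match
): Match {
  // Choice-step intents take precedence; global intents are the fallback.
  return score(message, choiceStepIntents) ?? score(message, globalIntents);
}
```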

Intent Management and the NLU Model

Intents and utterances can be difficult to manage as the scope of an assistant grows. What if there are overlapping intents with similar meanings, or similar utterances that point to different intents? What if some intents don't have enough utterances to adequately train the model to understand variations in user responses? 

Enter the NLU model, complete with an overview of intents (and entities) and scores for intent confidence and intent clarity. Intent confidence is scored on the quantity and quality of the utterances mapped to an intent, while intent clarity reflects how distinct an intent's meaning is from other intents, i.e., whether their utterances overlap.
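
As a rough intuition for the two scores, the hypothetical heuristics below reward an intent for having many distinct utterances (confidence) and penalize it for sharing utterances with other intents (clarity). Voiceflow's actual scoring is internal to its NLU tooling; these formulas are stand-ins, including the target count of 10:

```typescript
// Hypothetical heuristics for the two scores; Voiceflow's real scoring is
// internal to its NLU tooling, so treat these as stand-in formulas.

// Confidence: more distinct utterances per intent means better training
// coverage. The score approaches 1 as the count nears a target (here, 10).
function intentConfidence(utterances: string[], target = 10): number {
  const distinct = new Set(utterances.map((u) => u.toLowerCase().trim()));
  return Math.min(distinct.size / target, 1);
}

// Clarity: penalize utterances that also appear under other intents,
// since overlapping phrasings make intents ambiguous to the model.
function intentClarity(ownUtterances: string[], otherUtterances: string[]): number {
  const others = new Set(otherUtterances.map((u) => u.toLowerCase().trim()));
  const overlapping = ownUtterances.filter((u) => others.has(u.toLowerCase().trim())).length;
  return ownUtterances.length === 0 ? 0 : 1 - overlapping / ownUtterances.length;
}
```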

Below is a demonstration of how a Voiceflow assistant can be improved by using the NLU model's functions to raise its intent confidence and clarity scores.
