Using the NLU Manager to Handoff Intent Management to the Data Team

Gabby Chan

Check out a blog post from our Head of Machine Learning on good NLU design here!


The Natural Language Understanding (NLU) model is a critical component of any conversational assistant, and optimizing its performance ensures that users can interact with the system effectively.

While designers can build out an assistant's NLU model in Voiceflow using intents and utterances, a data science team can help optimize the model to improve intent accuracy and reduce errors when recognizing user intentions.

By using the Voiceflow NLU Manager, designers and data analysts can collaborate to shape the type and amount of training data in the NLU model, improve intent confidence and clarity when processing user responses, and iteratively add to and fine-tune the model based on usage data.

Clear, Concise Documentation

Designers should provide clear documentation that outlines the design and the intended user flow, including all necessary context and any variations that might affect the NLU model. Using canvas markup and commenting, designers can annotate milestones in the conversational flows to highlight key user goals and map them to the relevant intents and possible utterances.

Intents should also be accompanied by a description that adds context, indicates development needs, or captures other information pertaining to successfully triggering and handling a particular user intention.

Sample Utterances and Generation

To better represent a user intent, designers can enter multiple potential user utterances that should map to a single intent. This helps the NLU model recognize the different ways a user can express a desired function, in turn improving how well the assistant understands the user. Not only can these manually entered utterances be used as training data for the NLU, but Voiceflow also has a built-in utterance generation feature that uses GPT to suggest similar utterances for the intent. Designers can then evaluate the accuracy of these generated utterances and keep or remove each of them.
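Conceptually, this training data is a mapping from each intent to its sample utterances. A minimal sketch (the intent names and phrasings below are illustrative, not Voiceflow's actual data format):

```python
# Illustrative NLU training data: each intent maps to several sample
# utterances covering different ways a user might express the same goal.
training_data = {
    "check_order_status": [
        "where is my order",
        "track my package",
        "has my order shipped yet",
        "what's the status of my delivery",
    ],
    "cancel_order": [
        "cancel my order",
        "I want to stop my purchase",
        "please cancel the delivery",
    ],
}

# The more varied the utterances under an intent, the better the model
# can generalize to phrasings it has not seen before.
for intent, utterances in training_data.items():
    print(f"{intent}: {len(utterances)} sample utterances")
```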

Note: potential utterances can also be gathered from user testing sessions and from production usage, i.e., actual user interactions with the assistant. In the Transcripts tab of the assistant-level canvas, designers can look for user utterances that did not match any existing intent and easily incorporate them into the NLU model. Learn more about utterance generation from transcripts here!

Intent Confidence and Clarity Scores

As an assistant grows in complexity, the number of intents and associated utterances increases rapidly. From the NLU modal viewer, you can open the full-screen NLU Manager either by clicking "Open NLU Manager" in the modal or by clicking the NLU icon in the leftmost sidebar.

Once the NLU Manager is open, designers and data analysts can view all intents at a glance, along with their confidence and clarity scores. These scores indicate whether the existing NLU model can (1) confidently detect an intent from a user's response and (2) differentiate one intent from another based on their respective utterances. Both scores can be improved by adding more utterances to an intent, either manually or generatively, and by resolving intent conflicts, that is, removing similar utterances shared across multiple intents.
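The intent-conflict idea can be illustrated with a small sketch: utterances that appear (near-)verbatim under more than one intent are a common source of low clarity scores. The helper below is hypothetical, not part of Voiceflow; it only flags exact duplicates after normalization, whereas a real NLU compares semantic similarity:

```python
from collections import defaultdict

def find_conflicts(training_data):
    """Return utterances that appear under more than one intent."""
    seen = defaultdict(set)
    for intent, utterances in training_data.items():
        for utterance in utterances:
            # Normalize lightly so trivial variations still collide.
            seen[utterance.lower().strip()].add(intent)
    return {u: sorted(intents) for u, intents in seen.items() if len(intents) > 1}

data = {
    "get_refund": ["I want my money back", "refund please"],
    "cancel_order": ["cancel my order", "refund please"],
}
print(find_conflicts(data))  # → {'refund please': ['cancel_order', 'get_refund']}
```

Removing or rephrasing the shared utterance so each intent has distinct examples is exactly the kind of cleanup the clarity score points you toward.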

NLU Model Export

After designers and data analysts have collaboratively assessed the architecture and performance of the NLU model, the data team can easily export the NLU model from Voiceflow to a number of supported data formats to continue developing the assistant's NLU on another platform.
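Whatever the target platform, an exported NLU model boils down to structured data the data team can load and inspect programmatically. The JSON shape below is a sketch for illustration only; the exact schema depends on the export format chosen in Voiceflow:

```python
import json

# Hypothetical shape of an exported NLU model: a JSON document listing
# intents and their sample utterances. Real export schemas vary by target
# platform, so treat this structure as an assumption.
exported = json.loads("""
{
  "intents": [
    {"name": "check_order_status",
     "utterances": ["where is my order", "track my package"]},
    {"name": "cancel_order",
     "utterances": ["cancel my order"]}
  ]
}
""")

# A quick audit the data team might run after export: utterance counts
# per intent, to spot under-trained intents before further development.
for intent in exported["intents"]:
    print(intent["name"], len(intent["utterances"]))
```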
