Flowcharts are limited in their ability to handle a wide range of user inputs. They rely on predefined buttons or built-in responses that are narrow in scope and cannot account for everything a user might say in a conversational interface.
This can result in a frustrating user experience: users may feel disengaged by the robotic, restricted form of communication, or limited in their ability to express themselves and provide information.
Adding User Input
In contrast, Voiceflow's flexible design interface explicitly accounts for different types of user input, including:
- buttons and default user intents like Yes/No and Stop
- free-form text inputs that use NLU models to recognize user needs
- built-in data formats like phone numbers and URLs, and more
This lets designers create conversational interfaces that feel more natural and intuitive, because users can communicate their needs and preferences in their own words and phrases.
Voiceflow captures these inputs through three steps: Buttons, Choice, and Capture. Here is a short summary of how the three differ.
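As a minimal sketch of that difference, the snippet below assumes a hypothetical handle_turn helper and a stubbed intent matcher; the names and logic are illustrative only and are not Voiceflow's actual steps or API.

```python
# Purely illustrative sketch (not Voiceflow's API): how the three input steps
# differ in what they do with the user's reply.

def match_intent_stub(user_reply: str) -> str:
    """Stand-in for an NLU model that maps free text to an intent name."""
    return "create_ticket" if "ticket" in user_reply.lower() else "fallback"

def handle_turn(step_type: str, user_reply: str) -> dict:
    """Route a single user turn based on the kind of step awaiting input."""
    if step_type == "buttons":
        # The reply is the label of a predefined button; follow that path directly.
        return {"next_path": user_reply}
    if step_type == "choice":
        # Free-form text is matched to an intent (stubbed here) to pick a path.
        return {"next_path": match_intent_stub(user_reply)}
    if step_type == "capture":
        # The raw reply (or a recognized entity such as a phone number)
        # is stored in a variable for later use in the conversation.
        return {"variables": {"last_utterance": user_reply}}
    raise ValueError(f"unknown step type: {step_type}")
```

In this sketch, handle_turn("buttons", "Yes") simply follows the "Yes" path, while handle_turn("capture", "555-0134") stores the reply in a variable for later use.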
Free-form Inputs, LLMs and Intents
As mentioned above, flowcharts typically do not account for free-form messages from the user. Allowing users to communicate freely with an assistant creates a more personalized, empathetic experience, but it presents a challenge: every way a user might express their needs has to be understood by the assistant.
This is another area where Voiceflow excels: whereas flowcharts treat user input purely as a choice that progresses down a pathway or as a single piece of user information, Voiceflow matches user input to intents, using Large Language Model (LLM) technology to power its Natural Language Understanding (NLU) capabilities.
Here are some definitions to go over:
- An intent is a specific action or request that a user wants to perform or receive. In Voiceflow, intents are defined as specific triggers that initiate a particular conversational flow or action. They are linked to specific phrases or commands, called utterances, that a user might say, and are used to route the user's request to the appropriate conversational flow or action.
For example, for a "Create New Ticket" intent, a designer might add multiple utterances, such as "I want to create a new ticket" or "I need to open a request", that all point to the same intent. Generally, the more utterances, the better the NLU will be at recognizing user responses as intents (a minimal sketch of this idea follows the definitions below).
- LLMs are complex mathematical models that are trained on massive datasets of text using machine learning and generate responses based on a given prompt. Popularized by technologies like ChatGPT, they are efficient and powerful for general tasks such as content generation and summarization. Learn more about them here.
- NLUs are also mathematical models of text, but with a stronger focus on contextual understanding of language within one or more subdomains; they analyze text to determine intents (among other concepts) and are heavily used in assistant technology and semantic analysis. Learn more about the difference between NLUs and LLMs here.
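To make the intent and utterance relationship concrete, here is a minimal, hedged Python sketch. The intent names, utterances, and token-overlap scoring are illustrative stand-ins for a real NLU or LLM-based model, not Voiceflow's implementation.

```python
# Illustrative only: each intent is defined by example utterances, and an
# incoming message is matched to the intent whose utterances it most
# resembles (here, by naive token overlap).

INTENTS = {
    "create_ticket": [
        "I want to create a new ticket",
        "I need to open a request",
        "open a ticket for me",
    ],
    "check_status": [
        "what is the status of my ticket",
        "has my request been resolved",
    ],
}

def token_overlap(a: str, b: str) -> float:
    """Score two phrases by the fraction of shared words (a stand-in for a real NLU model)."""
    words_a, words_b = set(a.lower().split()), set(b.lower().split())
    if not words_a or not words_b:
        return 0.0
    return len(words_a & words_b) / len(words_a | words_b)

def match_intent(message: str, threshold: float = 0.3) -> str | None:
    """Return the best-matching intent name, or None if nothing clears the threshold."""
    best_intent, best_score = None, 0.0
    for intent, utterances in INTENTS.items():
        score = max(token_overlap(message, u) for u in utterances)
        if score > best_score:
            best_intent, best_score = intent, score
    return best_intent if best_score >= threshold else None

print(match_intent("I'd like to open a new request"))  # likely "create_ticket"
```

The more utterances each intent lists, the more overlap a new phrasing is likely to have with at least one of them, which is the same reason adding utterances improves recognition in a real NLU model.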
Intents play a critical role in Voiceflow's natural language processing (NLP) capabilities. By defining specific intents and training the conversational interface to recognize and respond to them, designers can improve the accuracy and relevance of the interface's responses.
Learn about how designers use Voiceflow to evaluate NLU/intent design and collaborate with developers here.
LLM Features at Voiceflow
As Large Language Models rapidly advance, Voiceflow is also exploring product features that can leverage LLM-powered text generation capabilities in intent management and NLU design. For a summary of our latest features in AI Assistants, check out this series of docs.