How do I capture and store a user response? What is Capture?
To create seamless, user-friendly conversations, chatbots and voice assistants need to be able to capture information in the ways that humans naturally talk.
The Capture step lets you build dynamic conversation experiences by capturing all or part of a user's utterance and storing it in a selected variable. You can use it to collect a specific piece of information from your user, such as their name or email address.
After adding a Capture step to your assistant, you can choose between capturing the “Entire user reply” or capturing specific entities from the user's utterance.

You also have the option of adding Actions, No Match and No Reply responses, and adding or configuring additional Captures within the same step.
Note that in conversation design, the Capture step waits on user input. This means you can't add steps after a Capture within a block: like a Prompt or Choice step, it must be the last step in the block. The Capture step should end a “turn” in the conversation you’re designing.
Tip: On Voiceflow, there are now multiple ways to save user information. You can use the Capture step and/or the Choice step. Read more about best practices and when to use Capture vs. Choice step(s) here.
The real power of the Capture step comes from using captured variables to personalize your responses. For example, if you store the user's name in a {name} variable, a later Text step can greet them with "Thanks, {name}!".
Capturing the Entire User Reply
The Capture step can capture and record a user's entire utterance (their full reply) and store it in a selected variable.

Once you’ve selected ‘Entire user reply’, choose the variable you want to use to store the user’s response.
You can select from your existing variables or create a new one. This lets you use the captured information throughout your assistant.
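For example, you could tidy up or check the stored reply later in a Code action. The snippet below is only an illustrative sketch: the variable name last_reply is hypothetical, and it assumes you're inside a Code action where your project's variables are referenced directly by name.

```javascript
// Illustrative sketch only (not official Voiceflow documentation).
// Assumes the Entire user reply was stored in a hypothetical project
// variable named `last_reply`, referenced directly by name inside a
// Code action.
if (typeof last_reply === 'string' && last_reply.trim().length > 0) {
  // Tidy the raw utterance so later Text or API steps get a clean value
  last_reply = last_reply.trim();
} else {
  // Provide a fallback so downstream steps never see an empty value
  last_reply = 'no reply captured';
}
```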
Capturing Specific Entities
Alternatively, you can choose to capture entities within your user's response. This option lets you extract specific pieces of information from your user’s utterance (e.g. name, plan type, country).
Note: These are the same entities that exist in your interaction model; they can be reused in intents or referenced in output steps like Speak or Text.

Capture Step Entity Creation & Editing
You can now create and edit entities, and configure their type, color and NLU settings (slot values & synonyms), right inside the Capture step!
- To create a new entity, select the 'Create New Entity' option at the bottom of the entity dropdown menu and configure it in the modal that appears.
- To edit an existing entity, select it in the entity dropdown menu in the Capture step, then select the pencil icon to the left of the entity name in that menu.
Adding utterances
Once you’ve selected the entity you want to capture, ensure you add a few sample utterances for the entity.
This helps the machine learning model recognize the different ways a user might phrase a response containing the entity.
Because these sample utterances represent the full responses you expect from the user, make sure each one contains the entity itself (for example, 'my favourite colour is blue', where 'blue' is tagged as the entity) rather than just the surrounding phrase.
Note: This differs from populating the synonyms/slot types under your NLU Model (M) for the entity itself.

Configuring entity prompts
In some cases, the user’s response may not contain the entity you want to capture. Adding an entity prompt lets your assistant follow up and ask the user for the required information.

For example, let’s say we want the user to provide their favourite colour. If they respond with ‘hello’ instead, this will trigger the entity prompt and the assistant will ask the user for a valid response.
You can enter your desired entity prompt responses in the field. Similar to Text steps, entity prompt fields support markup styling. If you no longer need an entity reprompt, you can delete it with the (-) icon.
Capture Step - Actions
In addition to configuring your Capture step, you can use Actions to nest navigation and backend logic within each Capture in the step.
Under the Capture step, you can perform these nested actions per Capture:
- Go to Block - Goes to a specific block referenced within the assistant
- Go to Intent - Goes to an existing intent contained in the assistant
- End - Ends the conversation at its current state
- Set variable - Allows you to set and change the value of variables
- API - Allows you to set up, configure and execute API calls & functions
- Code - Allows you to set up and code custom Javascript functions & commands (see the sketch below)
Tip: Learn more about Actions in detail here.
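As a concrete illustration of the Code action, the sketch below validates a captured email address before the conversation moves on. It is a minimal, hypothetical example: the variable name email is an assumption, and it relies on that variable already existing in your project.

```javascript
// Illustrative sketch only: a Code action that checks a captured
// email before later steps rely on it. The variable name `email`
// is hypothetical; inside a Code action, Voiceflow variables are
// referenced directly by name.
const looksLikeEmail = /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email || '');

if (looksLikeEmail) {
  // Normalize the value so an API action or response can reuse it
  email = email.trim().toLowerCase();
} else {
  // Clear the value so a following Condition step can branch back
  // to a reprompt path
  email = '';
}
```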
No Match/No Reply - Configurations on the Capture Step
There may be instances where the user says something completely unrelated to what you are trying to capture (No Match), or where they simply don't respond (No Reply).
- If the assistant does not hear your user's response, or the response is unintelligible, the No Reply response is triggered.
- If the user says something completely unrelated to what you are trying to capture, the No Match response lets you handle that path and provide a better experience when the input isn't understood.
In either case, you can guide your users to an alternative conversation path with a ‘No Match’ or 'No Reply' response under the Capture step.
Tip: You can add and configure a No Reply response from the settings icon at the footer of the Capture step editor. The instructions below apply to both No Match and No Reply.

Under the No Match portion of your entity capture, you can reprompt your users (1) by adding responses (which can be formatted with Text styling) and configuring their randomization (2).
You can also add a No Match path and connect it (3) to a section in your assistant. This lets you select the conversation path for your fallback.
You can rename the label of the No Match path, so that it can be easily referenced on Canvas.
You can also use Actions (4) to nest navigation and backend logic in a No Match within the Capture step.
Tip: You can configure your No Reply response message, set the time delay before the No Reply response triggers, and connect it to a conversation path, similar to the No Match workflow outlined above.
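If you route a No Match path through a Code (or Set variable) action, a common design pattern is to count consecutive misses so the conversation can escalate instead of looping. The sketch below is illustrative only and assumes two hypothetical project variables, no_match_count and should_escalate.

```javascript
// Illustrative sketch only: count consecutive No Match hits using
// hypothetical project variables `no_match_count` and `should_escalate`.
no_match_count = (Number(no_match_count) || 0) + 1;

// A Condition step after this Code action can check `should_escalate`
// and branch to a hand-off or End path instead of looping forever.
should_escalate = no_match_count >= 3;
```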
Adding multiple entity captures
When you’re having a conversation with another person, you might provide or request several pieces of information at once. With the Capture step, you can add multiple entities per step to extract that additional information.
Tip: You may end up in scenarios where you expect your user to provide, or they attempt to provide, multiple pieces of information in one response; for example, name and email, name and confirmation number, or email and tracking number, depending on the context of the question.
You can use the Capture step to collect multiple entities in a single step. To add another captured entity, hit ‘Add Capture’.

Note: You can configure prompts for each of your captured entities. This ensures that the assistant captures all the necessary information in the right entity and slot type.
Each entity can have a prompt attached to it, so if the user doesn’t fill all the entities, the assistant can ask for each one individually before moving on with the flow, ensuring all the necessary information is captured in the right place.
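Conceptually, this behaves like a simple slot-filling loop: check which entities are still empty and prompt for the next missing one. The sketch below illustrates that idea in plain JavaScript; the entity names and prompts are hypothetical, and this is not Voiceflow's internal implementation.

```javascript
// Conceptual sketch of the slot-filling behaviour described above,
// not Voiceflow internals. The entities (`name`, `email`) and their
// prompts are hypothetical examples.
const prompts = {
  name: "And what's your name?",
  email: 'What email address should I use?',
};

function nextPrompt(slots) {
  // Ask for the first required entity that hasn't been captured yet;
  // return null once every entity has a value.
  for (const [entity, prompt] of Object.entries(prompts)) {
    if (!slots[entity]) return prompt;
  }
  return null;
}

// e.g. nextPrompt({ name: 'Ada', email: '' }) returns the email prompt,
// while nextPrompt({ name: 'Ada', email: 'ada@example.com' }) returns null.
```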