How do I use transcripts? How do I review user testing sessions? Where do I find assistant usage analytics from testing and production?
Voiceflow offers two tools that help designers review aggregate usage analytics over time and deep-dive into individual user sessions with the assistant: Analytics and Transcripts.
As more users interact with an assistant in testing and production, Voiceflow's Analytics Dashboard populates with user data that serves as a starting point for design iteration based on usage patterns.
- Number of interactions, users, and sessions can illustrate the growth and/or adoption rate of an assistant over time. Interactions indicate how engaged users were with the assistant (i.e. how many messages were sent), while sessions help visualize whether users are returning to the assistant.
- Recognition rate is an intuitive indicator of whether your assistant's NLU model and system of intents reflect users' needs and expectations. It can be improved by examining individual transcripts and looking for no-match utterances, as explained in the next section.
- Top intents draw attention to the most frequent user needs. They can help prioritize assistant functionality expansions and build a better understanding of users' motivation.
- All of the above data can be filtered by timeframe: last 7 days, last 30 days, last 60 days, last 12 months, last calendar week, or last calendar month.
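For intuition, the metrics above can be derived from raw session records. The sketch below uses a hypothetical list of session dicts (not Voiceflow's actual data model) with a user ID, start time, and message count:

```python
from datetime import datetime

# Hypothetical session records -- an illustration only, not Voiceflow's
# internal data model.
sessions = [
    {"user_id": "u1", "start": datetime(2023, 5, 1), "messages": 4},
    {"user_id": "u2", "start": datetime(2023, 5, 2), "messages": 7},
    {"user_id": "u1", "start": datetime(2023, 5, 20), "messages": 3},
]

def dashboard_metrics(sessions, since):
    """Aggregate interactions, unique users, and sessions for a timeframe."""
    recent = [s for s in sessions if s["start"] >= since]
    return {
        "interactions": sum(s["messages"] for s in recent),  # total messages sent
        "users": len({s["user_id"] for s in recent}),        # unique users
        "sessions": len(recent),                             # conversation count
    }

metrics = dashboard_metrics(sessions, since=datetime(2023, 5, 1))
print(metrics)  # {'interactions': 14, 'users': 2, 'sessions': 3}
```

Changing the `since` argument mirrors picking a different timeframe filter in the dashboard.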
After you have shared your prototype with other users, their conversations are stored in the Transcripts tab of your project. To learn how to share your prototype, check out this tutorial.
Overview of All Transcripts
When you click into the Transcripts tab, the sidebar shows a scrolling list of every unique user who has interacted with your prototype, along with their entire conversation history. Depending on your sorting needs, you can label individual transcripts with:
- "Mark as Reviewed" indicated by a green checkmark
- "Unread" shown with a blue dot and corresponding text label
- "Save for Later" represented by a red bookmark symbol
These action labels can then be used to filter your view by clicking the filter button at the top left of your screen. You can filter by time range or by tags, making it easy for you and your team members to quickly identify and review transcripts marked for review or tagged otherwise.
Intent Iteration and Utterance Generation
Each user response captured in the transcript is either matched to an intent with a certain confidence score or categorized as a no match. You can use this data to improve intents that user-generated utterances do not confidently match, or quick-add an utterance to an intent to iterate on your NLU.
The recognition rate in the Analytics Dashboard reflects the share of all user responses that were matched to an intent; no-match utterances lower this percentage.
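Put simply, the recognition rate is the complement of the no-match rate. A minimal sketch, assuming per-response match results are available as a simple list of booleans (an illustration, not Voiceflow's internals):

```python
# True if the response matched an intent, False if it was a no-match.
# Hypothetical data for illustration only.
responses_matched = [True, True, False, True, False, True, True, True]

no_match_rate = responses_matched.count(False) / len(responses_matched)
recognition_rate = 1 - no_match_rate  # share of responses matched to an intent

print(f"{recognition_rate:.0%}")  # 75%
```

Adding the no-match utterances in your transcripts to the right intents is what pushes this number up over time.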
- In the boxed outline, the utterance "I want to file a complaint" is matched to the "Dispute" intent with a 45.64% confidence score. This low score indicates that the "Dispute" intent either contains too few utterances or conflicts with other intents. Learn how to improve intent confidence scores in your NLU model by reading Introduction to Optimizing Your NLU Model.
- The arrows in the image point to "No match - Add utterance to Intent" labels for user responses that the assistant did not recognize. By clicking the label, you can quickly add the unmatched utterance to any intent and streamline your NLU optimization process.
Note: if you don't see intent confidence scores in your transcript, click the toggle at the top right of the screen in the individual transcript view and turn on "Intent Confidence". You can also turn on "Debug Messages" to view more granular analytics for your assistant.
Transcript Markup and Tagging
To the right of the transcript, you have a few options for capturing insights. You can add tags to each individual transcript to sort conversations into folders for future analysis, and leave notes for yourself or tag team members on each user transcript.