Using the Knowledge Base

Sarah Bourgeois

Release Notes

November 21, 2023

On Tuesday (11/21/23) we will be making a large update to the Knowledge Base (KB) and Set AI steps. This update is based on the feedback we've received over the last few months from thousands of community members about improving the accuracy and usability of the Knowledge Base.

This update will introduce the following features and upgrades to the Knowledge Base Steps:

  • Not Found Path
  • Prompt Overrides
  • Instructions Field
  • Improved Wrapper Prompt

See the changelog here for more information.

 

September 20, 2023:

August 10, 2023:

  • We have 5 new API endpoints for managing your Knowledge Base content programmatically:
    • POST - Upload Document (non-url): Uploads a document to the Knowledge Base (excluding urls). Limit is one file per call. (Docs)
    • POST - Upload Document (url): Uploads a document of type "url" to the Knowledge Base. Limit is one url per call. (Docs)
    • DELETE - Delete Document: Deletes a specific Knowledge Base document by documentID. Limit one per call. (Docs)
    • GET - Document Chunk Retrieval: Retrieves the chunks for a Knowledge Base doc by documentID. Limit one documentID per call. (Docs)
    • GET - Document List: Retrieves a list of all documents within a project's Knowledge Base. (Docs)
    • PUT - Replace Document (non-url): Replaces the document and document name in the Knowledge Base, by documentID (excluding urls). Limit one per call. (Docs)
    • PUT - Replace Document (url): Replaces the document and document name in the Knowledge Base, by documentID (urls only). Limit one per call. (Docs)

  • Query and Answer token citations have been added to Test Mode. Be sure you have Debug mode turned on. To learn more about what these citations mean and how they're calculated, jump to our FAQs.
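As a sketch of how these document endpoints might be addressed from code, the helper below builds request descriptors for each operation. The base URL, paths, and auth header shown here are assumptions for illustration only; the linked API docs are the source of truth.

```javascript
// Sketch: request descriptors for the Knowledge Base document endpoints
// listed above. BASE, the paths, and the Authorization header are
// illustrative assumptions -- confirm them against the API docs.
const BASE = "https://api.voiceflow.com"; // assumed base URL

function kbRequest(operation, { documentID, apiKey } = {}) {
  const routes = {
    uploadFile:  { method: "POST",   path: "/knowledge-base/docs/upload" },
    uploadURL:   { method: "POST",   path: "/knowledge-base/docs/upload" },
    delete:      { method: "DELETE", path: `/knowledge-base/docs/${documentID}` },
    chunks:      { method: "GET",    path: `/knowledge-base/docs/${documentID}/chunks` },
    list:        { method: "GET",    path: "/knowledge-base/docs" },
    replaceFile: { method: "PUT",    path: `/knowledge-base/docs/${documentID}` },
  };
  const route = routes[operation];
  if (!route) throw new Error(`Unknown operation: ${operation}`);
  return {
    method: route.method,
    url: BASE + route.path,
    headers: { Authorization: apiKey }, // assumed auth header
  };
}
```

Note that each upload, replace, and delete call is limited to a single document, so batch operations need one request per file.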

Quickstart

Upload Data Source

You can go to your Knowledge Base tab by clicking on the brain icon in the left nav.

Once there, you can click on the 'Add Data Source' button in the header.

There are currently five types of data we support:

  • URL(s) - Any URL from a publicly-available website. This will work best on static text data contained on the destination URL. Any dynamically-loaded text, table data, or image content will not be used within your response. Any content behind an authentication wall will not be accessible.
  • Sitemap - Any XML Sitemap in a .xml file (format "https://website.com/sitemap.xml"). See the FAQs below for tips on how to find your sitemap. (Note: You can enter a regular URL, and our system will automatically search for the sitemap in the common formats. This method works most of the time, but if it doesn't, try the recommended methods in the FAQ section.)
  • Text - Any text data included in a .txt file.
  • PDF - Any text data that can be clearly identified in a PDF. This will not include any images or structural reference for the data contained within.
  • Docx - Any text included in a .docx file.
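A minimal sketch of checking the supported types above before upload; the function name and routing rules are illustrative, not an official SDK:

```javascript
// Sketch: classify a data source against the supported types listed
// above. Names and logic are illustrative only.
const SUPPORTED_EXTENSIONS = [".txt", ".pdf", ".docx"];
const MAX_BYTES = 10 * 1024 * 1024; // 10MB per-file limit (see FAQ)

function classifyDataSource(nameOrUrl, sizeBytes = 0) {
  // URLs: an .xml URL is treated as a sitemap, anything else as a plain URL
  if (/^https?:\/\//.test(nameOrUrl)) {
    return nameOrUrl.endsWith(".xml") ? "sitemap" : "url";
  }
  if (sizeBytes > MAX_BYTES) throw new Error("File exceeds the 10MB limit");
  const dot = nameOrUrl.lastIndexOf(".");
  if (dot === -1) throw new Error("File has no extension");
  const ext = nameOrUrl.slice(dot).toLowerCase();
  if (!SUPPORTED_EXTENSIONS.includes(ext)) {
    throw new Error(`Unsupported file type: ${ext}`);
  }
  return ext === ".txt" ? "text" : ext.slice(1); // "text", "pdf", or "docx"
}
```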

Once you add a data source, it will show you the loading status, and then confirm once the data source has been loaded and is ready to use. 

Preview Knowledge Base Response

You can preview your Knowledge Base responses using the 'Preview' button in the header (between 'Settings' and 'Add Data Source'). 



Structure Your Assistant to Search Knowledge Base

There are two different ways to access the knowledge base in your assistant.

 

1. Using the Knowledge Base Steps

The Knowledge Base steps (Response AI and Set AI) can be used throughout your flow to send a question to the Knowledge Base and receive an answer.

 

The Knowledge Base steps have the following fields (updated on Nov. 21, 2023):

  1. Question: This should ONLY include the user's question. Do not include anything else in this field.
  2. Instructions: This field is optional. You can use it to add custom instructions to your prompt.
  3. Not Found Path: This field adds a 'not found path' to your assistant. If the KB AI step does not find an answer, it will not say anything and will move down this path.
  4. Override prompt settings: The global Knowledge Base settings live in the Knowledge Base. If this toggle is enabled, you can override those settings with custom settings for this step.

Response AI Step - 'Data Source' set to 'Knowledge Base':

This step is most commonly used to display the answer to a user's question. It takes a question and provides an output that is presented to the user.

By default this step has a 'not found path' turned on which you can see on the canvas. If the knowledge base is unable to find an answer it will not return a message to the user. Instead, it will move down the not found path.

If the not found path setting is turned off, the assistant will say 'Answer not found' if no answer is found. If you want to modify that message, we suggest turning the not found path setting on and creating a text step with the message you desire.

 

Set AI Step - 'Data Source' set to 'Knowledge Base':

This step is used when you want to save the answer from the Knowledge Base directly to a variable. In this case, the user does not see the response. This is useful if you want to save the answer for further processing.

A common example is prompt-chaining, where a designer takes the response from the Knowledge Base and uses it in a Response AI step (with the AI Model source) for further formatting, or to augment the response with memory or other attributes.





Considerations when 'Knowledge Base' is set as the AI Step 'Data Source':

  • If you're using Knowledge Base as the Data Source in your AI steps, please note that you cannot use prompt-engineering to prescribe a specific data file to use. The AI Steps will search the entire Knowledge Base, selecting data "chunks" with the highest relevance to the question/prompt. Keep this in mind when curating your Knowledge Base content ("garbage in, garbage out").
    • If you are looking to do this, you can use the new 'tag' functionality released on the Knowledge Base Query API here.
  • Knowledge Base Settings (Model, Temperature, Max Tokens, and System instructions) exist at two levels: a global setting in the Knowledge Base, and a local setting on the step. You can override the global settings by turning on the 'override prompt settings' toggle.
  • Existing memory is not passed into the Knowledge Base wrapper or your prompt. If you want to pass memory, you can use the system variable _memory_ in a Javascript step to extract the information you want to include in your prompt, or use a Set step to pass the full content of the _memory_ variable.
    As an example, using the Set step, we populate a {memory} variable created in our assistant with a stringified version of the built-in _memory_ object.
    You can then use the {memory} variable in your prompt.

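The _memory_ hand-off described above can be sketched as a Javascript step body. The shape of _memory_ shown here (an array of role/content turns) is an assumption for illustration; JSON.stringify works on whatever shape the object actually has.

```javascript
// Sketch of a Javascript step body that exposes the built-in _memory_
// transcript to a {memory} variable for use in a later prompt.
// The { role, content } turn shape is an illustrative assumption.
function formatMemory(turns) {
  return turns
    .map((turn) => `${turn.role}: ${turn.content}`)
    .join("\n");
}

// Inside a Javascript step this would be:
//   memory = formatMemory(_memory_);
// or, to pass everything verbatim:
//   memory = JSON.stringify(_memory_);
```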

 

2. Using the Fallback

If there is no Intent that matches the user's question, the assistant will query the Knowledge Base. This is called KB Fallback.

In the design below there are two points where the Assistant is in a 'listening' state. The first is the Buttons step. A user can select one of the two Buttons or ask a question. If they click a Button they will move forward in the conversation. If they ask a question, it will trigger the Knowledge Base if there is no Intent in the project that matches their question. The second 'listening' state is the Capture step itself, which is always listening, like the rest of the Listen steps (Buttons and Choice steps).

  1. Global No Match: If there is no answer within the Knowledge Base to the user's question, it will trigger a Global No Match response (this can be either static (default) or generative).

 

Overview

How does the Knowledge Base work?

 

Global No Match Behaviour

You have the option with Global No Match in your Assistant to choose generative or static responses. A generative Global No Match means that if the question hits the fallback (meaning no relevant Intents or Knowledge Base content was found), the question will be answered by a general AI. A static Global No Match means your Assistant will respond with a message you define (e.g. 'I'm sorry, I don't have an answer for that. Is there something else I can answer for you?').


If you want to stop your Assistant from providing generative responses outside of your Knowledge Base, set your Global No Match setting to static (the default) and make sure any AI Steps utilized have their 'Data Source' set to 'Knowledge Base'.

 

FAQ

File Size Limits

Each individual data source file is limited to 10MB. If you have a larger data source, you can reduce the size by removing photos (these are not captured by the Knowledge Base today anyway) and/or compressing. Today, the Knowledge Base can support up to 1000 documents per assistant. 

Data Source Structure Consideration

Any dynamically-loaded text, table data, or image content will not be used within your response. Furthermore, how your data itself is structured affects the accuracy of your responses. The more cohesive your data source (the more it reads like an essay or blog post), the more accurate your Knowledge Base responses will be.

Configuring your Knowledge Base

You can modify the performance of your Knowledge Base using the KB Settings modal, accessed from the 'Settings' icon in the header to the left of the 'Preview' and 'Add Data Source' buttons.

To modify how answers are generated from the data found in the sources you provided, you can use the following settings:

  • Model - This is the model that will be used to create your prompt completion. Each model has its own strengths, weaknesses, and token multiplier, so select the one that is best for your task.
    • GPT-3 DaVinci - most stable performance, best suited for simpler functions
    • GPT-3.5-Turbo - fast results, average performance on reasoning and conciseness
    • GPT-4 - most advanced reasoning and conciseness, slower results (only available on Pro and Enterprise Plans)
    • Claude 1 - consistently fast results, moderate reasoning performance
    • Claude Instant 1.3 - fastest results, best suited for simpler functions
    • Claude 2 - advanced context handling (strong summarization capabilities)
  • Temperature - This will allow you to influence how much variation your responses will have from the prompt. Higher temperature will result in more variability in your responses. Lower temperature will result in responses that directly address the prompt, providing more exact answers. If you want 'more' exact responses, turn down your temperature.

  • Max Tokens - This sets the total number of tokens you want to use when completing your prompt. The maximum number of tokens available per response is 512 (your prompt and settings are included on top of that). A greater max tokens setting means more risk of longer response latency.

  • System - This is the instruction you can give to the LLM to frame how it should behave. Giving the model a 'job' will help it provide a more contextual answer. Here you can also define response length, structure, personality, tone, and/or response language (more below). System instructions get combined with the question/prompt, so be sure they don't contradict each other.

  • Chunk Limit - controls the number of chunks used to synthesize the response.
    How does the number of chunks retrieved affect the accuracy of the KB?
    In theory, the more chunks retrieved, the more accurate the response, and the more tokens consumed. In reality, the "accuracy" tied to chunks is strongly associated with how the Knowledge Base (KB) documents are curated. The default number of chunks we pull is 2, at a default max length of 1000 tokens (max chunk size). That's up to 2000 tokens' worth of context with the highest similarity match score to the question. If the KB docs are curated so that topics are grouped together, this should be more than enough to accurately answer the question. However, if information is scattered throughout many different KB data sources, then more chunks of smaller size will likely increase the accuracy of the response. You can control the max chunk size of your docs with the Upload/Replace KB doc APIs, using the query parameter "maxchunkSize". In summary, the Chunk Limit functionality aims to give users more tools and flexibility to increase the accuracy of their responses in line with their use cases.

    Ultimately, to provide the best KB response 'accuracy' while optimizing token consumption, we recommend curating KB docs by:
    - limiting the number of KB docs & grouping topics inside those docs meaningfully
    - using the default 1000 token max chunk size and keeping the chunk limit at 2 (default)
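The default retrieval budget described above (2 chunks at a 1000-token max chunk size, so up to 2000 tokens of context) can be checked with a quick calculation; the helper name is illustrative:

```javascript
// Sketch: upper bound on retrieved context tokens implied by the
// chunk settings above. Helper name is illustrative.
function contextBudget(chunkLimit = 2, maxChunkSize = 1000) {
  return chunkLimit * maxChunkSize; // e.g. defaults give 2 * 1000 = 2000
}
```

Raising the chunk limit or chunk size grows this budget linearly, which is why token consumption rises with "more accurate" retrieval settings.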

How do tokens work in Voiceflow?

Charged token consumption in Voiceflow only includes query and answer synthesis (that is, when you're asking a question and get a response).

Tokens are consumed on Voiceflow for:

  • The query synthesis step (the LLM cleaning the user's question and adding context)
  • And the answer synthesis step (the LLM answering the question given the relevant chunks and conversation history)

The average token consumption for a response with an LLM that is "1x Tokens" is about 1000 tokens (combined both the query and the answer tokens).


Most of these tokens come from the query, which is large because it has to include the data source chunks. The Max Tokens selected in your KB settings applies to the response output only ("answer" in our citations) and is before any multipliers (which we have for GPT-4 and Claude V1 only).

Here are some examples of the impact of these multipliers:

  • If the Max Tokens in your KB Settings is set to around 300, and you're using GPT-4, your response can be up to (approximately) 300 words, but the tokens consumed can be up to 7500 (300*25) on the answer tokens alone. You can see how this adds up very quickly.
  • GPT-4 is very expensive to use and consumes 25x the amount of tokens (so a single response using GPT-4 can easily be around 25,000 tokens including the query tokens).
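The multiplier arithmetic above can be sketched as follows (the function name is illustrative; the 25x multiplier is taken from the GPT-4 example in the text):

```javascript
// Sketch: billed tokens under a model multiplier, per the example above.
// 300 raw answer tokens on a 25x model bill as 7500 tokens.
function billedTokens(rawTokens, multiplier = 1) {
  return rawTokens * multiplier;
}
```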

For more information, you can read this Voiceflow article.

Non-English Knowledge Base Use Cases

Knowledge Base understands content in most languages. To configure your Knowledge Base response to be in a specific language, adjust the 'System' setting in your Settings with strong language indicating the desired response language. We recommend something like: 'Always respond in French, regardless of the language of the question.'

Assistant Types compatible with Knowledge Base 

Today, the Knowledge Base is available on all Chat & SMS AI Assistant projects, and is not available on any NLU Design project types.

How to find the Sitemap for a website

An XML Sitemap is a file type that contains a list of the embedded URLs within a website. Adding a Sitemap file vs. individually pasting URLs saves time in the Knowledge Base curation process. 

If a Sitemap exists, these steps will help you find it 95% of the time:

  1. In your browser, try adding the following endings to your URL:
    /sitemap, /sitemap.xml, or /sitemap_index.xml
    You'll know it worked if the modified URL returns a page of raw XML listing the site's URLs.


  2. Do an online search for your Sitemap using the following string (be sure to change the URL to your URL before trying in your browser): e.g. "site:website.com filetype:xml"

  3. Try a web crawling service - Enter your URL and see if they can find an available Sitemap. 
    We recommend: https://seositecheckup.com/tools/sitemap-test
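The candidate locations from step 1 can be generated programmatically; this helper is a hypothetical convenience, not part of any Voiceflow API:

```javascript
// Sketch: build the common sitemap locations from step 1 above for a
// given site, so each can be tried in the browser.
function sitemapCandidates(siteUrl) {
  const base = siteUrl.replace(/\/+$/, ""); // drop any trailing slash
  return ["/sitemap", "/sitemap.xml", "/sitemap_index.xml"].map(
    (suffix) => base + suffix
  );
}
```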

 

 
