The Set AI Step allows you to dynamically set variables in your Assistant based on a prompt that you provide. These variables are set in real time at runtime during each session with your Assistant.
The Set AI Step is an experimental feature leveraging Large Language Models (LLMs) and should not be used in production for business-critical applications because of its potential to generate misleading or false information. For that reason, you will be required to opt in to use this functionality.
The Set AI Step will be available by default on all AI Assistant projects, but will not be available on any NLU Design project types.
Adding a Set AI Step to your Assistant
Once enabled, you can find the Set AI Step in your Assistant's Steps Menu, under the AI section. The step can be added anywhere in your assistant. Once you have placed your step, you can configure it in the Editor.
To configure your variable, provide a description in the Prompt field of the type of function you'd like the AI to run for you. You can leverage variables from your project within this prompt to make it dynamic. You have two options for the Data Source your LLM will leverage: AI Model or Knowledge Base. The default setting is AI Model.
From the Apply To field, select the variable you wish to map the AI's response to.
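Conceptually, the step interpolates your project variables into the prompt, sends the result to the selected Data Source, and writes the completion back to the variable you chose in Apply To. A minimal sketch of that flow (the names `render_prompt`, `variables`, and `call_llm` are illustrative, not Voiceflow APIs):

```python
import re

def render_prompt(template: str, variables: dict) -> str:
    """Replace {variable} placeholders with values from the session."""
    return re.sub(r"\{(\w+)\}", lambda m: str(variables.get(m.group(1), "")), template)

# Session variables captured earlier in the conversation
variables = {"user_name": "Ada", "topic": "refunds"}

template = "Summarize what {user_name} should know about {topic} in one sentence."
prompt = render_prompt(template, variables)

# The step would send `prompt` to the AI Model or Knowledge Base, then map the
# completion onto the variable chosen in "Apply To":
# variables["summary"] = call_llm(prompt)   # call_llm is hypothetical
print(prompt)
```

The key point is that the prompt is rendered fresh each session, so the same step produces different completions for different users.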
You can add multiple prompts to the same step. These prompts run in parallel, so you cannot leverage the variable output by one prompt in another. If you wish to do this, use two sequential Set AI Steps; the generation time for your results will just take a bit longer.
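To see why one prompt cannot read another's output within the same step, consider a sketch where every prompt runs against a snapshot of the variables taken before any of them start (the function and variable names here are illustrative, not the platform's internals):

```python
from concurrent.futures import ThreadPoolExecutor

def run_prompt(prompt_id: str, snapshot: dict) -> tuple:
    """Stand-in for an LLM call; each prompt sees the same pre-run snapshot."""
    return prompt_id, f"result for {prompt_id} (saw {sorted(snapshot)})"

variables = {"city": "Paris"}
snapshot = dict(variables)  # taken once, before any prompt runs

# All prompts in one Set AI Step run in parallel against the same snapshot,
# so prompt "b" cannot see the variable prompt "a" is about to set.
with ThreadPoolExecutor() as pool:
    results = dict(pool.map(lambda p: run_prompt(p, snapshot), ["a", "b"]))

variables.update(results)  # outputs only become visible after the step finishes
```

Two sequential Set AI Steps work because the second step takes its snapshot after the first has written its outputs.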
Knowledge Base Data Source Considerations:
- If you're using Knowledge Base as the Data Source in your Set AI step, please note that you cannot use prompt engineering to prescribe a specific data file to use. The Set AI step searches the entire Knowledge Base, selecting data "chunks" with the highest relevance to the question/prompt. Keep this in mind when curating your Knowledge Base content ("garbage in, garbage out").
- Knowledge Base Settings (model, temperature, etc.) are only configurable at the Assistant level.
Configuring your Prompt
There are currently four ways to configure the prompt you've provided to modify the potential output:
- Model - This is the model that will be used to create your prompt completion. Each model has its own strengths, weaknesses, and token multiplier, so select the one that is best for your task.
- GPT-3 DaVinci - most stable performance, best suited for simpler functions
- GPT-3.5-Turbo - fast results, average performance on reasoning and conciseness
- GPT-4 - most advanced reasoning and conciseness, slower results (only available on Pro and Enterprise Plans)
- Claude 1 - consistently fast results, moderate reasoning performance
- Claude Instant 1.3 - fastest results, best suited for simpler functions
- Claude 2 - advanced context handling (strong summarization capabilities)
- Temperature - This allows you to influence how much your responses vary from the prompt. A higher temperature will result in more variability in your responses. A lower temperature will result in responses that directly address the prompt, providing more exact answers. If you want more exact responses, turn down your temperature.
- Max Tokens - This sets the total number of tokens you want to use when completing your prompt. The maximum number of tokens available per response is 512 (your prompt and settings count on top of that). A greater max token count means a higher risk of longer response latency.
- System - This is the instruction you can give to the LLM to frame how it should behave. Giving the model a 'job' will help it provide a more contextual answer. Here you can also define response length, structure, personality, tone, and/or response language. System instructions get combined with the question/prompt, so be sure they don't contradict each other.
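The four settings above correspond closely to the parameters of a typical chat-completion request. As a rough illustration, here is an OpenAI-style payload shape (exact field names vary by provider, and the example contents are hypothetical):

```python
request = {
    "model": "gpt-3.5-turbo",   # which model generates the completion
    "temperature": 0.3,         # lower = more exact, less varied answers
    "max_tokens": 512,          # upper bound on the response length
    "messages": [
        # System: frames the model's "job", tone, and response format
        {"role": "system",
         "content": "You are a concise support agent. Answer in one sentence."},
        # The prompt you configured, with project variables already interpolated
        {"role": "user",
         "content": "What is the refund window for order 1234?"},
    ],
}

# A provider SDK would send this payload; here we only validate its shape.
assert 0.0 <= request["temperature"] <= 1.0
assert request["messages"][0]["role"] == "system"
```

Note how the system message and the user prompt travel together in one request, which is why contradictory instructions between them degrade the output.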
Testing your Generated Variables
You can test your prompt using the Preview button; if your prompt includes a variable, you will be asked to provide an example value for it.
When you run your assistant in the Test Tool or in Sharable Prototypes, any Set AI Steps you have configured will be active, generating their response content dynamically.
The Set AI Step is still an experimental feature, and for this reason it is not recommended for serious production use cases.