The Zive Assistant is the central AI chatbot interface for all employees. It supports different LLMs and can access the web or your internal company data to answer questions.
This article focuses on the general functionality of the Zive Assistant. If you're looking for guidance on how to get the most out of the assistant, take a look at our article Maximize the Zive Assistant.
How to start a conversation with the Zive Assistant
As the core functionality of the platform, the assistant is your home page. Simply enter your prompt to start a new conversation:
Submitting your input takes you to a thread view where you can continue the conversation with follow-up questions or additional instructions.
Choosing the AI model
At Zive we follow a multi-LLM strategy to provide you and your organization with the flexibility to use all of the newest and best models in one platform.
When hovering over a model, you see additional information about its strengths and a short description. This enables you to choose the model that best fits your use case.
By default, the model selection is set to "Auto". In this mode, our system analyzes your prompt and automatically identifies the best model, so you don't have to choose one manually.
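To make the idea behind "Auto" mode more tangible, here is a minimal, purely illustrative sketch of how prompt-based model routing can work in principle. The model names and keyword heuristics are hypothetical examples only and do not reflect Zive's actual selection logic.

```python
# Purely illustrative sketch of automatic model routing.
# Model names and heuristics are hypothetical, not Zive's implementation.

ROUTES = {
    "code": "model-optimized-for-coding",
    "summarize": "model-optimized-for-long-context",
    "translate": "model-optimized-for-multilingual",
}
DEFAULT_MODEL = "general-purpose-model"

def pick_model(prompt: str) -> str:
    """Pick a model based on simple keyword heuristics in the prompt."""
    lowered = prompt.lower()
    for keyword, model in ROUTES.items():
        if keyword in lowered:
            return model
    return DEFAULT_MODEL

print(pick_model("Summarize the Q3 sales report"))
# -> model-optimized-for-long-context
```

In practice the analysis is of course more sophisticated than keyword matching, but the principle is the same: the prompt itself determines which model handles your request.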
Note: The models available for selection depend on two factors:
The plan you are on. Find more information on our pricing page.
The configuration set by the administrators of your Zive platform. Learn more about configuring the available AI models.
Attaching content
Attaching content is a great way to narrow down the context the assistant should consider for your prompt. While the assistant is capable of searching internally (company knowledge) and externally (web), if you have very specific sources you want to work with, it's worth attaching them directly. You have two different options for attaching content.
Attach from company knowledge
With this option you can search through content that is permanently available in the platform and attach it directly to the conversation. This includes content indexed through data connectors (e.g. SharePoint, Google Drive, Confluence, ...) or uploaded to knowledge collections. Simply enter a search term and select the content you would like to work with:
Upload from your device
If the content is not available in your company knowledge on the platform, you can also upload a local file from your device. Selecting this option opens your file browser (Finder or Explorer), where you can choose the relevant files.
Note: Uploading local files to a conversation will not make them available for other users. Your conversation is always personal.
Configuring AI Nudges (where to search and output format)
The nudges at the bottom let you quickly give the assistant hints on what you expect from it and how it should handle your prompt. Think of them as prompt text snippets - but as buttons, so you don't have to type them out each time.
All of these nudges are completely optional; you do not have to use them. Selecting them is simply a shortcut to make your intentions clearer.
Search in
Here you can guide the assistant on whether and where it should search for additional information. There are two options you can select or deselect:
Company: Enables the assistant to search through your internal knowledge and find relevant information for your prompt or question.
Web: Enables the assistant to search the web for external information relevant for your prompt or question.
So what exactly happens when selecting or deselecting one of these options?
Selecting: Hints the assistant to search internally (Company) or externally (Web), but does not force it to.
Deselecting: Removes the capability entirely, preventing the assistant from searching internally (Company) or externally (Web) for additional information.
By default, both options are activated so as not to limit the assistant's capabilities. You can deselect one or both to restrict whether and where the assistant searches.
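If it helps to think about this in technical terms, the following minimal sketch illustrates the difference between a hint and a hard restriction. The field names and structure are hypothetical and only meant to show the concept, not how Zive actually represents these settings.

```python
# Hypothetical illustration of the "Search in" nudges: selected scopes
# remain available to the assistant as hints, deselected scopes are
# removed from its capabilities entirely. Names/structure are examples only.

from dataclasses import dataclass

@dataclass
class SearchScopes:
    company: bool = True   # default: both scopes enabled
    web: bool = True

def build_request(prompt: str, scopes: SearchScopes) -> dict:
    # Only scopes that remain selected are available; within those,
    # the assistant still decides on its own whether to search at all.
    allowed = [name for name, on in [("company", scopes.company), ("web", scopes.web)] if on]
    return {"prompt": prompt, "allowed_search_scopes": allowed}

# Deselecting "Web" prevents external search; "Company" stays available as a hint.
print(build_request("What is our travel policy?", SearchScopes(web=False)))
# -> {'prompt': 'What is our travel policy?', 'allowed_search_scopes': ['company']}
```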
Format
Selecting the desired output format can significantly improve the quality of the response you receive from the assistant. For example, asking for a comparison can yield completely different responses depending on whether you select "Report" or "Table".
By selecting the format here, you can easily define your expectations without needing to entirely rewrite or restructure your prompt.
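Conceptually, a format nudge works like the prompt text snippets mentioned above: it adds a short format instruction on top of your prompt instead of you rewriting it. The sketch below illustrates this; the exact snippet wording is a hypothetical example, not the instruction Zive actually adds.

```python
# Hypothetical illustration: a format nudge is like appending a short
# format instruction to your prompt. The snippet texts are examples only.

FORMAT_SNIPPETS = {
    "Report": "Structure the answer as a written report with sections and prose.",
    "Table": "Present the answer as a table with clear columns.",
}

def apply_format_nudge(prompt: str, fmt: str) -> str:
    """Append the format instruction without otherwise changing the prompt."""
    return f"{prompt}\n\n{FORMAT_SNIPPETS[fmt]}"

print(apply_format_nudge("Compare our Q1 and Q2 revenue.", "Table"))
```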
Reliability & Limitations
As always when working with LLMs and generative AI, reliability cannot be 100% guaranteed: LLMs can hallucinate or show bias due to the nature of how they function.
Zive has developed a range of highly specialized technologies to ensure that the LLMs stick to the actual knowledge available in your workplace. The most important is the ability to show the sources the Zive Assistant has based its answers on, down to the specific passages within each source.
However, it remains each user's responsibility to validate the answers given by the Zive Assistant and, where necessary, check secondary sources, whether they are company knowledge or external information.