Chat UX

Chat assistants today follow the ChatGPT model: question in, answer out. While this works well for simple questions, complex technical products often require more nuance. RunLLM provides a chat experience that goes far beyond the basic question-and-answer model.

RunLLM's goal is to provide a UX commensurate with what a support engineer would provide. The sections below describe the ways RunLLM innovates on the user experience.

Configuration

All of these features are configurable in the UI, under the "Show advanced" section of the deployment's configuration modal. Existing deployments can be found in your assistant's Config tab.

Follow-up messages

A critical part of any support workflow is knowing whether you actually resolved the user's issue, and users are notoriously difficult to get a hold of once their problem has been solved. RunLLM follows up with your users to make sure it was able to make them successful, and it tracks this data internally to help you understand how effective it is.

The user experience looks like this (demo video coming soon!):

  1. User asks a question.
  2. RunLLM provides an answer.
  3. If the user downvotes the answer, RunLLM asks them to explain why the answer was wrong or what issue they ran into.
  4. If the user doesn't interact at all (no follow-up questions, votes, etc.), then RunLLM will follow up 30 minutes later to check if their issue was resolved.
  5. User responds to the follow-up message.
  6. Based on the user's response, RunLLM classifies the conversation as resolved, unresolved, or resolved by others (if a person steps in to answer the question).
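
For illustration, here is a minimal Python sketch of the flow above. RunLLM's internal implementation is not public, so every name below is hypothetical, and the keyword-based classifier is only a placeholder for the LLM-based classification the product performs.

```python
# A minimal sketch of the follow-up flow above. RunLLM's implementation is
# not public; every name here is hypothetical, and the keyword classifier
# is a placeholder for the LLM-based classification the product performs.
from enum import Enum

class Resolution(Enum):
    RESOLVED = "resolved"
    UNRESOLVED = "unresolved"
    RESOLVED_BY_OTHERS = "resolved_by_others"  # a person stepped in to answer

FOLLOWUP_DELAY_MINUTES = 30  # the waiting period described in step 4

def needs_followup(has_votes: bool, has_replies: bool) -> bool:
    # Step 4: only check in when the user hasn't interacted at all.
    return not (has_votes or has_replies)

def classify_resolution(reply: str) -> Resolution:
    # Placeholder classifier; a production system would use an LLM here.
    text = reply.lower()
    if "someone else" in text or "a colleague" in text:
        return Resolution.RESOLVED_BY_OTHERS
    if any(w in text for w in ("thanks", "solved", "works now", "resolved")):
        return Resolution.RESOLVED
    return Resolution.UNRESOLVED

if __name__ == "__main__":
    if needs_followup(has_votes=False, has_replies=False):
        print("Send follow-up: did that answer resolve your issue?")
    print(classify_resolution("Thanks, that solved it!"))  # Resolution.RESOLVED
```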

Note that the conversation resolution data is not currently available on the RunLLM admin dashboard. Please reach out if you'd like access to these analytics.

If RunLLM is unable to answer a user's question based on available data, it may run a web search on the user's behalf. The top results from this search will be sent to the user as a follow-up.
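
As a rough sketch of what such a fallback could look like, here is a small example; search_web() is a stub standing in for whatever search backend is actually used, which isn't documented here.

```python
# Hypothetical sketch of the web-search fallback; search_web() is a stub
# standing in for whatever search backend is actually used.
from dataclasses import dataclass

@dataclass
class SearchResult:
    title: str
    url: str

def search_web(query: str, top_k: int = 3) -> list[SearchResult]:
    """Stub: replace with a real search backend."""
    return [SearchResult("Example result", "https://example.com")][:top_k]

def fallback_followup(question: str) -> str:
    # Sent as a follow-up message when no answer could be produced from docs.
    links = "\n".join(f"- {r.title}: {r.url}" for r in search_web(question))
    return "I couldn't answer this from the documentation, but these pages may help:\n" + links
```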

Supplementary guidance

User questions can sometimes be complicated, and even when a question is answerable, it might not be advisable to let the user do exactly the thing they want. For example, if a user asks, "How do I mirror a petabyte of data from S3 onto my NAS?" the right answer is probably, "Why would you want to do that?"

After RunLLM answers the question, it looks at related information to see if there are best practices or guides on the topic that didn't directly answer the question. When possible, it provides supplementary guidance to help point the user in the right direction.

The user experience looks like this (demo video coming soon!):

  1. User asks a question.
  2. RunLLM provides an answer.
  3. RunLLM looks for topics related to the answer generated above; if there are any related pieces of information, it generates a follow-up message explaining best practices related to the topic at hand.
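
Here is an illustrative sketch of step 3, assuming a small corpus of tagged guides. A real system would use embedding similarity for retrieval and an LLM to draft the guidance; the names here are hypothetical, not RunLLM's actual implementation.

```python
# Illustrative sketch of step 3, assuming a small corpus of tagged guides.
# A real system would use embedding similarity for retrieval and an LLM to
# draft the guidance; the names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Doc:
    id: str
    kind: str   # e.g. "best_practice" or "reference"
    text: str

def search_related(answer: str, corpus: list[Doc], top_k: int = 5) -> list[Doc]:
    """Stub retrieval: a production system would rank by semantic similarity."""
    return [d for d in corpus if d.kind == "best_practice"][:top_k]

def supplementary_guidance(answer: str, cited: set[str], corpus: list[Doc]) -> str | None:
    # Surface best practices related to the answer that the answer itself
    # didn't already draw on.
    related = [d for d in search_related(answer, corpus) if d.id not in cited]
    if not related:
        return None  # nothing worth adding; stay quiet
    tips = "\n".join(f"- {d.text}" for d in related)
    return "A few best practices to keep in mind:\n" + tips
```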

Notify on Documentation Inconsistencies

RunLLM will preemptively follow up in conversations that have been flagged as relying on potentially inconsistent documentation. Learn more about how inconsistency detection works here.

Support Team Handoff

If RunLLM is unable to answer a user's question based on available data, it may route the question to an internal support Slack channel. Support team members can then answer the question in that Slack channel and automatically send their response back to the user's conversation. This feature is supported for Slack, Discord, and Chat Widget deployments. The internal Slack channel is configured when setting up this feature in the UI.

The user experience looks like this (demo video coming soon!):

  1. User asks a question on the chat widget, for example.
  2. RunLLM is unable to answer the question given existing data sources. It posts the question to the internal Slack channel and notifies the user that the question has been escalated to the support team.
  3. A support team member answers the question in the internal Slack thread and tags their last response with the 📬 emoji (:mailbox_with_mail:).
  4. RunLLM will confirm whether the response should be sent back verbatim, or as a summarized version. The latter is useful if there is any prolonged discussion within the thread.
  5. After confirmation, the response will be sent back to the user's conversation.
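
To make the handoff mechanics concrete, here is a sketch of how an emoji-triggered relay can be built with Slack's Bolt SDK. The SDK and the reaction_added event are real Slack APIs; relay_to_user_conversation is a hypothetical stand-in for posting back into the user's conversation, and the confirmation step (step 4) is omitted.

```python
# Sketch of an emoji-triggered handoff built on Slack's Bolt SDK. The SDK
# and the reaction_added event are real Slack APIs; everything else
# (relay_to_user_conversation, the confirmation step) is hypothetical.
import os
from slack_bolt import App

app = App(
    token=os.environ["SLACK_BOT_TOKEN"],
    signing_secret=os.environ["SLACK_SIGNING_SECRET"],
)

def relay_to_user_conversation(text: str) -> None:
    """Stub: in the real product, this posts back into the user's conversation."""
    print(f"Relaying to user: {text}")

@app.event("reaction_added")
def on_reaction(event, client):
    if event["reaction"] != "mailbox_with_mail":  # the 📬 tag from step 3
        return
    item = event["item"]
    # Fetch the tagged message from the internal support thread.
    replies = client.conversations_replies(channel=item["channel"], ts=item["ts"])
    tagged_text = replies["messages"][0]["text"]
    # Step 4 (confirm verbatim vs. summarized) is omitted in this sketch.
    relay_to_user_conversation(tagged_text)

if __name__ == "__main__":
    app.start(port=3000)
```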

Instant Learning

Every time a support member sends a response back, RunLLM will perform instant learning on that answer, so that the assistant will be able to answer similar questions in the future!
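
A minimal sketch of what "instant learning" amounts to: the vetted question/answer pair is indexed so future retrieval can find it. The in-memory list below stands in for a real vector store; all names are hypothetical.

```python
# Minimal sketch of "instant learning": index the vetted question/answer
# pair so future retrieval can find it. The in-memory list stands in for
# a persistent vector store; names are hypothetical.
from dataclasses import dataclass

@dataclass
class LearnedAnswer:
    question: str
    answer: str
    source: str = "support_team_handoff"

knowledge_base: list[LearnedAnswer] = []  # stand-in for a persistent index

def instant_learn(question: str, vetted_answer: str) -> None:
    # Store the pair so similar future questions retrieve this answer.
    knowledge_base.append(LearnedAnswer(question, vetted_answer))
```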

Code execution

Answers to developer questions often include code snippets, but as anyone who's used ChatGPT knows, code generated in isolation isn't always going to be accurate. RunLLM validates the code that it generates by executing it in a containerized environment and sharing the results with your users. After executing the code, RunLLM will share the status of the code execution and any edits that can be made to improve the code.

The user experience looks like this (demo video coming soon!):

  1. User asks a question.
  2. RunLLM generates an answer that includes a code snippet.
  3. RunLLM detects that a code snippet has been generated and attempts to execute it. If the code execution succeeds, an update is shared with the user.
  4. If the code execution fails, RunLLM shares an update with the user and attempts to edit the code to resolve the issue. If it determines the issue is unresolvable, the user is given a summary of the issue.
  5. If the code is updated, RunLLM executes the new code and shares the success message or a summary of the issue with the user.
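
Here is a sketch of the validate-and-repair loop above, using Docker's Python SDK (a real library) for sandboxing; attempt_fix() is a hypothetical stand-in for the LLM edit step in step 4, and the container image choice is an assumption.

```python
# Sketch of the validate-and-repair loop, using Docker's real Python SDK
# for sandboxing; attempt_fix() is a hypothetical stand-in for the LLM
# edit step in step 4.
import docker
from docker.errors import ContainerError

client = docker.from_env()

def attempt_fix(code: str, error: str) -> str:
    """Stub: a production system would ask an LLM to repair the snippet."""
    return code  # no-op placeholder

def run_sandboxed(code: str) -> tuple[bool, str]:
    """Execute the snippet in a throwaway container and capture the outcome."""
    try:
        out = client.containers.run(
            "python:3.12-slim", ["python", "-c", code],
            remove=True, network_disabled=True,  # no network, no persistence
        )
        return True, out.decode()
    except ContainerError as e:
        return False, (e.stderr or b"").decode()

def validate_snippet(code: str, max_attempts: int = 2) -> str:
    # Steps 3-5: run the snippet; on failure, try one edited version.
    detail = ""
    for _ in range(max_attempts):
        ok, detail = run_sandboxed(code)
        if ok:
            return "Code executed successfully."
        code = attempt_fix(code, detail)
    return "Could not get the snippet to run; last error:\n" + detail
```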

There's more on the roadmap

If you're excited about these features and want to learn more, please don't hesitate to get in touch with us. We have a number of exciting ideas and can't wait to share what we're working on next!