Chat UX
Chat assistants today follow the ChatGPT model: question in, answer out. While this works well for simple questions, complex technical products often require more nuance. RunLLM provides a chat experience that goes far beyond this basic question-and-answer model.
RunLLM's goal is to provide a UX commensurate with what a support engineer would offer. There are three ways in which RunLLM innovates on the experience: code execution, supplementary guidance, and follow-up messages.
If you'd like access to these features on your RunLLM assistant, please reach out! We're happy to set them up and enable them for you. We'll soon post demo videos for each of these features on this documentation site — stay tuned!
Code execution
Answers to developer questions often include code snippets, but as anyone who's used ChatGPT knows, code generated in isolation isn't always accurate. RunLLM validates the code that it generates by executing it in a containerized environment and sharing the results with your users. After executing the code, RunLLM will share the status of the code execution and any edits that can be made to improve the code.
The user experience looks like this (demo video coming soon!):
- User asks a question.
- RunLLM generates an answer that includes a code snippet.
- RunLLM detects that a code snippet has been generated and attempts to execute it. If the code execution succeeds, an update is shared with the user.
- If the code execution fails, RunLLM shares an update with the user and attempts to edit the code to resolve the issue. If it determines the issue is unresolvable, the user is given a summary of the issue.
- If the code is updated, RunLLM executes the new code and shares the success message or a summary of the issue with the user.
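To make the loop above concrete, here's a minimal sketch of what a validate-and-repair cycle could look like. This is an illustration rather than RunLLM's actual implementation: `repair_fn` stands in for the LLM call that edits failing code, and a bare subprocess stands in for the containerized sandbox.

```python
import subprocess
import tempfile
from dataclasses import dataclass

@dataclass
class ExecResult:
    ok: bool
    stderr: str

def run_snippet(code: str, timeout_s: int = 10) -> ExecResult:
    # RunLLM runs code in a containerized environment; a plain subprocess
    # is used here only to keep the sketch self-contained.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    proc = subprocess.run(
        ["python", path], capture_output=True, text=True, timeout=timeout_s
    )
    return ExecResult(ok=proc.returncode == 0, stderr=proc.stderr)

def validate_and_repair(code: str, repair_fn, max_attempts: int = 2):
    # Execute the generated snippet; on failure, hand the error back to
    # the model (`repair_fn` is a hypothetical LLM repair call) and retry.
    result = run_snippet(code)
    for _ in range(max_attempts):
        if result.ok:
            break
        code = repair_fn(code, result.stderr)
        result = run_snippet(code)
    return code, result  # the chat layer reports success or a failure summary
```

The chat layer would then translate `result` into the status updates described in the steps above: a success message when the code runs, or a summary of the issue when it can't be repaired.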
Supplementary guidance
User questions can sometimes be complicated, and even when a question is answerable, it might not be advisable to let the user do exactly the thing they want. For example, if a user asks, "How do I mirror a petabyte of data from S3 onto my NAS?" the right answer is probably, "Why would you want to do that?"
After RunLLM answers the question, it will look at related information to see if there are best practices or guides on the topic that didn't directly answer the question. When possible, it will provide supplementary guidance in order to help point the user in the right direction.
The user experience looks like this (demo video coming soon!):
- User asks a question.
- RunLLM provides an answer.
- RunLLM looks for topics related to the answer it generated; if it finds relevant material, it sends a follow-up message explaining best practices for the topic at hand.
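As a rough sketch, and assuming an embedding-based retriever over your knowledge base, the post-answer step might look something like this. `kb.search`, `llm.complete`, and the relevance threshold are all hypothetical stand-ins, not RunLLM's actual API.

```python
RELEVANCE_THRESHOLD = 0.8  # assumed cutoff for "related enough to mention"

def supplementary_guidance(answer: str, kb, llm):
    # Search against the *answer* (not just the question) so that any
    # guidance is tied to what the user is actually about to do.
    related = [
        doc
        for doc, score in kb.search(answer, top_k=5, doc_type="best_practices")
        if score >= RELEVANCE_THRESHOLD
    ]
    if not related:
        return None  # nothing worth adding, so stay quiet

    prompt = (
        "The user just received the answer below. Using the attached guides, "
        "write a brief follow-up highlighting relevant best practices.\n\n"
        f"Answer: {answer}\n\nGuides: {related}"
    )
    return llm.complete(prompt)
```

The key design choice here is the quiet path: if nothing clears the relevance bar, no follow-up is sent, so users aren't spammed with tangential tips.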
Follow-up messages
A critical part of any support workflow is knowing whether you actually resolved the user's issue, and users are notoriously difficult to get hold of once their problem has been solved. RunLLM follows up with your users to make sure it was able to make them successful, and it tracks this data internally to help you understand how effective it is.
The user experience looks like this (demo video coming soon!):
- User asks a question.
- RunLLM provides an answer.
- If the user downvotes the answer, RunLLM asks them to explain why the answer was wrong or what issue they ran into.
- If the user doesn't interact at all (no follow-up questions, votes, etc.), then RunLLM will follow up 15 minutes later to check if their issue was resolved.
- RunLLM then classifies the user's response as resolved, unresolved, or resolved by others (if a person stepped in to answer the question).
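A simplified version of the resolution tracking might look like the sketch below. The 15-minute delay and the three categories come from the flow above; the scheduler and thread attributes are illustrative assumptions.

```python
from enum import Enum

FOLLOW_UP_DELAY_MINUTES = 15  # from the flow described above

class Resolution(Enum):
    RESOLVED = "resolved"
    UNRESOLVED = "unresolved"
    RESOLVED_BY_OTHERS = "resolved_by_others"

def on_answer_delivered(thread, scheduler):
    # If the user goes quiet (no follow-ups, no votes), check in later.
    scheduler.run_in(
        minutes=FOLLOW_UP_DELAY_MINUTES,
        fn=lambda: thread.send_follow_up() if thread.is_silent() else None,
    )

def classify_outcome(thread) -> Resolution:
    # Classify the thread once the user responds (or a human steps in).
    if thread.answered_by_human:   # a person provided the fix
        return Resolution.RESOLVED_BY_OTHERS
    if thread.user_confirmed_fix:  # e.g. "thanks, that worked!"
        return Resolution.RESOLVED
    return Resolution.UNRESOLVED   # downvote or an unresolved complaint
```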
Note that the analytics data is not currently available on the RunLLM admin dashboard. Please reach out if you'd like access to your resolution data.
If you're excited about these features and want to learn more, please don't hesitate to get in touch with us. We have a number of exciting ideas and can't wait to share what we're working on next!