Application: An application is the highest-level concept in RunLLM. An application is a collection of workloads that are related to the same functionality. For example, a retrieval-augmented chatbot that uses your company's internal wiki would be an application; it would be composed of one (offline, scheduled) workload to index your documents into a vector DB and another (online, on-demand) workload to answer user questions. Each one of these workloads is called a task.
Task: A task is a single workload in RunLLM. It's defined by one or more Python functions that are executed in conjunction with each other. A task can either be run on a fixed schedule (e.g., every night at midnight) or be connected to a REST API and executed on-demand. Tasks are defined by the composition of primitives.
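To make the idea concrete, here is a minimal plain-Python sketch of a task as a composition of functions. This is not the RunLLM SDK; the helper `make_task` and the toy `retrieve` and `generate` functions are hypothetical, chosen only to illustrate how several Python functions can be executed in conjunction as one on-demand workload.

```python
from typing import Callable, List

def make_task(steps: List[Callable]) -> Callable:
    """Chain functions so each step receives the previous step's output."""
    def run(value):
        for step in steps:
            value = step(value)
        return value
    return run

# Two toy "primitives" composed into a single question-answering task.
def retrieve(question: str) -> dict:
    # In a real workload this would query a vector DB for context.
    return {"question": question, "context": "docs about onboarding"}

def generate(payload: dict) -> str:
    # In a real workload this would call a model provider.
    return f"Answer to '{payload['question']}' using {payload['context']}"

answer_task = make_task([retrieve, generate])
print(answer_task("How do I request a laptop?"))
```

The same composed function could be invoked by a scheduler for nightly runs or wired behind a REST endpoint for on-demand execution.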
Primitive: A primitive is a single Python function that is executed as a part of a task. RunLLM comes with five default primitives — including generate — and a catch-all custom primitive that allows you to write arbitrary Python code. Each primitive is strongly typed and comes with a default implementation using state-of-the-art open-source tools like LangChain and LlamaIndex. Primitives come with sensible defaults for LLM parameters, all of which can be configured; primitives can also be fully customized with bespoke implementations.
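A hedged sketch of what "strongly typed, with configurable defaults, and fully customizable" can look like for a primitive. The names here (`GenerateConfig`, the `impl` hook, the `"gpt-4o"` default) are assumptions for illustration only, not RunLLM's actual API.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass(frozen=True)
class GenerateConfig:
    # Hypothetical defaults; in RunLLM these LLM parameters are configurable.
    model: str = "gpt-4o"
    temperature: float = 0.0

def generate(
    prompt: str,
    config: GenerateConfig = GenerateConfig(),
    impl: Optional[Callable[[str, GenerateConfig], str]] = None,
) -> str:
    """Typed primitive: sensible defaults, overridable config, swappable impl."""
    if impl is not None:
        # A bespoke implementation fully replaces the default one.
        return impl(prompt, config)
    # Stand-in for the default implementation (e.g., a LangChain call).
    return f"[{config.model} @ T={config.temperature}] {prompt}"

# Default behavior, tweaked config, and a fully custom implementation:
print(generate("Summarize the wiki page."))
print(generate("Summarize.", GenerateConfig(temperature=0.7)))
print(generate("Summarize.", impl=lambda p, c: p.upper()))
```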
Resource: A resource is a third-party system or service that RunLLM interacts with. Resources include data sources (e.g., Snowflake, Google Docs), model providers (e.g., OpenAI), and vector DBs (e.g., Chroma, Pinecone). You can connect RunLLM to your resources from our Python SDK or from the Resources page on the UI. Most primitives operate on one or more resources — for example, the generate primitive will use a model provider as a resource.
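One way to picture the relationship between primitives and resources is dependency injection: the primitive depends only on the resource's interface, so any connected provider can be swapped in. This is a generic sketch under assumed names (`ModelProvider`, `FakeOpenAI`), not RunLLM's own abstraction.

```python
from typing import Protocol

class ModelProvider(Protocol):
    """Interface a model-provider resource is assumed to expose."""
    def complete(self, prompt: str) -> str: ...

class FakeOpenAI:
    """Stand-in for a real model-provider resource such as OpenAI."""
    def complete(self, prompt: str) -> str:
        return f"completion for: {prompt}"

def generate(prompt: str, provider: ModelProvider) -> str:
    # The primitive only calls the resource's interface; it does not
    # care which concrete provider was connected.
    return provider.complete(prompt)

print(generate("Hello", FakeOpenAI()))
```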