# LLM Prompt

Generates a response from a Large Language Model (LLM) based on a given prompt.

## Inputs

| Port | Description |
| --- | --- |
| Model Name | The name of the model to use. The provider is inferred from the prefix. |
| User Prompt | The primary prompt or question to send to the LLM. |
| System Prompt | Optional instructions to guide the AI's behavior, persona, or response style. |
| Attachments | A list of file streams (e.g., images) to send with the prompt. |
| API Key | The API key for the selected cloud provider. Not required for local Ollama models. |

## Outputs

| Port | Description |
| --- | --- |
| Success | Executes if the text generation is successful. |
| Error | Executes if there is an error during text generation. |
| Response | The generated text response from the LLM. |

## Addendum

This node sends a prompt to an LLM and outputs the generated response. It supports text and multimodal inputs (e.g., images).

### Model Providers

The provider is automatically selected based on the model name prefix:

  • OpenAI: gpt-, o1-, o2-, o3- (e.g., gpt-4o)
  • xAI (Grok): grok- (e.g., grok-1.5)
  • Anthropic: claude- (e.g., claude-3-5-sonnet-20240620)
  • Google AI: gemini- (e.g., gemini-1.5-pro)
  • Mistral AI: mistral-, mixtral-
  • Ollama: Any other name (e.g., llama3) is assumed to be a local model served by Ollama.
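The prefix-based routing above can be sketched as follows. This is a minimal illustration of the rule, not the node's actual implementation; the function name `infer_provider` and the provider labels it returns are hypothetical.

```python
def infer_provider(model_name: str) -> str:
    """Infer the LLM provider from the model-name prefix (illustrative sketch)."""
    prefix_map = [
        (("gpt-", "o1-", "o2-", "o3-"), "openai"),
        (("grok-",), "xai"),
        (("claude-",), "anthropic"),
        (("gemini-",), "google"),
        (("mistral-", "mixtral-"), "mistral"),
    ]
    for prefixes, provider in prefix_map:
        if model_name.startswith(prefixes):
            return provider
    # Any unrecognized name falls through to a local Ollama model.
    return "ollama"
```

For example, `infer_provider("gpt-4o")` routes to OpenAI, while `infer_provider("llama3")` falls through to Ollama.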

### Authentication

For cloud-based LLM providers, you must provide an API key. For local Ollama models, no key is needed.

### Temperature

The node automatically sets a default temperature based on the model name, which influences the randomness and creativity of the response. This default cannot be overridden.

| Model Prefix(es) | Temperature |
| --- | --- |
| gpt-5 | 1.0 |
| gpt-, o1-, o2-, o3- | 0.7 |
| grok- | 0.7 |
| gemini- | 0.7 |
| claude- | 1.0 |
| mistral-, mixtral- | 0.8 |
| Ollama / Other | 0.7 |
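The table above can be sketched as a lookup in which the more specific `gpt-5` prefix must be checked before the general `gpt-` prefix. This is an illustrative sketch, not the node's actual code; the function name `default_temperature` is hypothetical.

```python
def default_temperature(model_name: str) -> float:
    """Return the node's fixed default temperature for a model (illustrative sketch)."""
    # Check the more specific gpt-5 prefix before the general gpt- prefix.
    if model_name.startswith("gpt-5"):
        return 1.0
    if model_name.startswith(("gpt-", "o1-", "o2-", "o3-", "grok-", "gemini-")):
        return 0.7
    if model_name.startswith("claude-"):
        return 1.0
    if model_name.startswith(("mistral-", "mixtral-")):
        return 0.8
    # Ollama / any other model name.
    return 0.7
```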

ID: core/llm-prompt@v1