Prompt LLM

Sends a text prompt to a language model (LLM) and retrieves the generated response. Supports text, JSON, and structured output formats. The result includes metrics (tokens consumed, latency, model used) useful for cost tracking.

Parameters

| Parameter | Type | Required | Variable | Description |
|---|---|---|---|---|
| promptTemplate | text | Yes | Yes | Prompt text sent to the AI model. Supports `{{variable}}` syntax. (min 5 chars) |
| model.provider | text | Yes | No | Language model provider (e.g. openai, anthropic, mistral). |
| model.model | text | Yes | No | Model identifier to use (e.g. gpt-4o, claude-3-opus). |
| parameters.temperature | number | No | No | Model creativity. 0 = deterministic, 1 = more creative. (Default: 0.7, min 0, max 2) |
| parameters.maxTokens | number | No | No | Maximum number of tokens in the generated response. (Default: 2048, min 1, max 128000) |
| outputFormat | choice (text, json, structured) | No | No | Response format: plain text, JSON, or typed structure. (Default: "text") |
| systemPrompt | text | No | Yes | System instructions sent before the main prompt. Defines model behavior. |
| outputVariable | text | No | No | Output variable name containing the model response. |

Parameters marked Variable = Yes accept the `{{blockName.field}}` syntax.
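To illustrate how `{{blockName.field}}` placeholders are resolved before the prompt is sent, here is a minimal sketch of the substitution step. The `render_template` helper and the `extract.text` field are hypothetical names used only for this example; the actual engine's implementation may differ.

```python
import re

def render_template(template: str, context: dict) -> str:
    """Replace {{blockName.field}} placeholders with values from `context`,
    which maps dotted paths (e.g. "extract.text") to values.
    Unknown placeholders are left untouched."""
    def resolve(match: re.Match) -> str:
        path = match.group(1)
        return str(context.get(path, match.group(0)))
    return re.sub(r"\{\{\s*([\w.]+)\s*\}\}", resolve, template)

# Hypothetical usage: a prior block named "extract" exposes a "text" field.
prompt = render_template(
    "Summarize the following text: {{extract.text}}",
    {"extract.text": "Supplier Acme delivered 500 units on March 15."},
)
```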

Output

Output variable: `promptResult`

```json
{
  "success": false,
  "content": "...",
  "rawContent": "...",
  "parsedData": "...",
  "outputFormat": "...",
  "model": "...",
  "provider": "...",
  "finishReason": "...",
  "tokensInput": 0,
  "tokensOutput": 0,
  "totalTokens": 0,
  "latencyMs": 0
}
```
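A downstream consumer typically checks `success` first, then picks the right payload field depending on `outputFormat`. The field names below come from the output schema above; the `handle_prompt_result` helper itself is a hypothetical sketch, not part of the product API.

```python
def handle_prompt_result(result: dict):
    """Pick the usable payload out of the block's output variable."""
    if not result["success"]:
        # finishReason may indicate truncation or a provider-side failure
        raise RuntimeError(f"LLM call failed (finishReason={result.get('finishReason')})")
    if result["outputFormat"] in ("json", "structured"):
        return result["parsedData"]  # already-parsed object for non-text formats
    return result["content"]         # plain text otherwise

# Minimal usage with a text-format result:
example = {
    "success": True,
    "content": "Acme delivered 500 units.",
    "outputFormat": "text",
}
summary = handle_prompt_result(example)
```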

Example

Generate a summary from input data.

Input:

```json
{"text": "Supplier Acme delivered 500 units on March 15..."}
```

Output:

```json
{"success": true, "content": "Summary: Acme delivered 500 units on 03/15.", "rawContent": "Summary: Acme delivered 500 units on 03/15.", "outputFormat": "text", "model": "gpt-4o-mini", "provider": "openai", "tokensInput": 42, "tokensOutput": 18, "totalTokens": 60, "latencyMs": 1165}
```

Common errors

| Problem | Solution |
|---|---|
| The model is not responding | Check that the LLM provider is configured in the workspace settings. |
| The prompt is too short | Enter at least 5 characters in the prompt field. |
Tip

Use `{{promptResult.content}}` for the text response and `{{promptResult.tokensOutput}}` to track token consumption.
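Since the output exposes `tokensInput` and `tokensOutput`, a per-call cost estimate is straightforward. The sketch below uses made-up per-1k-token prices purely for illustration; substitute your provider's actual pricing.

```python
def estimate_cost_usd(tokens_input: int, tokens_output: int,
                      price_in_per_1k: float, price_out_per_1k: float) -> float:
    """Rough cost estimate from the block's token metrics.
    Prices are USD per 1000 tokens and must come from your provider."""
    return (tokens_input / 1000 * price_in_per_1k
            + tokens_output / 1000 * price_out_per_1k)

# Using the token counts from the example output above (42 in / 18 out)
# with hypothetical prices:
cost = estimate_cost_usd(42, 18, price_in_per_1k=0.00015, price_out_per_1k=0.0006)
```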