Prompt LLM
Sends a text prompt to a language model (LLM) and retrieves the generated response. Supports text, JSON, and structured output formats. The result includes metrics (tokens consumed, latency, model used) that are useful for cost tracking.
Parameters
| Parameter | Type | Required | Variable | Description |
|---|---|---|---|---|
| promptTemplate | text | Yes | Yes | Prompt text sent to the AI model. Supports {{variable}} syntax. (min 5 chars) |
| model.provider | text | Yes | No | Language model provider (e.g. openai, anthropic, mistral). |
| model.model | text | Yes | No | Model identifier to use (e.g. gpt-4o, claude-3-opus). |
| parameters.temperature | number | No | No | Model creativity: 0 = deterministic, 1 = more creative. (Default: 0.7, min 0, max 2) |
| parameters.maxTokens | number | No | No | Maximum number of tokens in the generated response. (Default: 2048, min 1, max 128000) |
| outputFormat | choice (text, json, structured) | No | No | Response format: plain text, JSON, or typed structure. (Default: "text") |
| systemPrompt | text | No | Yes | System instructions sent before the main prompt; defines model behavior. |
| outputVariable | text | No | No | Name of the output variable containing the model response. |
Parameters marked Variable = Yes accept the {{blockName.field}} syntax.
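To illustrate how {{blockName.field}} interpolation behaves, here is a minimal sketch of a template renderer. The helper name and the context layout are hypothetical illustrations, not part of the platform:

```python
import re

def render_template(template: str, context: dict) -> str:
    """Replace {{blockName.field}} placeholders with values taken from a
    nested context dict, e.g. {"input": {"text": "..."}}.
    Illustrative only -- the platform performs this substitution itself."""
    def resolve(match: re.Match) -> str:
        value = context
        # Walk the dotted path (blockName.field) through the context.
        for part in match.group(1).strip().split("."):
            value = value[part]
        return str(value)
    return re.sub(r"\{\{([^}]+)\}\}", resolve, template)

prompt = render_template(
    "Summarize the following text:\n{{input.text}}",
    {"input": {"text": "Supplier Acme delivered 500 units on March 15."}},
)
```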
Output
Output variable: promptResult
```json
{
  "success": false,
  "content": "...",
  "rawContent": "...",
  "parsedData": "...",
  "outputFormat": "...",
  "model": "...",
  "provider": "...",
  "finishReason": "...",
  "tokensInput": 0,
  "tokensOutput": 0,
  "totalTokens": 0,
  "latencyMs": 0
}
```
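The token fields above make per-call cost tracking straightforward. A minimal sketch, assuming per-1,000-token prices taken from your provider's pricing page (the figures used below are placeholders, not real pricing):

```python
def estimate_cost_usd(result: dict, price_in_per_1k: float, price_out_per_1k: float) -> float:
    """Estimate a single call's cost from the block's token metrics.
    Prices are per 1,000 tokens; they vary by provider and model, so the
    values passed below are placeholders."""
    return (result["tokensInput"] / 1000.0) * price_in_per_1k \
         + (result["tokensOutput"] / 1000.0) * price_out_per_1k

# Placeholder prices: $0.15 / 1k input tokens, $0.60 / 1k output tokens.
cost = estimate_cost_usd({"tokensInput": 42, "tokensOutput": 18}, 0.15, 0.60)
```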
Example
Generate a summary from input data.
Input:
```json
{"text": "Supplier Acme delivered 500 units on March 15..."}
```
Output:
```json
{"success": true, "content": "Summary: Acme delivered 500 units on 03/15.", "rawContent": "Summary: Acme delivered 500 units on 03/15.", "outputFormat": "text", "model": "gpt-4o-mini", "provider": "openai", "tokensInput": 42, "tokensOutput": 18, "totalTokens": 60, "latencyMs": 1165}
```
Common errors
| Problem | Solution |
|---|---|
| The model is not responding | Check that the LLM provider is configured in the workspace settings. |
| The prompt is too short | Enter at least 5 characters in the prompt field. |
Tip
Use {{promptResult.content}} for the text response and {{promptResult.tokensOutput}} to track token consumption.