Configure your AI models

At a glance

Ontologie lets you connect your own large language models (LLMs) to power AI agents and AI blocks in workflows. Bring your own API key (BYO-key) to use the provider of your choice.

Before you begin

  • Be an administrator of the workspace
  • Have a valid API key from an LLM provider

Supported providers

Provider      Common models
------------  ----------------------------------------------------
Anthropic     Claude Sonnet, Claude Opus, Claude Haiku
OpenAI        GPT-4o, GPT-4o mini
Google        Gemini Pro, Gemini Flash
Mistral       Mistral Large, Mistral Small
Groq          Llama, Mixtral (fast inference)
Cohere        Command R, Command R+
Local model   Any OpenAI API-compatible model (e.g. Ollama, vLLM)

Adding a provider

  1. Go to Workspace settings > AI Models.
  2. Click + Add a provider.
  3. Select the provider from the list.
  4. Enter your API key.
  5. Click Test connection to verify.
  6. Click Save.
Note: Your API key is encrypted at rest and is never exposed in the interface after saving.

Local model (OpenAI API-compatible)

To use a model hosted locally or on your own infrastructure:

  1. Select Local model as the provider.
  2. Enter your API URL (e.g. http://localhost:11434/v1).
  3. Enter the model name (e.g. llama3.1:8b).
  4. Optional: enter an API key if your server requires one.
  5. Test and save.
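
The local-model setup above can also be exercised outside the UI to confirm your server speaks the OpenAI-compatible API. The sketch below (plain Python, standard library only) builds a chat-completion payload; the URL and model name are the example values from the steps and are assumptions, so substitute your own.

```python
import json

# Example values from the steps above -- replace with your own setup.
BASE_URL = "http://localhost:11434/v1"  # your API URL
MODEL = "llama3.1:8b"                   # your model name

def build_chat_request(prompt: str) -> dict:
    """Build a chat-completion payload in the OpenAI-compatible format."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_chat_request("Reply with one word: pong")
# To send it, POST json.dumps(payload) to f"{BASE_URL}/chat/completions"
# with header "Content-Type: application/json" (add "Authorization: Bearer <key>"
# if your server requires one -- step 4 above).
print(json.dumps(payload))
```

If the request returns a valid completion, Ontologie's Test connection step should succeed with the same URL and model name.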

Configuring the default model

  1. In the providers list, click the model you want to set as default.
  2. Click Set as default model.

The default model is used by:

  • New agents created in Agent Studio
  • AI blocks in workflows (unless specifically configured otherwise)
  • Natural language queries in spreadsheets

Choosing a model per agent

Each agent can use a different model:

  1. Open the agent in Agent Studio.
  2. Go to the Configuration tab.
  3. In the Model section, select the desired provider and model.

This lets you use a fast model for simple agents and a more powerful model for complex tasks.
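
The precedence between a per-agent model and the workspace default amounts to a simple fallback. The function below is an illustrative sketch of that rule, not Ontologie's actual code, and the model names are placeholders:

```python
from typing import Optional

def resolve_model(agent_model: Optional[str], workspace_default: str) -> str:
    """An agent's own model wins; otherwise the workspace default applies."""
    return agent_model if agent_model is not None else workspace_default

# A fast model for a simple agent; everything else uses the default.
print(resolve_model("claude-haiku", "claude-sonnet"))  # agent override wins
print(resolve_model(None, "claude-sonnet"))            # falls back to default
```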

Key security

Practice           Description
-----------------  ----------------------------------------------------------------
Encryption         Keys are encrypted at rest (AES-256)
Restricted access  Only administrators can view and modify providers
Rotation           Update your keys regularly; the old key is replaced immediately
Audit              Every LLM call is traced in the workspace logs

Troubleshooting

Problem            Cause                           Solution
-----------------  ------------------------------  ----------------------------------------------------------
"Invalid key"      Incorrect or expired API key    Verify the key on the provider's website
"Model not found"  Incorrect model name            Check the exact model name in the provider's documentation
"Timeout"          The provider is not responding  Check the provider's availability
"Quota exceeded"   Provider quota exhausted        Check your usage on the provider's dashboard
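
As a rough guide, these errors typically correspond to standard HTTP status codes returned by the provider. The mapping below is an assumption about common provider behavior, not a specification of Ontologie's internals:

```python
def classify_llm_error(status_code: int) -> str:
    """Map common provider HTTP status codes to the errors above (typical, not guaranteed)."""
    mapping = {
        401: "Invalid key",      # incorrect or expired API key
        404: "Model not found",  # incorrect model name
        408: "Timeout",          # provider not responding (client-side timeouts also apply)
        429: "Quota exceeded",   # rate limit or quota exhausted
    }
    return mapping.get(status_code, "Unknown error")

print(classify_llm_error(429))  # Quota exceeded
```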

Need help?

Contact us: see Support and contact.