## Foundation Services

- **Language Models**: Chat, classification, structured extraction. The core AI capability.
- **Embeddings**: Vector search, similarity matching, semantic discovery within infospaces.
- **Web Search**: Live web results for chat, grounding responses in current information.
- **Geocoding**: Extract and map locations from your documents. Powers geographic visualizations.

## API Key Storage

Two storage modes for each provider:

| Mode | What it does | Use when |
|---|---|---|
| Runtime Only | Keys stay in your browser | Quick testing, interactive use |
| Save for Tasks | Keys encrypted in the database | Background jobs, large analysis runs, automation |
## Providers

### Language Models

| Provider | Get Key |
|---|---|
| Anthropic | console.anthropic.com |
| OpenAI | platform.openai.com |
| Google | aistudio.google.com |
| Ollama | Local — no key needed |
#### Ollama

Ollama is a local language model server with a wide catalog of models, and a great way to get started with local inference.

1. Select Ollama as the LLM and embedding provider.
2. Select a model.
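Once Ollama is running, it listens on a local HTTP API (default port 11434). A minimal sketch of building a request body for Ollama's `/api/generate` endpoint — the model name is an example; use whichever model you pulled:

```python
import json

OLLAMA_URL = "http://localhost:11434"  # Ollama's default local port

def generate_body(model: str, prompt: str) -> bytes:
    """JSON body for Ollama's /api/generate endpoint (non-streaming)."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

# Sending it requires a running Ollama server, e.g.:
# import urllib.request
# req = urllib.request.Request(
#     f"{OLLAMA_URL}/api/generate",
#     data=generate_body("llama3.2", "Say hello"),
#     headers={"Content-Type": "application/json"},
# )
# print(json.load(urllib.request.urlopen(req))["response"])
```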
### Embeddings

Used for vectorizing content in your infospaces. Configure separately from language models.

| Provider | Notes |
|---|---|
| OpenAI | Reliable, widely used |
| Ollama | Local embeddings, complete privacy |
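Embeddings power similarity matching: each piece of content becomes a vector, and closeness is typically measured with cosine similarity. A minimal sketch using toy 3-dimensional vectors (real embeddings have hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" — a query should score closer to a related document:
doc = [0.2, 0.8, 0.1]
query = [0.25, 0.75, 0.05]
unrelated = [0.9, 0.05, 0.4]

assert cosine_similarity(doc, query) > cosine_similarity(doc, unrelated)
```

Vector search over an infospace is this same comparison applied across all stored vectors, usually accelerated by an index rather than a linear scan.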
### Web Search

Enables the chat to pull in live search results.

| Provider | Get Key |
|---|---|
| Tavily | tavily.com |
| SearXNG | Integration returning to the stack soon |
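For reference, Tavily is a plain REST API: a POST to its `/search` endpoint with your key and query. A sketch of building that request body — the field names reflect Tavily's documented schema at the time of writing, so verify against tavily.com before relying on them:

```python
import json

TAVILY_ENDPOINT = "https://api.tavily.com/search"

def tavily_body(api_key: str, query: str, max_results: int = 5) -> bytes:
    """JSON body for a Tavily search request (field names per Tavily's docs;
    check tavily.com for the current schema)."""
    return json.dumps(
        {"api_key": api_key, "query": query, "max_results": max_results}
    ).encode()
```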
### Geocoding

Location extraction uses Nominatim. Self-hosted deployments include it by default, and no API key is needed for a local Nominatim instance.

## Verifying Setup
- Open Chat and send a message — confirms language model works
- Upload an asset and check if it appears in vector search — confirms embeddings
- Ask chat to search the web — confirms web search
- Run an analysis with location fields — confirms geocoding
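To double-check geocoding independently of the app, you can query Nominatim's search endpoint directly. A sketch assuming a self-hosted instance — the base URL here is an example; adjust it to your deployment:

```python
from urllib.parse import urlencode

NOMINATIM_URL = "http://localhost:8080"  # example base URL; adjust to your deployment

def geocode_url(place: str, limit: int = 1) -> str:
    """Build a Nominatim /search URL that returns JSON results with lat/lon."""
    query = urlencode({"q": place, "format": "jsonv2", "limit": limit})
    return f"{NOMINATIM_URL}/search?{query}"

# Fetch with urllib (requires a running Nominatim instance), e.g.:
# import json, urllib.request
# results = json.load(urllib.request.urlopen(geocode_url("Berlin, Germany")))
# print(results[0]["lat"], results[0]["lon"])
```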


