Once you have a schema and some assets, you can run analysis. Each document gets processed by your chosen model, and results are stored as structured annotations.

The Process

  1. Select content - Pick individual assets or entire bundles
  2. Choose schema - Select which schema to apply
  3. Pick model - OpenAI, Anthropic, Google, or local via Ollama
  4. Run - Each document is processed and results are stored as structured JSON
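The four steps above can be sketched in code. This is an illustrative outline only: names like `run_analysis` and the schema shape are hypothetical, not a real API of this tool.

```python
# Hypothetical sketch of an analysis run; `run_analysis`, the schema
# shape, and the model name are illustrative assumptions.
import json

def run_analysis(assets, schema, model):
    """Process each document and collect structured JSON annotations."""
    results = []
    for asset in assets:
        # In a real run, the chosen model fills in the schema's fields.
        annotation = {
            "asset": asset,
            "model": model,
            "data": {field: None for field in schema["fields"]},
        }
        results.append(annotation)
    return results

schema = {"fields": ["title", "sentiment"]}
annotations = run_analysis(["doc-1.pdf", "doc-2.pdf"], schema, "gpt-4o")
print(json.dumps(annotations[0], indent=2))
```

The key point is the shape of the output: one structured annotation per document, with one entry per schema field.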

Choosing a Model

Different models have different strengths:
  Provider  | Best for                            | Notes
  ----------|-------------------------------------|------------------------------
  OpenAI    | General extraction, high throughput | Reliable, fast, good default
  Anthropic | Nuanced analysis, long documents    | Better at complex reasoning
  Google    | Large context windows               | Good for lengthy documents
  Ollama    | Privacy, local processing           | No data leaves your machine
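For the Ollama row, processing stays local because requests go to Ollama's own HTTP API on your machine. A minimal sketch of the request body (built but not sent here; "llama3" is an example model name):

```python
# Request body for Ollama's local /api/chat endpoint, typically served
# at http://localhost:11434. Built but not sent in this sketch.
import json

payload = {
    "model": "llama3",
    "messages": [
        {"role": "user", "content": "Extract the title and date as JSON."}
    ],
    "format": "json",   # ask Ollama to return JSON-only output
    "stream": False,    # return one complete response, not chunks
}
body = json.dumps(payload)
```

Because the endpoint is on localhost, no document content leaves your machine.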

Batch Processing

When analysing large collections:
  1. Start small - Test on 2-3 documents first
  2. Check outputs - Do they match expectations?
  3. Refine instructions - Tighten schema if needed
  4. Scale up - Run on full collection
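The start-small workflow can be expressed as a simple gate: run a sample, check it, and only then process everything. A minimal sketch, where `analyze` stands in for whichever model call you use:

```python
# Sketch of the start-small batch workflow; `analyze` is a stand-in
# for the real per-document model call.
def analyze(doc, schema):
    return {"doc": doc, "ok": True}  # placeholder result

def batch_run(docs, schema, sample_size=3):
    sample_results = [analyze(d, schema) for d in docs[:sample_size]]
    # Inspect the sample by hand before scaling up.
    if not all(r["ok"] for r in sample_results):
        return sample_results  # stop here and refine the schema instead
    return [analyze(d, schema) for d in docs]  # full run

results = batch_run([f"doc-{i}" for i in range(10)], schema={})
```

The gate keeps a bad schema from burning model calls on the whole collection.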

Monitoring Runs

Active analysis runs show progress in the UI:
  • Documents processed / total
  • Current status (running, completed, failed)
  • Error details if something goes wrong
Failed documents can be retried individually without re-running the entire batch.
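Per-document retry can be sketched from the status tracking described above: filter to failed documents and re-run only those. The status values and `retry` helper here are illustrative.

```python
# Sketch of per-document retry; statuses and the retry helper are
# illustrative, not a real API of this tool.
statuses = {"a.pdf": "completed", "b.pdf": "failed", "c.pdf": "failed"}

def retry(doc):
    return "completed"  # stand-in for re-running one document

for doc, status in statuses.items():
    if status == "failed":           # only retry the failures,
        statuses[doc] = retry(doc)   # not the entire batch

print(sum(s == "completed" for s in statuses.values()))  # → 3
```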

Results

After analysis completes, results appear as annotations on your assets. Each annotation contains the structured data your schema extracted. View results:
  • On the asset - See all annotations for a specific document
  • As fragments - Results curated from a table persist as fragments, shown on the asset detail view
  • In dashboards - Aggregate and visualise across the entire run
  • Via export - Download as CSV or JSON for external analysis
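For the export option, annotations map naturally onto CSV rows: one row per document, one column per schema field. A sketch with illustrative field names:

```python
# Sketch of exporting annotations to CSV for external analysis;
# the annotation fields here are illustrative.
import csv
import io

annotations = [
    {"asset": "doc-1.pdf", "title": "Budget 2024", "sentiment": "neutral"},
    {"asset": "doc-2.pdf", "title": "Q3 Report", "sentiment": "positive"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["asset", "title", "sentiment"])
writer.writeheader()
writer.writerows(annotations)
print(buf.getvalue())
```

The same structure serializes directly to JSON with `json.dump(annotations, f)`.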