Understanding Providers

Providers are the AI brains behind Chorum. Here’s how they work.


What’s a Provider?

A provider is a company that runs AI models. Chorum supports many:

[Screenshot: Provider dropdown]

| Provider | Models | Known For |
|---|---|---|
| Anthropic | Claude 4 Sonnet, Claude Opus | Thoughtful reasoning, safety |
| OpenAI | GPT-4 Turbo, GPT-4o | Versatility, code generation |
| Google | Gemini 1.5 Pro | Long context, multimodal |
| DeepSeek | DeepSeek models | Cost-effective reasoning |
| Perplexity | Perplexity AI | Search-augmented |
| xAI | Grok | Real-time info, unfiltered |
| GLM-4 | Zhipu AI models | Chinese language support |
| Local | Ollama, LM Studio | Privacy, no cost |

Each provider has different strengths, costs, and quirks.


How Routing Works

When you send a message, Chorum’s router decides which provider to use:

```
Your Message
      ↓
[What kind of task is this?]
      ↓
[Which providers can handle it?]
      ↓
[Which are under budget?]
      ↓
[Pick the cheapest capable one]
      ↓
Send to Provider
```
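The decision steps above can be sketched in a few lines. This is an illustrative model, not Chorum's actual router code; the provider names, prices, and budgets below are made-up example data:

```python
# Illustrative sketch of the routing steps above (not Chorum's actual code).
# Each provider lists which task types it handles, its output price per 1M
# tokens, and its daily budget state. All values here are hypothetical.
PROVIDERS = [
    {"name": "Claude Haiku",  "tasks": {"simple"},
     "price": 1.25, "spent": 0.0, "budget": 5.0},
    {"name": "Claude Sonnet", "tasks": {"simple", "creative", "code"},
     "price": 15.0, "spent": 0.0, "budget": 5.0},
    {"name": "Claude Opus",   "tasks": {"simple", "creative", "code", "reasoning"},
     "price": 75.0, "spent": 6.0, "budget": 5.0},  # already over budget
]

def route(task_type):
    """Pick the cheapest provider that can handle the task and is under budget."""
    capable = [p for p in PROVIDERS if task_type in p["tasks"]]    # can handle it?
    in_budget = [p for p in capable if p["spent"] < p["budget"]]   # under budget?
    if not in_budget:
        return None  # nothing available; a real router would warn or fall back
    return min(in_budget, key=lambda p: p["price"])                # cheapest capable

print(route("simple")["name"])   # Haiku: cheapest provider that handles "simple"
print(route("reasoning"))        # None: only Opus qualifies, and it's over budget
```

The key property is the ordering of the filters: capability first, budget second, price last, so a cheap model never wins a task it can't handle.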

Task Types

The router detects task type automatically:

| If Your Message Looks Like… | Router Thinks… | Routes To… |
|---|---|---|
| "Analyze this dataset" | Deep reasoning | Opus/GPT-4 |
| "Write a poem" | Creative/balanced | Sonnet |
| "Summarize this doc" | Fast/simple | Haiku |
| "Review this code" | Code analysis | GPT-4/Sonnet |

You can override this. But the router is pretty good.
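A detector like the one in the table could be as simple as keyword matching. This is a hedged sketch under that assumption; Chorum's real detector may use a more sophisticated classifier, and the keyword lists here are invented for illustration:

```python
# Hypothetical keyword-based task detection (illustrative only).
# Rules are checked in priority order; the first match wins.
RULES = [
    ("reasoning", ("analyze", "prove", "explain why")),
    ("code",      ("review this code", "refactor", "write a function")),
    ("creative",  ("poem", "story", "brainstorm")),
    ("simple",    ("summarize", "translate", "list")),
]

def detect_task(message):
    text = message.lower()
    for task, keywords in RULES:
        if any(k in text for k in keywords):
            return task
    return "simple"  # default to the cheapest tier when nothing matches

print(detect_task("Analyze this dataset"))    # reasoning
print(detect_task("Write a poem about fog"))  # creative
print(detect_task("Summarize this doc"))      # simple
```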


Cost Comparison

Rough cost per 1M tokens (as of 2025):

| Model | Input | Output |
|---|---|---|
| Claude Opus | $15 | $75 |
| GPT-4 Turbo | $10 | $30 |
| Claude Sonnet | $3 | $15 |
| GPT-4o | $5 | $15 |
| Claude Haiku | $0.25 | $1.25 |
| Gemini 1.5 Pro | $1.25 | $5 |
| Local (Ollama) | $0 | $0 |

The router factors this in. For simple questions, it’ll prefer cheaper models.
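Per-1M-token pricing means the cost of one request is input tokens times the input rate plus output tokens times the output rate, each divided by one million. A quick worked example using the table's numbers:

```python
# Cost of one request using the per-1M-token prices from the table above.
PRICES = {  # model: (input $/1M tokens, output $/1M tokens)
    "Claude Opus":   (15.00, 75.00),
    "Claude Sonnet": (3.00, 15.00),
    "Claude Haiku":  (0.25, 1.25),
}

def request_cost(model, input_tokens, output_tokens):
    inp, out = PRICES[model]
    return (input_tokens * inp + output_tokens * out) / 1_000_000

# A 2,000-token prompt with a 500-token reply:
print(f"${request_cost('Claude Sonnet', 2000, 500):.4f}")   # $0.0135
print(f"${request_cost('Claude Haiku', 2000, 500):.6f}")    # $0.001125
```

At this scale the gap looks tiny, but over thousands of messages a day the Haiku-vs-Sonnet difference is roughly 12×, which is why the router prefers cheaper models for simple tasks.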


Setting Up Providers

[Screenshot: Add Provider modal]

  1. Go to Settings → Providers
  2. Click Add Provider
  3. Select a provider from the dropdown
  4. Paste your API key
  5. (Optional) Set a daily budget

Where to Get API Keys

| Provider | URL |
|---|---|
| Anthropic | console.anthropic.com |
| OpenAI | platform.openai.com/api-keys |
| Google | aistudio.google.com/apikey |
| xAI | console.x.ai |

Local Models (Free)

Want to run models on your own hardware?

  1. Install Ollama or LM Studio
  2. Pull a model: `ollama pull llama3` or `ollama pull phi3`
  3. In Chorum, add Ollama/LM Studio as a provider
  4. No API key needed

Local models are slower but completely private and free.
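Under the hood, Ollama serves a local HTTP API on port 11434, which is what a client like Chorum talks to. Here is a minimal sketch of the request body for Ollama's `/api/generate` endpoint (the endpoint and fields are Ollama's documented API; how Chorum actually uses it is an assumption):

```python
import json

# Ollama listens on localhost:11434 once it's running; no API key required.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model, prompt):
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False})

body = build_request("llama3", "Summarize this doc")
print(body)

# To actually send it (requires Ollama running locally):
#   import urllib.request
#   req = urllib.request.Request(OLLAMA_URL, data=body.encode(),
#                                headers={"Content-Type": "application/json"})
#   response = urllib.request.urlopen(req)
```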


Budgets

Set daily spending limits per provider:

  1. Go to Settings → Providers
  2. Click on a provider
  3. Set Daily Budget (e.g., $5.00)

When you hit the limit, that provider becomes unavailable until tomorrow. The router will fall back to cheaper providers or warn you.
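A daily budget gate like this is conceptually simple: accumulate spend per provider, reset when the day rolls over. This is an illustrative sketch, not Chorum's implementation:

```python
from datetime import date

# Hypothetical daily-budget tracker: spend accumulates per provider
# and resets to zero when the calendar day changes.
class BudgetTracker:
    def __init__(self, daily_budget):
        self.daily_budget = daily_budget
        self.spent = 0.0
        self.day = date.today()

    def _roll_over(self):
        today = date.today()
        if today != self.day:   # new day: reset the meter
            self.day = today
            self.spent = 0.0

    def record(self, cost):
        self._roll_over()
        self.spent += cost

    def available(self):
        self._roll_over()
        return self.spent < self.daily_budget

tracker = BudgetTracker(daily_budget=5.00)
tracker.record(4.50)
print(tracker.available())  # True: $4.50 < $5.00
tracker.record(0.75)
print(tracker.available())  # False: over budget until tomorrow's reset
```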


Fallbacks

If your preferred provider is down or over budget, Chorum automatically falls back:

Primary: Claude Sonnet → GPT-4o → Gemini → Local

You can configure this chain in Settings → Resilience.
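Walking a fallback chain is just taking the first provider that's still usable. A minimal sketch, assuming availability means "up and under budget":

```python
# Sketch of walking a fallback chain like the one above.
FALLBACK_CHAIN = ["Claude Sonnet", "GPT-4o", "Gemini", "Local"]

def pick_provider(chain, is_available):
    """Return the first provider in the chain that is up and under budget."""
    for name in chain:
        if is_available(name):
            return name
    return None  # entire chain exhausted; surface an error to the user

# Suppose Sonnet is over budget and GPT-4o is down:
unavailable = {"Claude Sonnet", "GPT-4o"}
print(pick_provider(FALLBACK_CHAIN, lambda p: p not in unavailable))  # Gemini
```

Putting a local model at the end of the chain means you always get *some* answer, even with every paid provider down or capped.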


Which Provider Should I Use?

New to AI? Start with Claude Sonnet. It’s the best balance of capability and cost.

On a tight budget? Use local models (Ollama with Llama 3 or Phi-3).

Need the absolute best? Force Claude Opus or GPT-4 for complex reasoning.

Processing huge documents? Use Gemini 1.5 Pro (1M token context).


Next: Provider Selector