BRICKS Project Desktop supports multiple AI model providers. Configure API keys in settings to enable a provider.
## Anthropic

| Model | Reasoning | Extended thinking |
|---|---|---|
| claude-sonnet-4.6 | Yes | — |
| claude-opus-4.6 | Yes | Max |
| claude-haiku-4-5-20251001 | Yes | — |
## OpenAI

| Model | Reasoning | Extended thinking |
|---|---|---|
| gpt-5.4 | Yes | Max |
| gpt-5.3-codex | Yes | Max |
| gpt-5.2-codex | Yes | Max |
| gpt-5.2 | Yes | Max |
| gpt-5.1-codex-mini | Yes | — |
## OpenAI Codex

| Model | Reasoning | Extended thinking |
|---|---|---|
| gpt-5.4 | Yes | Max |
| gpt-5.4-mini | Yes | Max |
| gpt-5.3-codex | Yes | Max |
| gpt-5.2-codex | Yes | Max |
## Google

| Model | Reasoning | Extended thinking |
|---|---|---|
| gemini-3-flash-preview | Yes | — |
| gemini-3-pro-preview | Yes | — |
## GitHub Copilot

| Model | Reasoning | Extended thinking |
|---|---|---|
| claude-sonnet-4.6 | Yes | — |
| claude-opus-4.6 | Yes | Max |
| claude-haiku-4-5-20251001 | Yes | — |
| gpt-5.4 | Yes | Max |
| gpt-5.4-mini | Yes | Max |
| gpt-5.3-codex | Yes | Max |
| gpt-5.2 | Yes | Max |
| gpt-5.2-codex | Yes | Max |
| gemini-3.1-pro | Yes | — |
| gemini-3-pro | Yes | — |
| gemini-3-flash | Yes | — |
| grok-code-fast-1 | — | — |
GitHub Copilot uses OAuth for authentication. Configure it in Settings > Providers > GitHub Copilot.
OpenAI Codex uses OAuth for authentication (Login with OpenAI). Configure it in Settings > Providers > OpenAI Codex.
## OpenAI Compatible endpoints
Add any number of OpenAI-compatible API endpoints, each with its own name, base URL, API key, and model list. Manage endpoints in Settings > OpenAI Compatible Endpoints.
Each endpoint is configured with:
| Field | Description |
|---|---|
| Endpoint Name | A display name (e.g., “Ollama”, “vLLM”, “LM Studio”) |
| Base URL | The API endpoint (e.g., http://localhost:11434/v1) |
| API Key | Optional, depending on the provider |
| Models | One or more model identifiers available at this endpoint |
Models from configured endpoints appear in the model selector grouped under the endpoint name. This allows you to use local models (Ollama, LM Studio) or any third-party provider that implements the OpenAI API format.
If you previously configured a single OpenAI Compatible provider, it is automatically migrated to the new multi-endpoint format on first launch.
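To illustrate what "OpenAI-compatible" means in practice, the sketch below builds a standard `/chat/completions` request against such an endpoint. The base URL and model name are placeholders (a local Ollama instance is assumed as the example); this is not BRICKS Project Desktop's internal code, just the wire format these endpoints share.

```python
import json
import urllib.request

def build_chat_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build a chat-completions request for an OpenAI-compatible endpoint."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url=base_url.rstrip("/") + "/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Example against a hypothetical local Ollama endpoint:
req = build_chat_request("http://localhost:11434/v1", "llama3", "Hello")
```

Any server that accepts this request shape (Ollama, vLLM, LM Studio, or a hosted third-party provider) can be added as an endpoint.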
## Thinking levels
Models with reasoning support can use different thinking levels to control the depth of analysis:
| Level | Token budget | Best for |
|---|---|---|
| Off | — | Fast responses, simple tasks |
| High | ~16k tokens | Most tasks |
| Max | ~32k tokens | Complex reasoning, architecture decisions |
The Max level is only available on models marked with “Max” in the extended thinking column above.
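The level-to-budget mapping and the Max restriction can be sketched as follows. The function name, dictionary, and the downgrade-to-High behavior are illustrative assumptions based on the table above, not the app's actual implementation.

```python
# Hypothetical mapping from the thinking-levels table above.
THINKING_BUDGETS = {"off": 0, "high": 16_000, "max": 32_000}

def thinking_budget(level: str, supports_max: bool) -> int:
    """Return the thinking-token budget for a level.

    Models without extended-thinking "Max" support are assumed here to
    fall back to the High budget when Max is requested.
    """
    if level == "max" and not supports_max:
        level = "high"
    return THINKING_BUDGETS[level]
```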
## Environment variable fallback
If no API key is configured in the app settings, the agent checks these environment variables:
- `ANTHROPIC_API_KEY` — used for Anthropic models
- `OPENAI_API_KEY` — used for OpenAI models
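The fallback order described above (settings first, then environment) can be sketched like this. The function and mapping names are hypothetical; only the two environment variable names come from the list above.

```python
import os

# Provider-to-variable mapping from the list above.
ENV_FALLBACKS = {"anthropic": "ANTHROPIC_API_KEY", "openai": "OPENAI_API_KEY"}

def resolve_api_key(provider: str, configured_key):
    """Prefer the key from app settings; otherwise check the environment."""
    if configured_key:
        return configured_key
    var = ENV_FALLBACKS.get(provider)
    return os.environ.get(var) if var else None
```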