## Create a session
- Select an app in the sidebar
- Click New Session
## Choose a model

Click the model selector in the input bar to pick an AI model. Models are grouped by provider:

- Anthropic — Claude Sonnet 4.6, Claude Opus 4.6, Claude Haiku 4.5
- OpenAI — GPT-5.3 Codex, GPT-5.2 Codex, GPT-5.2, GPT-5.1 Codex Mini
- OpenAI Codex — GPT-5.3 Codex, GPT-5.2 Codex
- Google — Gemini 3 Flash, Gemini 3 Pro
- GitHub Copilot — Multiple models from various providers
- OpenAI Compatible endpoints — Custom endpoints with per-endpoint model lists (Ollama, vLLM, LM Studio, etc.)
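An "OpenAI-compatible" endpoint is any server that accepts the standard chat-completions request shape at its own base URL. As a rough sketch (the base URL below is Ollama's default; the helper and model name are illustrative, not part of this app):

```python
# Sketch: what "OpenAI-compatible" means in practice. Any server that
# accepts this request shape at <base_url>/chat/completions can be
# configured as a custom endpoint (Ollama, vLLM, LM Studio, etc.).

def chat_request(base_url: str, model: str, prompt: str) -> tuple[str, dict]:
    """Build the URL and JSON body for an OpenAI-style chat completion."""
    url = f"{base_url.rstrip('/')}/chat/completions"
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, body

url, body = chat_request("http://localhost:11434/v1", "llama3", "Hello")
print(url)  # http://localhost:11434/v1/chat/completions
```

The per-endpoint model list in the app simply determines which `model` values are offered for that base URL.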
## Set thinking level

Click the thinking selector next to the model selector to control the agent’s reasoning depth:

| Level | Budget | Use case |
|---|---|---|
| Off | — | Fast responses, simple tasks |
| High | ~16k tokens | Most tasks, good balance of speed and depth |
| Max | ~32k tokens | Complex reasoning, architecture decisions |
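The level-to-budget mapping in the table can be sketched as a simple lookup (a hypothetical helper for illustration, not the app's actual code):

```python
# Approximate reasoning-token budgets from the table above (hypothetical).
THINKING_BUDGETS = {"off": 0, "high": 16_000, "max": 32_000}

def thinking_budget(level: str) -> int:
    """Return the approximate reasoning-token budget for a thinking level."""
    try:
        return THINKING_BUDGETS[level.lower()]
    except KeyError:
        raise ValueError(f"unknown thinking level: {level!r}")

print(thinking_budget("High"))  # 16000
```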
Max thinking is only available on models that support extended thinking: Claude Opus 4.6, GPT-5.3 Codex, GPT-5.2 Codex, and GPT-5.2.
## Send a message

Type your message in the input bar and press Ctrl+Enter (or click the send button). Use Shift+Enter for a new line. The agent streams its response in real time. You can see:

- Text responses with full markdown rendering and syntax highlighting
- Thinking blocks showing the model’s reasoning process (expandable, requires Show Thinking Content)
- Tool calls displayed as expandable blocks with the tool name and arguments
- Tool results showing the output of each tool execution
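A streamed response interleaves these four block types. A toy consumer that renders them might look like this (the event names and shapes are hypothetical, chosen only to illustrate the structure):

```python
# Toy renderer for a streamed agent response. Event shapes are invented
# for illustration; they are not this app's actual streaming protocol.
def render(events: list[dict]) -> str:
    lines = []
    for ev in events:
        kind = ev["type"]
        if kind == "text":
            lines.append(ev["text"])
        elif kind == "thinking":
            lines.append(f"[thinking] {ev['text']}")
        elif kind == "tool_call":
            lines.append(f"[tool] {ev['name']}({ev['args']})")
        elif kind == "tool_result":
            lines.append(f"[result] {ev['output']}")
    return "\n".join(lines)

demo = [
    {"type": "thinking", "text": "Check the file first."},
    {"type": "tool_call", "name": "read_file", "args": {"path": "app.py"}},
    {"type": "tool_result", "output": "print('hi')"},
    {"type": "text", "text": "The file prints a greeting."},
]
print(render(demo))
```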
## Mention files

Type `@` in the input bar to search for files in your project. Select a file to include its path in your message, giving the agent context about which files to work with.

- Browse directories by typing `/` after a folder name
- Navigate with arrow keys and select with Enter or Tab
- Up to 20 results are displayed
## Use skills

Type `/` in the input bar to trigger skill command autocomplete. Selecting a skill injects its instructions as context for the agent.
The selected skill appears as a removable pill in the input bar. The agent receives the skill’s content along with your message.
See the skills reference for more details on creating and managing skills.
## Attach images

Click the image attachment button or paste an image from your clipboard. Supported formats: PNG, JPG, JPEG, WebP, GIF. Images are sent to the model as base64-encoded content alongside your text message.

## Tool calls and approval
The agent has six built-in tools: `read_file`, `write_file`, `edit_file`, `bash`, `glob`, and `grep`. See the agent tools reference for details on each tool.
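The two search tools follow familiar shell semantics. As a sketch of what they do (illustrative semantics only, not the agent's implementation):

```python
# glob and grep semantics in plain Python terms (illustrative only).
import fnmatch
import re

def glob_match(paths: list[str], pattern: str) -> list[str]:
    """glob: select paths matching a shell-style wildcard pattern."""
    return [p for p in paths if fnmatch.fnmatch(p, pattern)]

def grep(lines: list[str], pattern: str) -> list[tuple[int, str]]:
    """grep: return (line_number, line) pairs whose line matches a regex."""
    rx = re.compile(pattern)
    return [(i, line) for i, line in enumerate(lines, 1) if rx.search(line)]

files = ["src/app.py", "src/app.test.ts", "docs/guide.md"]
print(glob_match(files, "src/*.py"))             # ['src/app.py']
print(grep(["import os", "x = 1"], r"^import"))  # [(1, 'import os')]
```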
When the agent runs a bash command, you see an approval prompt:
- Click Run to execute the command
- Click Reject to deny execution
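The approval flow amounts to a gate in front of command execution: the command runs only if the user clicks Run. A minimal sketch, where the `approve` callback stands in for the Run/Reject prompt (hypothetical code, not the app's implementation):

```python
# Sketch of the bash approval gate (hypothetical). The approve callback
# plays the role of the Run/Reject prompt shown in the UI.
import subprocess

def run_with_approval(command: str, approve) -> str:
    """Run a shell command only if approve(command) returns True."""
    if not approve(command):
        return "rejected"
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return result.stdout.strip()

print(run_with_approval("echo hello", approve=lambda cmd: True))  # hello
print(run_with_approval("rm -rf /", approve=lambda cmd: False))   # rejected
```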
## Sub-agents

The agent can delegate tasks to specialized sub-agents. Sub-agents run with a focused set of tools and instructions, making them ideal for scoped tasks like codebase exploration or code review. You can manage sub-agents from the sidebar. See the sub-agents reference for details on creating and configuring them.

## Local devices
Click the Local Devices button in the input bar to scan for BRICKS Foundation devices on your local network. The dialog lists discovered devices with their address, version, and badges:

- This workspace — the device is bound to the same workspace as the current project
- CDP — the device supports the Chrome DevTools Protocol
Selecting a device inserts a `/bricks-cli` prompt into the input bar. The agent then uses the bricks-cli skill to run the action.
## Next steps

- Deploy your app — Ship your application to the BRICKS server.
- Agent tools reference — Learn about each built-in tool the agent can use.