Sandboxed AI Engineering
Run AI agents in isolated Docker sandboxes.
One CLI across Claude and Codex with full execution isolation, unified context, and GitHub-native workflows.
Requires Node.js 18+, the GitHub CLI, and Docker Desktop 4.58+ for sandboxing; see the installation docs for full setup.
Core Value
Docker-backed isolation for every AI agent run.
AI agents with filesystem access need boundaries. Locus runs Claude and Codex in the same Docker-backed isolation layer, so your team gets safe, reproducible execution without sacrificing speed.
Isolated by default
Every AI agent runs inside its own Docker container. Your host filesystem, credentials, and system stay untouched — even during autonomous execution.
Controlled and reproducible
Same sandbox configuration across all team members and CI. No more "works on my machine" for AI-assisted development.
Automatic workspace sync
Code changes sync seamlessly between sandbox and host. Sensitive files stay excluded, and your .gitignore rules are respected.
Why Locus
Sandboxed execution, unified context, GitHub-native delivery.
Isolated AI agent runs, one CLI across providers, and GitHub as the system of record. Everything your team needs to ship safely with AI.
Docker Sandboxing for AI Agents
Run Claude and Codex inside isolated Docker containers. Your host stays clean while agents execute with full filesystem access inside the sandbox.
One command to sandbox: locus sandbox claude or locus sandbox codex. Same isolation model for both providers.
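For example, launching either provider in its sandbox is a single command, and both runs get the same isolation layer (comments are illustrative):

```shell
# Run Claude inside an isolated Docker container
locus sandbox claude

# Or run Codex with the identical sandbox configuration
locus sandbox codex
```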
Security sandboxing docs
Unified Interface Across AI Clients
Switch between Claude and Codex without changing your workflow. Same commands, same context, different provider.
Switch ai.model between claude-sonnet-4-6 and gpt-5.3-codex while keeping the same run, review, and iterate commands.
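A minimal sketch of a provider switch. The `ai.model` key and both model names come from the text above; the `locus config set` subcommand is hypothetical and stands in for however the CLI updates configuration:

```shell
# Point the same workflow at Claude... (subcommand is hypothetical)
locus config set ai.model claude-sonnet-4-6
locus run

# ...or at Codex, with no other workflow changes
locus config set ai.model gpt-5.3-codex
locus run
```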
Unified interface docs
GitHub as Operational Memory
Issues, milestones, labels, and PRs become your execution database. Every AI agent run is tracked and auditable through GitHub.
Create issues, assign sprints, execute with AI, and track delivery — all persisted in GitHub objects your team already uses.
GitHub backend docs
Built-In Orchestration Tools
Plan, execute, review, and iterate with commands that go beyond raw provider CLIs. Full delivery lifecycle in one tool.
locus plan, locus run, locus review, locus iterate — an operational workflow that works the same across Claude and Codex.
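As a sketch, a full delivery loop with these four commands might look like the following; the comments describe intent, not the exact CLI surface:

```shell
locus plan      # draft the work as GitHub issues and milestones
locus run       # execute the plan inside the Docker sandbox
locus review    # review the resulting changes and PR
locus iterate   # feed review feedback into another sandboxed pass
```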
CLI overview docs
How It Works
Choose AI client, sandbox, execute, persist in GitHub.
Pick Claude or Codex, run in an isolated sandbox, execute with built-in orchestration commands, and keep all state in GitHub-native objects.
Step 01: Choose and Sandbox Your AI Client
Select Claude or Codex, then run in an isolated sandbox
Set the model and enable sandboxing. Both providers run in the same Docker-backed isolation layer with unified context and consistent commands.
Unified interface deep dive
Step 02: Run Through One Interface
Plan, execute, review, and iterate in the same CLI
Use built-in orchestration commands for delivery loops. This goes beyond raw provider CLIs by combining planning, execution, review, and iteration workflows.
End-to-end Locus workflow
Step 03: Persist in GitHub-Native Data
Keep execution state in issues, milestones, labels, and PRs
GitHub is the system of record. Work items stay in issues and milestones, delivery artifacts stay in PRs, and operational status stays visible to the whole team.
GitHub as operational memory
Step 04: Automate with Full Isolation
Full-auto execution inside sandboxed containers
Enable auto-approval for autonomous runs. Combined with sandboxing, agents execute safely without human intervention — auto-labeling issues, creating PRs, and resuming from failures.
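A hedged sketch of a full-auto run; the `--auto` flag is hypothetical and stands in for whatever auto-approval switch the CLI actually exposes:

```shell
# Autonomous, sandboxed execution: the agent labels issues, opens PRs,
# and resumes from failures without human intervention.
locus run --auto    # hypothetical flag for auto-approval mode
```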
Full-auto execution model
Powered by tools you already use
Multiple AI providers. One sandbox. One workflow.
FAQ
Frequently asked questions
What is Locus?
Locus is an open-source CLI that runs AI agents in isolated Docker sandboxes. It provides a unified interface across Claude and Codex, uses GitHub as its operational backend, and includes built-in orchestration for planning, execution, review, and iteration.
How does the sandboxing work?
Locus runs AI agents inside Docker containers, isolating them from your host filesystem and system. Each provider (Claude or Codex) gets its own sandbox with the same configuration. Your workspace is synced into the container, and code changes are synced back — while sensitive files stay excluded.
Is Locus free?
Yes. Locus is free and open source under the MIT license. You need your own API keys for Claude (Anthropic) or Codex (OpenAI), but Locus itself has no paid tiers, usage limits, or proprietary components.
Does Locus send my code to external servers?
No. Locus runs entirely on your machine. It communicates directly with GitHub and your chosen AI provider (Claude or Codex). There are no Locus servers — your code, prompts, and credentials never leave your local environment.
How is Locus different from Claude Code or the Codex CLI?
Claude Code and Codex CLI are standalone tools for their respective providers. Locus adds Docker-based sandboxing for both, a unified interface to switch between them, GitHub-native state management, and orchestration commands (plan, run, review, iterate). One tool, both providers, fully isolated.
How do I install Locus?
Run npm install -g @locusai/cli. You need Node.js 18+, GitHub CLI (gh), and Docker Desktop 4.58+ for sandboxing. Full setup instructions are at docs.locusai.dev/getting-started/installation.
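The install steps can be sketched as a short shell session; the npm package name and docs URL come from the text above, and the version checks use standard `gh` and `docker` commands:

```shell
# Install the Locus CLI globally (requires Node.js 18+)
npm install -g @locusai/cli

# Verify the prerequisites
gh --version        # GitHub CLI
docker --version    # Docker Desktop 4.58+ for sandboxing

# Then see docs.locusai.dev/getting-started/installation for full setup
```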
Ship with sandboxed AI.
One interface. Full isolation.
Install the CLI, set up Docker sandboxing, and run your first isolated sprint across Claude or Codex in minutes.

