# agent0
A local AI coding assistant that runs in your terminal. You type questions or instructions, and the AI reads your code, edits files, runs commands, and searches the web — all from a simple text interface.
No cloud accounts or API keys required for local use. Your code stays on your machine.
## What it does
agent0 gives you an AI assistant that understands your codebase. You can ask it to:
- Read and edit files — it can open any file in your project, make targeted edits, or create new files
- Run shell commands — compile code, run scripts, install packages, anything you'd do in a terminal
- Search your code — find files by name patterns or search inside files for specific text
- Look up Clojure docs — instant documentation for Clojure standard library functions
- Research the web — search the internet and read web pages to answer questions
- Run skills — execute predefined workflows you set up for common tasks
Everything happens in a terminal UI with a chat-style interface. You type, the AI responds and uses tools as needed, and you see what it's doing in real time.
## Requirements
### Self-hosted (local, no API keys)
This is the default setup. The AI model runs entirely on your machine.
- Babashka — a fast Clojure scripting runtime (used to run agent0)
- Ollama — runs AI models locally on your computer
- A downloaded model — agent0 defaults to `qwen3-coder-next`, a coding-focused model
Hardware considerations: local models need a decent GPU or a machine with enough RAM. Smaller models work on modest hardware; larger models need more resources. Check Ollama's model library for size and requirements.
### Remote LLM (Ajet Cloud, OpenAI, Anthropic)
If you'd rather use a cloud-hosted model instead of running one locally, agent0 can connect to any remote server that speaks the Ollama API:
- Babashka — still required to run agent0 itself
- A remote Ollama-compatible endpoint — set `OLLAMA_HOST` to the server URL
Some providers that work:
- Ajet Cloud — managed Ollama-compatible hosting
- Any remote machine running Ollama
- OpenAI or Anthropic via an Ollama-compatible proxy (e.g. LiteLLM)
With a remote provider, you don't need Ollama installed locally or a powerful machine — the model runs on the remote server.
## Installation
### 1. Install Babashka
macOS:

```shell
brew install borkdude/brew/babashka
```

Linux:

```shell
bash < <(curl -s https://raw.githubusercontent.com/babashka/babashka/master/install)
```
See the Babashka install docs for other methods.
### 2. Set up a model
For local use — install Ollama and pull a model:

```shell
# Install Ollama from https://ollama.com, then:
ollama pull qwen3-coder-next
```
For remote use — point to your provider:

```shell
export OLLAMA_HOST="https://your-provider-url"
```
### 3. Clone the repository
```shell
cd ~/repos   # or wherever you keep projects
git clone https://git.ajet.fyi/ajet/agent0.git
```
Dependencies (including the TUI framework) are fetched automatically via git deps.
### 4. Add to your PATH
Make the `agent` command available from anywhere:

```shell
# Add to your shell profile (~/.bashrc, ~/.zshrc, etc.)
export PATH="$HOME/repos/agent0:$PATH"
```
Then restart your terminal or run `source ~/.bashrc`.
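Before restarting, you can confirm the new entry took effect in the current shell. A quick check, assuming the repo was cloned to `~/repos/agent0` as in the example above:

```shell
# Prepend the clone directory, then confirm it now appears on PATH.
export PATH="$HOME/repos/agent0:$PATH"
case ":$PATH:" in
  *":$HOME/repos/agent0:"*) echo "PATH entry present" ;;
  *)                        echo "PATH entry missing" ;;
esac
# → PATH entry present
```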
## Usage
```shell
agent                          # Start a new conversation
agent "explain this codebase"  # Start with a question or instruction
agent --continue               # Resume your most recent session
agent --session <id>           # Resume a specific session by ID
```
Once running, type your message and press Enter. The AI will respond, using tools as needed. You'll see what it's doing — reading files, running commands, searching — in real time.
## Configuration
### Environment variables
| Variable | What it does | Default |
|---|---|---|
| `AGENT_MODEL` | Which AI model to use | `qwen3-coder-next` |
| `OLLAMA_HOST` | Where the model is running | `http://localhost:11434` |
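These defaults can be reproduced with ordinary shell parameter expansion, for example in a wrapper script. A sketch using only the variables and defaults from the table above:

```shell
#!/bin/sh
# Fall back to the documented defaults when the variables are unset.
MODEL="${AGENT_MODEL:-qwen3-coder-next}"
HOST="${OLLAMA_HOST:-http://localhost:11434}"
echo "model: $MODEL"
echo "host:  $HOST"
```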
### Project context
agent0 automatically reads instructions from your project to give the AI relevant context. It checks for these files (in order):
- `.agent0/context.md` — agent0-specific project instructions
- `CLAUDE.md` — also used by Claude Code
- `.cursorrules` — also used by Cursor
- `.github/copilot-instructions.md` — also used by GitHub Copilot
You can also put global instructions (applied to all projects) in `~/.config/agent0/context.md`.
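The project-level lookup amounts to a search in priority order. A rough sketch of the idea (whether agent0 stops at the first hit or merges several files is an assumption here, not something the docs specify):

```shell
# Print the first project context file that exists, in priority order.
find_context() {
  for f in .agent0/context.md CLAUDE.md .cursorrules .github/copilot-instructions.md; do
    if [ -f "$f" ]; then
      printf '%s\n' "$f"
      return 0
    fi
  done
  return 1
}
```

In a project containing both a `CLAUDE.md` and a `.cursorrules`, this prints `CLAUDE.md`.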
### Skills
Skills are reusable prompt templates you trigger with `/name`. Define them in:
- `~/.config/agent0/skills.md` — available in all projects
- `.agent0/skills.md` — project-specific skills
Example skill definition:

```markdown
# /deploy <env>

Deploy the application to the {env} environment. Run the deploy script,
verify it completes successfully, and report the result.
```
Then use it: type `/deploy staging` and it expands into the full prompt.
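Under the hood the expansion is plain string templating. A hypothetical sketch of what `/deploy staging` does with the `{env}` placeholder (illustrative only, not agent0's actual implementation):

```shell
# Replace the {env} placeholder with the argument passed to the skill.
template='Deploy the application to the {env} environment.'
arg='staging'
expanded=$(printf '%s' "$template" | sed "s/{env}/$arg/g")
echo "$expanded"
# → Deploy the application to the staging environment.
```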
## Sessions
Every conversation is automatically saved. You can pick up where you left off:
```shell
agent --continue     # Resume the latest session
agent --session <id> # Resume a specific session
```
Sessions are stored in `~/.local/share/agent0/sessions/`. Logs are in `~/.local/share/agent0/logs/`.