How Shep Works
Shep turns a one-line description into a working pull request. You stay on main the entire time — Shep handles branching, coding, committing, pushing, and watching CI in the background.
The default flow
You describe a feature → Agent codes in a worktree → Shep commits the changes → Shep pushes to remote → Shep opens a draft PR

Then Shep watches CI. If checks fail, the agent reads the logs and pushes a fix automatically (up to 3 retries by default).
Three things happen when you run shep
- A daemon starts in the background. It manages your features, coordinates AI agents, and watches CI.
- The web dashboard opens at localhost:4050. You can create features, see live agent output, and review diffs.
- Your terminal stays yours. Keep coding, run tests, ship hotfixes — Shep doesn’t touch your working directory.
What happens when you create a feature
You give Shep a description in plain English:
```shell
shep feat new "add a /health endpoint that returns uptime and version"
```

Shep:
- Creates a worktree. A fresh copy of your repo on a new branch, in its own directory under `~/.shep/`. Your main checkout is untouched.
- Launches your AI agent. Claude Code, Cursor, or Gemini, whichever you configured. Shep feeds the agent your prompt plus context about the repo (architecture, conventions, dependencies).
- The agent codes. Shep streams the output to the dashboard so you can watch in real time.
- Shep commits. Clean commits with sensible messages.
- Shep pushes (if `--push` or your default workflow says so).
- Shep opens a draft PR (if `--pr`).
- Shep watches CI. If it fails, the agent gets the logs and pushes a fix.
When everything’s green, you have a draft PR ready to review.
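The watch-CI step above is essentially a bounded retry loop. Here is a minimal sketch of that loop; `check_ci`, `get_failure_logs`, and `agent_push_fix` are hypothetical stand-ins for Shep's internals, not its real API:

```python
import time

MAX_RETRIES = 3  # Shep's default retry budget for failing CI

def watch_ci(check_ci, get_failure_logs, agent_push_fix, poll_seconds=0):
    """Poll CI; on failure, hand the logs to the agent so it can push a fix.

    check_ci()           -> "pending" | "green" | "red"       (hypothetical)
    get_failure_logs()   -> str                                (hypothetical)
    agent_push_fix(logs) -> None; agent commits and pushes     (hypothetical)
    """
    retries = 0
    while True:
        status = check_ci()
        if status == "pending":
            time.sleep(poll_seconds)   # wait for checks to finish
            continue
        if status == "green":
            return "green"             # draft PR is ready to review
        if retries >= MAX_RETRIES:
            return "needs-human"       # retry budget spent; escalate
        agent_push_fix(get_failure_logs())
        retries += 1
```

Each red check consumes one retry; once the budget is spent, Shep stops pushing fixes and leaves the PR for a human.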
Worktree isolation: the key idea
This is what makes Shep different from running an agent in your working directory.
Each feature gets its own git worktree — a separate copy of your repo with its own branch.
- No branch switching. You never leave the branch you’re on.
- No stashing. Your uncommitted work is never touched.
- No conflicts. Each feature works in its own isolated directory.
- True parallelism. Multiple features run at the same time without seeing each other.
Worktrees share the main repository's .git object database rather than cloning it, so they take almost no extra disk space.
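You can see the same mechanism with plain git. The paths and branch name below are illustrative; Shep keeps its worktrees under ~/.shep/ rather than a sibling directory:

```shell
# Throwaway demo repo (illustrative paths).
git init demo && cd demo
git -c user.name=demo -c user.email=demo@example.com \
    commit --allow-empty -m "initial commit"

# Add a worktree: a second checkout on its own branch, in its own directory.
git worktree add ../demo-feature -b feature/health-endpoint

# The main checkout never switched branches:
git branch --show-current

# The worktree's .git is a one-line pointer into demo/.git/worktrees/,
# not a second copy of the repository:
cat ../demo-feature/.git
git worktree list
```

This is why multiple features can run in parallel: each worktree is a full checkout, but the repository itself exists only once.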
Shep doesn’t write code — your agent does
Shep is an orchestrator. It does not contain its own coding model. Instead it coordinates external AI agents you already use:
- Claude Code (recommended)
- Cursor CLI
- Gemini CLI
You pick which agent to use during first-run setup. Shep handles launching it, feeding it context, and capturing its output. The actual code is written by the agent.
This means Shep is agent-agnostic: switch tools whenever you want, mix and match per feature, or move to a new model without changing your workflow.
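Agent-agnostic orchestration reduces to a thin dispatch layer: map a configured agent name to a CLI invocation, prepend the repo context to the prompt, and launch it in the feature's worktree. The binaries and flags below are assumptions for illustration, not Shep's actual configuration:

```python
import subprocess

# Hypothetical mapping from configured agent to CLI argv prefix; the real
# binaries and flags depend on how each tool is installed.
AGENTS = {
    "claude-code": ["claude", "-p"],
    "cursor": ["cursor-agent", "-p"],
    "gemini": ["gemini", "-p"],
}

def build_agent_command(agent: str, prompt: str, repo_context: str) -> list:
    """Compose the full prompt (repo context + task) and the agent argv."""
    full_prompt = f"{repo_context}\n\nTask: {prompt}"
    return AGENTS[agent] + [full_prompt]

def run_agent(agent, prompt, repo_context, worktree_dir):
    """Launch the agent inside the feature's worktree, streaming its output."""
    return subprocess.Popen(
        build_agent_command(agent, prompt, repo_context),
        cwd=worktree_dir,
        stdout=subprocess.PIPE,
        text=True,
    )
```

Because the orchestrator only builds an argv and reads stdout, swapping agents is a one-line config change rather than a workflow change.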
Two ways to drive Shep
The CLI and the web dashboard are two views of the same system. Anything you can do in one, you can do in the other:
- Create features
- Monitor progress
- Review changes
- Approve or reject phases
- Stop or resume work
Use the CLI when you’re already in the terminal. Use the dashboard when you want a visual view of all your in-flight features.
Local-only by default
Everything Shep tracks lives in ~/.shep/ as a SQLite database. There are no Shep servers, no accounts, no telemetry. Your code only ever leaves your machine via the AI agent you configured — under that agent’s own privacy terms.
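Because the store is plain SQLite, you can inspect it with standard tools. The schema below is invented purely for illustration (the real tables under ~/.shep/ will differ), using an in-memory database so the sketch is self-contained:

```python
import sqlite3

# Illustrative only: a made-up schema, not Shep's real one.
# The real database is a file under ~/.shep/; :memory: keeps this runnable.
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE features (
        id          INTEGER PRIMARY KEY,
        description TEXT NOT NULL,
        branch      TEXT NOT NULL,
        status      TEXT NOT NULL DEFAULT 'in-progress'
    )
""")
db.execute(
    "INSERT INTO features (description, branch) VALUES (?, ?)",
    ("add a /health endpoint", "feature/health-endpoint"),
)
db.commit()

# Everything is queryable locally; nothing leaves the machine.
rows = db.execute("SELECT description, status FROM features").fetchall()
```

The point is not the schema but the property: state lives in one local file you can open, back up, or delete, with no server in the loop.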
What’s next
- Features & Tasks — the unit of work.
- SDLC Lifecycle — the phases a feature passes through.
- Parallel Execution & Worktrees — why this scales.