aidevops wires 13 specialist AI agents into a self-healing ops platform — no babysitting required.
The Problem Nobody Talks About After the Vibe-Coding High
Everyone's figured out that LLMs can write code. The dirty secret is what comes next: merging branches, running quality gates, managing parallel workstreams, keeping costs from exploding, and waking up at 3am because a deploy went sideways. That part — the DevOps part — is still brutally manual, even with AI in the loop.
aidevops by Marcus Quinn is a direct attack on that gap. And it's been sitting at 174 stars on GitHub, largely undiscovered, while noisier tools with worse architectures dominate the conversation.
What It Actually Does
At its core, aidevops is an OpenCode plugin — a Shell-based framework that deploys 13 specialist AI agents (code, SEO, marketing, legal, sales, research, video, business, accounts, social media, health, automation, content) plus 900+ on-demand subagents loaded via /slash commands and @subagent mentions. The agents don't just answer questions — they run the full loop autonomously.
The supervisor agent fires every two minutes. It dispatches parallel workers, merges PRs, detects stuck processes, and advances multi-day missions while you sleep. The README calls this "autonomous orchestration" and for once that's not marketing copy — the architecture actually backs it up.
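The supervisor pattern is worth sketching. The following is a minimal illustration, not aidevops' actual internals — the PID-file layout and the `tick` helper are hypothetical stand-ins for its dispatch/stuck-detection logic:

```shell
#!/usr/bin/env bash
# Hypothetical supervisor tick (run every 2 min via cron or a loop).
# Workers register PID files; the tick detects dead/stuck ones.
set -euo pipefail
WORKDIR=$(mktemp -d)

echo $$ > "$WORKDIR/live.pid"       # this script's own PID: always alive
echo 4194305 > "$WORKDIR/dead.pid"  # above Linux's default pid_max: never alive

tick() {
  local pidfile pid
  for pidfile in "$WORKDIR"/*.pid; do
    pid=$(cat "$pidfile")
    if kill -0 "$pid" 2>/dev/null; then
      echo "$(basename "$pidfile" .pid): healthy"
    else
      echo "$(basename "$pidfile" .pid): gone, dispatching replacement"
    fi
  done
}

tick
```

A real supervisor would also merge ready PRs and advance mission state on each tick; the liveness check above is just the skeleton that makes "while you sleep" possible.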
The practical entry point is dead simple:
```bash
git clone https://github.com/marcusquinn/aidevops.git
cd aidevops
./setup.sh
```
setup.sh deploys everything locally, including the .agents/ directory structure with agents, tools, services, workflows, and scripts. The user-facing agent config lives at ~/.aidevops/agents/AGENTS.md post-install.
Technical Deep-Dive: Where It Gets Interesting
Git worktrees as the parallelism primitive. Instead of switching branches and losing context, aidevops uses git worktrees to run multiple AI sessions on separate branches simultaneously. Each session gets its own working directory, its own context, its own Ralph loop (their term for iterative AI development cycles). This is genuinely clever — worktrees are criminally underused in the industry, and wiring AI agents to them is a natural fit.
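The worktree mechanics themselves are plain git. A self-contained demo in a throwaway repo (branch and path names are illustrative, not aidevops conventions):

```shell
# Throwaway repo so the demo is self-contained.
set -euo pipefail
repo=$(mktemp -d) && cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com commit --allow-empty -qm init

# Each AI session gets its own working directory on its own branch,
# so parallel agents never fight over a single checkout.
git worktree add "$repo-auth" -b ai/auth-refactor
git worktree add "$repo-docs" -b ai/docs-pass

git worktree list   # main checkout plus two session worktrees
```

No stash/checkout churn, no half-finished state bleeding between sessions — each agent's loop runs against its own directory and merges back through a normal PR.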
Multi-model safety for high-stakes ops. Force pushes, production deploys, and data migrations trigger cross-provider verification — a second model from a different provider checks the first model's work before execution. The reasoning in the README is sharp: "different providers have different failure modes, so correlated hallucinations are rare." This isn't theoretical; it's a real mitigation strategy that larger AI safety teams use.
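The gate is easy to sketch. `ask_model` below is a hypothetical stub (a real version would call each provider's API); the point is the shape: a risky action runs only when two models from different providers independently approve:

```shell
# Cross-provider verification sketch. ask_model is a stand-in stub,
# not an aidevops command; here it always approves, for illustration.
set -euo pipefail

ask_model() {  # ask_model <provider> <prompt> -> APPROVE | REJECT
  echo "APPROVE"
}

guarded_run() {
  local action="$1" first second
  first=$(ask_model anthropic "Is it safe to run: $action?")
  second=$(ask_model openai "Independently verify: $action")
  if [ "$first" = "APPROVE" ] && [ "$second" = "APPROVE" ]; then
    echo "running: $action"
  else
    echo "blocked: $action (no cross-provider consensus)" >&2
    return 1
  fi
}

guarded_run "git push --force origin main"
```

Because the two verifiers come from different providers, a single model's blind spot is unlikely to slip through both checks — that's the "uncorrelated failure modes" argument in executable form.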
Cost-aware model routing. The stack routes tasks through a hierarchy: local → haiku → flash → sonnet → pro → opus. Mundane tasks hit cheap models; high-stakes decisions escalate. There's budget tracking with burn-rate analysis built in. For anyone who's accidentally run a Claude Opus session on a bulk code-review job and gotten a $40 bill, this matters.
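The routing idea boils down to a lookup from task type to the cheapest capable tier. The tier names below mirror the README's hierarchy, but the task-to-tier mapping is illustrative, not aidevops' actual table:

```shell
# Cost-aware routing sketch: cheap models for mundane work,
# expensive ones only when the stakes justify it.
route_model() {
  case "$1" in
    format|rename|changelog)     echo "local"  ;;  # trivial, zero API cost
    summarize|triage)            echo "haiku"  ;;
    code-review)                 echo "sonnet" ;;
    prod-deploy|data-migration)  echo "opus"   ;;  # high stakes escalate
    *)                           echo "flash"  ;;  # sensible mid-tier default
  esac
}

route_model changelog      # cheap tier
route_model prod-deploy    # top tier
```

Pair a table like this with per-tier budget counters and you get the burn-rate tracking the README describes: the $40 Opus bill only happens when a task actually warrants Opus.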
The AGENTS.md architecture. There are actually two AGENTS.md files — one at the repo root for contributors, one at .agents/AGENTS.md for end users, copied to ~/.aidevops/agents/ by setup. CLAUDE.md exists purely as a compatibility shim, redirecting to AGENTS.md as the single source of truth. This is thoughtful multi-runtime design — works with Claude Code, OpenCode, and anything else that respects the emerging AGENTS.md convention.
Self-healing and self-improving loops. When something breaks, the framework diagnoses root cause, creates tasks in TODO.md, and fixes it. Session mining (prompts/build.txt) extracts learnings from past sessions automatically. The "gap awareness" principle — every session identifies missing automation and files tasks — means the system genuinely compounds over time.
Quality pipeline. Before any commit, .agents/scripts/linters-local.sh runs. ShellCheck covers all scripts in .agents/scripts/. There's a version manager at `.agents/scripts/version-man