We’re a small team, and our main company supplies voice data. But we kept running into the same problem with coding agents. We’d have a feature request, a refactor, a bug, and some internal tooling work all happening at once, and managing that through local agent sessions meant a lot of context switching, worktree juggling, and laptops left open just so tasks could keep running.
So we built Broccoli. Each task runs end to end, independently, in its own cloud sandbox. Broccoli checks out the repo, uses the context in the ticket, works through an implementation, runs tests and review loops, and opens a PR for someone on the team to inspect.
Over the last four weeks, 100% of the PRs from non-developers have shipped via Broccoli, which is a safer and more efficient route. For developers on the team, that share is around 60%: more complicated features require more back-and-forth design with Codex / Claude Code and get shipped manually, using the same set of skills locally.
Our implementation uses:
1. Webhook deployment: GCP
2. Sandbox: GCP or Blaxel
3. Project management: Linear
4. Code hosting & CI/CD: GitHub
Repo: https://github.com/besimple-oss/broccoli
We believe you should invest in your own coding harness if coding is an essential part of your business. That’s why we decided to open-source it as an alternative to all the cloud coding agents out there. Would love to hear your feedback!
Every issue is created with /spec and a conversation with a human. Once the spec is materialized as an issue, it’s sufficient for an agent to implement.
Everything is documented. It’s amazing.
One real Linear ticket from a few months back that we assigned to broccoli:
Store post-processing run outcomes in a versioned, append-only audit trail so re-running the same processor on the same audio file produces a complete history (who/when/what changed), while keeping an easy “latest result” view. Add an admin-only UI.
That’s it. As a part of the sketch step, broccoli does its own repo discovery and online research before planning the execution.
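For a sense of scale, the ticket above boils down to something like this minimal sketch (names and fields are hypothetical; the real implementation sits behind our backing store and admin UI):

```python
import itertools

class AuditTrail:
    """Append-only run history per (audio file, processor), with a 'latest result' view."""

    def __init__(self):
        self._runs = []                      # never mutated, only appended
        self._version = itertools.count(1)   # monotonically increasing version numbers

    def record(self, audio_id, processor, who, outcome):
        # Re-running the same processor on the same file appends a new
        # versioned entry instead of overwriting the previous outcome.
        entry = {"version": next(self._version), "audio_id": audio_id,
                 "processor": processor, "who": who, "outcome": outcome}
        self._runs.append(entry)
        return entry

    def history(self, audio_id, processor):
        # The complete who/when/what trail for one processor on one file.
        return [r for r in self._runs
                if r["audio_id"] == audio_id and r["processor"] == processor]

    def latest(self, audio_id, processor):
        # Easy "latest result" view: the most recent append wins.
        runs = self.history(audio_id, processor)
        return runs[-1] if runs else None

trail = AuditTrail()
trail.record("a.wav", "denoise", "alice", "ok")
trail.record("a.wav", "denoise", "bob", "changed-params")
assert len(trail.history("a.wav", "denoise")) == 2
assert trail.latest("a.wav", "denoise")["who"] == "bob"
```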
Is this a weekend project or something bigger? The approach changes a lot.
james.exec@proton.me
I didn't want to be on the hook for supporting an open source version though, so never made it public. Good on you for putting it out there.
A few differences I can quickly spot, fwiw...
I went with Firestore over Postgres for the lower cost, and use Cloud Tasks for "free" deduping of webhooks. Each webhook is validated, translated, and created as an instant Cloud Task, and they get deduped by ID.
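The dedup trick is that Cloud Tasks rejects a create call that reuses a task name, so deriving the name deterministically from the webhook's delivery ID makes retried deliveries free to drop. A rough sketch of the naming side (queue path and naming scheme are made up):

```python
import hashlib

# Hypothetical queue path; the real one comes from your project config.
QUEUE = "projects/my-proj/locations/us-central1/queues/webhooks"

def task_name(delivery_id: str) -> str:
    # Cloud Tasks refuses to create a task whose name was already used,
    # so a name derived from the webhook's delivery ID dedupes retries.
    digest = hashlib.sha256(delivery_id.encode()).hexdigest()[:32]
    return f"{QUEUE}/tasks/wh-{digest}"

# Two deliveries of the same webhook map to the same task name -> deduped.
assert task_name("evt-123") == task_name("evt-123")
assert task_name("evt-123") != task_name("evt-456")
```

In the real flow you'd pass this name in the task you hand to the Cloud Tasks API and treat an "already exists" error as a duplicate delivery.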
We see a lot of value in a scheduler, i.e. running a prompt on a schedule - good for things like status reports or automated log reading/debugging.
I prefer to put my PEMs into KMS instead of Secret Manager. You can still sign things, but without exposing the actual private key where it can be snooped on.
I run the actual jobs on spot VMs using an image baked by Packer with all the tooling needed. You don't run into time/resource limits running them as Cloud Run jobs?
Re: spot VMs. Great idea! There are two features we have not finished porting to OSS. Internally, we can specify the instance type and timeout, and we also send about 50% of jobs to Blaxel; we find it has a much better cold start compared to Cloud Run. We probably will port the multi-vendor support logic over to OSS soon but wanted to keep v1 simple (and a one-provider magic experience!).
Scheduler is a wish item for us. Curious how you implemented it? Currently, we just have a scheduled Cloud Function during the night to automatically address open PR comments (via the Broccoli GitHub feedback automation) so that the engineer wakes up to a mostly clean PR without needing to do anything. We haven't ported this to the OSS yet because 1) Firebase Cloud Functions, 2) not sure what would be the best ergonomics. Any suggestions here?
Originally I had Cloud Scheduler running a heartbeat task every X minutes, and one of the heartbeat tasks was to look for any overdue scheduled tasks and fire them off. The timing wasn't very precise, but it was a very simple setup.
I made the move to Cloud Tasks so I could heartbeat less often. Now the cleanup happens in the heartbeat: ensure all scheduled tasks have a matching Cloud Task pending.
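The overdue check in the heartbeat can be sketched roughly like this (field names and data shapes are illustrative, not my actual schema):

```python
from datetime import datetime, timedelta, timezone

def overdue(scheduled, pending_ids, now):
    """Return IDs of scheduled tasks that are due but have no pending Cloud Task."""
    return [t["id"] for t in scheduled
            if t["run_at"] <= now and t["id"] not in pending_ids]

now = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)
scheduled = [
    {"id": "report", "run_at": now - timedelta(minutes=5)},   # due, not queued
    {"id": "debug",  "run_at": now + timedelta(minutes=30)},  # not due yet
    {"id": "logs",   "run_at": now - timedelta(minutes=1)},   # due, already queued
]

# Only "report" needs a Cloud Task created by the heartbeat.
assert overdue(scheduled, pending_ids={"logs"}, now=now) == ["report"]
```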
Feedback on PRs was an interesting challenge, since we can get it from Slack replies, GitHub comments, and CI failures, and we want to be fairly reactive. I ended up leaning on Firestore realtime queries: the harness on the agent VM is subscribed and can interrupt the agentic loop to feed in new feedback as it comes in. It all gets very complicated to OSS, but it has helped get quicker feedback loops going.
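The interrupt pattern can be approximated with a plain thread-safe queue standing in for the Firestore listener (names are illustrative, not my actual harness code):

```python
import queue

# A Firestore on_snapshot callback would call feedback.put(...) from its
# own thread whenever a new PR comment / CI failure document appears.
feedback: "queue.Queue[str]" = queue.Queue()

def drain_feedback():
    """Pull any feedback that arrived since the last agent step."""
    items = []
    while True:
        try:
            items.append(feedback.get_nowait())
        except queue.Empty:
            return items

def agent_step(task, new_feedback):
    # Fold fresh feedback into the next prompt instead of waiting
    # for the whole agentic run to finish.
    prompt = task
    if new_feedback:
        prompt += "\nAddress this feedback first:\n" + "\n".join(new_feedback)
    return prompt

feedback.put("CI failed: test_audit_trail")
prompt = agent_step("Implement audit trail", drain_feedback())
assert "Address this feedback first" in prompt
```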
Too real. We’re currently still sticking to local agent workflows, which feel more powerful than cloud-native ones. Moving that to your own cloud with no third-party control plane feels like the right middle ground. Nice work!
EDIT: the adversarial two-agent review loop is really clever!
This works really well.
here's my [similar take](https://github.com/testeranto-dev/testeranto)
As for Jira, would love it if you contribute that integration to us! Someone asked for it in this thread :D
However, I feel it will be an uphill battle competing with OpenAI and Anthropic; I doubt your harness can be better, since they see so much traffic through theirs.
So this is for those who care about the harness running on their own infra? Not sure why anyone would since the LLM call means you are sending your code to the lab anyway.
Sorry I don’t want to sound negative, I am just trying to understand the market for this.
Good luck!
Teams would use Anthropic and OpenAI, but they shouldn't just use Anthropic or OpenAI. We see much better results from calling the models independently and doing adversarial review and response.
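Roughly, the loop looks like this (stub functions stand in for independent model calls; this is a sketch of the idea, not the actual Broccoli implementation):

```python
def adversarial_review(task, author, reviewer, max_rounds=3):
    """Draft with one model, critique with another, revise until approved."""
    draft = author(task)
    for _ in range(max_rounds):
        critique = reviewer(task, draft)
        if critique == "APPROVE":
            return draft
        # Feed the opposing model's critique back into the author.
        draft = author(f"{task}\nRevise to address: {critique}")
    return draft

# Stub "models" standing in for independent Anthropic / OpenAI calls.
def author(prompt):
    return "v2" if "Revise" in prompt else "v1"

def reviewer(task, draft):
    return "APPROVE" if draft == "v2" else "missing tests"

# The reviewer rejects v1, the author revises, v2 is approved.
assert adversarial_review("add endpoint", author, reviewer) == "v2"
```

Because the author and reviewer are just callables, each can be backed by a different provider so neither model reviews its own work.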
This doesn't replace your need for the models, but you certainly don't need to rely on any of the cloud agent solutions out there that call these models under the hood.
On a separate note, READMEs written by AI are unpleasant to read. It would be great if they were written by a human for humans.
Also agree that teams should invest in their own harness (or maybe, pedantically, build a system on top of harnesses like Claude Code, Codex, Pi, or OpenCode).
It worked great, but time to first token was slow and multi-repo PRs took very long to create (30+ minutes).
Now I'm working on my own standalone implementation for cloud-native agents.