This is mainly a UX experiment for me. And also the reason it isn't a Chrome extension: once you have a chat bar that understands intent, the URL field is redundant. You shouldn't have two places to tell the browser what you want. I didn't want to bolt an agent onto an existing browser's chrome and end up with duplicated controls everywhere — I wanted full freedom to redesign the shell from scratch, decide what stays, what goes, and what a browser even looks like when the agent is the primary interface.
Download for macOS: https://usenimbus.app
Launch video: https://youtu.be/dj23-XIiB1o

- Ask-user tool. When the agent hits a judgment call (a cost
confirmation, an ambiguous form field, a captcha), it pauses and asks you
in the chat, not in a page overlay. You answer, it resumes.
I'm really obsessed with Claude Code's ask-user tool,
so of course I implemented it here too.
- Oracle. Use it to plan complex tasks, get help when you're stuck,
and create skills.
- Sessions. Each task is its own session with its own tab(s) and
history. Switch between them and let the tasks run in the background.
- Bring your own key. Gemini, OpenAI, Anthropic, or any
OpenAI-compatible endpoint. No server of mine in the loop.
- Skills. Teach it a reusable flow, or let it figure one out on its own,
and save it as a skill to reuse later.
- Auth handoff. When a login popup opens, the agent blocks, you
complete the auth, and the agent picks back up. I purposely didn't
automate things like auth and captchas: today's websites are built on
the assumption that a human completes those steps.
- Everything local. Traces of every run go to ~/.nimbus/traces/.
No telemetry (yet). Nothing reaches my servers; you talk to the LLM
providers directly.