Honestly, if you don't find it appealing you don't need to use it. I think a lot of folks don't find vim appealing and stick to vscode and that's okay too.
This is the sort of scenario that makes me think tools are being praised for how well they support major red flags in development flows.
Having dozens of changes in flight in feature branches that may or may not be interdependent is a major red flag. Claiming that a tool simplifies managing this sort of workflow sounds like you are mitigating a problem whose root cause is something else.
To me it reads like praising a tool for how it streamlines deployments to production by skipping all tests and deployment steps. I mean, sure. But doesn't this mask a far bigger problem? Why would anyone feel the need to skip checks and guardrails?
What if those who call out red flags actually do so based on experience, particularly in understanding how and why red flags are red flags and why it's counterproductive to create your own problems?
I mean, if after all your rich experience working on a diverse set of projects with various needs and requirements, your answer to repeatedly shooting yourself in the foot is that you need a tool to better aim around your toes... What does it say about what lessons you draw?
The jj lovers can go build their massive beautiful branches off in a corner, I'll be over here building an SDLC that doesn't require that.
Old man yells at cloud moment is over
Programs to manage “stacks of patches” go back decades. Those stacks might contain hundreds of patches that have accumulated over years, all rebased on top of the upstream repository. The upstream repository might be maintained by someone you barely know, or someone you haven’t managed to get a response from. But you have your changes in your fork and you need to maintain it yourself until upstream accepts it (if they ever call back).
I’m pretty sure that the Git For Windows project is managed as patches on top of Git. And I’ve seen the maintainer post patches to the Git mailing list saying something like, okay we’ve been using this for months now and I think it’s time that it is incorporated in Git.[1]
I’ve seen patches posted to the Git mailing list where they talk about how this new thing (like a command) was originally developed by someone on GitHub (say) but now someone on GitLab (say) took it over and wants to upstream it. Maybe years after it was started.
Almost all changes to the Git project need to incubate for a week in an integration branch called `next` before they are merged to `master`.[1] Beyond slow testing for the Git project itself, this means that downstream projects can use `next` in their automated testing to catch regressions before they hit `master`.
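A hedged sketch of what that downstream testing could look like (clone directory and test runner are made up; if I remember right, Git's build drops runnable wrappers under bin-wrappers/):

    # build Git's `next` branch, then run your own project's test suite against it
    git clone --depth 1 --branch next https://github.com/git/git.git git-next
    make -C git-next -j"$(nproc)"
    PATH="$PWD/git-next/bin-wrappers:$PATH" ./run-our-tests.sh   # hypothetical test runner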
† 1: Which is kind of like a “megamerge”
Dangling footnote. I decided against adding one and forgot to remove it.
There are different types of "large" PR's. If I'm doing a 10,000 LOC refactor that's changing a method signature, that's a "large" PR, but who cares? It's the same thing being done over and over, I get the gist of the approach, do some sampling and sanity checks, check sensitive areas, and done.
If I'm doing something more complex and storied to the point it requires stacks with dependencies, then I'm questioning why I haven't split and chunked the thing into smaller PR's in the first place and having those reviewed. Ultimately the code still has to get reviewed, so often it's about reframing the mindset more than anything else. If it organizationally slows me down to the point that chunking the PR into smaller components is worse than a stacked-pr like approach, I'm not questioning the PR structure, I'm questioning why I'm being slowed down organizationally. Are my reviews not picked up fast enough? Is the automated testing situation not good enough? The answer always seems to come back to the process and not the tooling in these scenarios.
What problem does the stacked PR solve? It's so I can continue working downstream while someone else reviews my unmainlined upstream code that it depends on. If my upstream code gets mainlined at a reasonable rate, why is this even a problem to be solved? It also implies that you're only managing 1-3 major workstreams if you're getting blocked on the feature downstream which also begs the question, why am I waterfalling all of my work like this?
Fundamentally, I still have to manage the dependency issue with upstream PR's, even when I'm using stacked PR's. Let's say that an upstream reviewer in my stacked PR chain needs me to change something significant - a fairly normal operation in the course of review. I still have to walk down that chain and update my code accordingly. Having tools to slightly make that easier is nice, but the cost benefit of being on a different opt in toolchain that requires its own learning curve is questionable.
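For what it's worth, stock git has grown some help for that chain-walk: since 2.38, `git rebase --update-refs` will drag the intermediate branch heads of a stack along for you. A rough sketch with invented branch names:

    # stack: main <- part-1 <- part-2 <- part-3; reviewer wants a change in part-1
    git checkout part-3
    git rebase -i --update-refs main   # mark part-1's commit as `edit`, make the change, continue;
                                       # the part-1 and part-2 branch refs are moved automatically
    git push --force-with-lease origin part-1 part-2 part-3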
It looks like you see stacked PRs as an inherently complex construct, but IMO splitting the implementation into smaller, more digestible and self-contained PRs is what stacked PRs are about
So if you agree that is a better engineering practice, then jj is only a tool that helps you do that without thinking too much about the tool itself
Turns out these two differences combined with tracking change identity over multiple snapshots (git shas) allow for ergonomic workflows which were possible in git, just very cumbersome. The workflows that git makes easy jj also keeps easy. You can stop yelling at clouds and sleep soundly knowing that there is a tool to reach for when you need it and you’ll know when you need it.
Yeah, and? Not everyone is in control of the culture of the organization they work in. I suspect most people are not. Is everyone on HN CEOs and CTOs?
A lot of people's taste-making comes from reading the online discussions of the engineering literati so I think we need old folks yelling at clouds to keep us grounded.
That’s why it’s always the same confusing hype when it’s discussed, because it’s AI/LLM hype effectively
I don't layer my utensils for example, because a spoon is fit for purpose and reliable.
But if I needed to eat multiple different bowls at once maybe I would need to.
For my personal use case, git is fit for purpose and reliable, even for complex refactoring. I don't find myself in any circumstances where I think, gosh, if only I could have many layers of this going on at once.
This is a little weird at first when you’ve been used to a decade and a half of contorting your mental model to fit git. But it genuinely is one of those tools that’s both easier and more powerful. The entire reason people are looking at these new workflows is because jj makes things so much easier and more straightforward that we can explore new workflows that remove or reduce the complexity of things that just weren’t even remotely plausible in git.
A huge one for me: successive PRs that roll out some thing to dev/staging/prod. You can do the work all at once, split it into three commits that progressively roll out, and make a PR for each. This doesn’t sound impressive until you have to fix something in the dev PR. In git, this would be a massive pain in the ass. In jj, it’s basically a no-op. You fix dev, and everything downstream is updated to include the fix automatically. It’s nearly zero effort.
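For the curious, a rough sketch of what that looks like in jj (bookmark names and change placeholders are invented):

    # split the work into three commits, one per rollout stage
    jj split                                  # carve out the dev-only part
    jj split                                  # then the staging part; the rest is the prod part
    jj bookmark create roll-dev -r <dev-change>
    jj bookmark create roll-staging -r <staging-change>
    jj bookmark create roll-prod -r <prod-change>

    # later: something needs fixing in the dev commit
    jj edit roll-dev                          # amend it directly...
    # ...edit files; the staging and prod commits are rebased on top automatically
    jj git push --bookmark roll-dev --bookmark roll-staging --bookmark roll-prod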
Another is when you are working on a feature and in doing so need to add a capability to somewhere else and fix two bugs in other places. You could just do all of this in one PR, but now the whole thing has to be reviewed as a larger package. With jj, it’s trivial to pull out the three separate changes into three branches, continue your work on a merge of those three branches, and open PRs for each separate change. When two of them merge cleanly and another needs further changes, you just do it and there’s zero friction from the tool. Meanwhile just the thought of this in git gives me anxiety. It reduces my mental overhead, my effort, and gives overburdened coworkers bite-sized PRs that can be reviewed in seconds instead of a bigger one that needs time set aside. And I don’t ever end up in a situation where I need to stop working on the thing I am trying to do because my team hasn’t had the bandwidth to review and merge my PRs. I’ve been dozens of commits and several stacked branches ahead of what’s been merged and it doesn’t even slightly matter.
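If I'm reading the jj docs right, the mechanical version of that is roughly this (names are invented, and the three changes have to be genuinely independent for the parallelize step):

    jj split                                      # carve bugfix #1 out of the in-progress work
    jj split                                      # then bugfix #2
    jj split                                      # then the new capability; feature work stays on top
    jj parallelize <bugfix-1>::<capability>       # turn that chain into siblings; the feature work
                                                  # on top becomes a merge of all three
    jj bookmark create bugfix-1 -r <bugfix-1>     # one bookmark per change, push and PR each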
(you may know that already, but maybe someone who reads this will find this helpful for forming a good mental model, as so many people lack one despite working with git daily)
And better conflict resolution means it often becomes viable to just have a megamerge as the next release
This. Things like stacks and mega-merges are huge red flags, and seeing enthusiastic people praising how a tool is more convenient to do things that raise huge red flags is perplexing.
Let's entertain the idea of mega-merges, and assume a tool fixes all tool-related issues. What's the plan to review the changes? Because what makes mega merges hard is not the conflicts but ensuring the change makes sense.
What's the red flag about a stack?
And I hope you do. It is so much better than git in every way. It enables working with stacks and the aforementioned megamerges so easily, allowing me to continue working forward while smaller units of work are reviewed/merged.
When I first tried to use jj, I wasn't entirely committed and switched between jj and git. Finally I hit a breaking point being fed up with stacks/merges and tried jj _for real_.
I recommend giving it a serious try for a few solid days and using it exclusively to really understand it. You won't go back.
The jj Discord is a very helpful place. Thanks to everyone there. Great article Isaac!
Btw, the risk of trying out other modern version control systems is nearly as low: most of them are compatible with git and you can convert back and forth. That definitely includes mercurial etc.
People tried mercurial. They went back to git.
I recently started a new job where the vanilla git CLI is the only git frontend installed on company servers, and the regressions in user-experience are painful :(
Not some. I mean, even the few source code repository services that supported mercurial started dropping it.
See Bitbucket's announcement:
https://www.atlassian.com/blog/bitbucket/sunsetting-mercuria...
> According to a Stack Overflow Developer Survey, almost 90% of developers use Git, while Mercurial is the least popular version control system with only about 3% developer adoption. In fact, Mercurial usage on Bitbucket is steadily declining, and the percentage of new Bitbucket users choosing Mercurial has fallen to less than 1%.
Though, I'd be remiss not to mention that this (and any other) jj workflow would be much easier with jjui. It's the best TUI around, not just for jj
I proposed incorporating some of this article into it. https://github.com/idursun/jjui/discussions/644
I imagine if I follow this workflow, I might accidentally split it off in a way that branch A is dependent on some code changes in branch B, and/or vice versa. Or I might accidentally split it off in a way that makes it uncompilable (or introduce a subtle bug) in one commit/branch because I accidentally forgot there was a dependency on some code that was split off somewhere else. Of course, the CI/CD pipeline/reviewers/self-testing can catch this, but this all seems to introduce a lot of extra work when I could have just been working on things one at a time.
I'm open to changing my mind, I'm sure there are lots of benefits to this approach, since it is popular. What am I missing here?
When I have discrete, separate units of work, but some may not merge soon (or ever), being able to use mega merges is so amazing.
For example, I have some branch that has an experimental mock-data-pipeline thingy. I have yet to devote the time to convince my colleagues to merge it. But I use it.
Meanwhile, I could be working on two distinct things that can merge separately, but I would like to use Thing A while also testing Thing B, but ALSO have my experimental things merged in.
Simply run `jj new A B C`. Now I have it all.
Because jj's conflict resolution is fundamentally better, and rebases are painless, this workflow feels natural and simple to use.
I don't know jj well so its merge algorithm may well be better in some aspects but it currently can't merge changes to a file in one branch with that file being renamed in another branch. Git can do that.
You’re right that I have to make sure that the backend changes don’t depend on the mobile changes, but I might have to be mindful of this anyway if the backend needs to stay compatible with old mobile app versions. Megamerge doesn’t seem to make it any harder.
because agents are slow.
I use a SOTA model (latest Opus/ChatGPT) to first flesh out all the work. Since a lot of agent harnesses use some black magic, I use this workflow:
1. Collect all issues
2. Make a folder
3. Write each issue as a file with a complete implementation plan to rectify the issue
After this, I change from the SOTA model to a mini model.
Loop through each issue or run agents in parallel to implement 1 issue at a time.
I usually need to do 3 iteration runs to implement full functionality.
In other words, I effectively was working on one thing, but at a quicker easier pace.
You've missed a crucial detail.
You've both been doing it, but only one of you was using a tool that needed rebases to pull it off.
Your repo is small and/or your CI is fast. You’ll understand in a big repo or when CI has to run overnight to get you results.
I gather one scenario is: You do a megamerge and run all your tests to make sure new stuff in one branch isn't breaking new stuff in another branch. If it does fail, you do your debug and make your fix and then squash the fix to the appropriate branch.
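Right, and in jj terms that last step is roughly this (branch names made up):

    jj new feat-a feat-b feat-c     # the throwaway megamerge
    # ...run the tests, debug, make the fix in the working copy...
    jj squash --into feat-b         # move the fix into the branch it actually belongs to
    # or let jj pick the destination based on which lines were touched:
    jj absorb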
I understand if people are enjoying it great, but the amount of praise and 'this is revolutionary' comments I see makes me really feel I'm missing a beat.
This doesn't seem to be entirely up-to-date: http://github.github.com/gh-stack
Won't get you much if you don't like to mutate commits in general, of course; at that point it's just a different committing workflow, which some may like and some dislike. (I for one am so extremely-happy with the history-rewriting capabilities that I've written some scripts for reinventing back a staging area as a commit, and am fine to struggle along with all the things I don't like about jj's auto-tracking)
As a fun note, git 2.54 released yesterday, adding `git history reword` and `git history split` in the style of jj (except less powerful because of git limitations) because a git dev discovered jj.
> Basically, in the megamerge workflow you are rarely working directly off the tips of your branches. Instead, you create an octopus merge commit (hereafter referred to as “the megamerge”) as the child of every working branch you care about. This means bugfixes, feature branches, branches you’re waiting on PRs for, other peoples’ branches you need your code to work with, local environment setup branches, even private commits that may not be or belong in any branch. Everything you care about goes in the megamerge. It’s important to remember that you don’t push the megamerge, only the branches it composes.
> You are always working on the combined sum of all of your work. This means that if your working copy compiles and runs without issue, you know that your work will all interact without issue.
You don't even push the megamerge to the origin. Or perhaps you don't even need to push it. You can just... work off it.
But why would I do that with git anyway? My local branch is what I'm working off, if I'm not ready to push, why would I? I can, as you say, just work off it.
And when I'm ready to push, I prep my commit, because I'm expecting it to be immutable and pulled by others 'as-is'. Again, I must be missing something. I think the tool is just not for me, yet at least.
Both get iterated on, because it's hard to know everything about a feature before it's done; maybe you find bugs that need fixing, or you realise you were missing something.
Rebasing the dependent branch onto the tip of the other branch gets you there, but as a workflow it's not pleasant, especially if you're not the only person working on the features... It's a recipe for conflicts, and worse, that rebased branch can end up conflicting with another person's view of it.
You are working on stuff in the backend, but it sure would be nice to see it in the frontend so you jury rig something in the frontend to display your work as well as some console.log() commands. Then you forget to revert changes to the frontend before pushing the branch.
In jj you would start with these as separate branches. Then you work on a merge of these. Then you use the absorb command to auto-move the code you are working on to the correct branch or you squash the changed files to the branch. Then you can push the backend branch to server and PR it. Then you throw away the frontend branch or just leave it there so you can use it in a future feature.
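A hedged sketch of that flow (bookmark names invented to match the scenario, path is made up):

    jj new backend frontend          # develop on top of a merge of both branches
    # ...hack on the real backend change plus the throwaway frontend display code...
    jj absorb                        # edits flow back into whichever parent last touched those lines
    # or move specific files explicitly:
    jj squash --into backend src/api/handler.py
    jj git push --bookmark backend   # PR only the backend branch; the frontend scaffolding stays local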
A real case from my work. I had to work on an old Python project that used Poetry and some other stuff that was just not working correctly on my computer. I did not want to touch the CI/CD pipeline by switching fully to uv. But I created a special uv branch that moved my local setup to uv. Then went back up the tree to main and created a feature branch from there. Merged them together and worked out from that branch moving all the real changes to the feature branch. Now whenever I enter that project I have this uv branch that I can merge in with all the feature branches to work on them.
Remember in JJ you're always "in a commit", so the equivalent of the git working tree (i.e. unstaged changes in git) is just a new commit, often with no description set yet. (Because in JJ a commit also doesn't need a description/commit message immediately).
So in a mega-merge you can have a working tree that pulls from local-dev-tuneup, bugfix-a, and feature-b, and you can then squash or split changes out of it onto any of those source branches. Like you can avoid serializing those branches before you're ready to.
I've definitely faced the scenario in Git where I have unmerged changes that I want to use while continuing work on a feature branch. I end up creating a PR for the branch with the first, smaller feature (e.g. local-dev-tuneup->master), then a second PR pointing at the first (feature-a -> local-dev-tuneup). It works but it's all a bit cumbersome, even more so if feature-a ends up needing to land before local-dev-tuneup. JJ has better tools for handling this.
Or potentially a team member has critical changes with a PR open and you want to start building on top of their changes now. Manageable in Git but you're locked in on a branch of their branch. Now add a second set of critical changes. Can be done in git but you'll be jumping through hoops to make it happen.
Of course you might say that all indicates a workflow/process problem, but my experience is these situations are common enough to matter, even if not frequent.
(I haven't actually used megamerges myself yet but the article has me ready to try them out!)
what's next, "oh! my gitess"? "chainsvn man"?
I have a PR up for jjk that does the full change as a review changes, and there's another user's PR that allows diffs over arbitrary ranges (i.e. when working out whether the commits that make up a PR are good as a whole rather than individually)
If none exist, I think there's a great opportunity there, for anyone with the knowledge and motivation to make some absolute beginner guides. Already jj is infinitely more user-friendly, and as the tool matures, it isn't far fetched to think a new generation of programmers could go straight to jj without knowing their way around git first.
> At the time of writing, most Jujutsu tutorials are targeted at experienced Git users, teaching them how to transfer their existing Git skills over to Jujutsu. This tutorial is my attempt to fill the void of beginner learning material for Jujutsu.
Exactly what I was looking for, thank you!
It's certainly very usable despite all that, and the changes are simple enough to adapt to, but it's a pretty new thing.
Someone who "knows enough to be dangerous" may be better served by sticking with the git happy-path.
Of course, if working with others you should use what they do until you're confident that you can switch without impacting them.
[1] https://docs.jj-vcs.dev/latest/cli-reference/#jj-parallelize
[2] https://blog.chay.dev/parallelized-commits
My mind was a little blown when I read about the megamerge strategy in Steve Klabnik's tutorial.[1]
Yes, Jujutsu's approach of autorebasing changes is very nice. Now all I have to do is to try it myself.
† 1: https://steveklabnik.github.io/jujutsu-tutorial/advanced/sim...
Insanely easy and effective.
If anyone is JJ-curious, I also can't recommend the Discord[1] enough. The community is very helpful and welcoming.
There's some counterproductive stuff in there from my perspective but at its core you're keeping up a throwaway integration branch, which is helpful practice if you'll ever care about an integration. It's annoying with git because the interface for updating your throwaway integration branch is very clunky and easy to get wrong.
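Right — in git that throwaway integration branch ends up being something like this (branch names invented), rebuilt from scratch whenever any of its inputs move:

    git checkout -B throwaway-integration main
    git merge --no-edit feature-a feature-b bugfix-c   # octopus merge of everything in flight
    # test against it, then discard; note the octopus strategy refuses to run if any
    # merge needs manual conflict resolution, which is part of what makes this clunky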
IIUC This is already implemented for git as an extension. https://github.com/tummychow/git-absorb
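For anyone who hasn't tried it, the flow is roughly this, if I remember the flags right:

    # make your fixes on top of the branch, then stage them
    git add -u
    # git-absorb figures out which earlier commit each hunk belongs to and writes fixup! commits;
    # --and-rebase also runs the autosquash rebase for you
    git absorb --and-rebase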
I think this is such a basic thing that should be part of any DVCS implementation.
From this comment on the git-absorb issue tracker I wouldn't expect it to be fixed soon either: https://github.com/tummychow/git-absorb/issues/134#issuecomm...
It's ok to force-push a branch that only you have worked on (and even in the case of others working on the same branch it can be fine as long as you communicate with them)
That said, jj will warn you if you try to edit an immutable commit, and you can configure what it considers immutable.
https://www.jj-vcs.dev/latest/config/#set-of-immutable-commi...
You can force changes with an `--ignore-immutable` flag.
I just wish Jujutsu supported git tags rather than only supporting bookmarks as branches. And I also wish that Jujutsu supported preserving commit dates during rebases.
One of my absolute favorite things about Jujutsu is how easy it is to manipulate the commit graph remotely without having to manually checkout each commit first. I've been working on some pull requests to their built-in diff editor lately trying to improve the user experience enough that most conflicts will be fixable without having to use a text editor.
Also, the lack of a special staging area means you also never have to fucking stash your changes before you can do practically anything. Your changes always have a place, you can always go somewhere else and you can always come back.
There are commands for manipulating tags (jj tag set, jj tag delete), and recently[1] support for fetching / pushing them.
Probably my favourite thing that has really changed my workflow is being able to write empty commits in advance then just switch between them. It helps me remember what I’m doing and what's next whenever I get distracted or take a break.
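Something like this, I believe (descriptions are invented):

    jj new -m "TODO: extract the parser"          # empty, described-up-front placeholder commits
    jj new -m "TODO: wire parser into the CLI"
    jj new -m "TODO: docs + changelog"
    jj edit <whichever-todo-change>               # jump to the one you want to work on next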
Eventually I settled on a tree-like megamerge that's more practical: merge 2 branches at a time and merge the merged branch with the next branch. This way I only need to handle 2-way conflicts at a time which is more manageable.
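i.e. something like this (branch names made up):

    jj new branch-a branch-b      # 2-way merge; resolve just its conflicts
    jj new @ branch-c             # then merge that result with the next branch
    jj new @ branch-d             # ...and so on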
Also you have to be very careful about deciding the order in which you (and your colleagues) are going to land the branches, or whether any new features other people are working on are going to conflict with your branches. When using the megamerge workflow, most of the problems come from coordinating with other colleagues.
I'm hesitant to pick jj up in case it ends up losing to git like mercurial did. But it's very tempting.
I can't see it going anywhere. It is in many ways "just" a different porcelain for git. The plumbing is the same. It's also safer to use: no JJ command can lose data another JJ command can't recover.
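Concretely, that safety net is the operation log:

    jj op log                      # every operation (snapshot, rebase, push, ...) is recorded
    jj undo                        # roll back the most recent operation
    jj op restore <operation-id>   # or jump the whole repo back to any earlier recorded state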
One thing I like is there's many ways to achieve the same result. E.g. author uses a fancy rebase to graft a new branch between trunk and merge point. I could do the same by: 1) rebase -s onto trunk, 2) merge new branch with mega merge, 3) squash old megamerge upwards into new merge. No cryptic revset needed.
When LLMs are driving development, source control stops being an active cognitive concern and becomes a passive implementation detail. The unit of work is no longer “branches” or “commits,” it’s intent. You describe what you want, the model generates, refactors, and reconciles changes across parallel streams automatically.
Parallel workstreams used to require careful coordination: rebasing, merging, conflict resolution, mental bookkeeping of state. That overhead existed because humans were the bottleneck. Once an LLM is managing the codebase, it can reason over the entire state space continuously and resolve those conflicts as part of generation, not as a separate step.
In that world, tools like jj are optimizing a layer that’s already being abstracted away. It’s similar to how no one optimizes around assembly anymore. It still exists, it still matters at a lower level, but it’s no longer where productivity is gained.
It had better be, now and going forward, for people who use LLMs... because they will need it when the LLM messes up and they have to figure out, manually, how to resolve things.
You'll need all the help (not to mention luck) you can get then.
You're bashing the old way, but you do not provide any concrete evidence for any of your points.
> The unit of work is no longer “branches” or “commits,” it’s intent.
Insert <astronaut meme "always has been">.
Branching is always about "I want to try to implement this thing, but I also want to quickly go back to the main task/canonical version". Committing is about I want to store this version in time with a description of the changes I made since the last commit. So both are an expression and a record of intent.
> Parallel workstreams used to require careful coordination: rebasing, merging, conflict resolution, mental bookkeeping of state.
Your choice of words is making me believe that you have a poor understanding of version control and only see it as storage of code.
Commits are notes that annotate changes. When you want to share your work, you share the changes since the last version everyone knows about, alongside the notes that (should) explain those changes. But just like you take time to organize and edit your working notes for a final piece, rebasing is how you edit commits to have a cleaner history. Merging is when you want to keep the history of two branches.
Conflict resolution is a nice signal that the intent of a section of code may differ (e.g. one wants blue, the other wants red). Having no conflict is not a guarantee that the code works (one reduces the size of the container while the other increases the flow of the pipe; both wanted to speed up filling the container). So you have to inspect the code and run tests afterwards.
Discard the above if you just don't care about the code that you're writing.
Tools like git and jj exist to help humans manage state: branches, commits, rebases, conflicts, history curation. That whole model assumes a human is directly manipulating and reasoning about the codebase.
With LLMs in the loop, that assumption breaks. I don’t need to think in terms of branches or commits. I describe intent, and the model handles the mechanics of editing, reconciling, and producing a coherent result. Source control becomes an implementation detail of the toolchain, not something I actively operate.
jj is an improvement over git for humans, but that’s exactly why it feels like a local maximum. It refines a workflow that is already being abstracted away.
I’m not saying version control disappears. I’m saying it moves down a layer, the same way memory management or instruction scheduling did. When that happens, optimizing the human interface to it matters a lot less.
Think about the following first. You have a problem in the real world, and if you can subdivide it into smaller problems, you will find that some are simple enough that a computer can take care of them and never be bored while doing it. And thanks to the last few decades, a lot of them have ready-made solutions. But you have to coordinate those solutions and write a program. And for that you need to write instructions into text files.
But the real world is not static and you can't figure out the solution in one go, so you have to do iterative work on it. And unlike the real world, the only cost of modifications is time. But you still want backups and the ability to restore versions. So here comes version control for the code of the software.
So you start thinking about all the possible workflows you could build with checkpoints you can return to in a few minutes, and it will look very close to something like git (or cvs). The one catch is that the computer is far removed from the problem that is driving all the changes; it sits on the other side. So it can't magically correct issues, and you have to step in instead.
> With LLMs in the loop, that assumption breaks. I don’t need to think in terms of branches or commits. I describe intent, and the model handles the mechanics of editing, reconciling, and producing a coherent result.
That would be great if that were possible now, but that looks like a synopsis for some SF novel. I can use git or jj today, but your version is lacking the several steps that would make this a daily occurrence.
> memory management or instruction scheduling
You may think that they did, but that's until you have to deal with a memory leak or concurrent tasks. What we want version control for is the capability to snapshot state, restore to a known state, and share changes (instead of whole folders) when collaborating. How it's done does not really matter, but git's conceptual model is very close to ideal (at least for text files and line-based statements). And its UX is versatile enough to be adaptable for all sorts of workflows.
That said, "jj new -A @", get Claude to do it's thing, "jj squash" is going to be pretty safe.
Anonymous branches. Easy to move things around. Always succeeding merges. Megamerges. Worktrees not bound to branches.
        / A \
    -- B ----- Megamerge
        \ C /

There is nothing stopping you from doing this:

        / A - D \
    -- B --------- Megamerge
        \ C - E /

(Edit, or even this:)

        / A - D \
    -- B --------- Megamerge
        \ C -----/
           \ E /

Where E stacks on C and D stacks atop A. In the case above, A-E are revsets of either 1 or more commits. JJ doesn't care if they are or not. You'd generally bookmark the revset on the final "commit" as the pointer.

            / features/add-widgets
           /   / features/add-widget-integration
        / A - D \
    -- B --------- Megamerge
        \ C - E /
           \   \ feature/add-new-page
            \ feature/rework-navigation
In the example above, let's say you rework the navigation. You could have it exist alongside the navigation rework, but chances are you don't want to do the work twice. You just say "hey, this depends on the nav rework" and so it's there inside of the repo. The thing is there is another way to do this where you end up with 4 different parents in a megamerge and your nav rework touches the megamerge and your new page is yet another revset that is just a fork off of it. But yeah... JJ gives you a lot of flexibility in this manner.
I'm still not as smooth at figuring out conflicts on mega-rebase.
Next up, once my Sunday morning token allowance resets, is to look at using git.