LinkedIn has become noisy with folks espousing extreme productivity thanks to AI. I’m skeptical. Yes, I think AI is going to be a massive shift, but we’re early in the process. So far, there hasn’t been much measurable impact beyond people letting skills atrophy from disuse. That said, at the beginning of a shift like this, tooling matters. The folks chasing extreme productivity by spinning up 30+ agents need automation to make it work, and the tools are a moving target. With that in mind, here are a few tools that I’ve been writing and using that have been somewhat successful.
First off, while I’ve used Codex and Gemini, my primary driver is Claude Code. I haven’t noticed differences big enough to make me want to juggle multiple agents from different providers. Maybe that puts me low on the adoption totem pole, but again, I’m skeptical the work is worth it. This stuff is really hard to measure, so unless something clearly helps me finish work, I’m not going to spend much time on it.
The first tool I want to call out is workset. I used superset.sh and found its breakdown from repo to worktree to terminals simple and easy to use. My only frustrations were that the terminal was less than ideal and making small code edits wasn’t intuitive. I’ve used Emacs forever at this point, so it was the natural place to replicate similar functionality in an environment that lands me in familiar territory when I need it. For example, Emacs has magit for working with git, and it is extremely helpful when I need to fix up a repo that has gotten vibed into a bad state. I don’t think anyone should use my workset tool, but it was easy to build and works reasonably well. If you are struggling to organize your agents, write tooling that makes sense to you.
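The repo-to-worktree breakdown is easy to replicate yourself. Here is a minimal Python sketch of the pattern, using `git worktree` to give each agent task its own isolated checkout. The helper name, branch scheme, and directory layout are my own invention for illustration, not what workset actually does:

```python
import subprocess
from pathlib import Path

def add_agent_worktree(repo: Path, task: str) -> Path:
    """Create a fresh branch + worktree so one agent can work in isolation.

    Hypothetical helper -- the branch naming (agent/<task>) and the
    sibling-directory layout are assumptions, not workset's behavior.
    """
    worktree = repo.parent / f"{repo.name}-{task}"
    subprocess.run(
        ["git", "-C", str(repo), "worktree", "add",
         "-b", f"agent/{task}", str(worktree)],
        check=True,
    )
    return worktree
```

Each agent then gets its own directory and branch, so parallel edits never collide, and `git worktree remove` cleans up once the task lands.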
The second tool is rek. The name is trolling Nick and the tool he has been working on called erk. One thing about erk that is really valuable is the public nature of the work it does: everything lands on public systems like GitHub to provide as much transparency as possible. I’ve always felt code review is more about communicating what is going on and less about humans finding bugs. Tests should catch bugs. In this new world of AI, your prompts and plans become important artifacts that communicate your intent and help your team learn how to get better at leveraging AI.
So, why rek?
I tried Gas Town and wished it worked as advertised. The blogs mentioned “slinging” work to “polecats” while you continued doing things in your “director” session. My v1 of rek managed terminals and tried to do similar things, spawning new Claude sessions that I could manage in tmux. After trying superset.sh, I realized the tmux behavior wasn’t necessary. It was also painful to get things to work in the background, which mattered because I wanted work to happen while I was in meetings. When I ditched the tmux side, I reset and tried to build a system that broke down work like Gas Town but didn’t use beads and actually did things in parallel. This time it worked well: my sessions spin up multiple agents in parallel and build in the background. That is what rek does, and I’m pretty happy with it. I also have some functionality to use Linear tickets, but we’ll see if that develops.
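The fan-out pattern described above can be sketched in a few lines. This is not rek’s code, just the underlying idea: spawn one process per task prompt, let them run in parallel, and collect output as each finishes. I use `echo` as a stand-in command; in practice you would swap in a real agent CLI invocation (an assumption on my part, not rek’s API):

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_agents(prompts: list[str], cmd: tuple[str, ...] = ("echo",)) -> list[str]:
    """Run one child process per prompt, in parallel.

    `cmd` defaults to echo purely for demonstration; the real agent
    command is left to the reader.
    """
    def run_one(prompt: str) -> str:
        done = subprocess.run(
            [*cmd, prompt], capture_output=True, text=True, check=True
        )
        return done.stdout.strip()

    # Threads are enough here: each worker just blocks on a child process,
    # so the actual parallelism happens at the OS process level.
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(run_one, prompts))
```

Because `pool.map` preserves input order, results line up with the prompts that produced them, which makes it easy to report per-task status back to a foreground session.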
The last tool I’ve been playing with is crow. Crow is a code review tool to help me review code more quickly and fix code after someone leaves suggestions on a PR. I’m not as bullish on this tool, but I do like the ergonomics. You run crow review $PR and it will download and review the PR. If the PR is yours, it will try to address the comments. The reviewers are decent and simpler than something like the /code-review tool in Claude. Erk also has a really great /erk:pr-address command that you can use in a session, so maybe I’ll make crow review use that when it starts a session. At the end of the day, the goal was something local that could make changes, run tests, and exercise the code in question to analyze it better. It is still a work in progress, but I like the ergonomics and the output is comparable to other tools.
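To make the “address comments” step concrete, here is a hypothetical sketch of how a tool like crow might fold review comments and a PR diff into a single agent prompt. The function name and prompt format are my own invention for illustration, not crow’s actual internals:

```python
def build_address_prompt(diff: str, comments: list[str]) -> str:
    """Bundle review comments and the PR diff into one agent prompt.

    Hypothetical format: a real tool would also want file/line anchors
    for each comment and instructions on which tests to run.
    """
    numbered = "\n".join(f"{i}. {c}" for i, c in enumerate(comments, start=1))
    return (
        "Address each review comment below, then run the test suite "
        "and report what changed.\n\n"
        f"Review comments:\n{numbered}\n\n"
        f"PR diff:\n{diff}"
    )
```

The nice property of this shape is that everything the agent needs is local: the diff, the comments, and the instruction to actually run the tests rather than just pattern-match a fix.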
None of these tools are life changing, and none make me hyperbolically productive with AI. They are small tools I found helpful, and more importantly, I learned things by building them. When the web was new, we viewed source to see how things were built and learned from it. Building tools for AI feels similar, but with the addition of intense existential feelings about what it means to build. I hope folks put the extreme anecdotal opinions on AI productivity in perspective and pull out a little inspiration for trying new things with AI. Tools are a great way to learn.