torginus 7 hours ago [-]
I think they're trying to implement every management fad with AI agents and see if it improves performance.
Personally, I have tried pair programming, and it hasn't really felt like something that works, for various reasons. The main one is that my partner and I each have complex thought processes in our heads that are difficult and cumbersome to articulate, and to an onlooker it looks like I'm randomly changing code.
latchkey 6 hours ago [-]
I worked at Pivotal Labs where hundreds of developers pair programmed every day, all day. It works; the trick is learning how to get out of your head and communicate with your pair so that two brains work better than one.
I agree, it isn't for everyone.
encoderer 5 hours ago [-]
Pair programming works best when you are tasked with a problem that’s actually beyond your current abilities. You spend less time in your head because you are exploring a solution space for the first time.
yesensm 14 hours ago [-]
I’m curious whether anyone has measured this systematically. Right now most of the evidence for multi-agent setups still feels anecdotal.
not_ai 11 hours ago [-]
And expensive, exactly the way a pay-per-use product would push its customers…
“It’s not working well enough!” we tell them. They respond with “Have you tried using it more?”
3yr-i-frew-up 9 hours ago [-]
Back in 2024 I read a study saying: "Ask 4 LLMs the same question; if they all give you the same answer, there is some 95-99% chance it's correct."
Soooo... It's not just greed. There is something there.
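The claim above can be sketched as a simple consensus check. This is a toy version only: the stub lambdas stand in for real model calls, and the threshold values are illustrative, not from the study.

```python
from collections import Counter

def consensus_answer(question, models, threshold=1.0):
    """Ask several models the same question; accept the answer only if
    the agreeing fraction meets the threshold (1.0 = unanimous)."""
    answers = [ask(question) for ask in models]
    best, count = Counter(answers).most_common(1)[0]
    if count / len(answers) >= threshold:
        return best  # strong agreement: treat as a likely-correct answer
    return None      # disagreement: escalate to a human

# Stub "models" standing in for real LLM calls (hypothetical):
models = [lambda q: "4", lambda q: "4", lambda q: "4", lambda q: "5"]

print(consensus_answer("2+2?", models))                  # None (not unanimous)
print(consensus_answer("2+2?", models, threshold=0.75))  # "4" (3 of 4 agree)
```

Requiring unanimity trades coverage for precision, which matches the thread's point: agreement is rare but a strong signal when it happens.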
axldelafosse 7 hours ago [-]
Yes exactly. I’m talking about this in the article. I found out that when Claude and Codex both review the same PR and both find the same issue, our team fixes it 100% of the time.
zombot 6 hours ago [-]
What's the point of pair programming then if they both have the same opinions?
axldelafosse 4 hours ago [-]
They don't. And you would be surprised how a good model actually pushes back on some comments.
The point was: when they do agree, it is a very strong signal.
pixl97 5 hours ago [-]
There are a number of different models out there.
shafyy 9 hours ago [-]
Haha yeah... Wait until they start jacking up the subscription prices
observationist 7 hours ago [-]
They don't change the prices; they just modify the amount of compute allocated: slower speeds and fewer tokens. They can tune everything in the background to optimize costs and returns, and the user never realizes anything has changed.
Sometimes they'll announce the changes, and they'll even try to spin it as improving services or increasing value.
Local AI capabilities are improving at a rapid pace, at some point soon we'll have an RWKV or a 4B LLM that performs at a GPT-5 level, with reasoning and all the bells and whistles, and hopefully that'll shake out most of the deceptive and shady tactics the big platforms are using.
stackgrid 13 hours ago [-]
Completely with you on this! But then we need to define the criteria for comparison. Might not be that easy, unfortunately.
edf13 15 hours ago [-]
Nice - I do something similar in a semi manual way.
I do find Codex very good at reviewing work marked as completed by Claude, especially when I get Claude to write up its work with a why, where & how doc.
It’s very rare Claude has fully completed the task successfully and Codex doesn’t find issues.
axldelafosse 15 hours ago [-]
I created the first version of loop after getting tired of doing this manually!
edf13 15 hours ago [-]
I’m going to take a look today!
lancekey 8 hours ago [-]
Do you see any benefit in doing this locally versus having Codex review the PR Claude generates?
axldelafosse 4 hours ago [-]
The feedback loop is faster. But PR reviews are still useful as they are multiplayer (meaning that you and another human reviewer can talk about a specific agent's comment directly on the diff, which is very useful sometimes).
nurettin 12 hours ago [-]
Claude is also good at that. I made a habit of asking "are you sure?" after a complex task. It usually says it overlooked something.
ctmnt 9 hours ago [-]
I find both to be true. I use Claude for most of the implementation, and Codex always catches mistakes. Always. But both of them benefit from being asked if they’re sure they did everything.
dgb23 11 hours ago [-]
If this approach turns out to be valuable, it's unlikely that it has anything to do with having multiple actual agents; rather, it's valuable to have two configurations (system prompt, model, temp, context pruning, toolset, etc.) inside the same agent being swapped back and forth.
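The two-configuration idea can be sketched as a toy loop: one "agent" alternates between a worker role and a reviewer role until the reviewer has no more issues. The stub worker/reviewer functions below are hypothetical stand-ins for model calls under different configurations.

```python
# Stub roles standing in for two configurations of the same agent:
def worker(task, feedback, round_no):
    """'Implementer' configuration: produce the next draft."""
    return f"{task} draft {round_no}"

def reviewer(draft):
    """'Critic' configuration: return a list of issues, empty when clean."""
    return [] if "draft 3" in draft else ["needs work"]

def loop(task, max_rounds=5):
    """Swap back and forth between the two configurations."""
    draft = worker(task, [], 1)
    for r in range(1, max_rounds + 1):
        issues = reviewer(draft)
        if not issues:
            return draft, r          # reviewer is satisfied
        draft = worker(task, issues, r + 1)
    return draft, max_rounds         # round budget exhausted

print(loop("fix login bug"))  # converges on the third round here
```

The `max_rounds` cap matters in practice: without it, two configurations that disagree can ping-pong forever.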
axldelafosse 7 hours ago [-]
Yeah maybe! Right now I find it useful to use different agent harnesses but as the models get better (and the agent harnesses simpler), it might be possible to get the same result with the same model. Would be cool to experiment with open-source models
ramon156 8 hours ago [-]
I've always wondered what it would be like if we reversed the roles. I remember people claiming they had gotten better results if an agent started asking the questions.
What if we had an agent-to-agent network that contacted the human as a source of truth whenever it needed one? Keep a list of employees who are experts in a given skill, then let them answer 1-2 questions.
Or are we speeding up our replacement like this?
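The expert-routing idea above can be sketched in a few lines. Everything here is made up for illustration: the expert registry, the per-expert question budget, and the return shape.

```python
# Hypothetical registry mapping a skill to the human expert for it.
EXPERTS = {"databases": "alice", "frontend": "bob"}
QUESTIONS_ASKED = {}   # how many questions each expert has fielded
MAX_QUESTIONS = 2      # "let them answer 1-2 questions"

def ask_human(skill, question):
    """Route an agent's question to the registered expert,
    capped at a small per-expert budget; None means the agent
    must proceed without a human source of truth."""
    expert = EXPERTS.get(skill)
    if expert is None:
        return None  # no expert registered for this skill
    if QUESTIONS_ASKED.get(expert, 0) >= MAX_QUESTIONS:
        return None  # budget exhausted; don't interrupt the human again
    QUESTIONS_ASKED[expert] = QUESTIONS_ASKED.get(expert, 0) + 1
    return (expert, question)

print(ask_human("databases", "Is this index safe to drop?"))
print(ask_human("quantum", "???"))  # no expert: None
```

The budget is the interesting design choice: it inverts the usual flow, making human attention the scarce, rate-limited resource instead of model tokens.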
pancsta 8 hours ago [-]
What if a search engine queried YOU? That’s the question (I guess).
pkaeding 7 hours ago [-]
That's how they worked in Soviet Russia, right?
bulletsvshumans 8 hours ago [-]
I had a prototype where the agent primarily worked autonomously, and it could solicit human feedback via tool calls. But yes that pattern does feel like it is hastening the apocalypse.
axldelafosse 7 hours ago [-]
Interesting! Feels like a good way to write docs/memories. What I like about loop is that it runs the interactive TUIs so you can answer questions in both sessions (not just the main worker). It is not human multiplayer though (but that would be cool).
I like this idea, I’ll experiment with it as part of a brainstorming skill to make the agents ask clarifying questions (to each other and to the human in the loop).
rootnod3 8 hours ago [-]
Hint: how did these humans become experts in that area? Definitely not by using AI.
cadamsdotcom 18 hours ago [-]
The vibes are great. But there’s a need for more science on this multi agent thing.
axldelafosse 17 hours ago [-]
I agree! Right now it is leveraging the Codex App Server, which is open-source and very well implemented, but using Claude Code Channels is probably a bit hacky.
The good thing is that it establishes a direct connection so it's already much better than having one agent spawn the other and wait for its output, or read/write to a shared .md file -- but it would be cool to make it work for all agent harnesses.
The PLAN.md question is the one worth pulling on. Once the plan lives in git or the PR it's already downstream of intent and whoever defined what to build has already handed off. The harder problem is giving agents access to the original intent, not just the implementation plan derived from it. When there's drift between what was planned and what got built, a git-resident PLAN.md makes it hard to trace back to why the decision was made in the first place.
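The direct-connection point can be sketched with in-process queues: two agents exchange messages on a live channel instead of one spawning the other and waiting, or both polling a shared file. Queue names and message shapes below are purely illustrative, not loop's actual protocol.

```python
import queue
import threading

# One queue per direction: a toy stand-in for a bidirectional channel.
to_codex, to_claude = queue.Queue(), queue.Queue()

def codex_reviewer():
    """Reviewer side: respond to each patch as it arrives."""
    while True:
        msg = to_codex.get()          # blocks until the worker sends
        if msg == "done":
            to_claude.put("approved")
            return
        to_claude.put(f"review of: {msg}")

threading.Thread(target=codex_reviewer, daemon=True).start()

results = []
to_codex.put("patch #1")              # worker sends without re-spawning anyone
results.append(to_claude.get())       # reviewer's comment comes straight back
to_codex.put("done")
results.append(to_claude.get())
print(results)
```

Because both sides stay alive, either one can also surface a question to the human mid-session, which the spawn-and-wait pattern can't do.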
hrimfaxi 8 hours ago [-]
The plan will always be downstream of intent though. At least in git you can track the evolution of the plan over time and hopefully annotate the rationale for changes in direction.
sibtain1997 8 hours ago [-]
Fair point. Git helps track how the plan changes, but it doesn’t always capture the original intent behind it.
axldelafosse 7 hours ago [-]
I’m glad someone is finally bringing this up, thank you!
What are you suggesting instead? To share the prompt in order to capture the intent? Usually I expect the plan to reflect the prompt.
I find it interesting when I create a PR after a quick session: the description really captures the intent instead of focusing on the actual implementation. I think it’s because the context is still intact, and that’s very useful.
You can also create a skill for reviewing (which calls gemini/codex as a command line tool) and set instructions on how and when to use. Very flexible.
axldelafosse 7 hours ago [-]
Yes, but what’s cool about loop is that it runs the interactive TUIs and establishes a direct connection between them. You can steer and answer questions in both sessions, not just from the main worker.
etothet 8 hours ago [-]
"Letting the agents loop can result in more changes than expected, which are usually welcome..."
If "more changes than expected" means "out of scope", then I disagree. Those types of changes are exactly one of the things that's best to avoid whether code is being written by a person or an LLM.
axldelafosse 8 hours ago [-]
It doesn’t mean that they are always out of scope, rather that the reviewer can be nitpicking (like humans do), and instead of addressing the comment in a follow-up PR, the change gets addressed in the same PR. So not necessarily out of scope, but it can add up and make it harder for a human to review.
That’s why I’m wondering if we should instruct the agents to act more like humans would: if the change can be done in a follow-up PR, this is probably what an experienced engineer would do.
rsafaya 13 hours ago [-]
I think the A2A space is wide open. Great to see this approach using App Server and Channels.
I tried building something similar (at a high level) for a more B2C use case for OpenClaw https://github.com/agentlink-dev/agentlink users. Currently I think the major agents have not fully owned the "wake the agent" use case.
Regardless this is a very cool approach. All the best.
axldelafosse 7 hours ago [-]
Cool! Thank you.
vessenes 19 hours ago [-]
I prefer claude for generation / creativity, codex for bull-headed, accurate complaining and audit. Very rarely claude just doesn't "get it" and it makes sense to have codex direct edit. But generally I think it's happiest and best used complaining.
woadwarrior01 8 hours ago [-]
This is very reminiscent of the review-loop Claude Code plugin.
Yes, the goal is the same. This plugin is similar to the Codex gstack skill.
What makes loop different is that it lets Claude and Codex talk to each other directly (receiving messages from Claude via the Codex App Server and from Codex via Claude Code Channels). I believe this approach works even better than having one agent spawn the other and wait for its output, or read/write to a shared file.
This is interesting for code, but I'm curious about agent-to-agent coordination for ops tasks, like one agent detecting a database anomaly and another auto-remediating it.
highphive 14 hours ago [-]
I think a lot of people/companies are integrating workflows like that, it's just separate from the point of agent pair coding.
The interesting thing here is agents working together to be better at a single task, not agents integrated in a workflow. There's a lot of opportunity in "if this then that" scenarios that have nothing to do with two agents communicating on one single element of a problem; it's just agent detect -> agent solve (-> agent review? agent deploy? etc.)
bradfox2 18 hours ago [-]
Multi-turn review of code written by Claude Code and reviewed by Codex works pretty well. It's been one of the only ways to deliver larger-scoped features without constant bugs. I've seen them do 10-15 rounds of fix and review until complete.
Also implemented this as a GitHub Action; it works well for a Sentry-to-GitHub flow that auto-triages and fixes PRs.
_ink_ 15 hours ago [-]
How do you do this? Are you just switching between CLIs? Or is there a tool that uses the models in that way?
encoderer 17 hours ago [-]
Yes I’ve had a lot of success with this too. I found with prompt tightening I seldom do more than 5 rounds now, but it also does an explicit plan step with plan review.
Currently I’m authoring with codex and reviewing with opus.
axldelafosse 17 hours ago [-]
Good reminder: don't forget the plan review!
zombot 6 hours ago [-]
Is there a prize yet for the most absurd application of AI? Pair programming seems a fair first step in the quest for this holiest of grails. How about an agentic implementation of the House of AI Lords?
Even with the same model (--self-review), that makes a huge difference, and immediately highlights how bad the first iterations of an LLM output can be.
AbanoubRodolf 18 hours ago [-]
[dead]
dude250711 12 hours ago [-]
The circle of slop.
xeyownt 8 hours ago [-]
Let's burn the planet twice faster while doubling our token costs.
Open to ideas! The repo is open-source.
https://github.com/hamelsmu/claude-review-loop