AI code review (Claude, maybe Codex)

Alexandru Ardelean ardeleanalex at gmail.com
Tue Apr 7 01:28:27 PDT 2026


On Tue, Apr 7, 2026 at 6:57 AM David Lang <david at lang.hm> wrote:
>
> Hauke Mehrtens wrote:
>
> > On 4/6/26 19:33, JP wrote:
> >
> >>   - these platforms are subsidised (in the extreme) by (provably
> >> society-damaging) VC-funds; any attempt at building infrastructure upon
> >> this without significant review/planning/estimation strikes me as
> >> potentially high risk
> >
> > Isn't this good? OpenWrt can profit from these VC-funds.
>

I'll chime in and share my input.

I'm the current maintainer (again) for Python in the packages feed.
https://github.com/openwrt/packages/blob/master/lang/python/python3/Makefile#L21

I've been maintaining lang/python/ since (roughly) 2014.
There was a period where I was off due to (classic) co-maintainer quarreling.
(Disclaimer: I'm also partly to blame for the quarreling.)
But over the years, as people became inactive in the feeds, I started
collecting abandoned packages (this information is mostly useful for
context).

In recent weeks, I've decided to put the Python ecosystem back into
shape by updating all of its packages, as well as the other packages I
maintain.

And I've been using Claude to do it.

My current process is:
1. I have this beefy system which I didn't get to use much
2. I installed a clean Ubuntu, configured Claude there, and put an SSH
key for my GitHub packages fork
3. I tell Claude to update a few packages in lang/python, then run the
CI locally (this CI part still needs work; sometimes it passes
locally, then on GitHub various things need tweaking)
4. I push the branch to my GitHub fork
5. I open the PR and check the CI (for some reason GitHub's CI can take
up to 2 days to run on the packages feed)
6. If all looks good, rebase + merge and move on
7. Since it's a different machine, I can log off, let it run for a
couple of hours, and go do something else
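For anyone wanting to try something similar, the "run the CI locally"
part of step 3 can be approximated with the usual buildroot compile
test before pushing. This is a sketch, not my exact setup: the package
name "python3" is illustrative, and it assumes you are inside an
OpenWrt buildroot checkout with the packages feed configured.

```shell
#!/bin/bash
# Hypothetical pre-push smoke test, run from the top of an OpenWrt
# buildroot checkout. Outside a buildroot it just explains itself.
if [ -x scripts/feeds ]; then
    # refresh and install the feed package we touched (name is illustrative)
    ./scripts/feeds update packages
    ./scripts/feeds install python3
    make defconfig
    # compile just this one package, verbosely, like the CI build step does
    make package/python3/{download,check,compile} V=s
else
    echo "not inside an OpenWrt buildroot, skipping"
fi
```

This won't catch everything the GitHub CI catches (different targets,
different toolchains), but it filters out the obvious breakage cheaply.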

// I do want to take a short break here and thank
// whoever added the test.sh mechanism (I believe it was Paul Spooren?).
// I think for the packages feed, using Claude to add basic tests
// via test.sh is an acceleration/stabilization mechanism which
// should be done; we never did, but we should definitely start to do it.
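To make the test.sh idea concrete, here is a minimal sketch of what
such a runtime test could look like for a Python package. Assumptions
are mine, not the feed's documented contract: I'm assuming the CI
invokes test.sh inside the test image with the package name as $1
after installing the package, and the package name "python3-requests"
is purely illustrative.

```shell
#!/bin/sh
# Hypothetical test.sh sketch for a packages-feed package directory.
# Assumed contract: $1 is the package name being tested, and a
# non-zero exit status fails the CI check.

pkg="$1"

case "$pkg" in
python3-requests)
    # smoke test: the installed module must import cleanly
    python3 -c "import requests" && echo "import ok"
    ;;
*)
    # no runtime test for other names; don't fail the run
    echo "no test defined for $pkg"
    ;;
esac
```

Even a bare import check like this would catch a large class of
packaging breakage (missing dependencies, bad install paths) that a
successful compile alone does not.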

This process doesn't seem to be too fast yet, because the CI is slow
(builds take a long time to start).
I'm still trying to avoid complaining about the whole Rust ecosystem
adding ~2 hours to some Python package builds, but ¯\_(ツ)_/¯

Sometimes, I will tell Claude: "here is the packages/ feed repo; this
is the failing package, run the CI and fix it".
After a while it will come back with a reasonably good fix; sometimes it won't.
Given how many packages I have collected over the years, this process
is the only way (for me) to move forward.
The alternatives would be to quit (or do nothing), or to remove them.
I am starting to see the occasional package with an empty PKG_MAINTAINER.
And I do remember that back in 2014, the current packages/ feed was
created with the intent of not letting this happen (because too many
packages were going on without maintainers in the previous version of
the packages/ repo).

Moving forward more, allow me to "hallucinate" a bit.
I do believe that for OpenWrt a board farm with some agentic AI would
be an interesting new approach.
The effort to set up such a farm may be big, but maintaining it and
deploying code from OpenWrt PRs may not be so bad anymore.
// End of this personal hallucination.

I also resonate with Hauke here: the number of PRs has grown (even
for packages/ it's in the range of ~250).
The number of issues is also high (~750).
An acceleration method is welcome.
I would love for this acceleration method to be more people with
advanced knowledge of WiFi, kernels, and networking (routing and LAN).
It does not look like we will get that.
So, I am fine with taking the second-best option, which looks to be agentic AI.
And... I do feel a bit "reborn" working with these agentic AIs.

Oh, one more "hallucination" before I close this mail: I am trying to
find some time to "work" with Claude and tell it "here is a system
with reasonable capabilities (Ryzen 9, 24 threads, 128 GB of RAM,
Intel Arc A770); install a local model and teach it everything you
know, so it can be agentic and perform tasks like you".
It's not going so well so far, and the Claude CLI leak may be an
indication of why.
I am somewhat worried that data-center AI (like Claude and Codex) is
getting close to a crash, due to energy prices going up.
And the whole Persian Gulf situation looks likely to accelerate this.
So, for anyone worried about this AI: maybe we won't be able to afford
it for a while.

If you've reached this point and care to wonder why GitHub Actions
works so badly, it may be due to this:
https://isolveproblems.substack.com/p/how-microsoft-vaporized-a-trillion

Thanks
Alex

> exactly, where is the project risk? there is no talk of eliminating all manual
> review (even if "AI" approves it, that doesn't mean that it's the right thing
> for real hardware, or the project overall)
>
> if/when it goes away, will the project be in any worse position than it is now?
> and in the meantime, the team can crank through more work.
>
> David Lang
>
> _______________________________________________
> openwrt-devel mailing list
> openwrt-devel at lists.openwrt.org
> https://lists.openwrt.org/mailman/listinfo/openwrt-devel
