Install LuCI for snapshot builds

Daniel Golle daniel at makrotopia.org
Tue May 7 17:38:19 PDT 2024


On Tue, May 07, 2024 at 11:52:02PM +0200, Robert Marko wrote:
> On Tue, 7 May 2024 at 23:25, Paul Spooren <mail at aparcar.org> wrote:
> >
> > Hi all,
> >
> > For some reason (resource usage?) our snapshot builds do not include the LuCI web interface. I think it’s an advantage to have LuCI installed in snapshot images since a) it’s installed for all releases anyway and b) often it’s just nice to have the web interface directly available.
> >
> > Is anyone against having the interface installed by default? I remember from multiple (in-person) discussions with fellow developers that they’d prefer it being installed.
> >
> > If it’s an oversight I’d like to see it added to the default packages (via the buildbot config); if there’s a reason to keep it out of snapshots, I’d like to understand the details.
> 
> +1 for LuCI by default in snapshots as well from me.

I understand the usability we may gain from that, but you should all
be aware that this basically means having only a single buildbot
phase with a very slow turnover time, and that is a HUGE disadvantage
for development and for use of the buildbot as a classic CI tool.

Let me explain why:
Currently the snapshot builders are only building **target-specific**
packages as well as packages included in the image by default. (We call
that "phase1"). That means that a single build takes around 2~3 hours,
depending on the target and the machine carrying out the build. In this
way we manage to have every target built approximately once every 24
hours.
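
To make the cadence explicit, here's a rough back-of-the-envelope
model in Python. Only the per-build time comes from the figures above;
the target and worker counts are made-up placeholders, not actual
buildbot numbers:

# Illustrative phase1 throughput model; only HOURS_PER_BUILD comes
# from the 2~3h figure above, the other constants are assumptions.
NUM_TARGETS = 100       # assumed number of (sub-)targets built in phase1
HOURS_PER_BUILD = 2.5   # ~2-3 hours per target, as stated above
NUM_WORKERS = 10        # assumed number of parallel build machines

cycle_hours = NUM_TARGETS * HOURS_PER_BUILD / NUM_WORKERS
print(f"full phase1 rotation: ~{cycle_hours:.0f} hours")  # ~25h, i.e. daily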

If we wanted to include LuCI, that would basically mean that we would
not only have to build all the LuCI modules and applications, but also
**all their dependencies**, which is basically half of the packages feed.

A full build of the packages feed (called "phase2") takes around 4
additional hours (best-case) and up to 17 hours (worst-case). We also
don't build it for each (sub-)target (think: ramips/mt7621), but only
for each architecture (think: mips_24kc), and many (sub-)targets share
the same architecture. This results in every **architecture** being
rebuilt approximately every two days.
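
Extending the same toy model with the phase2 numbers (again, only the
4h/17h figures are from above; the architecture and worker counts are
assumptions):

# Illustrative phase2 cadence: per-architecture package-feed build
# on top of the base target build.
NUM_ARCHES = 25                        # assumed distinct architectures
NUM_WORKERS = 10                       # assumed parallel workers
BASE_HOURS = 2.5                       # target build, as in phase1
PHASE2_BEST, PHASE2_WORST = 4.0, 17.0  # extra hours for the packages feed

best = NUM_ARCHES * (BASE_HOURS + PHASE2_BEST) / NUM_WORKERS
worst = NUM_ARCHES * (BASE_HOURS + PHASE2_WORST) / NUM_WORKERS
print(f"phase2 rotation: ~{best:.0f}h to ~{worst:.0f}h per architecture")
# i.e. up to roughly two days, matching the cadence described above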

If we did this for all (sub-)targets, the numbers would obviously be
even worse: we'd probably only see fresh images once or twice a week,
which is too slow to catch problems and too long for users to test
changes in a timely manner. It would be a humongous slowdown of
development and testing of generic and core parts.
For me it would mean having to invest a mid four-digit $ amount in
hardware to still be able to do meaningful development, and it would
probably mean the same for quite a few of us.

What I could imagine is an **additional** build step on top of that,
let's call it "phase3". It could be triggered on completion of phase2
and would then assemble images for all (sub-)targets using that
architecture, including LuCI, **in addition** to the phase1 snapshot
builds.
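
Sketched as buildbot configuration, the idea could look roughly like
this (builder/scheduler names, the worker name and the exact
ImageBuilder invocation are made up for illustration; this is not the
actual OpenWrt buildbot config):

# phase3 sketch: a triggerable builder that assembles LuCI images
# from the packages phase2 just produced, without recompiling anything.
from buildbot.plugins import schedulers, steps, util

phase3_sched = schedulers.Triggerable(
    name="phase3-luci-images",
    builderNames=["phase3-imagebuilder"],
)

f = util.BuildFactory()
f.addStep(steps.ShellCommand(
    name="assemble LuCI image",
    # ImageBuilder: repackage prebuilt packages into an image, add luci
    command=["make", "image", "PACKAGES=luci"],
))

phase3_builder = util.BuilderConfig(
    name="phase3-imagebuilder",
    workernames=["imagebuilder-worker"],   # hypothetical worker
    factory=f,
)

# ...and as the last step of the phase2 factory:
#   f_phase2.addStep(steps.Trigger(
#       schedulerNames=["phase3-luci-images"], waitForFinish=False))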

From a usability point of view, maybe the answer can be much easier and
the solution is already there:
1. Go to https://firmware-selector.openwrt.org/
2. Select 'SNAPSHOT' on the right.
3. Enter the device you want to put OpenWrt on or update.
4. Click on "Customize installed packages and/or first boot script"
5. Add 'luci' (or 'luci-ssl' or 'luci-ssl-openssl', ...) to the list
   of to-be-installed packages (a sketch of the API call behind this
   follows below).
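
For the curious, what the firmware selector does behind the scenes is
POST a build request to the sysupgrade server (ASU). A minimal sketch,
assuming the public instance and its current v1 API, with target and
profile picked purely as examples:

# Request a SNAPSHOT image with luci via the ASU build API, then poll
# until the (possibly cached) image is ready.
import time
import requests

SERVER = "https://sysupgrade.openwrt.org"
req = {
    "version": "SNAPSHOT",
    "target": "ramips/mt7621",                 # example target
    "profile": "xiaomi_mi-router-4a-gigabit",  # example device profile
    "packages": ["luci"],                      # step 5 from above
}

r = requests.post(f"{SERVER}/api/v1/build", json=req)
r.raise_for_status()
request_hash = r.json()["request_hash"]

while True:
    status = requests.get(f"{SERVER}/api/v1/build/{request_hash}")
    if status.status_code == 200:   # 202 would mean "still building"
        print(status.json()["images"])
        break
    time.sleep(5)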

Maybe we can have an easier way to do that in the web UI, and then
everybody will be happy?

And yes, the images generated using the firmware selector are cached
there. This way only the first user requesting an image with additional
packages (luci in this case) has to wait ~1 minute for the image to be
generated; every subsequent user requesting the same image is served
instantly.

The advantage would also be that we don't generate huge amounts of
images for legacy devices without any users, nor for "science-fiction"
R&D platforms.


Just my 2 cents...


