Purpose of openwrt-devel?
Elliott Mitchell
ehem+openwrt at m5p.com
Sat Mar 23 10:51:15 PDT 2024
On Sat, Mar 23, 2024 at 03:15:44AM +0100, Olliver Schinagl wrote:
>
> On March 21, 2024 10:28:29 p.m. GMT+01:00, Elliott Mitchell <ehem+openwrt at m5p.com> wrote:
> >On Thu, Mar 21, 2024 at 10:00:46AM +0100, Olliver Schinagl wrote:
> >> On 20-03-2024 01:34, Elliott Mitchell wrote:
> >> > On Mon, Mar 18, 2024 at 10:53:12AM +0100, Olliver Schinagl wrote:
> >> >> I expect this to be done very rarely and by users that know what they
> >> >> are doing, but just "automating" a few logical git commands.
> >> >>
> >> >> Performance is not a key-driver here. It's too rarely used.
> >> > True, though being faster is nice.
> >>
> >> While true, I don't think we even have to start arguing if runtime is
> >> less than a single second. This is on my 12 year old PC; granted, the CPU
> >> is only 8 years old and the NVMe SSD only 5.
> >>
> >> ./scripts/kernel_bump.sh -p realtek -s 5.15 -t 6.6 0,40s user 0,33s
> >> system 105% cpu 0,694 total
> >>
> >> ./scripts/kernel_bump.sh -p ramips -s 6.1 -t 6.6 0,40s user 0,40s
> >> system 106% cpu 0,753 total
> >>
> >> Even if you bumped all repos (with a dumb for-loop) we'd be talking 30
> >> seconds to do _all_ of them at once (which never happens).
> >
> >On a computer of similar class, but with *much* slower storage
> >(fileserver is sick and underperforming): real 0m0.477s
> >
> >So if this was directly to an SSD, 2 orders of magnitude.
>
> Sure, but not significant in any way or form. It's still in both cases significantly less than one second. The script has executed before my finger has left the enter key. There is no purpose or advantage here. If the script ran 10 seconds, it wouldn't even matter IMO.
>
True enough. Though I will cite this as an example of the care used in
the design.
> >Odd thing about what you put in parentheses. I've been trying to get a
> >discussion going about that for months, yet seems no one notices the
> >suggestion. This was a distinct goal of the script, to hopefully get
> >that discussion going.
>
> To update all targets at once? How is that useful?!
Taking the unusual step of splitting a paragraph since that is needed
in this case.
I've cited two specific reasons in a recent message. I keep citing those
reasons again and again and again. Yet I get no response.
This makes it *much* easier to change defaults in "generic". Instead of
needing to touch a bunch of configurations with different versions, you
can simply touch all configurations with one version. If you're unlucky
and a new kernel cycle starts before approval or rejection, you can
rebase one copy onto the rename commit and another onto the merge commit.
You then rebase these on top of each other and then squash; the result is
you're onto the latest with minimal trouble. Without this, trying to
modify values affecting multiple devices is an absolute nightmare
(believe me, I know!).
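A minimal sketch of that rebase dance, assuming the change sits on a
branch `my-change` forked from `main`, with placeholder SHAs for the
bump's rename and merge commits (names are only illustrative, not what
any script produces):

  # Copy 1: rebase the change onto the rename commit, so git's rename
  # detection carries it into the new config/patch directories.
  git checkout -b my-change-new my-change
  git rebase --onto <rename-sha> main my-change-new

  # Copy 2: rebase another copy onto the merge commit that completed
  # the bump, covering the old-version files as well.
  git checkout -b my-change-old my-change
  git rebase --onto <merge-sha> main my-change-old

  # Stack one on top of the other, then squash into a single commit.
  git checkout my-change-new
  git rebase my-change-old
  git rebase -i <merge-sha>   # mark the duplicated commit as "squash"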
This can also generate a single non-buildable commit per year. Whereas
if every single device configuration is handled individually you
generate >=44 non-buildable commits per year. This is the difference
between <1% of `git bisect` sessions hitting a non-buildable commit,
versus 5-15% of `git bisect` sessions hitting a non-buildable commit.
Given how obsessed some people are with `git bisect`, this is a major
advantage.
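For comparison, this is roughly what a bisect session looks like when it
lands on one of those non-buildable commits; the skip step is the
friction being counted above (a generic sketch, the tag is only an
example of a known-good point):

  git bisect start
  git bisect bad HEAD
  git bisect good v23.05.0   # example known-good point
  # ... build and test the candidate git checks out ...
  git bisect skip            # candidate does not even build
  # ... keep marking good/bad until the culprit is found ...
  git bisect reset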
>>If a target is fully upstream, there is nearly nothing to migrate, no patches etc. So maybe the kernel config. Sure, expanding (either script) to accept multiple platforms would be trivial, or accept a commit pair per platform and just loop over the script for each target. But this is something not feasible for decades to come.
>
> Bumping a kernel version pretty much always requires additional work. Config migration, rebasing patches, testing on actual hardware. It's just not even worth considering.
>
> Also, a quick note on skipping kernel versions: generally openwrt seems to only support two versions at a time. You could have just a single one, or skip 5. The problem/work is adapting the target to actually function again. Bigger jumps just mean different/more work on the patches, but nothing else really.
>
Were you aware the sky is most often perceived as being blue (clear sky),
white (clouds), or black (night)? Were you aware Google is a very large
company? Were you aware PI is a transcendental number approximately
equal to 3.14?
I don't dispute any of the above. I don't see how any of the above is
related to whether it is better to copy kernel configurations and patches
for all boards at once, versus copying them per-board.
Without a doubt copying the configurations and patches is only a single
trivial step. Yet unless you're aware of a board/device which doesn't
copy configurations and patches as an early step, this is no reason not
to do them all at once.
For that matter, if it is such a small step why bother with a script?
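For the record, that single trivial step per target is roughly the
following (a hand-written sketch using the realtek 5.15 -> 6.6 example
quoted above; exact paths and the Makefile variable may differ per
target, and you may want `cp` instead of `git mv` if the old kernel is
kept around):

  cd target/linux/realtek
  git mv config-5.15 config-6.6
  git mv patches-5.15 patches-6.6
  # some targets also carry a files-<version> directory
  sed -i 's/KERNEL_PATCHVER:=5.15/KERNEL_PATCHVER:=6.6/' Makefile
  git commit -sm "realtek: switch to kernel 6.6"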
> >> >> Leaving the tree in a failed state imo is a feature. We switch from the
> >> >> normal branch to a special branch to do all operations. The user can
> >> >> always forcibly switch back. Ultimately, this is a choice: can a user
> >> >> fix things and inspect failures, or 'oh it failed, let's reset'? Reset
> >> >> instructions during cleanup are a good idea however.
> >> > Therein lies a concern. Why does yours switch to a special branch?
> >> > It is not human, it doesn't need a computer to keep track of commits for
> >> > it. As such it shouldn't need a branch.
> >>
> >> Why is this a problem? Why can't a script that is intended to remove
> >> manual labor behave like a human? There comes the readability and
> >> maintainability argument once again. If a human can read it, he can
> >> modify it. If the script fails, or a special case pops up, a human can
> >> do those steps manually quite easily.
> >>
> >> I'm a big believer in KISS. So yes, the script is not perfect and doesn't
> >> have shiny gold-plated parts. But it is simple, can be understood by a
> >> human, by a non-developer even, I'd argue.
> >>
> >> In the end, computers do what humans tell them to do. In the end, humans
> >> reading things is far more important than super-optimizing a script
> >> that's run once or twice a year by a human developer.
> >>
> >> And using a branch does have its advantages too. We can switch to the
> >> branch and examine if things go wrong. Again, this is something a human
> >> would do too :)
> >
> >A human can tell `git` to move to an unreferenced commit. Useful to know
> >how, if things go wrong. Though I will admit that having your script stay
> >so close to what many people already do does make it more obvious to more
> >people.
> >
>
> Yep, that's my point, humans need to read, write, and understand what's going on, at least in general ways.
>
> >Yet if that is an issue they should be looking at the URL where the
> >approach came from and reading that.
>
> That's probably a step too far, this is using magical git internals. But sure. I'd think someone sees the git mv, git commit, git checkout and figures 'ohh, I think I get it', but of course to understand it deeper, research is still needed. I just disagree with making things (appear to the general reader) very complex and then expecting them to research the theory; that is too far.
>
Yet all this leaves me concerned about assumptions being made. I've
already pointed to one example (it assumes you're on a local branch).
While the name is not the sort most humans would use, it is still
creating and deleting a branch when it doesn't need to. I've got an odd
suspicion it will start to require more features of your development
environment.
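For what it's worth, a script does not need a named branch at all to
work this way. A sketch of the same pattern on a detached HEAD, so no
branch is created or deleted (it only assumes you start on some local
branch so there is a ref to fast-forward at the end):

  # Remember the starting branch, then detach from it.
  start=$(git rev-parse --abbrev-ref HEAD)
  git checkout --detach

  # ... perform the moves/commits the bump needs ...

  # On success, fast-forward the original branch to the result;
  # on failure, just `git checkout "$start"` and nothing is left behind.
  result=$(git rev-parse HEAD)
  git checkout "$start"
  git merge --ff-only "$result"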
> >> > If you examine the result, you might also discover its approach has some
> >> > rather substantial advantages. At this time I believe with the second
> >> > commit it offers a proper superset of your script's functionality.
> >> >
> >> I wonder what this superset is though and why it is so badly needed ...
> >
> >Your knowledge level is showing.
>
> I've had set theory at university, yes. But it's still not clear which superior features the Perl version offers. How is it massively better? How is the result majorly different? What is so super (pun intended) in your approach?
> >The UI approach for `kernel_upgrade.pl` is rather distinct from what
> >`kernel_bump.sh` has. I'm unsure how closely what it does matches the
> >behavior of your script. Yet the modified `kernel_bump.sh` performs
> >similar to what you have, by invoking `kernel_upgrade.pl` once with
> >appropriate arguments.
>
> I'll test it soon ;) Just got some personal stuff to deal with ...
You're not the only person who doesn't devote 100% of their time to OpenWRT.
The theory was that the better capabilities would become clear with a bit
of experimentation.
> >Then there are the things `kernel_upgrade.pl` can do which
> >`kernel_bump.sh` has no equivalent.
>
> But what are they? And how are they relevant?
You've been writing about how yours could upgrade everything by being
called multiple times. Since I was aiming to get the above issue
discussed, `scripts/kernel_upgrade.pl` has featured the capability to
update everything all at once from the start.
In fact only upgrading a single board was a feature which had to be
added. Since I made fewer assumptions, mine makes no distinction
between upgrading targets versus subtargets. It can do multiple of both
at the same time, without any restrictions, in a single invocation.
In the process, only 2 commits are generated. The under-0.5s timing above
is for updating *everything*.
I'm ending up with an odd suspicion the extra experiment is the way to
go. Some developers might consider `kernel_bump.sh`'s UI better, yet
`kernel_upgrade.pl` has a rather better back-end.
--
(\___(\___(\______ --=> 8-) EHM <=-- ______/)___/)___/)
\BS ( | ehem+sigmsg at m5p.com PGP 87145445 | ) /
\_CS\ | _____ -O #include <stddisclaimer.h> O- _____ | / _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445