Objective of OpenWRT/x86?

Daniel Golle daniel at makrotopia.org
Wed May 3 19:50:05 PDT 2023


On Wed, May 03, 2023 at 06:36:10PM -0700, Elliott Mitchell wrote:
> On Tue, May 02, 2023 at 02:45:43AM +0200, Alberto Bursi wrote:
> > 
> > 
> > On 26/04/23 22:17, Elliott Mitchell wrote:
> > > Well, was a specific objective ever chosen for the x86 version of
> > > OpenWRT?
> > > 
> > > I can state my goal/hope for OpenWRT/x86.  The *WRT Linux distributions
> > > were so named for originally targeting the LinkSys WRT54G.  This was a
> > > small AP, so one might expect builds to be for small APs.  By today's
> > > standards 128MB RAM and 16MB of storage is relatively small.
> > 
> > Afaik the x86_64 images approach has always been "include what you can 
> > to make it run on everything without the need to customize the image".
> > Storage drivers are built into the kernel because otherwise it won't 
> > see the rootfs and will fail to boot; everything else is (where possible) 
> > in modules, but modules for most ethernet controllers are always included 
> > in the default build.
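
In Kconfig terms that split looks something like this (a sketch using
mainline symbol names, not the exact OpenWrt diffconfig):

    # boot-critical storage built in, so the rootfs mounts without an initramfs
    CONFIG_SATA_AHCI=y
    CONFIG_BLK_DEV_SD=y
    CONFIG_EXT4_FS=y
    # ethernet can stay modular; modules load from the rootfs after boot
    CONFIG_E1000E=m
    CONFIG_IGB=m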
> > 
> > Then there are the virtualization drivers that were just built into the 
> > kernel and not properly modularized; maybe it was easier like that, I 
> > don't know.
> 
> These two are the same issue.  Not including the drivers for a hypervisor
> could make the image not boot on that hypervisor.  Though most
> hypervisors will include some fallback device emulation which provides
> inferior performance.
> 
> > Because on x86_64 storage and RAM are measured in gigabytes, saving a 
> > few hundred kilobytes by removing the "all hardware just works with a 
> > single image" feature is NOT a worthwhile tradeoff.
> 
> Partially true, partially untrue.  Saving a few hundred kilobytes isn't
> worthwhile.  Shrinking the kernel by 35% and userspace by 20% though IS
> worthwhile.
> 
> Notably I'm making use of Xen.  The Xen devices don't try to look like
> normal hardware.  As a result the Xen block devices don't depend on the
> SCSI subsystem.  Once you also disable all emulated graphics devices
> (simulated serial consoles work fine), things shrink drastically.
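
For reference, the kind of Xen-only trim being described would look
roughly like this in Kconfig terms (a sketch, not a measured config):

    CONFIG_XEN_BLKDEV_FRONTEND=y   # xvd* block devices, no SCSI/ATA stack
    CONFIG_XEN_NETDEV_FRONTEND=y   # paravirtual network
    CONFIG_HVC_XEN=y               # paravirtual console instead of VGA
    # CONFIG_SCSI is not set
    # CONFIG_ATA is not set
    # CONFIG_DRM is not set
    # CONFIG_FB is not set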

Kernel size on x86/64 is 5780.8 KB. Even if you really manage to shrink
the kernel by 35%, that would mean saving about 2023 KB. That's 1.5% of
the total memory available on a 128 MB system...
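
Spelled out:

    0.35 * 5780.8 KB ≈ 2023 KB saved
    2023 KB / (128 * 1024 KB) ≈ 1.5 %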

> 
> 
> > > Since this network is moving towards the server phase, the AP drew
> > > my attention.  Turns out all of the hardware of an AP is available for
> > > servers, though in distinct form-factors.  If the server has a full
> > > IOMMU, the hardware can be directed to a VM and you have a proper AP in a
> > > VM.
> > > 
> > > Problem is instead of the recommended 128MB memory, 16MB of storage
> > > (https://openwrt.org/supported_devices/432_warning) the virtualization
> > > examples (https://openwrt.org/docs/guide-user/virtualization/start) are
> > > suggesting massively more memory.  256MB (small Xen example), 512MB
> > > (VMware OpenWRT/15.05) or 1GB (Qemu examples).
> > > 
> > 
> > Did you actually check that OpenWrt needs that much memory on x86, or 
> > are you running on assumptions someone made when writing wiki articles?
> > 
> > Because I've been running OpenWRT on both x86 hardware and VMs and for a 
> > "normal" router build (i.e. default image with Luci) it really does not 
> > need more than 128MB RAM.
> 
> Well, if those wiki articles are the proffered documentation source, one
> does kind of need to trust the documentation.  If one then looks at the
> kernel configuration and doesn't find a consistent theme among the
> options, but lots of options that add bloat...
> 
> 
> > > I don't yet have a full handle on the issues.  My first thought was the
> > > large sizes come from early stage development and not wanting to bother
> > > with memory limitations.  Another possibility is this comes from simply a
> > > thought pattern of "x86 memory is cheap, just throw memory at it".
> > > 
> > > Yet if one is wanting to use this for serious purposes, memory IS a
> > > concern.  GCC is pretty capable for x86; a quick check suggests GCC tends
> > > to produce binaries for x86 which are four-fifths the size of those for
> > > ARM.  Yet everything for OpenWRT/x86 is suggesting at least *double* the
> > > memory?!
> > 
> > Again with this "suggesting". The people writing the docs may or may not 
> > have had the same assumptions as you have. Maybe like me they are 
> > running the VM on a cluster with 128GB of RAM per host, so they have just 
> > chosen whatever number they feel is a "low amount of RAM" without going 
> > deeper; if they waste 512MB or 1GB it's no big deal.
> 
> Yup, scale does matter.  The places and magnitudes of memory savings you
> look for with 1 host versus 50 hosts are very different.  It also makes a
> difference whether those 50 hosts are spread among offices or racked in a
> single building.
> 
> > > One issue I've found is the kernel configurations seem ill-suited to x86.
> > > Almost any storage driver used by 10 or more people is built into the
> > > kernel.  As a result the *single* kernel is huge.
> > 
> > "huge" in what context? Yes it's more than for most embedded devices 
> > that have 16MB flash but let's be real on x86 it's hard to find storage 
> > that is smaller than 1GB and RAM is likewise plentiful so what are we 
> > talking about. Even ancient Geode devices like Alix boards have 256MB 
> > RAM which is still plenty for what they can actually do.
> 
> I'm concerned at the large number of things enabled in the kernel
> configurations.  Traditionally OpenWRT has used f2fs, yet x86 also has
> ext4.  With the percentage of systems using SSDs, it would seem
> advantageous to stick with that flash-friendly choice.

The reason to keep ext4 (on all platforms supporting booting off block
devices, btw.) is that F2FS has a minimum volume size requirement.
Hence we use ext4 if there is less than 100 MB available for rootfs_overlay.
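
In other words, roughly this logic at first boot (a sketch of the rule,
not the actual fstools code; variable names made up):

    # pick the overlay filesystem based on the space available for it
    if [ "$overlay_size_mb" -lt 100 ]; then
        mkfs.ext4 "$overlay_dev"    # F2FS won't fit below ~100 MB
    else
        mkfs.f2fs "$overlay_dev"
    fi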

> 
> I was looking for a tight memory-conservative configuration, yet what is
> on x86 includes everything under the sun.
> 
> > > The conventional approach for x86 is to use an initial ramdisk and build
> > > everything as modules.  Issue here is OpenWRT doesn't currently have the
> > > scripting/tools for runtime probing of a storage subsystem.  I think this
> > > is a fine approach if the committers are up for all the change this will
> > > entail.
> > > 
> > > Alternatively x86 could be broken into more builds.  This would emulate
> > > how OpenWRT works for all other devices.  "generic" would likely be 2-3
> > > distinct builds.  "64" would be 4-6 distinct builds.  Issue here is how
> > > many distinct builds should be created?
> > > 
> > > If one was to go this direction, I suppose there might be "giant" or
> > > "desktop" build.  Each hypervisor could use a target, include "hardware"
> > > guaranteed to be present.  Then build all network drivers as modules (so
> > > any device can be passed-in).
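
If x86 grew hypervisor subtargets, the layout might look something like
this (purely hypothetical names):

    target/linux/x86/64/        # bare-metal x86_64, as today
    target/linux/x86/xen/       # Xen guest: PV frontends built in, no DRM/FB
    target/linux/x86/kvm/       # KVM guest: virtio built in
    target/linux/x86/hyperv/    # Hyper-V guest: hv_netvsc/hv_storvsc built in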
> > > 
> > > 
> > > 
> > > Examples of things which don't seem to fit well are CONFIG_AGP and
> > > CONFIG_DRM.  I would expect those for a desktop Linux distribution due
> > > to GUI.  For OpenWRT which tends towards networking/embedded those seem
> > > out of place.  CONFIG_FB is similar, though some devices do have actual
> > > limited frame-buffers.
> > 
> > I don't know exactly how much impact this has, but on many x86 systems 
> > it's very nice to have the video ports work so you can connect a screen 
> > and interact with the console. On some older boards or laptops it might 
> > be over the AGP bus, so dropping this would drop their screen support.
> > 
> > For reference, other network appliance projects like pfSense and 
> > OpnSense also support basic text console on video ports so that's 
> > something many "expect".
> > 
> > Now, it's probably all ancient stuff that not a lot of people are using. 
> > So yeah, OK, remove it from x86_64 but leave it in the x86 and legacy 
> > images, because that's where people with ancient hardware might need it.
> 
> AGP support only affects graphics performance.  Without the support, AGP
> devices lose some performance but continue to work (since they continue
> to act as PCI devices).

That's simply not true. AGP support is required to allocate video memory
from shared system memory, which on some systems without any dedicated
video memory boils down to the ability to set up a frame buffer which
matches the native resolution of the screen (i.e. KMS on i915, to name
the most popular example). I've mentioned that already in a previous
reply to your suggestion to remove AGP support (which, in terms of
software, means support for shared video memory).
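
For context, on those old Intel IGPs the dependency chain looks roughly
like this (mainline symbols; a sketch from memory):

    CONFIG_AGP=y
    CONFIG_AGP_INTEL=y   # GART: maps scattered system pages for the GPU
    CONFIG_DRM_I915=y    # i915 KMS allocates its framebuffer through that GTT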

> 
> Many desktops can use a serial port as console (universal on servers).
> Hypervisors generally provide some flavor of serial port-like device
> (often they misbehave by doing rather more than 115.2kbps).

... and many cheap thin clients or systems like the Intel NUC which
are popular as high-performance network appliances come with on-board
graphics relying on shared video memory, and many people will want to
use HDMI+USB as a console due to the lack of a serial port (I know
it's available on some pin header inside the case, but that's not even
a standard connector...)
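
Where a serial console does exist, it is of course just a kernel
command line away:

    console=ttyS0,115200n8

but that doesn't help on boxes that only expose HDMI and USB.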

> 
> 
> > > Another item is the goal of trying to self-host.  Being able to self-host
> > > is a worthy goal, but that has very distinct needs from an embedded
> > > networking device.
> > 
> > Imho this is very much out of scope. Other Linux distros aren't going 
> > to disappear any time soon.
> 
> Quite true.  I ran across an article about someone trying to do this, so
> I have to admit at least one person has that goal in mind.  My concern is
> all these goals seem to be getting mixed together when they actually
> conflict.
> 
> 
> 
> On Sun, Apr 30, 2023 at 10:40:40PM -0600, Philip Prindeville wrote:
> > 
> > > On Apr 28, 2023, at 11:18 PM, Elliott Mitchell <ehem+openwrt at m5p.com> wrote:
> > > 
> > > On Fri, Apr 28, 2023 at 12:04:15PM -0600, Philip Prindeville wrote:
> > >> 
> > >>> Problem is instead of the recommended 128MB memory, 16MB of storage
> > >>> (https://openwrt.org/supported_devices/432_warning) the virtualization
> > >>> examples (https://openwrt.org/docs/guide-user/virtualization/start) are
> > >>> suggesting massively more memory.  256MB (small Xen example), 512MB
> > >>> (VMware OpenWRT/15.05) or 1GB (Qemu examples).
> > >> 
> > >> Sorry, why is this a "problem"?
> > >> 
> > >> I spent $1100 on a Xeon-D box with 128GB of DRAM, 16 hyper threaded cores, and 2TB of NVMe.
> > > 
> > > If those numbers are to be believed (which is now suspect), it means a
> > > *requirement* to devote that much to network operations.  Not being a
> > > requirement means one could use the memory for other things.  Or I could
> > > allow more than that to do extra complicated network things.
> > 
> > Which part is a lie?
> 
> The numbers I meant as being suspect were the estimates for OpenWRT VM
> needs, not your numbers.  Sorry about the misunderstood statement.
> 
> The 1GB for Qemu was high enough to be obviously ridiculous.  The 512MB
> for VMware is also rather a bit out there.  The 256MB listed for Xen is
> in the right range to be plausible as a minimum requirement.  Build most
> ethernet drivers into the kernel and one could readily require that much.
> 
> 
> > >>> One issue I've found is the kernel configurations seem ill-suited to x86.
> > >>> Almost any storage driver used by 10 or more people is built into the
> > >>> kernel.  As a result the *single* kernel is huge.
> > >> 
> > >> If it's not used as a boot device, we could make it kmod-able... otherwise we'd need to add initramfs...  I don't think anyone wants to go down that road.  Too easy to brick devices.
> > >> 
> > >> I think we should leverage more subtargets and profiles, but that's a separate discussion.
> > > 
> > > This wraps back to my original issue.  x86 has some differences, and
> > > the builds haven't been adapted to them.
> > > 
> > > x86 is easier to recover, so an initramfs is quite viable, perhaps x86
> > > should be the exception and have one.  Alternatively, indeed more
> > > targets.
> > > 
> > > Perhaps "x86" and "x86vm"?
> > 
> > There were sound reasons for avoiding initramfs.
> 
> Indeed.  I'm suggesting perhaps OpenWRT/x86 should be different in having
> one.  Otherwise x86 should receive treatment equal to other systems and
> have a few more distinct builds.
> 
> 
> > >>> If one was to go this direction, I suppose there might be "giant" or
> > >>> "desktop" build.  Each hypervisor could use a target, include "hardware"
> > >>> guaranteed to be present.  Then build all network drivers as modules (so
> > >>> any device can be passed-in).
> > >> 
> > >> The number of interfaces supported by virtualization (at least in KVM) are quite limited (e1000/igbe, ixgbe, rtl8139, and virtio) so I don't see this as much of a problem.
> > > 
> > > The number of interface types supported by KVM is quite limited.  The
> > > number of interface types supported by Xen is quite limited.  I suspect
> > > the list for Hyper-V and VMware are similarly limited.
> > > 
> > > Yet, each of these sets is disjoint from the others.  Hyper-V's
> > > interfaces add ~1MB to the kernel.  One of VMware's interfaces adds
> > > ~350KB to the kernel.  If that happens to be your hypervisor, you
> > > urgently need that interface, but if that isn't your hypervisor that is
> > > wasted memory for which ECC is inappropriate.
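
In Kconfig terms the disjoint sets look like this if everything were
shipped as modules (a sketch, mainline symbol names):

    CONFIG_VIRTIO_NET=m            # KVM/QEMU
    CONFIG_XEN_NETDEV_FRONTEND=m   # Xen
    CONFIG_VMXNET3=m               # VMware
    CONFIG_HYPERV_NET=m            # Hyper-V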
> > 
> > Do we even know if *all* of these hypervisors are in use?
> 
> No idea.  I do know VMware was a thing in the mid-2000s, but I don't know
> their current status (I believe they still exist).  I suspect MS may have
> tried to buy VMware, but now they're pushing Hyper-V very heavily.
> 
> I wouldn't go out of my way to support Hyper-V.  I cannot help snickering
> at the idea of someone using OpenWRT as an AP embedded in a Hyper-V
> machine.  That irony is one to aim for.
> 
> 
> 
> On Mon, May 01, 2023 at 04:32:52PM +0100, Daniel Golle wrote:
> > On Mon, May 01, 2023 at 09:01:29AM -0600, Philip Prindeville wrote:
> > > 
> > > From one anecdotal episode I'm not going to extrapolate that this is a robust solution in all cases; I wouldn't get very far as a cyber security engineer thinking this way.
> > 
Maybe the fact that PCI passthrough is facilitated by the IOMMU, which
takes care of resource isolation, makes you feel a bit better about it?
The host from this point on doesn't deal with that PCIe slot any more,
and passthrough happens entirely in hardware.
> 
> This is the primary point.  Pre-2010 IOMMUs were very rare, post-2020
> they're universal on non-embedded devices (and common on embedded
> devices).  Without an IOMMU the hypervisor pretty well needs to be able
> to exactly simulate the device, with an IOMMU it can just about forget it
> exists.
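
(With Xen, for instance, handing a device to a guest boils down to
something like the following; the PCI address is made up:

    # on the host: make the device assignable
    xl pci-assignable-add 01:00.0
    # in the guest's xl config file:
    pci = [ '01:00.0' ]

after which the host stops touching the device entirely.)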
> 
> There was a story from several years back of someone trying to bring up
> an OS on some hardware (I remember it as bringing up FreeBSD on an Apple
> laptop, but I didn't locate the story).  They kept having problems with
> memory corruption and couldn't figure out the cause.  Disabled everything
> even slightly superfluous, yet no cause could be identified.  Out of
> desperation they tried disabling nearly all devices using the IOMMU.
> Suddenly the memory corruption disappeared.
> 
> I recall (this may not be 100% accurate) the conclusion being Apple's
> firmware was turning on the 802.11 interface and looking for magic
> packets.  Yet it failed to turn it off, so a different OS ran into
> trouble due to an incorrect assumption (the OS didn't yet have a driver
> for the device and couldn't turn it off).
> 
> > However, keep in mind that access to PCIe in most cases (such as WiFi
> > adapters) doesn't assume the user could be a bad actor. You will probably
> > still be able to do bad things with it, esp. if you know the hardware
> > well (such as triggering overheat/overcurrent, deliberately creating
> > radio interference with other system parts, ...).
> 
> I could believe a severe overcurrent situation on an 802.11 card having
> potential to damage the motherboard.  I'm unsure of the level of risk
> since it will damage itself in the process and the power section of
> modern motherboards is pretty robust (graphics chips use far more).
> 
> The other two are quite implausible.  Simply not enough power available
> for either to seem a significant danger.
> 
> 
> On Tue, May 02, 2023 at 09:34:39AM -0600, Philip Prindeville wrote:
> > You can also lock up the PCIe bus so that the CPU can't access the bus or bus-attached devices like disk controllers, network interfaces, etc.
> 
> This would have been a severe concern back when the PCI *bus* was the
> main peripheral *bus*.  Now, though, PCIe isn't actually a bus, but a
> collection of point-to-point links.  As such, locking up a single PCIe
> segment could readily be done, but you're unlikely to impede operation
> of the machine without getting into multiple hardware VMs.
> 
> A notable case is a multiple-NVMe-to-single-PCIe-slot card with an
> on-board PCIe switch (which avoids the need for bifurcation support on
> the motherboard).  Controlling one or two of those NVMes might allow you
> to saturate the card-to-motherboard link.  Yet storage is a sensitive
> area and not too likely to have an entire device fed into a VM.
> 
> As such, though this is something to be wary of there isn't really too
> much danger unless you're deliberately trying to shoot yourself in the
> foot.
> 
> 
> -- 
> (\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
>  \BS (    |         ehem+sigmsg at m5p.com  PGP 87145445         |    )   /
>   \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
> 8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445
> 
> 
> 
> _______________________________________________
> openwrt-devel mailing list
> openwrt-devel at lists.openwrt.org
> https://lists.openwrt.org/mailman/listinfo/openwrt-devel


