From: noisycoil
Date:
To: NoisyCoil via Tails-dev
CC: N9iu7pk, The Tails public development discussion list
Subject: Re: [Tails-dev] Tails for arm64 (with support for Apple Silicon)

Dear all,

I rebased the arm64 patches onto v6.1, marking the first build of the Tails for arm64 developer preview that is aligned with a stable release (you can find the images in the usual MEGA shared folder). In addition to the rebase, there are a number of new additions to the patchset, the most important being that I implemented the changes needed to run the automated test suite on the arm64 proper (wip/arm64 branch) and asahi (wip/asahi branch) builds. I'm happy to announce that the arm64 and asahi builds pass every single test scenario, except (of course) for those which depend on Tails infrastructure. Unfortunately, the Raspberry Pi kernels are not virtualizable in the usual way (for starters, they lack all the virtio drivers), so I could not get the test suite to run on those builds.

More details, and a description of the other changes, follow below.


*** Automated test suite ***

In order to run the automated test suite on the arm64 images, I had to overcome quite a few obstacles.

- First, I had to make the ISO images bootable via UEFI, which I hadn't done yet. For the record, I don't think ISO images are of much use in the arm64 world.

- Second, it turns out that systems which boot via UEFI (like virtualized arm64 systems) do not support internal snapshots, so I had to come up with an implementation of external snapshots for the Tails test suite (see the first sketch after this list). External snapshots can be activated via the undocumented `--external-snapshots` option to `run_test_suite`, and they completely replace internal snapshots when activated. Be warned that new step definitions that make creative use of snapshots may well break my implementation of external snapshots, as it is only guaranteed to work with the step definitions which are currently defined.

- Third, there is an annoying bug in GDM that makes arm64 VMs log out whenever `udevadm settle` or the like is called. I had already noted this behavior in a previous email, but hadn't identified the cause yet (again, the behavior only manifests on arm64 VMs, although the bug itself exists for every arch and all hardware). Since the logouts heavily interfere with the tests' flow, I had to install a patched version of GDM in the images. The patch was fortunately already available in GDM v45, so I just had to backport it to v43. The source code for the patched GDM is at https://gitlab.tails.boum.org/noisycoil/gdm, and you can find the Debian source and binary packages in the usual MEGA shared folder.

- Fourth, there is another annoying bug in arm64 qemu that prevents plymouth from working on arm64 VMs. Again, this conflicts with the tests' flow. Fortunately, adding a couple of kernel parameters at boot time (within the testing logic itself, i.e. along with "autotest_never_use_this_option" etc.) fixes this.

- Fifth, arm64 VMs apparently cannot boot from SATA CDROM drives, only from SCSI CDROM drives, so on the one hand I had to define a domain for SCSI CDROM drives and use it for arm64, and on the other hand I had to include the virtio_scsi kernel module in the image's initramfs (see the second sketch after this list).

- Sixth, a number of checks, especially those related to syslinux and to installing Tails upgrades, do not apply to arm64, so I had to disable them for arm64. Also, the arm64 tests only use UEFI boot, never MBR.

- Seventh, I found a bug, plus an issue that was soon to become one, in the test suite, both of which directly impacted the arm64 tests (more precisely, the wip/asahi tests). I have filed bug reports, and fixes have already been worked out by segfault. They will probably be released in v6.2 (one of them was already merged into stable), but in the meantime I backported them to the v6.1 arm64 build branches. The reports are https://gitlab.tails.boum.org/tails/tails/-/issues/20277 and the now-closed https://gitlab.tails.boum.org/tails/tails/-/issues/20276.

- Finally, I had to rebuild Tails' patched version of virt-viewer for arm64 and install it on my testing machines, which I did; again, you can find the Debian source and binary packages in the usual MEGA shared folder.
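
Here is the first sketch promised above: a minimal illustration of the external-snapshot idea using qcow2 overlays. The file names are purely illustrative and this is not the actual test-suite code, just the underlying mechanism:

    # The base image stays read-only; all writes go to the overlay
    # (-b sets the backing file, -F its format).
    qemu-img create -f qcow2 -b /srv/tails-base.qcow2 -F qcow2 \
        /srv/tails-overlay.qcow2

    # "Restoring" the snapshot then amounts to discarding the overlay
    # and recreating it empty on top of the untouched base image.
    rm -f /srv/tails-overlay.qcow2
    qemu-img create -f qcow2 -b /srv/tails-base.qcow2 -F qcow2 \
        /srv/tails-overlay.qcow2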
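
And the second sketch: on a stock Debian system, forcing virtio_scsi into the initramfs looks roughly like the following (this is the standard initramfs-tools mechanism; the actual Tails images do it through the build system's own hooks):

    # Make the virtio_scsi driver available early at boot so the
    # kernel can see the SCSI CDROM, then regenerate the initramfs.
    echo virtio_scsi >> /etc/initramfs-tools/modules
    update-initramfs -u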


You will find further details and references about these issues in the commit messages themselves. The Tails arm64 tests can be run by passing the `--arm64` option to `run_test_suite`, which makes the test suite use the correct default machine (features/domains/default-arm64.xml) and CDROM domains, implicitly enables `--external-snapshots`, disables the syslinux and Tails upgrade checks, unconditionally enables UEFI, passes the correct boot parameters, etc. These are technical changes which are essential for the test suite to run in the first place.
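
For concreteness, an invocation would look something like this (only `--arm64` and `--external-snapshots` are my additions; the second line is a hypothetical example that assumes the suite's usual way of selecting individual features):

    # Run the full suite against an arm64 image; --arm64 implies
    # --external-snapshots, the SCSI CDROM domain, UEFI, etc.
    ./run_test_suite --arm64

    # Hypothetical: restrict the run to a single feature.
    ./run_test_suite --arm64 -- features/apt.feature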

Beware that an 8GB RPi5 is already not quite powerful enough to run the tests cleanly (because of slow I/O, I would guess). The tests will indeed run (provided you set the GIC version to 2 in features/domains/default-arm64.xml; see the sketch below) and most of them will pass, but on each run you can expect ~5-10% of the scenarios to fail randomly for no obvious reason (the culprit often being libguestfs randomly crashing). If you really want to try the test suite on an 8GB RPi5, the least you should do is extend the timeout of the `try_for` function in features/support/helpers/misc_helpers.rb, e.g. by adding a `timeout = timeout * 2` at the very start of the function. I would suggest not attempting to run the test suite on anything less powerful than an 8GB RPi5.
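
Regarding the GIC setting: libvirt exposes the emulated interrupt controller version as a <gic> feature element in the domain XML, so the tweak amounts to making sure the file contains the right version (the exact surrounding layout of default-arm64.xml may differ from this sketch):

    # Check which GIC version the arm64 test domain currently requests...
    grep -n '<gic' features/domains/default-arm64.xml

    # ...and make sure the element inside <features> reads:
    #   <gic version='2'/>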

I am currently running the tests on a 2020 M1 Mac Mini (8 cores, 16GB RAM) running Debian Asahi, and I am having no issues except for roughly one scenario randomly failing every 2-3 full test runs. On this platform, full test runs take ~ 4h, vs. ~ 10h on an 8GB RPi5.


As for the tests themselves, I didn't change the scenarios or step definitions, with a single exception. Since the Asahi images use a custom Asahi repository (more on this later), the tests in the wip/asahi branch which check for Debian repositories now also accept that single extra repository. This makes a few more tests pass; those tests already passed with no changes in wip/arm64, since that branch only uses ordinary Debian repositories.

That being said, 5 out of 246 scenarios fail:

- "Recovering in offline mode after Additional Software previously failed to upgrade and then succeed to upgrade when online" fails because Tails' debian repository does not include cowsay v3.03+dfsg2-1 for arm64.
- "I am notified when Additional Software fails to install a package" fails because it depends on the previous scenario, which failed.
- "Upgrading an initial Tails installation with an incremental upgrade" and "Upgrading a Tails whose signing key is outdated" fail because there are no arm64 incremental upgrades.
- "tails-upgrade-frontend-wrapper is using the Tails-specific SocksPort" fails because on arm64 tails-upgrade-frontend-wrapper is currently modified so that it exits without doing anything (otherwise it would return an error as, again, there are no tails upgrades for arm64).


*** Tor Browser ***

From this release on I will be using my own builds of the Tor Browser, which are hosted at https://gitlab.com/NoisyCoil/tor-browser-build. These are essentially Heikki Lindholm's builds with minor, inessential technical changes (which, if anything, bring the arm64 builds closer to the x86_64 Tor Browser). The advantage of doing so is that I control the release cycle and can use the latest Tor Browser point release in the Tails builds (I usually make a new TB build almost the same day a new one is released upstream). For example, the latest arm64 Tails builds use my Tor Browser v13.0.13 build (i.e. the same version as x86_64 v6.1; TB 13.0.13 was released just one (!) day before v6.1), while Heikki's latest release was v13.0.9.



In Tor-related news, work has started upstream on official linux-arm64 builds of the Tor Browser. In fact, I opened an MR for cross-building the Tor Browser for linux-arm64 from x86_64 (https://gitlab.torproject.org/tpo/applications/tor-browser-build/-/merge_requests/920), which is needed for official builds, and the Tor Browser developers showed quite some interest in it. The MR was assigned a reviewer just yesterday, and I'm in contact with the maintainers to get directions on how to improve it so it can be merged. If everything goes right, we may have official linux-arm64 builds in the not-so-distant future.


*** Apple Silicon/Asahi ***

Just like with the Tor Browser, from this release on I will be using my own kernel and Mesa driver builds for Apple Silicon. The source code is hosted here: https://gitlab.com/debian-asahi-nc. The repositories contain all the components needed to build and package the kernel and Mesa drivers for Debian Bookworm and Testing, which I then upload to the Debian repository at https://asahi.noisycoil.dev/debian. This repository also has an onion address, http://nyqmqytjsrpkhtxvkjttxziyylj5alfs2edsel4w2hxeerefkgpdpyqd.onion/debian, which is currently used by the wip/asahi images. For the record, the kernel and Mesa drivers I'm installing in the Apple Silicon Tails images are not specific to Tails itself, but come from my parallel Debian Asahi project (hosted at the link above). In fact, the Mac Mini I run the Tails tests on uses the very same versions that are currently installed in the Tails images.
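
To give an idea of how the repository is consumed, an APT source entry for it would look something like the following (the suite "bookworm" and component "main" are my assumptions for illustration, not taken from the actual images):

    # Hypothetical APT source line for the repository above.
    echo 'deb https://asahi.noisycoil.dev/debian bookworm main' | \
        sudo tee /etc/apt/sources.list.d/asahi-noisycoil.list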

The advantage of using my builds instead of Thomas Glanzmann's is, first, again, better control over the software, but also the fact that my kernel is just Debian's kernel (v6.5, to be precise) repackaged with the Asahi patches. Except for the Apple Silicon enablement, this should make my kernel as close as possible to the one in Debian's repository.


As a nice side note, I managed to hack the Asahi bootloader so that it boots USB drives without needing the command line (you may remember my first email, where I explained this UX issue). Now you can just plug the USB drive in and it boots, no questions asked. I don't think the Asahi project is interested in this feature (I asked in the Matrix dev room and got no answer), so I simply forked the component responsible for this (u-boot) and use the fork in my projects. The source code is at https://gitlab.com/debian-asahi-nc/u-boot-asahi for Debian Asahi and at https://gitlab.com/NoisyCoil/uboot-tools for Fedora Asahi, and I also packaged it into a nice Tails bootloader for Apple Silicon: https://gitlab.tails.boum.org/noisycoil/tails-bootloader. You will find a video demo of Tails booting on the aforementioned M1 Mac Mini using this bootloader here: https://mega.nz/file/57pChAga#JjdUM2h4vuh7vRACOgcxcOuanV5dae47qZ4cKTq4DoA.

The Tails bootloader is nothing but the standard Asahi bootloader with the automatic-boot-from-USB hack and a nice Tails logo. You can install it as you would a normal Asahi bootloader (https://asahilinux.org/): `curl https://asahi.noisycoil.dev/tails/install | sh` (from macOS). If you want to give it a try, please make sure you are familiar with the Asahi documentation first, as this is for all intents and purposes an Asahi installation, the difference being that it only occupies 3GB of space and creates 2 new partitions (instead of 4) on your Mac's SSD. Again, keep in mind that this is just a bootloader (the actual OS still resides on the USB drive), and if you are not a developer please refrain from trying any of this, as it is still very experimental software.


*** Raspberry Pi ***

I would have liked to share more interesting news on this front, but my efforts at virtualizing the Raspberry Pi images have failed so far. Note that the RPi images are the ones with the largest delta with respect to an ordinary Tails installation (they use the Raspberry Pi repository on top of Debian's, so they install a lot of custom software), and thus the ones that would need the most testing.


The only news I can share is that I fixed the issue reported by n9iu7pk, by which systemd-sysctl.service failed. This happened on the wip/asahi and wip/raspi branches because the kernels in use there, unlike the one in wip/arm64 (which is a 100% vanilla Debian kernel), are configured with CONFIG_ARCH_MMAP_RND_BITS_MAX = 31 (asahi), 30 (rpi-2712) or 24 (rpi-v8), while sysctl tried to set mmap_rnd_bits to a larger value (32) at boot time. In the asahi branch I fixed the issue by hardcoding a new value (31) at build time, but for the raspi branch I had to come up with something else, because one doesn't know which of the two RPi kernels is in use until after boot (AFAIK RPi5s use rpi-2712, while the others use rpi-v8). What I came up with is a new initramfs hook (https://gitlab.tails.boum.org/noisycoil/tails/-/blob/wip/raspi/config/chroot_local-includes/usr/share/initramfs-tools/scripts/init-bottom/set_vm_mmap_rnd_bits?ref_type=heads) which, right after mounting the root partition but before switching to it, detects which kernel is in use and writes the correct value of mmap_rnd_bits to /etc/sysctl.d. This seems to have worked as intended.
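
For reference, the hook boils down to something like the following sketch (the real script is at the link above; the kernel-name matching and the output file name here are illustrative):

    #!/bin/sh
    # init-bottom sketch: pick a mmap_rnd_bits value compatible with
    # the running kernel's CONFIG_ARCH_MMAP_RND_BITS_MAX and write it
    # to the already-mounted root filesystem, which initramfs-tools
    # exposes as ${rootmnt} at this stage.
    case "$(uname -r)" in
        *rpi-2712*) rnd_bits=30 ;;
        *rpi-v8*)   rnd_bits=24 ;;
        *)          exit 0 ;;
    esac

    echo "vm.mmap_rnd_bits = ${rnd_bits}" \
        > "${rootmnt}/etc/sysctl.d/zz-mmap-rnd-bits.conf"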


Congratulations to you all on the 6.1 release,

Best,

NC