Re: [Tails-dev] [RFC] Design of our freezable APT repository

Author: anonym
Date:  
To: The Tails public development discussion list
Subject: Re: [Tails-dev] [RFC] Design of our freezable APT repository
intrigeri:
> We need input from people who are into release management


I am! :)

First off, awesome work, intrigeri and kibi! I'm really looking forward
to the time when this is available. We have such grand ideas based on it!

I have read through the topic branch's changes to the release steps, to
get a feel for how it would change things for our Dear Eternal Release
Manager, and it all makes sense to me and looks like it will add little
to no extra burden at release time, which is great. I do not, however,
particularly like the part where we "build the almost-final image" to
generate the package manifest. I get why we need it, and it sort of
shows one thing: determining which packages a given Tails build will use
is still a very hard problem. Oh well. :)

The quotes below are from the blueprint, commit 9256153:

> Upgrading to a new snapshot


I expect it to be quite rare that we need to encode a particular
snapshot in a topic branch, which is both good and bad. Good, because we
then do not have to deal with the problems it may cause very often; bad,
because it happens rarely enough that one might not look for the
problems all the time, and hence let them slip through. :)

Specifically, I fear that we may have problems when merging topic
branches that encode some snapshot into a base branch: we might forget
to remove the encoding (or otherwise deal with it in a sane way), and
it would then mess up the base branch.
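
For example, if the snapshot ends up being encoded in the branch as
something like this (the path and format are my guess at what it could
look like, not something taken from the blueprint):

    $ cat config/APT_snapshots.d/debian/serial
    2016011202

then merging that branch into its base branch silently carries the
pinned serial along, unless someone remembers to reset it to whatever
the "use the latest snapshot" default value is.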

Have I missed/misunderstood something?

> Freeze exceptions

[...]
> 2. Pin, in config/chroot_apt/preferences, the upgraded package we
> have just imported. The aforementioned tool can do this as well.
>
> [Our current default APT pinning ranks Tails overlay APT suites over
> any other APT source, so why the need to add an APT pinning entry? The
> problem is that it's hard to do the next step (clean up) with this APT
> pinning, combined with the fact that we can't easily delete a package
> from an APT suite and see this deletion propagated over suite merges. I
> (intrigeri) was not able to find a good solution to that problem under
> these constraints, so [...] this document assumes that we change this,
> and pin our overlay APT suites at the same level as the APT sources
> corresponding to the Debian release Tails is currently based on. This
> implies that we manually pin, in Git, the packages from our overlay APT
> suites, that we want to override the ones found in other repositories
> regardless of version numbers.]

I actually think making the packages we want more explicit is a good
thing for transparency, and makes it easier, as a developer, to quickly
see how we diverge from Debian by just looking in Git. In general, I
believe it is better to have as much as possible encoded in Git,
as opposed to other external repositories (APT or whatever).
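
To make it concrete, I imagine such a manual pin in
config/chroot_apt/preferences would look roughly like this (the origin,
suite name and priority are my guesses, not something from the
blueprint):

    Package: tor
    Pin: release o=Tails,a=feature-foo
    Pin-Priority: 999

i.e. one stanza per package we want to take from an overlay suite, all
of it visible in Git.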

However, the next point:

> 3. Make it so branches stop using the upgraded package once they
> have been unfrozen [...]


indeed exposes the problem. Manually removing the added pinnings feels a
bit error prone and cumbersome. However, this clean-up will only happen
when branches are unfrozen, which is only after releasing, so it
doesn't sound too bad. Right?

BTW, it would be great to have a linting tool that compares the current
APT pinnings against what is actually available in the Debian branches
used by a given Tails source checkout.
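
I have not thought this through, but a first iteration could be as
simple as the sketch below (the preferences file location is the one
discussed above; the snapshot URL and overall approach are only
assumptions to illustrate the idea):

    #!/usr/bin/env python3
    # Sketch of a pinning lint: list the packages pinned in
    # config/chroot_apt/preferences and print the version a given
    # snapshot's Packages index would provide for each of them.
    import gzip, re, urllib.request

    PREFERENCES = "config/chroot_apt/preferences"
    # Made-up URL; a real tool would derive it from the snapshot
    # encoded in the checkout.
    PACKAGES_GZ = ("http://example.org/debian/dists/stable/"
                   "main/binary-amd64/Packages.gz")

    def pinned_packages(path):
        # Yield the package names of every "Package:" stanza except "*".
        with open(path) as f:
            for line in f:
                m = re.match(r"Package:\s*(.+)", line)
                if m and m.group(1).strip() != "*":
                    yield from m.group(1).split()

    def snapshot_versions(url):
        # Map package name -> version from a gzipped Packages index.
        with urllib.request.urlopen(url) as resp:
            data = gzip.decompress(resp.read()).decode("utf-8", "replace")
        versions, name = {}, None
        for line in data.splitlines():
            if line.startswith("Package: "):
                name = line[len("Package: "):]
            elif line.startswith("Version: ") and name:
                versions[name] = line[len("Version: "):]
        return versions

    if __name__ == "__main__":
        available = snapshot_versions(PACKAGES_GZ)
        for pkg in sorted(set(pinned_packages(PREFERENCES))):
            print(pkg, "->", available.get(pkg, "not in this snapshot"))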

> Another option, instead of adding/removing temporary APT pinning,
> would be to backport the package we want to upgrade, and make it so it
> has a version greater than the one in the time-based snapshot used by
> the frozen release branch, and lower than the one in more recent
> time-based snapshots.


This makes me really unenthusiastic. Please do not underestimate the
added overhead of having to rebuild packages for trivialities like this.
I strongly object to this approach.
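
Just to spell out the kind of version juggling this implies: if the
frozen snapshot has foo 1.0-1 and a newer snapshot has foo 1.1-1, the
rebuilt package would need an invented version (the suffix below is
made up) such that, in dpkg ordering:

    1.0-1  <  1.0-1+tails.freeze1  <  1.1-1

and someone has to get that right, then rebuild and upload the package,
for every single freeze exception.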

> Number of distributions
>
> ... in reprepro's conf/distributions, for the reprepro instance(s)
> dedicated to taking snapshots of the regular Debian archive, assuming
> other mirrored archives such as security.d.o, deb.tpo, etc. each go to
> their own reprepro instance.


This makes it sound like the design itself fixes which APT sources are
possible to use, and that it will be a pain to add new ones. Or will
some puppet magic automatically set up a new reprepro instance when a
new source is added in any random branch? If so: crazy! :)
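
For anyone not fluent in reprepro: each such distribution means one
more stanza along these lines in conf/distributions (all field values
below are invented for illustration):

    Origin: Debian
    Label: Tails time-based snapshot of Debian
    Codename: stable
    Architectures: amd64 source
    Components: main contrib non-free
    SignWith: ABCD1234

which, I assume, is not something a topic branch can add on its own.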

To make the problem a bit more concrete, you later list:

> torproject: 5 (oldstable, stable, testing, unstable, obfs4proxy)


which doesn't include the *-experimental branches. How would we deal
with a Tor-alpha integration branch, for instance? Would we be forced to
follow the releases manually, and then upload them ourselves to e.g.
deb.t.b.o?

Something I think we still need to support is adding APT sources (to
{binary,config}/chroot_sources) that exist outside of the freezable APT
repo system. I imagine this will remain useful for contributors who do
not have the ability to upload packages to any of the already added
ones. Sure, we have config/chroot_local-packages, but it's not so nice
for contributors if they want to push branches to some Git repo. Imagine
if someone wanted to contribute grsec kernel integration. They would
have to push a commit with binary blobs on the order of 100 MiB.
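
To be clear about what I mean, I'm thinking of the usual kind of
one-line entry, e.g. a config/chroot_sources/grsec.chroot containing
something like (the repository is of course invented):

    deb http://deb.example.org/grsec jessie main

i.e. pointing at an archive that the snapshotting system knows nothing
about.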

> Garbage collection

[...]
> To ensure that garbage collection doesn't delete a snapshot we still
> need, e.g. the one currently referenced in the frozen testing branch,
> we'll maintain a list of snapshots that need to be kept around.


To be clear: each topic branch could potentially have encoded a
different snapshot, correct? In practice most will just follow their
base branches, but my point is that the garbage collector will have to
check each branch, right?

I didn't look much past this, since it seemed a bit too
implementation-focused, and even about things that we will not or may
not ever have.

However, I see nothing about how to deal with Debian packages that
fetch something external at install time (firmware, sources that are
illegal in some jurisdictions). This sounds like a non-trivial problem,
and I really wonder what your thoughts on solutions are.

Crazy idea: along with the snapshots, we also keep a dedicated cache in
a caching proxy. For instance, we set `http_proxy` in the build env to
point to some caching proxy running in our infra, and when we build with
snapshot X, we set the proxy username to X to tell the proxy that cache
X is to be used. Cache X contains exactly the cached files used when
building with snapshot X the first time, because that is when they were
seeded. Crazy! We'd need some sort of access control for this, though.
:) And I also wonder if the same mechanism can be used to determine the
"tagged, partial snapshots", instead of the "'build the almost-final
image' to generate the package manifest" thing I mentioned I didn't like
above. Think of it as an apt-cacher-ng instance that is seeded when
doing the canonical release build, and then frozen forever.
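
Concretely, I imagine the build would do something like this (the host,
port and the exact way of selecting the cache are all made up):

    # the proxy username selects which frozen cache to use
    export http_proxy=http://snapshot-20160112:SECRET@apt-cache.lizard:3142/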

Hopefully there is a simpler solution!

Cheers!