WIP: Support file:/ mirrors #25

Closed
DonKult wants to merge 6 commits from DonKult/mmdebstrap:feature/filemirror into main
First-time contributor

Hi,

mmdebstrap currently doesn't support file:/ mirrors because it expects all deb files to end up in /var/cache/apt/archives, which requires a lot of guesswork. This PR removes the guesswork by "just" asking apt to tell us which deb files it wants to install instead. Easy, right?

I think for the long term we can talk about adding a --print-archive-filenames flag (name subject to change) in apt, but for this WIP I opted for a no-apt-change approach, which seems to work for the most part. But beware that I have neither tested it extensively nor am I a Perl coder; I had to look up what's the deal with $var vs %var vs @var, that is my Perl experience level (aka: n00b).

Thankfully, the current WIP mostly consists of code removal (I just love diffs that add features or fix bugs while reducing line count) and apt sorcery; the tricky and hard part of making it maintainable and working for everyone I gladly leave to someone else if need be.

Known bugs so far

Requires /tmp in chroot?

Test (56/183) mode=unshare,variant=custom: missing /dev, /sys, /proc inside the chroot fails with E: failed to open /tmp/mmdebstrap.9TctWt82qW/tmp/mmdebstrap.listofdebs.alMoIAtC1c0U for reading: No such file or directory. I suppose this chroot also lacks /tmp. If so, this was broken before whenever you used a non-empty /var/cache/apt/archives; it's just that this situation isn't tested much.

We don't actually need apt to generate a file; I just lack the Perl foo to open a file descriptor and work with that accordingly. An apt proof of concept looks like this: apt install awesome -o Debug::pkgDpkgPm=1 -o Dir::Log=/dev/null -oAPT::Keep-Fds::=5 -oDPkg::Tools::options::'cat >&5'::InfoFD=5 -oDpkg::Pre-Install-Pkgs::='cat >&5' -o Debug::NoLocking=1 -o Dpkg::Use-Pty=0 -y 5>/tmp/foo.lst; cat /tmp/foo.lst.

I fully suspect more test failures; I just haven't investigated further so far, given that the tests take literal ages to run, and skipping individual tests has to be implemented by hand…

Reproducibility of /var/cache/apt/archives

The invocation I played with is reproducible just fine, except for one minor problem: if no deb file is downloaded or copied to /var/cache/apt/archives, the directory has a different timestamp compared to when something was (temporarily) there.

For context: WTH am I doing?

My test invocation is:

mmdebstrap --variant=apt --architectures=amd64 --skip=essential/unlink --setup-hook='mkdir "$1/tmp/mirror"; mount -o ro,bind /tmp/mirror "$1/tmp/mirror";' --customize-hook='sync-out /var/cache/apt/archives ./cache' --customize-hook='umount "$1/tmp/mirror"; rmdir "$1/tmp/mirror";' unstable - 'mirror+file:///tmp/mirror/list' | mmtarfilter --path-exclude='/dev/*' > debian-amd64-unstable.tar

(Note that ./cache needs to exist before the call for me. A --setup-hook as advised in the archives-preserving example creates it with the wrong uid for me.)

The mirrorlist file /tmp/mirror/list contains:

file:/tmp/mirror	priority:1
https://deb.debian.org/debian

The general idea is that every file apt can get via file:/ is accessed that way, while files not already cached are acquired from the web, synced out of the chroot and, in a step not shown here, merged into the local mirror structure.

So, yes, my distant end goal is basically reinventing caching proxies, except that this "proxy" has no server component I have to set up and is agnostic to the protocol used to access one (or more) upstream mirrors. apt-cacher-ng is already at its limits if the mirror is https; good luck if you want to access mirrors only via Tor or download from multiple mirrors in parallel.

Not sure if that will become truly useful in the end, as so far it's just me stretching various tools to their limits for fun (and to find regressions I caused myself: https://salsa.debian.org/apt-team/apt/-/merge_requests/235). That is to say: this has at best wishlist priority for me; it isn't blocking any serious work or similar.

DonKult added 5 commits 2022-04-24 14:56:24 +00:00
EIPP stands for "External Installation Planner Protocol" and is rather
similar to EDSP but with the clear advantage that we can extract the
information we need more easily as we can tell apt to write the file for
us rather than playing solver-in-the-middle and the problem space is
much smaller meaning less data for apt to generate and to pass through
our hands.

The idea here is simply that every package which doesn't have a Status
field in EIPP has the uninstalled status, and the only reason it is part
of the EIPP request is that we want to change this by installing it.
That could be verified via the Install header at the start of the
request, but this commit doesn't implement that.

Note that this means we need "more" than the download-only mode can
provide: either a simulation or "the real deal". Except we modify the
latter to be a fancy no-op.
Guessing filenames is boring. What if we could ask apt to tell us which
debs it downloaded (or found lying around elsewhere) directly? Turns out
we can, rather easily, avoiding a bunch of guesswork.
As long as you can make it so that the same path to the deb file works
inside and outside of the chroot, using file:// as a mirror is no longer
a problem with the previous work.
Now that the deb files can reside in different places, sorting them
leads to subtle differences in the order and hence in the created
chroot. apt's unpack order, on the other hand, might not be a good order
(but why would one sorted from a to z be?), but it is far more stable,
as it is independent of the filenames.
Owner

Very nice, thank you! I'm going to mangle your commits a little bit because I see that the EIPP support that one commit introduces is removed in another.

Do you think it's possible to somehow autodetect which directories to bind-mount?

If I have a sources.list entry like deb file:///tmp/mirror ..., is there a way to extract the /tmp/mirror bit without me writing my own sources.list parser? I cannot use apt-get indextargets output, because apt inside the chroot will not be able to access /tmp/mirror when running apt-get update.

Thanks!

Owner

In that sense, I also don't understand how this can work:

@@ -1533,7 +1526,7 @@ else
        skipped=$((skipped+1))
 fi
 
-print_header "mode=$defaultmode,variant=apt: fail with file:// mirror"
+print_header "mode=$defaultmode,variant=apt: file:// mirror"
 cat << END > shared/test.sh
 #!/bin/sh
 set -eu
@@ -1542,13 +1535,9 @@ if [ ! -e /mmdebstrap-testenv ]; then
        echo "this test requires the cache directory to be mounted on /mnt and should only be run inside a container" >&2
        exit 1
 fi
-ret=0
-$CMD --mode=$defaultmode --variant=apt $DEFAULT_DIST /tmp/debian-chroot.tar "deb file:///mnt/cache/debian unstable main" || ret=\$?
+$CMD --mode=$defaultmode --variant=apt $DEFAULT_DIST /tmp/debian-chroot.tar "deb file:///mnt/cache/debian $DEFAULT_DIST main"
+tar -tf /tmp/debian-chroot.tar | sort | diff -u tar1.txt -
 rm /tmp/debian-chroot.tar
-if [ "\$ret" = 0 ]; then
-       echo expected failure but got exit \$ret >&2
-       exit 1
-fi
 END
 if [ "$HAVE_QEMU" = "yes" ]; then
        ./run_qemu.sh

For me, this test fails with:

E: package file /mnt/cache/debian/pool/main/g/gcc-12/gcc-12-base_12.1.0-2_amd64.deb not accessible from chroot directory -- use copy:// instead of file:// or a bind-mount

Which makes sense, because the mmdebstrap call doesn't set up a bind-mount. So let's set one up by adding the following:

--setup-hook='mkdir -p "\$1"/mnt/cache/debian; mount -o ro,bind /mnt/cache/debian "\$1"/mnt/cache/debian'

But then it fails with:

E: cannot unlink /mnt/cache/debian/pool/main/g/gcc-12/gcc-12-base_12.1.0-2_amd64.deb: Read-only file system

Since mmdebstrap assumes that it had downloaded the deb, it will clean it up again so that it doesn't end up inside the resulting tarball. But with a file:// mirror and bind mounts we now have to figure out whether we downloaded the deb or whether it was there before. How can we do that?

DonKult force-pushed feature/filemirror from d3952de003 to 6baf4151f9 2022-05-22 20:55:46 +00:00 Compare
Author
First-time contributor

EDSP->EIPP: Yeah, I replaced it later on… I was thinking at first I would go with an external planner and/or a 'dpkg' which does the extract rather than getting the filenames out, but that turned messy rather fast, so I dropped that direction. I left it in nonetheless, as I thought that this might have value even if the rest of the MR turns into a dumpster fire.

tests: I can't say much about them, as I don't quite follow how that coverage script works, and I got a bit annoyed by having one of the very last tests fail on me on a clean checkout after half a day, with no easy way of just retrying that one test. My machine isn't powerful enough to enjoy these tests properly & my "usecase" works, so who cares about the rest… 😜 So, as said in the MR, I haven't really run the tests much; changes there are mostly done blind.

hooks: I have a setup-hook which basically does that, yes. I also have --skip=essential/unlink in my calls, as I want to keep $1/var/cache/apt/archives so I can sync the debs out of the chroot and merge them into my mirror later on… As that isn't very documented, btw: I managed to figure out that the call you have to do in a hook script is mmdebstrap --hook-helper "$MMDEBSTRAP_ROOT" "$MMDEBSTRAP_MODE" "$MMDEBSTRAP_HOOK" 'env' "$MMDEBSTRAP_VERBOSITY" 'sync-out' '/var/cache/apt/archives' "${CACHE}/archives" >&${MMDEBSTRAP_HOOKSOCK} <&${MMDEBSTRAP_HOOKSOCK} (I just added the commit setting $MMDEBSTRAP_VERBOSITY … I think that makes sense so that a hook script can be as verbose (or not) as the mmdebstrap call is supposed to be). I suppose the unlink step should just check whether the deb it wants to delete is in /var/cache/apt/archives and ignore it otherwise, as it already ignores cached debs.

autobind: Mhh. apt-get indextargets --no-release-info doesn't need access to any downloaded files. It means it will potentially talk about files which do not exist in reality, but that should be fine, as we are mostly interested in the Repo-URI field, I assume.

Owner

> EDSP->EIPP: Yeah, I replaced it later on… I was thinking at first I would go with an external planner and/or a 'dpkg' which does the extract rather than getting the filenames out, but that turned messy rather fast, so I dropped that direction. I left it in nonetheless, as I thought that this might have value even if the rest of the MR turns into a dumpster fire.

The EIPP stuff was a very interesting read but unfortunately I don't know whether I'll need it anywhere else now. :/

> tests: I can't say much about them, as I don't quite follow how that coverage script works, and I got a bit annoyed by having one of the very last tests fail on me on a clean checkout after half a day, with no easy way of just retrying that one test. My machine isn't powerful enough to enjoy these tests properly & my "usecase" works, so who cares about the rest… 😜 So, as said in the MR, I haven't really run the tests much; changes there are mostly done blind.

Yeah, the testsuite has to be rewritten, but since I was (so far) the only one running it, there wasn't much motivation to do so. A related problem is that I don't want to write a testsuite runner from scratch and am looking for a framework that does what I want, but I haven't been successful in finding one so far.

> hooks: I have a setup-hook which basically does that, yes. I also have --skip=essential/unlink in my calls, as I want to keep $1/var/cache/apt/archives so I can sync the debs out of the chroot and merge them into my mirror later on… As that isn't very documented, btw: I managed to figure out that the call you have to do in a hook script is mmdebstrap --hook-helper "$MMDEBSTRAP_ROOT" "$MMDEBSTRAP_MODE" "$MMDEBSTRAP_HOOK" 'env' "$MMDEBSTRAP_VERBOSITY" 'sync-out' '/var/cache/apt/archives' "${CACHE}/archives" >&${MMDEBSTRAP_HOOKSOCK} <&${MMDEBSTRAP_HOOKSOCK} (I just added the commit setting $MMDEBSTRAP_VERBOSITY … I think that makes sense so that a hook script can be as verbose (or not) as the mmdebstrap call is supposed to be). I suppose the unlink step should just check whether the deb it wants to delete is in /var/cache/apt/archives and ignore it otherwise, as it already ignores cached debs.

There are a bunch of undocumented options (but as an apt developer you are probably used to that) which are undocumented because I do not consider their interface stable yet. An MMDEBSTRAP_VERBOSITY variable sounds like a good idea, thanks! (I wish we had a setting to run maintainer scripts with -x without me having to rebuild src:glibc just because I want to put set -x in its preinst...)

> autobind: Mhh. apt-get indextargets --no-release-info doesn't need access to any downloaded files. It means it will potentially talk about files which do not exist in reality, but that should be fine, as we are mostly interested in the Repo-URI field, I assume.

Ah, an undocumented --no-release-info argument setting an undocumented option APT::Get::IndexTargets::ReleaseInfo. We were just talking about those undocumented bits. ;) See, this is why I'm happy to have an apt developer here who can teach me all the magic incantations. Indeed all the undocumented --hook-helper magic of mmdebstrap is just because I want to be just as cool as apt! :D

That sounds good. I think I'll write a hook script that does the right indextargets magic and can be used by those who want to use file:// and want their stuff automounted. Thanks!

Author
First-time contributor

I know you are very serious with the latter remarks, but: --no-release-info is actually documented in doc/acquire-additional-files.md. That the underlying option isn't documented is somewhat normal, as it's just there to tell the code that the flag was used, and there is no reason to use that option instead of the documented flag.

As I tried to mention, the EDSP->EIPP code I dropped could theoretically be used to fold the extract step into the download step. Whether that is a useful direction, though… Also, EIPP is faster than EDSP (as less data is pushed around), so if the entire MR had been trashed (as mentioned on IRC, I half-expected that), at least that part could have been salvaged.

automount: Not sure I would try to be too clever here. file:/ support is all nice, but as this MR mentions, I am actually using mirror+file:/, which points to a single file somewhere and from within it potentially to other methods; I gave an example above. Trying to support that might be a bit overkill…

Owner

> As I tried to mention, the EDSP->EIPP code I dropped could theoretically be used to fold the extract step into the download step. Whether that is a useful direction, though… Also, EIPP is faster than EDSP (as less data is pushed around), so if the entire MR had been trashed (as mentioned on IRC, I half-expected that), at least that part could have been salvaged.

Yeah I had read that part but wasn't sure how you meant this could be done. Could you roughly outline the idea behind it being possible to download and extract at the same time?

> automount: Not sure I would try to be too clever here. file:/ support is all nice, but as this MR mentions, I am actually using mirror+file:/, which points to a single file somewhere and from within it potentially to other methods; I gave an example above. Trying to support that might be a bit overkill…

How would a mirror+file:// entry look in apt-get indextargets output? Currently, I have this setup-hook script:

env APT_CONFIG=$MMDEBSTRAP_APT_CONFIG apt-get indextargets --no-release-info \
	| sed -ne 's/^Repo-URI: file:\/\+//p' \
	| sort -u \
	| while read -r path; do
		mkdir -p "$rootdir/run/mmdebstrap"
		mkdir -p "$rootdir/$path"
		mount -o ro,bind "/$path" "$rootdir/$path"
		printf '/%s\0' "$path" >> "$rootdir/run/mmdebstrap/file-mirror-automount"
	done

And this customize-hook script:

cat "$rootdir/run/mmdebstrap/file-mirror-automount" | xargs --null --no-run-if-empty -I '{}' --max-args=1 umount "$rootdir/{}"
rm "$rootdir/run/mmdebstrap/file-mirror-automount"
rmdir --ignore-fail-on-non-empty "$rootdir/run/mmdebstrap"
Owner

I think I'm done reworking your commits. I fixed some of the Perl problems (arrays are hard) and am using a file descriptor instead of a file in /tmp (super easy to set up all those read/write handles and fork an extra process to read from them...). With those changes, the tests pass with nearly no modifications, so I don't think this should introduce any regressions; it should just enable new functionality. If you have any comments, I'm eager to hear them. Otherwise, this is what I'm going to commit to close this pull request:

From bc3e7501368429fb78e6c112b5fa3f496e63c87e Mon Sep 17 00:00:00 2001
From: David Kalnischkies <david@kalnischkies.de>
Date: Tue, 24 May 2022 11:47:16 +0200
Subject: [PATCH] Rework download stage to allow file:// mirrors

 - factor out package downloading function
 - replace -oApt::Get::Download-Only=true by -oDebug::pkgDpkgPm=1
 - remove guessing of package names in /var/cache/apt/archives/
 - drop edsp parsing with proxysolver/mmdebstrap-dump-solution to obtain
   downloaded filenames in favour of -oDpkg::Pre-Install-Pkgs::=cat
 - /var/cache/apt/archives/ is now allowed to contain packages
 - drop --skip=download/empty
 - file:// mirrors are now supported if their path is available inside
   the chroot
---
 coverage.sh |  14 +--
 mmdebstrap  | 336 +++++++++++++++++++---------------------------------
 2 files changed, 126 insertions(+), 224 deletions(-)

diff --git a/coverage.sh b/coverage.sh
index b8afbf6..1eb87e4 100755
--- a/coverage.sh
+++ b/coverage.sh
@@ -1535,7 +1535,7 @@ else
 	skipped=$((skipped+1))
 fi
 
-print_header "mode=$defaultmode,variant=apt: fail with file:// mirror"
+print_header "mode=$defaultmode,variant=apt: file:// mirror"
 cat << END > shared/test.sh
 #!/bin/sh
 set -eu
@@ -1544,13 +1544,9 @@ if [ ! -e /mmdebstrap-testenv ]; then
 	echo "this test requires the cache directory to be mounted on /mnt and should only be run inside a container" >&2
 	exit 1
 fi
-ret=0
-$CMD --mode=$defaultmode --variant=apt $DEFAULT_DIST /tmp/debian-chroot.tar "deb file:///mnt/cache/debian unstable main" || ret=\$?
+$CMD --mode=$defaultmode --variant=apt --setup-hook='mkdir -p "\$1"/mnt/cache/debian; mount -o ro,bind /mnt/cache/debian "\$1"/mnt/cache/debian' --customize-hook='umount "\$1"/mnt/cache/debian; rmdir "\$1"/mnt/cache/debian "\$1"/mnt/cache' $DEFAULT_DIST /tmp/debian-chroot.tar "deb file:///mnt/cache/debian $DEFAULT_DIST main"
+tar -tf /tmp/debian-chroot.tar | sort | diff -u tar1.txt -
 rm /tmp/debian-chroot.tar
-if [ "\$ret" = 0 ]; then
-	echo expected failure but got exit \$ret >&2
-	exit 1
-fi
 END
 if [ "$HAVE_QEMU" = "yes" ]; then
 	./run_qemu.sh
@@ -3083,7 +3079,7 @@ $CMD \$include --mode=$defaultmode --variant=$variant \
 	--setup-hook='sync-in "'"\$tmpdir"'" /var/cache/apt/archives/partial' \
 	$DEFAULT_DIST - $mirror > test1.tar
 cmp orig.tar test1.tar
-$CMD \$include --mode=$defaultmode --variant=$variant --skip=download/empty \
+$CMD \$include --mode=$defaultmode --variant=$variant \
 	--customize-hook='touch "\$1"/var/cache/apt/archives/partial' \
 	--setup-hook='mkdir -p "\$1"/var/cache/apt/archives/' \
 	--setup-hook='sync-in "'"\$tmpdir"'" /var/cache/apt/archives/' \
@@ -3250,8 +3246,6 @@ rm /tmp/debian-chroot/etc/hostname
 rm /tmp/debian-chroot/etc/resolv.conf
 rm /tmp/debian-chroot/var/lib/dpkg/status
 rm /tmp/debian-chroot/var/cache/apt/archives/lock
-rm /tmp/debian-chroot/var/lib/dpkg/lock
-rm /tmp/debian-chroot/var/lib/dpkg/lock-frontend
 rm /tmp/debian-chroot/var/lib/apt/lists/lock
 ## delete merged usr symlinks
 #rm /tmp/debian-chroot/libx32
diff --git a/mmdebstrap b/mmdebstrap
index 3f73d2b..7ff2ab7 100755
--- a/mmdebstrap
+++ b/mmdebstrap
@@ -872,30 +872,11 @@ sub run_dpkg_progress {
 }
 
 sub run_apt_progress {
-    my $options = shift;
-    my @debs    = @{ $options->{PKGS} // [] };
-    my $tmpedsp;
-    if (exists $options->{EDSP_RES}) {
-        (undef, $tmpedsp) = tempfile(
-            "mmdebstrap.edsp.XXXXXXXXXXXX",
-            OPEN   => 0,
-            TMPDIR => 1
-        );
-    }
+    my $options  = shift;
+    my @debs     = @{ $options->{PKGS} // [] };
     my $get_exec = sub {
         my @prefix = ();
         my @opts   = ();
-        if (exists $options->{EDSP_RES}) {
-            push @prefix, 'env', "APT_EDSP_DUMP_FILENAME=$tmpedsp";
-            if (-e "./proxysolver") {
-                # for development purposes, use the current directory if it
-                # contains a file called proxysolver
-                push @opts, ("-oDir::Bin::solvers=" . getcwd()),
-                  '--solver=proxysolver';
-            } else {
-                push @opts, '--solver=mmdebstrap-dump-solution';
-            }
-        }
         return (
             @prefix,
             @{ $options->{ARGV} },
@@ -950,38 +931,79 @@ sub run_apt_progress {
         }
     };
     run_progress $get_exec, $line_handler, $line_has_error, $options->{CHDIR};
-    if (exists $options->{EDSP_RES}) {
-        info "parsing EDSP results...";
-        open my $fh, '<', $tmpedsp
-          or error "failed to open $tmpedsp for reading: $!";
-        my $inst = 0;
-        my $pkg;
-        my $ver;
-        while (my $line = <$fh>) {
-            chomp $line;
-            if ($line ne "") {
-                if ($line =~ /^Install: \d+/) {
-                    $inst = 1;
-                } elsif ($line =~ /^Package: (.*)/) {
-                    $pkg = $1;
-                } elsif ($line =~ /^Version: (.*)/) {
-                    $ver = $1;
-                }
-                next;
-            }
-            if ($inst == 1 && defined $pkg && defined $ver) {
-                push @{ $options->{EDSP_RES} }, [$pkg, $ver];
-            }
-            $inst = 0;
-            undef $pkg;
-            undef $ver;
-        }
-        close $fh;
-        unlink $tmpedsp;
-    }
     return;
 }
 
+sub run_apt_download_progress {
+    my $options = shift;
+    if ($options->{dryrun}) {
+        info "simulate downloading packages with apt...";
+    } else {
+        info "downloading packages with apt...";
+    }
+    pipe my $rfh, my $wfh;
+    my $pid = open my $fh, '-|' // error "fork() failed: $!";
+    if ($pid == 0) {
+        close $wfh;
+        # read until parent process closes $wfh
+        my $content = do { local $/; <$rfh> };
+        close $rfh;
+        # the parent is done -- pass what we read back to it
+        print $content;
+        exit 0;
+    }
+    close $rfh;
+    # Unset the close-on-exec flag, so that the file descriptor does not
+    # get closed when we exec
+    my $flags = fcntl($wfh, F_GETFD, 0)        or error "fcntl F_GETFD: $!";
+    fcntl($wfh, F_SETFD, $flags & ~FD_CLOEXEC) or error "fcntl F_SETFD: $!";
+    my $fd = fileno $wfh;
+    # 2022-05-02, #debian-apt on OFTC, times in UTC+2
+    # 16:57 < josch> DonKult: how is -oDebug::pkgDpkgPm=1 -oDir::Log=/dev/null
+    #                a "fancy no-op"?
+    # 11:52 < DonKult> josch: "fancy no-op" in sofar as it does nothing to the
+    #                  system even through its not in a special mode ala
+    #                  simulation or download-only. It does all the things it
+    #                  normally does, except that it just prints the dpkg calls
+    #                  instead of execv() them which in practice amounts means
+    #                  it does nothing (the Dir::Log just prevents libapt from
+    #                  creating the /var/log/apt directories. As the code
+    #                  creates them even if no logs will be placed there…). As
+    #                  said, midterm an apt --print-install-packages or
+    #                  something would be nice to avoid running everything.
+    run_apt_progress({
+            ARGV => [
+                'apt-get',
+                '--yes',
+                '-oDebug::pkgDpkgPm=1',
+                '-oDir::Log=/dev/null',
+                $options->{dryrun}
+                ? '-oAPT::Get::Simulate=true'
+                : (
+                    "-oAPT::Keep-Fds::=$fd",
+                    "-oDPkg::Tools::options::'cat >&$fd'::InfoFD=$fd",
+                    "-oDpkg::Pre-Install-Pkgs::=cat >&$fd",
+                    # no need to lock the database if we are just downloading
+                    "-oDebug::NoLocking=1",
+                    # no need for pty magic if we write no log
+                    "-oDpkg::Use-Pty=0",
+                ),
+                @{ $options->{APT_ARGV} },
+            ],
+        });
+    # signal the child process that we are done
+    close $wfh;
+    # and then read from it what it got
+    my @listofdebs = <$fh>;
+    close $fh;
+    if ($? != 0) {
+        error "status child failed";
+    }
+    # remove trailing newlines
+    chomp @listofdebs;
+    return @listofdebs;
+}
+
 sub run_chroot {
     my $cmd     = shift;
     my $options = shift;
@@ -2037,24 +2059,12 @@ sub run_update() {
 sub run_download() {
     my $options = shift;
 
-    # We use /var/cache/apt/archives/ to figure out which packages apt chooses
-    # to install. That's why the directory must be empty if:
-    #  - /var/cache/apt/archives exists, and
-    #  - no simulation run is done, and
-    #  - the variant is not extract or custom or the number to be
-    #    installed packages not zero
-    #
-    # We could also unconditionally use the proxysolver and then "apt-get
-    # download" any missing packages but using the proxysolver requires
-    # /usr/lib/apt/solvers/apt from the apt-utils package and we want to avoid
-    # that dependency.
-    #
     # In the future we want to replace downloading packages with "apt-get
-    # install --download-only" and installing them with dpkg by just installing
-    # the essential packages with apt from the outside with
-    # DPkg::Chroot-Directory. We are not doing that because then the preinst
-    # script of base-passwd will not be called early enough and packages will
-    # fail to install because they are missing /etc/passwd.
+    # install" and installing them with dpkg by just installing the essential
+    # packages with apt from the outside with DPkg::Chroot-Directory.
+    # We are not doing that because then the preinst script of base-passwd will
+    # not be called early enough and packages will fail to install because they
+    # are missing /etc/passwd.
     my @cached_debs = ();
     my @dl_debs     = ();
     if (
@@ -2076,14 +2086,6 @@ sub run_download() {
             push @cached_debs, $deb;
         }
         closedir $dh;
-        if (scalar @cached_debs > 0) {
-            if (any { $_ eq 'download/empty' } @{ $options->{skip} }) {
-                info "skipping download/empty as requested";
-            } else {
-                error("/var/cache/apt/archives/ inside the chroot contains: "
-                      . (join ', ', (sort @cached_debs)));
-            }
-        }
     }
 
     # To figure out the right package set for the apt variant we can use:
@@ -2097,7 +2099,7 @@ sub run_download() {
             info "nothing to download -- skipping...";
             return ([], \@cached_debs);
         }
-        my %pkgs_to_install;
+        my @apt_argv = ('install');
         for my $incl (@{ $options->{include} }) {
             for my $pkg (split /[,\s]+/, $incl) {
                 # strip leading and trailing whitespace
@@ -2106,32 +2108,15 @@ sub run_download() {
                 if ($pkg eq '') {
                     next;
                 }
-                $pkgs_to_install{$pkg} = ();
+                push @apt_argv, $pkg;
             }
         }
 
-        my %result = ();
-        if ($options->{dryrun}) {
-            info "simulate downloading packages with apt...";
-        } else {
-            # if there are already packages in /var/cache/apt/archives/, we
-            # need to use our proxysolver to obtain the solution chosen by apt
-            if (scalar @cached_debs > 0) {
-                $result{EDSP_RES} = \@dl_debs;
-            }
-            info "downloading packages with apt...";
-        }
-        run_apt_progress({
-                ARGV => [
-                    'apt-get',
-                    '--yes',
-                    '-oApt::Get::Download-Only=true',
-                    $options->{dryrun} ? '-oAPT::Get::Simulate=true' : (),
-                    'install'
-                ],
-                PKGS => [keys %pkgs_to_install],
-                %result
-            });
+        @dl_debs = run_apt_download_progress({
+                APT_ARGV => [@apt_argv],
+                dryrun   => $options->{dryrun},
+            },
+        );
     } elsif ($options->{variant} eq 'apt') {
         # if we just want to install Essential:yes packages, apt and their
         # dependencies then we can make use of libapt treating apt as
@@ -2146,27 +2131,11 @@ sub run_download() {
         #                  remind me in 5+ years that I said that after I wrote
         #                  in the bugreport: "Are you crazy?!? Nobody in his
         #                  right mind would even suggest depending on it!")
-        my %result = ();
-        if ($options->{dryrun}) {
-            info "simulate downloading packages with apt...";
-        } else {
-            # if there are already packages in /var/cache/apt/archives/, we
-            # need to use our proxysolver to obtain the solution chosen by apt
-            if (scalar @cached_debs > 0) {
-                $result{EDSP_RES} = \@dl_debs;
-            }
-            info "downloading packages with apt...";
-        }
-        run_apt_progress({
-                ARGV => [
-                    'apt-get',
-                    '--yes',
-                    '-oApt::Get::Download-Only=true',
-                    $options->{dryrun} ? '-oAPT::Get::Simulate=true' : (),
-                    'dist-upgrade'
-                ],
-                %result
-            });
+        @dl_debs = run_apt_download_progress({
+                APT_ARGV => ['dist-upgrade'],
+                dryrun   => $options->{dryrun},
+            },
+        );
     } elsif (
         any { $_ eq $options->{variant} }
         ('essential', 'standard', 'important', 'required', 'buildd')
@@ -2175,23 +2144,8 @@ sub run_download() {
         # 17:27 < DonKult> (?essential includes 'apt' through)
         # 17:30 < josch> DonKult: no, because pkgCacheGen::ForceEssential ",";
         # 17:32 < DonKult> touché
-        my %result = ();
-        if ($options->{dryrun}) {
-            info "simulate downloading packages with apt...";
-        } else {
-            # if there are already packages in /var/cache/apt/archives/, we
-            # need to use our proxysolver to obtain the solution chosen by apt
-            if (scalar @cached_debs > 0) {
-                $result{EDSP_RES} = \@dl_debs;
-            }
-            info "downloading packages with apt...";
-        }
-        run_apt_progress({
-                ARGV => [
-                    'apt-get',
-                    '--yes',
-                    '-oApt::Get::Download-Only=true',
-                    $options->{dryrun} ? '-oAPT::Get::Simulate=true' : (),
+        @dl_debs = run_apt_download_progress({
+                APT_ARGV => [
                     'install',
                     '?narrow('
                       . (
@@ -2206,76 +2160,34 @@ sub run_download() {
                       . $options->{nativearch}
                       . '),?essential)'
                 ],
-                %result
-            });
+                dryrun => $options->{dryrun},
+            },
+        );
     } else {
         error "unknown variant: $options->{variant}";
     }
 
     my @essential_pkgs;
-    if (scalar @cached_debs > 0 && scalar @dl_debs > 0) {
-        my $archives = "/var/cache/apt/archives/";
-        # for each package in @dl_debs, check if it's in
-        # /var/cache/apt/archives/ and add it to @essential_pkgs
-        foreach my $p (@dl_debs) {
-            my ($pkg, $ver_epoch) = @{$p};
-            # apt appends the architecture at the end of the package name
-            ($pkg, my $arch) = split ':', $pkg, 2;
-            # apt replaces the colon by its percent encoding %3a
-            my $ver = $ver_epoch;
-            $ver =~ s/:/%3a/;
-            # the architecture returned by apt is the native architecture.
-            # Since we don't know whether the package is architecture
-            # independent or not, we first try with the native arch and then
-            # with "all" and only error out if neither exists.
-            if (-e "$options->{root}/$archives/${pkg}_${ver}_$arch.deb") {
-                push @essential_pkgs, "$archives/${pkg}_${ver}_$arch.deb";
-            } elsif (-e "$options->{root}/$archives/${pkg}_${ver}_all.deb") {
-                push @essential_pkgs, "$archives/${pkg}_${ver}_all.deb";
-            } else {
-                error(  "cannot find package for $pkg:$arch (= $ver_epoch) "
-                      . "in /var/cache/apt/archives/");
+    # strip the chroot directory from the filenames
+    foreach my $deb (@dl_debs) {
+        # if filename does not start with chroot directory then the user
+        # might've used a file:// mirror and we check whether the path is
+        # accessible inside the chroot
+        if (rindex $deb, $options->{root}, 0) {
+            if (!-e "$options->{root}/$deb") {
+                error "package file $deb not accessible from chroot directory"
+                  . " -- use copy:// instead of file:// or a bind-mount";
             }
+            push @essential_pkgs, $deb;
+            next;
         }
-    } else {
-        # collect the .deb files that were downloaded by apt from the content
-        # of /var/cache/apt/archives/
-        if (!$options->{dryrun}) {
-            my $apt_archives = "/var/cache/apt/archives/";
-            opendir my $dh, "$options->{root}/$apt_archives"
-              or error "cannot read $apt_archives";
-            while (my $deb = readdir $dh) {
-                if ($deb !~ /\.deb$/) {
-                    next;
-                }
-                $deb = "$apt_archives/$deb";
-                if (!-f "$options->{root}/$deb") {
-                    next;
-                }
-                push @essential_pkgs, $deb;
-            }
-            closedir $dh;
-
-            if (scalar @essential_pkgs == 0) {
-                # check if a file:// URI was used
-                open(my $pipe_apt, '-|', 'apt-get', 'indextargets', '--format',
-                    '$(URI)', 'Created-By: Packages')
-                  or error "cannot start apt-get indextargets: $!";
-                while (my $uri = <$pipe_apt>) {
-                    if ($uri =~ /^file:\/\//) {
-                        error
-                          "nothing got downloaded -- use copy:// instead of"
-                          . " file://";
-                    }
-                }
-                error "nothing got downloaded";
-            }
+        # filename starts with chroot directory, strip it off
+        # this is the normal case
+        if (!-e $deb) {
+            error "cannot find package file $deb";
         }
+        push @essential_pkgs, substr($deb, length($options->{root}));
     }
-    # Unpack order matters. Since we create this list using two different
-    # methods but we want both methods to have the same result, we sort the
-    # list before returning it.
-    @essential_pkgs = sort @essential_pkgs;
 
     return (\@essential_pkgs, \@cached_debs);
 }
@@ -2732,6 +2644,10 @@ sub run_essential() {
             # before the download phase
             next
               if any { "/var/cache/apt/archives/$_" eq $deb } @{$cached_debs};
+            # do not unlink those packages that were not in
+            # /var/cache/apt/archive (for example because they were provided by
+            # /var/cache/apt/archives/ (for example because they were provided
+            # by a file:// mirror)
+            next if $deb !~ /\/var\/cache\/apt\/archives\//;
             unlink "$options->{root}/$deb"
               or error "cannot unlink $deb: $!";
         }
@@ -6721,15 +6637,13 @@ the B<setup> step. This can be disabled using B<--skip=update>.
 
 =item B<download>
 
-Checks whether F</var/cache/apt/archives/> is empty. This can be disabled with
-B<--skip=download/empty>. In the B<extract> and B<custom> variants, C<apt-get
---download-only install> is used to download all the packages requested via the
-B<--include> option. The B<apt> variant uses the fact that libapt treats the
-C<apt> packages as implicitly essential to download only all C<Essential:yes>
-packages plus apt using C<apt-get --download-only dist-upgrade>. In the
-remaining variants, all Packages files downloaded by the B<update> step are
-inspected to find the C<Essential:yes> package set as well as all packages of
-the required priority.
+In the B<extract> and B<custom> variants, C<apt-get install> is used to
+download all the packages requested via the B<--include> option. The B<apt>
+variant uses the fact that libapt treats the C<apt> packages as implicitly
+essential to download only all C<Essential:yes> packages plus apt using
+C<apt-get dist-upgrade>. In the remaining variants, all Packages files
+downloaded by the B<update> step are inspected to find the C<Essential:yes>
+package set as well as all packages of the required priority.
 
 =item B<extract>
 
@@ -6979,7 +6893,7 @@ apt-cacher-ng, you can use the B<sync-in> and B<sync-out> special hooks to
 synchronize a directory outside the chroot with F</var/cache/apt/archives>
 inside the chroot.
 
-    $ mmdebstrap --variant=apt --skip=download/empty --skip=essential/unlink \
+    $ mmdebstrap --variant=apt --skip=essential/unlink \
         --setup-hook='mkdir -p ./cache "$1"/var/cache/apt/archives/' \
         --setup-hook='sync-in ./cache /var/cache/apt/archives/' \
         --customize-hook='sync-out /var/cache/apt/archives ./cache' \
@@ -7149,12 +7063,6 @@ as the non-root user, then as a workaround you could run C<chmod 600
 /etc/dpkg/dpkg.cfg.d/*> so that the config files are only accessible by the
 root user. See Debian bug #808203.
 
-The C<file://> URI type cannot be used to install the essential packages. This
-is because B<mmdebstrap> uses dpkg to install the packages that apt places into
-F</var/cache/apt/archives> but with C<file://> apt will not copy the files even
-with C<--download-only>. Use C<copy://> instead, which is equivalent to
-C<file://> but copies the archives into F</var/cache/apt/archives>.
-
 With apt versions before 2.1.16, setting C<[trusted=yes]> or
 C<Acquire::AllowInsecureRepositories "1"> to allow signed archives without a
 known public key or unsigned archives will fail because of a gpg warning in the
-- 
2.35.1
=item B<download> -Checks whether F</var/cache/apt/archives/> is empty. This can be disabled with -B<--skip=download/empty>. In the B<extract> and B<custom> variants, C<apt-get ---download-only install> is used to download all the packages requested via the -B<--include> option. The B<apt> variant uses the fact that libapt treats the -C<apt> packages as implicitly essential to download only all C<Essential:yes> -packages plus apt using C<apt-get --download-only dist-upgrade>. In the -remaining variants, all Packages files downloaded by the B<update> step are -inspected to find the C<Essential:yes> package set as well as all packages of -the required priority. +In the B<extract> and B<custom> variants, C<apt-get install> is used to +download all the packages requested via the B<--include> option. The B<apt> +variant uses the fact that libapt treats the C<apt> packages as implicitly +essential to download only all C<Essential:yes> packages plus apt using +C<apt-get dist-upgrade>. In the remaining variants, all Packages files +downloaded by the B<update> step are inspected to find the C<Essential:yes> +package set as well as all packages of the required priority. =item B<extract> @@ -6979,7 +6893,7 @@ apt-cacher-ng, you can use the B<sync-in> and B<sync-out> special hooks to synchronize a directory outside the chroot with F</var/cache/apt/archives> inside the chroot. - $ mmdebstrap --variant=apt --skip=download/empty --skip=essential/unlink \ + $ mmdebstrap --variant=apt --skip=essential/unlink \ --setup-hook='mkdir -p ./cache "$1"/var/cache/apt/archives/' \ --setup-hook='sync-in ./cache /var/cache/apt/archives/' \ --customize-hook='sync-out /var/cache/apt/archives ./cache' \ @@ -7149,12 +7063,6 @@ as the non-root user, then as a workaround you could run C<chmod 600 /etc/dpkg/dpkg.cfg.d/*> so that the config files are only accessible by the root user. See Debian bug #808203. -The C<file://> URI type cannot be used to install the essential packages. 
This -is because B<mmdebstrap> uses dpkg to install the packages that apt places into -F</var/cache/apt/archives> but with C<file://> apt will not copy the files even -with C<--download-only>. Use C<copy://> instead, which is equivalent to -C<file://> but copies the archives into F</var/cache/apt/archives>. - With apt versions before 2.1.16, setting C<[trusted=yes]> or C<Acquire::AllowInsecureRepositories "1"> to allow signed archives without a known public key or unsigned archives will fail because of a gpg warning in the -- 2.35.1 ```
Author
First-time contributor

The Repo-URI for a (.*\+)mirror+file is e.g. tor+mirror+file:/path/to/file/mirror.file/ – that is, the filename of the mirror file with a / appended, since that is how the URI looks in general, e.g. tor+mirror+file:/path/to/file/mirror.file/dists/unstable/InRelease.
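For illustration, a sources.list entry using apt's mirror method with a local mirror list could look like this (path and suite are hypothetical; the listed file contains the actual mirror URIs, one per line):

```
# hypothetical sources.list entry: the mirror method reads the real
# mirror URIs from the named file, one URI per line
deb mirror+file:/path/to/file/mirror.file unstable main
```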

Regarding merging download and unpack: You could e.g. extract in a dpkg::pre-install-pkgs hook and then have apt somewhat normally run dpkg over it. Probably harder than I make it sound though.

The patch seems mostly fine (except that you are adding yet another quote…) with a few comments:

  • the patch doesn't apply cleanly on main – some hunks apply with fuzz, some with offset, but it can be applied automatically.
  • my branch was dropping proxysolver as it's no longer in use
  • typo:
diff --git a/mmdebstrap b/mmdebstrap
index c4b48ce..d94bbfc 100755
--- a/mmdebstrap
+++ b/mmdebstrap
@@ -983,7 +983,7 @@ sub run_apt_download_progress {
                     "-oAPT::Keep-Fds::=$fd",
                     "-oDPkg::Tools::options::'cat >&$fd'::InfoFD=$fd",
                     "-oDpkg::Pre-Install-Pkgs::=cat >&$fd",
-                    # no need to look the database if we are just downloading
+                    # no need to lock the database if we are just downloading
                     "-oDebug::NoLocking=1",
                     # no need for pty magic if we write no log
                     "-oDpkg::Use-Pty=0",

Apart from that, the diff shows that I only made a couple of Perl errors… gonna write "Perl Expert" in my CV now. 😉 My "setup" also still works with this patch, so, as said and in summary, fine by me 👍

Owner

The Repo-URI for a (.*\+)mirror+file is e.g. tor+mirror+file:/path/to/file/mirror.file/ – which is the filename of the file with a / at the end as that is how the URI will look in general like tor+mirror+file:/path/to/file/mirror.file/dists/unstable/InRelease.

Okay, thanks. Currently, the file-mirror-automount hook scripts only support file:// mirrors but I think that's already a useful start.
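For a file:// mirror that should stay file://, a bind-mount is the alternative named in the patch's error message. A hypothetical invocation could look like the following sketch (paths, suite and hook commands are made up; this is not the shipped file-mirror-automount script, and it needs the privileges required for mount):

```shell
# Hypothetical sketch: bind-mount the local mirror to the same path inside
# the chroot ("$1" is the chroot directory passed to mmdebstrap hooks), so
# the .deb files referenced by apt stay accessible during installation.
mmdebstrap --variant=apt \
    --setup-hook='mkdir -p "$1"/srv/mirror' \
    --setup-hook='mount -o ro,bind /srv/mirror "$1"/srv/mirror' \
    --customize-hook='umount "$1"/srv/mirror; rmdir "$1"/srv/mirror' \
    unstable chroot.tar \
    "deb [trusted=yes] file:/srv/mirror unstable main"
```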

Regarding merging download and unpack: You could e.g. extract in a dpkg::pre-install-pkgs hook and then have apt somewhat normally run dpkg over it. Probably harder than I make it sound though.

I'm currently in the process of rewriting the test suite so that it becomes trivial to skip tests or only run specific tests, in an attempt to motivate future contributions. Looking forward to your patch! ;)

The patch seems mostly fine (except that you are adding yet another quote…)

Hey, I need those so that future-me can still figure out what the apt magic is all about!

with a few comments:

  • the patch doesn't apply cleanly on main – some fuzzy some offset, but it can be applied automatically.

Yes, I have a number of uncommitted changes locally that I will push after I am done with rewriting the testsuite.

  • my branch was dropping proxysolver as its no longer in use

Originally, I wrote the proxysolver for a completely different use-case outside of mmdebstrap. Even if mmdebstrap isn't using it anymore, I need a package that ships it until I port everything else away from it.

  • typo:
diff --git a/mmdebstrap b/mmdebstrap
index c4b48ce..d94bbfc 100755
--- a/mmdebstrap
+++ b/mmdebstrap
@@ -983,7 +983,7 @@ sub run_apt_download_progress {
                     "-oAPT::Keep-Fds::=$fd",
                     "-oDPkg::Tools::options::'cat >&$fd'::InfoFD=$fd",
                     "-oDpkg::Pre-Install-Pkgs::=cat >&$fd",
-                    # no need to look the database if we are just downloading
+                    # no need to lock the database if we are just downloading
                     "-oDebug::NoLocking=1",
                     # no need for pty magic if we write no log
                     "-oDpkg::Use-Pty=0",

Thank you! Fixed locally.

Apart from that, the diff shows that I only made a couple of Perl errors… gonna write "Perl Expert" in my CV now. 😉

The big thing in Perl that takes a lot of getting used to (at least it did for me) is that the type of a variable is decided by the context the variable is used in. You only confused Perl arrays with Perl lists, resulting in Odd number of elements in anonymous hash even though you didn't attempt to create a hash anywhere. ;)

My "setup" also still works with this patch, so as said, and in summary, fine by me 👍

Awesome! I really like your patch because (as you also already pointed out) it removes so much code without breaking functionality. Let me know if you ever implement apt-get bootstrap so that I can remove another few thousand lines. ;)

Owner

Fixed in cc3150ef04

josch closed this pull request 2022-05-27 06:02:25 +00:00

Reference: josch/mmdebstrap#25