Compare commits


No commits in common. "7d472ca116a1f48a208e20840a4e69ff3b1321aa" and "594ea3c72e7af26b680d2bfb696c0271839b731d" have entirely different histories.

5 changed files with 404 additions and 531 deletions

View file

@ -39,7 +39,6 @@ Summary:
- foreign architecture chroots with qemu-user
- variant installing only Essential:yes packages and dependencies
- temporary chroots by redirecting to /dev/null
- chroots without apt inside (for chroot from buildinfo file with debootsnap)
The author believes that a chroot of a Debian stable release should include the
latest packages including security fixes by default. This has been a wontfix
@ -76,11 +75,11 @@ reproducible** if the `$SOURCE_DATE_EPOCH` environment variable is set.
The author believes that it should not be necessary to have superuser
privileges to create a file (the chroot tarball) in one's home directory.
Thus, mmdebstrap provides multiple options to create a chroot tarball with the
right permissions **without superuser privileges**. This avoids a whole class
of bugs like #921815. Depending on what is available, it uses either Linux user
namespaces, fakechroot or proot. Debootstrap supports fakechroot but will not
create a tarball with the right permissions by itself. Support for Linux user
namespaces and proot is missing (see bugs #829134 and #698347, respectively).
right permissions **without superuser privileges**. Depending on what is
available, it uses either Linux user namespaces, fakechroot or proot.
Debootstrap supports fakechroot but will not create a tarball with the right
permissions by itself. Support for Linux user namespaces and proot is missing
(see bugs #829134 and #698347, respectively).
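For illustration, creating such a tarball as an unprivileged user might look
like this (a sketch; unshare mode assumes the user has entries in /etc/subuid
and /etc/subgid):

    # build a Debian unstable chroot tarball without superuser privileges
    mmdebstrap --mode=unshare --variant=apt unstable unstable-chroot.tar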
When creating a chroot tarball with debootstrap, the temporary chroot directory
cannot be on a filesystem that has been mounted with nodev. In unprivileged
@ -94,10 +93,7 @@ Limitations in comparison to debootstrap
----------------------------------------
Debootstrap supports creating a Debian chroot on non-Debian systems but
mmdebstrap requires apt and is thus limited to Debian and derivatives. This
means that mmdebstrap can never fully replace debootstrap and debootstrap will
continue to be relevant in situations where you want to create a Debian chroot
from a platform without apt and dpkg.
mmdebstrap requires apt and is thus limited to Debian and derivatives.
There is no `SCRIPT` argument.
@ -143,11 +139,7 @@ https://gitlab.mister-muffin.de/josch/mmdebstrap/issues
Contributors
============
- Johannes Schauer Marin Rodrigues (main author)
- Johannes Schauer (main author)
- Helmut Grohne
- Benjamin Drung
- Steve Dodd
- Josh Triplett
- Konstantin Demin
- Trent W. Buck
- Vagrant Cascadian

View file

@ -54,7 +54,7 @@ fi
# check if all required debootstrap tarballs exist
notfound=0
for dist in oldstable stable testing unstable; do
for dist in stable testing unstable; do
for variant in minbase buildd -; do
if [ ! -e "shared/cache/debian-$dist-$variant.tar" ]; then
echo "shared/cache/debian-$dist-$variant.tar does not exist" >&2
@ -120,7 +120,7 @@ if [ ! -e shared/hooks/eatmydata/customize.sh ] || [ hooks/eatmydata/customize.s
fi
fi
starttime=
total=217
total=193
skipped=0
runtests=0
i=1
@ -160,7 +160,7 @@ fi
: "${CMD:=perl -MDevel::Cover=-silent,-nogcov ./mmdebstrap}"
mirror="http://127.0.0.1/debian"
for dist in oldstable stable testing unstable; do
for dist in stable testing unstable; do
for variant in minbase buildd -; do
print_header "mode=$defaultmode,variant=$variant: check against debootstrap $dist"
cat << END > shared/test.sh
@ -168,14 +168,7 @@ for dist in oldstable stable testing unstable; do
set -eu
export LC_ALL=C.UTF-8
export SOURCE_DATE_EPOCH=$SOURCE_DATE_EPOCH
# we create the apt user ourselves or otherwise its uid/gid will differ
# compared to the one chosen in debootstrap because of different installation
# order in comparison to the systemd users
# https://bugs.debian.org/969631
$CMD --variant=$variant --mode=$defaultmode \
--essential-hook='if [ $variant = - ]; then echo _apt:*:100:65534::/nonexistent:/usr/sbin/nologin >> "\$1"/etc/passwd; fi' \
$dist /tmp/debian-$dist-mm.tar $mirror
$CMD --variant=$variant --mode=$defaultmode $dist /tmp/debian-$dist-mm.tar $mirror
mkdir /tmp/debian-$dist-mm
tar --xattrs --xattrs-include='*' -C /tmp/debian-$dist-mm -xf /tmp/debian-$dist-mm.tar
@ -272,7 +265,7 @@ if [ "$variant" = "-" ]; then
cap=\$(chroot /tmp/debian-$dist-debootstrap /sbin/getcap /bin/ping)
expected="/bin/ping cap_net_raw=ep"
if [ "$dist" = oldstable ]; then
if [ "$dist" = stable ]; then
expected="/bin/ping = cap_net_raw+ep"
fi
if [ "\$cap" != "\$expected" ]; then
@ -405,7 +398,7 @@ diff --no-dereference --recursive /tmp/debian-debootstrap /tmp/debian-mm
# check permissions, ownership, symlink targets, modification times using tar
# mtimes of directories created by mmdebstrap will differ, thus we equalize them first
for d in etc/apt/preferences.d/ etc/apt/sources.list.d/ etc/dpkg/dpkg.cfg.d/ var/log/apt/; do
for d in etc/apt/preferences.d/ etc/apt/sources.list.d/ etc/dpkg/dpkg.cfg.d/; do
touch --date="@$SOURCE_DATE_EPOCH" /tmp/debian-debootstrap/\$d /tmp/debian-mm/\$d
done
# debootstrap never ran apt -- fixing permissions
@ -526,63 +519,6 @@ else
runtests=$((runtests+1))
fi
print_header "mode=unshare,variant=apt: fail without /etc/subuid"
cat << END > shared/test.sh
#!/bin/sh
set -eu
export LC_ALL=C.UTF-8
if [ ! -e /mmdebstrap-testenv ]; then
echo "this test modifies the system and should only be run inside a container" >&2
exit 1
fi
adduser --gecos user --disabled-password user
sysctl -w kernel.unprivileged_userns_clone=1
rm /etc/subuid
ret=0
runuser -u user -- $CMD --mode=unshare --variant=apt $DEFAULT_DIST /tmp/debian-chroot $mirror || ret=\$?
if [ "\$ret" = 0 ]; then
echo expected failure but got exit \$ret >&2
exit 1
fi
rm -r /tmp/debian-chroot
END
if [ "$HAVE_QEMU" = "yes" ]; then
./run_qemu.sh
runtests=$((runtests+1))
else
echo "HAVE_QEMU != yes -- Skipping test..." >&2
skipped=$((skipped+1))
fi
print_header "mode=unshare,variant=apt: fail without username in /etc/subuid"
cat << END > shared/test.sh
#!/bin/sh
set -eu
export LC_ALL=C.UTF-8
if [ ! -e /mmdebstrap-testenv ]; then
echo "this test modifies the system and should only be run inside a container" >&2
exit 1
fi
adduser --gecos user --disabled-password user
sysctl -w kernel.unprivileged_userns_clone=1
awk -F: '\$1!="user"' /etc/subuid > /etc/subuid.tmp
mv /etc/subuid.tmp /etc/subuid
ret=0
runuser -u user -- $CMD --mode=unshare --variant=apt $DEFAULT_DIST /tmp/debian-chroot $mirror || ret=\$?
if [ "\$ret" = 0 ]; then
echo expected failure but got exit \$ret >&2
exit 1
fi
rm -r /tmp/debian-chroot
END
if [ "$HAVE_QEMU" = "yes" ]; then
./run_qemu.sh
runtests=$((runtests+1))
else
echo "HAVE_QEMU != yes -- Skipping test..." >&2
skipped=$((skipped+1))
fi
# Before running unshare mode as root, we run "unshare --mount" but that fails
# if mmdebstrap itself is executed from within a chroot:
# unshare: cannot change root filesystem propagation: Invalid argument
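# A minimal sketch of that probe (the same check mmdebstrap performs before
# enabling mounting); if it fails, mounting is not possible here:
#   if ! unshare --mount --propagation unchanged -- true; then
#       echo "cannot unshare the mount namespace in this environment" >&2
#   fi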
@ -655,18 +591,19 @@ else
runtests=$((runtests+1))
fi
print_header "mode=unshare,variant=apt: root without cap_sys_admin"
print_header "mode=root,variant=apt: fail with root without cap_sys_admin"
cat << END > shared/test.sh
#!/bin/sh
set -eu
export LC_ALL=C.UTF-8
[ "\$(whoami)" = "root" ]
ret=0
capsh --drop=cap_sys_admin -- -c 'exec "\$@"' exec \
$CMD --mode=root --variant=apt \
--customize-hook='chroot "\$1" sh -c "test ! -e /proc/self/fd"' \
$DEFAULT_DIST /tmp/debian-chroot.tar $mirror
tar -tf /tmp/debian-chroot.tar | sort | diff -u tar1.txt -
rm /tmp/debian-chroot.tar
$CMD --mode=root --variant=apt $DEFAULT_DIST /tmp/debian-chroot $mirror || ret=\$?
if [ "\$ret" = 0 ]; then
echo expected failure but got exit \$ret >&2
exit 1
fi
[ ! -e /tmp/debian-chroot ]
END
if [ "$CONTAINER" = "lxc" ]; then
# see https://stackoverflow.com/questions/65748254/
@ -680,19 +617,45 @@ else
runtests=$((runtests+1))
fi
print_header "mode=root,variant=apt: mount is missing"
print_header "mode=root,variant=apt: fail without mounted /proc"
cat << END > shared/test.sh
#!/bin/sh
set -eu
export LC_ALL=C.UTF-8
if [ ! -e /mmdebstrap-testenv ]; then
echo "this test modifies the system and should only be run inside a container" >&2
# success with /proc mounted
$CMD --mode=root --variant=apt \
--customize-hook='chroot "\$1" bash -c "test \\"\\\$(cat <(echo foobar))\\" = foobar"' \
$DEFAULT_DIST /dev/null $mirror
# failure without /proc mounted (using --skip=check/canmount)
ret=0
$CMD --mode=root --variant=apt \
--customize-hook='chroot "\$1" bash -c "test \\"\\\$(cat <(echo foobar))\\" = foobar"' \
--skip=check/canmount $DEFAULT_DIST /tmp/debian-chroot $mirror || ret=\$?
if [ "\$ret" = 0 ]; then
echo expected failure but got exit \$ret >&2
exit 1
fi
for p in /bin /usr/bin /sbin /usr/sbin; do
rm -f "\$p/mount"
done
$CMD --mode=root --variant=apt $DEFAULT_DIST /tmp/debian-chroot.tar $mirror
rm -r /tmp/debian-chroot
END
if [ "$HAVE_QEMU" = "yes" ]; then
./run_qemu.sh
runtests=$((runtests+1))
else
./run_null.sh SUDO
runtests=$((runtests+1))
fi
print_header "mode=unshare,variant=apt: root without cap_sys_admin but --skip=check/canmount"
cat << END > shared/test.sh
#!/bin/sh
set -eu
export LC_ALL=C.UTF-8
[ "\$(whoami)" = "root" ]
capsh --drop=cap_sys_admin -- -c 'exec "\$@"' exec \
$CMD --mode=root --variant=apt \
--customize-hook='chroot "\$1" sh -c "test ! -e /proc/self/fd"' \
--skip=check/canmount $DEFAULT_DIST /tmp/debian-chroot.tar $mirror
tar -tf /tmp/debian-chroot.tar | sort | diff -u tar1.txt -
rm /tmp/debian-chroot.tar
END
@ -700,8 +663,8 @@ if [ "$HAVE_QEMU" = "yes" ]; then
./run_qemu.sh
runtests=$((runtests+1))
else
echo "HAVE_QEMU != yes -- Skipping test..." >&2
skipped=$((skipped+1))
./run_null.sh SUDO
runtests=$((runtests+1))
fi
for variant in essential apt minbase buildd important standard; do
@ -715,18 +678,18 @@ for variant in essential apt minbase buildd important standard; do
skipped=$((skipped+1))
continue
fi
if [ "$variant" = "important" ] && [ "$DEFAULT_DIST" = "oldstable" ]; then
echo "skipping test on oldstable because /var/lib/systemd/catalog/database differs" >&2
if [ "$variant" = "important" ] && [ "$DEFAULT_DIST" = "stable" ]; then
echo "skipping test on stable because /var/lib/systemd/catalog/database differs" >&2
skipped=$((skipped+1))
continue
fi
if [ "$format" = "squashfs" ] && [ "$DEFAULT_DIST" = "oldstable" ]; then
echo "skipping test on oldstable because squashfs-tools-ng is not available" >&2
if [ "$format" = "squashfs" ] && [ "$DEFAULT_DIST" = "stable" ]; then
echo "skipping test on stable because squashfs-tools-ng is not available" >&2
skipped=$((skipped+1))
continue
fi
if [ "$format" = "ext2" ] && [ "$DEFAULT_DIST" = "oldstable" ]; then
echo "skipping test on oldstable because genext2fs does not support SOURCE_DATE_EPOCH" >&2
if [ "$format" = "ext2" ] && [ "$DEFAULT_DIST" = "stable" ]; then
echo "skipping test on stable because genext2fs does not support SOURCE_DATE_EPOCH" >&2
skipped=$((skipped+1))
continue
fi
@ -827,8 +790,8 @@ runuser -u user -- $CMD --unshare-helper /usr/sbin/chroot /tmp/debian-chroot get
rm /tmp/debian-chroot.tar /tmp/debian-chroot-shifted.tar /tmp/debian-chroot.txt /tmp/debian-chroot-shiftedback.tar /tmp/expected
rm -r /tmp/debian-chroot
END
if [ "$DEFAULT_DIST" = "oldstable" ]; then
echo "the python3 tarfile module in oldstable does not preserve xattrs -- Skipping test..." >&2
if [ "$DEFAULT_DIST" = "stable" ]; then
echo "the python3 tarfile module in stable does not preserve xattrs -- Skipping test..." >&2
skipped=$((skipped+1))
elif [ "$HAVE_QEMU" = "yes" ]; then
./run_qemu.sh
@ -1152,8 +1115,8 @@ sqfs2tar --no-skip --root-becomes . /tmp/debian-chroot.squashfs | tar -t \
| sort | diff -u /tmp/tar1noslash.txt -
rm /tmp/debian-chroot.squashfs /tmp/tar1noslash.txt
END
if [ "$DEFAULT_DIST" = "oldstable" ]; then
echo "skipping test on oldstable because squashfs-tools-ng is not available" >&2
if [ "$DEFAULT_DIST" = "stable" ]; then
echo "skipping test on stable because squashfs-tools-ng is not available" >&2
skipped=$((skipped+1))
elif [ "$HAVE_QEMU" = "yes" ]; then
./run_qemu.sh
@ -1168,8 +1131,8 @@ fi
for mode in root unshare fakechroot proot; do
print_header "mode=$mode,variant=apt: test ext2 image"
if [ "$DEFAULT_DIST" = "oldstable" ]; then
echo "skipping test on oldstable because genext2fs does not support SOURCE_DATE_EPOCH" >&2
if [ "$DEFAULT_DIST" = "stable" ]; then
echo "skipping test on stable because genext2fs does not support SOURCE_DATE_EPOCH" >&2
skipped=$((skipped+1))
continue
fi
@ -1381,7 +1344,7 @@ $CMD --mode=root --variant=apt stable /tmp/debian-chroot
cat << SOURCES | cmp /tmp/debian-chroot/etc/apt/sources.list
deb http://deb.debian.org/debian stable main
deb http://deb.debian.org/debian stable-updates main
deb http://security.debian.org/debian-security stable-security main
deb http://security.debian.org/debian-security stable/updates main
SOURCES
rm -r /tmp/debian-chroot
END
@ -2923,11 +2886,8 @@ cat << END > shared/test.sh
#!/bin/sh
set -eu
export LC_ALL=C.UTF-8
$CMD --mode=$defaultmode --variant=essential --include=apt \
--essential-hook='apt-get -o Dir="\$1" -oDir::Etc::TrustedParts=/etc/apt/trusted.gpg.d -oAcquire::Languages=none update' \
--essential-hook='apt-get --yes install --no-install-recommends -o Dir="\$1" -oDPkg::Chroot-Directory="\$1" apt' \
$DEFAULT_DIST /tmp/debian-chroot.tar $mirror
tar -tf /tmp/debian-chroot.tar | sort | grep -v ./var/lib/apt/extended_states | diff -u tar1.txt -
$CMD --mode=$defaultmode --variant=essential --include=apt --setup-hook="apt-get update" --setup-hook="apt-get --yes -oApt::Get::Download-Only=true install apt" $DEFAULT_DIST /tmp/debian-chroot.tar $mirror
tar -tf /tmp/debian-chroot.tar | sort | diff -u tar1.txt -
rm /tmp/debian-chroot.tar
END
if [ "$HAVE_QEMU" = "yes" ]; then
@ -2973,8 +2933,8 @@ for variant in extract custom essential apt minbase buildd important standard; d
skipped=$((skipped+1))
continue
fi
if [ "$variant" = "important" ] && [ "$DEFAULT_DIST" = "oldstable" ]; then
echo "skipping test on oldstable because /var/lib/systemd/catalog/database differs" >&2
if [ "$variant" = "important" ] && [ "$DEFAULT_DIST" = "stable" ]; then
echo "skipping test on stable because /var/lib/systemd/catalog/database differs" >&2
skipped=$((skipped+1))
continue
fi
@ -3070,9 +3030,7 @@ fi
# test all --dry-run variants
# we are testing all variants here because with 0.7.5 we had a bug:
# mmdebstrap sid /dev/null --simulate ==> E: cannot read /var/cache/apt/archives/
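# For reference, such a dry run only simulates package resolution and writes
# nothing, e.g. (a sketch):
#   mmdebstrap --simulate --variant=apt unstable /dev/null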
for variant in extract custom essential apt minbase buildd important standard; do
for variant in extract custom essential apt; do
for mode in root unshare fakechroot proot chrootless; do
print_header "mode=$mode,variant=$variant: create tarball --dry-run"
if [ "$mode" = "unshare" ] && [ "$HAVE_UNSHARE" != "yes" ]; then
@ -3353,6 +3311,8 @@ fi
prefix=
[ "\$(id -u)" -eq 0 ] && prefix="runuser -u user --"
\$prefix $CMD --mode=chrootless --variant=custom --include=doc-debian $DEFAULT_DIST /tmp/debian-chroot $mirror
# preserve output with permissions and timestamps for later test
chmod 700 /tmp/debian-chroot
tar -C /tmp/debian-chroot --owner=0 --group=0 --numeric-owner --sort=name --clamp-mtime --mtime=$(date --utc --date=@$SOURCE_DATE_EPOCH --iso-8601=seconds) -cf /tmp/debian-chroot.tar .
tar tvf /tmp/debian-chroot.tar > doc-debian.tar.list
rm /tmp/debian-chroot.tar
@ -3370,6 +3330,7 @@ rm /tmp/debian-chroot/var/cache/apt/archives/lock
rm /tmp/debian-chroot/var/lib/dpkg/lock
rm /tmp/debian-chroot/var/lib/dpkg/lock-frontend
rm /tmp/debian-chroot/var/lib/apt/lists/lock
rm /tmp/debian-chroot/var/lib/apt/extended_states
if [ "$mode" != "chrootless" ] || dpkg --compare-versions "\$(dpkg --robot --version)" lt 1.20.0; then
rm /tmp/debian-chroot/var/lib/dpkg/available
rm /tmp/debian-chroot/var/lib/dpkg/cmethopt
@ -3443,8 +3404,8 @@ prefix=
[ "\$(id -u)" -eq 0 ] && prefix="runuser -u user --"
\$prefix $CMD --mode=chrootless --variant=custom --include=bsdutils,coreutils,debianutils,diffutils,dpkg,findutils,grep,gzip,hostname,init-system-helpers,ncurses-base,ncurses-bin,perl-base,sed,sysvinit-utils,tar $DEFAULT_DIST /dev/null $mirror
END
if [ "$DEFAULT_DIST" = "oldstable" ]; then
echo "chrootless doesn't work in oldstable -- Skipping test..." >&2
if [ "$DEFAULT_DIST" = "stable" ]; then
echo "chrootless doesn't work in stable -- Skipping test..." >&2
skipped=$((skipped+1))
elif [ "$HAVE_QEMU" = "yes" ]; then
./run_qemu.sh
@ -3496,9 +3457,10 @@ if [ "\$(id -u)" -eq 0 ] && ! id -u user > /dev/null 2>&1; then
fi
prefix=
[ "\$(id -u)" -eq 0 ] && prefix="runuser -u user --"
\$prefix $CMD --mode=chrootless --skip=cleanup/tmp --variant=custom --include=doc-debian --setup-hook='touch "\$1/tmp/setup"' --customize-hook='touch "\$1/tmp/customize"' $DEFAULT_DIST /tmp/debian-chroot $mirror
rm /tmp/debian-chroot/tmp/setup
rm /tmp/debian-chroot/tmp/customize
\$prefix $CMD --mode=chrootless --variant=custom --include=doc-debian --setup-hook='touch "\$1/setup"' --customize-hook='touch "\$1/customize"' $DEFAULT_DIST /tmp/debian-chroot $mirror
rm /tmp/debian-chroot/setup
rm /tmp/debian-chroot/customize
chmod 700 /tmp/debian-chroot
tar -C /tmp/debian-chroot --owner=0 --group=0 --numeric-owner --sort=name --clamp-mtime --mtime=$(date --utc --date=@$SOURCE_DATE_EPOCH --iso-8601=seconds) -cf /tmp/debian-chroot.tar .
tar tvf /tmp/debian-chroot.tar | grep -v ' ./dev' | diff -u doc-debian.tar.list -
rm /tmp/debian-chroot.tar
@ -3516,6 +3478,7 @@ rm /tmp/debian-chroot/var/cache/apt/archives/lock
rm /tmp/debian-chroot/var/lib/dpkg/lock
rm /tmp/debian-chroot/var/lib/dpkg/lock-frontend
rm /tmp/debian-chroot/var/lib/apt/lists/lock
rm /tmp/debian-chroot/var/lib/apt/extended_states
if dpkg --compare-versions "\$(dpkg --robot --version)" lt 1.20.0; then
rm /tmp/debian-chroot/var/lib/dpkg/available
rm /tmp/debian-chroot/var/lib/dpkg/cmethopt
@ -3578,6 +3541,7 @@ rm /tmp/debian-chroot/var/cache/apt/archives/lock
rm /tmp/debian-chroot/var/lib/dpkg/lock
rm /tmp/debian-chroot/var/lib/dpkg/lock-frontend
rm /tmp/debian-chroot/var/lib/apt/lists/lock
rm /tmp/debian-chroot/var/lib/apt/extended_states
if dpkg --compare-versions "\$(dpkg --robot --version)" lt 1.20.0; then
rm /tmp/debian-chroot/var/lib/dpkg/available
rm /tmp/debian-chroot/var/lib/dpkg/cmethopt

View file

@ -20,7 +20,7 @@ deletecache() {
return 1
fi
# be very careful with removing the old directory
for dist in oldstable stable testing unstable; do
for dist in stable testing unstable; do
for variant in minbase buildd -; do
if [ -e "$dir/debian-$dist-$variant.tar" ]; then
rm "$dir/debian-$dist-$variant.tar"
@ -33,30 +33,18 @@ deletecache() {
else
echo "does not exist: $dir/debian/dists/$dist" >&2
fi
case "$dist" in oldstable|stable)
if [ -e "$dir/debian/dists/$dist-updates" ]; then
rm --one-file-system --recursive "$dir/debian/dists/$dist-updates"
if [ "$dist" = "stable" ]; then
if [ -e "$dir/debian/dists/stable-updates" ]; then
rm --one-file-system --recursive "$dir/debian/dists/stable-updates"
else
echo "does not exist: $dir/debian/dists/$dist-updates" >&2
echo "does not exist: $dir/debian/dists/stable-updates" >&2
fi
;;
esac
case "$dist" in
oldstable)
if [ -e "$dir/debian-security/dists/$dist/updates" ]; then
rm --one-file-system --recursive "$dir/debian-security/dists/$dist/updates"
else
echo "does not exist: $dir/debian-security/dists/$dist/updates" >&2
fi
;;
stable)
if [ -e "$dir/debian-security/dists/$dist-security" ]; then
rm --one-file-system --recursive "$dir/debian-security/dists/$dist-security"
else
echo "does not exist: $dir/debian-security/dists/$dist-security" >&2
fi
;;
esac
if [ -e "$dir/debian-security/dists/stable/updates" ]; then
rm --one-file-system --recursive "$dir/debian-security/dists/stable/updates"
else
echo "does not exist: $dir/debian-security/dists/stable/updates" >&2
fi
fi
done
if [ -e $dir/debian-*.qcow ]; then
rm --one-file-system "$dir"/debian-*.qcow
@ -228,6 +216,7 @@ END
> "$rootdir/var/lib/dpkg/status"
APT_CONFIG="$rootdir/etc/apt/apt.conf" apt-get update
# before downloading packages and before replacing the old Packages
@ -236,18 +225,10 @@ END
# packages that we already have
{
get_oldaptnames "$oldmirrordir" "dists/$dist/main/binary-$nativearch/Packages.gz"
case "$dist" in oldstable|stable)
get_oldaptnames "$oldmirrordir" "dists/$dist-updates/main/binary-$nativearch/Packages.gz"
;;
esac
case "$dist" in
oldstable)
get_oldaptnames "$oldcachedir/debian-security" "dists/$dist/updates/main/binary-$nativearch/Packages.gz"
;;
stable)
get_oldaptnames "$oldcachedir/debian-security" "dists/$dist-security/main/binary-$nativearch/Packages.gz"
;;
esac
if grep --quiet security.debian.org "$rootdir/etc/apt/sources.list"; then
get_oldaptnames "$oldmirrordir" "dists/stable-updates/main/binary-$nativearch/Packages.gz"
get_oldaptnames "$oldcachedir/debian-security" "dists/stable/updates/main/binary-$nativearch/Packages.gz"
fi
} | sort -u > "$rootdir/oldaptnames"
pkgs=$(APT_CONFIG="$rootdir/etc/apt/apt.conf" apt-get indextargets \
@ -267,27 +248,16 @@ END
curl --location "$mirror/dists/$dist/Release" > "$newmirrordir/dists/$dist/Release"
curl --location "$mirror/dists/$dist/Release.gpg" > "$newmirrordir/dists/$dist/Release.gpg"
curl --location "$mirror/dists/$dist/main/binary-$nativearch/Packages.gz" > "$newmirrordir/dists/$dist/main/binary-$nativearch/Packages.gz"
case "$dist" in oldstable|stable)
mkdir -p "$newmirrordir/dists/$dist-updates/main/binary-$nativearch/"
curl --location "$mirror/dists/$dist-updates/Release" > "$newmirrordir/dists/$dist-updates/Release"
curl --location "$mirror/dists/$dist-updates/Release.gpg" > "$newmirrordir/dists/$dist-updates/Release.gpg"
curl --location "$mirror/dists/$dist-updates/main/binary-$nativearch/Packages.gz" > "$newmirrordir/dists/$dist-updates/main/binary-$nativearch/Packages.gz"
;;
esac
case "$dist" in
oldstable)
mkdir -p "$newcachedir/debian-security/dists/$dist/updates/main/binary-$nativearch/"
curl --location "$security_mirror/dists/$dist/updates/Release" > "$newcachedir/debian-security/dists/$dist/updates/Release"
curl --location "$security_mirror/dists/$dist/updates/Release.gpg" > "$newcachedir/debian-security/dists/$dist/updates/Release.gpg"
curl --location "$security_mirror/dists/$dist/updates/main/binary-$nativearch/Packages.gz" > "$newcachedir/debian-security/dists/$dist/updates/main/binary-$nativearch/Packages.gz"
;;
stable)
mkdir -p "$newcachedir/debian-security/dists/$dist-security/main/binary-$nativearch/"
curl --location "$security_mirror/dists/$dist-security/Release" > "$newcachedir/debian-security/dists/$dist-security/Release"
curl --location "$security_mirror/dists/$dist-security/Release.gpg" > "$newcachedir/debian-security/dists/$dist-security/Release.gpg"
curl --location "$security_mirror/dists/$dist-security/main/binary-$nativearch/Packages.gz" > "$newcachedir/debian-security/dists/$dist-security/main/binary-$nativearch/Packages.gz"
;;
esac
if grep --quiet security.debian.org "$rootdir/etc/apt/sources.list"; then
mkdir -p "$newmirrordir/dists/stable-updates/main/binary-$nativearch/"
curl --location "$mirror/dists/stable-updates/Release" > "$newmirrordir/dists/stable-updates/Release"
curl --location "$mirror/dists/stable-updates/Release.gpg" > "$newmirrordir/dists/stable-updates/Release.gpg"
curl --location "$mirror/dists/stable-updates/main/binary-$nativearch/Packages.gz" > "$newmirrordir/dists/stable-updates/main/binary-$nativearch/Packages.gz"
mkdir -p "$newcachedir/debian-security/dists/stable/updates/main/binary-$nativearch/"
curl --location "$security_mirror/dists/stable/updates/Release" > "$newcachedir/debian-security/dists/stable/updates/Release"
curl --location "$security_mirror/dists/stable/updates/Release.gpg" > "$newcachedir/debian-security/dists/stable/updates/Release.gpg"
curl --location "$security_mirror/dists/stable/updates/main/binary-$nativearch/Packages.gz" > "$newcachedir/debian-security/dists/stable/updates/main/binary-$nativearch/Packages.gz"
fi
# the deb files downloaded by apt must be moved to their right locations in the
# pool directory
@ -299,18 +269,10 @@ END
# This way, it doesn't matter where the mirror ends up storing the package.
{
get_newaptnames "$newmirrordir" "dists/$dist/main/binary-$nativearch/Packages.gz";
case "$dist" in oldstable|stable)
get_newaptnames "$newmirrordir" "dists/$dist-updates/main/binary-$nativearch/Packages.gz"
;;
esac
case "$dist" in
oldstable)
get_newaptnames "$newcachedir/debian-security" "dists/$dist/updates/main/binary-$nativearch/Packages.gz"
;;
stable)
get_newaptnames "$newcachedir/debian-security" "dists/$dist-security/main/binary-$nativearch/Packages.gz"
;;
esac
if grep --quiet security.debian.org "$rootdir/etc/apt/sources.list"; then
get_newaptnames "$newmirrordir" "dists/stable-updates/main/binary-$nativearch/Packages.gz"
get_newaptnames "$newcachedir/debian-security" "dists/stable/updates/main/binary-$nativearch/Packages.gz"
fi
} | sort -u > "$rootdir/newaptnames"
rm "$rootdir/var/cache/apt/archives/lock"
@ -401,33 +363,22 @@ else
fi
for nativearch in $arches; do
for dist in oldstable stable testing unstable; do
for dist in stable testing unstable; do
# non-host architectures are only downloaded for $DEFAULT_DIST
if [ $nativearch != $HOSTARCH ] && [ $DEFAULT_DIST != $dist ]; then
continue
fi
# we need a first pass without updates and security patches
# because otherwise, old package versions needed by
# debootstrap will not get included
echo "deb [arch=$nativearch] $mirror $dist $components" | update_cache "$dist" "$nativearch"
# we need to include the base mirror again or otherwise
# packages like build-essential will be missing
case "$dist" in
oldstable)
cat << END | update_cache "$dist" "$nativearch"
cat << END | update_cache "$dist" "$nativearch"
deb [arch=$nativearch] $mirror $dist $components
deb [arch=$nativearch] $mirror $dist-updates main
deb [arch=$nativearch] $security_mirror $dist/updates main
END
;;
stable)
cat << END | update_cache "$dist" "$nativearch"
if [ "$dist" = "stable" ]; then
# starting with bullseye, stable/updates becomes stable-security
cat << END | update_cache "$dist" "$nativearch"
deb [arch=$nativearch] $mirror $dist $components
deb [arch=$nativearch] $mirror $dist-updates main
deb [arch=$nativearch] $security_mirror $dist-security main
deb [arch=$nativearch] $mirror stable-updates main
deb [arch=$nativearch] $security_mirror stable/updates main
END
;;
esac
fi
done
done
@ -476,7 +427,7 @@ if [ "$HAVE_QEMU" = "yes" ]; then
trap "cleanuptmpdir; cleanup_newcachedir" EXIT INT TERM
pkgs=perl-doc,systemd-sysv,perl,arch-test,fakechroot,fakeroot,mount,uidmap,qemu-user-static,binfmt-support,qemu-user,dpkg-dev,mini-httpd,libdevel-cover-perl,libtemplate-perl,debootstrap,procps,apt-cudf,aspcud,python3,libcap2-bin,gpg,debootstrap,distro-info-data,iproute2,ubuntu-keyring,apt-utils
if [ "$DEFAULT_DIST" != "oldstable" ]; then
if [ "$DEFAULT_DIST" != "stable" ]; then
pkgs="$pkgs,squashfs-tools-ng,genext2fs"
fi
if [ "$HAVE_PROOT" = "yes" ]; then
@ -628,7 +579,7 @@ END
fi
mirror="http://127.0.0.1/debian"
for dist in oldstable stable testing unstable; do
for dist in stable testing unstable; do
for variant in minbase buildd -; do
echo "running debootstrap --no-merged-usr --variant=$variant $dist \${TEMPDIR} $mirror"
cat << END > shared/test.sh

View file

@ -58,14 +58,9 @@ use version;
*CLONE_NEWNET = \0x40000000; # net
*_LINUX_CAPABILITY_VERSION_3 = \0x20080522;
*CAP_SYS_ADMIN = \21;
*PR_CAPBSET_READ = \23;
our (
$CLONE_NEWNS, $CLONE_NEWUTS,
$CLONE_NEWIPC, $CLONE_NEWUSER,
$CLONE_NEWPID, $CLONE_NEWNET,
$_LINUX_CAPABILITY_VERSION_3, $CAP_SYS_ADMIN,
$PR_CAPBSET_READ
);
our ($CLONE_NEWNS, $CLONE_NEWUTS, $CLONE_NEWIPC,
$CLONE_NEWUSER, $CLONE_NEWPID, $CLONE_NEWNET,
$_LINUX_CAPABILITY_VERSION_3, $CAP_SYS_ADMIN);
#<<<
# type codes:
@ -330,7 +325,7 @@ sub test_unshare_userns {
}
sub read_subuid_subgid() {
my $username = getpwuid $REAL_USER_ID;
my $username = getpwuid $<;
my ($subid, $num_subid, $fh, $n);
my @result = ();
@ -350,14 +345,6 @@ sub read_subuid_subgid() {
last if ($n eq $username);
}
close $fh;
if (!length $subid) {
warning "/etc/subuid is empty";
return;
}
if ($n ne $username) {
warning "no entry in /etc/subuid for $username";
return;
}
push @result, ["u", 0, $subid, $num_subid];
if (scalar(@result) < 1) {
@ -369,40 +356,21 @@ sub read_subuid_subgid() {
return;
}
my $groupname = getgrgid $REAL_GROUP_ID;
if (!-e "/etc/subgid") {
warning "/etc/subgid doesn't exist";
return;
}
if (!-r "/etc/subgid") {
warning "/etc/subgid is not readable";
return;
}
open $fh, "<", "/etc/subgid"
or error "cannot open /etc/subgid for reading: $!";
while (my $line = <$fh>) {
($n, $subid, $num_subid) = split(/:/, $line, 3);
last if ($n eq $groupname);
last if ($n eq $username);
}
close $fh;
if (!length $subid) {
warning "/etc/subgid is empty";
return;
}
if ($n ne $groupname) {
warning "no entry in /etc/subgid for $groupname";
return;
}
push @result, ["g", 0, $subid, $num_subid];
if (scalar(@result) < 2) {
warning "/etc/subgid does not contain an entry for $groupname";
warning "/etc/subgid does not contain an entry for $username";
return;
}
if (scalar(@result) > 2) {
warning "/etc/subgid contains multiple entries for $groupname";
warning "/etc/subgid contains multiple entries for $username";
return;
}
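For context, read_subuid_subgid expects one colon-separated entry per name in
those files; a typical pair of entries looks like this (a sketch, the id range
is only an example):

    $ grep "^$(whoami):" /etc/subuid /etc/subgid
    /etc/subuid:user:100000:65536
    /etc/subgid:user:100000:65536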
@ -1014,8 +982,7 @@ sub run_chroot {
}
} elsif ($type == 3 or $type == 4) {
# character/block special
if ((any { $_ eq $options->{mode} } ('root', 'unshare'))
&& !$options->{canmount}) {
if ($options->{mode} eq 'root' && !$options->{canmount}) {
warning "skipping bind-mounting ./dev/$fname";
} elsif (!$options->{havemknod}) {
if (!-d "$options->{root}/dev") {
@ -1057,7 +1024,7 @@ sub run_chroot {
or error "mount ./dev/$fname failed: $?";
}
} elsif ($type == 5
&& (any { $_ eq $options->{mode} } ('root', 'unshare'))
&& $options->{mode} eq 'root'
&& !$options->{canmount}) {
warning "skipping bind-mounting ./dev/$fname";
} elsif ($type == 5) { # directory
@ -1141,21 +1108,11 @@ sub run_chroot {
# We can only mount /proc and /sys after extracting the essential
# set because if we mount it before, then base-files will not be able
# to extract those
if ((any { $_ eq $options->{mode} } ('root', 'unshare'))
&& !$options->{canmount}) {
if ($options->{mode} eq 'root' && !$options->{canmount}) {
warning "skipping mount sysfs";
} elsif ((any { $_ eq $options->{mode} } ('root', 'unshare'))
&& !-d "$options->{root}/sys") {
} elsif ($options->{mode} eq 'root' && !-d "$options->{root}/sys") {
warning("skipping mounting of sysfs because the"
. " /sys directory is missing in the target");
} elsif ((any { $_ eq $options->{mode} } ('root', 'unshare'))
&& !-e "/sys") {
warning("skipping bind-mounting /sys because"
. " /sys does not exist on the outside");
} elsif ((any { $_ eq $options->{mode} } ('root', 'unshare'))
&& !-d "/sys") {
warning("skipping bind-mounting /sys because"
. " /sys on the outside is not a directory");
} elsif ($options->{mode} eq 'root') {
push @cleanup_tasks, sub {
0 == system('umount', "$options->{root}/sys")
@ -1166,6 +1123,15 @@ sub run_chroot {
'-o', 'ro,nosuid,nodev,noexec', 'sys',
"$options->{root}/sys"
) or error "mount /sys failed: $?";
} elsif ($options->{mode} eq 'unshare' && !-d "$options->{root}/sys") {
warning("skipping bind-mounting /sys because the"
. " /sys directory is missing in the target");
} elsif ($options->{mode} eq 'unshare' && !-e "/sys") {
warning("skipping bind-mounting /sys because"
. " /sys does not exist on the outside");
} elsif ($options->{mode} eq 'unshare' && !-d "/sys") {
warning("skipping bind-mounting /sys because"
. " /sys on the outside is not a directory");
} elsif ($options->{mode} eq 'unshare') {
# naturally we have to clean up after ourselves in sudo mode where
# we do a real mount. But we also need to unmount in unshare mode
@ -1196,21 +1162,11 @@ sub run_chroot {
} else {
error "unknown mode: $options->{mode}";
}
if ((any { $_ eq $options->{mode} } ('root', 'unshare'))
&& !$options->{canmount}) {
if ($options->{mode} eq 'root' && !$options->{canmount}) {
warning "skipping mount proc";
} elsif ((any { $_ eq $options->{mode} } ('root', 'unshare'))
&& !-d "$options->{root}/proc") {
} elsif ($options->{mode} eq 'root' && !-d "$options->{root}/proc") {
warning("skipping mounting of proc because the"
. " /proc directory is missing in the target");
} elsif ((any { $_ eq $options->{mode} } ('root', 'unshare'))
&& !-e "/proc") {
warning("skipping bind-mounting /proc because"
. " /proc does not exist on the outside");
} elsif ((any { $_ eq $options->{mode} } ('root', 'unshare'))
&& !-d "/proc") {
warning("skipping bind-mounting /proc because"
. " /proc on the outside is not a directory");
} elsif ($options->{mode} eq 'root') {
push @cleanup_tasks, sub {
# some maintainer scripts mount additional stuff into /proc
@ -1230,6 +1186,16 @@ sub run_chroot {
0 == system('mount', '-t', 'proc', '-o', 'ro', 'proc',
"$options->{root}/proc")
or error "mount /proc failed: $?";
} elsif ($options->{mode} eq 'unshare' && !-d "$options->{root}/proc")
{
warning("skipping bind-mounting /proc because the"
. " /proc directory is missing in the target");
} elsif ($options->{mode} eq 'unshare' && !-e "/proc") {
warning("skipping bind-mounting /proc because"
. " /proc does not exist on the outside");
} elsif ($options->{mode} eq 'unshare' && !-d "/proc") {
warning("skipping bind-mounting /proc because"
. " /proc on the outside is not a directory");
} elsif ($options->{mode} eq 'unshare') {
# naturally we have to clean up after ourselves in sudo mode where
# we do a real mount. But we also need to unmount in unshare mode
@ -1478,30 +1444,24 @@ sub setup {
warning "cannot read $options->{apttrustedparts}";
}
if (any { $_ eq 'setup' } @{ $options->{skip} }) {
info "skipping setup as requested";
} else {
run_setup($options);
}
run_setup($options);
run_hooks('setup', $options);
if (any { $_ eq 'update' } @{ $options->{skip} }) {
info "skipping update as requested";
} else {
run_update($options);
}
run_update($options);
(my $pkgs_to_install, my $essential_pkgs, my $cached_debs)
= run_download($options);
# in theory, we don't have to extract the packages in chrootless mode
# but we do it anyways because otherwise directory creation timestamps
# will differ compared to non-chrootless and we want to create bit-by-bit
# identical tar output
#
# FIXME: dpkg could be changed to produce the same results
run_extract($options, $essential_pkgs);
if ( $options->{mode} ne 'chrootless'
or $options->{variant} eq 'extract') {
# We have to extract the packages from @essential_pkgs either if we run
# in chrootless mode and extract variant or in any other mode. In
# other words, the only scenario in which the @essential_pkgs are not
# extracted are in chrootless mode in any other than the extract
# variant.
run_extract($options, $essential_pkgs);
}
run_hooks('extract', $options);
@ -1520,11 +1480,7 @@ sub setup {
run_hooks('customize', $options);
}
if (any { $_ eq 'cleanup' } @{ $options->{skip} }) {
info "skipping cleanup as requested";
} else {
run_cleanup($options);
}
run_cleanup($options);
return;
}
@ -1584,10 +1540,8 @@ sub run_setup() {
}
# if dpkg and apt operate from the outside we need some more
# directories because dpkg and apt might not even be installed inside
# the chroot. Thus, the following block is not strictly necessary in
# chrootless mode. We unconditionally add it anyways, so that the
# output with and without chrootless mode is equal.
{
# the chroot
if ($options->{mode} eq 'chrootless') {
push @directories, '/var/log/apt';
# since we do not know the dpkg version inside the chroot at this
# point, we can only omit it in chrootless mode
@ -1626,10 +1580,6 @@ sub run_setup() {
# This will affect calls to tempfile() as well as runs of "apt-get update"
# which will create temporary clearsigned.message.XXXXXX files to verify
# signatures.
#
# Setting TMPDIR to inside the chroot is also necessary for when packages
# are installed with apt from outside the chroot with
# DPkg::Chroot-Directory
{
## no critic (Variables::RequireLocalizedPunctuationVars)
$ENV{"TMPDIR"} = "$options->{root}/tmp";
@ -2092,18 +2042,6 @@ sub run_download() {
# - no simulation run is done, and
# - the variant is not extract or custom or the number to be
# installed packages not zero
#
# We could also unconditionally use the proxysolver and then "apt-get
# download" any missing packages but using the proxysolver requires
# /usr/lib/apt/solvers/apt from the apt-utils package and we want to avoid
# that dependency.
#
# In the future we want to replace downloading packages with "apt-get
# install --download-only" and installing them with dpkg by just installing
# the essential packages with apt from the outside with
# DPkg::Chroot-Directory. We are not doing that because then the preinst
# script of base-passwd will not be called early enough and packages will
# fail to install because they are missing /etc/passwd.
my @cached_debs = ();
my @dl_debs = ();
if (
@ -2204,54 +2142,12 @@ sub run_download() {
],
%result
});
} elsif ($options->{variant} eq 'essential') {
# 2021-06-07, #debian-apt on OFTC, times in UTC+2
# 17:27 < DonKult> (?essential includes 'apt' through)
# 17:30 < josch> DonKult: no, because pkgCacheGen::ForceEssential ",";
# 17:32 < DonKult> touché
my %result = ();
if ($options->{dryrun}) {
info "simulate downloading packages with apt...";
} else {
# if there are already packages in /var/cache/apt/archives/, we
# need to use our proxysolver to obtain the solution chosen by apt
if (scalar @cached_debs > 0) {
$result{EDSP_RES} = \@dl_debs;
}
info "downloading packages with apt...";
}
run_apt_progress({
ARGV => [
'apt-get',
'--yes',
'-oApt::Get::Download-Only=true',
$options->{dryrun} ? '-oAPT::Get::Simulate=true' : (),
'install',
'?narrow('
. (
defined($options->{suite})
? '?archive(' . $options->{suite} . '),'
: ''
)
. '?architecture('
. $options->{nativearch}
. '),?essential)'
],
%result
});
} elsif (
any { $_ eq $options->{variant} }
('standard', 'important', 'required', 'minbase', 'buildd')
any { $_ eq $options->{variant} } (
'essential', 'standard', 'important', 'required', 'buildd',
'minbase'
)
) {
# In the future, after bug https://bugs.debian.org/989558 is fixed, we
# want to use apt patterns to select the packages to install:
#
# ?narrow(?archive(unstable),?architecture(amd64),?priority(important))
#
# Once this is possible, we can append above statement to the apt-get
# install call in run_download() instead of assembling the package list
# here and passing it through all the way. Then this function only
# takes care of retrieving the essential packages.
my %ess_pkgs;
my %ess_pkgs_target;
my %pkgs_to_install_target = %pkgs_to_install;
@ -2566,7 +2462,6 @@ sub run_extract() {
}
# not using dpkg-deb --extract as that would replace the
# merged-usr symlinks with plain directories
# https://bugs.debian.org/989602
# not using dpkg --unpack because that would try running preinst
# maintainer scripts
my $pid1 = fork() // error "fork() failed: $!";
@ -2849,7 +2744,7 @@ sub run_essential() {
# FIXME: the dpkg config from the host is parsed before the command
# line arguments are parsed and might break this mode
# Example: if the host has --path-exclude set, then this will also
# affect the chroot. See #808203
# affect the chroot.
my @chrootless_opts = (
'-oDPkg::Options::=--force-not-root',
'-oDPkg::Options::=--force-script-chrootless',
@ -2869,23 +2764,10 @@ sub run_essential() {
$ENV{QEMU_LD_PREFIX} = $options->{root};
}
}
# we don't use apt because that will not run the base-passwd preinst
# early enough
#run_apt_progress({
# ARGV => ['apt-get', '--yes', @chrootless_opts, 'install'],
# PKGS => [map { "$options->{root}/$_" } @{$essential_pkgs}],
#});
run_dpkg_progress({
ARGV => [
'dpkg',
'--force-not-root',
'--force-script-chrootless',
"--root=$options->{root}",
"--log=$options->{root}/var/log/dpkg.log",
'--install',
'--force-depends'
],
PKGS => [map { "$options->{root}/$_" } @{$essential_pkgs}] });
run_apt_progress({
ARGV => ['apt-get', '--yes', @chrootless_opts, 'install'],
PKGS => [map { "$options->{root}/$_" } @{$essential_pkgs}],
});
} elsif (
any { $_ eq $options->{mode} }
('root', 'unshare', 'fakechroot', 'proot')
@ -2894,13 +2776,6 @@ sub run_essential() {
# we need --force-depends because dpkg does not take Pre-Depends
# into account and thus doesn't install them in the right order
# And the --predep-package option is broken: #539133
#
# We could use apt from outside the chroot using DPkg::Chroot-Directory
# but then the preinst script of base-passwd will not be called early
# enough and packages will fail to install because they are missing
# /etc/passwd. Also, with plain dpkg the essential variant can finish
# within 9 seconds. If we use apt instead, it becomes 12 seconds. We
# prefer speed here.
if ($options->{dryrun}) {
info "simulate installing essential packages...";
} else {
@ -2964,26 +2839,176 @@ sub run_install() {
) {
if ($options->{variant} ne 'custom'
and scalar @{$pkgs_to_install} > 0) {
# Advantage of running apt on the outside instead of inside the
# chroot:
# some packages have to be installed from the outside before
# anything can be installed from the inside.
#
# - we can build chroots without apt (for example from buildinfo
# files)
#
# - we do not need to install additional packages like
# apt-transport-* or ca-certificates inside the chroot
#
# - we do not need additional key material inside the chroot
#
# - we can make use of file:// and copy://
#
# The DPkg::Install::Recursive::force=true workaround can be
# dropped after this issue is fixed:
# https://salsa.debian.org/apt-team/apt/-/merge_requests/178
#
# We could also move the dpkg call to the outside and run dpkg with
# --root but this would only make sense in situations where there
# is no dpkg inside the chroot.
# we do not need to install any *-archive-keyring packages
# inside the chroot prior to installing the packages, because
# the keyring is only used when doing "apt-get update" and that
# was already done at the beginning using key material from the
# outside. Since the apt cache is already filled and we are not
# calling "apt-get update" again, the keyring can be installed
# later during installation. But: if it's not installed during
# installation, then we might end up with a fully installed
# system without keyrings that are valid for its sources.list.
my @pkgs_to_install_from_outside;
# install apt if necessary
if ($options->{variant} ne 'apt') {
push @pkgs_to_install_from_outside, 'apt';
}
# since apt will be run inside the chroot, make sure that
# apt-transport-https and ca-certificates gets installed first
# if any mirror is a https URI
open(my $pipe_apt, '-|', 'apt-get', 'indextargets',
'--format', '$(URI)', 'Created-By: Packages')
or error "cannot start apt-get indextargets: $!";
while (my $uri = <$pipe_apt>) {
if ($uri =~ /^https:\/\//) {
info "https mirror found -- adding apt-transport-https "
. "and ca-certificates";
# FIXME: support for https is part of apt >= 1.5
push @pkgs_to_install_from_outside, 'apt-transport-https';
push @pkgs_to_install_from_outside, 'ca-certificates';
last;
} elsif ($uri =~ /^tor(\+[a-z]+)*:\/\//) {
# tor URIs can be tor+http://, tor+https:// or even
# tor+mirror+file://
info "tor mirror found -- adding apt-transport-tor";
push @pkgs_to_install_from_outside, 'apt-transport-tor';
last;
}
}
close $pipe_apt;
$? == 0 or error "apt-get indextargets failed";
if (scalar @pkgs_to_install_from_outside > 0) {
my @cached_debs = ();
my @dl_debs = ();
# /var/cache/apt/archives/ might not be empty either because
# the user used hooks to populate it or because skip options
# like essential/unlink or check/empty were used.
{
my $apt_archives = "/var/cache/apt/archives/";
opendir my $dh, "$options->{root}/$apt_archives"
or error "cannot read $apt_archives";
while (my $deb = readdir $dh) {
if ($deb !~ /\.deb$/) {
next;
}
if (!-f "$options->{root}/$apt_archives/$deb") {
next;
}
push @cached_debs, $deb;
}
closedir $dh;
}
my %result = ();
if ($options->{dryrun}) {
info 'simulate downloading '
. (join ', ', @pkgs_to_install_from_outside) . "...";
} else {
if (scalar @cached_debs > 0) {
$result{EDSP_RES} = \@dl_debs;
}
info 'downloading '
. (join ', ', @pkgs_to_install_from_outside) . "...";
}
run_apt_progress({
ARGV => [
'apt-get',
'--yes',
'-oApt::Get::Download-Only=true',
$options->{dryrun}
? '-oAPT::Get::Simulate=true'
: (),
'install'
],
PKGS => [@pkgs_to_install_from_outside],
%result
});
if ($options->{dryrun}) {
info 'simulate installing '
. (join ', ', @pkgs_to_install_from_outside) . "...";
} else {
my @debs_to_install;
if (scalar @cached_debs > 0 && scalar @dl_debs > 0) {
my $archives = "/var/cache/apt/archives/";
my $prefix = "$options->{root}/$archives";
# for each package in @dl_debs, check if it's in
# /var/cache/apt/archives/ and add it to
# @debs_to_install
foreach my $p (@dl_debs) {
my ($pkg, $ver_epoch) = @{$p};
# apt appends the architecture at the end of the
# package name
($pkg, my $arch) = split ':', $pkg, 2;
# apt replaces the colon by its percent encoding
my $ver = $ver_epoch;
$ver =~ s/:/%3a/;
# the architecture returned by apt is the native
# architecture. Since we don't know whether the
# package is architecture independent or not, we
# first try with the native arch and then
# with "all" and only error out if neither exists.
if (-e "$prefix/${pkg}_${ver}_$arch.deb") {
push @debs_to_install,
"$archives/${pkg}_${ver}_$arch.deb";
} elsif (-e "$prefix/${pkg}_${ver}_all.deb") {
push @debs_to_install,
"$archives/${pkg}_${ver}_all.deb";
} else {
error( "cannot find package for "
. "$pkg:$arch (= $ver_epoch) "
. "in /var/cache/apt/archives/");
}
}
} else {
my $apt_archives = "/var/cache/apt/archives/";
opendir my $dh, "$options->{root}/$apt_archives"
or error "cannot read $apt_archives";
while (my $deb = readdir $dh) {
if ($deb !~ /\.deb$/) {
next;
}
$deb = "$apt_archives/$deb";
if (!-f "$options->{root}/$deb") {
next;
}
push @debs_to_install, $deb;
}
closedir $dh;
}
if (scalar @debs_to_install == 0) {
warning "nothing got downloaded -- maybe the packages"
. " were already installed?";
} else {
# we need --force-depends because dpkg does not take
# Pre-Depends into account and thus doesn't install
# them in the right order
info 'installing '
. (join ', ', @pkgs_to_install_from_outside) . "...";
run_dpkg_progress({
ARGV => [
@{$chrootcmd}, 'dpkg',
'--install', '--force-depends'
],
PKGS => \@debs_to_install,
});
foreach my $deb (@debs_to_install) {
# do not unlink those packages that were in
# /var/cache/apt/archive before the install phase
next
if any { "/var/cache/apt/archives/$_" eq $deb }
@cached_debs;
unlink "$options->{root}/$deb"
or error "cannot unlink $deb: $!";
}
}
}
}
if (!$options->{dryrun}) {
run_chroot(
sub {
@ -2991,19 +3016,8 @@ sub run_install() {
. " chroot...";
run_apt_progress({
ARGV => [
'apt-get',
'-o',
'Dir::Bin::dpkg=env',
'-o',
'DPkg::Options::=--unset=TMPDIR',
'-o',
'DPkg::Options::=dpkg',
'-o',
'DPkg::Install::Recursive::force=true',
'-o',
"DPkg::Chroot-Directory=$options->{root}",
'--yes',
'install'
@{$chrootcmd}, 'apt-get',
'--yes', 'install'
],
PKGS => $pkgs_to_install,
});
@ -3067,15 +3081,8 @@ sub run_cleanup() {
# apt since 1.6 creates the auxfiles directory. If apt inside the
# chroot is older than that, then it will not know how to clean it.
if (-e "$options->{root}/var/lib/apt/lists/auxfiles") {
remove_tree("$options->{root}/var/lib/apt/lists/auxfiles",
{ error => \my $err });
if (@$err) {
for my $diag (@$err) {
my ($file, $message) = %$diag;
if ($file eq '') { warning "general error: $message"; }
else { warning "problem unlinking $file: $message"; }
}
}
rmdir "$options->{root}/var/lib/apt/lists/auxfiles"
or die "cannot rmdir /var/lib/apt/lists/auxfiles: $!";
}
}
@ -3128,7 +3135,6 @@ sub run_cleanup() {
or error "cannot unlink /etc/machine-id: $!";
open my $fh, '>', "$options->{root}/etc/machine-id"
or error "failed to open(): $!";
print $fh "uninitialized";
close $fh;
}
}
@ -4157,8 +4163,6 @@ sub get_sourceslist_by_suite {
or error "cannot open $distro_info: $!";
my $i = 0;
my $matching_version;
my @releases;
my $today = POSIX::strftime "%Y-%m-%d", localtime;
while (my $line = <$fh>) {
chomp($line);
$i++;
@ -4181,11 +4185,6 @@ sub get_sourceslist_by_suite {
if ($i == 1) {
next;
}
if ( scalar @cells > 4
and $cells[4] =~ m/^\d\d\d\d-\d\d-\d\d$/
and $cells[4] lt $today) {
push @releases, $cells[0];
}
if (lc $cells[1] eq $suite or lc $cells[2] eq $suite) {
$matching_version = $cells[0];
last;
@ -4195,15 +4194,9 @@ sub get_sourceslist_by_suite {
if (defined $matching_version and $matching_version >= 11) {
$bullseye_or_later = 1;
}
if ($suite eq "stable" and $releases[-1] >= 11) {
$bullseye_or_later = 1;
}
} else {
# neither libdistro-info-perl nor distro-info-data is installed
if (
any { $_ eq $suite }
('stable', 'bullseye', 'bookworm', 'trixie')
) {
if (any { $_ eq $suite } ('bullseye', 'bookworm', 'trixie')) {
$bullseye_or_later = 1;
}
}
@ -4298,14 +4291,11 @@ sub main() {
# lxc-usernsexec -- lxc-unshare -s 'MOUNT|PID|UTSNAME|IPC' ...
# but without needing lxc
if ($ARGV[0] eq "--unshare-helper") {
if ($EFFECTIVE_USER_ID != 0 && !test_unshare_userns(1)) {
if (!test_unshare_userns(1)) {
exit 1;
}
my @idmap = ();
if ($EFFECTIVE_USER_ID != 0) {
@idmap = read_subuid_subgid;
}
my $pid = get_unshare_cmd(
my @idmap = read_subuid_subgid;
my $pid = get_unshare_cmd(
sub {
0 == system @ARGV[1 .. $#ARGV] or error "system failed: $?";
},
@ -4493,6 +4483,12 @@ sub main() {
$options->{variant} = 'important';
}
if ($options->{variant} eq 'essential'
and scalar @{ $options->{include} } > 0) {
warning "cannot install extra packages with variant essential because"
. " apt is missing";
}
# fakeroot is an alias for fakechroot
if ($options->{mode} eq 'fakeroot') {
$options->{mode} = 'fakechroot';
@ -4655,8 +4651,13 @@ sub main() {
error "unknown mode: $options->{mode}";
}
$options->{canmount} = 1;
if ($options->{mode} eq 'root') {
# By default, mount is not used. This is so that mounting is skipped if the
# user supplies --skip=check/canmount. This only gets enabled if
# CAP_SYS_ADMIN and unshare --mount are available in root mode.
$options->{canmount} = 0;
if (any { $_ eq 'check/canmount' } @{ $options->{skip} }) {
info "skipping check/canmount as requested";
} elsif ($options->{mode} eq 'root') {
# It's possible to be root but not be able to mount anything.
# This is for example the case when running under docker.
# Mounting needs CAP_SYS_ADMIN which might not be available.
@ -4673,15 +4674,11 @@ sub main() {
0 == syscall &SYS_capget, $hdrp, $datap
or error "capget failed: $!";
my ($effective, undef) = unpack "LLLLLL", $datap;
if (($effective >> $CAP_SYS_ADMIN) & 1 != 1) {
warning
"cannot mount because CAP_SYS_ADMIN is not in the effective set";
$options->{canmount} = 0;
}
if (0 == syscall &SYS_prctl, $PR_CAPBSET_READ, $CAP_SYS_ADMIN) {
warning
"cannot mount because CAP_SYS_ADMIN is not in the bounding set";
$options->{canmount} = 0;
if (($effective >> $CAP_SYS_ADMIN) & 1 == 1) {
# we have CAP_SYS_ADMIN, and thus can mount
$options->{canmount} = 1;
} else {
error "root mode requires mount which requires CAP_SYS_ADMIN";
}
# To test whether we can use mount without actually trying to mount
# something we try unsharing the mount namespace. If this is allowed,
@ -4691,26 +4688,15 @@ sub main() {
# we get 'cannot change root filesystem propagation' when running
# mmdebstrap inside a chroot for which the root of the chroot is not
# its own mount point.
if (0 != system 'unshare --mount --propagation unchanged -- true') {
if (0 == system 'unshare --mount --propagation unchanged -- true') {
$options->{canmount} = 1;
} else {
# if we cannot unshare the mount namespace as root, then we also
# cannot mount
warning "cannot mount because unshare --mount failed";
$options->{canmount} = 0;
error "root mode requires mount but unshare --mount failed";
}
}
if (any { $_ eq $options->{mode} } ('root', 'unshare')) {
if (system('mount --version>/dev/null') != 0) {
warning "cannot execute mount";
$options->{canmount} = 0;
}
}
# we can only possibly mount in root and unshare mode
if (none { $_ eq $options->{mode} } ('root', 'unshare')) {
$options->{canmount} = 0;
}
my @architectures = ();
foreach my $archs (@{ $options->{architectures} }) {
foreach my $arch (split /[,\s]+/, $archs) {
@ -5003,6 +4989,7 @@ sub main() {
'--ignore-time-conflict', '--no-options',
'--no-default-keyring', '--homedir',
$gpghome, '--no-auto-check-trustdb',
'--trust-model', 'always'
);
my ($ret, $message);
{
@ -5362,9 +5349,7 @@ sub main() {
# in unshare and root mode, other users than the current user need to
# access the rootfs, most prominently, the _apt user. Thus, make the
# temporary directory world readable.
if (any { $_ eq $options->{mode} } ('unshare', 'root')
or
($EFFECTIVE_USER_ID == 0 and $options->{mode} eq 'chrootless')) {
if (any { $_ eq $options->{mode} } ('unshare', 'root')) {
chmod 0755, $options->{root} or error "cannot chmod root: $!";
}
} elsif ($format eq 'directory') {
@ -5435,14 +5420,12 @@ sub main() {
my @idmap;
# for unshare mode the rootfs directory has to have appropriate
# permissions
if ($EFFECTIVE_USER_ID != 0 and $options->{mode} eq 'unshare') {
if ($options->{mode} eq 'unshare') {
@idmap = read_subuid_subgid;
# sanity check
if ( scalar(@idmap) != 2
|| $idmap[0][0] ne 'u'
|| $idmap[1][0] ne 'g'
|| !length $idmap[0][2]
|| !length $idmap[1][2]) {
|| $idmap[1][0] ne 'g') {
error "invalid idmap";
}
@ -5531,11 +5514,6 @@ sub main() {
);
# tar2sqfs and genext2fs do not support extended attributes
if ($format eq "squashfs") {
# tar2sqfs supports user.*, trusted.* and security.* but not system.*
# https://bugs.debian.org/988100
# lib/sqfs/xattr/xattr.c of https://github.com/AgentD/squashfs-tools-ng
# https://github.com/AgentD/squashfs-tools-ng/issues/83
# https://github.com/AgentD/squashfs-tools-ng/issues/25
warning("tar2sqfs does not support extended attributes"
. " from the 'system' namespace");
push @taropts, '--xattrs', '--xattrs-exclude=system.*';
@ -6010,11 +5988,10 @@ option happens to be an existing file, then its contents are pasted into the
chroot's sources.list. This can be used to supply a deb822 style
sources.list. If I<MIRROR> is C<-> then standard input is pasted into the
chroot's sources.list. More than one mirror can be specified and are appended
to the chroot's sources.list in the given order. If you specify a https or tor
I<MIRROR> and you want the chroot to be able to update itself, don't forget to
also install the ca-certificates package, the apt-transport-https package for
apt versions less than 1.5 and/or the apt-transport-tor package using the
B<--include> option, as necessary.
to the chroot's sources.list in the given order. If any mirror contains a
https URI, then the packages apt-transport-https and ca-certificates will be
installed inside the chroot. If any mirror contains a tor+xxx URI, then the
apt-transport-tor package will be installed inside the chroot.
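For example, bootstrapping from an https mirror could look like this (a
sketch; with older apt versions the transport packages may additionally have
to be requested via B<--include>):

    $ mmdebstrap --include=ca-certificates unstable unstable-chroot.tar https://deb.debian.org/debian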
The optional I<TARGET> argument can either be the path to a directory, the path
to a tarball filename, the path to a squashfs image, the path to an ext2 image,
@ -6028,7 +6005,8 @@ symbolic name (eg, unstable, testing, stable, oldstable). Any suite name that
works with apt on the given mirror will work. If no I<SUITE> was specified,
then a single I<MIRROR> C<-> is added and thus the information of the desired
suite has to come from standard input as part of a valid apt sources.list file.
The value of the I<SUITE> argument will be used to determine which apt index to
If mmdebstrap is instructed to retrieve packages from multiple releases, then
the value of the I<SUITE> argument will be used to determine which apt index to
use for finding out the set of C<Essential:yes> packages and/or the set of
packages with the right priority for the selected variant. See the section
B<VARIANTS> for more information.
@ -6175,7 +6153,9 @@ option depends on the selected variant. The B<extract> and B<custom> variants
install no packages by default, so for these variants, the packages specified
by this option will be the only ones that get either extracted or installed by
dpkg, respectively. For all other variants, apt is used to install the
additional packages. Package names are directly passed to
additional packages. The B<essential> variant does not include apt, so the
include option will only work when the B<chrootless> mode is selected and
apt from the outside can be used. Package names are directly passed to
apt and thus, you can use apt features like C<pkg/suite>, C<pkg=version>,
C<pkg-> or use a glob or regex for C<pkg>. See apt(8) for the supported
syntax. The option can be specified multiple times and the packages are
@ -6469,9 +6449,9 @@ B<chroot(1)>.
All package sets also include the direct and indirect hard dependencies (but
not recommends) of the selected package sets. The variants B<minbase>,
B<buildd> and B<->, resemble the package sets that debootstrap would install
with the same I<--variant> argument. The release with a name matching the
I<SUITE> argument will be used to determine the C<Essential:yes> and priority
values.
with the same I<--variant> argument. If multiple releases are passed as apt
sources to B<mmdebstrap>, then the release with a name matching the I<SUITE>
argument will be used to determine the C<Essential:yes> and priority values.
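For example, a build chroot resembling what debootstrap's buildd variant
provides could be created like this (a sketch):

    $ mmdebstrap --variant=buildd unstable buildd-chroot.tar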
=over 8
@ -6706,6 +6686,8 @@ Upon startup, several checks are carried out, like:
=item * which mode to use and whether prerequisites are met
=item * if you are root, check whether you have the ability to mount. This check requires the C<unshare> program from the C<util-linux> package and can be disabled by using B<--skip=check/canmount>.
=item * whether the requested architecture can be executed (requires arch-test) using qemu binfmt_misc support. This requires arch-test and can be disabled using B<--skip=check/qemu>
=item * how the apt sources can be assembled from I<SUITE>, I<MIRROR> and B<--components> and/or from standard input as deb822 or one-line format and whether the required GPG keys exist.
@ -6718,7 +6700,7 @@ Upon startup, several checks are carried out, like:
=item B<setup>
The following tasks are carried out unless B<--skip=setup> is used:
The following tasks are carried out:
=over 4
@ -6743,7 +6725,7 @@ Run B<--setup-hook> options and all F<setup*> scripts in B<--hook-dir>.
=item B<update>
Runs C<apt-get update> using the temporary apt configuration file created in
the B<setup> step. This can be disabled using B<--skip=update>.
the B<setup> step.
=item B<download>
@ -6759,7 +6741,8 @@ the required priority.
=item B<extract>
Extract the downloaded packages into the rootfs.
Extract the downloaded packages into the rootfs. This step is not carried out
in chrootless mode if the variant is not B<extract>.
=item B<extract-hook>
@ -6799,7 +6782,7 @@ This step is not carried out in B<extract> mode.
=item B<cleanup>
Performs cleanup tasks, unless B<--skip=cleanup> is used:
Performs cleanup tasks like:
=over 4
@ -6980,14 +6963,6 @@ Create a system that can be used with docker:
root
$ sudo docker rmi debian
Create a system that can be used with podman:
$ mmdebstrap unstable | podman import - debian
[...]
$ podman run --network=none -it --rm debian whoami
root
$ podman rmi debian
=head1 ENVIRONMENT VARIABLES
=over 8
@ -7094,13 +7069,7 @@ Therefore, until this dpkg limitation is fixed, a default dpkg configuration is
recommended on machines running B<mmdebstrap>. If you are using B<mmdebstrap>
as the non-root user, then as a workaround you could run C<chmod 600
/etc/dpkg/dpkg.cfg.d/*> so that the config files are only accessible by the
root user. See Debian bug #808203.
The C<file://> URI type cannot be used to install the essential packages. This
is because B<mmdebstrap> uses dpkg to install the packages that apt places into
F</var/cache/apt/archives> but with C<file://> apt will not copy the files even
with C<--download-only>. Use C<copy://> instead, which is equivalent to
C<file://> but copies the archives into F</var/cache/apt/archives>.
root user.
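The C<copy://> form described above can be used with a local mirror like this
(a sketch; assuming a one-line-style sources entry is accepted as I<MIRROR>,
and F</srv/mirror/debian> is a hypothetical local mirror path):

    $ mmdebstrap unstable unstable-chroot.tar "deb copy:///srv/mirror/debian unstable main"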
With apt versions before 2.1.16, setting C<[trusted=yes]> or
C<Acquire::AllowInsecureRepositories "1"> to allow signed archives without a

View file

@ -46,10 +46,7 @@ def main():
description="""\
Filters a tarball on standard input by the same rules as the dpkg --path-exclude
and --path-include options and writes resulting tarball to standard output. See
dpkg(1) for information on how these two options work in detail. Since this is
meant for filtering tarballs storing a rootfs, notice that paths must be given
as /path and not as ./path even though they might be stored as such in the
tarball.
dpkg(1) for information on how these two options work in detail.
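An invocation might look like this (a sketch; assuming the script is called as
./tarfilter and, as described above, accepts dpkg-style --path-exclude and
--path-include options):

    ./tarfilter --path-exclude='/usr/share/doc/*' < input.tar > filtered.tar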
Similarly, filter out unwanted pax extended headers. This is useful in cases
where a tool only accepts certain xattr prefixes. For example tar2sqfs only