Compare commits


25 Commits

Author SHA1 Message Date
Johannes Schauer Marin Rodrigues 7d472ca116
document on how to use mmdebstrap with podman 3 years ago
Johannes Schauer Marin Rodrigues 047619967e
also check whether CAP_SYS_ADMIN is in the bounding set 3 years ago
Johannes Schauer Marin Rodrigues 5a5f57b404
Automatically skip using mount if that's not possible
- instead of throwing an error, just print a warning
- can now run as root without cap_sys_admin
- can now run without mount installed
- --skip=check/canmount is not needed anymore
3 years ago
Johannes Schauer Marin Rodrigues 7ab770267c
README.md: document chroots without apt as a feature 3 years ago
Johannes Schauer Marin Rodrigues 1a18160fe8
document that apt-transport-https, ca-certificates and apt-transport-tor are no longer installed automatically 3 years ago
Johannes Schauer Marin Rodrigues e53d246a3b
also test minbase buildd important standard with --dry-run/--simulate 3 years ago
Johannes Schauer Marin Rodrigues 91d8be5f9c
Do not use gpg --trust-model=always
- gpg will not create a trustdb when running with --update-trustdb with
  --trust-model=always:
      gpg: no need for a trustdb update with 'always' trust model
- subsequent gpg calls will fail because there is no trustdb in GPGHOME
3 years ago
Johannes Schauer Marin Rodrigues 850eeb24d5
more code comments 3 years ago
Johannes Schauer Marin Rodrigues 8b12375de3
add more references to #808203 3 years ago
Johannes Schauer Marin Rodrigues c627606110
document copy:// vs. file:// 3 years ago
Johannes Schauer Marin Rodrigues dddccd5e55
tarfilter: expand description text 3 years ago
Johannes Schauer Marin Rodrigues 60dba1c19e
fixup read_subuid_subgid
- use $REAL_USER_ID from English instead of $<
- use getgrgid $REAL_GROUP_ID to get the group name instead of assuming
  the group name to be equal to the user name
- also check whether /etc/subgid exists and is readable
3 years ago
Joe Groocock 15029c1c3b
improve error message for missing /etc/subuid entry (closes: #9) 3 years ago
Johannes Schauer Marin Rodrigues 3c37d692a0
write 'uninitialized' to /etc/machine-id to support systemd ConditionFirstBoot (closes: #10) 3 years ago
Nicolas Vigier 5283d74dfe
Remove files inside the auxfiles directory
This is fixing the error:
  cannot rmdir /var/lib/apt/lists/auxfiles: Directory not empty at ./mmdebstrap/mmdebstrap line 3084.
which happens when using apt-transport-mirror.
3 years ago
Johannes Schauer Marin Rodrigues ea82b267c9
only run test_unshare_userns() if not root user 3 years ago
Johannes Schauer Marin Rodrigues dfbf9cdcef
several fixes to chrootless mode 3 years ago
Johannes Schauer Marin Rodrigues f868073b6e
add --skip=setup, --skip=update and --skip=cleanup 3 years ago
Johannes Schauer Marin Rodrigues d62c5b7a91
README.md: add more docs 3 years ago
Johannes Schauer Marin Rodrigues b354502b7c
README.md: add missing contributors 3 years ago
Johannes Schauer Marin Rodrigues 98f1f0abde
use apt pattern to select essential set 3 years ago
Johannes Schauer Marin Rodrigues aae47da9ab
coverage.sh: fix test that was wrongly installing outside the chroot and download-only 3 years ago
Johannes Schauer Marin Rodrigues 3e488dd1dd
use apt from the outside by setting DPkg::Chroot-Directory 3 years ago
Johannes Schauer Marin Rodrigues 3e61382763
README.md: fix my name 3 years ago
Johannes Schauer Marin Rodrigues c63ad87310
changes for release of Debian 11 Buster 3 years ago

README.md
@ -39,6 +39,7 @@ Summary:
- foreign architecture chroots with qemu-user
- variant installing only Essential:yes packages and dependencies
- temporary chroots by redirecting to /dev/null
- chroots without apt inside (for chroot from buildinfo file with debootsnap)
The author believes that a chroot of a Debian stable release should include the
latest packages including security fixes by default. This has been a wontfix
@ -75,11 +76,11 @@ reproducible** if the `$SOURCE_DATE_EPOCH` environment variable is set.
The author believes that it should not be necessary to have superuser
privileges to create a file (the chroot tarball) in one's home directory.
Thus, mmdebstrap provides multiple options to create a chroot tarball with the
right permissions **without superuser privileges**. Depending on what is
available, it uses either Linux user namespaces, fakechroot or proot.
Debootstrap supports fakechroot but will not create a tarball with the right
permissions by itself. Support for Linux user namespaces and proot is missing
(see bugs #829134 and #698347, respectively).
right permissions **without superuser privileges**. This avoids a whole class
of bugs like #921815. Depending on what is available, it uses either Linux user
namespaces, fakechroot or proot. Debootstrap supports fakechroot but will not
create a tarball with the right permissions by itself. Support for Linux user
namespaces and proot is missing (see bugs #829134 and #698347, respectively).
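
As an illustration of the paragraph above (this command is not part of the
diff; suite, target and mirror are placeholders), an unprivileged user with
entries in /etc/subuid and /etc/subgid can create a tarball like this:

    # unshare mode uses Linux user namespaces and needs no superuser privileges
    mmdebstrap --mode=unshare --variant=apt unstable unstable-chroot.tar \
        http://deb.debian.org/debian
    # without usable user namespaces, fakechroot mode is an alternative
    mmdebstrap --mode=fakechroot --variant=apt unstable unstable-chroot.tar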
When creating a chroot tarball with debootstrap, the temporary chroot directory
cannot be on a filesystem that has been mounted with nodev. In unprivileged
@ -93,7 +94,10 @@ Limitations in comparison to debootstrap
----------------------------------------
Debootstrap supports creating a Debian chroot on non-Debian systems but
mmdebstrap requires apt and is thus limited to Debian and derivatives.
mmdebstrap requires apt and is thus limited to Debian and derivatives. This
means that mmdebstrap can never fully replace debootstrap and debootstrap will
continue to be relevant in situations where you want to create a Debian chroot
from a platform without apt and dpkg.
There is no `SCRIPT` argument.
@ -139,7 +143,11 @@ https://gitlab.mister-muffin.de/josch/mmdebstrap/issues
Contributors
============
- Johannes Schauer (main author)
- Johannes Schauer Marin Rodrigues (main author)
- Helmut Grohne
- Benjamin Drung
- Steve Dodd
- Josh Triplett
- Konstantin Demin
- Trent W. Buck
- Vagrant Cascadian

coverage.sh
@ -54,7 +54,7 @@ fi
# check if all required debootstrap tarballs exist
notfound=0
for dist in stable testing unstable; do
for dist in oldstable stable testing unstable; do
for variant in minbase buildd -; do
if [ ! -e "shared/cache/debian-$dist-$variant.tar" ]; then
echo "shared/cache/debian-$dist-$variant.tar does not exist" >&2
@ -120,7 +120,7 @@ if [ ! -e shared/hooks/eatmydata/customize.sh ] || [ hooks/eatmydata/customize.s
fi
fi
starttime=
total=193
total=217
skipped=0
runtests=0
i=1
@ -160,7 +160,7 @@ fi
: "${CMD:=perl -MDevel::Cover=-silent,-nogcov ./mmdebstrap}"
mirror="http://127.0.0.1/debian"
for dist in stable testing unstable; do
for dist in oldstable stable testing unstable; do
for variant in minbase buildd -; do
print_header "mode=$defaultmode,variant=$variant: check against debootstrap $dist"
cat << END > shared/test.sh
@ -168,7 +168,14 @@ for dist in stable testing unstable; do
set -eu
export LC_ALL=C.UTF-8
export SOURCE_DATE_EPOCH=$SOURCE_DATE_EPOCH
$CMD --variant=$variant --mode=$defaultmode $dist /tmp/debian-$dist-mm.tar $mirror
# we create the apt user ourselves or otherwise its uid/gid will differ
# compared to the one chosen in debootstrap because of different installation
# order in comparison to the systemd users
# https://bugs.debian.org/969631
$CMD --variant=$variant --mode=$defaultmode \
--essential-hook='if [ $variant = - ]; then echo _apt:*:100:65534::/nonexistent:/usr/sbin/nologin >> "\$1"/etc/passwd; fi' \
$dist /tmp/debian-$dist-mm.tar $mirror
mkdir /tmp/debian-$dist-mm
tar --xattrs --xattrs-include='*' -C /tmp/debian-$dist-mm -xf /tmp/debian-$dist-mm.tar
@ -265,7 +272,7 @@ if [ "$variant" = "-" ]; then
cap=\$(chroot /tmp/debian-$dist-debootstrap /sbin/getcap /bin/ping)
expected="/bin/ping cap_net_raw=ep"
if [ "$dist" = stable ]; then
if [ "$dist" = oldstable ]; then
expected="/bin/ping = cap_net_raw+ep"
fi
if [ "\$cap" != "\$expected" ]; then
@ -398,7 +405,7 @@ diff --no-dereference --recursive /tmp/debian-debootstrap /tmp/debian-mm
# check permissions, ownership, symlink targets, modification times using tar
# mtimes of directories created by mmdebstrap will differ, thus we equalize them first
for d in etc/apt/preferences.d/ etc/apt/sources.list.d/ etc/dpkg/dpkg.cfg.d/; do
for d in etc/apt/preferences.d/ etc/apt/sources.list.d/ etc/dpkg/dpkg.cfg.d/ var/log/apt/; do
touch --date="@$SOURCE_DATE_EPOCH" /tmp/debian-debootstrap/\$d /tmp/debian-mm/\$d
done
# debootstrap never ran apt -- fixing permissions
@ -519,6 +526,63 @@ else
runtests=$((runtests+1))
fi
print_header "mode=unshare,variant=apt: fail without /etc/subuid"
cat << END > shared/test.sh
#!/bin/sh
set -eu
export LC_ALL=C.UTF-8
if [ ! -e /mmdebstrap-testenv ]; then
echo "this test modifies the system and should only be run inside a container" >&2
exit 1
fi
adduser --gecos user --disabled-password user
sysctl -w kernel.unprivileged_userns_clone=1
rm /etc/subuid
ret=0
runuser -u user -- $CMD --mode=unshare --variant=apt $DEFAULT_DIST /tmp/debian-chroot $mirror || ret=\$?
if [ "\$ret" = 0 ]; then
echo expected failure but got exit \$ret >&2
exit 1
fi
rm -r /tmp/debian-chroot
END
if [ "$HAVE_QEMU" = "yes" ]; then
./run_qemu.sh
runtests=$((runtests+1))
else
echo "HAVE_QEMU != yes -- Skipping test..." >&2
skipped=$((skipped+1))
fi
print_header "mode=unshare,variant=apt: fail without username in /etc/subuid"
cat << END > shared/test.sh
#!/bin/sh
set -eu
export LC_ALL=C.UTF-8
if [ ! -e /mmdebstrap-testenv ]; then
echo "this test modifies the system and should only be run inside a container" >&2
exit 1
fi
adduser --gecos user --disabled-password user
sysctl -w kernel.unprivileged_userns_clone=1
awk -F: '\$1!="user"' /etc/subuid > /etc/subuid.tmp
mv /etc/subuid.tmp /etc/subuid
ret=0
runuser -u user -- $CMD --mode=unshare --variant=apt $DEFAULT_DIST /tmp/debian-chroot $mirror || ret=\$?
if [ "\$ret" = 0 ]; then
echo expected failure but got exit \$ret >&2
exit 1
fi
rm -r /tmp/debian-chroot
END
if [ "$HAVE_QEMU" = "yes" ]; then
./run_qemu.sh
runtests=$((runtests+1))
else
echo "HAVE_QEMU != yes -- Skipping test..." >&2
skipped=$((skipped+1))
fi
# Before running unshare mode as root, we run "unshare --mount" but that fails
# if mmdebstrap itself is executed from within a chroot:
# unshare: cannot change root filesystem propagation: Invalid argument
@ -591,19 +655,18 @@ else
runtests=$((runtests+1))
fi
print_header "mode=root,variant=apt: fail with root without cap_sys_admin"
print_header "mode=unshare,variant=apt: root without cap_sys_admin"
cat << END > shared/test.sh
#!/bin/sh
set -eu
export LC_ALL=C.UTF-8
ret=0
[ "\$(whoami)" = "root" ]
capsh --drop=cap_sys_admin -- -c 'exec "\$@"' exec \
$CMD --mode=root --variant=apt $DEFAULT_DIST /tmp/debian-chroot $mirror || ret=\$?
if [ "\$ret" = 0 ]; then
echo expected failure but got exit \$ret >&2
exit 1
fi
[ ! -e /tmp/debian-chroot ]
$CMD --mode=root --variant=apt \
--customize-hook='chroot "\$1" sh -c "test ! -e /proc/self/fd"' \
$DEFAULT_DIST /tmp/debian-chroot.tar $mirror
tar -tf /tmp/debian-chroot.tar | sort | diff -u tar1.txt -
rm /tmp/debian-chroot.tar
END
if [ "$CONTAINER" = "lxc" ]; then
# see https://stackoverflow.com/questions/65748254/
@ -617,45 +680,19 @@ else
runtests=$((runtests+1))
fi
print_header "mode=root,variant=apt: fail without mounted /proc"
print_header "mode=root,variant=apt: mount is missing"
cat << END > shared/test.sh
#!/bin/sh
set -eu
export LC_ALL=C.UTF-8
# success with /proc mounted
$CMD --mode=root --variant=apt \
--customize-hook='chroot "\$1" bash -c "test \\"\\\$(cat <(echo foobar))\\" = foobar"' \
$DEFAULT_DIST /dev/null $mirror
# failure without /proc mounted (using --skip=check/canmount)
ret=0
$CMD --mode=root --variant=apt \
--customize-hook='chroot "\$1" bash -c "test \\"\\\$(cat <(echo foobar))\\" = foobar"' \
--skip=check/canmount $DEFAULT_DIST /tmp/debian-chroot $mirror || ret=\$?
if [ "\$ret" = 0 ]; then
echo expected failure but got exit \$ret >&2
if [ ! -e /mmdebstrap-testenv ]; then
echo "this test modifies the system and should only be run inside a container" >&2
exit 1
fi
rm -r /tmp/debian-chroot
END
if [ "$HAVE_QEMU" = "yes" ]; then
./run_qemu.sh
runtests=$((runtests+1))
else
./run_null.sh SUDO
runtests=$((runtests+1))
fi
print_header "mode=unshare,variant=apt: root without cap_sys_admin but --skip=check/canmount"
cat << END > shared/test.sh
#!/bin/sh
set -eu
export LC_ALL=C.UTF-8
[ "\$(whoami)" = "root" ]
capsh --drop=cap_sys_admin -- -c 'exec "\$@"' exec \
$CMD --mode=root --variant=apt \
--customize-hook='chroot "\$1" sh -c "test ! -e /proc/self/fd"' \
--skip=check/canmount $DEFAULT_DIST /tmp/debian-chroot.tar $mirror
for p in /bin /usr/bin /sbin /usr/sbin; do
rm -f "\$p/mount"
done
$CMD --mode=root --variant=apt $DEFAULT_DIST /tmp/debian-chroot.tar $mirror
tar -tf /tmp/debian-chroot.tar | sort | diff -u tar1.txt -
rm /tmp/debian-chroot.tar
END
@ -663,8 +700,8 @@ if [ "$HAVE_QEMU" = "yes" ]; then
./run_qemu.sh
runtests=$((runtests+1))
else
./run_null.sh SUDO
runtests=$((runtests+1))
echo "HAVE_QEMU != yes -- Skipping test..." >&2
skipped=$((skipped+1))
fi
for variant in essential apt minbase buildd important standard; do
@ -678,18 +715,18 @@ for variant in essential apt minbase buildd important standard; do
skipped=$((skipped+1))
continue
fi
if [ "$variant" = "important" ] && [ "$DEFAULT_DIST" = "stable" ]; then
echo "skipping test on stable because /var/lib/systemd/catalog/database differs" >&2
if [ "$variant" = "important" ] && [ "$DEFAULT_DIST" = "oldstable" ]; then
echo "skipping test on oldstable because /var/lib/systemd/catalog/database differs" >&2
skipped=$((skipped+1))
continue
fi
if [ "$format" = "squashfs" ] && [ "$DEFAULT_DIST" = "stable" ]; then
echo "skipping test on stable because squashfs-tools-ng is not available" >&2
if [ "$format" = "squashfs" ] && [ "$DEFAULT_DIST" = "oldstable" ]; then
echo "skipping test on oldstable because squashfs-tools-ng is not available" >&2
skipped=$((skipped+1))
continue
fi
if [ "$format" = "ext2" ] && [ "$DEFAULT_DIST" = "stable" ]; then
echo "skipping test on stable because genext2fs does not support SOURCE_DATE_EPOCH" >&2
if [ "$format" = "ext2" ] && [ "$DEFAULT_DIST" = "oldstable" ]; then
echo "skipping test on oldstable because genext2fs does not support SOURCE_DATE_EPOCH" >&2
skipped=$((skipped+1))
continue
fi
@ -790,8 +827,8 @@ runuser -u user -- $CMD --unshare-helper /usr/sbin/chroot /tmp/debian-chroot get
rm /tmp/debian-chroot.tar /tmp/debian-chroot-shifted.tar /tmp/debian-chroot.txt /tmp/debian-chroot-shiftedback.tar /tmp/expected
rm -r /tmp/debian-chroot
END
if [ "$DEFAULT_DIST" = "stable" ]; then
echo "the python3 tarfile module in stable does not preserve xattrs -- Skipping test..." >&2
if [ "$DEFAULT_DIST" = "oldstable" ]; then
echo "the python3 tarfile module in oldstable does not preserve xattrs -- Skipping test..." >&2
skipped=$((skipped+1))
elif [ "$HAVE_QEMU" = "yes" ]; then
./run_qemu.sh
@ -1115,8 +1152,8 @@ sqfs2tar --no-skip --root-becomes . /tmp/debian-chroot.squashfs | tar -t \
| sort | diff -u /tmp/tar1noslash.txt -
rm /tmp/debian-chroot.squashfs /tmp/tar1noslash.txt
END
if [ "$DEFAULT_DIST" = "stable" ]; then
echo "skipping test on stable because squashfs-tools-ng is not available" >&2
if [ "$DEFAULT_DIST" = "oldstable" ]; then
echo "skipping test on oldstable because squashfs-tools-ng is not available" >&2
skipped=$((skipped+1))
elif [ "$HAVE_QEMU" = "yes" ]; then
./run_qemu.sh
@ -1131,8 +1168,8 @@ fi
for mode in root unshare fakechroot proot; do
print_header "mode=$mode,variant=apt: test ext2 image"
if [ "$DEFAULT_DIST" = "stable" ]; then
echo "skipping test on stable because genext2fs does not support SOURCE_DATE_EPOCH" >&2
if [ "$DEFAULT_DIST" = "oldstable" ]; then
echo "skipping test on oldstable because genext2fs does not support SOURCE_DATE_EPOCH" >&2
skipped=$((skipped+1))
continue
fi
@ -1344,7 +1381,7 @@ $CMD --mode=root --variant=apt stable /tmp/debian-chroot
cat << SOURCES | cmp /tmp/debian-chroot/etc/apt/sources.list
deb http://deb.debian.org/debian stable main
deb http://deb.debian.org/debian stable-updates main
deb http://security.debian.org/debian-security stable/updates main
deb http://security.debian.org/debian-security stable-security main
SOURCES
rm -r /tmp/debian-chroot
END
@ -2886,8 +2923,11 @@ cat << END > shared/test.sh
#!/bin/sh
set -eu
export LC_ALL=C.UTF-8
$CMD --mode=$defaultmode --variant=essential --include=apt --setup-hook="apt-get update" --setup-hook="apt-get --yes -oApt::Get::Download-Only=true install apt" $DEFAULT_DIST /tmp/debian-chroot.tar $mirror
tar -tf /tmp/debian-chroot.tar | sort | diff -u tar1.txt -
$CMD --mode=$defaultmode --variant=essential --include=apt \
--essential-hook='apt-get -o Dir="\$1" -oDir::Etc::TrustedParts=/etc/apt/trusted.gpg.d -oAcquire::Languages=none update' \
--essential-hook='apt-get --yes install --no-install-recommends -o Dir="\$1" -oDPkg::Chroot-Directory="\$1" apt' \
$DEFAULT_DIST /tmp/debian-chroot.tar $mirror
tar -tf /tmp/debian-chroot.tar | sort | grep -v ./var/lib/apt/extended_states | diff -u tar1.txt -
rm /tmp/debian-chroot.tar
END
if [ "$HAVE_QEMU" = "yes" ]; then
@ -2933,8 +2973,8 @@ for variant in extract custom essential apt minbase buildd important standard; d
skipped=$((skipped+1))
continue
fi
if [ "$variant" = "important" ] && [ "$DEFAULT_DIST" = "stable" ]; then
echo "skipping test on stable because /var/lib/systemd/catalog/database differs" >&2
if [ "$variant" = "important" ] && [ "$DEFAULT_DIST" = "oldstable" ]; then
echo "skipping test on oldstable because /var/lib/systemd/catalog/database differs" >&2
skipped=$((skipped+1))
continue
fi
@ -3030,7 +3070,9 @@ fi
# test all --dry-run variants
for variant in extract custom essential apt; do
# we are testing all variants here because with 0.7.5 we had a bug:
# mmdebstrap sid /dev/null --simulate ==> E: cannot read /var/cache/apt/archives/
for variant in extract custom essential apt minbase buildd important standard; do
for mode in root unshare fakechroot proot chrootless; do
print_header "mode=$mode,variant=$variant: create tarball --dry-run"
if [ "$mode" = "unshare" ] && [ "$HAVE_UNSHARE" != "yes" ]; then
@ -3311,8 +3353,6 @@ fi
prefix=
[ "\$(id -u)" -eq 0 ] && prefix="runuser -u user --"
\$prefix $CMD --mode=chrootless --variant=custom --include=doc-debian $DEFAULT_DIST /tmp/debian-chroot $mirror
# preserve output with permissions and timestamps for later test
chmod 700 /tmp/debian-chroot
tar -C /tmp/debian-chroot --owner=0 --group=0 --numeric-owner --sort=name --clamp-mtime --mtime=$(date --utc --date=@$SOURCE_DATE_EPOCH --iso-8601=seconds) -cf /tmp/debian-chroot.tar .
tar tvf /tmp/debian-chroot.tar > doc-debian.tar.list
rm /tmp/debian-chroot.tar
@ -3330,7 +3370,6 @@ rm /tmp/debian-chroot/var/cache/apt/archives/lock
rm /tmp/debian-chroot/var/lib/dpkg/lock
rm /tmp/debian-chroot/var/lib/dpkg/lock-frontend
rm /tmp/debian-chroot/var/lib/apt/lists/lock
rm /tmp/debian-chroot/var/lib/apt/extended_states
if [ "$mode" != "chrootless" ] || dpkg --compare-versions "\$(dpkg --robot --version)" lt 1.20.0; then
rm /tmp/debian-chroot/var/lib/dpkg/available
rm /tmp/debian-chroot/var/lib/dpkg/cmethopt
@ -3404,8 +3443,8 @@ prefix=
[ "\$(id -u)" -eq 0 ] && prefix="runuser -u user --"
\$prefix $CMD --mode=chrootless --variant=custom --include=bsdutils,coreutils,debianutils,diffutils,dpkg,findutils,grep,gzip,hostname,init-system-helpers,ncurses-base,ncurses-bin,perl-base,sed,sysvinit-utils,tar $DEFAULT_DIST /dev/null $mirror
END
if [ "$DEFAULT_DIST" = "stable" ]; then
echo "chrootless doesn't work in stable -- Skipping test..." >&2
if [ "$DEFAULT_DIST" = "oldstable" ]; then
echo "chrootless doesn't work in oldstable -- Skipping test..." >&2
skipped=$((skipped+1))
elif [ "$HAVE_QEMU" = "yes" ]; then
./run_qemu.sh
@ -3457,10 +3496,9 @@ if [ "\$(id -u)" -eq 0 ] && ! id -u user > /dev/null 2>&1; then
fi
prefix=
[ "\$(id -u)" -eq 0 ] && prefix="runuser -u user --"
\$prefix $CMD --mode=chrootless --variant=custom --include=doc-debian --setup-hook='touch "\$1/setup"' --customize-hook='touch "\$1/customize"' $DEFAULT_DIST /tmp/debian-chroot $mirror
rm /tmp/debian-chroot/setup
rm /tmp/debian-chroot/customize
chmod 700 /tmp/debian-chroot
\$prefix $CMD --mode=chrootless --skip=cleanup/tmp --variant=custom --include=doc-debian --setup-hook='touch "\$1/tmp/setup"' --customize-hook='touch "\$1/tmp/customize"' $DEFAULT_DIST /tmp/debian-chroot $mirror
rm /tmp/debian-chroot/tmp/setup
rm /tmp/debian-chroot/tmp/customize
tar -C /tmp/debian-chroot --owner=0 --group=0 --numeric-owner --sort=name --clamp-mtime --mtime=$(date --utc --date=@$SOURCE_DATE_EPOCH --iso-8601=seconds) -cf /tmp/debian-chroot.tar .
tar tvf /tmp/debian-chroot.tar | grep -v ' ./dev' | diff -u doc-debian.tar.list -
rm /tmp/debian-chroot.tar
@ -3478,7 +3516,6 @@ rm /tmp/debian-chroot/var/cache/apt/archives/lock
rm /tmp/debian-chroot/var/lib/dpkg/lock
rm /tmp/debian-chroot/var/lib/dpkg/lock-frontend
rm /tmp/debian-chroot/var/lib/apt/lists/lock
rm /tmp/debian-chroot/var/lib/apt/extended_states
if dpkg --compare-versions "\$(dpkg --robot --version)" lt 1.20.0; then
rm /tmp/debian-chroot/var/lib/dpkg/available
rm /tmp/debian-chroot/var/lib/dpkg/cmethopt
@ -3541,7 +3578,6 @@ rm /tmp/debian-chroot/var/cache/apt/archives/lock
rm /tmp/debian-chroot/var/lib/dpkg/lock
rm /tmp/debian-chroot/var/lib/dpkg/lock-frontend
rm /tmp/debian-chroot/var/lib/apt/lists/lock
rm /tmp/debian-chroot/var/lib/apt/extended_states
if dpkg --compare-versions "\$(dpkg --robot --version)" lt 1.20.0; then
rm /tmp/debian-chroot/var/lib/dpkg/available
rm /tmp/debian-chroot/var/lib/dpkg/cmethopt

make_mirror.sh
@ -20,7 +20,7 @@ deletecache() {
return 1
fi
# be very careful with removing the old directory
for dist in stable testing unstable; do
for dist in oldstable stable testing unstable; do
for variant in minbase buildd -; do
if [ -e "$dir/debian-$dist-$variant.tar" ]; then
rm "$dir/debian-$dist-$variant.tar"
@ -33,18 +33,30 @@ deletecache() {
else
echo "does not exist: $dir/debian/dists/$dist" >&2
fi
if [ "$dist" = "stable" ]; then
if [ -e "$dir/debian/dists/stable-updates" ]; then
rm --one-file-system --recursive "$dir/debian/dists/stable-updates"
case "$dist" in oldstable|stable)
if [ -e "$dir/debian/dists/$dist-updates" ]; then
rm --one-file-system --recursive "$dir/debian/dists/$dist-updates"
else
echo "does not exist: $dir/debian/dists/stable-updates" >&2
echo "does not exist: $dir/debian/dists/$dist-updates" >&2
fi
if [ -e "$dir/debian-security/dists/stable/updates" ]; then
rm --one-file-system --recursive "$dir/debian-security/dists/stable/updates"
else
echo "does not exist: $dir/debian-security/dists/stable/updates" >&2
fi
fi
;;
esac
case "$dist" in
oldstable)
if [ -e "$dir/debian-security/dists/$dist/updates" ]; then
rm --one-file-system --recursive "$dir/debian-security/dists/$dist/updates"
else
echo "does not exist: $dir/debian-security/dists/$dist/updates" >&2
fi
;;
stable)
if [ -e "$dir/debian-security/dists/$dist-security" ]; then
rm --one-file-system --recursive "$dir/debian-security/dists/$dist-security"
else
echo "does not exist: $dir/debian-security/dists/$dist-security" >&2
fi
;;
esac
done
if [ -e $dir/debian-*.qcow ]; then
rm --one-file-system "$dir"/debian-*.qcow
@ -216,7 +228,6 @@ END
> "$rootdir/var/lib/dpkg/status"
APT_CONFIG="$rootdir/etc/apt/apt.conf" apt-get update
# before downloading packages and before replacing the old Packages
@ -225,10 +236,18 @@ END
# packages that we already have
{
get_oldaptnames "$oldmirrordir" "dists/$dist/main/binary-$nativearch/Packages.gz"
if grep --quiet security.debian.org "$rootdir/etc/apt/sources.list"; then
get_oldaptnames "$oldmirrordir" "dists/stable-updates/main/binary-$nativearch/Packages.gz"
get_oldaptnames "$oldcachedir/debian-security" "dists/stable/updates/main/binary-$nativearch/Packages.gz"
fi
case "$dist" in oldstable|stable)
get_oldaptnames "$oldmirrordir" "dists/$dist-updates/main/binary-$nativearch/Packages.gz"
;;
esac
case "$dist" in
oldstable)
get_oldaptnames "$oldcachedir/debian-security" "dists/$dist/updates/main/binary-$nativearch/Packages.gz"
;;
stable)
get_oldaptnames "$oldcachedir/debian-security" "dists/$dist-security/main/binary-$nativearch/Packages.gz"
;;
esac
} | sort -u > "$rootdir/oldaptnames"
pkgs=$(APT_CONFIG="$rootdir/etc/apt/apt.conf" apt-get indextargets \
@ -248,16 +267,27 @@ END
curl --location "$mirror/dists/$dist/Release" > "$newmirrordir/dists/$dist/Release"
curl --location "$mirror/dists/$dist/Release.gpg" > "$newmirrordir/dists/$dist/Release.gpg"
curl --location "$mirror/dists/$dist/main/binary-$nativearch/Packages.gz" > "$newmirrordir/dists/$dist/main/binary-$nativearch/Packages.gz"
if grep --quiet security.debian.org "$rootdir/etc/apt/sources.list"; then
mkdir -p "$newmirrordir/dists/stable-updates/main/binary-$nativearch/"
curl --location "$mirror/dists/stable-updates/Release" > "$newmirrordir/dists/stable-updates/Release"
curl --location "$mirror/dists/stable-updates/Release.gpg" > "$newmirrordir/dists/stable-updates/Release.gpg"
curl --location "$mirror/dists/stable-updates/main/binary-$nativearch/Packages.gz" > "$newmirrordir/dists/stable-updates/main/binary-$nativearch/Packages.gz"
mkdir -p "$newcachedir/debian-security/dists/stable/updates/main/binary-$nativearch/"
curl --location "$security_mirror/dists/stable/updates/Release" > "$newcachedir/debian-security/dists/stable/updates/Release"
curl --location "$security_mirror/dists/stable/updates/Release.gpg" > "$newcachedir/debian-security/dists/stable/updates/Release.gpg"
curl --location "$security_mirror/dists/stable/updates/main/binary-$nativearch/Packages.gz" > "$newcachedir/debian-security/dists/stable/updates/main/binary-$nativearch/Packages.gz"
fi
case "$dist" in oldstable|stable)
mkdir -p "$newmirrordir/dists/$dist-updates/main/binary-$nativearch/"
curl --location "$mirror/dists/$dist-updates/Release" > "$newmirrordir/dists/$dist-updates/Release"
curl --location "$mirror/dists/$dist-updates/Release.gpg" > "$newmirrordir/dists/$dist-updates/Release.gpg"
curl --location "$mirror/dists/$dist-updates/main/binary-$nativearch/Packages.gz" > "$newmirrordir/dists/$dist-updates/main/binary-$nativearch/Packages.gz"
;;
esac
case "$dist" in
oldstable)
mkdir -p "$newcachedir/debian-security/dists/$dist/updates/main/binary-$nativearch/"
curl --location "$security_mirror/dists/$dist/updates/Release" > "$newcachedir/debian-security/dists/$dist/updates/Release"
curl --location "$security_mirror/dists/$dist/updates/Release.gpg" > "$newcachedir/debian-security/dists/$dist/updates/Release.gpg"
curl --location "$security_mirror/dists/$dist/updates/main/binary-$nativearch/Packages.gz" > "$newcachedir/debian-security/dists/$dist/updates/main/binary-$nativearch/Packages.gz"
;;
stable)
mkdir -p "$newcachedir/debian-security/dists/$dist-security/main/binary-$nativearch/"
curl --location "$security_mirror/dists/$dist-security/Release" > "$newcachedir/debian-security/dists/$dist-security/Release"
curl --location "$security_mirror/dists/$dist-security/Release.gpg" > "$newcachedir/debian-security/dists/$dist-security/Release.gpg"
curl --location "$security_mirror/dists/$dist-security/main/binary-$nativearch/Packages.gz" > "$newcachedir/debian-security/dists/$dist-security/main/binary-$nativearch/Packages.gz"
;;
esac
# the deb files downloaded by apt must be moved to their right locations in the
# pool directory
@ -269,10 +299,18 @@ END
# This way, it doesn't matter where the mirror ends up storing the package.
{
get_newaptnames "$newmirrordir" "dists/$dist/main/binary-$nativearch/Packages.gz";
if grep --quiet security.debian.org "$rootdir/etc/apt/sources.list"; then
get_newaptnames "$newmirrordir" "dists/stable-updates/main/binary-$nativearch/Packages.gz"
get_newaptnames "$newcachedir/debian-security" "dists/stable/updates/main/binary-$nativearch/Packages.gz"
fi
case "$dist" in oldstable|stable)
get_newaptnames "$newmirrordir" "dists/$dist-updates/main/binary-$nativearch/Packages.gz"
;;
esac
case "$dist" in
oldstable)
get_newaptnames "$newcachedir/debian-security" "dists/$dist/updates/main/binary-$nativearch/Packages.gz"
;;
stable)
get_newaptnames "$newcachedir/debian-security" "dists/$dist-security/main/binary-$nativearch/Packages.gz"
;;
esac
} | sort -u > "$rootdir/newaptnames"
rm "$rootdir/var/cache/apt/archives/lock"
@ -363,22 +401,33 @@ else
fi
for nativearch in $arches; do
for dist in stable testing unstable; do
for dist in oldstable stable testing unstable; do
# non-host architectures are only downloaded for $DEFAULT_DIST
if [ $nativearch != $HOSTARCH ] && [ $DEFAULT_DIST != $dist ]; then
continue
fi
cat << END | update_cache "$dist" "$nativearch"
# we need a first pass without updates and security patches
# because otherwise, old package versions needed by
# debootstrap will not get included
echo "deb [arch=$nativearch] $mirror $dist $components" | update_cache "$dist" "$nativearch"
# we need to include the base mirror again or otherwise
# packages like build-essential will be missing
case "$dist" in
oldstable)
cat << END | update_cache "$dist" "$nativearch"
deb [arch=$nativearch] $mirror $dist $components
deb [arch=$nativearch] $mirror $dist-updates main
deb [arch=$nativearch] $security_mirror $dist/updates main
END
if [ "$dist" = "stable" ]; then
# starting with bullseye, stable/updates becomes stable-security
cat << END | update_cache "$dist" "$nativearch"
;;
stable)
cat << END | update_cache "$dist" "$nativearch"
deb [arch=$nativearch] $mirror $dist $components
deb [arch=$nativearch] $mirror stable-updates main
deb [arch=$nativearch] $security_mirror stable/updates main
deb [arch=$nativearch] $mirror $dist-updates main
deb [arch=$nativearch] $security_mirror $dist-security main
END
fi
;;
esac
done
done
@ -427,7 +476,7 @@ if [ "$HAVE_QEMU" = "yes" ]; then
trap "cleanuptmpdir; cleanup_newcachedir" EXIT INT TERM
pkgs=perl-doc,systemd-sysv,perl,arch-test,fakechroot,fakeroot,mount,uidmap,qemu-user-static,binfmt-support,qemu-user,dpkg-dev,mini-httpd,libdevel-cover-perl,libtemplate-perl,debootstrap,procps,apt-cudf,aspcud,python3,libcap2-bin,gpg,debootstrap,distro-info-data,iproute2,ubuntu-keyring,apt-utils
if [ "$DEFAULT_DIST" != "stable" ]; then
if [ "$DEFAULT_DIST" != "oldstable" ]; then
pkgs="$pkgs,squashfs-tools-ng,genext2fs"
fi
if [ "$HAVE_PROOT" = "yes" ]; then
@ -579,7 +628,7 @@ END
fi
mirror="http://127.0.0.1/debian"
for dist in stable testing unstable; do
for dist in oldstable stable testing unstable; do
for variant in minbase buildd -; do
echo "running debootstrap --no-merged-usr --variant=$variant $dist \${TEMPDIR} $mirror"
cat << END > shared/test.sh

mmdebstrap
@ -58,9 +58,14 @@ use version;
*CLONE_NEWNET = \0x40000000; # net
*_LINUX_CAPABILITY_VERSION_3 = \0x20080522;
*CAP_SYS_ADMIN = \21;
our ($CLONE_NEWNS, $CLONE_NEWUTS, $CLONE_NEWIPC,
$CLONE_NEWUSER, $CLONE_NEWPID, $CLONE_NEWNET,
$_LINUX_CAPABILITY_VERSION_3, $CAP_SYS_ADMIN);
*PR_CAPBSET_READ = \23;
our (
$CLONE_NEWNS, $CLONE_NEWUTS,
$CLONE_NEWIPC, $CLONE_NEWUSER,
$CLONE_NEWPID, $CLONE_NEWNET,
$_LINUX_CAPABILITY_VERSION_3, $CAP_SYS_ADMIN,
$PR_CAPBSET_READ
);
#<<<
# type codes:
@ -325,7 +330,7 @@ sub test_unshare_userns {
}
sub read_subuid_subgid() {
my $username = getpwuid $<;
my $username = getpwuid $REAL_USER_ID;
my ($subid, $num_subid, $fh, $n);
my @result = ();
@ -345,6 +350,14 @@ sub read_subuid_subgid() {
last if ($n eq $username);
}
close $fh;
if (!length $subid) {
warning "/etc/subuid is empty";
return;
}
if ($n ne $username) {
warning "no entry in /etc/subuid for $username";
return;
}
push @result, ["u", 0, $subid, $num_subid];
if (scalar(@result) < 1) {
@ -356,21 +369,40 @@ sub read_subuid_subgid() {
return;
}
my $groupname = getgrgid $REAL_GROUP_ID;
if (!-e "/etc/subgid") {
warning "/etc/subgid doesn't exist";
return;
}
if (!-r "/etc/subgid") {
warning "/etc/subgid is not readable";
return;
}
open $fh, "<", "/etc/subgid"
or error "cannot open /etc/subgid for reading: $!";
while (my $line = <$fh>) {
($n, $subid, $num_subid) = split(/:/, $line, 3);
last if ($n eq $username);
last if ($n eq $groupname);
}
close $fh;
if (!length $subid) {
warning "/etc/subgid is empty";
return;
}
if ($n ne $groupname) {
warning "no entry in /etc/subgid for $groupname";
return;
}
push @result, ["g", 0, $subid, $num_subid];
if (scalar(@result) < 2) {
warning "/etc/subgid does not contain an entry for $username";
warning "/etc/subgid does not contain an entry for $groupname";
return;
}
if (scalar(@result) > 2) {
warning "/etc/subgid contains multiple entries for $username";
warning "/etc/subgid contains multiple entries for $groupname";
return;
}
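
For context (not part of the patch), the subordinate id entries that
read_subuid_subgid() parses have the form name:first-id:count; the user name
and ranges below are illustrative:

    $ grep "^$(id -un):" /etc/subuid
    user:100000:65536
    $ grep "^$(id -gn):" /etc/subgid
    user:100000:65536
    # if the entries are missing, recent shadow versions can add them with
    # something like:
    #   usermod --add-subuids 100000-165535 --add-subgids 100000-165535 user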
@ -982,7 +1014,8 @@ sub run_chroot {
}
} elsif ($type == 3 or $type == 4) {
# character/block special
if ($options->{mode} eq 'root' && !$options->{canmount}) {
if ((any { $_ eq $options->{mode} } ('root', 'unshare'))
&& !$options->{canmount}) {
warning "skipping bind-mounting ./dev/$fname";
} elsif (!$options->{havemknod}) {
if (!-d "$options->{root}/dev") {
@ -1024,7 +1057,7 @@ sub run_chroot {
or error "mount ./dev/$fname failed: $?";
}
} elsif ($type == 5
&& $options->{mode} eq 'root'
&& (any { $_ eq $options->{mode} } ('root', 'unshare'))
&& !$options->{canmount}) {
warning "skipping bind-mounting ./dev/$fname";
} elsif ($type == 5) { # directory
@ -1108,11 +1141,21 @@ sub run_chroot {
# We can only mount /proc and /sys after extracting the essential
# set because if we mount it before, then base-files will not be able
# to extract those
if ($options->{mode} eq 'root' && !$options->{canmount}) {
if ((any { $_ eq $options->{mode} } ('root', 'unshare'))
&& !$options->{canmount}) {
warning "skipping mount sysfs";
} elsif ($options->{mode} eq 'root' && !-d "$options->{root}/sys") {
} elsif ((any { $_ eq $options->{mode} } ('root', 'unshare'))
&& !-d "$options->{root}/sys") {
warning("skipping mounting of sysfs because the"
. " /sys directory is missing in the target");
} elsif ((any { $_ eq $options->{mode} } ('root', 'unshare'))
&& !-e "/sys") {
warning("skipping bind-mounting /sys because"
. " /sys does not exist on the outside");
} elsif ((any { $_ eq $options->{mode} } ('root', 'unshare'))
&& !-d "/sys") {
warning("skipping bind-mounting /sys because"
. " /sys on the outside is not a directory");
} elsif ($options->{mode} eq 'root') {
push @cleanup_tasks, sub {
0 == system('umount', "$options->{root}/sys")
@ -1123,15 +1166,6 @@ sub run_chroot {
'-o', 'ro,nosuid,nodev,noexec', 'sys',
"$options->{root}/sys"
) or error "mount /sys failed: $?";
} elsif ($options->{mode} eq 'unshare' && !-d "$options->{root}/sys") {
warning("skipping bind-mounting /sys because the"
. " /sys directory is missing in the target");
} elsif ($options->{mode} eq 'unshare' && !-e "/sys") {
warning("skipping bind-mounting /sys because"
. " /sys does not exist on the outside");
} elsif ($options->{mode} eq 'unshare' && !-d "/sys") {
warning("skipping bind-mounting /sys because"
. " /sys on the outside is not a directory");
} elsif ($options->{mode} eq 'unshare') {
# naturally we have to clean up after ourselves in sudo mode where
# we do a real mount. But we also need to unmount in unshare mode
@ -1162,11 +1196,21 @@ sub run_chroot {
} else {
error "unknown mode: $options->{mode}";
}
if ($options->{mode} eq 'root' && !$options->{canmount}) {
if ((any { $_ eq $options->{mode} } ('root', 'unshare'))
&& !$options->{canmount}) {
warning "skipping mount proc";
} elsif ($options->{mode} eq 'root' && !-d "$options->{root}/proc") {
} elsif ((any { $_ eq $options->{mode} } ('root', 'unshare'))
&& !-d "$options->{root}/proc") {
warning("skipping mounting of proc because the"
. " /proc directory is missing in the target");
} elsif ((any { $_ eq $options->{mode} } ('root', 'unshare'))
&& !-e "/proc") {
warning("skipping bind-mounting /proc because"
. " /proc does not exist on the outside");
} elsif ((any { $_ eq $options->{mode} } ('root', 'unshare'))
&& !-d "/proc") {
warning("skipping bind-mounting /proc because"
. " /proc on the outside is not a directory");
} elsif ($options->{mode} eq 'root') {
push @cleanup_tasks, sub {
# some maintainer scripts mount additional stuff into /proc
@ -1186,16 +1230,6 @@ sub run_chroot {
0 == system('mount', '-t', 'proc', '-o', 'ro', 'proc',
"$options->{root}/proc")
or error "mount /proc failed: $?";
} elsif ($options->{mode} eq 'unshare' && !-d "$options->{root}/proc")
{
warning("skipping bind-mounting /proc because the"
. " /proc directory is missing in the target");
} elsif ($options->{mode} eq 'unshare' && !-e "/proc") {
warning("skipping bind-mounting /proc because"
. " /proc does not exist on the outside");
} elsif ($options->{mode} eq 'unshare' && !-d "/proc") {
warning("skipping bind-mounting /proc because"
. " /proc on the outside is not a directory");
} elsif ($options->{mode} eq 'unshare') {
# naturally we have to clean up after ourselves in sudo mode where
# we do a real mount. But we also need to unmount in unshare mode
@ -1444,24 +1478,30 @@ sub setup {
warning "cannot read $options->{apttrustedparts}";
}
run_setup($options);
if (any { $_ eq 'setup' } @{ $options->{skip} }) {
info "skipping setup as requested";
} else {
run_setup($options);
}
run_hooks('setup', $options);
run_update($options);
if (any { $_ eq 'update' } @{ $options->{skip} }) {
info "skipping update as requested";
} else {
run_update($options);
}
(my $pkgs_to_install, my $essential_pkgs, my $cached_debs)
= run_download($options);
if ( $options->{mode} ne 'chrootless'
or $options->{variant} eq 'extract') {
# We have to extract the packages from @essential_pkgs either if we run
# in chrootless mode and extract variant or in any other mode. In
# other words, the only scenario in which the @essential_pkgs are not
# extracted are in chrootless mode in any other than the extract
# variant.
run_extract($options, $essential_pkgs);
}
# in theory, we don't have to extract the packages in chrootless mode
# but we do it anyways because otherwise directory creation timestamps
# will differ compared to non-chrootless and we want to create bit-by-bit
# identical tar output
#
# FIXME: dpkg could be changed to produce the same results
run_extract($options, $essential_pkgs);
run_hooks('extract', $options);
@ -1480,7 +1520,11 @@ sub setup {
run_hooks('customize', $options);
}
run_cleanup($options);
if (any { $_ eq 'cleanup' } @{ $options->{skip} }) {
info "skipping cleanup as requested";
} else {
run_cleanup($options);
}
return;
}
@ -1540,8 +1584,10 @@ sub run_setup() {
}
# if dpkg and apt operate from the outside we need some more
# directories because dpkg and apt might not even be installed inside
# the chroot
if ($options->{mode} eq 'chrootless') {
# the chroot. Thus, the following block is not strictly necessary in
# chrootless mode. We unconditionally add it anyways, so that the
# output with and without chrootless mode is equal.
{
push @directories, '/var/log/apt';
# since we do not know the dpkg version inside the chroot at this
# point, we can only omit it in chrootless mode
@ -1580,6 +1626,10 @@ sub run_setup() {
# This will affect calls to tempfile() as well as runs of "apt-get update"
# which will create temporary clearsigned.message.XXXXXX files to verify
# signatures.
#
# Setting TMPDIR to inside the chroot is also necessary for when packages
# are installed with apt from outside the chroot with
# DPkg::Chroot-Directory
{
## no critic (Variables::RequireLocalizedPunctuationVars)
$ENV{"TMPDIR"} = "$options->{root}/tmp";
@ -2042,6 +2092,18 @@ sub run_download() {
# - no simulation run is done, and
# - the variant is not extract or custom or the number to be
# installed packages not zero
#
# We could also unconditionally use the proxysolver and then "apt-get
# download" any missing packages but using the proxysolver requires
# /usr/lib/apt/solvers/apt from the apt-utils package and we want to avoid
# that dependency.
#
# In the future we want to replace downloading packages with "apt-get
# install --download-only" and installing them with dpkg by just installing
# the essential packages with apt from the outside with
# DPkg::Chroot-Directory. We are not doing that because then the preinst
# script of base-passwd will not be called early enough and packages will
# fail to install because they are missing /etc/passwd.
my @cached_debs = ();
my @dl_debs = ();
if (
@ -2142,12 +2204,54 @@ sub run_download() {
],
%result
});
} elsif ($options->{variant} eq 'essential') {
# 2021-06-07, #debian-apt on OFTC, times in UTC+2
# 17:27 < DonKult> (?essential includes 'apt' through)
# 17:30 < josch> DonKult: no, because pkgCacheGen::ForceEssential ",";
# 17:32 < DonKult> touché
my %result = ();
if ($options->{dryrun}) {
info "simulate downloading packages with apt...";
} else {
# if there are already packages in /var/cache/apt/archives/, we
# need to use our proxysolver to obtain the solution chosen by apt
if (scalar @cached_debs > 0) {
$result{EDSP_RES} = \@dl_debs;
}
info "downloading packages with apt...";
}
run_apt_progress({
ARGV => [
'apt-get',
'--yes',
'-oApt::Get::Download-Only=true',
$options->{dryrun} ? '-oAPT::Get::Simulate=true' : (),
'install',
'?narrow('
. (
defined($options->{suite})
? '?archive(' . $options->{suite} . '),'
: ''
)
. '?architecture('
. $options->{nativearch}
. '),?essential)'
],
%result
});
} elsif (
any { $_ eq $options->{variant} } (
'essential', 'standard', 'important', 'required', 'buildd',
'minbase'
)
any { $_ eq $options->{variant} }
('standard', 'important', 'required', 'minbase', 'buildd')
) {
# In the future, after bug https://bugs.debian.org/989558 is fixed, we
# want to use apt patterns to select the packages to install:
#
# ?narrow(?archive(unstable),?architecture(amd64),?priority(important))
#
# Once this is possible, we can append above statement to the apt-get
# install call in run_download() instead of assembling the package list
# here and passing it through all the way. Then this function only
# takes care of retrieving the essential packages.
my %ess_pkgs;
my %ess_pkgs_target;
my %pkgs_to_install_target = %pkgs_to_install;
@ -2462,6 +2566,7 @@ sub run_extract() {
}
# not using dpkg-deb --extract as that would replace the
# merged-usr symlinks with plain directories
# https://bugs.debian.org/989602
# not using dpkg --unpack because that would try running preinst
# maintainer scripts
my $pid1 = fork() // error "fork() failed: $!";
@ -2744,7 +2849,7 @@ sub run_essential() {
# FIXME: the dpkg config from the host is parsed before the command
# line arguments are parsed and might break this mode
# Example: if the host has --path-exclude set, then this will also
# affect the chroot.
# affect the chroot. See #808203
my @chrootless_opts = (
'-oDPkg::Options::=--force-not-root',
'-oDPkg::Options::=--force-script-chrootless',
@ -2764,10 +2869,23 @@ sub run_essential() {
$ENV{QEMU_LD_PREFIX} = $options->{root};
}
}
run_apt_progress({
ARGV => ['apt-get', '--yes', @chrootless_opts, 'install'],
PKGS => [map { "$options->{root}/$_" } @{$essential_pkgs}],
});
# we don't use apt because that will not run the base-passwd preinst
# early enough
#run_apt_progress({
# ARGV => ['apt-get', '--yes', @chrootless_opts, 'install'],
# PKGS => [map { "$options->{root}/$_" } @{$essential_pkgs}],
#});
run_dpkg_progress({
ARGV => [
'dpkg',
'--force-not-root',
'--force-script-chrootless',
"--root=$options->{root}",
"--log=$options->{root}/var/log/dpkg.log",
'--install',
'--force-depends'
],
PKGS => [map { "$options->{root}/$_" } @{$essential_pkgs}] });
} elsif (
any { $_ eq $options->{mode} }
('root', 'unshare', 'fakechroot', 'proot')
@ -2776,6 +2894,13 @@ sub run_essential() {
# we need --force-depends because dpkg does not take Pre-Depends
# into account and thus doesn't install them in the right order
# And the --predep-package option is broken: #539133
#
# We could use apt from outside the chroot using DPkg::Chroot-Directory
# but then the preinst script of base-passwd will not be called early
# enough and packages will fail to install because they are missing
# /etc/passwd. Also, with plain dpkg the essential variant can finish
# within 9 seconds. If we use apt instead, it becomes 12 seconds. We
# prefer speed here.
if ($options->{dryrun}) {
info "simulate installing essential packages...";
} else {
@ -2839,176 +2964,26 @@ sub run_install() {
) {
if ($options->{variant} ne 'custom'
and scalar @{$pkgs_to_install} > 0) {
# some packages have to be installed from the outside before
# anything can be installed from the inside.
# Advantage of running apt on the outside instead of inside the
# chroot:
#
# we do not need to install any *-archive-keyring packages
# inside the chroot prior to installing the packages, because
# the keyring is only used when doing "apt-get update" and that
# was already done at the beginning using key material from the
# outside. Since the apt cache is already filled and we are not
# calling "apt-get update" again, the keyring can be installed
# later during installation. But: if it's not installed during
# installation, then we might end up with a fully installed
# system without keyrings that are valid for its sources.list.
my @pkgs_to_install_from_outside;
# install apt if necessary
if ($options->{variant} ne 'apt') {
push @pkgs_to_install_from_outside, 'apt';
}
# since apt will be run inside the chroot, make sure that
# apt-transport-https and ca-certificates gets installed first
# if any mirror is a https URI
open(my $pipe_apt, '-|', 'apt-get', 'indextargets',
'--format', '$(URI)', 'Created-By: Packages')
or error "cannot start apt-get indextargets: $!";
while (my $uri = <$pipe_apt>) {
if ($uri =~ /^https:\/\//) {
info "https mirror found -- adding apt-transport-https "
. "and ca-certificates";
# FIXME: support for https is part of apt >= 1.5
push @pkgs_to_install_from_outside, 'apt-transport-https';
push @pkgs_to_install_from_outside, 'ca-certificates';
last;
} elsif ($uri =~ /^tor(\+[a-z]+)*:\/\//) {
# tor URIs can be tor+http://, tor+https:// or even
# tor+mirror+file://
info "tor mirror found -- adding apt-transport-tor";
push @pkgs_to_install_from_outside, 'apt-transport-tor';
last;
}
}
close $pipe_apt;
$? == 0 or error "apt-get indextargets failed";
if (scalar @pkgs_to_install_from_outside > 0) {
my @cached_debs = ();
my @dl_debs = ();
# /var/cache/apt/archives/ might not be empty either because
# the user used hooks to populate it or because skip options
# like essential/unlink or check/empty were used.
{
my $apt_archives = "/var/cache/apt/archives/";
opendir my $dh, "$options->{root}/$apt_archives"
or error "cannot read $apt_archives";
while (my $deb = readdir $dh) {
if ($deb !~ /\.deb$/) {
next;
}
if (!-f "$options->{root}/$apt_archives/$deb") {
next;
}
push @cached_debs, $deb;
}
closedir $dh;
}
my %result = ();
if ($options->{dryrun}) {
info 'simulate downloading '
. (join ', ', @pkgs_to_install_from_outside) . "...";
} else {
if (scalar @cached_debs > 0) {
$result{EDSP_RES} = \@dl_debs;
}
info 'downloading '
. (join ', ', @pkgs_to_install_from_outside) . "...";
}
run_apt_progress({
ARGV => [
'apt-get',
'--yes',
'-oApt::Get::Download-Only=true',
$options->{dryrun}
? '-oAPT::Get::Simulate=true'
: (),
'install'
],
PKGS => [@pkgs_to_install_from_outside],
%result
});
if ($options->{dryrun}) {
info 'simulate installing '
. (join ', ', @pkgs_to_install_from_outside) . "...";
} else {
my @debs_to_install;
if (scalar @cached_debs > 0 && scalar @dl_debs > 0) {
my $archives = "/var/cache/apt/archives/";
my $prefix = "$options->{root}/$archives";
# for each package in @dl_debs, check if it's in
# /var/cache/apt/archives/ and add it to
# @debs_to_install
foreach my $p (@dl_debs) {
my ($pkg, $ver_epoch) = @{$p};
# apt appends the architecture at the end of the
# package name
($pkg, my $arch) = split ':', $pkg, 2;
# apt replaces the colon by its percent encoding
my $ver = $ver_epoch;
$ver =~ s/:/%3a/;
# the architecture returned by apt is the native
# architecture. Since we don't know whether the
# package is architecture independent or not, we
# first try with the native arch and then
# with "all" and only error out if neither exists.
if (-e "$prefix/${pkg}_${ver}_$arch.deb") {
push @debs_to_install,
"$archives/${pkg}_${ver}_$arch.deb";
} elsif (-e "$prefix/${pkg}_${ver}_all.deb") {
push @debs_to_install,
"$archives/${pkg}_${ver}_all.deb";
} else {
error( "cannot find package for "
. "$pkg:$arch (= $ver_epoch) "
. "in /var/cache/apt/archives/");
}
}
} else {
my $apt_archives = "/var/cache/apt/archives/";
opendir my $dh, "$options->{root}/$apt_archives"
or error "cannot read $apt_archives";
while (my $deb = readdir $dh) {
if ($deb !~ /\.deb$/) {
next;
}
$deb = "$apt_archives/$deb";
if (!-f "$options->{root}/$deb") {
next;
}
push @debs_to_install, $deb;
}
closedir $dh;
}
if (scalar @debs_to_install == 0) {
warning "nothing got downloaded -- maybe the packages"
. " were already installed?";
} else {
# we need --force-depends because dpkg does not take
# Pre-Depends into account and thus doesn't install
# them in the right order
info 'installing '
. (join ', ', @pkgs_to_install_from_outside) . "...";
run_dpkg_progress({
ARGV => [
@{$chrootcmd}, 'dpkg',
'--install', '--force-depends'
],
PKGS => \@debs_to_install,
});
foreach my $deb (@debs_to_install) {
# do not unlink those packages that were in
# /var/cache/apt/archive before the install phase
next
if any { "/var/cache/apt/archives/$_" eq $deb }
@cached_debs;
unlink "$options->{root}/$deb"
or error "cannot unlink $deb: $!";
}
}
}
}
# - we can build chroots without apt (for example from buildinfo
# files)
#
# - we do not need to install additional packages like
# apt-transport-* or ca-certificates inside the chroot
#
# - we do not need additional key material inside the chroot
#
# - we can make use of file:// and copy://
#
# The DPkg::Install::Recursive::force=true workaround can be
# dropped after this issue is fixed:
# https://salsa.debian.org/apt-team/apt/-/merge_requests/178
#
# We could also move the dpkg call to the outside and run dpkg with
# --root but this would only make sense in situations where there
# is no dpkg inside the chroot.
if (!$options->{dryrun}) {
run_chroot(
sub {
@ -3016,8 +2991,19 @@ sub run_install() {
. " chroot...";
run_apt_progress({
ARGV => [
@{$chrootcmd}, 'apt-get',
'--yes', 'install'
'apt-get',
'-o',
'Dir::Bin::dpkg=env',
'-o',
'DPkg::Options::=--unset=TMPDIR',
'-o',
'DPkg::Options::=dpkg',
'-o',
'DPkg::Install::Recursive::force=true',
'-o',
"DPkg::Chroot-Directory=$options->{root}",
'--yes',
'install'
],
PKGS => $pkgs_to_install,
});
@ -3081,8 +3067,15 @@ sub run_cleanup() {
# apt since 1.6 creates the auxfiles directory. If apt inside the
# chroot is older than that, then it will not know how to clean it.
if (-e "$options->{root}/var/lib/apt/lists/auxfiles") {
rmdir "$options->{root}/var/lib/apt/lists/auxfiles"
or die "cannot rmdir /var/lib/apt/lists/auxfiles: $!";
remove_tree("$options->{root}/var/lib/apt/lists/auxfiles",
{ error => \my $err });
if (@$err) {
for my $diag (@$err) {
my ($file, $message) = %$diag;
if ($file eq '') { warning "general error: $message"; }
else { warning "problem unlinking $file: $message"; }
}
}
}
}
@ -3135,6 +3128,7 @@ sub run_cleanup() {
or error "cannot unlink /etc/machine-id: $!";
open my $fh, '>', "$options->{root}/etc/machine-id"
or error "failed to open(): $!";
print $fh "uninitialized";
close $fh;
}
}
@ -4163,6 +4157,8 @@ sub get_sourceslist_by_suite {
or error "cannot open $distro_info: $!";
my $i = 0;
my $matching_version;
my @releases;
my $today = POSIX::strftime "%Y-%m-%d", localtime;
while (my $line = <$fh>) {
chomp($line);
$i++;
@ -4185,6 +4181,11 @@ sub get_sourceslist_by_suite {
if ($i == 1) {
next;
}
if ( scalar @cells > 4
and $cells[4] =~ m/^\d\d\d\d-\d\d-\d\d$/
and $cells[4] lt $today) {
push @releases, $cells[0];
}
if (lc $cells[1] eq $suite or lc $cells[2] eq $suite) {
$matching_version = $cells[0];
last;
@ -4194,9 +4195,15 @@ sub get_sourceslist_by_suite {
if (defined $matching_version and $matching_version >= 11) {
$bullseye_or_later = 1;
}
if ($suite eq "stable" and $releases[-1] >= 11) {
$bullseye_or_later = 1;
}
} else {
# neither libdistro-info-perl nor distro-info-data is installed
if (any { $_ eq $suite } ('bullseye', 'bookworm', 'trixie')) {
if (
any { $_ eq $suite }
('stable', 'bullseye', 'bookworm', 'trixie')
) {
$bullseye_or_later = 1;
}
}
@ -4291,11 +4298,14 @@ sub main() {
# lxc-usernsexec -- lxc-unshare -s 'MOUNT|PID|UTSNAME|IPC' ...
# but without needing lxc
if ($ARGV[0] eq "--unshare-helper") {
if (!test_unshare_userns(1)) {
if ($EFFECTIVE_USER_ID != 0 && !test_unshare_userns(1)) {
exit 1;
}
my @idmap = read_subuid_subgid;
my $pid = get_unshare_cmd(
my @idmap = ();
if ($EFFECTIVE_USER_ID != 0) {
@idmap = read_subuid_subgid;
}
my $pid = get_unshare_cmd(
sub {
0 == system @ARGV[1 .. $#ARGV] or error "system failed: $?";
},
@ -4483,12 +4493,6 @@ sub main() {
$options->{variant} = 'important';
}
if ($options->{variant} eq 'essential'
and scalar @{ $options->{include} } > 0) {
warning "cannot install extra packages with variant essential because"
. " apt is missing";
}
# fakeroot is an alias for fakechroot
if ($options->{mode} eq 'fakeroot') {
$options->{mode} = 'fakechroot';
@ -4651,13 +4655,8 @@ sub main() {
error "unknown mode: $options->{mode}";
}
# By default, mount is not used. This is so that mounting is skipped if the
# user supplies --skip=check/canmount. This only gets enabled if
# CAP_SYS_ADMIN and unshare --mount are available in root mode.
$options->{canmount} = 0;
if (any { $_ eq 'check/canmount' } @{ $options->{skip} }) {
info "skipping check/canmount as requested";
} elsif ($options->{mode} eq 'root') {
$options->{canmount} = 1;
if ($options->{mode} eq 'root') {
# It's possible to be root but not be able to mount anything.
# This is for example the case when running under docker.
# Mounting needs CAP_SYS_ADMIN which might not be available.
@ -4674,11 +4673,15 @@ sub main() {
0 == syscall &SYS_capget, $hdrp, $datap
or error "capget failed: $!";
my ($effective, undef) = unpack "LLLLLL", $datap;
if (($effective >> $CAP_SYS_ADMIN) & 1 == 1) {
# we have CAP_SYS_ADMIN, and thus can mount
$options->{canmount} = 1;
} else {
error "root mode requires mount which requires CAP_SYS_ADMIN";
if (($effective >> $CAP_SYS_ADMIN) & 1 != 1) {
warning
"cannot mount because CAP_SYS_ADMIN is not in the effective set";
$options->{canmount} = 0;
}
if (0 == syscall &SYS_prctl, $PR_CAPBSET_READ, $CAP_SYS_ADMIN) {
warning
"cannot mount because CAP_SYS_ADMIN is not in the bounding set";
$options->{canmount} = 0;
}
# To test whether we can use mount without actually trying to mount
# something we try unsharing the mount namespace. If this is allowed,
@ -4688,15 +4691,26 @@ sub main() {
# we get 'cannot change root filesystem propagation' when running
# mmdebstrap inside a chroot for which the root of the chroot is not
# its own mount point.
if (0 == system 'unshare --mount --propagation unchanged -- true') {
$options->{canmount} = 1;
} else {
if (0 != system 'unshare --mount --propagation unchanged -- true') {
# if we cannot unshare the mount namespace as root, then we also
# cannot mount
error "root mode requires mount but unshare --mount failed";
warning "cannot mount because unshare --mount failed";
$options->{canmount} = 0;
}
}
if (any { $_ eq $options->{mode} } ('root', 'unshare')) {
if (system('mount --version>/dev/null') != 0) {
warning "cannot execute mount";
$options->{canmount} = 0;
}
}
# we can only possibly mount in root and unshare mode
if (none { $_ eq $options->{mode} } ('root', 'unshare')) {
$options->{canmount} = 0;
}
my @architectures = ();
foreach my $archs (@{ $options->{architectures} }) {
foreach my $arch (split /[,\s]+/, $archs) {
@ -4989,7 +5003,6 @@ sub main() {
'--ignore-time-conflict', '--no-options',
'--no-default-keyring', '--homedir',
$gpghome, '--no-auto-check-trustdb',
'--trust-model', 'always'
);
my ($ret, $message);
{
@ -5349,7 +5362,9 @@ sub main() {
# in unshare and root mode, other users than the current user need to
# access the rootfs, most prominently, the _apt user. Thus, make the
# temporary directory world readable.
if (any { $_ eq $options->{mode} } ('unshare', 'root')) {
if (any { $_ eq $options->{mode} } ('unshare', 'root')
or
($EFFECTIVE_USER_ID == 0 and $options->{mode} eq 'chrootless')) {
chmod 0755, $options->{root} or error "cannot chmod root: $!";
}
} elsif ($format eq 'directory') {
@ -5420,12 +5435,14 @@ sub main() {
my @idmap;
# for unshare mode the rootfs directory has to have appropriate
# permissions
if ($options->{mode} eq 'unshare') {
if ($EFFECTIVE_USER_ID != 0 and $options->{mode} eq 'unshare') {
@idmap = read_subuid_subgid;
# sanity check
if ( scalar(@idmap) != 2
|| $idmap[0][0] ne 'u'
|| $idmap[1][0] ne 'g') {
|| $idmap[1][0] ne 'g'
|| !length $idmap[0][2]
|| !length $idmap[1][2]) {
error "invalid idmap";
}
@ -5514,6 +5531,11 @@ sub main() {
);
# tar2sqfs and genext2fs do not support extended attributes
if ($format eq "squashfs") {
# tar2sqfs supports user.*, trusted.* and security.* but not system.*
# https://bugs.debian.org/988100
# lib/sqfs/xattr/xattr.c of https://github.com/AgentD/squashfs-tools-ng
# https://github.com/AgentD/squashfs-tools-ng/issues/83
# https://github.com/AgentD/squashfs-tools-ng/issues/25
warning("tar2sqfs does not support extended attributes"
. " from the 'system' namespace");
push @taropts, '--xattrs', '--xattrs-exclude=system.*';
@ -5988,10 +6010,11 @@ option happens to be an existing file, then its contents are pasted into the
chroot's sources.list. This can be used to supply a deb822 style
sources.list. If I<MIRROR> is C<-> then standard input is pasted into the
chroot's sources.list. More than one mirror can be specified; they are appended
to the chroot's sources.list in the given order. If any mirror contains a
https URI, then the packages apt-transport-https and ca-certificates will be
installed inside the chroot. If any mirror contains a tor+xxx URI, then the
apt-transport-tor package will be installed inside the chroot.
to the chroot's sources.list in the given order. If you specify an https or tor
I<MIRROR> and you want the chroot to be able to update itself, do not forget to
also install the ca-certificates package, the apt-transport-https package (for
apt versions older than 1.5) and/or the apt-transport-tor package via the
B<--include> option, as necessary.
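For instance, a chroot using an https mirror might be created like this (only a
sketch; the mirror URI and the target filename are placeholders):
$ mmdebstrap --include=ca-certificates unstable chroot.tar https://deb.debian.org/debian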
The optional I<TARGET> argument can either be the path to a directory, the path
to a tarball, the path to a squashfs image, the path to an ext2 image,
@ -6005,8 +6028,7 @@ symbolic name (eg, unstable, testing, stable, oldstable). Any suite name that
works with apt on the given mirror will work. If no I<SUITE> was specified,
then a single I<MIRROR> C<-> is added and thus the information of the desired
suite has to come from standard input as part of a valid apt sources.list file.
If mmdebstrap is instructed to retrieve packages from multiple releases, then
the value of the I<SUITE> argument will be used to determine which apt index to
The value of the I<SUITE> argument will be used to determine which apt index to
use for finding out the set of C<Essential:yes> packages and/or the set of
packages with the right priority for the selected variant. See the section
B<VARIANTS> for more information.
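For example, when packages are pulled from both unstable and experimental (a
sketch; the mirror entries and the target filename are placeholders), the
I<SUITE> argument C<unstable> determines which index is consulted for the
essential and priority package sets:
$ mmdebstrap unstable chroot.tar "deb http://deb.debian.org/debian unstable main" "deb http://deb.debian.org/debian experimental main"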
@ -6153,9 +6175,7 @@ option depends on the selected variant. The B<extract> and B<custom> variants
install no packages by default, so for these variants, the packages specified
by this option will be the only ones that get either extracted or installed by
dpkg, respectively. For all other variants, apt is used to install the
additional packages. The B<essential> variant does not include apt and thus,
the include option will only work when the B<chrootless> mode is selected and
thus apt from the outside can be used. Package names are directly passed to
additional packages. Package names are directly passed to
apt and thus, you can use apt features like C<pkg/suite>, C<pkg=version>,
C<pkg-> or use a glob or regex for C<pkg>. See apt(8) for the supported
syntax. The option can be specified multiple times and the packages are
@ -6449,9 +6469,9 @@ B<chroot(1)>.
All package sets also include the direct and indirect hard dependencies (but
not recommends) of the selected package sets. The variants B<minbase>,
B<buildd> and B<-> resemble the package sets that debootstrap would install
with the same I<--variant> argument. If multiple releases are passed as apt
sources to B<mmdebstrap>, then the release with a name matching the I<SUITE>
argument will be used to determine the C<Essential:yes> and priority values.
with the same I<--variant> argument. The release with a name matching the
I<SUITE> argument will be used to determine the C<Essential:yes> and priority
values.
=over 8
@ -6686,8 +6706,6 @@ Upon startup, several checks are carried out, like:
=item * which mode to use and whether prerequisites are met
=item * if you are root, check whether you have the ability to mount. This check requires the C<unshare> program from the C<util-linux> package and can be disabled by using B<--skip=check/canmount>.
=item * whether binaries of the requested architecture can be executed, possibly using qemu binfmt_misc support. This check requires arch-test and can be disabled using B<--skip=check/qemu>
=item * how the apt sources can be assembled from I<SUITE>, I<MIRROR> and B<--components> and/or from standard input as deb822 or one-line format and whether the required GPG keys exist.
@ -6700,7 +6718,7 @@ Upon startup, several checks are carried out, like:
=item B<setup>
The following tasks are carried out:
The following tasks are carried out unless B<--skip=setup> is used:
=over 4
@ -6725,7 +6743,7 @@ Run B<--setup-hook> options and all F<setup*> scripts in B<--hook-dir>.
=item B<update>
Runs C<apt-get update> using the temporary apt configuration file created in
the B<setup> step.
the B<setup> step. This can be disabled using B<--skip=update>.
=item B<download>
@ -6741,8 +6759,7 @@ the required priority.
=item B<extract>
Extract the downloaded packages into the rootfs. This step is not carried out
in chrootless mode if the variant is not B<extract>.
Extract the downloaded packages into the rootfs.
=item B<extract-hook>
@ -6782,7 +6799,7 @@ This step is not carried out in B<extract> mode.
=item B<cleanup>
Performs cleanup tasks like:
Performs cleanup tasks, unless B<--skip=cleanup> is used:
=over 4
@ -6963,6 +6980,14 @@ Create a system that can be used with docker:
root
$ sudo docker rmi debian
Create a system that can be used with podman:
$ mmdebstrap unstable | podman import - debian
[...]
$ podman run --network=none -it --rm debian whoami
root
$ podman rmi debian
=head1 ENVIRONMENT VARIABLES
=over 8
@ -7069,7 +7094,13 @@ Therefore, until this dpkg limitation is fixed, a default dpkg configuration is
recommended on machines running B<mmdebstrap>. If you are using B<mmdebstrap>
as a non-root user, then as a workaround you could run C<chmod 600
/etc/dpkg/dpkg.cfg.d/*> so that the config files are only accessible by the
root user.
root user. See Debian bug #808203.
The C<file://> URI type cannot be used to install the essential packages. This
is because B<mmdebstrap> uses dpkg to install the packages that apt places into
F</var/cache/apt/archives>, but with C<file://>, apt will not copy the files
into that directory even with C<--download-only>. Use C<copy://> instead, which
is equivalent to C<file://> except that the archives are copied into
F</var/cache/apt/archives>.
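For example, a local mirror tree under F</srv/mirror/debian> (an assumed path,
purely for illustration) could be used like this:
$ mmdebstrap unstable chroot.tar "deb copy:///srv/mirror/debian unstable main"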
With apt versions before 2.1.16, setting C<[trusted=yes]> or
C<Acquire::AllowInsecureRepositories "1"> to allow signed archives without a

@ -46,7 +46,10 @@ def main():
description="""\
Filters a tarball on standard input by the same rules as the dpkg --path-exclude
and --path-include options and writes the resulting tarball to standard output. See
dpkg(1) for information on how these two options work in detail.
dpkg(1) for information on how these two options work in detail. Since this is
meant for filtering tarballs that store a rootfs, note that paths must be given
as /path and not as ./path, even though they might be stored with the latter
prefix in the tarball.
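For example, to drop all documentation from a tarball produced by mmdebstrap
(the pattern and file names are only illustrative; the option names mirror the
dpkg ones described above):
  mmdebstrap unstable - | ./tarfilter --path-exclude='/usr/share/doc/*' > chroot.tar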
Similarly, filter out unwanted pax extended headers. This is useful in cases
where a tool only accepts certain xattr prefixes. For example, tar2sqfs only
