Commit Graph

1858 Commits

Miloslav Trmač
7c40e85968 Fix image ID query
Read the full one, not the truncated one

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2025-01-23 00:11:24 +01:00
Miloslav Trmač
11ee6c4f90 Revert "Use the config digest to compare images loaded/pulled using different methods"
This reverts commit 1d7ec1ef5f.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2025-01-23 00:11:24 +01:00
Daniel J Walsh
6565bde6e8 Add --no-hostname option
Fixes: https://github.com/containers/podman/issues/25002

Also add the ability to inspect containers for
UseImageHosts and UseImageHostname.

Finally, fixed some bugs in the handling of --no-hosts for Pods,
which I discovered.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2025-01-15 06:51:32 -05:00
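A hedged usage sketch for the commit above; the flag name comes from the commit message, while the way the fields appear in the inspect output is an assumption:

    # run a container whose /etc/hostname is not managed by podman (illustrative)
    podman run -d --name demo --no-hostname alpine sleep 300
    # look for the new inspect fields; their exact JSON location is assumed
    podman inspect demo | grep -E 'UseImageHosts|UseImageHostname'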
openshift-merge-bot[bot]
b4ef95590b Merge pull request #24868 from rhatdan/kube
Kube volumes can not contain _
2025-01-07 01:23:05 +00:00
openshift-merge-bot[bot]
0642bb1c25 Merge pull request #24861 from Luap99/debian-fixes
Some debian test fixes
2024-12-19 11:42:58 +00:00
Daniel J Walsh
ecd882f9f7 Kube volumes can not contain _
Need to substitute all _ with - for k8s support.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2024-12-18 09:07:57 -05:00
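A minimal sketch of the mapping described in the commit above: Kubernetes object names cannot contain "_", so it is replaced with "-" when generating kube YAML (variable names here are illustrative):

    vol="my_data_vol"
    echo "${vol//_/-}"   # prints: my-data-vol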
Paul Holzinger
f2f6eb88e9 test/system: fix "podman play --build private registry" error
When running this test on a system without unqualified-search registries,
it fails with a different error. To avoid that case, define our own
registries.conf that sets quay.io as a registry. This should make the test
pass in the Debian env.

Signed-off-by: Paul Holzinger <git@holzinger.dev>
2024-12-17 17:20:28 +01:00
Paul Holzinger
153a975888 shell completion: respect CONTAINERS_REGISTRIES_CONF
Found in Debian testing, where by default there are no unqualified-search
registries installed. As such the test failed, as the FIXME said. Now
there is no need for the test to assume anything.

Instead, set our own config via CONTAINERS_REGISTRIES_CONF so we can
do exact matches. However, that env variable was not read by the shell
completion code, so move some code around to make it read the variable
the same way as podman login/logout.

Signed-off-by: Paul Holzinger <git@holzinger.dev>
2024-12-17 16:29:40 +01:00
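A sketch of the setup described above, assuming a temporary registries.conf path and cobra's hidden completion entry point (both illustrative):

    printf 'unqualified-search-registries = ["quay.io"]\n' > /tmp/registries.conf
    # completion suggestions now come from a known registry list
    CONTAINERS_REGISTRIES_CONF=/tmp/registries.conf podman __completeNoDesc pull ""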
Daniel J Walsh
8b23e6d408 When generating host volumes for k8s, force to lowercase
Fixes: https://github.com/containers/podman/issues/16542

Kubernetes only allows lower case persistent volume names.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2024-12-16 11:22:22 -05:00
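A small sketch of the rule mentioned above (the exact name derivation in the generator is assumed): Kubernetes only accepts lower-case persistent volume names, so the generated name is down-cased:

    name="HostData"
    echo "$name" | tr '[:upper:]' '[:lower:]'   # prints: hostdata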
Jan Rodák
8f1266c717 Fix overwriting of LinuxResources structure in the database
with default values when changing the configuration with podman update.

The new LinuxResources structure did not preserve the parts of the current configuration that were not affected by the change.

Fixes: https://issues.redhat.com/browse/RUN-2375

Signed-off-by: Jan Rodák <hony.com@seznam.cz>
2024-12-04 13:16:32 +01:00
Mario Loriedo
0d3a653c30 Fix podman info with multiple imagestores
The command `podman info` returned only one imagestore in
`store.graphOptions.<driver>.imagestore` even if multiple
image stores were configured.

This change replaces the field `<driver>.imagestore` with
the field `<driver>.additionalImageStores`, that instead
of a string is an array of strings and that includes all
the configured additional image stores.

Fix https://github.com/containers/storage/issues/2094

Signed-off-by: Mario Loriedo <mario.loriedo@gmail.com>
2024-12-02 15:37:16 +00:00
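An illustrative way to check the new field; the Go template path mirrors the JSON keys named in the commit message, and the store paths shown are made up:

    podman info --format '{{ json .Store.GraphOptions }}'
    # ... "overlay.additionalImageStores": ["/mnt/store1", "/mnt/store2"] ...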
openshift-merge-bot[bot]
b3c02684fd Merge pull request #24701 from giuseppe/stats-ignore-no-cgroups
stats: ignore errors from containers without cgroups
2024-11-28 15:08:08 +00:00
Giuseppe Scrivano
6673f5c202 stats: ignore errors from containers without cgroups
Now `podman stats --all` ignores failures from a container that has no
cgroups.

Closes: https://github.com/containers/podman/issues/24632

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
2024-11-28 15:19:04 +01:00
Miloslav Trmač
6f85808707 Clarify the reason for skip_if_remote
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2024-11-27 21:26:11 +01:00
Miloslav Trmač
39e08c3ffa Sanity-check that the test is really using partial pulls
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2024-11-27 21:26:11 +01:00
Miloslav Trmač
5ff496ea2b Fix apparent typos in zstd:chunked tests
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2024-11-27 19:56:48 +01:00
Ygal Blum
13affe96d6 Quadlet - Use = sign when setting the pull arg for build
Signed-off-by: Ygal Blum <ygal.blum@gmail.com>
2024-11-22 15:06:50 -05:00
openshift-merge-bot[bot]
d85ac938e6 Merge pull request #24442 from Honny1/change-healthcheck-config-via-podman-update
Configure HealthCheck with `podman update`
2024-11-22 15:57:30 +00:00
Jan Rodák
a1249425bd Configure HealthCheck with podman update
New flags for `podman update` can change the HealthCheck configuration of a container after it has been started, without having to restart or recreate the container.

This can help determine why a given container suddenly started failing its HealthCheck without interfering with the services it provides. For example, reconfigure the HealthCheck to keep logs longer than the usual last X results, store logs to other destinations, etc.

Fixes: https://issues.redhat.com/browse/RHEL-60561

Signed-off-by: Jan Rodák <hony.com@seznam.cz>
2024-11-19 19:44:14 +01:00
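A hedged usage sketch: the flag names below mirror the HealthCheck flags of podman create/run and are assumed to be the ones this change adds to `podman update`:

    podman update --health-cmd "curl -f http://localhost:8080/ || exit 1" \
                  --health-interval 30s \
                  --health-retries 5 \
                  mycontainer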
Ed Santiago
97ed067d1a CI: --image-volume test: robustify
Test is failing on 1mt because of differences between 'stat'
command output and /proc/mounts. Solution: compare stat %t
(hex filesystem type), not %T (human-readable). This should
match no matter what kernel version or version of stat on
host/container.

Fixes: #24611

Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-11-19 10:03:55 -07:00
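The distinction the fix relies on, shown with plain stat calls: %T is the human-readable filesystem name, which varies across stat and kernel versions, while %t is the hex filesystem type ID:

    stat -f -c '%T' /tmp    # e.g. tmpfs
    stat -f -c '%t' /tmp    # e.g. 1021994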
Ed Santiago
1c77ee6fc5 CI: system tests: parallelize 010
Final cleanup. Has been working fine in #23257 for weeks.
Not much gain here, but every little bit helps.

Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-11-13 04:14:57 -07:00
Ed Santiago
969417711d system tests: safer install_kube_template()
Previous version was badly broken: it relied on 'make'
rebuilding a file under cwd, which is a no-no; and, in
the case where we don't have a source directory, just
blindly hoped that there'd be a system-installed .service
file with the correct path to podman.

Solution:
  . if running in source directory, run sed directly into
    destination service file in $UNIT_DIR. This is ugly
    duplication of a line in Makefile.

  . if NOT running in a source directory, check $PODMAN:
    . if it's /usr/bin/podman, continue. Include a warning
      that will be shown only on test failure.
    . otherwise skip, because we don't know what we're testing

Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-11-11 10:44:32 -07:00
openshift-merge-bot[bot]
ee5b8de70d Merge pull request #24413 from giuseppe/add-test-zstd-chunked
tests: add basic zstd:chunked system test
2024-11-08 14:36:06 +00:00
openshift-merge-bot[bot]
a1c1ae62e7 Merge pull request #24340 from l0rd/ssh-knownhosts-test
New `system connection add` test
2024-11-08 13:24:46 +00:00
Giuseppe Scrivano
30a82cad7a test: add zstd:chunked system tests
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
2024-11-08 12:39:07 +01:00
Ed Santiago
fbbfd07463 kube SIGINT system test: fix race in timeout handling
Up to now this test has been run using:

    PODMAN_TIMEOUT=2 run_podman kube play ...

...and this gives podman time to start the pod before getting
the signal.

When run in parallel, under heavy load, the above command seems
to time out before podman has gotten its act together. Weird
things happen, like weird exit status and (most crucially)
zombie containers.

Solution: wait for container to actually start before we kill it.

Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-11-07 11:01:08 -07:00
Mario Loriedo
cbf1d7fcae Avoid printing PR text to stdout in system test
Signed-off-by: Mario Loriedo <mario.loriedo@gmail.com>
2024-11-07 17:48:27 +01:00
Paul Holzinger
fb3a0e93a8 test/system: add regression test for TZDIR local issue
Regression test for #23550. Setting the TZDIR env should make no
difference for the local timezone as this is not a real timezone name
that is resolved from that directory.

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2024-11-07 10:39:15 +01:00
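A sketch of the check, with an illustrative directory: "local" is not a zone name resolved under TZDIR, so pointing TZDIR somewhere empty must not change the container's local time:

    TZDIR=/tmp/empty-tzdir podman run --rm --tz local alpine date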
Daniel J Walsh
6346a11b09 Additional support for SubPath volume mounts
Add support for inspecting Mounts which include SubPaths.

Handle SubPaths for kubernetes image volumes.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2024-11-06 10:10:26 -05:00
Ed Santiago
2c01264568 CI: systests: workaround for parallel podman-stop flake
Just bump up a timeout when running parallel, because of high load.

Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-11-04 10:45:14 -07:00
Paul Holzinger
d633824a95 Instrument cleanup tracer to log weird volume removal flake
Debug aid for #23913: I thought that if we have no idea which process is
nuking the volume, then we need to figure this out. As there is no reproducer,
we can (ab)use the cleanup tracer. Simply trace all unlink syscalls to
see which process deletes our specially named volume. Given the volume
name is used as a path on the fs and is deleted on volume rm, we should
hopefully know exactly which process deleted it next time.

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2024-10-30 18:50:07 +01:00
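One way to capture who removes a path, in the spirit of the tracer described above; the real CI tracer may be implemented differently, and the output format is illustrative:

    bpftrace -e 'tracepoint:syscalls:sys_enter_unlinkat {
        printf("%s (pid %d) unlinkat %s\n", comm, pid, str(args->pathname));
    }'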
openshift-merge-bot[bot]
3a7e1deed4 Merge pull request #24390 from edsantiago/safename-070
CI: make 070-build.bats use safe image names
2024-10-28 14:41:28 +00:00
openshift-merge-bot[bot]
2cbb2e8c42 Merge pull request #24392 from edsantiago/parallelize-520
CI: parallelize 520-checkpoint tests
2024-10-28 13:49:13 +00:00
Ed Santiago
41a82c9a95 CI: parallelize 450-interactive system tests
This has been running reliably for weeks in #23275

Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-10-28 07:03:29 -06:00
Ed Santiago
10d056cc5e CI: parallelize 520-checkpoint tests
This has been running reliably for weeks in #23275

Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-10-28 07:02:51 -06:00
Ed Santiago
e6b7e4ff84 CI: make 070-build.bats use safe image names
In preparation for maybe some day being able to run build tests
in parallel.

SUPER IMPORTANT NOTE! BUILD TESTS CANNOT BE PARALLELIZED YET!
buildah, when run in parallel, barfs with:

    race: parallel builds: copying...committing...creating... layer not known

Until this is fixed, podman-build can never be run in parallel.
See https://github.com/containers/buildah/issues/5674

This PR is simply cleaning things up so, if/when that day comes,
the ensuing parallelize PR will be short & sweet.

Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-10-28 06:58:26 -06:00
openshift-merge-bot[bot]
0962a1e1bf Merge pull request #24352 from edsantiago/systemd-leak-cleanup
System tests: clean up unit file leaks
2024-10-28 12:07:27 +00:00
Paul Holzinger
64516e1b8f test/system: add podman network reload test to distro gating
The recent fedora kernel 6.11.4 has a problem with ipv6 networks [1].
This is not a podman bug at all but rather a kernel regression. I can
reproduce the issue easily by running this test.

Given that many users were hit by this, add it to the distro-level gating
which runs in the Fedora openQA framework; then we should hopefully catch a
bad kernel like this in the future and prevent it from going
into stable.

[1] https://github.com/containers/podman/issues/24374

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2024-10-28 11:51:43 +01:00
Ed Santiago
743a0d49eb System tests: clean up unit file leaks
Quadlet tests and some systemd tests leak unit files, as
reported by 'systemctl list-units --failed'. Clean them up.

Signed-off-by: Ed Santiago <santiago@redhat.com>
2024-10-28 04:45:04 -06:00
Paul Holzinger
6069cdda00 healthcheck: do not leak startup service
The startup service is special because we have to transition from
startup to the normal unit. And in order to do so we kill ourselves (as
we are run as part of the service). This means we always exited 1, which
causes systemd to keep us as failed and not remove the transient unit
unless "reset-failed" is called. As there is no process around to do
that, we cannot really do this, so make us exit(0), which makes more
sense.

Of course we could try to reset-failed the unit later, but the code for
that seems more complicated than it is worth.

Add a new test from Ed that ensures we check for all healthcheck units
not just the timer to avoid leaks. I slightly modified it to provide a
better error on leaks.

Fixes: 0bbef4b830 ("libpod: rework shutdown handler flow")
Fixes: #24351

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2024-10-25 13:47:59 +02:00
Jan Rodák
afedb83917 Add Startup HealthCheck configuration to podman inspect
Signed-off-by: Jan Rodák <hony.com@seznam.cz>
2024-10-24 13:49:51 +02:00
David Gibson
5b131b8273 test/system: Fix spurious "duplicate tests" failures in pasta tests
As an internal consistency check, the pasta tests check for duplicated test
cases by grepping a log file for a parsed test id.  However, it uses
grep -F for this purpose, which performs a substring match rather than an
exact match.  There are some tests which generate an id that is a
substring of the id of other tests, so when test order is randomised, this
can cause a spurious failure.  This can happen in practice when running
the tests in parallel with very high concurrency (e.g. -j 100).

Fix this by adding the -x option to grep, which only checks for full line
exact matches.

Fixes: https://github.com/containers/podman/issues/24342

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2024-10-23 14:02:53 +11:00
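Why -x matters, sketched with illustrative test ids: -F alone does substring matching, so one test's id also matches the log line of another test whose id extends it; -x requires a full-line match:

    printf 'pasta test 10\n' | grep -qF  'pasta test 1' && echo "substring match (false duplicate)"
    printf 'pasta test 10\n' | grep -qFx 'pasta test 1' || echo "no exact-line match"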
Miloslav Trmač
6fd0e227b4 Improve "podman load - from URL"
Don't assume that the loaded image will be deduplicated
with the server image.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2024-10-22 19:36:14 +02:00
Miloslav Trmač
77ef28c14f Try to repair c/storage after removing an additional image store
The additional image store feature assumes that images / layers
in the additional store never go away, while we do remove it after
this test. Try to repair the store.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2024-10-22 19:36:03 +02:00
Miloslav Trmač
1d7ec1ef5f Use the config digest to compare images loaded/pulled using different methods
Historically, non-schema1 images had a deterministic image ID == config digest.
With zstd:chunked, we don't want to deduplicate layers pulled by consuming the
full tarball and layers partially pulled based on TOC, because we can't cheaply
ensure equivalence; so, image IDs for images where a TOC was used differ.

To accommodate that, compare images using their config digests, not using image IDs.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2024-10-22 19:36:02 +02:00
Miloslav Trmač
bf8f2b5551 Simplify the additional store test
When looking up the current-store image ID, do that
from the same output where we verify that the ID is from the
current store, instead of listing images twice.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2024-10-22 19:15:46 +02:00
Miloslav Trmač
3bc6072142 Fix the store choice in "podman pull image with additional store"
The test got the stores' RW status backwards.

Before zstd:chunked, both image IDs should be the same, so this used
to make no difference.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2024-10-22 19:15:46 +02:00
Giuseppe Scrivano
94878af151 test: set soft ulimit
When the current soft limit is higher than the new value, ulimit fails
to set the hard limit, as seen here (tested on Rawhide):

[root@rawhide ~]# ulimit -n -H 1048575
-bash: ulimit: open files: cannot modify limit: Invalid argument

To avoid the problem, also set the soft limit:

[root@rawhide ~]# ulimit -n -H
12345678
[root@rawhide ~]# ulimit -n -H 1048575
-bash: ulimit: open files: cannot modify limit: Invalid argument
[root@rawhide ~]# ulimit -n -SH 1048575
[root@rawhide ~]# ulimit -n -H
1048575

commit 71d5ee0e04 introduced the issue.

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
2024-10-22 12:05:07 +02:00
Miloslav Trmač
fdc9feea0e Fix 330-corrupt-images.bats in composefs test runs
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2024-10-18 23:44:04 +02:00
Paul Holzinger
57b022782b quadlet: ensure user units wait for the network
As documented in the issue, there is no way to wait for system units from
the user session[1]. This causes problems for rootless quadlet units, as
they might be started before the network is fully up. While this was
always the case and thus was never really noticed, the main thing that
triggered a bunch of errors was the switch to pasta.

Pasta requires the network to be fully up in order to correctly select
the right "template" interface based on the routes. If it cannot find a
suitable interface it just fails and we cannot start the container,
understandably leading to a lot of frustration from users.

As there is no sign of any movement on the systemd issue, we work around
it here by using our own user unit that checks whether the system session's
network-online.target is ready.

Now for testing it is a bit complicated. While we do now correctly test
the root and rootless generators since commit ada75c0bb8, the resulting
Wants/After= lines differ between them, and there is no logic in the
test files themselves to say whether root or rootless is being tested in
order to match specifics. One idea was to use `assert-key-is-rootless/root`,
but that seemed like more duplication for little reason, so use a regex and
allow both so the check always passes. To still have some test coverage,
add a check in the system test that asks systemd whether we did indeed get
the right dependencies, where we can check for an exact root/rootless name
match.

[1] https://github.com/systemd/systemd/issues/3312

Fixes #22197

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
2024-10-18 11:43:48 +02:00
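A rough sketch of the exact-name dependency check mentioned above; the unit name is an assumption for illustration, not taken from the generator:

    systemctl --user show mycontainer.service --property=Wants,After | grep -i network-online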