containers/podman/pull/17186 and containers/podman/pull/17201 were
merged at roughly the same time. Both work fine in isolation, but the
new kube test breaks when the two are combined.
Fix the IPC kube test to make CI healthy.
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
- Use filepath.Join(podmanTest.TempDir, "any") instead of "/tmp/any"
- Add generatePolicyFile() to avoid hardcoding "keyPath": "tmp/key.gpg"
Signed-off-by: Toshiki Sonoda <sonoda.toshiki@fujitsu.com>
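For illustration, the same pattern with the standard testing package: derive every path from the test's own temporary directory instead of a fixed /tmp location. This is only a sketch, not the actual podman e2e code; t.TempDir() stands in for podmanTest.TempDir.

```go
package example

import (
	"os"
	"path/filepath"
	"testing"
)

// TestKeyPath sketches the idea from the commit above: build paths
// under the test's temporary directory so parallel test runs cannot
// collide on a shared /tmp file.
func TestKeyPath(t *testing.T) {
	tmpDir := t.TempDir() // stands in for podmanTest.TempDir
	keyPath := filepath.Join(tmpDir, "key.gpg")
	if err := os.WriteFile(keyPath, []byte("dummy key material"), 0o600); err != nil {
		t.Fatal(err)
	}
}
```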
The Device, Type, Copy and Options keys are now supported in
quadlet .volume files. This allows users to create filesystem-based
volumes with quadlet .volume files.
Signed-off-by: Ingo Becker <ingo@orgizm.net>
Signed-off-by: Ygal Blum <ygal.blum@gmail.com>
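A hypothetical .volume unit using the newly supported keys might look like the following; the values are made up for illustration and are not taken from the commit.

```ini
# mydata.volume - illustrative example only
[Volume]
Device=tmpfs
Type=tmpfs
Options=size=100m
Copy=true
```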
Drop support for remote use-cases when `.containerignore` or
`.dockerignore` is a symlink pointing to an arbitrary location on the host.
Signed-off-by: Aditya R <arajan@redhat.com>
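A minimal sketch of the kind of check this implies, using a hypothetical helper name rather than podman's actual implementation:

```go
package example

import (
	"fmt"
	"os"
)

// checkIgnoreFile is a hypothetical helper: refuse to use
// .containerignore/.dockerignore for a remote build when the file is a
// symlink that could point anywhere on the host.
func checkIgnoreFile(path string) error {
	fi, err := os.Lstat(path) // Lstat does not follow symlinks
	if err != nil {
		if os.IsNotExist(err) {
			return nil // no ignore file, nothing to check
		}
		return err
	}
	if fi.Mode()&os.ModeSymlink != 0 {
		return fmt.Errorf("%s is a symlink; refusing to use it for a remote build", path)
	}
	return nil
}
```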
The test was added in commit 1424f0958f; it can flake because the
attach test needs the message to appear in the log. On slow CI systems
this can take longer. Add retry logic that checks the container log
every second for up to 5 seconds. That should be plenty of time.
Fixes #17204
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
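The retry pattern, sketched with a placeholder log fetcher (getLogs is not the real test helper):

```go
package example

import (
	"strings"
	"time"
)

// waitForLogLine polls the container log once per second for up to
// five seconds instead of asserting on the first read, matching the
// retry logic described above.
func waitForLogLine(getLogs func() string, want string) bool {
	for i := 0; i < 5; i++ {
		if strings.Contains(getLogs(), want) {
			return true
		}
		time.Sleep(1 * time.Second)
	}
	return strings.Contains(getLogs(), want)
}
```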
If the image being used has a user set that is a positive
integer greater than 0, then set the securityContext.runAsNonRoot
to true for the container in the generated kube yaml.
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
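A rough sketch of the rule, assuming the image user is available as a string (the helper name is made up):

```go
package example

import "strconv"

// runAsNonRoot returns true only when the image's configured user
// parses as an integer UID greater than zero; named users ("nginx")
// or UID 0 leave securityContext.runAsNonRoot unset.
func runAsNonRoot(imageUser string) bool {
	uid, err := strconv.Atoi(imageUser)
	return err == nil && uid > 0
}
```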
After https://github.com/containers/netavark/pull/452, `netavark` is
in charge of deciding `custom_dns_servers`, if any, so let's honor that
and not set these manually in libpod.
This also ensures docker parity.
Podman populates a container's `/etc/resolv.conf` with custom DNS
servers (specified via `--dns` or `dns_server` in containers.conf)
even when the container is connected to a network where `dns_enabled`
is `true`. This behavior does not match docker's, so the following
commit ensures that podman only populates custom DNS servers when the
container is not connected to any network with DNS enabled; where
`dns_enabled` is `true`, resolution for the custom DNS servers happens
via `aardvark-dns` or `dnsname`.
Reference: https://docs.docker.com/config/containers/container-networking/#dns-services
Closes: containers#16172
Signed-off-by: Aditya R <arajan@redhat.com>
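The resulting decision, reduced to a sketch (names and shape are illustrative, not libpod's API):

```go
package example

// useCustomDNS: custom DNS servers (--dns / dns_server in
// containers.conf) are written directly into the container's
// /etc/resolv.conf only when no attached network has dns_enabled;
// otherwise aardvark-dns or dnsname performs the resolution and is
// expected to forward to the custom servers.
func useCustomDNS(customServers []string, anyNetworkHasDNS bool) bool {
	return len(customServers) > 0 && !anyNetworkHasDNS
}
```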
`default` is already used as a network mode, i.e. `podman run
--network default` chooses the default mode, not a network named
`default`. We already block names that collide with other network
modes; `default` was simply forgotten.
Fixes #17169
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
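Roughly, the validation amounts to something like this sketch; the reserved-name list is illustrative, not the exact set podman checks:

```go
package example

import "fmt"

// validateNetworkName rejects names that collide with network modes,
// now including "default".
func validateNetworkName(name string) error {
	reserved := map[string]bool{
		"default":     true,
		"host":        true,
		"none":        true,
		"bridge":      true,
		"private":     true,
		"slirp4netns": true,
	}
	if reserved[name] {
		return fmt.Errorf("%q is used as a network mode and cannot be used as a network name", name)
	}
	return nil
}
```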
Output from podman system service, on system tests, is
being saved... it just hasn't been collected as an artifact.
Start collecting it. And, remove obsolete-unused-misleading
code that made me think it _was_ being collected.
Also: log system-service output for bud tests, and set
log-level to info per suggestion from @Luap99
Signed-off-by: Ed Santiago <santiago@redhat.com>
July 2022: test was flaking on new VM images. We needed new
images, so I filed #15014 and skipped the test.
January 2023: no attention from anyone, so I'll try bumping up
a dd timeout from 10s to 30s. But in the interim, the test
has broken: it used to expect "Containerfile" in output (this
was deliberately added in #13655)... but #16810 changed that
so Containerfile no longer appears. @flouthoc argues that
this too is deliberate (#17059). Okay, so let's change the
test then. All I care about is not adding more regressions.
Signed-off-by: Ed Santiago <santiago@redhat.com>
Unify the functions used to detect rootless into "isRootless()".
This function can detect when we have joined the user namespace by
mistake.
Signed-off-by: Toshiki Sonoda <sonoda.toshiki@fujitsu.com>
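A sketch of what such a unified helper could look like; the environment variable used as the "already joined a user namespace" signal is an assumption for this illustration:

```go
package example

import "os"

// isRootless treats the run as rootless either when the effective UID
// is non-zero or when the process has already joined a user namespace
// (signalled here by an environment variable; assumed for this sketch).
func isRootless() bool {
	return os.Geteuid() != 0 || os.Getenv("_CONTAINERS_USERNS_CONFIGURED") != ""
}
```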
...and instrument with 'podman ps'es for debugging failures.
Test flakes pretty regularly in Fedora gating. If the increased
timeout doesn't help, at least we should be able to see if the
container is stopping or failed or something.
Signed-off-by: Ed Santiago <santiago@redhat.com>
to not give a false sense of security since these are not a security
mechanism but a hook to run arbitrary code before executing a
command.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Until Podman v4.3, privileged rootfull containers would expose all the
host devices to the container while rootless ones would exclude
`/dev/ptmx` and `/dev/tty*`.
When 5a2405ae1b ("Don't mount /dev/tty* inside privileged containers
running systemd") landed, rootfull containers started excluding all the
`/dev/tty*` devices when the container would be running in systemd
mode, reducing the disparity between rootless and rootfull containers
when running in this mode.
However, this commit regressed some legitimate use cases: exposing
non-virtual-terminal tty devices (modems, arduinos, serial
consoles, ...) to the container, and the regression was addressed in
f4c81b0aa5 ("Only prevent VTs to be mounted inside privileged
systemd containers").
This calls into question why all tty devices were historically
prevented from being shared with rootless privileged containers.
A look at the podman git history reveals that the code was introduced
as part of ba430bfe5e ("podman v2 remove bloat v2"), and obviously
was copy-pasted from some other code I couldn't find.
In any case, we can easily guess that this check was put in place for
the same reason 5a2405ae1b was introduced: to prevent breaking the
host environment's consoles. This also means that excluding *all* tty
devices is overly broad; the exclusion should instead be limited to
just virtual terminals, as we do on the rootfull path.
This is what this commit does, thus making the rootless codepath behave
like the rootfull one when in systemd mode.
This leaves `/dev/ptmx` as the main difference between the two
codepaths. Based on the blog post from the then-runC maintainer[1] and
this Red Hat bug[2], I believe that this is intentional and a needed
difference for the rootless path.
Closes: #16925
Suggested-by: Fabian Holler <mail@fholler.de>
Signed-off-by: Martin Roukala (né Peres) <martin.roukala@mupuf.org>
[1]: https://www.cyphar.com/blog/post/20160627-rootless-containers-with-runc
[2]: https://bugzilla.redhat.com/show_bug.cgi?id=501718
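The distinction between virtual terminals and other tty devices boils down to a simple name check; a sketch, not podman's actual matching code:

```go
package example

import "regexp"

// vtPattern matches virtual terminals (/dev/tty0, /dev/tty1, ...) but
// not other tty devices such as /dev/ttyS0 or /dev/ttyUSB0, which stay
// available to privileged systemd containers.
var vtPattern = regexp.MustCompile(`^/dev/tty[0-9]+$`)

func isVirtualTerminal(path string) bool {
	return vtPattern.MatchString(path)
}
```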
Increase the loop range from 5 to 20 to make sure we give the service
enough time to transition to inactive. Other tests use the same range
with 0.5-second sleeps, so I expect the new value to be sufficient and
consistent.
Fixes: #17093
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
Make sure that the specs of containers generated by `kube play` are
correctly completed. They were not before, which surfaced as default
environment variables not being set.
Fixes: #17016
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
The StoppedByUser variable indicates that the container was
requested to stop by a user. It's used to prevent restart policy
from firing (so that a restart=always container won't restart if
the user does a `podman stop`). The problem is we were setting it
*very* late in the stop() function. Originally, this was fine,
but after the changes to add the new Stopping state, the logic
that triggered restart policy was firing before StoppedByUser was
even set - so the container would still restart.
Setting it earlier shouldn't hurt anything and guarantees that
checks will see that the container was stopped manually.
Fixes #17069
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
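Schematically, the fix is an ordering change; the types and names below are illustrative stand-ins, not libpod's actual container API:

```go
package example

// container is a stand-in for libpod's container state.
type container struct {
	StoppedByUser bool
	state         string
}

// stop marks the container as stopped-by-user *before* entering the
// Stopping state, so restart-policy logic that fires on the state
// change already sees the flag.
func (c *container) stop() {
	c.StoppedByUser = true // set early, before any state transition
	c.state = "stopping"
	// ... signal the container, wait for the timeout, SIGKILL, etc. ...
	c.state = "stopped"
}
```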
The kube-play test file was a rat's nest of long, complicated
yaml strings, all differing only slightly. Clean it up by
adding a helper function with optional parameters. The
helper is ugly, but the actual test code (the important
stuff) is cleaner.
Signed-off-by: Ed Santiago <santiago@redhat.com>
While manually playing with --service-container, I encountered a number
of overly verbose logs. For instance, there's no need to error-log when
the service-container has already been stopped.
For testing, add a new kube test with a multi-pod YAML which will
implicitly show that #17024 is now working.
Fixes: #17024
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>