All `[]string`s in containers.conf have now been migrated to attributed
string slices which require some adjustments in Buildah and Podman.
[NO NEW TESTS NEEDED]
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
There is a potential race condition where we see a message about a removed
container that could actually be caused by a non-mounted container; this
change should clarify which one is causing it.
Also, if the container does not exist, just warn the user instead of
reporting an error; there is not much the user can do.
Fixes: https://github.com/containers/podman/issues/19702
[NO NEW TESTS NEEDED]
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Updated the error message to suggest that the user pass the --replace option to instruct Podman to replace the existing external container with a newly created one.
Closes #16759
Signed-off-by: Chetan Giradkar <cgiradka@redhat.com>
When a userns and netns is used we need to let the runtime create the
netns otherwise the netns is not owned by the right userns and thus
the capabilities would not be correct.
The current restart logic tries to reuse the netns, which is fine if no
userns is used. When one is used, we set up a new netns (which is
correct) but forgot to clean up the old one. This resulted in leaked
network namespaces and, because no teardown was ever called, leaked IPAM
assignments, so a quickly restarting container will run out of IP space
very fast.
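As a rough sketch of the fix (the teardown and state-update steps are
passed in as parameters here, since Podman's internals are not
reproduced), the restart path needs to release the old namespace first:
```
package sketch

import "fmt"

// refreshNetnsForRestart is illustrative only: teardown and clear stand in
// for the real network teardown and container-state update.
func refreshNetnsForRestart(hasUserns bool, oldNetns string,
	teardown func(netnsPath string) error, clear func()) error {
	if !hasUserns || oldNetns == "" {
		// Without a userns the existing netns can simply be reused.
		return nil
	}
	// Tear down the old namespace first so the network backend releases its
	// IPAM assignment instead of leaking it.
	if err := teardown(oldNetns); err != nil {
		return fmt.Errorf("tearing down stale netns %s: %w", oldNetns, err)
	}
	// Forget the old path; the OCI runtime then creates the new netns inside
	// the userns so it is owned by the correct user namespace.
	clear()
	return nil
}
```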
Fixes #18615
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
When running as a service, the c.state.Mounted flag could get out of
sync if the container is cleaned up through the cleanup process.
To avoid this, always check if the mountpoint is really present before
skipping the mount.
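One way to perform such a check, sketched here with
github.com/moby/sys/mountinfo; the actual implementation may verify this
differently:
```
package sketch

import "github.com/moby/sys/mountinfo"

// skipMount reports whether the mount step can be skipped: only trust the
// recorded Mounted flag if the mountpoint is actually present on the host.
func skipMount(recordedMounted bool, mountPoint string) (bool, error) {
	if !recordedMounted {
		return false, nil
	}
	mounted, err := mountinfo.Mounted(mountPoint)
	if err != nil {
		return false, err
	}
	return mounted, nil
}
```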
[NO NEW TESTS NEEDED]
Closes: https://github.com/containers/podman/issues/17042
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
When a container is created and it is part of a pod, we ensure the pod
cgroup exists so limits can be applied on the pod cgroup.
Closes: https://github.com/containers/podman/issues/19175
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
We'd otherwise emit the start event long after the actual start of the
container when --sdnotify=healthy is used. I missed adding this change
in commit 0cfd12786fd1.
[NO NEW TESTS NEEDED]
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
Add a new "healthy" sdnotify policy that instructs Podman to send the
READY message once the container has turned healthy.
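A minimal sketch of the policy, assuming the go-systemd daemon package
for the notification; the health probe is passed in as a function since
Podman's internal status lookup is not shown here:
```
package sketch

import (
	"context"
	"time"

	"github.com/coreos/go-systemd/v22/daemon"
)

// notifyWhenHealthy polls a health probe and sends READY=1 only once the
// container has turned healthy.
func notifyWhenHealthy(ctx context.Context, healthy func() (bool, error)) error {
	ticker := time.NewTicker(time.Second)
	defer ticker.Stop()
	for {
		ok, err := healthy()
		if err != nil {
			return err
		}
		if ok {
			_, err := daemon.SdNotify(false, daemon.SdNotifyReady)
			return err
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}
```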
Fixes: #6160
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
This file has not been present in BSD systems since 2.9.1 BSD and as far
as I remember /proc/mounts has never existed on BSD systems.
[NO NEW TESTS NEEDED]
Signed-off-by: Doug Rabson <dfr@rabson.org>
Handle more TOCTOUs when operating on listed images. Also pull in
containers/common/pull/1520 and containers/common/pull/1522, which do
the same for the internal layer tree.
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=2216700
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
Most of the code moved there, so use it from there and remove it here.
Some extra changes are required here; this is a bit of a mess, and the
pipe handling makes it more difficult.
[NO NEW TESTS NEEDED] This is just a rework, existing tests must pass.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
My PR[1] to remove PostConfigureNetNS is blocked on other things, so I
want to split this change out. It reduces the complexity when generating
/etc/hosts and /etc/resolv.conf, as we now always write these files
after we set up the network. This means we can get the actual IP from
the netns, which is important.
[NO NEW TESTS NEEDED] This is just a rework.
[1] https://github.com/containers/podman/pull/18468
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
The current way of bind mounting the host timezone file has problems.
Because /etc/localtime in the image may already exist as a symlink into
/usr/share/zoneinfo, bind mounting over it overwrites the target file
instead. That confuses timezone parsers, especially Java, where this
approach does not work at all, so we end up with a link that does not
reflect the actual truth.
The better way is to just change the symlink in the image, as is done on
the host. However, because not all images ship tzdata, we cannot rely on
that either. So now we do both: when tzdata is installed, use the
symlink; if not, keep the current way of copying the host timezone file
into the container as /etc/localtime.
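A simplified sketch of the dual strategy (ignoring bind mounts,
ownership, and userns details that the real code has to handle):
```
package sketch

import (
	"io"
	"os"
	"path/filepath"
)

// configureLocaltime: if the image ships tzdata, point /etc/localtime at the
// zoneinfo file via a symlink (like on the host); otherwise fall back to
// copying the host's timezone file into the container.
func configureLocaltime(rootfs, zone, hostLocaltime string) error {
	zonePath := filepath.Join("/usr/share/zoneinfo", zone)
	dest := filepath.Join(rootfs, "etc", "localtime")
	_ = os.Remove(dest) // drop any pre-existing file or symlink from the image

	if _, err := os.Stat(filepath.Join(rootfs, zonePath)); err == nil {
		// tzdata is present in the image: use a symlink, as on the host.
		return os.Symlink(zonePath, dest)
	}

	// No tzdata in the image: copy the host timezone file instead.
	src, err := os.Open(hostLocaltime)
	if err != nil {
		return err
	}
	defer src.Close()
	dst, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer dst.Close()
	_, err = io.Copy(dst, src)
	return err
}
```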
Also note that we need to rebuild the systemd image to include tzdata in
order to test this, as our images do not contain tzdata by default.
Fixes https://bugzilla.redhat.com/show_bug.cgi?id=2149876
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Use a shared helper instead of copy&pasting the handling
of cleanupErr EIGHT times.
This slightly changes the wording of the logged error text and, in one
case, the error itself.
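The shape of such a helper, as a sketch; the real helper's name and
exact signature may differ:
```
package sketch

import (
	"fmt"

	"github.com/sirupsen/logrus"
)

// appendCleanupErr keeps the first cleanup error for the caller to return and
// logs any later ones, instead of repeating this if/else at every call site.
func appendCleanupErr(cleanupErr *error, err error, msg string) {
	if err == nil {
		return
	}
	if *cleanupErr == nil {
		*cleanupErr = fmt.Errorf("%s: %w", msg, err)
		return
	}
	logrus.Errorf("%s: %v", msg, err)
}
```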
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
[NO NEW TESTS NEEDED]
... because testing this would require us to intentionally
create an inconsistent state, which should ideally not be possible...
(and because at this point I don't even know what the reported failure
was.)
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
In rootFsSize(), instead of calculating the size of the diff for every
layer of the container's base image, ask the storage library for the sum
of the values it recorded when it first wrote those layers.
In a similar fashion, teach rwSize() to use the library's
ContainerSize() method instead of trying to roll its own.
Replace calls to pkg/util.SizeOfPath() with calls to
github.com/containers/storage/pkg/directory.Size(), which does the same
thing.
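For context, roughly what such a size helper computes (the library's
directory.Size may differ in details, e.g. disk usage vs. apparent
size):
```
package sketch

import (
	"io/fs"
	"path/filepath"
)

// sizeOfPath sums the sizes of regular files under a directory, which is
// essentially what pkg/util.SizeOfPath did; the change delegates this walk to
// the storage library instead of reimplementing it.
func sizeOfPath(root string) (int64, error) {
	var total int64
	err := filepath.WalkDir(root, func(_ string, d fs.DirEntry, err error) error {
		if err != nil {
			return err
		}
		if !d.Type().IsRegular() {
			return nil
		}
		info, err := d.Info()
		if err != nil {
			return err
		}
		total += info.Size()
		return nil
	})
	return total, err
}
```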
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
Fix some issues with the handling of errors: we print an error only
when there is already one set to be returned. The first error is not
printed, since it is reported back to the caller of the function.
Improve some messages with more context that can be helpful when
things go wrong.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Commit 1ab833fb73 improved the situation but it is still not enough.
If you run short-lived containers with --restart=always, podman is
basically permanently restarting them. The only way to stop this is
podman stop. However, podman stop does not do anything when the
container is already in a non-running state. While this makes sense, we
should still mark the container as explicitly stopped by the user.
Together with the change in shouldRestart(), which now checks for
StoppedByUser, this makes sure the cleanup process is not going to start
it back up again.
A simple reproducer is:
```
podman run --restart=always --name test -d alpine true
podman stop test
```
Then check if the container is still running; the behavior is very
flaky. It took me about 20 podman stop tries before I finally hit the
correct window where it was stopped permanently.
With this patch it worked on the first try.
Fixes #18259
[NO NEW TESTS NEEDED] This is super flaky and hard to correctly test
in CI. My ginkgo v2 work seems to trigger this in play kube tests, so
that should catch at least some regressions. Also this may be something
that should be tested at podman test days by users (#17912).
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Add a function to securely mount a subpath inside a volume. We cannot
trust that the subpath is safe, since it is beneath a volume that could
be controlled by a separate container. To avoid TOCTOU races between
when we check the subpath and when the OCI runtime mounts it, we open
the subpath, validate it, bind mount it to a temporary directory, and
use that instead of the original path.
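A simplified, Linux-only sketch of the technique; the real
implementation has to handle cleanup, mount propagation, and more edge
cases:
```
package sketch

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"

	"golang.org/x/sys/unix"
)

// mountSubpath opens the subpath once, verifies the path the kernel actually
// resolved still lies inside the volume, then bind mounts through the open fd
// so later renames or symlink swaps cannot redirect the mount.
func mountSubpath(volumeRoot, subpath string) (string, error) {
	fd, err := unix.Open(filepath.Join(volumeRoot, subpath),
		unix.O_PATH|unix.O_CLOEXEC, 0)
	if err != nil {
		return "", err
	}
	defer unix.Close(fd)

	fdPath := fmt.Sprintf("/proc/self/fd/%d", fd)
	resolved, err := os.Readlink(fdPath)
	if err != nil {
		return "", err
	}
	if resolved != volumeRoot && !strings.HasPrefix(resolved, volumeRoot+"/") {
		return "", fmt.Errorf("subpath %q escapes volume %q", subpath, volumeRoot)
	}

	tmp, err := os.MkdirTemp("", "subpath-")
	if err != nil {
		return "", err
	}
	// Mounting /proc/self/fd/N binds whatever the fd already refers to,
	// independent of what the original path points at by now.
	if err := unix.Mount(fdPath, tmp, "", unix.MS_BIND, ""); err != nil {
		return "", err
	}
	return tmp, nil
}
```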
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
When a userns is set, we set up the network after the bind mounts, so at
the point where resolv.conf is generated we do not yet know the subnet.
Just like the other DNS servers for bridge networks, we need to add the
IP later in completeNetworkSetup().
Fixes https://bugzilla.redhat.com/show_bug.cgi?id=2182052
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
As described in #17777, the `restart` on-failure action did not behave
correctly when the health check is being run by a transient systemd
unit. It ran just fine when being executed outside such a unit, for
instance, manually or, as done in the system tests, in a scripted
fashion.
There were two issues causing the `restart` on-failure action to
misbehave:
1) The transient systemd units used the default `KillMode=cgroup` which
will nuke all processes in the specific cgroup including the recently
restarted container/conmon once the main `podman healthcheck run`
process exits.
2) Podman attempted to remove the transient systemd unit and timer
during restart. That is perfectly fine when manually restarting the
container but not when the restart itself is being executed inside
such a transient unit. Ultimately, Podman tried to shoot itself in
the foot.
Fix both issues by moving the restart logic into the cleanup process.
Instead of restarting the container, `healthcheck run` will just stop
the container, and the cleanup process will restart it once it has
turned unhealthy.
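In very rough terms (names here are illustrative, not Podman's
internals), the cleanup-side logic becomes:
```
package sketch

// applyOnFailureAction sketches where the restart now happens: the cleanup
// process, running outside the transient healthcheck unit, checks the
// recorded health state and the configured action and performs the restart.
func applyOnFailureAction(action, healthStatus string, restart func() error) error {
	if action != "restart" || healthStatus != "unhealthy" {
		return nil
	}
	return restart()
}
```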
Fixes: #17777
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
This contains the implementation of (most) container functions,
with stubs for all pod and volume functions. Presently accessed
via environment variable only for testing purposes.
Signed-off-by: Matt Heon <mheon@redhat.com>
Always use the direct mapping when writing the mappings for an
idmapped mount. crun was previously using the reverse mapping, which
is not correct; it is being addressed here:
https://github.com/containers/crun/pull/1147
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
* Utils must support a higher-level API to create a tar while chrooted
  into a directory.
* Volume export: use TarwithChroot instead of Tar so we can make sure no
  symlink can be exported by tar if it exists outside of the source
  directory.
* Container export: use chroot and Tar instead of Tar so we can make
  sure no symlink can be exported by tar if it exists outside of the
  mountPoint.
[NO NEW TESTS NEEDED]
[NO TESTS NEEDED]
The race needs a combination of external and in-container mechanisms,
which is hard to reproduce in CI.
Closes: BZ:#2168256
CVE: https://access.redhat.com/security/cve/CVE-2023-0778
Signed-off-by: Aditya R <arajan@redhat.com>
The StoppedByUser variable indicates that the container was
requested to stop by a user. It's used to prevent restart policy
from firing (so that a restart=always container won't restart if
the user does a `podman stop`). The problem is we were setting it
*very* late in the stop() function. Originally, this was fine,
but after the changes to add the new Stopping state, the logic
that triggered restart policy was firing before StoppedByUser was
even set - so the container would still restart.
Setting it earlier shouldn't hurt anything and guarantees that
checks will see that the container was stopped manually.
Fixes #17069
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Do not allow removing containers that are in the stopping state;
otherwise it can lead to a race condition where a "podman rm" removes
the container from the storage while another process is stopping the
same container.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2155828
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
This means we store things like config.json and the secret files
also on tmpfs, lowering wear on disk and leaving less stuff on disk
on an unclean shutdown.
Signed-off-by: Alexander Larsson <alexl@redhat.com>
This allows us to use STDOUT directly without having to call open
again; this also makes the export API endpoint much more performant,
since it no longer needs to copy to a temp file.
I noticed that there was no export API test, so I added one.
And lastly, opening /dev/stdout will not work on Windows.
Fixes #16870
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
This should simplify the db logic. We no longer need an extra db bucket
for the netns; it is still supported in read-only mode for backwards
compat. The old version required us to always open the netns before we
could attach it to the container state struct, which caused problems in
some cases where the netns was no longer valid.
Now we use the netns as a string throughout the code; this allows us to
only open it when needed, reducing possible errors.
[NO NEW TESTS NEEDED] Existing tests should cover it, and it is only a
flake, so the error is hard to reproduce.
Fixes #16140
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
We should have done this much earlier; most of the time, CNI networks
just mean networks, so I changed this and also fixed some function
names. This should make it clearer what actually refers to CNI and
what is just general network backend stuff.
[NO NEW TESTS NEEDED]
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Startup healthchecks are similar to K8S startup probes, in that they
are a separate check from the regular healthcheck and run before it. If
the startup healthcheck fails repeatedly, the associated container is
restarted.
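A conceptual sketch of the probe semantics; the function names are
illustrative and not Podman's actual API:
```
package sketch

import "time"

// runStartupCheck keeps running the startup check until it passes, then hands
// over to the regular healthcheck; if it fails too many times in a row, the
// container is restarted instead.
func runStartupCheck(check func() error, retries int, interval time.Duration,
	restartContainer func() error, startRegularHealthcheck func()) error {
	failures := 0
	for {
		if err := check(); err == nil {
			startRegularHealthcheck()
			return nil
		}
		failures++
		if failures >= retries {
			return restartContainer()
		}
		time.Sleep(interval)
	}
}
```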
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
When restarting a container, clean up the healthcheck state by removing
the old log on disk. Carrying over the old state can lead to various
issues, for instance a wrong failing streak and hence wrong behaviour
after the restart.
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=2144754
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
Make sure to wait for the container to exit after kill. While the
cleanup process will eventually take care of transitioning the state,
we need to guarantee to the user that the container is left in the
expected state once the (kill) command has finished.
The issue could be observed in a flaking test (#16142) where
`podman rm -f -t0` failed because the preceding `podman kill`
left the container in "running" state which ultimately confused
the "stop" backend.
Note that we should only wait for the container to exit when SIGKILL is
being used. Other signals have different semantics.
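Conceptually (the exit probe is a parameter here, not Podman's API), the
wait looks like this:
```
package sketch

import (
	"context"
	"time"
)

// waitForExit polls an "exited" probe after SIGKILL until the container
// reports a terminal state or the context runs out, so the command never
// returns while the container is still shown as running.
func waitForExit(ctx context.Context, exited func() (bool, error)) error {
	ticker := time.NewTicker(100 * time.Millisecond)
	defer ticker.Stop()
	for {
		done, err := exited()
		if err != nil {
			return err
		}
		if done {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}
```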
[NO NEW TESTS NEEDED] as I do not know how to reliably reproduce the
issue. If #16142 stops flaking, we are good.
Fixes: #16142
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
Fix the "stop" on-failure action by not removing the transient systemd
timer and service during container stop. Removing the service will
in turn cause systemd to terminate the Podman process attempting to
stop the container and hence leave it in the "stopping" state.
Instead move the removal into the restart sequence.
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
Package `io/ioutil` was deprecated in golang 1.16, preventing podman
from building under Fedora 37. Fortunately, functionally identical
replacements are provided by the packages `io` and `os`. Replace all
usage of `io/ioutil` symbols with appropriate substitutions according
to the golang docs.
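The substitutions are mechanical; a small illustration of the common
mappings:
```
package sketch

import (
	"io"
	"os"
)

// readAndRewrite shows the typical one-for-one substitutions:
//   ioutil.ReadFile  -> os.ReadFile
//   ioutil.WriteFile -> os.WriteFile
//   ioutil.ReadAll   -> io.ReadAll
//   ioutil.TempDir   -> os.MkdirTemp
func readAndRewrite(path string, src io.Reader) error {
	data, err := os.ReadFile(path) // was ioutil.ReadFile
	if err != nil {
		return err
	}
	extra, err := io.ReadAll(src) // was ioutil.ReadAll
	if err != nil {
		return err
	}
	dir, err := os.MkdirTemp("", "podman-") // was ioutil.TempDir
	if err != nil {
		return err
	}
	// was ioutil.WriteFile
	return os.WriteFile(dir+"/copy", append(data, extra...), 0o600)
}
```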
Signed-off-by: Chris Evich <cevich@redhat.com>
Restart the health-check timers instead of starting them. This will
suppress annoying errors stating that an already running timer cannot be
started anymore.
Also make sure that the transient units/timers are stopped and removed
when stopping a container.
Fixes: #15691
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
`os.ReadDir` was added in Go 1.16 as part of the deprecation of the
`ioutil` package. It is a more efficient implementation than
`ioutil.ReadDir`.
Reference: https://pkg.go.dev/io/ioutil#ReadDir
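A minimal before/after illustration:
```
package sketch

import (
	"fmt"
	"os"
)

// listDir uses os.ReadDir, which returns fs.DirEntry values and avoids the
// per-entry stat that ioutil.ReadDir performed to build []fs.FileInfo.
func listDir(dir string) error {
	entries, err := os.ReadDir(dir) // was: ioutil.ReadDir(dir)
	if err != nil {
		return err
	}
	for _, e := range entries {
		fmt.Println(e.Name(), e.IsDir())
	}
	return nil
}
```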
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
Podman adds an "Error: " prefix to every error message, so starting an
error message with "error" ends up being reported to the user as
"Error: error ...".
This patch removes the stutter.
Also, ioutil.ReadFile errors report the path, so wrapping the error
message with the path causes a stutter.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
podman update allows users to change the cgroup configuration of an
existing container using the resource limit flags already defined for
podman create/run.
The command is also now supported in the libpod API via the
/libpod/containers/<CID>/update endpoint, where the resource limits are
passed in the request body and follow the OCI resource spec format.
The flags supported by crun are:
--memory
--cpus
--cpuset-cpus
--cpuset-mems
--memory-swap
--memory-reservation
--cpu-shares
--cpu-quota
--cpu-period
--blkio-weight
--cpu-rt-period
--cpu-rt-runtime
--device-read-bps
--device-write-bps
--device-read-iops
--device-write-iops
--memory-swappiness
--blkio-weight-device
Resolves #15067
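As a hedged sketch, a request body for that endpoint could be built from
the runtime-spec Go types like this (the fields shown are only an
example, not the full set the endpoint accepts):
```
package sketch

import (
	"encoding/json"

	spec "github.com/opencontainers/runtime-spec/specs-go"
)

// updateBody builds an OCI LinuxResources document of the kind the update
// endpoint expects in the request body.
func updateBody(memoryLimit, cpuQuota int64, cpuPeriod uint64) ([]byte, error) {
	res := spec.LinuxResources{
		Memory: &spec.LinuxMemory{Limit: &memoryLimit},
		CPU:    &spec.LinuxCPU{Quota: &cpuQuota, Period: &cpuPeriod},
	}
	return json.Marshal(res)
}
```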
Signed-off-by: Charlie Doern <cdoern@redhat.com>
If we get a SIGTERM immediately after Conmon starts but before we
record its PID in the database, we end up leaking a Conmon and
associated OCI runtime process. To prevent this, inhibit shutdown using
the logic we originally wrote to avoid similar issues during container
creation.
[NO NEW TESTS NEEDED] No real way to test this I can think of.
Fixes #15557
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
The O_PATH flag is a recent addition to the open syscall and is not
present in darwin or in FreeBSD releases before 13.1. The constant is
not present in the FreeBSD version of x/sys/unix since that package
supports FreeBSD 12.3 and later.
[NO NEW TESTS NEEDED]
Signed-off-by: Doug Rabson <dfr@rabson.org>