Init containers are meant to exit early, before the other containers are
started, so stopping the infra container in that case is wrong.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
If we manage to init/start a container successfully we should unset any
previously stored state errors. Otherwise a user might be confused why
the state still reports an old error even though the container
works/runs.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
if idmap is specified for a volume, reverse the mappings when copying
up from the container, so that the original permissions are maintained.
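A minimal sketch of what reversing such a mapping could look like. The
runtime-spec LinuxIDMapping type is real; the helper is illustrative,
not podman's actual code:

```go
package main

import (
	"fmt"

	specs "github.com/opencontainers/runtime-spec/specs-go"
)

// reverseIDMappings is a hypothetical helper: swapping ContainerID and
// HostID inverts the mapping, so IDs seen inside the container map
// back to the original host IDs during the copy-up.
func reverseIDMappings(in []specs.LinuxIDMapping) []specs.LinuxIDMapping {
	out := make([]specs.LinuxIDMapping, 0, len(in))
	for _, m := range in {
		out = append(out, specs.LinuxIDMapping{
			ContainerID: m.HostID,
			HostID:      m.ContainerID,
			Size:        m.Size,
		})
	}
	return out
}

func main() {
	maps := []specs.LinuxIDMapping{{ContainerID: 0, HostID: 1000, Size: 1}}
	fmt.Println(reverseIDMappings(maps)) // [{1000 0 1}]
}
```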
Closes: https://github.com/containers/podman/issues/23467
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
The network cleanup cannot handle being killed halfway through; it
spits out a bunch of errors on the next cleanup attempt in that case.
Try to avoid getting into such a state by ignoring SIGTERM during this
section.
Of course we can still get SIGKILL, so we should work on fixing the
underlying problems in network cleanup, but let's see if this helps us
with the CI flakes in the meantime.
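As a rough sketch of the approach (not podman's exact code), Go's
standard os/signal package can mask SIGTERM around a critical section:

```go
package main

import (
	"os/signal"
	"syscall"
)

func networkCleanup() {
	// teardown that must not be interrupted halfway through
}

func main() {
	// Ignore SIGTERM for the duration of the cleanup; SIGKILL cannot
	// be blocked, so this only narrows the window.
	signal.Ignore(syscall.SIGTERM)
	defer signal.Reset(syscall.SIGTERM)
	networkCleanup()
}
```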
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
when a --rootfs is specified with idmap, always use the specified
rootfs since we need a new mount on top of the original directory.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
The current code did something like this:
    lock()
    getState()
    unlock()
    if state != running:
        lock()
        getState() == running -> error
        unlock()
This of course is wrong because between the first unlock() and second
lock() call another process could have modified the state. This meant
that sometimes you would get a weird error on start because the internal
setup errored as the container was already running.
In general any state check without holding the lock is incorrect and
will result in race conditions. As such refactor the code to combine
both StartAndAttach and Attach() into one function that can handle both.
With that we can move the running check into the locked code.
Also use a typed error for this specific error case so the callers can
check for and ignore it when needed. This also allows us to fix races
in the compat API that did a similar racy state check.
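A hedged sketch of the refactor; the identifiers below are illustrative
stand-ins, not podman's actual internals:

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// Illustrative stand-in for podman's container type.
type Container struct {
	lock  sync.Mutex
	id    string
	state string
}

var ErrCtrStateRunning = errors.New("container is already running")

// startAndAttach sketches the combined Start+Attach path: the running
// check happens while the lock is held, so no other process can change
// the state between the check and the actual start.
func (c *Container) startAndAttach(start bool) error {
	c.lock.Lock()
	defer c.lock.Unlock()
	if start && c.state == "running" {
		return fmt.Errorf("container %s: %w", c.id, ErrCtrStateRunning)
	}
	// ... init/start the container and attach, still under the lock ...
	return nil
}

func main() {
	c := &Container{id: "abc", state: "running"}
	// Callers can detect the typed error and ignore it for idempotent start.
	if err := c.startAndAttach(true); errors.Is(err, ErrCtrStateRunning) {
		fmt.Println("already running:", c.id)
	}
}
```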
This commit also slightly changes how we output the result: previously,
a start on an already-running container would never print the id/name
of the container, which is confusing and sort of breaks idempotence.
Now it will include the output, except when --all is used; then it only
reports the ids that were actually started.
Fixes #23246
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
if the current user is not mapped into the new user namespace, use an
intermediate mount to allow the mount point to be accessible instead
of opening up all the parent directories for the mountpoint.
Closes: https://github.com/containers/podman/issues/23028
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
This fixes a regression added in commit 4fd84190b8: because the unit
name was overwritten by the createTimer() call, removeTransientFiles()
removed the new timer and not the startup healthcheck timer. Then, when
the container was stopped, we leaked the startup timer because the
wrong unit name was stored in the state.
A new test has been added to ensure the logic works and we never leak
the system timers.
Fixes #22884
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
When an empty volume is mounted into a container, Docker will
chown that volume appropriately for use in the container. Podman
does this as well, but there are differences in the details. In
Podman, the chown is presently a one-and-done deal; in Docker, it
continues for as long as the volume remains empty. Mount it into a
dozen containers without ever adding content, and the chown occurs
every time. The chown is also linked to copy-up; it always occurs
when a copy-up occurred, even though the volume is then no longer
empty.
This PR changes our logic to (mostly) match Docker's.
For some reason, the chowning also stops if the volume is chowned
to root at any point. This feels like a Docker bug, but as they
say, bug for bug compatible.
In retrospect, using bools for NeedsChown and NeedsCopyUp was a
mistake. Docker isn't actually tracking this stuff; they're just
doing a copy-up and permissions change unconditionally as long as
the volume is empty. They also have the two linked as one
operation, seemingly, despite happening at very different times
during container init. Replicating that in our stateful system is
nontrivial, hence the need for the new CopiedUp field. Basically,
we never want to chown a volume with contents in it, except if
that data is a result of a copy-up that resulted from mounting
into the current container. Tracking who did the copy-up is the
easiest way to do this.
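A hedged sketch of the resulting decision logic; the field and helper
names here are illustrative stand-ins, not podman's actual ones:

```go
package main

import "fmt"

// Illustrative stand-in for the volume state described above.
type VolumeState struct {
	NeedsChown bool
	CopiedUp   bool // set when this container's mount caused the copy-up
}

// needsChown sketches the rule: never chown a volume that has contents,
// unless those contents came from a copy-up done for this container.
func needsChown(st VolumeState, volumeEmpty bool) bool {
	if !st.NeedsChown {
		return false
	}
	return volumeEmpty || st.CopiedUp
}

func main() {
	st := VolumeState{NeedsChown: true, CopiedUp: true}
	fmt.Println(needsChown(st, false)) // true: content from our own copy-up
	st.CopiedUp = false
	fmt.Println(needsChown(st, false)) // false: pre-existing content
}
```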
Fixes #22571
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
wait for the healthy status on the thread where the container lock is
held. Otherwise, if it is performed from a goroutine, a different
thread is used (since the runtime.LockOSThread() call doesn't have any
effect), causing pthread_mutex_unlock() to fail with EPERM.
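A minimal sketch of the constraint, with illustrative names: a POSIX
mutex must be unlocked by the same OS thread that locked it, so the
lock, wait, and unlock all have to run inline on one pinned thread:

```go
package main

import (
	"runtime"
	"sync"
)

// waitHealthyLocked sketches the fix: pin the goroutine to its OS
// thread, then lock, wait, and unlock on that same thread. With a
// process-shared pthread mutex underneath, unlocking from another
// thread fails with EPERM.
func waitHealthyLocked(lock sync.Locker, waitForHealthy func() error) error {
	runtime.LockOSThread()
	defer runtime.UnlockOSThread()
	lock.Lock()
	defer lock.Unlock()
	return waitForHealthy()
}

func main() {
	var mu sync.Mutex
	_ = waitHealthyLocked(&mu, func() error { return nil })
}
```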
Closes: https://github.com/containers/podman/issues/22651
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
This reverts commit 909ab594191ce964529398bcf7600edff9540d71.
The workaround was added almost 5 years ago to work around an issue
with old conmon releases. It is safe to assume such ancient conmon
releases are not used anymore.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
The scenario for inducing this is as follows:
1. Start a container with a long stop timeout and a PID1 that
ignores SIGTERM
2. Use `podman stop` to stop that container
3. Simultaneously, in another terminal, kill -9 `pidof podman`
(the container is now in ContainerStateStopping)
4. Now kill that container's Conmon with SIGKILL.
5. No commands are able to move the container from Stopping to
Stopped now.
The cause is a bug in our exit-file handling logic: Conmon being dead
without an exit file caused no change to the state.
Add handling for this case that tries to clean up, including
stopping the container if it still seems to be running.
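A hedged sketch of the added handling, using illustrative names rather
than podman's internals:

```go
package main

// recoverStuckStopping sketches the new case: conmon is dead and left
// no exit file, so instead of leaving the state untouched (wedging the
// container in Stopping), stop the container if it still runs and then
// clean up so the state can move to Stopped.
func recoverStuckStopping(conmonAlive, exitFileExists, stillRunning bool,
	stop, cleanup func() error) error {
	if conmonAlive || exitFileExists {
		return nil // the normal code paths handle these cases
	}
	if stillRunning {
		if err := stop(); err != nil {
			return err
		}
	}
	return cleanup()
}

func main() {
	noop := func() error { return nil }
	_ = recoverStuckStopping(false, false, true, noop, noop)
}
```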
Fixes #19629
Signed-off-by: Matt Heon <mheon@redhat.com>
Systemd dislikes it when we rapidly create and remove a transient
unit. Solution: If we change the name every time, it's different
enough that systemd is satisfied and we stop having errors trying
to restart the healthcheck.
Generate a random 32-bit integer, and add it (formatted as hex)
to the end of the unit name to do this. As a result, we now have
to store the unit name in the database, but it does make
backwards compat easy - if the unit name in the DB is empty, we
revert to the old behavior because the timer was created by old
Podman.
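The naming scheme, sketched with illustrative names (a random 32-bit
integer formatted as hex, appended to the unit name):

```go
package main

import (
	"fmt"
	"math/rand"
)

// transientTimerName appends a random 32-bit hex suffix so each
// healthcheck restart gets a unit name systemd has not just seen.
func transientTimerName(ctrID string) string {
	return fmt.Sprintf("%s-%x", ctrID, rand.Uint32())
}

func main() {
	// e.g. "abc123-9f1c2b3a"; the generated name is stored in the DB
	// so the timer can be removed later.
	fmt.Println(transientTimerName("abc123"))
}
```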
Should resolve RHEL-26105
Signed-off-by: Matt Heon <mheon@redhat.com>
This is something Docker does, and we did not do until now. Most
difficult/annoying part was the REST API, where I did not really
want to modify the struct being sent, so I made the new restart
policy parameters query parameters instead.
Testing was also a bit annoying, because testing restart policy
always is.
Signed-off-by: Matt Heon <mheon@redhat.com>
The logic here is more complex than I would like, largely due to
the behavior of `podman inspect` for running containers. When a
container is running, `podman inspect` will source as much as
possible from the OCI spec used to run that container, to grab
up-to-date information on things like devices. We don't want to
change this, it's definitely the right behavior, but it does make
updating a running container inconvenient: we have to rewrite the
OCI spec as part of the update to make sure that `podman inspect`
will read the correct resource limits.
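A rough sketch of that step, using the real runtime-spec types but an
illustrative helper name and path:

```go
package main

import (
	"encoding/json"
	"os"

	specs "github.com/opencontainers/runtime-spec/specs-go"
)

// persistUpdatedSpec sketches the rewrite: store the new resource
// limits in the container's OCI spec file so that `podman inspect`,
// which reads the spec for running containers, reports them.
func persistUpdatedSpec(path string, spec *specs.Spec, res *specs.LinuxResources) error {
	if spec.Linux == nil {
		spec.Linux = &specs.Linux{}
	}
	spec.Linux.Resources = res
	data, err := json.Marshal(spec)
	if err != nil {
		return err
	}
	return os.WriteFile(path, data, 0o644)
}

func main() {
	limit := int64(1 << 30) // 1 GiB memory limit as an example
	_ = persistUpdatedSpec("/tmp/config.json", &specs.Spec{},
		&specs.LinuxResources{Memory: &specs.LinuxMemory{Limit: &limit}})
}
```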
Also, make update emit events. Docker does it, we should as well.
Signed-off-by: Matt Heon <mheon@redhat.com>
Always tear down the network; trying to reuse the netns has caused
a significant amount of bugs in this code. It also never worked
for containers with user namespaces. So, once and for all, simplify this
by never reusing the netns. Originally this was done to allow a faster
restart of containers, but with netavark we are now much faster, so it
shouldn't be that noticeable in practice. It also makes more sense to
reconfigure the netns, as it is likely that the container exited due to
some broken network state, in which case reusing it would just cause
more harm than good.
The main motivation for this change was the pasta change to use
--dns-forward by default: the restarted container had no idea which
nameserver to use, as pasta just kept running.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Conmon writes the exit file and oom file (if container
was oom killed) to the persist directory. This directory
is retained across reboots as well.
Update podman to create a persist-dir/ctr-id for the exit
and oom files for each container to be written to. The oom
state of the container is set after reading the files
from the persist-dir/ctr-id directory.
The exit code still continues to read the exit file from
the exits directory.
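The layout, sketched with illustrative file names and paths:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// persistFiles sketches the per-container persist directory described
// above: conmon writes the exit and oom files here, and unlike the
// exits directory these paths survive a reboot, so the oom state can
// still be read afterwards. The exit code itself is still read from
// the separate exits directory.
func persistFiles(persistDir, ctrID string) (exitFile, oomFile string) {
	dir := filepath.Join(persistDir, ctrID)
	return filepath.Join(dir, "exit"), filepath.Join(dir, "oom")
}

func main() {
	exit, oom := persistFiles("/path/to/persist", "abc123")
	fmt.Println(exit, oom)
}
```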
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
Moving from Go module v4 to v5 prepares us for public releases.
Move done using gomove [1] as with the v3 and v4 moves.
[1] https://github.com/KSubedi/gomove
Signed-off-by: Matt Heon <mheon@redhat.com>
We were preserving ContainerStateExited, which is better than
nothing, but definitely not correct. A container that ran at any
point during the last boot should be moved to the Exited state, to
preserve the fact that it was run at least once. This means we
have to convert Running, Stopped, Stopping, and Paused containers to
Exited as well.
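A sketch of the mapping, using illustrative stand-ins for podman's
container state constants:

```go
package main

import "fmt"

type ctrState int

// Illustrative stand-ins for podman's container states.
const (
	stateConfigured ctrState = iota
	stateRunning
	stateStopped
	stateStopping
	statePaused
	stateExited
)

// refreshState sketches the post-reboot rule: anything that ran during
// the previous boot becomes Exited, preserving that it ran at least once.
func refreshState(s ctrState) ctrState {
	switch s {
	case stateRunning, stateStopped, stateStopping, statePaused, stateExited:
		return stateExited
	default:
		return s // e.g. Configured containers stay as they are
	}
}

func main() {
	fmt.Println(refreshState(statePaused) == stateExited) // true
}
```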
Signed-off-by: Matt Heon <mheon@redhat.com>
Currently we deadlock in the slirp4netns setup code as we try to
configure a non-existing netns. The problem happens because, since
commit bbd6281ecc, we correctly tear down the netns in the userns case,
but that introduced this slirp4netns problem. The code does a proper
new network setup later, so we should only use the shortcut when not
in a userns.
Fixes #21477
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
This field drags in a dependency on CNI and thereby blocks us from
disabling CNI support via a build tag.
[NO NEW TESTS NEEDED]
Signed-off-by: Dan Čermák <dcermak@suse.com>
If chowning a file or directory to a UID/GID pair fails with an error
like ENOTSUP or EPERM, we should ignore the error as long as the
UID/GID pair on disk is already correct.
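Roughly, with an illustrative helper name (the syscall constants and
standard-library calls are real):

```go
package main

import (
	"errors"
	"os"
	"syscall"
)

// chownBestEffort sketches the fix: if chown fails with ENOTSUP or
// EPERM but the owner on disk already matches, treat it as success.
func chownBestEffort(path string, uid, gid int) error {
	err := os.Chown(path, uid, gid)
	if err == nil ||
		(!errors.Is(err, syscall.ENOTSUP) && !errors.Is(err, syscall.EPERM)) {
		return err
	}
	var st syscall.Stat_t
	if serr := syscall.Stat(path, &st); serr == nil &&
		int(st.Uid) == uid && int(st.Gid) == gid {
		return nil // ownership is already correct; ignore the failure
	}
	return err
}

func main() {
	_ = chownBestEffort("/tmp", os.Getuid(), os.Getgid())
}
```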
Fixes: https://github.com/containers/podman/issues/20801
[NO NEW TESTS NEEDED]
Since this is difficult to test, the existing tests should be
sufficient to ensure there is no regression.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
We were ignoring relabel requests on certain unsupported file
systems and not on others; this change consistently logs ENOTSUP
file systems at the logrus.Debug level.
Fixes: https://github.com/containers/podman/discussions/20745
Still needs some work on the Buildah side.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
All `[]string`s in containers.conf have now been migrated to attributed
string slices, which requires some adjustments in Buildah and Podman.
[NO NEW TESTS NEEDED]
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
There is a potential race condition we are seeing: a message about
a removed container could also be caused by a container that is not
mounted, and this change should clarify which case is causing it.
Also, if the container does not exist, just warn the user
instead of reporting an error; there is not much the user can do.
Fixes: https://github.com/containers/podman/issues/19702
[NO NEW TESTS NEEDED]
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Updated the error message to suggest the user use the --replace option
to instruct Podman to replace the existing external container with a
newly created one.
Closes #16759
Signed-off-by: Chetan Giradkar <cgiradka@redhat.com>
When a userns and netns are used we need to let the runtime create the
netns, otherwise the netns is not owned by the right userns and thus
the capabilities would not be correct.
The current restart logic tries to reuse the netns, which is fine if no
userns is used. But when one is used we set up a new netns (which is
correct) but forgot to clean up the old netns. This resulted in leaked
network namespaces, and because no teardown was ever called, leaked
ipam assignments; thus a quickly restarting container would run out of
IP space very fast.
Fixes #18615
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
when running as a service, the c.state.Mounted flag could get out of
sync if the container is cleaned up through the cleanup process.
To avoid this, always check if the mountpoint is really present before
skipping the mount.
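A minimal sketch of the check, using the moby mountinfo package as an
illustrative helper; the function name is hypothetical:

```go
package main

import (
	"fmt"

	"github.com/moby/sys/mountinfo"
)

// shouldSkipMount sketches the fix: only skip mounting when the
// mountpoint is really mounted, not merely when the cached state flag
// (c.state.Mounted) says so, since that flag can go stale when the
// cleanup runs in a separate process.
func shouldSkipMount(stateMounted bool, mountpoint string) bool {
	if !stateMounted {
		return false
	}
	mounted, err := mountinfo.Mounted(mountpoint)
	return err == nil && mounted
}

func main() {
	fmt.Println(shouldSkipMount(true, "/proc"))
}
```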
[NO NEW TESTS NEEDED]
Closes: https://github.com/containers/podman/issues/17042
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
When a container is created and it is part of a pod, we ensure the pod
cgroup exists so limits can be applied on the pod cgroup.
Closes: https://github.com/containers/podman/issues/19175
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
We'd otherwise emit the start event long after the actual start of the
container when --sdnotify=healthy. I missed adding the change to commit
0cfd12786fd1.
[NO NEW TESTS NEEDED]
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
Add a new "healthy" sdnotify policy that instructs Podman to send the
READY message once the container has turned healthy.
Fixes: #6160
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
This file has not been present in BSD systems since 2.9.1 BSD and as far
as I remember /proc/mounts has never existed on BSD systems.
[NO NEW TESTS NEEDED]
Signed-off-by: Doug Rabson <dfr@rabson.org>
Handle more TOCTOUs operating on listed images. Also pull in
containers/common/pull/1520 and containers/common/pull/1522 which do the
same on the internal layer tree.
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=2216700
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
Most of the code moved there, so use it from there and remove it here.
Some extra changes are required here; this is a bit of a mess, and the
pipe handling makes it more difficult.
[NO NEW TESTS NEEDED] This is just a rework, existing tests must pass.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>