Do not save the container twice: once for changing the state and once for
removing running exec sessions. Saving the container is expensive, and
avoiding the redundant save makes `container rm` 1.2 times faster on my
workstation.
[NO NEW TESTS NEEDED]
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
Loading container states speeds things up when listing all containers,
but it comes with a price tag for many other call paths. Hence, make
loading the state conditional so that `podman ps` stays fast without
other commands regressing in performance.
[NO NEW TESTS NEEDED]
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
Every time I look at a container-removal issue I wonder why the
container isn't locked directly here, so let's add a comment.
I am not sure whether it would be better if callers took care of
locking, but for now the comment will save future me and probably
other readers some time.
[NO NEW TESTS NEEDED]
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
Follow-up to 6886e80b45caae27dda81a9b44d8dd179c414580: when "podman rm -f"
is used on a container in "stopping" state, also make sure it is terminated
before removing it from local storage.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Do not allow removing the service container unless all associated
pods have been removed. Previously, the service container could be
removed once all pods had exited, which could lead to a number of issues.
Now, the service container is treated like an infra container and can
only be removed along with the pods.
Also make sure that a pod is unlinked from the service container once
it is removed.
Fixes: #16964
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
When the new `events_container_create_inspect_data` option is enabled in
containers.conf, set the `ContainersInspectData` event field for each
container-create event.
The data was requested for the purpose of auditing (e.g., intrusion
detection).
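For illustration, a rough sketch of how this might be used (placing the
option under the [engine] table of containers.conf is an assumption):
```
cat >> ~/.config/containers/containers.conf <<'EOF'
[engine]
events_container_create_inspect_data=true
EOF

podman create --name demo alpine true
# the container-create event should now carry the inspect data
podman events --since 1m --filter event=create --stream=false
```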
Jira: https://issues.redhat.com/browse/RUN-1702
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
Including the RawInput in the API output is meaningless.
Fixes: #16497
[NO NEW TESTS NEEDED]
Signed-off-by: Toshiki Sonoda <sonoda.toshiki@fujitsu.com>
Remove the container/pod ID file along with the container/pod. Such files
are primarily used in the context of systemd and are neither useful nor
needed once a container/pod has ceased to exist.
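A quick sketch of the expected behavior (names and paths are made up):
```
podman run -d --cidfile /tmp/demo.cid --name demo alpine sleep 100
podman rm -f demo
cat /tmp/demo.cid   # now fails: the ID file is removed together with the container
```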
Fixes: #16387
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
The same check is already in place when using cgroupfs, but not when
using the systemd cgroup backend. The check is needed to avoid a
confusing error from the OCI runtime.
Closes: https://github.com/containers/podman/issues/16376
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
We added the concept of image volumes in 2.2.0, to support
inspecting an image from within a container. However, this is a
strictly read-only mount, with no modification allowed.
By contrast, the new `image` volume driver creates a c/storage
container as its underlying storage, so we have a read/write
layer. This, in and of itself, is not especially interesting, but
what it will enable in the future is. If we add a new command to
allow these image volumes to be committed, we will be able to distribute
volumes - and changes to them - via a standard OCI image registry
(which is rather new and quite exciting).
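For illustration, a hypothetical invocation (the exact --opt key for the
source image is an assumption, not necessarily the final CLI):
```
podman volume create --driver image --opt image=alpine:latest myvol
podman run --rm -v myvol:/data alpine touch /data/hello   # read/write, unlike image mounts
```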
Future work in this area:
- Add support for `podman volume push` (commit volume changes and
push resulting image to OCI registry).
- Add support for `podman volume pull` (currently, we require
that the image a volume is created from be already pulled; it
would be simpler if we had a dedicated command that did the
pull and made a volume from it)
- Add support for scratch images (make an empty image on demand
to use as the base of the volume)
- Add UOR support to `podman volume push` and
`podman volume pull` to enable both with non-image volume
drivers
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Originally, during pod removal, we locked every container in the
pod at once, did a number of validity checks to ensure everything
was safe, and then removed all the containers in the pod.
A deadlock was recently discovered with this approach. In brief,
we cannot lock the entire pod (or much more than a single
container at a time) without causing a deadlock. As such, we
converted to an approach where we just looped over each container
in the pod, removing them individually. Unfortunately, this
removed a lot of the validity checking of the earlier approach,
allowing for a lot of unintended bad things. Infra containers
could be removed while containers in the pod still depended on
them, for example.
There's no easy way to do validity checks while in a simple loop,
so I implemented a version of our graph-traversal logic that
currently handles pod start. This version acts in the reverse
order of startup: startup starts from containers which depend on
nothing and moves outwards, while removal acts on containers which
have nothing depend on them and moves inwards. By doing graph
traversal, we can guarantee that nothing is removed while
something that depends on it still exists - so the infra
container should be the last thing in a pod that is removed, for
example.
In the (unlikely) case that a graph of the pod's containers
cannot be built (most likely impossible without database editing),
the old method of pod removal has been retained to ensure that
even misbehaving pods can be forcibly evicted from the state.
I'm fairly confident that this resolves the problem, but there
are a lot of assumptions around dependency structure built into
the original pod removal code and I am not 100% sure I have
captured all of them.
Fixes #15526
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
In view of https://github.com/containers/storage/pull/1337, do this:
for f in $(git grep -l stringid.GenerateNonCryptoID | grep -v '^vendor/'); do
sed -i 's/stringid.GenerateNonCryptoID/stringid.GenerateRandomID/g' $f;
done
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
Podman adds an "Error: " prefix to every error message. So starting an
error message with "error" ends up being reported to the user as
Error: error ...
This patch removes the stutter.
Also, ioutil.ReadFile errors already report the path, so wrapping the error
message with the path causes a stutter.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
In case of a hard OS shutdown, containers may be left in the "removing"
state after a reboot, and an attempt to remove pods with such
containers fails:
error freeing lock for container ...: no such file or directory
[NO NEW TESTS NEEDED]
Signed-off-by: Mikhail Khachayants <tyler92@inbox.ru>
This mount has never been standard on FreeBSD, which prefers to use /tmp or
/var/tmp, optionally with tmpfs, to ensure data is lost on a reboot.
[NO NEW TESTS NEEDED]
Signed-off-by: Doug Rabson <dfr@rabson.org>
When a kube yaml has a volume set as an empty dir, podman
will create an anonymous volume with the empty dir's name and
attach it to the containers running in the pod. When the pod
is removed, the empty dir volume created is also removed.
Add tests and docs for this as well.
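A minimal sketch of the flow (pod and volume names are made up):
```
cat > demo.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - name: ctr
    image: alpine
    command: ["sleep", "100"]
    volumeMounts:
    - name: scratch
      mountPath: /scratch
  volumes:
  - name: scratch
    emptyDir: {}
EOF
podman play kube demo.yaml
podman volume ls                    # an anonymous volume for "scratch" shows up
podman play kube --down demo.yaml   # tearing down the pod removes that volume again
```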
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
We now use the golang error wrapping format specifier `%w` instead of
the deprecated github.com/pkg/errors package.
[NO NEW TESTS NEEDED]
Signed-off-by: Sascha Grunert <sgrunert@redhat.com>
The new version of runc has the same check in place and automatically
resumes the container if it is paused. So when Podman tries to resume it
again, it fails since the container is no longer in the paused state.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2100740
[NO NEW TESTS NEEDED] the CI doesn't use a new runc on cgroup v1 systems.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
This bug was introduced in https://github.com/containers/podman/pull/8906.
When we run 'podman rm/restart/stop/kill etc.' on a container started with
--rm, the OCI runtime directory remains at /run/<runtime name> (root user)
or /run/user/<user id>/<runtime name> (rootless user).
This bug could cause other bugs.
For example, when we checkpoint a container running with --rm
(podman checkpoint --export) and restore it (podman restore --import) with
crun, the error message
"Error: OCI runtime error: crun: container `<container id>` already exists"
is printed.
This error is caused by an attempt to restore the container with the same
container ID as the leftover OCI runtime container.
Therefore, make the cleanupRuntime() function remove the OCI runtime
directory even if the container has already been removed by the --rm option.
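For illustration, the failing sequence looked roughly like this (paths are
made up; checkpoint/restore requires CRIU and a supported runtime):
```
podman run -d --rm --name demo alpine sleep 1000
podman container checkpoint --export /tmp/demo.tar demo
# before this fix, stale runtime state for "demo" could make the restore
# fail with "container <id> already exists"
podman container restore --import /tmp/demo.tar
```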
Signed-off-by: Toshiki Sonoda <sonoda.toshiki@fujitsu.com>
Libpod requires that all volumes are stored in the libpod db. Because
volumes can be created by volume plugins outside of podman, podman will
not show all volumes that are actually available. The new podman volume
reload command allows users to sync the libpod db with their external
volume plugins: all new volumes from the plugin are created in the libpod
db, and when a volume from the db no longer exists in the plugin it is
removed if possible.
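Usage is a single command, e.g.:
```
podman volume ls       # may be out of sync with the plugin
podman volume reload   # sync the libpod db with the plugin's view
podman volume ls
```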
There are some problems:
- naming conflicts: in this case we only use the first volume found, which
is not deterministic.
- race conditions: we have no control over the volume plugins, so it is
possible that the volumes change while we run this command.
Fixes #14207
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
* Replace "setup", "lookup", "cleanup", "backup" with
"set up", "look up", "clean up", "back up"
when used as verbs. Also replace variations of those.
* Improve language in a few places.
Signed-off-by: Erik Sjölund <erik.sjolund@gmail.com>
Support running `podman play kube` in systemd by exploiting the
previously added "service containers". During `play kube`, a service
container is started before all the pods and containers, and is stopped
last. The service container communicates its conmon PID via sdnotify.
Add a new systemd template to dispatch such k8s workloads. The argument
of the template is the path to the k8s file. Note that the path must be
escaped for systemd not to bark:
Let's assume we have a `top.yaml` file in the home directory:
```
$ escaped=$(systemd-escape ~/top.yaml)
$ systemctl --user start podman-play-kube@$escaped.service
```
Closes: https://issues.redhat.com/browse/RUN-1287
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
Removing exec sessions is guaranteed to evict them from the DB,
but in the case of a zombie process (or similar) it may error and
block removal of the container. A subsequent run of `podman rm`
would succeed (because the exec sessions have been purged from
the DB), which is potentially confusing to users. So let's just
continue, instead of erroring out, if removing exec sessions
fails.
[NO NEW TESTS NEEDED] I wouldn't want to spawn a zombie in our
test VMs even if I could.
Fixes #14252
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
The errcheck linter makes sure that errors are always checked and not
ignored by accident. It spotted a lot of unchecked errors, mostly in the
tests but also some real problems in the code.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
`--mount` should allow setting driver-specific options using
`volume-opt` when `type=volume` is set.
This ensures parity with docker's `volume-opt`.
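A sketch of the intended usage (the tmpfs options are just an example of
driver-specific settings):
```
podman run --rm \
  --mount type=volume,source=demovol,destination=/data,volume-opt=type=tmpfs,volume-opt=device=tmpfs,volume-opt=o=size=10m \
  alpine df -h /data
```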
Signed-off-by: Aditya R <arajan@redhat.com>
This commit ensures that podman generates a valid event on `podman
container rename`; the event specifies that it is a rename event and that
the container name switched to the new name.
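For illustration (assuming "rename" is accepted as an event filter value):
```
podman create --name oldname alpine true
podman rename oldname newname
podman events --since 1m --filter event=rename --stream=false
```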
Signed-off-by: Aditya R <arajan@redhat.com>
This primarily served to protect us against shutting down the
Libpod runtime while operations (like creating a container) were
happening. However, it was very inconsistently implemented (a lot
of our longer-lived functions, like pulling images, just didn't
implement it at all...) and I'm not sure how much we really care
about this very-specific error case?
Removing it also removes a lot of potential deadlocks, which is
nice.
[NO NEW TESTS NEEDED]
Signed-off-by: Matthew Heon <mheon@redhat.com>
When removing a container created with --volumes-from pointing at a
container created with a built-in volume, we complain if the original
container still exists. Since this is an expected state, we should not complain
about it.
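A sketch of the scenario (the built-in volume is approximated here with an
anonymous -v volume):
```
podman create --name source -v /data alpine true
podman create --name consumer --volumes-from source alpine true
podman rm consumer    # previously complained because "source" still exists
```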
Fixes: https://github.com/containers/podman/issues/12808
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Checkpoint/restore pod tests do not run with an older runc, and now
that runc 1.1.0 appears in the repositories it was detected that the
tests were failing. This was not caught in CI as CI was not using runc
1.1.0 yet.
Signed-off-by: Adrian Reber <areber@redhat.com>
When running on NFS, a RemoveAll could cause EBUSY because of some
unlinked files that are still kept open and "silly renamed" to
.nfs$ID.
This is only half of the fix, as conmon needs to be fixed too.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2040379
Related: https://github.com/containers/conmon/pull/319
[NO NEW TESTS NEEDED] as it requires NFS as the underlying storage.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
We should not check whether the network supports DNS when we create a
container with network aliases. This can be the case for containers
created by docker-compose, for example, if the dnsname plugin is not
installed or the user uses a macvlan config where we do not support DNS.
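For example (eth0 is just a placeholder for a parent interface):
```
podman network create -d macvlan -o parent=eth0 --subnet 10.89.5.0/24 mvnet
podman create --network mvnet --network-alias web --name demo alpine true   # no longer rejected
```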
Fixes #12972
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Support removing the entire pod when --depend is used on an infra
container. --all now implies --depend to properly support removing all
containers and not error out when hitting infra containers.
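A rough sketch (the pod-inspect format field name is an assumption):
```
podman pod create --name demo
podman create --pod demo --name worker alpine true
infra=$(podman pod inspect demo --format '{{.InfraContainerID}}')
podman rm --depend $infra   # removes the infra container and everything depending on it
podman rm --all             # now implies --depend, so infra containers no longer error out
```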
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>