Adding the journald configuration broke decoding the default
libpod.conf, because it was placed after the [runtimes] table and
was therefore interpreted as a member of that table rather than of
the top-level config. We can't easily fix this on the TOML side,
so our best bet is to move it above the table and add a comment to
help ensure this doesn't happen again.
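A minimal sketch of the TOML semantics involved (the option and
runtime names here are illustrative, not libpod.conf verbatim):
every key/value pair after a table header belongs to that table, so
a top-level option placed below [runtimes] is decoded into the
table and fails, since its value has the wrong type there.

```go
package main

import (
	"fmt"

	"github.com/BurntSushi/toml"
)

// Config mirrors the shape of the problem: a top-level option plus
// a [runtimes] table of string slices.
type Config struct {
	EventsLogger string              `toml:"events_logger"`
	Runtimes     map[string][]string `toml:"runtimes"`
}

// In the broken layout the journald option sits below the table
// header, so TOML treats it as a member of [runtimes] and decoding
// fails (a string is not a string slice).
const broken = `
[runtimes]
runc = ["/usr/bin/runc"]
events_logger = "journald"
`

// Moving the option above the table restores the intended meaning.
const fixed = `
events_logger = "journald"

[runtimes]
runc = ["/usr/bin/runc"]
`

func main() {
	var c Config
	if _, err := toml.Decode(broken, &c); err != nil {
		fmt.Println("broken layout:", err)
	}
	if _, err := toml.Decode(fixed, &c); err == nil {
		fmt.Println("fixed layout, events_logger =", c.EventsLogger)
	}
}
```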
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
If the systemd development files are not present on the system which
builds podman, then `podman events` will error on runtime creation.
Besides this, a warning will be printed when compiling podman.
This commit mainly exists because projects which depend on libpod
may not need podman's event support and therefore should not have
to rely on the systemd headers.
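A rough sketch of the build-tag approach, with hypothetical file
contents and names (libpod's actual interface and functions
differ): when built without the "systemd" tag, the journald-backed
eventer errors at runtime creation instead of failing the build.

```go
// +build !systemd

package events

import "errors"

// Eventer is a stand-in interface to keep the sketch self-contained.
type Eventer interface{}

// newEventJournalD is hypothetical: compiled without systemd
// support, constructing a journald eventer always errors.
func newEventJournalD() (Eventer, error) {
	return nil, errors.New("journald events not supported: binary built without systemd")
}
```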
Signed-off-by: Sascha Grunert <sgrunert@suse.com>
When using CGroupfs, we see races during pod removal between
removing the CGroup and the cleanup process starting inside it
(which blocks the removal).
The simplest way to avoid this is to prevent the forking of the
cleanup process. Conveniently, we can do this via the CGroup that
we already created for Conmon - we just need to update the PID
limit to 0, which completely inhibits new forks.
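A minimal sketch of the idea, assuming a cgroup v1 pids controller
mounted in the usual place (the path and helper name are
illustrative): writing 0 to pids.max makes every subsequent
fork/clone in the cgroup fail, while already-running processes are
unaffected.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// inhibitForks sets the pids controller's limit for the given
// cgroup to 0, so no cleanup process can be forked into it while
// the pod is being torn down.
func inhibitForks(cgroup string) error {
	limit := filepath.Join("/sys/fs/cgroup/pids", cgroup, "pids.max")
	return os.WriteFile(limit, []byte("0"), 0o644)
}

func main() {
	if err := inhibitForks("conmon-example.scope"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```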
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Buildah no longer updates the create time of single-action images
(e.g. `FROM ...` with no other instructions). This isn't a bug (it
matches Docker's behavior), but it broke one of our tests.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Instead of rewriting the logic, reuse the standard logic we use
for removing containers, which is much better tested.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Allow the user to define a remote host and remote username for
their remote podman sessions. This is then fed to the varlink
"bridge" as the ssh credentials and endpoint.
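Roughly, the credentials end up shaping an ssh invocation along
these lines (a sketch only, not podman's literal command line; the
helper and sample values are hypothetical):

```go
package main

import (
	"fmt"
	"os/exec"
)

// bridgeCommand is illustrative: the configured username and remote
// host become the ssh endpoint that the varlink bridge runs over.
func bridgeCommand(username, host string) *exec.Cmd {
	return exec.Command("ssh", "-T", fmt.Sprintf("%s@%s", username, host),
		"varlink", "-A", "podman varlink $VARLINK_ADDRESS", "bridge")
}

func main() {
	fmt.Println(bridgeCommand("core", "myhost.example.com").String())
}
```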
Signed-off-by: baude <bbaude@redhat.com>
Ensure that, if an error occurs somewhere along the way when we
remove a pod, it's preserved until the end and returned, even as
we continue to remove the pod.
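The pattern is roughly the following self-contained sketch (names
are hypothetical, not libpod's API): keep the first error, log any
later ones, and only return once every container has been tried.

```go
package main

import (
	"errors"
	"fmt"
	"log"
)

// removeAll preserves the first error it sees, logs the rest, and
// returns the preserved error only after trying every item.
func removeAll(ids []string, remove func(string) error) error {
	var removalErr error
	for _, id := range ids {
		if err := remove(id); err != nil {
			if removalErr == nil {
				removalErr = err // preserve the first failure
			} else {
				log.Printf("removing %s: %v", id, err)
			}
		}
	}
	return removalErr
}

func main() {
	err := removeAll([]string{"a", "b", "c"}, func(id string) error {
		if id == "b" {
			return errors.New("container b is busy")
		}
		return nil
	})
	fmt.Println("pod removal finished, first error:", err)
}
```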
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Removing a pod must first remove all containers in the pod.
Libpod requires the state to remain consistent at all times, so
all references to a deleted pod must be cleaned up first.
Pods can have many containers in them. We presently iterate
through all of them, and if an error occurs trying to clean up
and remove any single container, we abort the entire operation
(but cannot recover anything already removed - pod removal is not
an atomic operation).
Because of this, if a removal error occurs partway through, we
can end up with a pod in an inconsistent state that is no longer
usable. What's worse, if the error is in the infra container, and
it's persistent, we get zombie pods - completely unable to be
removed.
When we saw some of these same issues with containers not in
pods, we modified the removal code there to aggressively purge
containers from the database, then try to clean up afterwards.
Take the same approach here, and make cleanup errors nonfatal.
Once we've gone ahead and removed containers, we need to see
pod deletion through to the end - we'll log errors but keep
going.
Also, fix some other small things (most notably, we didn't
generate events for the containers we removed).
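In sketch form, with hypothetical helpers standing in for libpod's
state, cleanup, and event code: once a container has been purged
from the database, later failures are logged rather than returned,
and a removal event is emitted for each container, so pod deletion
always runs to completion.

```go
package main

import (
	"fmt"
	"log"
)

type ctr struct{ id string }

func (c *ctr) purgeFromDB() error { return nil } // keeps state consistent
func (c *ctr) cleanup() error     { return fmt.Errorf("%s: cleanup race", c.id) }
func (c *ctr) emitRemoveEvent()   { fmt.Println("event: removed", c.id) }

func removePod(ctrs []*ctr) error {
	for _, c := range ctrs {
		if err := c.purgeFromDB(); err != nil {
			return err // the database removal itself stays fatal
		}
		if err := c.cleanup(); err != nil {
			log.Printf("nonfatal cleanup error: %v", err) // keep going
		}
		c.emitRemoveEvent()
	}
	return nil
}

func main() {
	_ = removePod([]*ctr{{id: "infra"}, {id: "web"}})
}
```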
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
The remote-podman checkpoint and restore commands were implemented
some time ago but, for some reason, were never added to the
container subcommand.
Signed-off-by: baude <bbaude@redhat.com>
After a reboot, when we refreshed Podman's state, we retrieved the
lock from the fresh SHM instance, but did not mark it as allocated,
so nothing prevented it from being handed out to other containers
and pods.
Provide a method for marking locks as in-use, and use it when we
refresh Podman state after a reboot.
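A pared-down sketch of the mechanism (the type and method names are
illustrative, not libpod's actual lock-manager API): on refresh,
every lock index recorded in the database is re-marked as taken in
the fresh SHM segment before any new allocation can occur.

```go
package main

import "fmt"

// LockManager is a stand-in for the SHM lock manager.
type LockManager struct {
	taken map[uint32]bool
}

// AllocateGivenLock marks a specific lock index as in use, so a
// post-reboot refresh can reclaim the indices stored in the state.
func (m *LockManager) AllocateGivenLock(id uint32) error {
	if m.taken[id] {
		return fmt.Errorf("lock %d is already allocated", id)
	}
	m.taken[id] = true
	return nil
}

func main() {
	m := &LockManager{taken: make(map[uint32]bool)}
	// On refresh, re-mark every lock index recorded in the database.
	for _, id := range []uint32{0, 3, 7} {
		if err := m.AllocateGivenLock(id); err != nil {
			fmt.Println("refresh:", err)
		}
	}
	// A later allocation can no longer be handed the same index.
	fmt.Println(m.AllocateGivenLock(3))
}
```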
Fixes #2900
Signed-off-by: Matthew Heon <matthew.heon@pm.me>