Use network slirp4netns for the registry container to work around a
pasta regression (#23517). This should be reverted once the issue is
fixed in pasta and the fix is included in our CI images.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Use Schedule "afterInstallExecute" (instead of the
default "afterInstallValidate") in the Windows
installer MajorUpgrade element. That avoids
overriding any changes users may have made to the
podman machine configuration file created by the
installer.
Fixes #23502
Signed-off-by: Mario Loriedo <mario.loriedo@gmail.com>
It's too difficult to keep the podman-machine image up-to-date.
And, we can't use the cache on Mac/Windows, so if quay is down
we're hosed no matter what.
Add a "nocache" mechanism to install_test_configs() and use that
in machine test setup.
Signed-off-by: Ed Santiago <santiago@redhat.com>
if idmap is specified for a volume, reverse the mappings when copying
up from the container, so that the original permissions are maintained.
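To illustrate the idea, here is a minimal, self-contained Go sketch of
reversing an ID mapping; the idMap type and reverseMaps helper are
hypothetical stand-ins for illustration, not the actual libpod/idtools
code:

    package main

    import "fmt"

    // idMap mirrors the shape of a single ID-mapping entry
    // (container ID range -> host ID range). Hypothetical local type,
    // not the struct libpod actually uses.
    type idMap struct {
            ContainerID int
            HostID      int
            Size        int
    }

    // reverseMaps swaps the container and host side of every entry so
    // the mapping can be applied in the opposite direction, e.g. when
    // copying files up from the container into an idmapped volume.
    func reverseMaps(maps []idMap) []idMap {
            out := make([]idMap, 0, len(maps))
            for _, m := range maps {
                    out = append(out, idMap{
                            ContainerID: m.HostID,
                            HostID:      m.ContainerID,
                            Size:        m.Size,
                    })
            }
            return out
    }

    func main() {
            fwd := []idMap{{ContainerID: 0, HostID: 100000, Size: 65536}}
            fmt.Println(reverseMaps(fwd)) // [{100000 0 65536}]
    }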
Closes: https://github.com/containers/podman/issues/23467
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
BATS teardown logs are unreadable, making it almost impossible
to see tiny "Leaked this-or-that" messages.
Solution: a new _run_podman_quiet() helper that replaces run_podman
in a small number of cases within teardown. Clunky and duplicative,
sorry.
New leak_check helper: it basically spits out warnings (and bumps the
error count) if it sees any output whatsoever from the individual
"podman XXX ls" commands.
Signed-off-by: Ed Santiago <santiago@redhat.com>
The network cleanup cannot handle being killed halfway through, as it
spits out a bunch of errors on the next cleanup attempt in that case.
Try to avoid getting into such a state by ignoring SIGTERM during this
section.
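As a rough sketch of the idea (assumed helper name, not libpod's actual
shutdown handling), SIGTERM can be masked around the critical section
with the standard os/signal package:

    package main

    import (
            "fmt"
            "os/signal"
            "syscall"
    )

    // withTermIgnored runs fn with SIGTERM ignored so the process is
    // not interrupted in the middle of a non-reentrant cleanup step.
    // SIGKILL cannot be blocked, so this only narrows the window.
    // (Hypothetical helper, for illustration only.)
    func withTermIgnored(fn func() error) error {
            signal.Ignore(syscall.SIGTERM)
            defer signal.Reset(syscall.SIGTERM)
            return fn()
    }

    func main() {
            err := withTermIgnored(func() error {
                    fmt.Println("tearing down network...")
                    return nil
            })
            fmt.Println(err)
    }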
Of course we can still get SIGKILL, so we should work on fixing the
underlying problems in network cleanup, but let's see if this helps us
with the CI flakes in the meantime.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Fix up a couple of versions in comments in the
pkg/api/server/register_images.go file. Based on comments
from #23440
Signed-off-by: tomsweeneyredhat <tsweeney@redhat.com>
When using service containers and play kube we create a complicated set
of dependencies.
First, in a pod all conmon/container cgroups are part of one slice; that
slice will be removed when the entire pod is stopped, resulting in
systemd killing all processes that were part of it.
Now the issue here lies in how stopPodIfNeeded() and
stopIfOnlyInfraRemains() work: once a container is cleaned up, they
check whether the pod should be stopped depending on the pod ExitPolicy.
If so, they stop all containers in that pod. However, in our flaky test
we called podman pod kill, which logically killed all containers
already. Thus the logic now thinks that on cleanup it must stop the pod
and calls into pod.stopWithTimeout(). There we try to stop, but because
all containers are already stopped it just throws errors and never gets
to the point where it would call Cleanup(). So the code does not do
cleanup and eventually calls removePodCgroup(), which will cause all
conmon and other podman cleanup processes of this pod to be killed.
Thus the podman container cleanup process was likely killed while
actually trying to do the proper cleanup, which leaves us in a bad
state.
Following commands such as podman pod rm will try to do the cleanup
again as they see it was not completed, but then fail as they are unable
to recover from the partial cleanup state.
Long term, network cleanup needs to be more robust and ideally should be
idempotent to handle cases where cleanup was killed in the middle.
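The sketch below shows what "idempotent" means here in Go terms: a
cleanup step that treats "already gone" as success, so a second pass
after an interrupted one does not fail. It is illustrative only, with a
made-up removeNetNSFile helper, not libpod's real teardown code:

    package main

    import (
            "errors"
            "fmt"
            "os"
    )

    // removeNetNSFile is an example of an idempotent cleanup step: it
    // treats "already removed" as success instead of an error, so the
    // whole cleanup can safely be re-run after a partial, interrupted
    // pass.
    func removeNetNSFile(path string) error {
            err := os.Remove(path)
            if err == nil || errors.Is(err, os.ErrNotExist) {
                    return nil
            }
            return fmt.Errorf("removing netns file %q: %w", path, err)
    }

    func main() {
            // The second call still succeeds even though the file is
            // already gone.
            _ = removeNetNSFile("/tmp/example-netns")
            fmt.Println(removeNetNSFile("/tmp/example-netns"))
    }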
Fixes #21569
Signed-off-by: Paul Holzinger <pholzing@redhat.com>