Use Schedule "afterInstallExecute" (instead of the
default "afterInstallValidate") in the Windows
installer MajorUpgrade element. That avoids
overwriting any changes users may have made to the
podman machine configuration file created by the
installer.
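For reference, a minimal sketch of such an element (Schedule is the
value described above; the other attributes are illustrative):

    <!-- With Schedule="afterInstallExecute", RemoveExistingProducts
         runs after InstallExecute, so files laid down by the old
         version and since modified by the user are not removed and
         re-created during the upgrade. -->
    <MajorUpgrade Schedule="afterInstallExecute"
                  DowngradeErrorMessage="A newer version is already installed." />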
Fixes #23502
Signed-off-by: Mario Loriedo <mario.loriedo@gmail.com>
It's too difficult to keep the podman-machine image up-to-date.
And we can't use the cache on Mac/Windows, so if quay is down
we're hosed no matter what.
Add a "nocache" mechanism to install_test_configs() and use that
in machine test setup.
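A rough sketch of the idea (hypothetical flag handling; the real
helper's internals differ):

    # Hypothetical sketch: accept a --nocache flag and have the
    # installed test configs bypass any image cache.
    function install_test_configs() {
        local nocache=
        if [[ "$1" = "--nocache" ]]; then
            nocache=1
            shift
        fi
        # ...write containers.conf and friends as usual; when $nocache
        # is set, write the variant that bypasses the image cache...
    }

    # In machine test setup:
    install_test_configs --nocache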
Signed-off-by: Ed Santiago <santiago@redhat.com>
BATS teardown logs are unreadable, making it almost impossible
to see tiny "Leaked this-or-that" messages.
Solution: a new _run_podman_quiet() helper that replaces run_podman
in a small number of cases within teardown. Clunky and
duplicative, sorry.
Also a new leak_check helper: it basically spits out warnings (and
bumps the error count) if it sees any output whatsoever from
individual "podman XXX ls" commands.
Signed-off-by: Ed Santiago <santiago@redhat.com>
Fix up a couple of version numbers in comments in the
pkg/api/server/register_images.go file, based on comments
from #23440.
Signed-off-by: tomsweeneyredhat <tsweeney@redhat.com>
When using service containers and play kube we create a complicated
set of dependencies.
First, in a pod, all conmon/container cgroups are part of one slice.
That slice will be removed when the entire pod is stopped, resulting
in systemd killing all processes that were part of it.
Now the issue here is around how stopPodIfNeeded() and
stopIfOnlyInfraRemains() work: once a container is cleaned up, they
check whether the pod should be stopped, depending on the pod
ExitPolicy. If this is the case, they stop all containers in that
pod. However, in our flaky test we called podman pod kill, which had
logically already killed all containers. Thus the logic thinks that
on cleanup it must stop the pod, and calls into
pod.stopWithTimeout(). There we try to stop, but because all
containers are already stopped it just throws errors and never gets
to the point where it would call Cleanup(). So the code does not do
cleanup and eventually calls removePodCgroup(), which causes all
conmon and other podman cleanup processes of this pod to be killed.
Thus the podman container cleanup process was likely killed while
actually trying to do the proper cleanup, which leaves us in a bad
state.
Following commands such as podman pod rm will try to do the cleanup
again, as they see it was not completed, but then fail as they are
unable to recover from the partial cleanup state.
Long term, network cleanup needs to be more robust and ideally
should be idempotent, to handle cases where cleanup was killed in
the middle.
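As an illustration of that goal (a hypothetical helper, not libpod
code): treat "already gone" as success, so a re-run can finish what
an interrupted cleanup started.

    package cleanup

    import (
        "errors"
        "fmt"
        "os"

        "golang.org/x/sys/unix"
    )

    // CleanupNetns is a hypothetical idempotent cleanup step:
    // "already gone" counts as success, so running it twice is safe.
    func CleanupNetns(path string) error {
        // EINVAL: not a mount point (already unmounted);
        // ENOENT: path already removed.
        err := unix.Unmount(path, unix.MNT_DETACH)
        if err != nil &&
            !errors.Is(err, unix.EINVAL) &&
            !errors.Is(err, unix.ENOENT) {
            return fmt.Errorf("unmounting netns %s: %w", path, err)
        }
        // Removing a file that is already gone is likewise fine.
        if err := os.Remove(path); err != nil && !errors.Is(err, os.ErrNotExist) {
            return err
        }
        return nil
    }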
Fixes #21569
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
This commit was automatically cherry-picked
by buildah-vendor-treadmill v0.3
from the buildah vendor treadmill PR, #13808
Changes since 2024-05-21:
* Document --compat-volumes
* Fix conflict caused by Ed's local-registry PR in buildah
Signed-off-by: Ed Santiago <santiago@redhat.com>
Signed-off-by: tomsweeneyredhat <tsweeney@redhat.com>
Split the table into three based on the expected outcome.
Use helper functions to reduce the number of parameters required in
each entry.
Remove the service name override code.
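The pattern, sketched with hypothetical names and a stand-in
function under test (two of the three tables shown):

    package kube_test

    import (
        "errors"
        "testing"
    )

    type testCase struct {
        name  string
        input string
    }

    // newCase keeps each table entry down to what actually varies.
    func newCase(name, input string) testCase {
        return testCase{name: name, input: input}
    }

    var successCases = []testCase{
        newCase("simple", "valid"),
    }

    var errorCases = []testCase{
        newCase("empty input", ""),
    }

    // generate is a stand-in for the function under test.
    func generate(s string) (string, error) {
        if s == "" {
            return "", errors.New("empty input")
        }
        return s, nil
    }

    func TestGenerate(t *testing.T) {
        // No expectError flag: each table encodes its own outcome.
        for _, tc := range successCases {
            t.Run(tc.name, func(t *testing.T) {
                if _, err := generate(tc.input); err != nil {
                    t.Fatalf("unexpected error: %v", err)
                }
            })
        }
        for _, tc := range errorCases {
            t.Run(tc.name, func(t *testing.T) {
                if _, err := generate(tc.input); err == nil {
                    t.Fatal("expected an error")
                }
            })
        }
    }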
Signed-off-by: Ygal Blum <ygal.blum@gmail.com>