`runlabel` should be `podman container runlabel`. Note: I am not fixing
past references before this on purpose.
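For illustration only (not part of this fix), the corrected form is
invoked as `podman container runlabel LABEL IMAGE`, e.g. with a
hypothetical image:

    podman container runlabel install quay.io/example/myimage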
Fixes #22871
Signed-off-by: Brent Baude <bbaude@redhat.com>
The e2e tests already depend on skopeo anyway, and pulling an image of
over 300 MB does not help with flakes; most importantly, we see ENOSPC
flakes. I see them around the skopeo test, so I assume the big image is
pushing the tmpfs limits, causing other tests running in parallel to
start failing because of it.
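As a rough sketch of the kind of replacement (image name and tag are
made up), skopeo can copy a small test image without pulling anything big:

    skopeo copy docker://quay.io/example/smallimage:latest dir:$TMPDIR/smallimage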
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
The path mentioned above is linked in the sysadmin
article on running podman inside containers. The content
has since been moved and users are getting a 404 there now.
Add the path back with a readme pointing to the new location
of the content.
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
The expectation with --cgroups=disabled is that the current cgroup is
used by the container.
Currently --cgroups=disabled is passed directly to the OCI runtime,
but that doesn't stop Podman from creating a new cgroup when it
doesn't own the current one.
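A minimal sketch of the expected behavior (image and commands are
illustrative, not the actual reproducer):

    # with --cgroups=disabled the container should stay in the caller's
    # cgroup instead of getting a freshly created one
    cat /proc/self/cgroup
    podman run --rm --cgroups=disabled alpine cat /proc/self/cgroup
    # expectation: both commands report the same cgroup path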
Closes: https://github.com/containers/podman/issues/20910
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
The condition doesn't work when the runtime to use is specified
through its absolute path, as the error message then contains that path.
Simplify the check and just look for "read from the init process".
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Another new-VM import from
https://github.com/containers/automation_images/pull/338
...because of the usual conflict dealio in that repo. This
should mostly be a NOP. All the major work was done in #22706.
Signed-off-by: Ed Santiago <santiago@redhat.com>
podman.msi GUI has a radio-button to select WSL or Hyper-V
The checkbox in the podman.msi GUI allows the user to specify whether
the machine provider installation (WSL or Hyper-V) should
be part of the podman installation or not.
podman-setup.exe supports 2 new variables: MachineProvider
(valid values are `wsl` and `hyperv`) and HyperVCheckbox
(valid values are `0` and `1`)
Installation creates the configuration file
`99-podman-machine-provider.conf` under the folder
`%APPDATA%\containers\containers.conf.d` with the selected
machine provider.
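For illustration, the generated file would carry the provider selection
roughly like this (the exact contents are an assumption, not copied from
the installer):

    # %APPDATA%\containers\containers.conf.d\99-podman-machine-provider.conf
    [machine]
    provider = "hyperv"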
Cirrus CI `win_installer_task` tests the installation with
both `hyperv` and `wsl` and verifies the configuration.
Uninstallation is tested too.
Note that the podman-setup.exe GUI doesn't allow choosing the
provider yet. See https://github.com/containers/podman/issues/22492
Signed-off-by: Mario Loriedo <mario.loriedo@gmail.com>
The command does not react to SIGTERM, so kube down needs to wait 10s.
To fix it, first use a command that does, but also write the yaml
directly instead of doing the podman create && kube generate dance.
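A hedged sketch of the difference (names and image are made up):

    # "sleep" ignores SIGTERM, so stopping waits out the full 10s timeout
    podman run -d --name slow alpine sleep inf

    # a shell loop with a TERM trap exits right away when stopped
    podman run -d --name fast alpine sh -c 'trap "exit 0" TERM; while :; do sleep 0.5; done'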
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Use a command that stops on SIGTERM instead of sleep; that way the tests
can continue to use podman kube down without waiting for the full stop
timeout every time.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
This test is by far the slowest one, taking over a minute; the reason is
that it is checking every single podman command for shell completions.
The test is useful, but it does not need to check the "..." argument 3
times. Test a second time to make sure not only the first arg is
completed. This change makes it about 15 seconds faster.
Long term we should get this test out of the main system tests together
with other cli-only tests, as they do not need to run on each OS, etc.
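For context, the test exercises cobra's hidden completion entrypoint for
every subcommand, along these lines (the exact invocation in the test is
an assumption):

    # ask podman which completions it offers for an empty argument
    podman __complete run ''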
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
The current logic used podman logs for reasons I don't understand; all
we care about is the container output, and we can just read the same with
an attached podman run. Of course we have to move it into the background,
but it did the same with logs.
This also allows us to remove the extra log-driver checks, and because
podman logs seems to be much slower than the extra run, we save over 10s
with this change.
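A rough before/after sketch (container name, image, and output file are
illustrative):

    # before: detached run, then read the output back via podman logs
    podman run -d --name c1 alpine echo hi
    podman logs -f c1

    # after: read the same output from an attached run in the background
    podman run --name c1 alpine echo hi > "$outfile" &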
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Use only one retry and a short stop timeout to speed them up. I am not
sure if this will cause flakes; I have not seen any after trying for
some time, so I think this works just as well. And it is about 2-3
seconds faster for both tests.
If it does start to flake we can revert this commit again or write the
test differently.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Do not wait 5 seconds, just stop the container directly.
This speeds up the test by more than 4 seconds.
One could make the case here that we want to check podman wait, but
there are so many other podman wait tests that it should not matter.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Instead of iterating over all tmp dirs and creating test containers for
each one, we can just pass all files to one touch call. With that we have
to create far fewer containers while still checking the same thing. This
speeds up the test by about 4 seconds.
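A hedged sketch of the idea (paths and image are made up):

    # before: one container per directory
    for dir in /test/tmp1 /test/tmp2 /test/tmp3; do
        podman run --rm $IMAGE touch $dir/file
    done

    # after: one container, one touch call covering all the files
    podman run --rm $IMAGE touch /test/tmp1/file /test/tmp2/file /test/tmp3/file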
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
The test used sleep to synchronize the log output between both containers,
which is slow. There is actually no way to guarantee the ordering on
the reading side, so just remove the sleeps and check that the lines
within the same container are in the right order.
Trying to preserve the original ordering is just not possible if we speed
up the test, as it would flake too often.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
There is no reason for this check to wait 4 seconds for the container to
run; instead make sure to have a running process and then stop it
directly with -t0 so there is no delay.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
As agreed in Planning meeting of 2024-03-20, Podman 5.x will
drop support for cgroups v1 and for runc. Make it so.
CI images built in https://github.com/containers/automation_images/pull/338
Signed-off-by: Ed Santiago <santiago@redhat.com>
This container did not react to SIGTERM, thus we always waited 10s for it
to stop. Also do not wait 2s for the logs; instead use a retry loop.
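The retry loop replacing the fixed 2s wait looks roughly like this (a
sketch, not the actual test code):

    # poll for the log output instead of sleeping a fixed 2 seconds
    for i in $(seq 20); do
        output=$(podman logs mycontainer 2>/dev/null)
        [ -n "$output" ] && break
        sleep 0.1
    done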
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
The test does a normal stop on a command that does not react to SIGTERM.
As I cannot fix the system stop logic, use a command which does. This
saves us 10s as it no longer waits for the timeout.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Both tests take 10s longer than they need to because they run the sleep
command in the container, which does not react to SIGTERM; as such podman
waits 10s before killing it with SIGKILL.
To fix it, just stop them with podman rm -fa -t0 to avoid the wait, and do
not use podman kube down as we cannot set a timeout there. podman kube
down is still covered in many other tests, so this is not an issue.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
IMO it is not important to cover each case with each sdnotify policy. To
speed things up, we run all the exit code cases only once rather than
twice (once per policy), switching the sdnotify policy between each case.
This way we save 50% of the runs and should still have sufficient coverage.
Before it took around 24 seconds; with this it is around 12 seconds now.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
There is really no point in waiting 10s for the kill; let's use 2s, which
should be good enough to observe the timing.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
This test waits 15 seconds to send SIGTERM for no good reason; we can
just make the timeout shorter. Also make sure the podman command quits on
SIGTERM by looking for the output message.
While at it, fix the tests to use $PODMAN_TMPDIR, not /tmp, and define the
yaml in the test instead of using the podman create && podman kube
generate && podman rm way to create the yaml, as that is a bit slower
since we have to call three podman commands for it.
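A minimal sketch of defining the yaml directly (pod name, container name,
and image are placeholders):

    yaml="$PODMAN_TMPDIR/test.yaml"
    cat > "$yaml" <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: test-pod
    spec:
      containers:
      - name: ctr
        image: $IMAGE
        command: ["top"]
    EOF
    podman kube play "$yaml"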
Signed-off-by: Paul Holzinger <pholzing@redhat.com>