When a user specifies an invalid connection in CONTAINER_CONNECTION,
podman should return a proper error saying so. Currently it ignored the
error, and rootFlags() just exited early without defining any flags.
This then caused a panic when trying to use the flags later.
In order to address this, first store the connection error in the
PodmanConfig struct and do not abort right away during flag setup. This
is important as the user might have specified a flag with a valid remote
connection. As such we check all flags, and only when none were given do
we return the connection error.
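A sketch of the intended behavior (the error wording is illustrative,
not the literal message):

```bash
# invalid connection name alone: the stored connection error is returned
CONTAINER_CONNECTION=bogus podman ps
# => Error: connection "bogus" not found

# an explicit connection flag wins over the env var, so no error here
CONTAINER_CONNECTION=bogus podman --url unix:///run/podman/podman.sock ps
```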
Also while at it I noticed that the default connection reported via
podman --help was wrong, as it only used the old containers.conf field
and did not consider the podman-connections.json default.
New regression tests have been added to make sure it behaves correctly.
This fixes the problem reported in PR #22997.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Add a test to verify that restoring a container in a Pod works when
the `container restore --pod` option is used with a Pod *name*
(previously this functionality supported only a full Pod ID).
Signed-off-by: Radostin Stoyanov <rstoyanov@fedoraproject.org>
PR #22821 (CI speedup) was overly aggressive in one kube test.
It's flaking. Bump the timeout from 3s to 4s.
Signed-off-by: Ed Santiago <santiago@redhat.com>
At the end of all tests always check for leaks. That should make us more
robust against adding tests at the end that would otherwise leak stuff.
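A minimal sketch of such a suite-level check (leak_check stands in for
whatever helper does the real scanning):

```bash
teardown_suite() {
    # fail the whole suite if any container, pod, volume or network leaked
    leak_check || return 1
}
```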
TODO: something seems wrong with bats when returning an error in
teardown_suite(); it prints a warning:
bats warning: Executed <NUM+1> instead of expected <NUM> tests
Also the output is formatted weirdly in this case, with the podman
args split over multiple lines.
But the test fails as expected, so I don't think it is a problem.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
While these are not really slow, they still take about 100-250ms when I
time this locally. Given they are run for every test this adds up
quickly. Looking at CI logs I can see the timings for skipped
tests are all in the 600ms range, so I think it is safe to assume that
these functions need to get faster.
We have over 670 test cases currently, so we are talking about over 400s
spent in these functions in CI. This allows for big gains.
Now overall this is a tricky trade-off: while all tests should clean up
after themselves, there is no guarantee of that, so errors can be
leaked into other tests, making debugging much harder. To work at least
a bit against this, teardown checks whether the test was successful and
only skips the podman commands based on that. Without it a single flake
could cause all following tests to fail.
As such this commit does the proper setup once on suite start and then
only after a test failed.
In order for this to work at all we have to fix all leaks first, see
previous commits. And then, for the future, keep a very strong eye on
this during reviews.
Also add a PODMAN_BATS_LEAK_CHECK option.
By default tests must clean up after themselves, and to speed up CI we
no longer do any cleanup in teardown by default. However there are still
many cases where we might have to debug a leak, so add a new
PODMAN_BATS_LEAK_CHECK env option that can be set and should cause
teardown to fail if the test did not clean up properly.
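Roughly, the teardown logic then looks like this (helper names are
sketches; BATS_TEST_COMPLETED is the bats variable set to 1 on success):

```bash
teardown() {
    if [[ -n "$PODMAN_BATS_LEAK_CHECK" ]]; then
        # leak-check mode: anything left behind fails the test
        leak_check || return 1
    elif [[ "$BATS_TEST_COMPLETED" != "1" ]]; then
        # test failed: clean up so one flake doesn't cascade into later tests
        basic_teardown
    fi
    # on success: skip the podman cleanup commands entirely to save time
}
```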
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Remove leaking containers and remove unnecessary push/pull args. For
push it tries to push an image as the argument, which makes no sense,
and for pull we try to pull the argument as an image, which is also
wrong.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
This is the same as what --squash-all does, and we already support
--squash with --layers=true since that is the default.
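If I read this right, these two invocations should now be treated the
same (sketch, not verified against the final flag validation):

```bash
podman build --squash-all -t img .
podman build --squash --layers=false -t img .
```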
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
If a container was stopped and we try to start it before cleanup was
called, it tried to reuse the network, which caused a panic as the pasta
code cannot deal with that. It is also never correct, as the netns must
be created by the runtime in case custom user namespaces are used. As
such the proper thing is to clean up the netns first.
Also change an e2e test to report better errors. It is not directly
related to this change, but it failed on v1 of this patch so we noticed
the ugly error message it produced. Thanks to Ed for the fix.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
This fixes a regression added in commit 4fd84190b8: because the name was
overwritten by the createTimer() timer call, the removeTransientFiles()
call removed the new timer and not the startup healthcheck. And then
when the container was stopped we leaked the timer, as the wrong unit
name was in the state.
A new test has been added to ensure the logic works and we never leak
the system timers.
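A rough manual check for such a leak (assuming the transient unit name
contains the container ID):

```bash
ctr=$(podman run -d --health-cmd true --health-startup-cmd true $IMAGE sleep 100)
podman rm -f -t0 $ctr
# no transient healthcheck timer for this container should survive removal
systemctl list-timers --all | grep "$ctr" && echo "leaked timer!"
```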
Fixes #22884
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
The buildah build kill trick is bad as we have to sleep and wait to
avoid flakes, which takes time. Instead it is possible to redo this
build part manually with buildah commands. It is not trivial and harder
to understand, but it saves 2-3s so I think it is worth it.
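The manual equivalent of one build step looks roughly like this (image
name and command illustrative):

```bash
ctr=$(buildah from $IMAGE)
buildah run $ctr -- touch /some-file   # stands in for the RUN instruction
buildah commit $ctr built-image        # produces the image a build would have
buildah rm $ctr
```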
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
First, as root don't wait 5s for the timeout, 1s is enough. Also switch
to using the curl --max-time option instead; that way we know we do not
kill curl before it even had a chance to do anything.
Second, combine the podman inspect commands into one. This makes the
test faster by over one second as we save a bunch of podman commands.
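Both changes sketched (container name and format fields illustrative):

```bash
# let curl enforce the deadline itself instead of killing it from outside
curl --max-time 1 -s http://localhost:8080/

# one inspect fetching two values instead of two podman invocations
podman container inspect ctr1 \
    --format '{{.State.Status}} {{.NetworkSettings.IPAddress}}'
```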
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Another case of a container that does not exit on SIGTERM, so we waste
10s. Now, because our container reacts to SIGTERM and exits 0, the
systemd unit status changed from failed to inactive.
And most importantly, add Notify=yes because the socat call always
failed, as the default is to not leak the notify socket into the
container.
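For context, quadlet's Notify= option controls whether the sd-notify
socket is passed into the container; by default conmon handles the ready
notification and the socket never appears inside. A minimal .container
fragment (image is a placeholder):

```
[Container]
Image=quay.io/libpod/testimage:latest
# with Notify=yes the container sees $NOTIFY_SOCKET and can send READY=1
# itself (e.g. via socat); the default keeps the socket out of the container
Notify=yes
```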
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
It is not clear at all why a count of 30 was chosen; it seems like a
lot and of course takes quite a while. The test takes over 16s in CI.
To speed it up, reduce the count to 10. This should still be good
enough to ensure there are no races.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
It makes the test a bit uglier, but I cannot see a good way to speed
this up otherwise. I changed the created test to only start/stop the
container once instead of on every test case iteration. This makes it
about 2s faster locally.
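The shape of the change, roughly (test body elided):

```bash
# before: podman start/stop inside the loop, paying the cost per iteration
# after: pay it once
podman start $ctr
for testcase in "${testcases[@]}"; do
    check_one "$testcase"   # hypothetical per-case assertion
done
podman stop -t0 $ctr
```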
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Overall just combine several container runs into one. Every RUN
instruction runs a new container, which is quite expensive, so chain
the commands together. The same goes for podman runs.
I could have combined a bit more, but I think this leaves it still
readable. This speeds up the test by about 4s locally, from 8s before.
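The pattern, in Containerfile terms (commands illustrative):

```
# before: three RUN instructions, three containers
#   RUN mkdir /data
#   RUN touch /data/a
#   RUN chmod 600 /data/a
# after: one container
RUN mkdir /data && touch /data/a && chmod 600 /data/a
```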
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Use podman version over podman info, because info has to query a lot of
internal state, e.g. container and image counts, so it is slower than a
simple version call. This speeds the test up by about 600ms locally.
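Easy to confirm locally (timings vary):

```bash
time podman info >/dev/null      # walks store state, counts containers/images
time podman version >/dev/null   # only prints client/server versions
```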
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Issue #11825 was fixed a long time ago. Also, we no longer test
cni/dnsname, so there is really no point in having this.
Speeds up the test by 1 second.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Another case of a container that does not exit on SIGTERM, so we waste
10s. Now, because our container reacts to SIGTERM and exits 0, the
systemd unit status changed from failed to inactive.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
We don't have to loop twice for the stat call; we can just stat both
dirs at once. This means we only have to create half of the containers,
so the test is twice as fast.
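stat accepts multiple paths, so one container can cover both directories
(paths illustrative):

```bash
# before: one container run per directory; after: a single run
podman run --rm $IMAGE stat -c '%u:%g' /dir1 /dir2
```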
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Testing `podman system check` requires that we have a way to
intentionally introduce storage corruptions. Add a hidden `podman
testing` command that provides the necessary internal logic in
subcommands. Stub out the tunnel implementation for now.
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
Debian's hostname(5) man page states: "The file should contain a single
newline-terminated hostname string."
[NO NEW TESTS NEEDED]
fix #22729
Signed-off-by: Bo Wang <wangbob@uniontech.com>
The e2e tests already depend on skopeo anyway, and pulling an over
300 MB image is not helpful for flakes; most importantly we see ENOSPC
flakes. I see them around the skopeo test, so I assume the big image is
pushing the tmpfs limits and other tests running in parallel can start
failing because of it.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
The expectation with --cgroups=disabled is that the current cgroup is
used by the container.
Currently --cgroups=disabled is passed directly to the OCI runtime, but
that doesn't stop Podman from creating a new cgroup when it doesn't own
the current one.
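A quick way to observe the expected behavior (output paths differ per
system):

```bash
cat /proc/self/cgroup
# with --cgroups=disabled the container should report the same cgroup,
# i.e. podman must not have moved it into a freshly created one
podman run --rm --cgroups=disabled $IMAGE cat /proc/self/cgroup
```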
Closes: https://github.com/containers/podman/issues/20910
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
The condition doesn't work when the runtime to use is specified through
its absolute path, as the error message then contains that path.
Simplify the check and just look for "read from the init process".
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
The command does not react to SIGTERM, so kube down needs to wait 10s.
To fix it, first use a command that does, but also write the yaml
directly instead of doing the podman create && kube generate dance.
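Writing the yaml by hand is short (sketch; image is a placeholder — the
point is a PID 1 that actually handles SIGTERM):

```bash
cat > $yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: ctr
    image: $IMAGE
    command: ["top"]   # top installs a SIGTERM handler, unlike sleep as PID 1
EOF
podman kube play $yaml
podman kube down $yaml   # returns quickly, no 10s stop timeout
```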
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Use a command that stops on SIGTERM, not sleep; that way the tests can
continue to use podman kube down without waiting for the full stop
timeout every time.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>