This PR is a mishmash of updates needed so that the hyperv provider can
begin to pass the machine e2e tests.
Summary as follows:
* Added custom error handling for machine errors so that all providers
can generate the same formatted error messages (see the sketch after
this list). The ones implemented thus far are needed for the basic and
init tests. More will come as they are identified.
* Vendored new libhvee for better memory inspection. The memory type
changed from uint32 to uint64.
* Some machine e2e tests used Linux-specific utilities to check various
error conditions and messages (like pgrep). Those were made into
functions and implemented per operating system.
[NO NEW TESTS NEEDED]
Signed-off-by: Brent Baude <bbaude@redhat.com>
commit cf364703fc3f94cd759cc683e3ab9083e8ecc324 changed the way
/sys/fs/cgroup is mounted when there is no netns, and it now honors
the ro flag. The mount was created using a bind mount, which is a
problem when using a cgroup namespace; fix that by mounting a fresh
cgroup file system.
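For illustration only, a rough sketch of the idea (not the actual fix),
assuming golang.org/x/sys/unix and a cgroup v2 host:

    package main

    import "golang.org/x/sys/unix"

    // mountCgroup mounts a fresh cgroup2 filesystem at the container's
    // /sys/fs/cgroup. Unlike a bind mount of the host hierarchy, a new
    // cgroup2 mount is scoped to the caller's cgroup namespace, and the
    // ro flag can be honored.
    func mountCgroup(target string, readOnly bool) error {
        var flags uintptr = unix.MS_NOSUID | unix.MS_NODEV | unix.MS_NOEXEC
        if readOnly {
            flags |= unix.MS_RDONLY
        }
        return unix.Mount("cgroup2", target, "cgroup2", flags, "")
    }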
Closes: https://github.com/containers/podman/issues/20073
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
If you are running a quadlet with anonymous volumes, then the volume
will leak every time you restart the service. This change will
cause the volume to be removed.
Fixes: https://github.com/containers/podman/issues/20070
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Commit 3 of 3: make tests pass.
This is the tricky one requiring manual effort. For the most part,
all I did was replace ALPINE/"alpine" with CITEST_IMAGE so we
don't get "Pulling..." messages. Also added warning-message checks
to two truncation tests.
Signed-off-by: Ed Santiago <santiago@redhat.com>
Commit 2 of 3:
- rewrite all commands but one, from "generate kube" to "kube generate".
- remove "podman generate kube" from all It()s.
Signed-off-by: Ed Santiago <santiago@redhat.com>
Commit 3 of 3, and this one's a doozy. Sorry.
The main problem was that "kube play" re-pulled images.
To solve that, I changed PullPolicy to "missing" and,
where possible, replaced alpine/busybox with CITEST_IMAGE
because that one seems to be cached better? I couldn't
figure out why, but even without the PullPolicy change
everything worked better with CITEST_IMAGE. And it's
a better image to use anyway.
Other lesser changes (like adding "-q") as needed.
Also:
- in four tests that use "replica", we can't use ExitCleanly()
because of a run-time warning. Add a check for that warning.
- remove a workaround for a long-closed issue (c/storage 1232)
Signed-off-by: Ed Santiago <santiago@redhat.com>
Commit 2 of 3:
- rewrite all commands but one, from "play kube" to "kube play".
Considered renaming the file but no, maybe later.
- remove "podman play kube" from all It()s. "Podman kube play" is
already in the Description; unnecessary redundancy is unnecessary.
Signed-off-by: Ed Santiago <santiago@redhat.com>
Podman could benefit from stronger typing in some of our methods and
functions where, for example, uint64s are used because the unit of
measurement is unknown. Also, the need to convert between storage units
is critical in podman, and this package supports easy conversion as
needed.
To start, we implement only the storage units (bytes, KiB, MiB, and
GiB).
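A minimal sketch of what such a typed unit could look like (package and
names are made up for illustration):

    package storagespace

    // Quantity is a byte count; the type makes the unit explicit in
    // signatures that previously took a bare uint64.
    type Quantity uint64

    const (
        Byte Quantity = 1
        KiB           = 1024 * Byte
        MiB           = 1024 * KiB
        GiB           = 1024 * MiB
    )

    // ToMiB converts a quantity to whole mebibytes.
    func (q Quantity) ToMiB() uint64 { return uint64(q / MiB) }

    // Example: request 2 GiB of VM memory without juggling raw numbers.
    //   mem := 2 * storagespace.GiB
    //   cfg.MemoryMiB = mem.ToMiB() // 2048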
Signed-off-by: Brent Baude <bbaude@redhat.com>
Introduce a PowerShell script that mirrors Makefile capabilities on
Windows.
Syntax: ./winmake target [options]
[NO NEW TESTS NEEDED]
Signed-off-by: Ashley Cui <acui@redhat.com>
The --syslog flag was not being passed to the cleanup process (i.e.,
conmon's exit args), complicating debugging quite a bit.
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
The test started to fail in gating and on workstations. It turned out
that pushing the test image to the registry recompresses it, which in
turn may change the digest. The digest has now started to change;
computing it depends on the toolchain, so it seems the test passed
before by pure luck.
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
The network list compat API requires us to include all containers with
their ip addresses for the selected networks. Because we have no network
-> container mapping in the db we have to go through all containers
every time. However, the old code did it in the most inefficient way
possible: it queried the containers from the db for each individual
network. That of course is extremely expensive. The other expensive
call is calling Inspect() on the container each time; Inspect does far
more than we need.
To fix this, we first query containers only once for the API call, then
replace the inspect call with directly accessing the network status.
This will speed things up a lot!
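A rough sketch of the pattern (the types are hypothetical stand-ins,
not the exact libpod code):

    package compat

    type containerInfo struct {
        ID            string
        NetworkStatus map[string]string // network name -> IP address
    }

    // buildEndpoints lists containers once and reads their cached
    // network status, so it is O(containers) instead of
    // O(networks x containers) DB queries plus Inspect() calls.
    func buildEndpoints(containers []containerInfo, networks []string) map[string]map[string]string {
        result := make(map[string]map[string]string, len(networks))
        for _, net := range networks {
            result[net] = map[string]string{}
        }
        for _, ctr := range containers { // single pass over all containers
            for net, ip := range ctr.NetworkStatus {
                if endpoints, ok := result[net]; ok {
                    endpoints[ctr.ID] = ip
                }
            }
        }
        return result
    }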
The reported scenario includes 100 containers and 25 networks,
previously the API call took 1.5s, now it takes 24ms; that is more
than a 62x improvement. (tested with curl)
[NO NEW TESTS NEEDED] We have no timing tests.
Fixes #20035
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
CLI flags couldn't override the active destination when env variables
were set. As a remedy, the precedence has been changed so that CLI
flags win over the environment.
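A minimal sketch of the intended precedence (CLI flag > environment >
configured default); the flag and variable names are assumptions, not
the exact Podman code:

    package main

    import (
        "os"

        "github.com/spf13/cobra"
    )

    func resolveDestination(cmd *cobra.Command, configuredDefault string) string {
        // An explicitly passed --connection flag must win, even when the
        // CONTAINER_CONNECTION environment variable is also set.
        if cmd.Flags().Changed("connection") {
            dest, _ := cmd.Flags().GetString("connection")
            return dest
        }
        if env, ok := os.LookupEnv("CONTAINER_CONNECTION"); ok {
            return env
        }
        return configuredDefault
    }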
Signed-off-by: Chetan Giradkar <cgiradka@redhat.com>
First, that function claims to deep copy but then actually returns the
original state, so it does not work correctly. Given that there are no
users, just remove it instead of fixing it.
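For illustration, the bug pattern with a hypothetical type (not the
removed function itself):

    package main

    type state struct {
        Ports map[string]int
    }

    // Broken: callers mutating the "copy" also mutate the original.
    func (s *state) brokenDeepCopy() *state {
        return s
    }

    // Correct: every mutable field is duplicated.
    func (s *state) deepCopy() *state {
        c := &state{Ports: make(map[string]int, len(s.Ports))}
        for k, v := range s.Ports {
            c.Ports[k] = v
        }
        return c
    }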
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Configure packit to automatically notify relevant Cockpit team members
when one of the "cockpit-revdeps" tests fails.
[NO NEW TESTS NEEDED] - This is test configuration.
Signed-off-by: Martin Pitt <mpitt@redhat.com>
Commit 1 of 2.
More easy ones: test files that either work with ExitCleanly()
or require very, very simple tweaks.
Signed-off-by: Ed Santiago <santiago@redhat.com>
Part of RUN-1906.
Followup to #19878 (check stderr in system tests): allow_warnings()
and require_warning() functions to make sure no unexpected messages
fall through the cracks.
Signed-off-by: Ed Santiago <santiago@redhat.com>
The closed issue & PR lock is working fine, but it has a built-in
50-item limit. The limit is not configurable. Since there are
tens-of-thousands of issues/prs to go through, 50-per-day could take
almost a year. Speed things up 24x by running the job every hour
instead of daily.
Signed-off-by: Chris Evich <cevich@redhat.com>