This one has been a thorn in my side: it's a podman-log issue,
but not remote, so I _almost_ retitled #16132 (removing "remote").
Nope, it's a bug in the tests themselves. One solution would be to
podman-wait, but I see no reason for logs to be involved, so I
went with podman start -a instead. This also removes the k8s-log
stuff, which is no longer necessary. Cleanup all around.
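A minimal sketch of the pattern, using the system-test helpers
(test content illustrative, not the actual diff):

```bash
# Attach at start and block until the container exits, instead of
# polling `podman logs` for expected output:
run_podman create $IMAGE echo done
cid="$output"
run_podman start --attach $cid
is "$output" "done" "output from start --attach"
run_podman rm $cid
```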
Signed-off-by: Ed Santiago <santiago@redhat.com>
Check for the directory /run/systemd/system, this is described in
sd_booted(3). Reading /proc/1/comm will fail when /proc is mounted
with the `hidepid=2` option.
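As described in sd_booted(3), the whole check reduces to a directory
test; a shell equivalent:

```bash
# A system was booted with systemd as init iff this directory exists.
# Unlike reading /proc/1/comm, this also works when /proc is mounted
# with hidepid=2.
if [ -d /run/systemd/system ]; then
    echo "booted with systemd"
fi
```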
[NO NEW TESTS NEEDED]
Fixes #16022
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Lack of proper testing possibility for github actions and lack of
script-testing by me, allowed several flaws through into 'main'. Fix
the problems and manually test the scripts to make sure they're working.
Note: Also revert the stupid SHA-based action-pinning back to normal,
human-readable version numbers. The value of using SHAs in the name of
improved "security" is real, but the value of human-readability and
ease of maintenance is greater.
Signed-off-by: Chris Evich <cevich@redhat.com>
There is no equivalent on FreeBSD, and this causes lint
failures when packaging.
[NO NEW TESTS NEEDED]
Signed-off-by: Doug Rabson <dfr@rabson.org>
All the other Windows tasks depend on access to a podman-remote build
from the Alt. Arch. `Windows Cross` task. Re-arrange the
test-skipping call so that this task alone is never skipped.
Signed-off-by: Chris Evich <cevich@redhat.com>
With a seemingly ever growing list of cirrus-cron jobs running on
release branches, there are bound to be some hiccups. Sometimes a lot
of them. Normally any failures require a human to eyeball the logs
and/or manually re-run the job to see if it was simply a flake. This
doesn't take long, but can be distracting and compounds over time.
Attempt to alleviate some maintainer burden by using a new github action
workflow to perform **one** automatic re-run of any failed builds. The
re-run is scheduled an hour before a second failure check and the
generation of a notification e-mail for review.
Note: If there are no failures, whether thanks to the automatic re-run
or plain luck, no e-mail is generated. If this proves useful in this
repo, I intend to re-use the workflow for other repos' cirrus-cron jobs.
Signed-off-by: Chris Evich <cevich@redhat.com>
Inline scripts make github-action workflow YAML harder to read/maintain.
Relocate the e-mail formation script to a dedicated file. This also
permits better input-validation and re-use of a common `err()` function.
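A minimal sketch of the shared helper and the validation it enables
(names illustrative, not the actual script):

```bash
# Common error helper, sourced by the workflow scripts.
err() {
    echo "ERROR: $*" >&2
    exit 1
}

# Input validation becomes a one-liner:
[[ -n "$GITHUB_WORKSPACE" ]] || err "Expected \$GITHUB_WORKSPACE to be set"
```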
Signed-off-by: Chris Evich <cevich@redhat.com>
This workflow was originally crafted to be (somehow) reused with
different scripts. That never happened and the extra indirection is
confusing and hard to maintain. Remove it.
Signed-off-by: Chris Evich <cevich@redhat.com>
As far as I can tell, there is no reason to use apk in these tests:
they just build an image and check for it, never using the installed
binary. Network calls are inherently unstable and should be avoided
whenever possible; dropping them means fewer flakes.
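A sketch of a network-free equivalent (illustrative; the actual tests
differ):

```bash
# Build and verify a test image without ever touching the network:
cat > $tmpdir/Containerfile <<EOF
FROM $IMAGE
RUN touch /testfile
EOF
podman build -t localhost/no-network-test $tmpdir
podman image exists localhost/no-network-test
```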
Fixes #16391
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Remove the container/pod ID file along with the container/pod. ID
files are primarily used in the context of systemd and are neither
useful nor needed once a container/pod has ceased to exist.
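The typical systemd-style flow this cleans up after (path illustrative):

```bash
podman run -d --cidfile /run/myctr.ctr-id $IMAGE sleep inf
podman stop --cidfile /run/myctr.ctr-id
podman rm --cidfile /run/myctr.ctr-id   # now also removes the ID file
```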
Fixes: #16387
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
It's important/useful to have all VM images built around the same time,
as this prevents tooling/dependency divergence and therefore makes
debugging simpler.
Signed-off-by: Chris Evich <cevich@redhat.com>
Add --insecure and --verbose flags for docker compatibility, and
--tls-verify for syntax compatibility. These allow users to inspect
manifests at remote container registries without requiring TLS.
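Example usage (registry name hypothetical):

```bash
# podman syntax:
podman manifest inspect --tls-verify=false registry.example.com:5000/foo:latest
# docker-compatible spelling of the same thing:
podman manifest inspect --insecure registry.example.com:5000/foo:latest
```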
Helps fix: https://github.com/containers/podman/issues/14917
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
There is already the same check when using cgroupfs, but not when
using the systemd cgroup backend. The check is needed to avoid a
confusing error from the OCI runtime.
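For background, a sketch assuming the check concerns cgroup-controller
delegation (a common cause of such OCI runtime errors for rootless
users on cgroup v2; the commit does not spell out the specifics):

```bash
# Controllers systemd has delegated to the current rootless user;
# a resource limit needing a controller absent here cannot work.
cat /sys/fs/cgroup/user.slice/user-$(id -u).slice/user@$(id -u).service/cgroup.controllers
```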
Closes: https://github.com/containers/podman/issues/16376
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
We have CI tests running in netavark mode when CNI is desired.
Add a new .cirrus.yml envariable, CI_DESIRED_NETWORK, which
we then force-check in e2e and system tests. Simple copy/paste
of #14912 (the RUNTIME check) with manual s/RUNTIME/NETWORK/
and other minor changes.
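A sketch of the system-test flavor of the check (helper names as in
the test suite; message wording illustrative):

```bash
if [[ -n "$CI_DESIRED_NETWORK" ]]; then
    run_podman info --format '{{.Host.NetworkBackend}}'
    is "$output" "$CI_DESIRED_NETWORK" "podman using expected network backend"
fi
```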
Signed-off-by: Ed Santiago <santiago@redhat.com>
Add new troubleshooting tip:
Podman run fails with "Error: unrecognized namespace mode keep-id:uid=1000,gid=1000 passed"
Update the troubleshooting tips:
"Passed-in devices or files can't be accessed in rootless container (UID/GID mapping problem)"
and
"Container creates a file that is not owned by the user's regular UID"
to use
"--userns keep-id:uid=$uid,gid=$gid"
instead of the command-line options --uidmap and --gidmap
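For example (UID/GID values illustrative):

```bash
# Rootless: map the host user to UID/GID 1000 inside the container:
podman run --rm --userns keep-id:uid=1000,gid=1000 $IMAGE id
```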
Co-authored-by: Tom Sweeney <tsweeney@redhat.com>
Signed-off-by: Erik Sjölund <erik.sjolund@gmail.com>
This was a horrible one. I basically went with the podman-run
version, with a few minor changes; see the PR for discussion of
the diff review.
podman-build is not included here; it is too different.
Signed-off-by: Ed Santiago <santiago@redhat.com>
Building Ubuntu VM images is temporarily broken due to dependency
problems on (missing) netavark for the (required) podman package.
Signed-off-by: Chris Evich <cevich@redhat.com>
When I first enabled buildah-bud tests under podman-remote (#9887),
I got one aspect all wrong: I added a podman-remote() helper function
to match the podman() one. Turns out it's never actually called,
even when $PODMAN_BINARY=podman-remote, because functions/aliases
don't work that way.
The way it works is: the few cases in which bud.bats runs
podman are not magically remapped to podman-remote; they use
the podman() function. That's where we need to check whether
we're using podman-remote, and that's where we need to
remove the registry-and-rootdir options.
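A simplified sketch of the corrected helper (variable and option
names illustrative):

```bash
podman() {
    local opts="$ROOTDIR_OPTS $REGISTRY_OPTS"
    if [[ $PODMAN_BINARY =~ remote ]]; then
        opts=""    # podman-remote rejects registry/rootdir options
    fi
    $PODMAN_BINARY $opts "$@"
}
```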
With this fix, we can reenable two previously-skipped bud tests.
Signed-off-by: Ed Santiago <santiago@redhat.com>