`podman-remote` does not support `--events-backend`, which overrides the
events log driver. When a test for `podman-remote` needs
`--events-backend`, the test should be skipped.
We don't need to fix the other cases that use
`_additional_events_backend()` because `_log_test_follow()` already has
the same skipping logic and `_log_test_multi()` always skips the test
when run against `podman-remote`.
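A rough sketch of the skip pattern (bats style; `skip_if_remote` and
`_additional_events_backend` are existing helpers, everything else here
is illustrative):

    events_backend=$(_additional_events_backend $driver)
    if [[ -n "$events_backend" ]]; then
        # --events-backend cannot be passed through podman-remote
        skip_if_remote "--events-backend is not supported by podman-remote"
    fi
    run_podman $events_backend logs mycontainer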
Signed-off-by: Hironori Shiina <shiina.hironori@fujitsu.com>
Up - do not fail if the volume already exists; use the existing one
Down - allow the user to remove the volume by passing --force (see the
usage sketch below)
Add tests
Update the documentation
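A hypothetical usage sketch, assuming this refers to `podman kube play`
and `podman kube down` (the YAML file name is a placeholder):

    podman kube play pod.yaml          # re-uses the volume if it already exists
    podman kube down pod.yaml          # leaves the volume in place
    podman kube down --force pod.yaml  # also removes the volume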
Signed-off-by: Ygal Blum <ygal.blum@gmail.com>
It looks like #16132 was my fault: a missing 'wait' for a container
to exit. Let's see if this fixes the flake.
And, while poking through flake logs, I found another missing wait.
And... in wait_for_output(), address a potential race.
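In essence, the fix is to wait for the container before asserting on
its output; a minimal bats-style sketch (the container command is
illustrative):

    run_podman run -d $IMAGE sh -c 'echo done'
    cid="$output"
    run_podman wait $cid    # make sure the container has actually exited
    run_podman logs $cid    # ...before asserting anything about its logs
    is "$output" "done" "container output"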
Signed-off-by: Ed Santiago <santiago@redhat.com>
This one has been a thorn in my side: it's a podman-log issue,
but not remote, so I _almost_ retitled #16132 (removing "remote").
Nope, it's a bug in the tests themselves. One solution would be to
podman-wait, but I see no reason for logs to be involved, so I
went with podman start -a instead. This removes the k8s-log stuff
which is no longer necessary. Cleanup all around.
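A sketch of the resulting pattern (assumed shape, not the exact test):

    run_podman create $IMAGE echo hello
    cid="$output"
    run_podman start -a $cid   # attach: blocks until exit, returns the output
    is "$output" "hello" "output from start -a"
    run_podman rm $cid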
Signed-off-by: Ed Santiago <santiago@redhat.com>
Check for the directory /run/systemd/system; this is described in
sd_booted(3). Reading /proc/1/comm will fail when /proc is mounted
with the `hidepid=2` option.
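The equivalent check in shell, per sd_booted(3) (a sketch; podman's
actual check lives in Go code):

    # sd_booted(3): a system is booted with systemd if this directory
    # exists; it is created early during boot by systemd.
    if [ -d /run/systemd/system ]; then
        echo "running under systemd"
    fi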
[NO NEW TESTS NEEDED]
Fixes #16022
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
There is no equivalent on FreeBSD, and this causes lint failures when
packaging.
[NO NEW TESTS NEEDED]
Signed-off-by: Doug Rabson <dfr@rabson.org>
All the other Windows tasks depend on access to a podman-remote build
from the Alt. Arch. `Windows Cross` task. Re-arrange the test-skipping
call so that this task, and only this task, is never skipped.
Signed-off-by: Chris Evich <cevich@redhat.com>
With a seemingly ever-growing list of cirrus-cron jobs running on
release branches, there are bound to be some hiccups. Sometimes a lot
of them. Normally any failure requires a human to eyeball the logs
and/or manually re-run the job to see if it was simply a flake. This
doesn't take long, but it is distracting and the burden compounds over
time.
Attempt to alleviate some of the maintainer burden by using a new
github-action workflow to perform **one** automatic re-run of any
failed builds. This task is scheduled an hour prior to the second
failure check and the generation of the notification e-mail for review.
Note: If there are no failures, whether thanks to the automatic re-run
or to plain luck, no e-mail is generated. If this proves useful in this
repo, I intend to re-use the workflow for other repos' cirrus-cron
jobs.
Signed-off-by: Chris Evich <cevich@redhat.com>
Inline scripts make github-action workflow YAML harder to read and
maintain. Relocate the e-mail formatting script to a dedicated file.
This also permits better input validation and re-use of a common
`err()` function.
Signed-off-by: Chris Evich <cevich@redhat.com>
This workflow was originally crafted to be (somehow) reused with
different scripts. That never happened and the extra indirection is
confusing and hard to maintain. Remove it.
Signed-off-by: Chris Evich <cevich@redhat.com>
As far as I can tell there is no reason to use apk in these tests.
They just build an image and check for it, and never use the installed
binary. Network calls are always unstable and should therefore be
avoided where possible; this removes a source of flakes.
Fixes#16391
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Remove the container/pod ID file along with the container/pod. ID
files are primarily used in the context of systemd and are neither
useful nor needed once a container/pod has ceased to exist.
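For illustration (a sketch; the path and image are placeholders):

    podman run -d --cidfile /run/myapp.cid alpine top
    podman rm -f --cidfile /run/myapp.cid
    # with this change, /run/myapp.cid is removed along with the container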
Fixes: #16387
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
It's important/useful to have all VM images built around the same
time, as this prevents tooling/dependency divergence and therefore
makes debugging simpler.
Signed-off-by: Chris Evich <cevich@redhat.com>
Add --insecure and --verbose flags for docker compatibility.
Add --tls-verify for syntax compatibility and to allow users to inspect
manifests at remote container registries without requiring TLS.
Helps fix: https://github.com/containers/podman/issues/14917
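Usage sketch (registry and image names are placeholders):

    # docker-compatible spelling:
    podman manifest inspect --insecure registry.example.com:5000/img:latest
    # podman-native spelling:
    podman manifest inspect --tls-verify=false registry.example.com:5000/img:latest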
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
There is already the same check when using cgroupfs, but not when
using the systemd cgroup backend. The check is needed to avoid a
confusing error from the OCI runtime.
Closes: https://github.com/containers/podman/issues/16376
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
We have CI tests running in netavark mode when CNI is desired.
Add a new .cirrus.yml envariable, CI_DESIRED_NETWORK, which
we then force-check in e2e and system tests. Simple copy/paste
of #14912 (the RUNTIME check) with manual s/RUNTIME/NETWORK/
and other minor changes.
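A sketch of the check (bats style; wording and placement differ between
the e2e and system tests):

    if [[ -n "$CI_DESIRED_NETWORK" ]]; then
        run_podman info --format '{{.Host.NetworkBackend}}'
        is "$output" "$CI_DESIRED_NETWORK" \
           "network backend matches CI_DESIRED_NETWORK"
    fi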
Signed-off-by: Ed Santiago <santiago@redhat.com>
Add new troubleshooting tip:
Podman run fails with "Error: unrecognized namespace mode keep-id:uid=1000,gid=1000 passed"
Update the troubleshooting tips:
"Passed-in devices or files can't be accessed in rootless container (UID/GID mapping problem)"
and
"Container creates a file that is not owned by the user's regular UID"
to use
"--userns keep-id:uid=$uid,gid=$gid"
instead of the command-line options --uidmap and --gidmap
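Example of the recommended form (uid/gid values and the mount are
illustrative):

    uid=1000
    gid=1000
    podman run --rm \
        --userns keep-id:uid=$uid,gid=$gid \
        -v "$HOME/shared:/data:Z" \
        alpine ls -ln /data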
Co-authored-by: Tom Sweeney <tsweeney@redhat.com>
Signed-off-by: Erik Sjölund <erik.sjolund@gmail.com>
Building Ubuntu VM images is temporarily broken due to a dependency
problem: the (required) podman package depends on a (missing) netavark
package.
Signed-off-by: Chris Evich <cevich@redhat.com>