Unfortunately, on a number of occasions Podman has been released
officially with a `-dev` suffix in the version number. Help catch this
mistake at release time by adding a simple conditional test. Note that
it must be positively enabled by a magic environment variable before
the system tests execute.
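A minimal sketch of the idea in Go (illustrative only; the real check
lives in the system tests, and the environment variable name here is
hypothetical):
```
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Only run the check when positively enabled, e.g. at release time.
	// PODMAN_RELEASE_CHECK is a hypothetical gating variable name.
	if os.Getenv("PODMAN_RELEASE_CHECK") == "" {
		return
	}
	out, err := exec.Command("podman", "--version").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "cannot query podman version:", err)
		os.Exit(1)
	}
	version := strings.TrimSpace(string(out))
	if strings.Contains(version, "-dev") {
		fmt.Fprintf(os.Stderr, "refusing to release: version %q carries a -dev suffix\n", version)
		os.Exit(1)
	}
}
```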
Ref. original PR: https://github.com/containers/podman/pull/26540
Signed-off-by: Chris Evich <cevich@redhat.com>
This commit re-vendors the crypto module from a temporary source,
moving to an earlier, patched version to address CVE-2025-22869. Prior
to this commit, building Podman failed due to the platform's dependence
on golang 1.18, the version currently used to build for RHEL.
In the future, the RHEL platform is expected to migrate to a newer
golang toolchain. That will allow re-vendoring the crypto module from
the authoritative upstream source again, removing the need for the
temporary fork.
Resolves: RHEL-81300 RHEL-81322
Signed-off-by: Chris Evich <cevich@redhat.com>
The Fedora-37 CI VMs used prior to f8bca0f closely matched RHEL-8.8,
which is the intended destination of this v4.4.1-rhel release branch.
Importantly, this change, along with one or more future commits
(53a8ef8..b9110a1), led to downstream build failures on RHEL 8.8 that
reproduce using the original Fedora-37 CI VMs. In other words, leaving
the F37 CI VMs in place would have allowed these failures to be caught
during upstream rather than downstream testing.
Signed-off-by: Chris Evich <cevich@redhat.com>
Update cirrus.yml to the latest image based on 5.4-rhel, then disable
validate as there is no point in it when we do backports, and only
perform a single build, on f41.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
As discussed at the cabal on October 8, 2024, we have no need for these
tests on RHEL branches. The work to maintain them is more than they are
worth. We also do not test RHEL but rather some outdated, frozen Fedora
image built at the time we created the branch.
Therefore we gain little value from them, especially as internal
Red Hat QE tests the proper RHEL builds again anyway.
So simply delete all the stuff we no longer need:
- alt builds, no point in Windows/macOS testing and other arches
- all the functional tests
- the build success task (not needed as there is nothing after it
  anymore)
- the swagger task, we do not use the swagger from the rhel branches
Fixes: https://issues.redhat.com/browse/RUN-2315
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Checking for the mountdir is no longer relevant: a recent c/storage
change[1] no longer deletes the mount point directory, so the check
causes a false positive. findmnt exits 1 when the given path is not a
mountpoint, so let's use that to check instead.
[1] 3f2e81abb3
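The idea, sketched in Go rather than in the actual bats system test
(helper name and paths are illustrative):
```
package main

import (
	"fmt"
	"os/exec"
)

// isMountpoint asks findmnt whether path is currently mounted; findmnt
// exits non-zero when it is not, so a nil error means "is a mountpoint".
// Illustrative sketch only, not the real system-test code.
func isMountpoint(path string) bool {
	return exec.Command("findmnt", path).Run() == nil
}

func main() {
	fmt.Println(isMountpoint("/"))            // true on any normal system
	fmt.Println(isMountpoint("/no/such/dir")) // false
}
```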
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
I'm not exactly sure what is happening here, but this call
```
result := podmanTest.Podman([]string{"images", "-q", "-f", "reference=quay.io/libpod/*"})
```
in test/e2e/images-test.go, in this test
```
It("podman images filter reference", func()
```
now returns 10 objects instead of 9. This matches a change that
@edsantiago also made in https://github.com/containers/podman/pull/21356.
After the other adjustments I made to fix the tests, this seemed to be
the last remaining issue.
Signed-off-by: tomsweeneyredhat <tsweeney@redhat.com>
The scenario for inducing this is as follows:
1. Start a container with a long stop timeout and a PID1 that
ignores SIGTERM
2. Use `podman stop` to stop that container
3. Simultaneously, in another terminal, kill -9 `pidof podman`
(the container is now in ContainerStateStopping)
4. Now kill that container's Conmon with SIGKILL.
5. No commands are able to move the container from Stopping to
Stopped now.
The cause is a bug in our exit-file handling logic: Conmon being dead
without an exit file causes no change to the container's state. Add
handling for this case that tries to clean up, including stopping the
container if it still appears to be running.
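A self-contained sketch of that handling with stand-in types
(hypothetical names, not the actual libpod code):
```
package main

import (
	"errors"
	"fmt"
	"os"
	"syscall"
)

// Stand-in types for illustration only.
type State int

const (
	Stopping State = iota
	Stopped
)

type Container struct {
	State        State
	ConmonPID    int
	ExitFilePath string
}

// reconcileStoppingContainer applies the idea of the fix: if conmon is dead
// and never wrote an exit file, clean up and move the container to Stopped
// instead of leaving it wedged in Stopping.
func reconcileStoppingContainer(c *Container) error {
	if c.State != Stopping {
		return nil
	}
	// Signal 0 probes whether the conmon PID still exists.
	if err := syscall.Kill(c.ConmonPID, 0); err == nil {
		return nil // conmon still alive, nothing to do
	}
	if _, err := os.Stat(c.ExitFilePath); err == nil {
		return nil // exit file present, the normal exit-file path handles it
	} else if !errors.Is(err, os.ErrNotExist) {
		return err
	}
	// Conmon is gone and left no exit file: best-effort cleanup, then mark
	// the container Stopped so later commands are no longer blocked.
	// (The real code also asks the OCI runtime to stop the container if it
	// still appears to be running.)
	c.State = Stopped
	return nil
}

func main() {
	c := &Container{State: Stopping, ConmonPID: 999999, ExitFilePath: "/nonexistent/exit"}
	err := reconcileStoppingContainer(c)
	fmt.Println(err, c.State == Stopped) // <nil> true
}
```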
Fixes #19629
Addresses: https://issues.redhat.com/browse/ACCELFIX-250
Signed-off-by: Matt Heon <mheon@redhat.com>
Signed-off-by: tomsweeneyredhat <tsweeney@redhat.com>
PR #22540 removed the job that generated the (useless) benchmark data.
However, it neglected to stop adding that data to the output artifacts.
Fix this.
Signed-off-by: Chris Evich <cevich@redhat.com>
Older versions of podman machine do not support being run against the
latest version of the machine VM images. As there is no built-in
provision to pin older machine VM image versions, these tests will
simply fail forever. Disable them.
Also clean up a long-since-disabled task.
Signed-off-by: Chris Evich <cevich@redhat.com>
It seems certain test infrastructure prevents cloning a repo which
contains a symlink pointing outside of the repo itself. Generate the
symlink for such a test from the test suite itself just before running
the test, and remove it once the test completes.
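A minimal sketch of the approach, with illustrative paths and helper
name (not the actual test-suite change):
```
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// withOutsideSymlink creates a symlink inside dir that points outside of it,
// runs fn, and removes the link again afterwards. Paths and names here are
// illustrative; the real test suite does this in its setup/teardown.
func withOutsideSymlink(dir string, fn func(link string) error) error {
	link := filepath.Join(dir, "outside-link")
	if err := os.Symlink("/etc/hosts", link); err != nil { // target outside the repo
		return err
	}
	defer os.Remove(link) // clean up once the test is done
	return fn(link)
}

func main() {
	dir, _ := os.MkdirTemp("", "symlink-demo")
	defer os.RemoveAll(dir)
	err := withOutsideSymlink(dir, func(link string) error {
		target, err := os.Readlink(link)
		fmt.Println("created", link, "->", target)
		return err
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```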
Signed-off-by: Aditya R <arajan@redhat.com>
(cherry picked from commit 607aff55fa1a3b80328e8010049380728fde1d62)
Signed-off-by: Lokesh Mandvekar <lsm5@redhat.com>
When we walk the /dev tree we need to look up all device paths. In
order to get the major and minor numbers we have to actually stat each
device, which can of course fail. There is at least a race between the
readdir and the stat call, so we must ignore ENOENT errors to avoid
that race condition, as it is not a user problem. Second, we should not
return other errors either and just log them instead: returning an
error stops the walk and returns early, which means inspect fails with
an error, which would be bad.
Also, there seem to be cases where ENOENT is returned all the time,
e.g. when a device is forcefully removed. In the reported bug this is
triggered with iSCSI devices.
Because the caller already looks up the device in the created map and
reports a warning there if the device is missing on the host, it is
not a problem to ignore an error during lookup here.
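A self-contained sketch of the described walk behavior (illustrative,
not the actual libpod code):
```
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
	"path/filepath"

	"golang.org/x/sys/unix"
)

// collectDevices walks root (normally /dev) and records "major:minor" for
// every entry it can stat. Stat errors never abort the walk: ENOENT is
// ignored silently (readdir/stat race, or a forcefully removed device such
// as iSCSI), anything else is only logged.
func collectDevices(root string) map[string]string {
	devices := make(map[string]string)
	_ = filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			return nil // never fail the walk because of a single entry
		}
		var st unix.Stat_t
		if err := unix.Stat(path, &st); err != nil {
			if !errors.Is(err, unix.ENOENT) {
				fmt.Fprintf(os.Stderr, "warning: stat %s: %v\n", path, err)
			}
			return nil // skip the entry, keep walking
		}
		rdev := uint64(st.Rdev)
		devices[path] = fmt.Sprintf("%d:%d", unix.Major(rdev), unix.Minor(rdev))
		return nil
	})
	return devices
}

func main() {
	fmt.Println("found", len(collectDevices("/dev")), "entries under /dev")
}
```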
[NO NEW TESTS NEEDED] Requires special device setup to trigger
consistently and we cannot do that in CI.
Original fix: https://issues.redhat.com/browse/RHEL-11158
This fixes: https://issues.redhat.com/browse/RHEL-20488
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Signed-off-by: tomsweeneyredhat <tsweeney@redhat.com>
As the title says, bumping to Buildah v1.29.3 to address:
CVE-2024-1753
https://issues.redhat.com/browse/RHEL-26762 and probably another card
TBD
[NO NEW TESTS NEEDED]
Signed-off-by: tomsweeneyredhat <tsweeney@redhat.com>
Cherry pick from #20329
Addresses: https://issues.redhat.com/browse/RHEL-14744 and
https://issues.redhat.com/browse/RHEL-14743
When containers are created with named volumes it can deadlock because
the create logic tries to lock all volumes in a loop. This is fine if
only a single container is ever created at any given time. However,
because multiple containers can be created at the same time, they can
deadlock on the volumes. This is because the order of the loop is not
stable; in fact, it is based on the order in which the volumes were
specified on the cli.
So if you create two containers at the same time, one with
`-v vol1:/dir1 -v vol2:/dir2` and the other with
`-v vol2:/dir2 -v vol1:/dir1`, then there is a chance of a deadlock.
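For illustration, a self-contained sketch of the lock-ordering problem
this creates (not podman code; when run, the Go runtime reports the
deadlock):
```
package main

import (
	"fmt"
	"sync"
	"time"
)

// Two "volume" locks acquired in opposite orders by two concurrent creates:
// each goroutine grabs its first volume and then waits forever for the other.
func main() {
	var vol1, vol2 sync.Mutex
	var wg sync.WaitGroup
	wg.Add(2)

	go func() { // container A: -v vol1:... -v vol2:...
		defer wg.Done()
		vol1.Lock()
		time.Sleep(10 * time.Millisecond) // widen the race window
		vol2.Lock()
		vol2.Unlock()
		vol1.Unlock()
	}()

	go func() { // container B: -v vol2:... -v vol1:...
		defer wg.Done()
		vol2.Lock()
		time.Sleep(10 * time.Millisecond)
		vol1.Lock()
		vol1.Unlock()
		vol2.Unlock()
	}()

	// Never returns: the runtime prints "all goroutines are asleep - deadlock!"
	wg.Wait()
	fmt.Println("unreachable")
}
```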
Now, one solution could be to order the volumes to prevent the issue,
but the reason for holding the lock is dubious in the first place. The
goal was to prevent the volume from being removed in the meantime.
However, that could still have happened before we acquired the lock,
so it didn't protect against that.
Both boltdb and sqlite already prevent us from adding a container with
volumes that do not exist, due to their internal consistency checks.
Sqlite even uses FOREIGN KEY relationships, so the schema will prevent
us from doing anything wrong.
The create code currently first checks if the volume exists and, if
not, creates it. I have checked that the db will guarantee that adding
a container with a missing volume will not work:
Boltdb: `no volume with name test2 found in database when adding container xxx: no such volume`
Sqlite: `adding container volume test2 to database: FOREIGN KEY constraint failed`
Keep in mind that these errors are normally not seen; they only appear
if the volume is removed between the volume-exists check and adding the
container to the db, which is an acceptable race and a pre-existing
condition anyway.
[NO NEW TESTS NEEDED] Race condition, hard to test in CI.
Fixes #20313
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Signed-off-by: tomsweeneyredhat <tsweeney@redhat.com>
Drop support for remote use-cases when `.containerignore` or
`.dockerignore` is a symlink pointing to an arbitrary location on the
host.
Signed-off-by: Aditya R <arajan@redhat.com>
As the title says. Bump golang.org/x/net to v0.13.0.
Addresses: https://issues.redhat.com/browse/OCPBUGS-17313
CVE-2023-3978
[NO NEW TESTS NEEDED]
Signed-off-by: tomsweeneyredhat <tsweeney@redhat.com>