Ensure that we actually look up the service container ID and remove
it during kube teardown for the --wait use case. This guarantees
that no service container is left sitting in the "removing" state
before kube play returns in the remote case.
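A rough illustration of the remote flow this affects (pod.yaml is a
placeholder name):

    podman --remote kube play --wait pod.yaml
    # ... interrupt with Ctrl-C to trigger teardown ...
    # afterwards no service container should linger in "removing" state:
    podman --remote ps -a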
[NO NEW TESTS NEEDED]
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
The `UserNS` key will replace the `RemapGid`, `RemapUid`,
`RemapUidSize`, and `RemapUsers` options, which are therefore marked
as deprecated by this commit.
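For illustration only, and assuming these are the Quadlet
`[Container]` keys, a unit would now request its ID mapping via
`UserNS` instead of the deprecated `Remap*` keys (the image name is
a placeholder):

    [Container]
    Image=registry.example.com/app:latest
    # preferred over RemapUid=/RemapGid=/RemapUsers=/RemapUidSize=
    UserNS=keep-id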
Closes #17984
Signed-off-by: Cedric Staniewski <cedric@gmx.ca>
The "podman pull by digest and list --all" e2e test pulls an image using
a tagged reference when an image with the same ID is already present in
a read-only additional image store.
This causes a new image record to be created in read-write storage.
The test then removes this entry, pulls the image again using a
digested reference, and expects the image to have no tagged names
when it examines it again.
Newer containers/storage ensures that, when the read-write image
record is created, it includes all of the data items and naming
information from the read-only copy of the image, so that this
information doesn't appear to be lost.
Change the test to use "untag" instead of "rmi", which should pass with
either the older or newer containers/storage.
The test is checking that `podman images` doesn't choke when it
encounters a digested name attached to an image, so the difference in
behavior between containers/storage versions is irrelevant.
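The adjusted flow is roughly the following (the image name and digest
are placeholders, not the ones the test uses):

    podman pull quay.io/example/image:tag    # creates a read-write record
    podman untag quay.io/example/image:tag   # previously: podman rmi
    podman pull quay.io/example/image@sha256:...
    podman images --all                      # must not choke on the digested name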
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
Principally because 'make completion' fails if we transitively
bring in a new cobra, but also, none of the other tests are
meaningful under the treadmill.
Signed-off-by: Ed Santiago <santiago@redhat.com>
Since commit f250560a8043 the play kube command uses its own network.
This is racy by design because we create the network and then
create/run the pod and containers. In the meantime another prune or
reset process could wipe out the network config, because we have to
share the network config directory by design in the tests.
The problem is that we only have one host netns, which is shared
between tests. If the network config dir is not shared we cannot
perform conflict checks for interface names and IP addresses. This
results in different tests trying to use the same interface and/or
IP address, which causes runtime failures in CNI and netavark.
The only solution I see is to make sure that only the reset/prune
tests use a custom network dir. This ensures they do not wipe
configs that other tests running in parallel still require.
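Conceptually, the destructive tests now point podman at a private
config dir instead of the shared one, along these lines (the dir is
a placeholder):

    podman --network-config-dir "$TMPDIR/private-net-configs" network prune --force
    podman --network-config-dir "$TMPDIR/private-net-configs" system reset --force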
Fixes #17946
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
As of this commit, the package `github.com/ghodss/yaml` is no longer
actively maintained.
`sigs.k8s.io/yaml` is a permanent fork of `ghodss/yaml` and is
actively maintained by a Kubernetes SIG.
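Because the fork keeps the same exported API, the swap is mechanical,
roughly:

    git grep -l 'github.com/ghodss/yaml' -- '*.go' \
        | xargs sed -i 's|github.com/ghodss/yaml|sigs.k8s.io/yaml|g'
    go get sigs.k8s.io/yaml && go mod tidy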
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
When creating storage for a container using ID maps, read the ID
maps assigned to the container from the returned container structure
rather than from the options structure we passed to the storage
library, which the library previously modified in error.
[NO NEW TESTS NEEDED]
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
Kube generate on pods was not checking for underscores in the pod
name, so it generated a kube yaml with an invalid pod name whenever
underscores were present.
The hostname for the pod is set to the pod name by default, so there
is no need to set it to the container's name or the pod name again
in the generated yaml. Remove that field unless the user explicitly
set a hostname for the container.
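A reproducer sketch (the pod name is hypothetical):

    podman pod create --name my_pod
    podman kube generate my_pod
    # metadata.name in the generated yaml must be a valid Kubernetes
    # name; it cannot simply carry the underscore through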
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
...mostly just test code that wasn't doing the required waits.
My first approach in the kube-play test was to add "--wait".
Big mistake! The --wait flag, counterintuitively and counter to
documentation, actually destroys all pods+containers+everything
on exit. (Or tries -- see #17803). Since this violates POLA
and is undocumented, I include here a fix to the man page.
Despite my best intentions, I can't reasonably check every single
test for missing waits, especially in kube-play where failing
containers will get retried forever, so we can't wait. We'll
just have to fix flakes as we see them.
Fixes: #17958
Fixes: #18071
Signed-off-by: Ed Santiago <santiago@redhat.com>
In GCP, user-specified VM names are required upon creation.
Cirrus-CI generates helpful names containing the task ID.
Unfortunately, in EC2 the VM IDs are auto-generated, and special
permissions are required to allow setting a `Name` tag afterward.
Since this permission has now been granted, enable the
`experimental` flag on EC2 tasks so that Cirrus can update VM
name-tags. This is especially useful when troubleshooting orphaned
VMs.
Ref:
https://github.com/containers/podman/issues/18065#issuecomment-1497779159
Signed-off-by: Chris Evich <cevich@redhat.com>
Fixes: https://github.com/containers/podman/issues/18040
If the `build_aarch64` task happens to fail for any reason, the
`curl` command in the `clone_script` for the aarch64 system test
tasks will fail with a 404. This is because
`local_system_test_aarch64_task` depends on `build`, not
`build_aarch64`.
As discovered in another issue long ago, the Cirrus API relies on
some dependency-resolution magic to function properly. Fix this by
correcting the dependencies.
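The corrected dependency amounts to something like the following
sketch of the relevant `.cirrus.yml` stanza (illustrative only, not
copied from the actual file):

    local_system_test_aarch64_task:
        depends_on:
            - build_aarch64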
Signed-off-by: Chris Evich <cevich@redhat.com>