Instead of using the container's mountpoint as the base of the
chroot and indexing from there by the volume directory, use the
full path of what we want to copy as the base of the chroot and
copy everything in it. This resolves the bug, ends up being a bit
simpler code-wise (no string concatenation, as we already have the
full path calculated for other checks), and seems more
understandable than trying to resolve things on the destination
side of the copy-up.
Fixes: #9354
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
podman ps crashes when the user specifies the `{{ .Size }}` format
without also specifying the --size option.
This PR stops the crash and prints a logrus.Error stating that
the caller should add the --size option.
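To reproduce, and the working invocation:

    podman ps --format '{{ .Size }}'           # previously crashed; now logs an error
    podman ps --size --format '{{ .Size }}'    # works: --size enables size computation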
Fixes: https://github.com/containers/podman/issues/9408
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
If the current process could not be moved to a different systemd
cgroup, do not raise a warning but only a debug message.
[NO TESTS NEEDED]
Closes: https://github.com/containers/podman/issues/9353
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
We received an issue with an image that was built with
entrypoint=[""]
This blows up on Podman, but works on Docker.
When we set up the OCI runtime, we should drop
the entrypoint if it is == [""]
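A minimal sketch of the check (illustrative helper name, not the
actual generator code):

    // normalizeEntrypoint drops an entrypoint consisting of a single
    // empty string so the OCI runtime spec treats it as unset, matching
    // Docker's behavior.
    func normalizeEntrypoint(entrypoint []string) []string {
        if len(entrypoint) == 1 && entrypoint[0] == "" {
            return nil
        }
        return entrypoint
    }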
https://github.com/containers/podman/issues/9377
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Currently if the host shares container storage with a container
running podman, the podman inside of the container resets the
storage on the host. This can cause issues on the host, as well
as cause the podman command running the container to fail to
unmount /dev/shm.
podman run -ti --rm --privileged -v /var/lib/containers:/var/lib/containers quay.io/podman/stable podman run alpine echo hello
* unlinkat /var/lib/containers/storage/overlay-containers/a7f3c9deb0656f8de1d107e7ddff2d3c3c279c11c1635f233a0bffb16051fb2c/userdata/shm: device or resource busy
* unlinkat /var/lib/containers/storage/overlay-containers/a7f3c9deb0656f8de1d107e7ddff2d3c3c279c11c1635f233a0bffb16051fb2c/userdata/shm: device or resource busy
Since podman is volume-mounting in the graphroot, it will add a flag to
/run/.containerenv to tell the podman inside the container whether to reset storage or not.
Since the inner podman is running inside a container, there is no reason
to assume this is a fresh reboot, so if the "container" environment
variable is set, skip the reset of storage.
Also added tests to make sure /run/.containerenv is working correctly.
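A minimal sketch of the guard (illustrative, not the actual libpod
code; uses os from the standard library):

    // Skip the storage reset when running inside a container: the
    // "container" environment variable is set by container runtimes,
    // so this is not a fresh boot of the host.
    if os.Getenv("container") != "" {
        return nil
    }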
Fixes: https://github.com/containers/podman/issues/9191
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
This one is rather bizarre because it triggers only on some
systems. I've included a CI test, but I'm 99% sure we use images
in CI that have volumes over empty directories, and the earlier
patch changing the copy-up implementation passed CI without
complaint.
I can reproduce this on a stock F33 VM, but that's the only place
I have been able to see it.
Regardless, the issue: under certain as-yet-unidentified
environmental conditions, the copier.Get method will return an
ENOENT when attempting to stream a directory that is empty. Work
around this by avoiding the copy altogether in this case.
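A minimal sketch of the emptiness check that lets us skip the copy
(standard library only, uses io and os; not the actual patch):

    // dirIsEmpty reports whether path contains no entries, so copy-up
    // can be skipped entirely for empty source directories.
    func dirIsEmpty(path string) (bool, error) {
        f, err := os.Open(path)
        if err != nil {
            return false, err
        }
        defer f.Close()
        _, err = f.Readdirnames(1)
        if err == io.EOF {
            return true, nil
        }
        return false, err
    }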
Signed-off-by: Matthew Heon <mheon@redhat.com>
Make sure to not set an empty $HOME for containers and let it default to
"/".
https://github.com/containers/crun/pull/599 is required to fully
address #9378.
Partially-Fixes: #9378
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
The `images/create` endpoint should always attempt to pull a newer
image. Previously, the local image was used, which is not compatible
with Docker and caused issues in the GitLab CI.
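For example, via the compat API (the socket path and API version shown
are the typical rootful defaults):

    curl -X POST --unix-socket /run/podman/podman.sock \
         'http://d/v1.40/images/create?fromImage=alpine&tag=latest'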
Fixes: #9232
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
When creating a pod with --infra-image and using an untagged image
(none/none) for the infra-image, the lookup of the image's name
caused a panic.
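An illustrative reproducer (the image ID is a placeholder):

    podman images                                # find a <none>:<none> image
    podman pod create --infra-image <image ID>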
Fixes: #9374
Signed-off-by: baude <bbaude@redhat.com>
Make sure that Podman's default OCI runtime is passed to Buildah in
`podman build`. In theory, Podman and Buildah should use the same
defaults, but the projects move at different speeds, and it turns
out we caused a regression in v3.0.
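A minimal sketch of the fix, with an illustrative options type (not
Buildah's actual API):

    // buildSettings stands in for the options handed to Buildah.
    type buildSettings struct {
        Runtime string
    }

    // applyRuntime propagates Podman's configured OCI runtime so that
    // Buildah does not fall back to its own default.
    func applyRuntime(opts *buildSettings, ociRuntime string) {
        opts.Runtime = ociRuntime
    }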
Fixes: #9365
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
The error message when failing to create an image engine
unconditionally pointed to the Podman socket, which is quite
confusing when running locally.
Move the error message to the point where the first ping to the service
fails.
[NO TESTS NEEDED]
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
Fixes: https://github.com/containers/podman/issues/9290
Currently we still have a hard-coded --isolation=chroot for podman-remote build.
Implement missing arguments for podman build:
--jobs, --disable-compression, --excludes
Fixes: MaxPullPushRetries, RetryDuration
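Example invocation with the newly wired flags (tag and build context
are illustrative):

    podman --remote build --jobs=2 --disable-compression -t demo .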
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
When we stop a container we are printing the full ID.
This does not match Docker's behavior or our own start behavior.
We should print the user's rawInput when we successfully
stop the container.
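For example (the container name is illustrative):

    $ podman stop mycontainer
    mycontainer

Previously the full 64-character container ID was printed instead.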
Fixes: https://github.com/containers/podman/issues/9386
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Currently podman always chowns the WORKDIR to root:root.
This PR returns early if the WORKDIR already exists, leaving its
ownership untouched.
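A minimal sketch of the guard (illustrative, not the actual code;
uses os from the standard library):

    // If the workdir already exists, keep its current ownership and
    // return early instead of chowning it to root:root.
    if _, err := os.Stat(workdir); err == nil {
        return nil
    }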
Fixes: https://github.com/containers/podman/issues/9387
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
The unit generation accidentally escaped the %t in the pod id file path.
This is a regression caused by #9178. This was not caught by the tests
because the test itself was wrong. It used a full path instead of the
systemd variable %t like the actual code does.
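For illustration (the flag value is an example): %t expands to the
runtime directory (/run for the system manager), while %% is a literal
percent sign that systemd never expands:

    --pod-id-file %t/pod-demo.pod-id      correct
    --pod-id-file %%t/pod-demo.pod-id     broken, never expanded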
Fixes: #9373
Signed-off-by: Paul Holzinger <paul.holzinger@web.de>
The logic in the e2e test for multiple network aliases indicates the
test should wait for the containerized nginx to be ready. As this may
take some time, the test does an exponential backoff starting at 2050ms.
Fix the logic by removing the `Expect(...)` call from the exponential
backoff loop. Otherwise, the test errors out immediately.
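A minimal sketch of the corrected pattern (Gomega assertion; the URL
and retry count are illustrative; uses net/http and time):

    // Retry with exponential backoff and assert only after the loop;
    // an Expect() inside the loop fails the test on the first miss.
    interval := 2050 * time.Millisecond
    var err error
    for i := 0; i < 6; i++ {
        resp, e := http.Get("http://localhost:8080")
        if e == nil {
            resp.Body.Close()
            err = nil
            break
        }
        err = e
        time.Sleep(interval)
        interval *= 2
    }
    Expect(err).To(BeNil())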
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
The timestamps of some images must have changed, altering the number
of expected filtered images. The test conditions seem fragile, but for
now it's more important to get CI back up.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>