On darwin arm64, we need to set the location of the OVMF vars file. It should be put into the imageDir (also known as the dataDir). But because QEMU determines the image path late in Init(), the image path is set using something like a stream marker.
Fixes #20361
[NO NEW TESTS NEEDED]
Signed-off-by: Brent Baude <bbaude@redhat.com>
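A minimal sketch of the idea, using illustrative names (ovmfVarsArgs, the vars file name) that are not taken from the actual qemu provider code: the per-machine OVMF vars file is placed inside the data directory and handed to QEMU as a pflash drive.

    package main

    import (
        "fmt"
        "path/filepath"
    )

    // ovmfVarsArgs is an illustrative helper, not the provider's real code: it
    // places the per-machine OVMF vars file inside dataDir so it lives next to
    // the machine image, and returns the corresponding QEMU pflash arguments.
    func ovmfVarsArgs(dataDir, machineName string) []string {
        varsPath := filepath.Join(dataDir, machineName+"_ovmf_vars.fd") // hypothetical file name
        return []string{
            "-drive", fmt.Sprintf("file=%s,if=pflash,format=raw", varsPath),
        }
    }

    func main() {
        fmt.Println(ovmfVarsArgs("/Users/core/.local/share/containers/podman/machine/qemu", "podman-machine-default"))
    }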
* rootful: NanoCpus needs to be set to more than 10000000 on cgroups v1 (see the sketch below).
* rootless: Resource limits that include NanoCPUs are not supported and are ignored.
Signed-off-by: Toshiki Sonoda <sonoda.toshiki@fujitsu.com>
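A minimal sketch of the kind of check described above; the helper name is made up, and the threshold follows from the cgroup v1 CFS quota floor of 1000µs at the default 100000µs period, which corresponds to 10000000 NanoCPUs.

    package main

    import (
        "errors"
        "fmt"
    )

    const minNanoCPUsCgroupV1 = 10_000_000 // 0.01 CPU: smallest CFS quota (1000µs) at the default 100000µs period

    // validateNanoCPUs is an illustrative helper, not Podman's actual code.
    // Rootless: NanoCPUs cannot be applied, so the limit is dropped with a warning.
    // Rootful on cgroups v1: values below the CFS floor are rejected.
    func validateNanoCPUs(nanoCPUs int64, rootless, cgroupsV1 bool) (int64, error) {
        if nanoCPUs == 0 {
            return 0, nil
        }
        if rootless {
            fmt.Println("warning: NanoCPUs is not supported rootless; ignoring the limit")
            return 0, nil
        }
        if cgroupsV1 && nanoCPUs < minNanoCPUsCgroupV1 {
            return 0, errors.New("NanoCPUs must be greater than 10000000 (0.01 CPU) on cgroups v1")
        }
        return nanoCPUs, nil
    }

    func main() {
        fmt.Println(validateNanoCPUs(5_000_000, false, true))
    }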
libimage did not walk the layers correctly, an issue that was probably
inherited from old Podman code. Fix that by vendoring in the
corresponding changes from c/common.
Fixes: #20375
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
When trying to connect a container to a network and the connection
already exists, an error should only be raised if the container is
already running (or is in the `ContainerStateCreated` transition)
to mimic the behavior of Docker as described here:
https://github.com/containers/podman/pull/15516#issuecomment-1229265942
For running and connected containers, a 403 is returned, which fixes #20365
Signed-off-by: Philipp Fruck <dev@p-fruck.de>
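A rough sketch of the described behavior, with hypothetical type and helper names rather than Podman's real API handler: a duplicate connect request only fails, with a 403, when the container is running or in the created transition.

    package main

    import (
        "fmt"
        "net/http"
    )

    type containerState int

    const (
        stateCreated containerState = iota // corresponds to ContainerStateCreated
        stateRunning
        stateStopped
    )

    // connectToNetwork is an illustrative handler fragment, not Podman's real code.
    // Reconnecting an already-connected container is only an error (HTTP 403) when
    // the container is running or in the created transition; otherwise it is a
    // no-op, matching Docker's behavior.
    func connectToNetwork(alreadyConnected bool, state containerState) (int, error) {
        if alreadyConnected && (state == stateRunning || state == stateCreated) {
            return http.StatusForbidden, fmt.Errorf("container is already connected to the network")
        }
        // (re)record the network connection for a stopped container
        return http.StatusOK, nil
    }

    func main() {
        fmt.Println(connectToNetwork(true, stateRunning))
        fmt.Println(connectToNetwork(true, stateStopped))
    }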
In case a future maintainer asks "why" all these weird-looking
four-letter architectures are present here and in CI.
Signed-off-by: Chris Evich <cevich@redhat.com>
If you change this option, all the containers disappear from the default
connection and socket. Thus it is required to recreate the resources.
Sharing between root and rootless is not possible for various reasons.
Fixes #19936
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
When a userns and a netns are used, we need to let the runtime create
the netns; otherwise the netns is not owned by the right userns and thus
the capabilities would not be correct.
The current restart logic tries to reuse the netns, which is fine if no
userns is used. But when one is used, we set up a new netns (which is
correct) but forgot to clean up the old netns. This resulted in leaked
network namespaces and, because no teardown was ever called, leaked IPAM
assignments, so a quickly restarting container will run out of IP space
very fast.
Fixes #18615
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
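A sketch of the restart logic described above, using hypothetical helper names rather than Podman's actual functions: with a userns, the old netns is torn down (freeing its IPAM lease) and the runtime is left to create the new one.

    package main

    import "fmt"

    // restartNetNS is an illustrative sketch, not Podman's implementation.
    // Without a userns the old netns can simply be reused. With a userns the
    // runtime must create a fresh netns owned by that userns, so the old one
    // has to be torn down first (including its IPAM assignments) or it leaks.
    func restartNetNS(hasUserNS bool, oldNetNS string) (string, error) {
        if !hasUserNS && oldNetNS != "" {
            return oldNetNS, nil // reuse the existing netns
        }
        if oldNetNS != "" {
            if err := teardownNetNS(oldNetNS); err != nil { // frees IPAM leases as well
                return "", err
            }
        }
        return "", nil // empty result: let the OCI runtime create the netns inside the userns
    }

    // teardownNetNS stands in for the real network teardown (hypothetical).
    func teardownNetNS(ns string) error {
        fmt.Println("tearing down", ns)
        return nil
    }

    func main() {
        fmt.Println(restartNetNS(true, "/run/netns/cni-1234"))
    }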
Allow users to specify
podman-remote top $cid -eo "pid comm"
or
podman-remote top $cid -eo pid,comm
Fixes: https://github.com/containers/podman/issues/19176
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
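A minimal sketch of the descriptor parsing this enables; the function is illustrative, not Podman's actual parser, and simply accepts both space- and comma-separated descriptor lists.

    package main

    import (
        "fmt"
        "strings"
    )

    // splitDescriptors accepts ps-style descriptors separated by spaces,
    // commas, or both, so `-eo "pid comm"` and `-eo pid,comm` yield the
    // same list. Illustrative only.
    func splitDescriptors(input []string) []string {
        var out []string
        for _, arg := range input {
            fields := strings.FieldsFunc(arg, func(r rune) bool {
                return r == ' ' || r == ','
            })
            out = append(out, fields...)
        }
        return out
    }

    func main() {
        fmt.Println(splitDescriptors([]string{"pid comm"})) // [pid comm]
        fmt.Println(splitDescriptors([]string{"pid,comm"})) // [pid comm]
    }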
# new file: test/system/085-top.bats
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
(buildah PR 5084). Should actually have been added as a bud.bats
test in that PR, but I didn't catch it in time.
Also, remove an obsolete bud-tests skip
Signed-off-by: Ed Santiago <santiago@redhat.com>
When people report issues, we often ask for the result of `podman info`.
However, if the problem is the remote connection, it will error out with
no information at all. This PR will at least report client information
before disclosing the connection error. For example, on Windows:
> .\bin\windows\podman.exe info
client:
OS: windows/amd64
provider: hyperv
version: 4.8.0-dev
host: null
Satisfies: RUN-1720
Signed-off-by: Brent Baude <bbaude@redhat.com>
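A rough sketch of the flow, with made-up helper names: client details are printed before the remote connection is attempted, so a failing connection still leaves something to report.

    package main

    import (
        "fmt"
        "os"
        "runtime"
    )

    // printInfo is an illustrative sketch of the flow, not the real command:
    // client-side details are printed first, so a broken remote connection
    // still leaves the user with something to report.
    func printInfo(connect func() error) error {
        fmt.Printf("client:\n  OS: %s/%s\n", runtime.GOOS, runtime.GOARCH)
        if err := connect(); err != nil {
            fmt.Println("host: null")
            return fmt.Errorf("unable to gather host information: %w", err)
        }
        // ...print host information on success...
        return nil
    }

    func main() {
        // Simulate a failing remote connection.
        if err := printInfo(func() error { return fmt.Errorf("connection refused") }); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }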
Implements a shared `GetLock` function for virtualization providers. Returns
a pointer to a lockfile used for serializing write operations.
[NO NEW TESTS NEEDED]
Signed-off-by: Jake Correnti <jakecorrenti+github@proton.me>
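A sketch of such a helper, assuming c/storage's lockfile package and an illustrative path layout; the real provider code may differ.

    package main

    import (
        "fmt"
        "path/filepath"

        "github.com/containers/storage/pkg/lockfile"
    )

    // GetLock returns a pointer to a lock file that callers hold around write
    // operations. The path layout here is an assumption for illustration.
    func GetLock(dir, name string) (*lockfile.LockFile, error) {
        return lockfile.GetLockFile(filepath.Join(dir, name+".lock"))
    }

    func main() {
        lock, err := GetLock("/tmp", "podman-machine-default")
        if err != nil {
            panic(err)
        }
        lock.Lock() // serialize writes across podman machine processes
        defer lock.Unlock()
        fmt.Println("holding machine lock")
    }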
If init fails, or if a SIGINT is sent during init, podman machine should remove all files and configs
created during init. This includes config JSONs, image files, SSH
identities, and system connections. On Windows, the VM instances are also
unregistered.
Signed-off-by: Ashley Cui <acui@redhat.com>
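A sketch of the cleanup-on-failure pattern described above, with hypothetical step/undo callbacks rather than the real machine init code: each created artifact registers an undo, and all undos run when init errors out or is interrupted.

    package main

    import (
        "context"
        "fmt"
        "os"
        "os/signal"
    )

    // initMachine runs init steps; each successful step returns an undo
    // function, and all undos run in reverse order if init fails or a
    // SIGINT arrives. Illustrative only.
    func initMachine(ctx context.Context, steps []func() (func(), error)) (retErr error) {
        ctx, stop := signal.NotifyContext(ctx, os.Interrupt) // catch SIGINT during init
        defer stop()

        var undos []func()
        defer func() {
            if retErr != nil {
                for i := len(undos) - 1; i >= 0; i-- { // roll back in reverse order
                    undos[i]()
                }
            }
        }()

        for _, step := range steps {
            if err := ctx.Err(); err != nil {
                return err // interrupted: the deferred rollback runs
            }
            undo, err := step()
            if err != nil {
                return err
            }
            if undo != nil {
                undos = append(undos, undo)
            }
        }
        return nil
    }

    func main() {
        err := initMachine(context.Background(), []func() (func(), error){
            func() (func(), error) { return func() { fmt.Println("removing config json") }, nil },
            func() (func(), error) { return nil, fmt.Errorf("writing image file failed") },
        })
        fmt.Println("init error:", err)
    }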
This fixes a regression caused by commit 7e6e267329. Unfortunately this
was not caught during review because, for some reason, it works fine
rootless and only fails as root.
We set the systemd log level to notice in order to hide the unit
started/stopped messages and prevent spamming the journal. The issue is
that this also causes systemd to ignore the events we write to journald,
as we send them at info level.
To fix this, we now send health_status events at notice level. I decided
against sending all events at notice, as I think info is fine for them.
Whether the notice level is right is of course debatable, but given that
the event may contain the unhealthy message, I think making it a notice
should be ok.
The main reason this made it through testing is that we do not rely on
the systemd unit to fire healthchecks in the tests, as this is flaky.
There is one test where we do rely on it, though, and I added a check
there to make sure events are displayed correctly when triggered via
systemd.
Fixes #20342
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
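A minimal sketch of the change described above, assuming the go-systemd journal package and an illustrative field name; it is not Podman's actual event writer.

    package main

    import (
        "fmt"

        "github.com/coreos/go-systemd/v22/journal"
    )

    // writeHealthEvent sends health_status events at notice priority so they
    // are not filtered away by the notice-level cap mentioned above, while
    // other events can stay at info. Illustrative only.
    func writeHealthEvent(status string) error {
        return journal.Send(
            fmt.Sprintf("health_status=%s", status),
            journal.PriNotice, // info-level entries would be hidden by the log-level setting
            map[string]string{"PODMAN_EVENT": "health_status"}, // field name is illustrative
        )
    }

    func main() {
        if err := writeHealthEvent("unhealthy"); err != nil {
            fmt.Println("journald not available:", err)
        }
    }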