libimage did not walk the layers correctly, an issue probably inherited
from old Podman code. Fix that by vendoring in the corresponding changes
from c/common.
Fixes: #20375
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
When trying to connect a container to a network and the connection
already exists, an error should only be raised if the container is
already running (or is in the `ContainerStateCreated` transition)
to mimic the behavior of Docker as described here:
https://github.com/containers/podman/pull/15516#issuecomment-1229265942
For containers that are running and already connected a 403 is
returned, which fixes #20365.
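Roughly, the intended check looks like the following sketch (stand-in
types and names, not the actual libpod code):
package main

import "fmt"

// ContainerState is a stand-in for libpod's container state type.
type ContainerState int

const (
    StateCreated ContainerState = iota
    StateRunning
    StateStopped
)

// checkExistingConnection only treats an existing network connection as
// an error while the container is running or freshly created; otherwise
// the request is accepted as a no-op, mirroring Docker.
func checkExistingConnection(state ContainerState, alreadyConnected bool, netName string) error {
    if !alreadyConnected {
        return nil
    }
    if state == StateRunning || state == StateCreated {
        // The REST layer maps this error to an HTTP 403 response.
        return fmt.Errorf("container is already connected to network %q", netName)
    }
    return nil
}

func main() {
    fmt.Println(checkExistingConnection(StateRunning, true, "mynet"))
    fmt.Println(checkExistingConnection(StateStopped, true, "mynet"))
}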
Signed-off-by: Philipp Fruck <dev@p-fruck.de>
In case a future maintainer asks "why" all these weird-looking
four-letter architectures are present here and in CI.
Signed-off-by: Chris Evich <cevich@redhat.com>
If you change this option, all the containers disappear from the
default connection and socket. Thus the resources must be recreated.
Sharing between root and rootless is not possible for various reasons.
Fixes #19936
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
When a userns and netns are used, we need to let the runtime create the
netns; otherwise the netns is not owned by the right userns and thus the
capabilities would not be correct.
The current restart logic tries to reuse the netns, which is fine if no
userns is used. When one is used we set up a new netns (which is
correct) but forgot to clean up the old one. This resulted in leaked
network namespaces and, because no teardown was ever called, leaked IPAM
assignments, so a quickly restarting container runs out of IP space very
fast.
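Sketched with hypothetical helpers (teardownNetNS/createNetNS stand in
for the real libpod functions), the restart path now looks roughly like
this:
package main

import "fmt"

// restartNetNS sketches the restart path: without a userns the old netns
// can be reused, but with a userns the stale netns must be torn down
// first so its IPAM assignment is released, and a fresh netns is then
// created by the runtime.
func restartNetNS(hasUserNS bool, oldNetNS string) (string, error) {
    if !hasUserNS && oldNetNS != "" {
        return oldNetNS, nil // reuse, as before
    }
    if oldNetNS != "" {
        // Previously this teardown was missing, leaking the netns and its IP.
        if err := teardownNetNS(oldNetNS); err != nil {
            return "", err
        }
    }
    return createNetNS()
}

// Placeholder helpers standing in for the real setup/teardown code.
func teardownNetNS(path string) error { fmt.Println("teardown", path); return nil }
func createNetNS() (string, error)    { return "/run/netns/new", nil }

func main() {
    ns, _ := restartNetNS(true, "/run/netns/old")
    fmt.Println("using", ns)
}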
Fixes #18615
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Allow users to specify
podman-remote top $cid -eo "pid comm"
or
podman-remote top $cid -eo pid,comm
Fixes: https://github.com/containers/podman/issues/19176
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Add new file: test/system/085-top.bats
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
(buildah PR 5084). Should actually have been added as a bud.bats
test in that PR, but I didn't catch it in time.
Also, remove an obsolete bud-tests skip.
Signed-off-by: Ed Santiago <santiago@redhat.com>
When people report issues, we often ask for the result of `podman info`.
However, if the problem is the remote connection, it will error out with
no information at all. This PR will at least report client information
before showing the connection error. For example, on Windows:
> .\bin\windows\podman.exe info
client:
  OS: windows/amd64
  provider: hyperv
  version: 4.8.0-dev
host: null
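A rough sketch of the idea (hypothetical helper, not the actual Podman
code): print the client section unconditionally, then attempt the remote
connection and report any error afterwards.
package main

import (
    "fmt"
    "runtime"
)

// printClientInfo emits the client block unconditionally, so `podman info`
// still tells us something useful when the remote connection fails.
func printClientInfo(version, provider string) {
    fmt.Println("client:")
    fmt.Printf("  OS: %s/%s\n", runtime.GOOS, runtime.GOARCH)
    fmt.Printf("  provider: %s\n", provider)
    fmt.Printf("  version: %s\n", version)
    fmt.Println("host: null")
}

func main() {
    printClientInfo("4.8.0-dev", "hyperv")
    // Only after printing the client info would the remote connection be
    // attempted, with its error reported without hiding the output above.
}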
Satisfies: RUN-1720
Signed-off-by: Brent Baude <bbaude@redhat.com>
Implements a shared `GetLock` function for virtualization providers. Returns
a pointer to a lockfile used for serializing write operations.
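A minimal sketch of such a helper, assuming the lockfile package from
containers/storage (the actual signature in this PR may differ):
package main

import (
    "fmt"
    "path/filepath"

    "github.com/containers/storage/pkg/lockfile"
)

// GetLock returns a file-based lock that providers can use to serialize
// write operations; callers wrap mutations in Lock()/Unlock().
func GetLock(dir, name string) (*lockfile.LockFile, error) {
    return lockfile.GetLockFile(filepath.Join(dir, name+".lock"))
}

func main() {
    lock, err := GetLock("/tmp", "podman-machine")
    if err != nil {
        fmt.Println("error:", err)
        return
    }
    lock.Lock()
    defer lock.Unlock()
    fmt.Println("holding the machine lock")
}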
[NO NEW TESTS NEEDED]
Signed-off-by: Jake Correnti <jakecorrenti+github@proton.me>
If init fails, or if a SIGINT is sent during init, podman machine should remove all files and configs
created during the init. This includes config JSONs, image files, SSH
IDs, and system connections. On Windows, the VM instances are also
unregistered.
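The cleanup pattern, sketched generically (not the actual podman machine
code): record every artifact as it is created and remove them all on
error or SIGINT.
package main

import (
    "fmt"
    "os"
    "os/signal"
    "syscall"
)

// initMachine records each file it creates; on any error or an interrupt
// the recorded files are removed so no partial machine state is left behind.
func initMachine(files []string) error {
    var created []string
    cleanup := func() {
        for _, f := range created {
            os.Remove(f)
        }
    }

    // Remove partial state if the user hits Ctrl-C during init.
    sigs := make(chan os.Signal, 1)
    signal.Notify(sigs, syscall.SIGINT, syscall.SIGTERM)
    go func() {
        <-sigs
        cleanup()
        os.Exit(1)
    }()

    for _, f := range files {
        if err := os.WriteFile(f, nil, 0o600); err != nil {
            cleanup()
            return err
        }
        created = append(created, f)
    }
    return nil
}

func main() {
    fmt.Println(initMachine([]string{"/tmp/machine.json", "/tmp/machine_id_rsa"}))
}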
Signed-off-by: Ashley Cui <acui@redhat.com>
This fixes a regression caused by commit 7e6e267329. Unfortunately this
was not caught during review because, for some reason, this works fine
rootless and only fails as root.
Because we set the systemd log level to notice in order to hide the unit
started/stopped messages and prevent spamming the journal, systemd now
also ignores the events we write to journald, since we send them at info
level.
To fix this we now send health_status events at notice level. I decided
against sending all events at notice as I think info is fine for them.
Whether the notice level is right is of course debatable, but given that
the event may contain the unhealthy message I think making this a notice
should be ok.
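A minimal sketch of the change with github.com/coreos/go-systemd
(details simplified from the real events backend):
package main

import (
    "fmt"

    "github.com/coreos/go-systemd/v22/journal"
)

// writeEvent sends health_status events at notice priority so they survive
// a notice-level filter, while all other events keep the info priority.
func writeEvent(containerID, event, status string) error {
    prio := journal.PriInfo
    if event == "health_status" {
        prio = journal.PriNotice
    }
    msg := fmt.Sprintf("container %s %s (%s)", containerID, event, status)
    return journal.Send(msg, prio, map[string]string{"PODMAN_EVENT": event})
}

func main() {
    if err := writeEvent("abc123", "health_status", "unhealthy"); err != nil {
        fmt.Println("journald not available:", err)
    }
}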
The main reason this made it through testing is that we do not rely on
the systemd unit to fire healthchecks in the tests, as that is flaky.
There is one test where we do rely on it though, and I added a check
there to make sure events are displayed correctly when triggered via
systemd.
Fixes #20342
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
readthedocs has moved to a new build configuration and our builds are
failing because we have exceeded the grace time. This PR brings us into
compliance with their build system and should allow automatic builds of
our documentation to resume.
Signed-off-by: Brent Baude <bbaude@redhat.com>
When containers are created with named volumes, creation can deadlock
because the create logic tried to lock all volumes in a loop. This is
fine if only a single container is ever created at any given time.
However, because multiple containers can be created at the same time,
they can deadlock on the volumes. This is because the order of the loop
is not stable; in fact it is based on the order in which the volumes
were specified on the CLI.
So if you create two containers at the same time, one with
`-v vol1:/dir1 -v vol2:/dir2` and the other with
`-v vol2:/dir2 -v vol1:/dir1`, then there is a chance of a deadlock.
Now one solution could be to order the volumes to prevent the issue,
but the reason for holding the lock is dubious in the first place. The
goal was to prevent the volume from being removed in the meantime.
However, that could still have happened before we acquired the lock, so
it didn't protect against that.
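For reference, the ordering approach would look roughly like this
generic sketch (not the chosen fix): always take the per-volume locks in
a stable, sorted order so two creates can never hold them in opposite
orders.
package main

import (
    "fmt"
    "sort"
    "sync"
)

// lockVolumes acquires the per-volume locks in sorted order and returns an
// unlock function. Locking in CLI order instead (vol1,vol2 vs. vol2,vol1)
// lets two creates each grab their first lock and wait forever for the other.
func lockVolumes(locks map[string]*sync.Mutex, names ...string) func() {
    sorted := append([]string(nil), names...)
    sort.Strings(sorted)
    for _, n := range sorted {
        locks[n].Lock()
    }
    return func() {
        for _, n := range sorted {
            locks[n].Unlock()
        }
    }
}

func main() {
    locks := map[string]*sync.Mutex{"vol1": {}, "vol2": {}}
    var wg sync.WaitGroup
    for _, order := range [][]string{{"vol1", "vol2"}, {"vol2", "vol1"}} {
        wg.Add(1)
        go func(order []string) {
            defer wg.Done()
            unlock := lockVolumes(locks, order...)
            defer unlock()
            fmt.Println("created container with volumes", order)
        }(order)
    }
    wg.Wait()
}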
Both boltdb and sqlite already prevent us from adding a container with
volumes that do not exist, due to their internal consistency checks.
Sqlite even uses FOREIGN KEY relationships, so the schema will prevent
us from doing anything wrong.
The create code currently first checks whether the volume exists and,
if not, creates it. I have checked that the db guarantees that adding a
container with a missing volume will not work:
Boltdb: `no volume with name test2 found in database when adding container xxx: no such volume`
Sqlite: `adding container volume test2 to database: FOREIGN KEY constraint failed`
Keep in mind that this error is normally not seen; it only appears if
the volume is removed between the volume-exists check and adding the
container to the db, which is an acceptable race and a pre-existing
condition anyway.
[NO NEW TESTS NEEDED] Race condition, hard to test in CI.
Fixes #20313
Signed-off-by: Paul Holzinger <pholzing@redhat.com>