(buildah PR 5084). Should actually have been added as a bud.bats
test in that PR, but I didn't catch it in time.
Also, remove an obsolete bud-tests skip
Signed-off-by: Ed Santiago <santiago@redhat.com>
When people report issues, we often ask for the result of `podman info`.
However, if the problem is the remote connection, it errors out with
no information at all. This PR will at least report client information
before disclosing the connection error. For example, on Windows:
> .\bin\windows\podman.exe info
client:
  OS: windows/amd64
  provider: hyperv
  version: 4.8.0-dev
host: null
Satisfies: RUN-1720
Signed-off-by: Brent Baude <bbaude@redhat.com>
Implements a shared `GetLock` function for virtualization providers. Returns
a pointer to a lockfile used for serializing write operations.
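For illustration only (the package name, signature, and lock path below are
assumptions, not necessarily what the PR implements), such a helper could be
built on containers/storage's lockfile package:

```go
// Minimal sketch, not the actual implementation from the PR: a shared helper
// that machine providers could use to obtain a lockfile for serializing
// write operations. Package name, signature, and lock path are assumptions.
package machine

import (
	"path/filepath"

	"github.com/containers/storage/pkg/lockfile"
)

// GetLock returns a lockfile named after the VM so that concurrent writes
// to the machine's configuration are serialized.
func GetLock(name string, dir string) (*lockfile.LockFile, error) {
	return lockfile.GetLockFile(filepath.Join(dir, name+".lock"))
}
```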
[NO NEW TESTS NEEDED]
Signed-off-by: Jake Correnti <jakecorrenti+github@proton.me>
If init fails, or if a SIGINT is sent during init, podman machine should remove all files and configs
created during init. This includes config JSONs, image files, SSH
identities, and system connections. On Windows, the VM instances are also
unregistered.
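As an illustration of the general shape of that cleanup (doInit and the
individual cleanup steps below are hypothetical placeholders, not the actual
podman machine code):

```go
// Illustrative sketch only: undo any artifacts created during init when init
// fails or a SIGINT arrives. doInit and the cleanup steps are hypothetical.
package main

import (
	"context"
	"errors"
	"fmt"
	"os"
	"os/signal"
)

func doInit(ctx context.Context, cleanups *[]func()) error {
	// Each successfully created artifact registers how to remove itself,
	// e.g. config JSON, image file, SSH identity, system connection.
	*cleanups = append(*cleanups, func() { fmt.Println("removing config JSON") })
	select {
	case <-ctx.Done():
		return ctx.Err() // SIGINT received mid-init
	default:
		return errors.New("init failed") // simulate a failure
	}
}

func main() {
	ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt)
	defer stop()

	var cleanups []func()
	if err := doInit(ctx, &cleanups); err != nil {
		// Run cleanups in reverse order of creation.
		for i := len(cleanups) - 1; i >= 0; i-- {
			cleanups[i]()
		}
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```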
Signed-off-by: Ashley Cui <acui@redhat.com>
This fixes a regression caused by commit 7e6e267329. Unfortunately it
was not caught during review because, for some reason, this works fine
rootless and only fails as root.
We set the systemd log level to notice in order to hide the unit
started/stopped messages and prevent them from spamming the journal. The
issue is that this now also causes systemd to ignore the events we write
to journald, as we send them at info level as well.
To fix this, we now send health_status events at notice level. I
decided against sending all events at notice, as I think info is fine for
them. Whether notice is the right level is of course debatable, but
given that the event may contain the unhealthy message, I think having
this at notice should be OK.
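For illustration, emitting an event at notice priority with go-systemd could
look roughly like this (a sketch, not the exact podman events code; the
journal field names are assumptions):

```go
// Sketch: emit a health_status event at notice priority so it is not
// filtered out when the unit's log level is set to notice.
package main

import "github.com/coreos/go-systemd/v22/journal"

func main() {
	// Field keys besides the message/priority handling are assumptions,
	// not necessarily podman's exact journald keys.
	journal.Send("health_status", journal.PriNotice, map[string]string{
		"PODMAN_EVENT":         "health_status",
		"PODMAN_HEALTH_STATUS": "unhealthy",
	})
}
```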
The main reason this made it through testing is that we do not rely
on the systemd unit to fire healthchecks in the tests, as this is flaky.
There is one test where we do rely on it, though, and I added a check there
to make sure events are displayed correctly when triggered via systemd.
Fixes #20342
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
readthedocs has moved to a new build configuration, and our builds are
failing because we have exceeded the grace time. This PR puts us in
compliance with their build system and should allow automatic builds of
our documentation to resume.
Signed-off-by: Brent Baude <bbaude@redhat.com>
When containers are created with named volumes, creation can deadlock because
the create logic tries to lock all volumes in a loop. This is fine if
only a single container is ever created at any given time. However, because
multiple containers can be created at the same time, they can cause a
deadlock between the volumes. This is because the order of the loop is
not stable; in fact it is based on the order in which the volumes were
specified on the CLI.
So if you create two containers at the same time, one with
`-v vol1:/dir1 -v vol2:/dir2` and the other one with
`-v vol2:/dir2 -v vol1:/dir1`, then there is a chance for a deadlock.
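The underlying problem is classic inconsistent lock ordering. Purely as an
illustration (not podman code), two goroutines taking the same two locks in
opposite order can block each other forever:

```go
// Illustration only: acquiring the same two locks in opposite order from two
// goroutines can deadlock, which mirrors two container creates locking their
// volumes in CLI order.
package main

import "sync"

func main() {
	var vol1, vol2 sync.Mutex
	var wg sync.WaitGroup
	wg.Add(2)

	go func() { // container A: -v vol1:... -v vol2:...
		defer wg.Done()
		vol1.Lock()
		vol2.Lock() // may wait forever if B already holds vol2
		vol2.Unlock()
		vol1.Unlock()
	}()

	go func() { // container B: -v vol2:... -v vol1:...
		defer wg.Done()
		vol2.Lock()
		vol1.Lock() // may wait forever if A already holds vol1
		vol1.Unlock()
		vol2.Unlock()
	}()

	wg.Wait() // with unlucky scheduling this never returns
}
```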
Now, one solution could be to order the volumes to prevent the issue, but
the reason for holding the lock is dubious in the first place. The goal was
to prevent the volume from being removed in the meantime. However, that
could still have happened before we acquired the lock, so it didn't protect
against that.
Both boltdb and sqlite already prevent us from adding a container with
volumes that do not exist, due to their internal consistency checks.
Sqlite even uses FOREIGN KEY relationships, so the schema will prevent us
from doing anything wrong.
The create code currently first checks if the volume exists and, if not,
creates it. I have checked that the db guarantees that adding a container
with a missing volume will not work:
Boltdb: `no volume with name test2 found in database when adding container xxx: no such volume`
Sqlite: `adding container volume test2 to database: FOREIGN KEY constraint failed`
Keep in mind that this error is normally not seen; it only appears if the
volume is removed between the volume-exists check and adding the container
in the db, which is an acceptable race and a pre-existing condition anyway.
[NO NEW TESTS NEEDED] Race condition, hard to test in CI.
Fixes#20313
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Add a new `compatMode` parameter to libpod's pull endpoint. If set, the
streamed JSON payload is identical to that of the Docker compat
endpoint and allows for a smooth integration into existing tooling such
as podman-py and Podman Desktop, some of which already have code for
rendering the compat progress data.
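As a usage sketch (the socket path, the unversioned API path, and the image
reference below are assumptions for illustration, not part of the PR), a
client could request the compat-style progress stream from the libpod
endpoint like this:

```go
// Sketch: call the libpod pull endpoint with compatMode=true over the podman
// service's unix socket and stream the Docker-compat JSON progress records.
package main

import (
	"context"
	"io"
	"net"
	"net/http"
	"net/url"
	"os"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{
			// Socket path is an assumption; adjust for rootless setups.
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", "/run/podman/podman.sock")
			},
		},
	}
	q := url.Values{}
	q.Set("reference", "docker.io/library/alpine:latest")
	q.Set("compatMode", "true")
	resp, err := client.Post("http://d/libpod/images/pull?"+q.Encode(), "application/json", nil)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	io.Copy(os.Stdout, resp.Body) // stream of compat-style progress JSON objects
}
```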
We may add a libpod-specific parameter in the future which will stream
different progress data.
Fixes: issues.redhat.com/browse/RUN-1936
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
Break out the code for pulling images via the compat API. The goal is to
make this code shareable between the compat and libpod API to allow for
a "compat mode" in the libpod pull endpoint.
[NO NEW TESTS NEEDED] as it should not change behavior.
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
Use sqlite as the default, but for upgrades it will still use boltdb to
avoid breaking anyone. This is done by checking if the boltdb file already
exists; if it does, then we have to use it.
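Conceptually (a sketch with an assumed database file name and directory
layout, not the exact implementation), the backend selection boils down to:

```go
// Sketch: default to sqlite, but keep boltdb if an existing installation
// already has a boltdb database file. The "bolt_state.db" name and the
// staticDir layout are assumptions for illustration.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func defaultDBBackend(staticDir string) string {
	if _, err := os.Stat(filepath.Join(staticDir, "bolt_state.db")); err == nil {
		return "boltdb" // existing upgrade: keep the old backend
	}
	return "sqlite" // fresh installation: use the new default
}

func main() {
	fmt.Println(defaultDBBackend("/var/lib/containers/storage/libpod"))
}
```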
I added an e2e test to check the new logic and removed the system test
for it. The problem with the system test is that the storage dir is shared
there, so all following commands without --db-backend would try to use
boltdb, as a single --db-backend boltdb command will create the file and
then all following commands will use it because of the backwards compat.
In e2e tests each test uses its own --root, so it is not an issue there.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>