When containers are created with named volumes, the create logic can
deadlock because it tries to lock all volumes in a loop. That is fine as
long as only a single container is ever created at a time. However,
because multiple containers can be created at the same time, they can
deadlock on each other's volume locks. The locking order of the loop is
not stable; it is based on the order in which the volumes were specified
on the CLI.
So if you create two containers at the same time, one with
`-v vol1:/dir1 -v vol2:/dir2` and the other one with
`-v vol2:/dir2 -v vol1:/dir1`, then there is a chance of a deadlock.
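To illustrate the failure mode, here is a minimal sketch in plain Go (not
the libpod code): two goroutines taking the same two locks in opposite
order, which is exactly the situation the two parallel create calls end
up in.

```go
package main

import "sync"

// Minimal sketch, not libpod code: two goroutines locking the same two
// locks in opposite order can deadlock, just like two parallel
// `podman create` calls iterating their volume lists in different orders.
func main() {
	var vol1, vol2 sync.Mutex
	var wg sync.WaitGroup
	wg.Add(2)

	go func() { // container A: -v vol1:/dir1 -v vol2:/dir2
		defer wg.Done()
		vol1.Lock()
		vol2.Lock() // can block forever if B already holds vol2
		vol2.Unlock()
		vol1.Unlock()
	}()

	go func() { // container B: -v vol2:/dir2 -v vol1:/dir1
		defer wg.Done()
		vol2.Lock()
		vol1.Lock() // can block forever if A already holds vol1
		vol1.Unlock()
		vol2.Unlock()
	}()

	wg.Wait()
}
```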
One solution could be to sort the volumes to get a stable locking order,
but the reason for holding the lock at all is dubious. The goal was to
prevent the volume from being removed in the meantime. However, that
could still have happened before we acquired the lock, so the lock did
not actually protect against it.
Both boltdb and sqlite already prevent us from adding a container with
volumes that do not exist, due to their internal consistency checks.
Sqlite even uses FOREIGN KEY relationships, so the schema will prevent us
from doing anything wrong.
The create code first checks whether the volume exists and creates it if
it does not. I have verified that the database guarantees that a
container referencing a missing volume cannot be added:
Boltdb: `no volume with name test2 found in database when adding container xxx: no such volume`
Sqlite: `adding container volume test2 to database: FOREIGN KEY constraint failed`
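The FOREIGN KEY behavior can be reproduced outside of Podman with a small
sketch; the schema, table names, and the sqlite driver import below are
illustrative assumptions, not the actual libpod schema:

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/mattn/go-sqlite3" // assumed driver, for illustration only
)

func main() {
	db, err := sql.Open("sqlite3", ":memory:")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	db.SetMaxOpenConns(1) // keep the in-memory DB and the PRAGMA on one connection

	// Illustrative schema, not the real libpod one.
	for _, stmt := range []string{
		`PRAGMA foreign_keys = ON;`,
		`CREATE TABLE Volume (Name TEXT PRIMARY KEY);`,
		`CREATE TABLE ContainerVolume (
			ContainerID TEXT NOT NULL,
			VolumeName  TEXT NOT NULL REFERENCES Volume(Name)
		);`,
	} {
		if _, err := db.Exec(stmt); err != nil {
			log.Fatal(err)
		}
	}

	// Adding a container volume that points at a missing volume is rejected.
	_, err = db.Exec(`INSERT INTO ContainerVolume VALUES ('xxx', 'test2');`)
	fmt.Println(err) // FOREIGN KEY constraint failed
}
```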
Keep in mind that this error is normally not seen; it only appears if the
volume is removed between the volume-exists check and adding the
container to the db. That is an acceptable race and a pre-existing
condition anyway.
[NO NEW TESTS NEEDED] Race condition, hard to test in CI.
Fixes #20313
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Use sqlite as the default, but for upgrades keep using boltdb to avoid
breaking anyone. This is done by checking whether the boltdb file already
exists; if it does, we have to use it.
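The selection logic amounts to something like the following sketch; the
state file name and helper are assumptions for illustration, not the
exact libpod code:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// pickDBBackend sketches the idea: an explicit --db-backend always wins,
// otherwise keep boltdb when its state file already exists, and default
// to sqlite for fresh installations. The file name is an assumption.
func pickDBBackend(explicit, staticDir string) string {
	if explicit != "" {
		return explicit
	}
	if _, err := os.Stat(filepath.Join(staticDir, "bolt_state.db")); err == nil {
		return "boltdb" // existing installation keeps the old backend
	}
	return "sqlite" // new default
}

func main() {
	fmt.Println(pickDBBackend("", "/var/lib/containers/storage/libpod"))
}
```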
I added an e2e test to check the new logic and removed the system test
for it. The problem with the system test is that the storage dir is
shared there, so a single --db-backend boltdb command creates the boltdb
file and all following commands without --db-backend then keep using
boltdb because of the backwards compatibility. In e2e tests each test
uses its own --root, so it is not an issue there.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
The libpod containers create endpoint wasn't checking whether
the image existed before creating the container. If the image
doesn't exist, it should return a 404 status code, but it was
failing and returning a 500 status code.
This fix matches the behavior of the compat endpoint.
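A minimal sketch of the intended flow (the handler and lookup helper
below are hypothetical stand-ins, not the actual libpod handler): resolve
the image first and map "not found" to a 404 instead of letting the
create path fail later with a 500.

```go
package main

import (
	"fmt"
	"net/http"
)

// imageExists is a hypothetical stand-in for the real image lookup.
func imageExists(name string) bool { return false }

// createContainerHandler sketches the intended flow: check the image up
// front and return 404 when it is missing; 500 stays reserved for genuine
// server-side failures.
func createContainerHandler(w http.ResponseWriter, r *http.Request) {
	img := r.URL.Query().Get("image") // stand-in for the decoded create spec

	if !imageExists(img) {
		http.Error(w, fmt.Sprintf("no such image %q", img), http.StatusNotFound)
		return
	}
	// ... continue with the normal container create path ...
	w.WriteHeader(http.StatusCreated)
}

func main() {
	http.HandleFunc("/containers/create", createContainerHandler)
	_ = http.ListenAndServe("127.0.0.1:8080", nil)
}
```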
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
When running as a service, the c.state.Mounted flag could get out of
sync if the container is cleaned up through the cleanup process.
To avoid this, always check if the mountpoint is really present before
skipping the mount.
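As a rough sketch of the idea, using a hand-rolled mountinfo check (the
real code may rely on a helper from containers/storage instead): trust
the mountpoint on disk rather than the stored flag alone.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// isMounted is a rough stand-in for a proper mount check: it scans
// /proc/self/mountinfo for the given mountpoint.
func isMounted(mountpoint string) bool {
	f, err := os.Open("/proc/self/mountinfo")
	if err != nil {
		return false
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		// Field 5 (index 4) of a mountinfo line is the mount point.
		if len(fields) >= 5 && fields[4] == mountpoint {
			return true
		}
	}
	return false
}

func main() {
	// At the call site the idea is to skip the mount only when both the
	// stored flag and the real mountpoint agree, roughly:
	//   if c.state.Mounted && isMounted(c.state.Mountpoint) { /* skip */ }
	fmt.Println(isMounted("/proc"))
}
```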
[NO NEW TESTS NEEDED]
Closes: https://github.com/containers/podman/issues/17042
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Add support for adding podman-level arguments before the subcommand
Add a specific key for Containers Conf Modules
Global arguments are added for both start and stop commands
Adjust testing environment
Add tests
Add to man page
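For illustration, a Quadlet .container unit using these options could
look like the sketch below; the key names are assumed to be GlobalArgs
and ContainersConfModule, and the image and module path are made up:

```
[Container]
Image=quay.io/fedora/fedora:latest
# Podman-level (global) arguments are inserted before the subcommand,
# for both the generated start and stop commands.
GlobalArgs=--log-level=debug
# Load a containers.conf module for this unit.
ContainersConfModule=/etc/containers/containers.conf.modules/custom.conf
```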
Signed-off-by: Ygal Blum <ygal.blum@gmail.com>
As requested in containers/podman/issues/20000, add a `privileged` field
to the containers table in containers.conf. I was hesitant to add such
a field at first (for security reasons), but I understand that such a
field can come in handy when using modules; certain workloads require a
privileged container.
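For illustration, the field would be set in containers.conf, or in a
containers.conf module, along these lines; the module file name below is
made up:

```
# e.g. a module such as containers.conf.modules/privileged.conf,
# loaded on demand with `podman --module privileged.conf ...`
[containers]
privileged = true
```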
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
Creates a wrapper around the Qemu command-line implementation to avoid
hard-coding the different command-line options in Init and Start.
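As a sketch of the shape of such a wrapper (the type and method names
here are illustrative, not necessarily the real implementation):

```go
package main

import "fmt"

// QemuCmd sketches the wrapper idea: the command line is assembled
// through small, named methods instead of being hard-coded in Init and
// Start.
type QemuCmd []string

func NewQemuBuilder(binary string) QemuCmd {
	return QemuCmd{binary}
}

func (q *QemuCmd) SetMemory(mib uint64) {
	*q = append(*q, "-m", fmt.Sprintf("%d", mib))
}

func (q *QemuCmd) SetCPUs(cpus uint64) {
	*q = append(*q, "-smp", fmt.Sprintf("%d", cpus))
}

func (q *QemuCmd) Build() []string {
	return *q
}

func main() {
	cmd := NewQemuBuilder("qemu-system-x86_64")
	cmd.SetMemory(2048)
	cmd.SetCPUs(2)
	fmt.Println(cmd.Build()) // [qemu-system-x86_64 -m 2048 -smp 2]
}
```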
Signed-off-by: Jake Correnti <jakecorrenti+github@proton.me>