add a function to securely mount a subpath inside a volume. We cannot
trust that the subpath is safe since it is beneath a volume that could
be controlled by a separate container. To avoid TOCTOU races between
when we check the subpath and when the OCI runtime mounts it, we open
the subpath, validate it, bind mount to a temporary directory and use
it instead of the original path.
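A minimal sketch of the open-validate-remount flow, using golang.org/x/sys/unix; the helper name and the validation step are illustrative, not the actual Podman code (which also has to guard the path traversal itself):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"

	"golang.org/x/sys/unix"
)

// mountSubpathSafely opens the subpath first, validates the open fd, and then
// bind mounts its /proc/self/fd entry onto a private temporary directory, so
// the path handed to the OCI runtime cannot be swapped underneath us.
func mountSubpathSafely(volumeRoot, subpath, tmpRoot string) (string, error) {
	fd, err := unix.Open(filepath.Join(volumeRoot, subpath),
		unix.O_PATH|unix.O_CLOEXEC|unix.O_NOFOLLOW, 0)
	if err != nil {
		return "", err
	}
	defer unix.Close(fd)

	// Validate the opened file, e.g. make sure it is a directory
	// (further checks elided in this sketch).
	var st unix.Stat_t
	if err := unix.Fstat(fd, &st); err != nil {
		return "", err
	}
	if st.Mode&unix.S_IFMT != unix.S_IFDIR {
		return "", fmt.Errorf("%s is not a directory", subpath)
	}

	target, err := os.MkdirTemp(tmpRoot, "subpath-")
	if err != nil {
		return "", err
	}
	// Bind mount via the fd so a concurrent rename of the subpath cannot
	// redirect the mount (the TOCTOU window described above).
	src := fmt.Sprintf("/proc/self/fd/%d", fd)
	if err := unix.Mount(src, target, "", unix.MS_BIND, ""); err != nil {
		return "", err
	}
	return target, nil
}
```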
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
It turns out the restart is _not_ a stop+start but keeps certain
resources open and is subject to some timeouts that may differ across
distributions' default settings.
[NO NEW TESTS NEEDED] as I have absolutely no idea how to reliably cause
the failure/flake/race.
Also ignore ENOENT for the CID file when removing a container, which has
been identified as the actual fix for #17607.
Fixes: #17607
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
When a userns is set we set up the network after the bind mounts, so at the
point where resolv.conf is generated we do not yet know the subnet.
Just like the other DNS servers for bridge networks, we need to add the
IP later in completeNetworkSetup().
Fixes https://bugzilla.redhat.com/show_bug.cgi?id=2182052
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
`Ping()` requires the DB lock, so we had to move it into a transaction
to fix #17859. Since we try to access the DB directly afterwards, I
prefer to let that fail instead of paying the cost of a transaction
which would lock the DB for _all_ processes.
[NO NEW TESTS NEEDED] as it's a hard to reproduce race.
Fixes: #17859
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
If a container with an ID starting with "db1" exists, and a
container named "db1" also exists, and they are different
containers - if I run `podman inspect db1` the container named
"db1" should be inspected, and there should not be an error that
multiple containers matched the name or id "db1". This was
already handled by BoltDB, and now is properly managed by SQLite.
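A hedged sketch of that resolution order, with assumed table and column names (not the real Podman schema) and ambiguity checks elided: an exact name match is tried before any ID-prefix lookup.

```go
package db

import (
	"database/sql"
	"errors"
)

// resolveNameOrID prefers an exact name match over an ID-prefix match, so a
// container named "db1" wins over a different container whose ID starts with
// "db1". Table and column names are assumptions for illustration.
func resolveNameOrID(db *sql.DB, nameOrID string) (string, error) {
	var id string
	err := db.QueryRow("SELECT ID FROM ContainerConfig WHERE Name = ?;", nameOrID).Scan(&id)
	if err == nil {
		return id, nil
	}
	if !errors.Is(err, sql.ErrNoRows) {
		return "", err
	}
	// No exact name match: fall back to treating the argument as an ID prefix.
	err = db.QueryRow("SELECT ID FROM ContainerConfig WHERE ID LIKE ?;", nameOrID+"%").Scan(&id)
	return id, err
}
```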
Fixes #17905
Signed-off-by: Matt Heon <mheon@redhat.com>
The DB config is a single-row table, and the first Podman process
to run against the database creates it. However, there was a race
where multiple Podman processes, started simultaneously, could
try to write it. Only the first would succeed, with subsequent
processes failing once (and then running correctly when re-run),
but it was happening often in CI and deserves fixing.
[NO NEW TESTS NEEDED] It's a CI flake fix.
Signed-off-by: Matt Heon <mheon@redhat.com>
The SQLite developers consider the shared cache a misfeature [1], and after
turning it on, we saw a new set of flakes. Let's turn it off and trust the
developers [1] that WAL mode is sufficient for our purposes.
Turning the shared cache off also makes the DB smaller and faster.
[NO NEW TESTS NEEDED]
[1] https://sqlite.org/forum/forumpost/1f291cdca4
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
The symptoms in #17859 indicate that setting the PRAGMAs in individual
EXECs outside of a transaction can lead to concurrency issues and
failures when the DB is locked. Hence set all PRAGMAs when opening
the connection. Move them into individual constants to improve
documentation and readability.
Further make transactions exclusive as #17859 also mentions an error
that the DB is locked during a transaction.
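For illustration, with the mattn/go-sqlite3 driver the PRAGMA-equivalent options can be passed as DSN parameters so they take effect when the connection is opened; the particular options and constant names below are assumptions, not the exact Podman values:

```go
package db

import (
	"database/sql"

	_ "github.com/mattn/go-sqlite3"
)

// One constant per option documents what it does (values are illustrative).
const (
	sqliteOptJournal     = "_journal_mode=WAL" // write-ahead logging
	sqliteOptSynchronous = "_synchronous=FULL" // fsync on commit
	sqliteOptForeignKeys = "_foreign_keys=1"   // enforce foreign keys
	sqliteOptTXLock      = "_txlock=exclusive" // BEGIN EXCLUSIVE for transactions
)

func openDB(path string) (*sql.DB, error) {
	dsn := "file:" + path + "?" +
		sqliteOptJournal + "&" + sqliteOptSynchronous + "&" +
		sqliteOptForeignKeys + "&" + sqliteOptTXLock
	return sql.Open("sqlite3", dsn)
}
```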
[NO NEW TESTS NEEDED] - existing tests cover the code.
Fixes: #17859
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
<MH: Cherry-picked on top of my branch>
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
I was searching the SQLite docs for a fix, but apparently that
was the wrong place; it's a common enough error with the Go
frontend for SQLite that the fix is prominently listed in the API
docs for go-sqlite3. Setting cache mode to 'shared' and using a
maximum of 1 simultaneous open connection should fix it.
Performance implications of this are unclear, but cache=shared
sounds like it will be a benefit, not a curse.
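A sketch of the two settings mentioned above, assuming the mattn/go-sqlite3 driver (`cache=shared` as a DSN parameter plus database/sql's connection cap):

```go
package db

import (
	"database/sql"

	_ "github.com/mattn/go-sqlite3"
)

func openSharedCacheDB(path string) (*sql.DB, error) {
	// Shared cache lets connections in the same process share one page cache.
	db, err := sql.Open("sqlite3", "file:"+path+"?cache=shared")
	if err != nil {
		return nil, err
	}
	// Cap the pool at a single open connection to avoid "database is locked"
	// errors from concurrent writers within the same process.
	db.SetMaxOpenConns(1)
	return db, nil
}
```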
[NO NEW TESTS NEEDED] This fixes a flake with concurrent DB
access.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
As described in #17777, the `restart` on-failure action did not behave
correctly when the health check is being run by a transient systemd
unit. It ran just fine when being executed outside such a unit, for
instance, manually or, as done in the system tests, in a scripted
fashion.
There were two issues causing the `restart` on-failure action to
misbehave:
1) The transient systemd units used the default `KillMode=cgroup` which
will nuke all processes in the specific cgroup including the recently
restarted container/conmon once the main `podman healthcheck run`
process exits.
2) Podman attempted to remove the transient systemd unit and timer
during restart. That is perfectly fine when manually restarting the
container but not when the restart itself is being executed inside
such a transient unit. Ultimately, Podman tried to shoot itself in
the foot.
Fix both issues by moving the restart logic into the cleanup process.
Instead of restarting the container, the `healthcheck run` will just
stop the container and the cleanup process will restart the container
once it has turned unhealthy.
Fixes: #17777
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
Transient mode means the DB should not persist, so we should use the
RunRoot instead of the GraphRoot.
Signed-off-by: Matt Heon <mheon@redhat.com>
Two main changes:
- The transient state tests relied on BoltDB paths, change to
make them agnostic
- The volume code in SQLite wasn't retrieving and setting the
volume plugin for volumes that used one.
Signed-off-by: Matt Heon <mheon@redhat.com>
Return more sensible errors than SQLite's embedded constraint
failure ones. Should fix a number of integration tests.
Signed-off-by: Matt Heon <mheon@redhat.com>
When streaming events, prevent returning duplicates after a log rotation
by marking a beginning and an end for rotated events. Before starting to
stream, get a timestamp while holding the event lock. The timestamp
allows for detecting whether a rotation event happened while reading the
log file and to skip all events between the begin and end rotation
event.
In an ideal scenario, we could detect rotated events by enforcing a
chronological order when reading and skip those detected to not be more
recent than the last read event. However, events are not always
_written_ in chronological order. While this can be changed, existing
event files could not be read correctly anymore.
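A simplified sketch of the skipping logic, using a made-up Event type and marker names rather than Podman's actual event types:

```go
package events

import "time"

// Event is a minimal stand-in for a journal/log-file event in this sketch.
type Event struct {
	Type string // e.g. "rotation-begin", "rotation-end", or a regular event
	Time time.Time
}

// filterRotated drops events that sit between rotation-begin and rotation-end
// markers older than readStart (the timestamp taken under the event lock
// before streaming began), so rotated entries are not emitted twice.
func filterRotated(events []Event, readStart time.Time) []Event {
	var out []Event
	skipping := false
	for _, ev := range events {
		switch {
		case ev.Type == "rotation-begin" && ev.Time.Before(readStart):
			skipping = true
		case ev.Type == "rotation-end" && ev.Time.Before(readStart):
			skipping = false
		case !skipping:
			out = append(out, ev)
		}
	}
	return out
}
```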
Fixes: #17665
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
On cgroup v1 we need to mount only the systemd named hierarchy as
writeable, so we configure the OCI runtime to mount /sys/fs/cgroup as
read-only and on top of that bind mount /sys/fs/cgroup/systemd.
But when we use a private cgroupns, we cannot do that since we don't
know the final cgroup path.
Also, do not override the mount if there is already one for
/sys/fs/cgroup/systemd.
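For illustration only, the two mounts can be expressed as OCI runtime-spec entries roughly like this (paths from the text; the exact options Podman sets may differ):

```go
package main

import specs "github.com/opencontainers/runtime-spec/specs-go"

// cgroupV1SystemdMounts returns a read-only /sys/fs/cgroup mount with a
// writable bind mount of the systemd named hierarchy layered on top.
// Option lists are illustrative.
func cgroupV1SystemdMounts() []specs.Mount {
	return []specs.Mount{
		{
			Destination: "/sys/fs/cgroup",
			Type:        "cgroup",
			Source:      "cgroup",
			Options:     []string{"ro", "nosuid", "noexec", "nodev"},
		},
		{
			Destination: "/sys/fs/cgroup/systemd",
			Type:        "bind",
			Source:      "/sys/fs/cgroup/systemd",
			Options:     []string{"bind", "rw", "nosuid", "noexec", "nodev"},
		},
	}
}
```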
Closes: https://github.com/containers/podman/issues/17727
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Currently Podman prevents SELinux container separation
when running within a container. This PR adds a new
--security-opt label=nested
When setting this option, Podman unmasks and mounts
/sys/fs/selinux into the container, making /sys/fs/selinux
fully exposed. Secondly, Podman sets the attribute
run.oci.mount_context_type=rootcontext
This attribute tells crun to mount volumes with rootcontext=MOUNTLABEL
as opposed to context=MOUNTLABEL.
With these two settings Podman inside the container is allowed to set
its own SELinux labels on tmpfs file systems mounted into its parent
container, while still being confined by SELinux. Thus you can have
nested SELinux labeling inside of a container.
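Expressed against the OCI runtime spec, the two settings look roughly like the following sketch (annotation key from the text; everything else is an outline, not the actual Podman implementation):

```go
package main

import specs "github.com/opencontainers/runtime-spec/specs-go"

// applyNestedLabel sketches the two spec changes described above: stop masking
// /sys/fs/selinux so it is visible inside the container, and set the crun
// annotation that switches volume mounts from context= to rootcontext=.
func applyNestedLabel(spec *specs.Spec) {
	if spec.Linux != nil {
		masked := spec.Linux.MaskedPaths[:0]
		for _, p := range spec.Linux.MaskedPaths {
			if p != "/sys/fs/selinux" {
				masked = append(masked, p)
			}
		}
		spec.Linux.MaskedPaths = masked
	}
	if spec.Annotations == nil {
		spec.Annotations = map[string]string{}
	}
	spec.Annotations["run.oci.mount_context_type"] = "rootcontext"
}
```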
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
On FreeBSD, c.config.Spec.Linux is not populated - in this case, we can
assume that the container is not using a pid namespace.
[NO NEW TESTS NEEDED]
Signed-off-by: Doug Rabson <dfr@rabson.org>
Add a hidden flag to set the database backend and plumb it into
podman-info. Further add a system test to make sure the flag and the
info output are working properly.
Note that the test may need to be changed once we have settled on how
to test the sqlite backend in CI.
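A rough sketch of a hidden CLI flag using cobra/pflag; the flag name, default, and wiring are assumptions for illustration:

```go
package main

import "github.com/spf13/cobra"

var dbBackend string

// addDBBackendFlag registers the flag but keeps it out of --help output.
func addDBBackendFlag(cmd *cobra.Command) {
	flags := cmd.Flags()
	flags.StringVar(&dbBackend, "db-backend", "boltdb", "database backend to use")
	// Hidden: usable on the command line, but not advertised.
	_ = flags.MarkHidden("db-backend")
}
```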
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
There's a unique constraint in the table, so we shouldn't add the same
volume more than once to the same container.
[NO NEW TESTS NEEDED] as it fixes an existing one.
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
A partial name match is tricky as we want it to be fast but also make
sure there's only one partial match iff there's no full one.
[NO NEW TESTS NEEDED] as it fixes a system test.
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
I wasn't able to find a way to get error checks working with the sqlite3
library in the time at hand.
[NO NEW TESTS NEEDED]
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
Add a way to keep play kube running in the foreground and terminate all pods
after receiving a SIGINT or SIGTERM signal. The pods will also be
cleaned up after the containers in them have exited.
If an error occurs during kube play, any resources created up to the
error point will be cleaned up as well.
Add tests for the various scenarios.
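A condensed sketch of the wait-and-teardown loop, with a hypothetical teardown callback standing in for the real kube play cleanup:

```go
package main

import (
	"context"
	"os"
	"os/signal"
	"syscall"
)

// runForeground blocks until SIGINT/SIGTERM (or ctx cancellation) and then
// invokes the supplied teardown function to stop and clean up the pods.
// The teardown callback is a placeholder for the real kube play cleanup.
func runForeground(ctx context.Context, teardown func() error) error {
	sigCh := make(chan os.Signal, 1)
	signal.Notify(sigCh, syscall.SIGINT, syscall.SIGTERM)
	defer signal.Stop(sigCh)

	select {
	case <-sigCh:
	case <-ctx.Done():
	}
	return teardown()
}
```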
Fixes #14522
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
A number of fixes for pod creation and removal.
The important part is that matching partial IDs requires a trailing `%`
for SQL to interpret it as a wildcard. More information at
https://www.sqlitetutorial.net/sqlite-like/
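For example (table and column names are assumptions), a prefix lookup only behaves as one if the `%` is appended to the bound parameter:

```go
package db

import "database/sql"

// podsWithIDPrefix returns the pod IDs that start with the given prefix.
// Without the trailing "%", LIKE would only match the exact string.
func podsWithIDPrefix(db *sql.DB, prefix string) ([]string, error) {
	rows, err := db.Query("SELECT ID FROM PodConfig WHERE ID LIKE ?;", prefix+"%")
	if err != nil {
		return nil, err
	}
	defer rows.Close()
	var ids []string
	for rows.Next() {
		var id string
		if err := rows.Scan(&id); err != nil {
			return nil, err
		}
		ids = append(ids, id)
	}
	return ids, rows.Err()
}
```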
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
Allow replacing existing exit codes. A container may be started and
stopped multiple times, etc.
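One way to express this in SQLite, sketched with an assumed table layout (not necessarily the exact statement used), is an upsert that overwrites the previous exit code for the same container ID:

```go
package db

import "database/sql"

// recordExitCode stores or replaces the exit code for a container. The table
// name and columns are assumptions; ID is assumed to be a unique key.
func recordExitCode(db *sql.DB, ctrID string, exitCode int) error {
	_, err := db.Exec(
		`INSERT INTO ContainerExitCode (ID, ExitCode) VALUES (?, ?)
		 ON CONFLICT (ID) DO UPDATE SET ExitCode = excluded.ExitCode;`,
		ctrID, exitCode,
	)
	return err
}
```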
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>