The reserved annotation io.podman.annotations.volumes-from is made public so that users can define volumes-from, letting one container mount the volumes of other containers.
The annotation format is: io.podman.annotations.volumes-from/tgtCtr: "srcCtr1:mntOpts1;srcCtr2:mntOpts2;..."
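For illustration only, a kube YAML consuming the annotation might look like this (container names, image, and mount options are placeholders, and the pod spec is abbreviated):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: volumes-from-demo
  annotations:
    # tgtCtr mounts the volumes of srcCtr1 (read-only) and srcCtr2 (read-write).
    io.podman.annotations.volumes-from/tgtCtr: "srcCtr1:ro;srcCtr2:rw"
spec:
  containers:
    - name: tgtCtr
      image: quay.io/libpod/alpine:latest
```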
Fixes: containers#16819
Signed-off-by: Vikas Goel <vikas.goel@gmail.com>
If the target mount path already exists and the container uses a user
namespace, correctly map the target UID/GID to the host values before
attempting a chown.
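A minimal sketch of the idea, assuming the mappings come from containers/storage's idtools package (the function name and error handling are illustrative only):

```go
package chownfix

import (
	"os"

	"github.com/containers/storage/pkg/idtools"
)

// chownAsHost maps a container-side UID/GID pair to its host-side equivalent
// before chowning an already-existing target mount path.
func chownAsHost(mappings *idtools.IDMappings, path string, ctrUID, ctrGID int) error {
	// Translate the in-container IDs to the IDs they map to on the host.
	hostPair, err := mappings.ToHost(idtools.IDPair{UID: ctrUID, GID: ctrGID})
	if err != nil {
		return err
	}
	// Chown with the host-side values, not the raw container-side ones.
	return os.Chown(path, hostPair.UID, hostPair.GID)
}
```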
Closes: https://github.com/containers/podman/issues/21608
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Conmon writes the exit file and the oom file (if the container
was OOM killed) to the persist directory. This directory
is retained across reboots as well.
Update Podman to create a persist-dir/ctr-id directory per
container for the exit and oom files to be written to. The OOM
state of the container is set after reading the files
from the persist-dir/ctr-id directory.
The exit code, however, is still read from the exit file in
the exits directory.
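A rough sketch of the per-container layout described above; the file names and helpers below are assumptions for illustration:

```go
package persist

import (
	"os"
	"path/filepath"
)

// ctrPersistDir returns the per-container directory under the conmon persist
// directory, i.e. <persistDir>/<ctrID>, where the exit and oom files land.
func ctrPersistDir(persistDir, ctrID string) string {
	return filepath.Join(persistDir, ctrID)
}

// wasOOMKilled reports whether conmon left an oom file behind for the container.
func wasOOMKilled(persistDir, ctrID string) bool {
	_, err := os.Stat(filepath.Join(ctrPersistDir(persistDir, ctrID), "oom"))
	return err == nil
}
```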
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
Moving from Go module v4 to v5 prepares us for public releases.
The move was done using gomove [1], as with the v3 and v4 moves.
[1] https://github.com/KSubedi/gomove
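In practice the move boils down to rewriting import paths, for example:

```go
package main

// Before the move, consumers imported the v4 module path:
//   import "github.com/containers/podman/v4/libpod"
// After the move, the same packages live under v5:
import _ "github.com/containers/podman/v5/libpod"

func main() {}
```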
Signed-off-by: Matt Heon <mheon@redhat.com>
We were preserving ContainerStateExited, which is better than
nothing, but definitely not correct. A container that ran at any
point during the last boot should be moved to the Exited state to
preserve the fact that it was run at least once. This means we
have to convert Running, Stopped, Stopping, and Paused containers
to Exited as well.
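A minimal sketch of the refresh-time conversion, using the state names from libpod/define (the function itself is hypothetical):

```go
package refresh

import "github.com/containers/podman/v5/libpod/define"

// refreshState converts the state a container was left in before the reboot
// into the state it should report after the reboot.
func refreshState(old define.ContainerStatus) define.ContainerStatus {
	switch old {
	case define.ContainerStateRunning, define.ContainerStateStopped,
		define.ContainerStateStopping, define.ContainerStatePaused,
		define.ContainerStateExited:
		// Anything that ran during the previous boot is now Exited.
		return define.ContainerStateExited
	default:
		// Configured/Created containers never ran; leave them alone.
		return old
	}
}
```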
Signed-off-by: Matt Heon <mheon@redhat.com>
When the interface_name attribute in the containers.conf file is set to "device", interface names inside containers are set to the network_interface names of the respective networks.
The change applies to macvlan and ipvlan networks only. The interface_name attribute has no impact on any other type of network.
If the interface name is set in the user request, that takes precedence.
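For illustration, the setting in containers.conf would look roughly like this (assuming it lives in the [containers] table):

```toml
[containers]
# Name interfaces inside the container after the network_interface of the
# respective macvlan/ipvlan network instead of the default eth0, eth1, ...
interface_name = "device"
```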
Fixes: #21313
Signed-off-by: Vikas Goel <vikas.goel@gmail.com>
This mirrors how the Docker API handles things, allowing us to be
more compatible with Docker and more verbose on the Libpod API.
Stats are reported per network interface in the container, but
are still aggregated for `podman stats` and `podman pod stats`
display (so the CLI does not change, only the Libpod and Compat
APIs do).
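Conceptually the CLI keeps its single figure by summing the per-interface numbers, along these lines (types and field names are illustrative, not the actual API):

```go
package netstats

// InterfaceStats is an illustrative stand-in for the per-interface counters
// now exposed by the Libpod and Compat APIs.
type InterfaceStats struct {
	RxBytes uint64
	TxBytes uint64
}

// aggregate sums the per-interface numbers into the single pair of values
// that `podman stats` and `podman pod stats` keep displaying.
func aggregate(perIface map[string]InterfaceStats) (rx, tx uint64) {
	for _, s := range perIface {
		rx += s.RxBytes
		tx += s.TxBytes
	}
	return rx, tx
}
```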
Signed-off-by: Matt Heon <mheon@redhat.com>
During system shutdown, Podman should go down gracefully, meaning
that we have time to spawn cleanup processes which remove any
containers set to autoremove. Unfortunately, this isn't always
the case. If we get a SIGKILL because the system is going down
immediately, we can't recover from this, and the autoremove
containers are not removed.
However, we can pick up any leftover autoremove containers when
we refresh the DB state, which is the first thing Podman does
after a reboot. By detecting any autoremove containers that have
actually run (a container that was created but never run doesn't
need to be removed) at that point and removing them, we keep the
fresh boot clean, even if Podman was terminated abnormally.
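A sketch of the detection condition applied during the refresh (types and names here are hypothetical stand-ins, not Libpod's actual ones):

```go
package refresh

// refreshCtr is a hypothetical stand-in for a container record as seen while
// refreshing the DB state after a reboot.
type refreshCtr struct {
	AutoRemove bool
	HasRun     bool // the container was started at some point before the reboot
}

// leftoverAutoremove reports whether the container should be removed on the
// first invocation after a boot: it was set to autoremove and actually ran.
func leftoverAutoremove(c refreshCtr) bool {
	return c.AutoRemove && c.HasRun
}
```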
Fixes #21482
[NO NEW TESTS NEEDED] This requires a reboot to realistically
test.
Signed-off-by: Matt Heon <mheon@redhat.com>
Currently we deadlock in the slirp4netns setup code as we try to
configure a non-existing netns. The problem happens because, since
commit bbd6281ecc, we correctly tear down the netns in the userns
case, but that introduced this slirp4netns problem. The code does
a proper new network setup later, so we should only use the
shortcut when not in a userns.
Fixes #21477
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Podman v5 will not support cgroups-v1. This commit will print a warning
if it detects a cgroups-v1 system. The warning can be hidden by setting
the `PODMAN_CGROUPSV1_WARNING` environment variable.
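A minimal sketch of such a check, assuming c/common's cgroups helper and the environment variable named above (the real wiring and message in Podman may differ):

```go
package warn

import (
	"os"

	"github.com/containers/common/pkg/cgroups"
	"github.com/sirupsen/logrus"
)

// warnIfCgroupsV1 emits the deprecation warning on cgroups-v1 systems unless
// the user opted out via the environment variable.
func warnIfCgroupsV1() {
	if _, set := os.LookupEnv("PODMAN_CGROUPSV1_WARNING"); set {
		return
	}
	unified, err := cgroups.IsCgroup2UnifiedMode()
	if err == nil && !unified {
		logrus.Warn("this system uses cgroups-v1; Podman v5 will require cgroups-v2")
	}
}
```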
This warning is patched out for RHEL 9 builds as cgroups-v1 will still
be supported on RHEL 9 systems.
Resolves: https://issues.redhat.com/browse/RUN-1957
[NO NEW TESTS NEEDED]
Co-authored-by: Ed Santiago <santiago@redhat.com>
Co-authored-by: Sascha Grunert <sgrunert@redhat.com>
Co-authored-by: Giuseppe Scrivano <gscrivan@redhat.com>
Signed-off-by: Lokesh Mandvekar <lsm5@redhat.com>
The current field separator of the inspect annotation, a comma, conflicts with the mount options of --volumes-from, as the mount options themselves can be comma-separated.
Signed-off-by: Vikas Goel <vikas.goel@gmail.com>
The update to runc broke creation of devices for containers in
the pod cgroup. We don't support the device cgroup for pods at
present, so just disable it for now, resolving the issue.
Thanks to Giuseppe for finding this one!
[NO NEW TESTS NEEDED] This is a fix for broken tests
Signed-off-by: Matt Heon <mheon@redhat.com>
When inspecting a container that does not define any health check, the health field should return nil. This matches Docker behavior.
Signed-off-by: Ashley Cui <acui@redhat.com>
This is one of the breaking changes in Podman 5.0: removing the
ability to create new instances of the old Bolt database. This
does not remove support for the database entirely, as existing
Bolt databases will still be usable, but all new installs will
use SQLite after this point - if Bolt is forced by config, we'll
just error.
We don't have plans to outright remove the Bolt code. If that
were to happen, it'd be Podman 6.0 at least, and a significant
enough change it'd warrant a lot of discussion and planning. We
do intend to start winding down support of BoltDB, though, and
new features may be added only to SQLite from here on.
I have added an escape hatch via an undocumented environment
variable that allows us to continue testing BoltDB in CI (and, if
necessary, locally) but I don't want this to be used for any
purpose except continued testing of the old DB to ensure we don't
break it.
Signed-off-by: Matt Heon <mheon@redhat.com>
When preparing container inspection output, ensure we actually have masked paths to work with.
These are only available on Linux, which is no longer a given as we also support FreeBSD now.
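A sketch of the guard, using the OCI runtime-spec types (the surrounding inspect code is simplified):

```go
package inspect

import spec "github.com/opencontainers/runtime-spec/specs-go"

// maskedPaths returns the container's masked paths, or nil when the OCI spec
// has no Linux section (as on FreeBSD), so there is nothing to report.
func maskedPaths(s *spec.Spec) []string {
	if s == nil || s.Linux == nil {
		return nil
	}
	return s.Linux.MaskedPaths
}
```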
Fixes #21117
Signed-off-by: Ben Cooksley <bcooksley@kde.org>
Initial impetus was #20958 (ps --format .Label abc). This is
a complicated solution to a simple-seeming problem.
The problem: .Label is a cobra *function*, something I did not
know about nor handle.
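For reference, this is the kind of template function the completion logic now has to understand (the label name is just an example):

```
podman ps --format '{{.Label "com.example.maintainer"}}'
```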
Solution: recognize cobra functions. Switch to __complete,
not __completeNoDesc, so we can see the number of arguments
required. Invent new man-page format for documenting functions.
And, finally, start enforcing how functions (and cobra structs)
are documented.
This work uncovered a never-used completion function, .Recycle(),
in podman-events. Remove it.
[NO NEW TESTS NEEDED] - the .go change is an excision of dead code.
Signed-off-by: Ed Santiago <santiago@redhat.com>
Before this, for some special Podman commands (system reset,
system migrate, system renumber), Podman would create a first
Libpod runtime to do initialization and flag parsing, then stop
that runtime and create an entirely new runtime to perform the
actual task. This is an artifact of the pre-Podman 2.0 days, when
there was almost no indirection between Libpod and the CLI, and
we only used one runtime because we didn't need a second runtime
for flag parsing and basic init.
This system was clunky, and apparently, very buggy. When we
migrated to SQLite, some logic was introduced where we'd select a
different database location based on whether or not Libpod's
StaticDir was manually set - which differed between the first
invocation of Libpod and the second. So we'd get a different
database for some commands (like `system reset`) and they would
not be able to see existing containers, meaning they would not
function properly.
The immediate cause is obviously the SQLite behavior, but I'm
certain there's a lot more baggage hiding behind this multiple
Libpod runtime logic, so let's just refactor it out. It doesn't
make sense, and complicates the code. Instead, make Reset,
Renumber, and Migrate methods of the libpod Runtime. For Reset
and Renumber, we can shut the runtime down afterwards to achieve
the desired effect (no valid runtime after). Then pipe all of
them through the ContainerEngine so cmd/podman can access them.
As part of this, remove the SystemEngine part of pkg/domain. This
was supposed to encompass these "special" commands, but every
command in SystemEngine is actually a ContainerEngine command.
Reset, Renumber, Migrate - they all need a full Libpod and access
to all containers. There's no point to a separate engine if it
just wraps Libpod in the exact same way as ContainerEngine. This
consolidation saves us a bit more code and complexity.
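Schematically, the result looks something like this (names and signatures are illustrative stand-ins, not the exact ones):

```go
package domain

import "context"

// Runtime is an illustrative stand-in for the single libpod Runtime; the
// special operations become methods on it rather than needing a second,
// specially-configured runtime.
type Runtime struct{}

func (r *Runtime) Reset(ctx context.Context) error { return nil }
func (r *Runtime) Migrate(newRuntime string) error { return nil }
func (r *Runtime) RenumberLocks() error            { return nil }

// ContainerEngine pipes the operations through to cmd/podman, so no separate
// SystemEngine is needed.
type ContainerEngine interface {
	SystemReset(ctx context.Context) error
	SystemRenumber(ctx context.Context) error
	SystemMigrate(ctx context.Context, newRuntime string) error
}
```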
Signed-off-by: Matt Heon <mheon@redhat.com>
It looks like we had some logic for this from #10789 but it does
not appear to have ever worked; we can't pull external containers
out of the DB, so the ContainerRm call failed unconditionally.
Instead, just handle them in Libpod when we're removing images.
We're removing every image, so setting Force when removing images
should get rid of all external containers. It's a little later in
the process than the current (nonfunctional) solution is but I
can't think of a reason why that would be bad.
[NO NEW TESTS NEEDED] We do not currently test `system reset`.
We should probably reevaluate that at some point this year.
Fixes https://issues.redhat.com/browse/RHEL-21261
Signed-off-by: Matt Heon <mheon@redhat.com>
strings.Cut, added in Go 1.18, is a cleaner and more performant API than SplitN(_, _, 2).
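A typical rewrite looks like this:

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	kv := "KEY=some=value"

	// Before: SplitN allocates a slice that still has to be length-checked.
	parts := strings.SplitN(kv, "=", 2)
	fmt.Println(parts[0], parts[1])

	// After: Cut returns both halves plus an explicit "found" flag.
	key, value, found := strings.Cut(kv, "=")
	fmt.Println(key, value, found)
}
```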
Previously applied this refactoring to buildah:
https://github.com/containers/buildah/pull/5239
Signed-off-by: Philip Dubé <philip@peerdb.io>
The conditional expression duplicates the
code above; therefore, remove it.
Found by Linux Verification Center (linuxtesting.org) with SVACE.
[NO NEW TESTS NEEDED]
Signed-off-by: Egor Makrushin <emakrushin@astralinux.ru>
* Add BaseHostsFile to container configuration
* Do not copy the /etc/hosts file from the host when creating a container using the Docker API
Signed-off-by: Gavin Lam <gavin.oss@tutamail.com>
This field drags in a dependency on CNI and thereby blocks us from disabling CNI
support via a build tag
[NO NEW TESTS NEEDED]
Signed-off-by: Dan Čermák <dcermak@suse.com>
As stated in the [PR for crun](https://github.com/containers/crun/pull/1372),
HostID is what is being mapped here, so we should be checking `HostID` instead of `ContainerID`. `v.ContainerID` here is the ID of the owner of the files on the filesystem, which can be totally unrelated to the UID maps.
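Illustratively, the check walks the host side of each mapping entry (the types here only mirror the real ones):

```go
package idmap

// IDMap mirrors the shape of a user-namespace mapping entry.
type IDMap struct {
	ContainerID int
	HostID      int
	Size        int
}

// hostIDMapped reports whether a host ID is covered by one of the mappings.
// The check has to be against HostID: ContainerID merely identifies the owner
// of the files on disk, which can be unrelated to the UID maps.
func hostIDMapped(id int, maps []IDMap) bool {
	for _, m := range maps {
		if id >= m.HostID && id < m.HostID+m.Size {
			return true
		}
	}
	return false
}
```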
Signed-off-by: Karuboniru <yanqiyu01@gmail.com>
Use the new rootlessnetns logic from c/common, drop the podman code
here and make use of the new much simpler API.
ref: https://github.com/containers/common/pull/1761
[NO NEW TESTS NEEDED]
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Add a new option --preserve-fd that allows specifying a list of FDs to
pass down to the container.
It is similar to --preserve-fds, but it takes a list of FDs
instead of the maximum FD number to preserve.
--preserve-fd and --preserve-fds are mutually exclusive.
It requires crun since runc would complain if any fd below
--preserve-fds is not preserved.
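An illustrative invocation (the fd numbers are arbitrary and must already be open in the caller):

```
# Pass only fds 8 and 9 into the container:
podman run --preserve-fd=8 --preserve-fd=9 --rm alpine true 8</dev/null 9</dev/null

# For comparison, --preserve-fds=2 passes fds 3 and 4:
podman run --preserve-fds=2 --rm alpine true 3</dev/null 4</dev/null
```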
Closes: https://github.com/containers/podman/issues/20844
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
When Podman starts, it checks a number of critical runtime paths
against stored values in the database to make sure that existing
containers are not broken by a configuration change. We recently
made some changes to this logic to make our handling of some
options more sane (StaticDir in particular was set based on other
passed options in a way that was not particularly sane) which has
made the logic more sensitive to paths with symlinks. As a simple
fix, handle symlinks properly in our DB vs runtime comparisons.
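A simple sketch of the comparison once both sides are resolved with filepath.EvalSymlinks (the actual check in Podman also has to handle the legacy empty-string case mentioned below):

```go
package dbcheck

import "path/filepath"

// samePath reports whether a path stored in the DB and the current runtime
// path point at the same location once symlinks are resolved.
func samePath(dbPath, runtimePath string) (bool, error) {
	dbResolved, err := filepath.EvalSymlinks(dbPath)
	if err != nil {
		return false, err
	}
	rtResolved, err := filepath.EvalSymlinks(runtimePath)
	if err != nil {
		return false, err
	}
	return dbResolved == rtResolved, nil
}
```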
The BoltDB bits are uglier because very, very old Podman versions
sometimes did not stuff a proper value in the database and
instead used the empty string. SQLite is new enough that we don't
have to worry about such things.
Fixes #20872
Signed-off-by: Matt Heon <mheon@redhat.com>