Fixes: https://github.com/containers/podman/issues/25002
Also add the ability to inspect containers for
UseImageHosts and UseImageHostname.
Finally, fixed some bugs in the handling of --no-hosts for
Pods, which I discovered.
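For example, the new fields can be checked directly in the inspect
output (the container name is illustrative):

```shell
# look for the new fields in the inspect JSON
podman inspect mycontainer | grep -E 'UseImageHosts|UseImageHostname'
```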
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
New flags for `podman update` can change a container's HealthCheck configuration while it is running, without having to restart or recreate the container.
This can help determine why a given container suddenly started failing HealthCheck without interfering with the services it provides. For example, reconfigure HealthCheck to keep logs longer than the usual last X results, store logs to other destinations, etc.
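A sketch of the intended workflow, assuming the `podman update` flags
mirror the create-time `--health-*` names described further below:

```shell
# keep every result and write JSON logs into a directory, live,
# without recreating the container
podman update --health-max-log-count 0 \
              --health-log-destination /tmp/health mycontainer
```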
Fixes: https://issues.redhat.com/browse/RHEL-60561
Signed-off-by: Jan Rodák <hony.com@seznam.cz>
Previously, we didn't bother including exposed ports in the
container config when creating a container with --net=host. Per
Docker this isn't really correct; host-net containers are still
considered to have exposed ports, even though that specific
container can be guaranteed to never use them.
We could just fix this for host containers, but we might as well
make it generic. This patch unconditionally adds exposed ports to
the container config - it was previously conditional on a network
namespace being configured. The behavior of `podman inspect` with
exposed ports when using `--net=container:` has also been
corrected. Previously, we used exposed ports from the container
sharing its network namespace, which was not correct. Now, we use
regular port bindings from the namespace container, but exposed
ports from our own container.
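For example (image and port are illustrative; the field name follows
the Docker-style inspect output):

```shell
# exposed ports are now recorded even for a host-network container
podman create --name web --net=host --expose 80 nginx
podman inspect web | grep -A 2 ExposedPorts
```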
Fixes https://issues.redhat.com/browse/RHEL-60382
Signed-off-by: Matt Heon <mheon@redhat.com>
These flags can affect the output of the HealthCheck log. Currently, when a container is configured with HealthCheck, the output from the HealthCheck command is only logged to the container status file, which is accessible via `podman inspect`.
It is also limited to the last five executions and the first 500 characters per execution.
This makes debugging past problems very difficult, since the only information available about the failure of the HealthCheck command is the generic `healthcheck service failed` record.
- The `--health-log-destination` flag sets the destination of the HealthCheck log.
- `none`: (default behavior) `HealthCheckResults` are stored in the overlay container directory. (For example: `$runroot/healthcheck.log`)
- `directory`: creates a log file named `<container-ID>-healthcheck.log` with JSON `HealthCheckResults` in the specified directory.
- `events_logger`: The log is written with the logging mechanism set by `events_logger`. It is also saved to a default directory, to preserve performance on systems with a large number of logs.
- The `--health-max-log-count` flag sets the maximum number of attempts in the HealthCheck log file.
- A value of `0` indicates an infinite number of attempts in the log file.
- The default value is `5` attempts in the log file.
- The `--health-max-log-size` flag sets the maximum length of the log stored.
- A value of `0` indicates an infinite log length.
- The default value is `500` log characters.
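A combined usage sketch (image, paths, and values are illustrative):

```shell
podman run -d --name web \
    --health-cmd "curl -f http://localhost || exit 1" \
    --health-log-destination /var/log/podman-health \
    --health-max-log-count 10 \
    --health-max-log-size 1024 \
    nginx
```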
Add --health-max-log-count flag
Signed-off-by: Jan Rodák <hony.com@seznam.cz>
Add --health-max-log-size flag
Signed-off-by: Jan Rodák <hony.com@seznam.cz>
Add --health-log-destination flag
Signed-off-by: Jan Rodák <hony.com@seznam.cz>
add a new flag that allows overriding the pull options configured in
the storage.conf file.
e.g.: --pull-option="enable_partial_images=false" can be specified to
Podman to disable partial pulls even if they are enabled in storage.conf.
Leave it as a hidden configuration flag for now since the API itself
is marked as experimental in c/storage.
Currently c/storage doesn't honor the overrides; this is being fixed
with https://github.com/containers/storage/pull/1966
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Image volumes (the `--mount type=image,...` kind, not the
`podman volume create --driver image ...` kind - it's strange
that we have two) are needed for our automount scheme, but the
request is that we mount only specific subpaths from the image
into the container. To do that, we need image volume subpath
support. Not that difficult code-wise, mostly just plumbing.
Also, add support to the CLI; not strictly necessary, but it
doesn't hurt anything and will make testing easier.
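A CLI sketch, assuming the new `--mount` option is spelled `subpath`:

```shell
# mount only /etc from the image, not the whole image rootfs
podman run --rm \
    --mount type=image,source=fedora,destination=/mnt,subpath=/etc \
    alpine ls /mnt
```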
Signed-off-by: Matt Heon <mheon@redhat.com>
This is something Docker does, and we did not, until now. Most
difficult/annoying part was the REST API, where I did not really
want to modify the struct being sent, so I made the new restart
policy parameters query parameters instead.
Testing was also a bit annoying, because testing restart policy
always is.
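A sketch of the CLI surface, assuming it is exposed via
`podman update --restart`:

```shell
# change the restart policy of an existing container in place
podman update --restart on-failure:3 mycontainer
```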
Signed-off-by: Matt Heon <mheon@redhat.com>
Moving from Go module v4 to v5 prepares us for public releases.
Move done using gomove [1] as with the v3 and v4 moves.
[1] https://github.com/KSubedi/gomove
Signed-off-by: Matt Heon <mheon@redhat.com>
Before this, for some special Podman commands (system reset,
system migrate, system renumber), Podman would create a first
Libpod runtime to do initialization and flag parsing, then stop
that runtime and create an entirely new runtime to perform the
actual task. This is an artifact of the pre-Podman 2.0 days, when
there was almost no indirection between Libpod and the CLI, and
we only used one runtime because we didn't need a second runtime
for flag parsing and basic init.
This system was clunky, and apparently, very buggy. When we
migrated to SQLite, some logic was introduced where we'd select a
different database location based on whether or not Libpod's
StaticDir was manually set - which differed between the first
invocation of Libpod and the second. So we'd get a different
database for some commands (like `system reset`) and they would
not be able to see existing containers, meaning they would not
function properly.
The immediate cause is obviously the SQLite behavior, but I'm
certain there's a lot more baggage hiding behind this multiple
Libpod runtime logic, so let's just refactor it out. It doesn't
make sense, and complicates the code. Instead, make Reset,
Renumber, and Migrate methods of the libpod Runtime. For Reset
and Renumber, we can shut the runtime down afterwards to achieve
the desired effect (no valid runtime after). Then pipe all of
them through the ContainerEngine so cmd/podman can access them.
As part of this, remove the SystemEngine part of pkg/domain. This
was supposed to encompass these "special" commands, but every
command in SystemEngine is actually a ContainerEngine command.
Reset, Renumber, Migrate - they all need a full Libpod and access
to all containers. There's no point to a separate engine if it
just wraps Libpod in the exact same way as ContainerEngine. This
consolidation saves us a bit more code and complexity.
Signed-off-by: Matt Heon <mheon@redhat.com>
* Add BaseHostsFile to container configuration
* Do not copy /etc/hosts file from host when creating a container using Docker API
Signed-off-by: Gavin Lam <gavin.oss@tutamail.com>
add a new option --preserve-fd that allows specifying a list of FDs
to pass down to the container.
It is similar to --preserve-fds, but it allows specifying a list of
FDs instead of the maximum FD number to preserve.
--preserve-fd and --preserve-fds are mutually exclusive.
It requires crun since runc would complain if any fd below
--preserve-fds is not preserved.
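A usage sketch, assuming the flag takes a comma-separated FD list:

```shell
# pass only FDs 4 and 9 into the container; --preserve-fds=9 would
# instead pass every FD from 3 through 9
exec 4</etc/hostname 9</etc/os-release
podman run --rm --preserve-fd=4,9 alpine cat /proc/self/fd/4
```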
Closes: https://github.com/containers/podman/issues/20844
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
these functions are not used anymore in the codebase, so drop them.
[NO NEW TESTS NEEDED] no new functionalities are added
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
All `[]string`s in containers.conf have now been migrated to attributed
string slices which require some adjustments in Buildah and Podman.
[NO NEW TESTS NEEDED]
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
The processing and setting of the static and volume directories was
scattered across the code base (including c/common) leading to subtle
errors that surfaced in #19938.
There were multiple issues that I try to summarize below:
- c/common loaded the graphroot from c/storage to set the defaults for
static and volume dir. That ignored Podman's --root flag and
surfaced in #19938 and other bugs. c/common does not set the
defaults anymore which gives Podman the ability to detect when the
user/admin configured a custom directory (not empty value).
- When parsing the CLI, Podman (ab)uses containers.conf structures to
set the defaults but also to override them in case the user specified
a flag. The --root flag overrode the static dir which is wrong and
broke a couple of use cases. Now there is a dedicated field for it
in the "PodmanConfig" which also includes a containers.conf struct.
- The defaults for static and volume dir are now set correctly
and adhere to --root.
- The CONTAINERS_CONF_OVERRIDE env variable has not been passed to the
cleanup process. I believe that _all_ env variables should be passed
to conmon to avoid such subtle bugs.
Overall I find that the code and logic is scattered and hard to
understand and follow. I refrained from larger refactorings as I really
just want to get #19938 fixed and then go back to other priorities.
https://github.com/containers/common/pull/1659 broke three pkg/machine
tests. Those have been commented out until they are fixed.
Fixes: #19938
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
The compat API for containers/stop should accept a -1 value, as shown
in the example below.
Add support for `podman stop --time -1`
Add support for `podman restart --time -1`
Add support for `podman rm --time -1`
Add support for `podman pod stop --time -1`
Add support for `podman pod rm --time -1`
Add support for `podman volume rm --time -1`
Add support for `podman network rm --time -1`
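For example:

```shell
# wait indefinitely for the container to stop instead of killing it
# after the default timeout
podman stop --time -1 mycontainer
```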
Fixes: https://github.com/containers/podman/issues/17542
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
The intention of --read-only-tmpfs=false when in --read-only mode was to
not allow any processes inside of the container to write content
anywhere, unless the caller also specified a volume or a tmpfs. Having
/dev and /dev/shm writable breaks this assumption.
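For example (the exact error text may vary):

```shell
# with no writable tmpfs mounted, writes inside the container now fail
podman run --rm --read-only --read-only-tmpfs=false alpine \
    touch /dev/shm/test   # expected to fail: read-only file system
```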
Fixes: https://github.com/containers/podman/issues/12937
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Add support for `--imagestore` in podman, which allows users to split the filesystem of containers from the image store. If configured, images will be pulled into the image store instead of the graphRoot, while the other parts remain in the originally configured graphRoot.
This is an implementation of
https://github.com/containers/storage/pull/1549 in podman.
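A usage sketch (the store path is illustrative):

```shell
# pull images into a separate image store; containers keep using
# the originally configured graphRoot
podman --imagestore /var/lib/containers/imagestore pull alpine
```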
Signed-off-by: Aditya R <arajan@redhat.com>
The current way of bind mounting the host timezone file has problems.
Because /etc/localtime in the image may exist as a symlink under
/usr/share/zoneinfo, the bind mount will overwrite the target file. That
confuses timezone parsers, especially Java, where this approach does not
work at all. So we end up with a link which does not reflect the actual
truth.
The better way is to just change the symlink in the image like it is
done on the host. However, because not all images ship tzdata, we
cannot rely on that either. So now we do both: when tzdata is installed,
use the symlink, and if not, keep the current way of copying the host
timezone file to /etc/localtime in the container.
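For example, with an image that ships tzdata:

```shell
# /etc/localtime is now a symlink instead of a bind-mounted file
podman run --rm --tz=Europe/Vienna fedora ls -l /etc/localtime
```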
Also note that we need to rebuild the systemd image to include tzdata in
order to test this as our images do not contain the tzdata by default.
Fixes https://bugzilla.redhat.com/show_bug.cgi?id=2149876
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Implement means for reflecting failed containers (i.e., those having
exited non-zero) to better integrate `kube play` with systemd. The
idea is to have the main PID of `kube play` exit non-zero in a
configurable way such that systemd's restart policies can kick in.
When using the default sdnotify-notify policy, the service container
acts as the main PID to further reduce the resource footprint. In that
case, before stopping the service container, Podman will lookup the exit
codes of all non-infra containers. The service will then behave
according to the following three exit-code policies:
- `none`: exit 0 and ignore containers (default)
- `any`: exit non-zero if _any_ container did
- `all`: exit non-zero if _all_ containers did
The above values can be passed via a hidden `kube play
--service-exit-code-propagation` flag which can be used by tests and
later on by Quadlet.
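A usage sketch for tests (the YAML path is illustrative):

```shell
# fail the service if any container in the play exited non-zero
podman kube play --service-exit-code-propagation=any ./play.yaml
```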
In case Podman acts as the main PID (i.e., when at least one container
runs with an sdnotify-policy other than "ignore"), Podman will continue
to wait for the service container to exit and reflect its exit code.
Note that this commit also fixes a long-standing annoyance of the
service container exiting non-zero. The underlying issue was that the
service container had been stopped with SIGKILL instead of SIGTERM and
hence exited non-zero. Fixing that was a prerequisite for the exit-code
propagation to work but also improves the integration of `kube play`
with systemd and hence Quadlet with systemd.
Jira: issues.redhat.com/browse/RUN-1776
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
Add --restart flag to pod create to allow users to set the
restart policy for the pod, which applies to all the containers
in the pod. This reuses the restart policy already there for
containers and has the same restart policy options.
Add "never" to the restart policy options to match k8s syntax.
It is a synonym for "no" and does the exact same thing where the
containers are not restarted once exited.
Only the containers that have exited will be restarted based on the
restart policy; running containers will not be restarted when an exited
container is restarted in the same pod (same as is done in k8s).
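For example:

```shell
# every container in the pod inherits the pod's restart policy
podman pod create --name mypod --restart on-failure
podman run -d --pod mypod alpine false   # restarted on non-zero exit
```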
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
Currently Podman prevents SELinux container separation
when running within a container. This PR adds a new
--security-opt label=nested
When setting this option, Podman unmasks and mounts
/sys/fs/selinux into the container, making /sys/fs/selinux
fully exposed. Secondly, Podman sets the attribute
run.oci.mount_context_type=rootcontext
This attribute tells crun to mount volumes with rootcontext=MOUNTLABEL
as opposed to context=MOUNTLABEL.
With these two settings, Podman inside the container is allowed to set
its own SELinux labels on tmpfs file systems mounted into its parent
container, while still being confined by SELinux. Thus you can have
nested SELinux labeling inside of a container.
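A usage sketch (the image is illustrative):

```shell
# the inner Podman may now set its own SELinux labels while the outer
# container stays confined by SELinux
podman run --security-opt label=nested quay.io/podman/stable podman info
```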
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Add a hidden flag to set the database backend and plumb it into
podman-info. Further add a system test to make sure the flag and the
info output are working properly.
Note that the test may need to be changed once we settle on how
to test the sqlite backend in CI.
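A sketch, assuming the hidden flag is named `--db-backend` and that the
info output exposes it as `Host.DatabaseBackend`:

```shell
podman --db-backend sqlite info --format '{{.Host.DatabaseBackend}}'
```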
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
* add tests
* add documentation for --shm-size-systemd
* add support for both pod and standalone run
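A usage sketch (size and image are illustrative):

```shell
# cap the systemd-specific tmpfs mounts (/run, /run/lock, ...) at 128m
podman run --rm --systemd=always --shm-size-systemd 128m fedora /sbin/init
```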
Signed-off-by: danishprakash <danish.prakash@suse.com>
This is a cleaner solution and guarantees the variables
will be initialized before they are used.
[NO NEW TESTS NEEDED]
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Every podman command is paying the price for this compile even when it
doesn't use the regex; this will speed up podman startup a little.
[NO NEW TESTS NEEDED] Existing tests should catch issues.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
We should have done this much earlier; most of the time "CNI networks"
just means networks, so I changed this and also fixed some function
names. This should make it clearer what actually refers to CNI and
what is just general network backend stuff.
[NO NEW TESTS NEEDED]
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Also fix a number of duplicate words. However, disable the new `dupword`
linter as it displays too many false positives.
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
subpath allows only a subdirectory of a volume's data to be mounted
in the container.
Add support for subpath with the named volume type, with others to
follow.
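A CLI sketch, assuming the `--mount` option is spelled `subpath`:

```shell
# mount only the data/ subdirectory of the volume into the container
podman run --rm \
    --mount type=volume,source=myvol,destination=/srv,subpath=data \
    alpine ls /srv
```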
Resolves: #12929
Signed-off-by: Charlie Doern <cbddoern@gmail.com>
This handles the transient store options from the container/storage
configuration in the runtime/engine.
Changes are:
* Print transient store status in `podman info`
* Print transient store status in runtime debug output
* Add --transient-store argument to override config option
* Propagate config state to conmon cleanup args so the callback podman
gets the same config.
Note: This doesn't really change any behaviour yet (other than the changes
in containers/storage).
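A sketch of the override and the new info output (the info field name
is an assumption):

```shell
podman --transient-store=true run -d alpine sleep 100
podman --transient-store=true info --format '{{.Store.TransientStore}}'
```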
Signed-off-by: Alexander Larsson <alexl@redhat.com>
Startup healthchecks are similar to K8S startup probes, in that they
are a separate check that runs before the regular healthcheck. If the
startup healthcheck fails repeatedly, the associated container is
restarted.
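A sketch, assuming the create-time flags follow the
`--health-startup-*` naming:

```shell
# give the service several tries to come up before the regular
# healthcheck takes over
podman run -d --name web \
    --health-startup-cmd "curl -f http://localhost || exit 1" \
    --health-startup-retries 5 \
    --health-cmd "curl -f http://localhost || exit 1" \
    nginx
```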
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Whenever an image has `arch` and `os` configured for `wasm/wasi`,
switch to `crun-wasm` as the runtime extension if applicable. This
only works if `podman` is using `crun` as the default runtime.
[NO NEW TESTS NEEDED]
[NO TESTS NEEDED]
A test for this can be added when `crun-wasm` is part of CI.
Signed-off-by: Aditya R <arajan@redhat.com>
This ignores the create request if the named volume already exists.
It is very useful when scripting stuff.
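A usage sketch, assuming the flag is spelled `--ignore`:

```shell
# exits 0 and leaves the volume untouched if myvol already exists
podman volume create --ignore myvol
```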
Signed-off-by: Alexander Larsson <alexl@redhat.com>