The current way of bind mounting the host timezone file has problems.
Because /etc/localtime in the image may exist and be a symlink into
/usr/share/zoneinfo, the bind mount will overwrite that symlink's
target file. That confuses timezone parsers, especially Java, where
this approach does not work at all. So we end up with a link that does
not reflect the actual truth.
The better way is to just change the symlink in the image, like it is
done on the host. However, because not all images ship tzdata, we
cannot rely on that either. So now we do both: when tzdata is
installed, use the symlink; if not, keep the current way of copying
the host timezone file into the container as /etc/localtime.
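A minimal sketch of that decision (illustrative only; the function name
and paths here are assumptions, not the actual Podman code):
```
// tzsketch illustrates the approach: if the image ships tzdata, point
// /etc/localtime at the zoneinfo entry; otherwise fall back to copying
// the host timezone file into the container.
package tzsketch

import (
	"io"
	"os"
	"path/filepath"
)

func setTimezone(containerRoot, zone, hostLocaltime string) error {
	zoneinfo := filepath.Join(containerRoot, "usr/share/zoneinfo", zone)
	localtime := filepath.Join(containerRoot, "etc/localtime")

	if _, err := os.Stat(zoneinfo); err == nil {
		// tzdata is present in the image: replace the symlink.
		os.Remove(localtime)
		return os.Symlink(filepath.Join("../usr/share/zoneinfo", zone), localtime)
	}

	// No tzdata: copy the host's timezone file into the container.
	src, err := os.Open(hostLocaltime)
	if err != nil {
		return err
	}
	defer src.Close()
	dst, err := os.Create(localtime)
	if err != nil {
		return err
	}
	defer dst.Close()
	_, err = io.Copy(dst, src)
	return err
}
```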
Also note that we need to rebuild the systemd image to include tzdata
in order to test this, as our images do not contain tzdata by default.
Fixes https://bugzilla.redhat.com/show_bug.cgi?id=2149876
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Implement means for reflecting failed containers (i.e., those having
exited non-zero) to better integrate `kube play` with systemd. The
idea is to have the main PID of `kube play` exit non-zero in a
configurable way such that systemd's restart policies can kick in.
When using the default sdnotify-notify policy, the service container
acts as the main PID to further reduce the resource footprint. In that
case, before stopping the service container, Podman will look up the exit
codes of all non-infra containers. The service will then behave
according to the following three exit-code policies:
- `none`: exit 0 and ignore containers (default)
- `any`: exit non-zero if _any_ container did
- `all`: exit non-zero if _all_ containers did
The above values can be passed via a hidden `kube play
--service-exit-code-propagation` flag which can be used by tests and
later on by Quadlet.
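For illustration, a rough sketch of how such a policy could be
evaluated (names invented here, not the actual Podman implementation):
```
// serviceExitCode derives the service exit code from the exit codes of
// all non-infra containers, according to the chosen policy.
package exitcodesketch

func serviceExitCode(policy string, exitCodes []int) int {
	failed := 0
	for _, code := range exitCodes {
		if code != 0 {
			failed++
		}
	}
	switch policy {
	case "any":
		if failed > 0 {
			return 1 // at least one container exited non-zero
		}
	case "all":
		if len(exitCodes) > 0 && failed == len(exitCodes) {
			return 1 // every container exited non-zero
		}
	}
	return 0 // "none" (default): ignore container exit codes
}
```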
In case Podman acts as the main PID (i.e., when at least one container
runs with an sdnotify-policy other than "ignore"), Podman will continue
to wait for the service container to exit and reflect its exit code.
Note that this commit also fixes a long-standing annoyance of the
service container exiting non-zero. The underlying issue was that the
service container had been stopped with SIGKILL instead of SIGTERM and
hence exited non-zero. Fixing that was a prerequisite for the exit-code
propagation to work but also improves the integration of `kube play`
with systemd and hence Quadlet with systemd.
Jira: issues.redhat.com/browse/RUN-1776
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
Add --restart flag to pod create to allow users to set the
restart policy for the pod, which applies to all the containers
in the pod. This reuses the restart policy already there for
containers and has the same restart policy options.
Add "never" to the restart policy options to match k8s syntax.
It is a synonym for "no" and does the exact same thing where the
containers are not restarted once exited.
Only the containers that have exited will be restarted based on the
restart policy, running containers will not be restarted when an exited
container is restarted in the same pod (same as is done in k8s).
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
Currently Podman prevents SELinux container separation when running
within a container. This PR adds a new
--security-opt label=nested
When this option is set, Podman unmasks and mounts /sys/fs/selinux
into the containers, making /sys/fs/selinux fully exposed. Secondly,
Podman sets the attribute
run.oci.mount_context_type=rootcontext
This attribute tells crun to mount volumes with rootcontext=MOUNTLABEL
as opposed to context=MOUNTLABEL.
With these two settings Podman inside the container is allowed to set
its own SELinux labels on tmpfs file systems mounted into its parent
container, while still being confined by SELinux. Thus you can have
nested SELinux labeling inside of a container.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Add a hidden flag to set the database backend and plumb it into
podman-info. Further add a system test to make sure the flag and the
info output are working properly.
Note that the test may need to be changed once we have settled on how
to test the sqlite backend in CI.
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
* add tests
* add documentation for --shm-size-systemd
* add support for both pod and standalone run
Signed-off-by: danishprakash <danish.prakash@suse.com>
This is a cleaner solution and guarantees the variables
will be initialized before they are used.
[NO NEW TESTS NEEDED]
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Every Podman command pays the price for this compile, even when it
does not use the regex. Compiling lazily will speed up Podman's start
a little.
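A minimal sketch of the lazy-compilation pattern (illustrative; the
variable name and expression are placeholders, not the exact code
changed here):
```
// Compile the regex on first use instead of at program start, so
// commands that never need it do not pay the compilation cost.
package lazyregex

import (
	"regexp"
	"sync"
)

var (
	nameRegexOnce sync.Once
	nameRegex     *regexp.Regexp
)

// NameRegex returns the compiled expression, compiling it at most once.
func NameRegex() *regexp.Regexp {
	nameRegexOnce.Do(func() {
		nameRegex = regexp.MustCompile(`^[a-zA-Z0-9][a-zA-Z0-9_.-]*$`)
	})
	return nameRegex
}
```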
[NO NEW TESTS NEEDED] Existing tests should catch issues.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
We should have done this much earlier; most of the time, "CNI
networks" just means networks, so I changed this and also fixed some
function names. This should make it clearer what actually refers to
CNI and what is just general network backend code.
[NO NEW TESTS NEEDED]
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Also fix a number of duplicated words. However, disable the new
`dupword` linter as it reports too many false positives.
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
subpath allows only a subdirectory of a volume's data to be mounted in
the container.
Add support for subpath with the named volume type, with others to
follow.
resolves #12929
Signed-off-by: Charlie Doern <cbddoern@gmail.com>
This handles the transient store options from the containers/storage
configuration in the runtime/engine.
Changes are:
* Print transient store status in `podman info`
* Print transient store status in runtime debug output
* Add --transient-store argument to override config option
* Propagate config state to conmon cleanup args so the callback podman
gets the same config.
Note: This doesn't really change any behaviour yet (other than the changes
in containers/storage).
Signed-off-by: Alexander Larsson <alexl@redhat.com>
Startup healthchecks are similar to K8s startup probes in that they
are a separate check that runs before the regular healthcheck. If the
startup healthcheck fails repeatedly, the
associated container is restarted.
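A rough sketch of the control flow (illustrative only; names and the
retry handling are assumptions):
```
// evaluateStartup sketches the startup phase: once the startup check
// passes, the regular healthcheck takes over; if it keeps failing past
// the retry limit, the container is restarted.
package healthsketch

type outcome int

const (
	switchToRegularCheck outcome = iota
	restartContainer
	keepChecking
)

func evaluateStartup(passed bool, failures, maxRetries int) outcome {
	if passed {
		return switchToRegularCheck
	}
	if failures+1 >= maxRetries {
		return restartContainer
	}
	return keepChecking
}
```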
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Whenever an image has `arch` and `os` configured for `wasm/wasi`,
switch to the `crun-wasm` extension if applicable. This only works if
`podman` is using `crun` as the default runtime.
[NO NEW TESTS NEEDED]
[NO TESTS NEEDED]
A test for this can be added when `crun-wasm` is part of CI.
Signed-off-by: Aditya R <arajan@redhat.com>
This ignores the create request if the named volume already exists.
It is very useful when scripting stuff.
Signed-off-by: Alexander Larsson <alexl@redhat.com>
For systems that have extreme robustness requirements (edge devices,
particularly those in difficult-to-access environments), it is important
that applications continue running in all circumstances. When the
application fails, Podman must restart it automatically to provide this
robustness. Otherwise, these devices may require customer IT to
physically gain access to restart, which can be prohibitively difficult.
Add a new `--on-failure` flag that supports four actions:
- **none**: Take no action.
- **kill**: Kill the container.
- **restart**: Restart the container. Do not combine the `restart`
action with the `--restart` flag. When running inside of
a systemd unit, consider using the `kill` or `stop`
action instead to make use of systemd's restart policy.
- **stop**: Stop the container.
To remain backwards compatible, **none** is the default action.
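A minimal sketch of how the action could be dispatched (illustrative;
the container methods are placeholders, not Podman's API):
```
package onfailuresketch

import "fmt"

type container interface {
	Kill() error
	Restart() error
	Stop() error
}

// handleFailure runs the configured action after a failed healthcheck.
func handleFailure(action string, ctr container) error {
	switch action {
	case "none":
		return nil // take no action (default)
	case "kill":
		return ctr.Kill()
	case "restart":
		return ctr.Restart()
	case "stop":
		return ctr.Stop()
	default:
		return fmt.Errorf("unsupported on-failure action %q", action)
	}
}
```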
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
When a kube yaml has a volume set as empty dir, podman
will create an anonymous volume with the empty dir name and
attach it to the containers running in the pod. When the pod
is removed, the empty dir volume created is also removed.
Add tests and docs for this as well.
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
Also, do a general cleanup of all the timeout code. Changes
include:
- Convert from int to *uint where possible. Timeouts cannot be
negative, hence the uint change; and a timeout of 0 is valid,
so we need a new way to detect that the user set a timeout
(hence, pointer).
- Change name in the database to avoid conflicts between new data
type and old one. This will cause timeouts set with 4.2.0 to be
lost, but considering nobody is using the feature at present
(and the lack of validation means we could have invalid,
negative timeouts in the DB) this feels safe.
- Ensure volume plugin timeouts can only be used with volumes
created using a plugin. Timeouts on the local driver are
nonsensical.
- Remove the existing test, as it did not use a volume plugin.
Write a new test that does.
Plumbing the containers.conf timeout in is a single line in
volume_api.go; the remainder are the cleanups described above.
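For illustration, the nil-vs-zero distinction the pointer type gives
us (a generic sketch, not the actual Podman types):
```
// effectiveTimeout shows why *uint can distinguish "timeout not set"
// (nil) from "timeout explicitly set to 0".
package timeoutsketch

func effectiveTimeout(userTimeout *uint, defaultTimeout uint) uint {
	if userTimeout == nil {
		return defaultTimeout // user did not set a timeout
	}
	return *userTimeout // 0 is a valid, explicit choice
}
```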
Signed-off-by: Matthew Heon <mheon@redhat.com>
Integrate sd-notify policies into `kube play`. The policies can be
configured for all containers via the `io.containers.sdnotify`
annotation or for individual containers via the
`io.containers.sdnotify/$name` annotation.
The `kube play` process will wait for all containers to be ready by
waiting for the individual `READY=1` messages which are received via
the `pkg/systemd/notifyproxy` proxy mechanism.
Also update the simple "container" sd-notify test as it did not fully
test the expected behavior which became obvious when adding the new
tests.
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
The notify socket can now either be specified via an environment
variable or programmatically (where the env is ignored). The
notify mode and the socket are now also displayed in `container inspect`
which comes in handy for debugging and allows for proper testing.
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
added the following flags and handling for podman pod create
--memory-swap
--cpuset-mems
--device-read-bps
--device-write-bps
--blkio-weight
--blkio-weight-device
--cpu-shares
Given the new backend for systemd in c/common, all of these can now be
exposed to pod create. Most of the heavy lifting (nearly all) is done
within c/common; however, some rewiring needed to be done here as well.
Signed-off-by: Charlie Doern <cdoern@redhat.com>
We now use the golang error wrapping format specifier `%w` instead of
the deprecated github.com/pkg/errors package.
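For example, a wrapped error now looks like this (illustrative
snippet; the function and message text are made up):
```
package wrapsketch

import (
	"fmt"
	"os"
)

// openConfig wraps the underlying error with %w so callers can still
// unwrap it via errors.Is / errors.As, replacing the deprecated
// github.com/pkg/errors Wrap() call.
func openConfig(path string) (*os.File, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, fmt.Errorf("opening config %q: %w", path, err)
	}
	return f, nil
}
```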
[NO NEW TESTS NEEDED]
Signed-off-by: Sascha Grunert <sgrunert@redhat.com>
* Replace "setup", "lookup", "cleanup", "backup" with
"set up", "look up", "clean up", "back up"
when used as verbs. Replace also variations of those.
* Improve language in a few places.
Signed-off-by: Erik Sjölund <erik.sjolund@gmail.com>
Add an option to configure the driver timeout when creating a volume.
The default is 5 seconds, but this value is too small for some custom
drivers.
Signed-off-by: cdoern <cdoern@redhat.com>
If a privileged container is running, stops, and the devices on the host
change, such as a USB device is unplugged, then a container would no
longer start. Previously, the devices from the host were only being
added to the container once: when the container was created. Now, this
happens every time the container starts.
I did this by adding a boolean to the container config that indicates
whether to mount all of the devices or not, which can be set via an option.
During spec generation, if the `MountAllDevices` option is set in the
container config, all host devices are added to the container.
Additionally, a couple of functions from `pkg/specgen/generate/config_linux.go`
were moved into `pkg/util/utils_linux.go` as they were needed in
multiple packages.
Closes #13899
Signed-off-by: Jake Correnti <jcorrenti13@gmail.com>
Firstly, reset is now managed by the runtime itself as a part of
initialization. This ensures that it can be used even with
runtimes that would otherwise fail to be created - most notably,
when the user has changed a core path
(runroot/root/tmpdir/staticdir).
Secondly, we now attempt a best-effort removal even if the store
completely fails to be configured.
Third, we now hold the alive lock for the entire reset operation.
This ensures that no other Podman process can start while we are
running a system reset, and removes any possibility of a race
where a user tries to create containers or pull images while we
are trying to perform a reset.
[NO NEW TESTS NEEDED] we do not test reset last I checked.
Fixes #9075
Signed-off-by: Matthew Heon <mheon@redhat.com>
Add the notion of a "service container" to play kube. A service
container is started before the pods in play kube and is (reverse)
linked to them. The service container is stopped/removed *after*
all pods it is associated with are stopped/removed.
In other words, a service container tracks the entire life cycle
of a service started via `podman play kube`. This is required to
enable `play kube` in a systemd unit file.
The service container is only used when the `--service-container`
flag is set on the CLI. This flag has been marked as hidden as it
is not meant to be used outside the context of `play kube`. It is
further not supported on the remote client.
The wiring with systemd will be done in a later commit.
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
Add the notion of an "exit policy" to a pod. This policy controls the
behaviour when the last container of the pod exits. Initially, there are
two policies:
- "continue" : the pod continues running. This is the default policy
when creating a pod.
- "stop" : stop the pod when the last container exits. This is the
default behaviour for `play kube`.
In order to implement the deferred stop of a pod, add a worker queue to
the libpod runtime. The queue will pick up work items and in this case
helps resolve deadlocks that would otherwise occur if we attempted to
stop a pod during container cleanup.
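A rough sketch of the idea (types and fields invented for
illustration, not libpod's actual implementation):
```
package exitpolicysketch

type pod struct {
	exitPolicy string      // "continue" or "stop"
	running    int         // number of running containers
	workQueue  chan func() // simplified stand-in for the worker queue
}

func (p *pod) stop() {
	// Stop the pod's remaining resources (infra container, etc.).
}

// onContainerExit is called during container cleanup. With the "stop"
// policy, stopping the pod is handed to the worker queue instead of
// being done inline, which avoids the deadlocks mentioned above.
func (p *pod) onContainerExit() {
	p.running--
	if p.exitPolicy == "stop" && p.running == 0 {
		p.workQueue <- func() { p.stop() }
	}
}
```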
Note that the default restart policy of `play kube` is "Always". Hence,
in order to really solve #13464, the YAML files must set a custom
restart policy; the tests use "OnFailure".
Fixes: #13464
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
add a new option to completely disable xfs quota usage for a volume.
xfs quota set on a volume, even just for tracking disk usage, can
cause weird errors if the volume is later re-used by a container with
a different quota projid. More specifically, link(2) and rename(2)
might fail with EXDEV if the source file has a projid that is
different from the parent directory.
To prevent this kind of issue, the volume should be created
beforehand with `podman volume create -o o=noquota $ID`
Closes: https://github.com/containers/podman/issues/14049
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
The linter ensures a common code style.
- use switch/case instead of else if
- use if instead of switch/case for single case statement
- add space between comment and text
- detect the use of defer with os.Exit()
- use short form var += "..." instead of var = var + "..."
- detect problems with append()
```
newSlice := append(orgSlice, val)
```
This could lead to nasty bugs because orgSlice will be changed in
place if it has enough capacity to hold the new elements. Thus
newSlice might not be a copy.
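For illustration, one way to avoid that aliasing (not part of this
change) is to append to an explicit copy:
```
// Copy first so the original backing array can never be modified
// through the new slice (element type chosen arbitrarily here).
orgCopy := make([]string, len(orgSlice))
copy(orgCopy, orgSlice)
newSlice := append(orgCopy, val)
```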
Of course most of the changes are just cosmetic and do not cause any
logic errors but I think it is a good idea to enforce a common style.
This should help maintainability.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
This allows customizing the entry that is written to the `/etc/passwd`
file when --passwd is used.
Closes: https://github.com/containers/podman/issues/13185
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
podman container clone takes the ID of an existing container and
creates a specgen from the given container's config, recreating all
proper namespaces and overriding spec options like resource limits and
the container name if given in the CLI options.
This command utilizes the common function DefineCreateFlags, meaning
that we can funnel as many create options as we want into clone over
time, allowing the user to clone with as much or as little of the
original config as they want.
container clone takes a second argument which is a new name and a
third argument which is an image name to use instead of the original
container's.
the current supported flags are:
--destroy (remove the original container)
--name (new ctr name)
--cpus (sets cpu period and quota)
--cpuset-cpus
--cpu-period
--cpu-rt-period
--cpu-rt-runtime
--cpu-shares
--cpuset-mems
--memory
--run
resolves #10875
Signed-off-by: cdoern <cdoern@redhat.com>
Signed-off-by: cdoern <cbdoer23@g.holycross.edu>
Signed-off-by: cdoern <cdoern@redhat.com>
Separated cgroupNS sharing from setting the pod as the cgroup parent,
and made a new flag --share-parent which sets the pod as the cgroup
parent for all containers entering the pod.
Remove cgroup from the default kernel namespaces since we want the
same default behavior as before, which is just the cgroup parent.
resolves #12765
Signed-off-by: cdoern <cdoern@redhat.com>
Signed-off-by: cdoern <cbdoer23@g.holycross.edu>
Signed-off-by: cdoern <cdoern@redhat.com>