This is the initial stage of implementation. The current API
functions, but does not yet report the additional containers and
pods that were removed. Reporting them is necessary to properly
display results to the user after `podman rm --all`.
The existing remove-dependencies code has been removed in favor
of this more native solution.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Add --restart flag to pod create to allow users to set the
restart policy for the pod, which applies to all the containers
in the pod. This reuses the restart policy already implemented
for containers and accepts the same restart policy options (for
example, `podman pod create --restart=on-failure`).
Add "never" to the restart policy options to match k8s syntax.
It is a synonym for "no": containers are not restarted once they
have exited.
Only containers that have exited will be restarted based on the
restart policy; running containers in the same pod will not be
restarted when an exited container is restarted (same as in k8s).
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
Init containers are removed once they exit, but podman then
reports an error that the container does not exist, even though
it was just removed as expected. Stop reporting missing
containers during removal.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Originally, during pod removal, we locked every container in the
pod at once, did a number of validity checks to ensure everything
was safe, and then removed all the containers in the pod.
A deadlock was recently discovered with this approach. In brief,
we cannot lock the entire pod (or much more than a single
container at a time) without causing a deadlock. As such, we
converted to an approach where we just looped over each container
in the pod, removing them individually. Unfortunately, this
removed much of the validity checking of the earlier approach
and allowed unintended, unsafe removals: infra containers could
be removed while containers in the pod still depended on them,
for example.
There's no easy way to do validity checks while in a simple loop,
so I implemented a version of our graph-traversal logic that
currently handles pod start. This version acts in the reverse
order of startup: startup starts from containers which depend on
nothing and moves outwards, while removal acts on containers
which have nothing depending on them and moves inwards. By doing
graph traversal, we can guarantee that nothing is removed while
something that depends on it still exists - so the infra
container should be the last thing in a pod that is removed, for
example.
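As a rough sketch of the idea (hypothetical container names; the
real code walks libpod's dependency graph structures), removal
repeatedly picks containers that nothing depends on:

    package main

    import "fmt"

    func main() {
        // dependsOn["web"] = {"infra"} means web depends on infra.
        dependsOn := map[string][]string{
            "infra": {},
            "web":   {"infra"},
            "db":    {"infra"},
        }
        // Count how many containers still depend on each container.
        dependedOnBy := make(map[string]int)
        for _, deps := range dependsOn {
            for _, dep := range deps {
                dependedOnBy[dep]++
            }
        }
        // Repeatedly remove containers that nothing depends on.
        for len(dependsOn) > 0 {
            progress := false
            for ctr, deps := range dependsOn {
                if dependedOnBy[ctr] > 0 {
                    continue
                }
                fmt.Println("removing", ctr)
                for _, dep := range deps {
                    dependedOnBy[dep]--
                }
                delete(dependsOn, ctr)
                progress = true
            }
            if !progress {
                // No removable container means no valid graph could
                // be built; fall back to forced, unordered removal.
                fmt.Println("no removable container; falling back")
                break
            }
        }
    }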
In the (unlikely) case that a graph of the pod's containers
cannot be built (most likely impossible without database editing)
the old method of pod removal has been retained to ensure that
even misbehaving pods can be forcibly evicted from the state.
I'm fairly confident that this resolves the problem, but there
are a lot of assumptions around dependency structure built into
the original pod removal code and I am not 100% sure I have
captured all of them.
Fixes #15526
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Podman adds an "Error: " prefix to every error message, so
starting an error message with "error" ends up being reported to
the user as
Error: error ...
This patch removes the stutter.
Also, ioutil.ReadFile errors already report the path, so wrapping
the error message with the path causes a similar stutter.
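A minimal, self-contained illustration of both stutters (the
path and messages here are hypothetical):

    package main

    import (
        "fmt"
        "io/ioutil"
    )

    func main() {
        _, err := ioutil.ReadFile("/no/such/file")
        // Stutters twice: "error" duplicates the CLI's "Error: "
        // prefix, and ReadFile's error text already has the path.
        fmt.Println("Error:", fmt.Errorf("error reading /no/such/file: %w", err))
        // Better: no "error" prefix and no duplicated path.
        fmt.Println("Error:", fmt.Errorf("reading file: %w", err))
    }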
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Added the following flags and handling for podman pod create:
--memory-swap
--cpuset-mems
--device-read-bps
--device-write-bps
--blkio-weight
--blkio-weight-device
--cpu-shares
Given the new backend for systemd in c/common, all of these can
now be exposed to pod create. Most of the heavy lifting (nearly
all) is done within c/common; however, some rewiring needed to
be done here as well!
Signed-off-by: Charlie Doern <cdoern@redhat.com>
We now use the golang error wrapping format specifier `%w` instead of
the deprecated github.com/pkg/errors package.
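As a minimal sketch of the pattern (the message and path are
illustrative), `%w` wraps the way `errors.Wrap` used to, and the
chain stays inspectable via the standard library:

    package main

    import (
        "errors"
        "fmt"
        "io/fs"
        "os"
    )

    func main() {
        _, err := os.Open("/no/such/file")
        // Previously: errors.Wrap(err, "opening config") via
        // github.com/pkg/errors.
        wrapped := fmt.Errorf("opening config: %w", err)
        // The wrapped error still matches with errors.Is/errors.As.
        fmt.Println(errors.Is(wrapped, fs.ErrNotExist)) // true
    }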
[NO NEW TESTS NEEDED]
Signed-off-by: Sascha Grunert <sgrunert@redhat.com>
Add support for the --uts flag in pod create, allowing users to
avoid issues with default values in containers.conf.
uts follows the same format as other namespace flags:
--uts=private (default), --uts=host, --uts=ns:PATH
Resolves #13714
Signed-off-by: Charlie Doern <cdoern@redhat.com>
Using the new resource backend, implement podman pod create
--memory, which enables users to modify memory.max inside of the
parent cgroup (the pod), implicitly impacting all children unless
overridden. For example, `podman pod create --memory 512m` places
every container in the pod under a shared 512 MiB limit.
Signed-off-by: Charlie Doern <cdoern@redhat.com>
This patch includes additional host namespace checks when
creating a container, as well as fixes to the tests to check
/proc/self/ns/net.
See #14461
Signed-off-by: cdoern <cdoern@redhat.com>
Mostly, this just removes comments; the work they describe has
either been done or is no longer a good idea.
No code changes. [NO NEW TESTS NEEDED] as such.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Add the notion of a "service container" to play kube. A service
container is started before the pods in play kube and is (reverse)
linked to them. The service container is stopped/removed *after*
all pods it is associated with are stopped/removed.
In other words, a service container tracks the entire life cycle
of a service started via `podman play kube`. This is required to
enable `play kube` in a systemd unit file.
The service container is only used when the `--service-container`
flag is set on the CLI. This flag has been marked as hidden as it
is not meant to be used outside the context of `play kube`. It is
further not supported on the remote client.
The wiring with systemd will be done in a later commit.
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
Add the notion of an "exit policy" to a pod. This policy
controls the behaviour when the last container of a pod exits.
Initially, there are two policies:
- "continue" : the pod continues running. This is the default policy
when creating a pod.
- "stop" : stop the pod when the last container exits. This is the
default behaviour for `play kube`.
In order to implement the deferred stop of a pod, add a worker
queue to the libpod runtime. The queue picks up work items and,
in this case, helps resolve deadlocks that would otherwise occur
if we attempted to stop a pod during container cleanup.
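A minimal sketch of such a queue, with hypothetical names (the
real libpod worker queue differs): container cleanup enqueues
the pod stop rather than doing it inline, so the pod lock is
never acquired while a container lock is held.

    package main

    import (
        "fmt"
        "sync"
    )

    type workQueue struct {
        jobs chan func()
        wg   sync.WaitGroup
    }

    func newWorkQueue(workers int) *workQueue {
        q := &workQueue{jobs: make(chan func(), 16)}
        for i := 0; i < workers; i++ {
            q.wg.Add(1)
            go func() {
                defer q.wg.Done()
                for job := range q.jobs {
                    job()
                }
            }()
        }
        return q
    }

    func (q *workQueue) enqueue(job func()) { q.jobs <- job }

    func (q *workQueue) shutdown() {
        close(q.jobs)
        q.wg.Wait()
    }

    func main() {
        q := newWorkQueue(1)
        // Called from container cleanup: defer the stop to the worker.
        q.enqueue(func() { fmt.Println(`exit policy "stop": stopping pod`) })
        q.shutdown()
    }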
Note that the default restart policy of `play kube` is "Always". Hence,
in order to really solve #13464, the YAML files must set a custom
restart policy; the tests use "OnFailure".
Fixes: #13464
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
The infra Inherit function was not properly passing pod volume
information to new containers. Alter the Inherit function and
struct to use the new `ConfigToSpec` function used in clone: pick
and choose the proper entities from a temp spec and validate them
on the specgen side rather than passing them directly to a
config.
Resolves #13548
Signed-off-by: cdoern <cbdoer23@g.holycross.edu>
Signed-off-by: cdoern <cdoern@redhat.com>
Signed-off-by: cdoern <cbdoer23@g.holycross.edu>
Added support for pod security options. These are applied to
infra and passed down to the containers as they are added (unless
overridden).
Modified the inheritance process from infra, creating a new
function Inherit() which reads the config and marshals the
compatible options into an intermediate struct `InfraInherit`.
This is then unmarshaled into a container config, and all of this
is added to the CtrCreateOptions. This removes the need (mostly)
for special additions which complicate the Container_create code
and pod creation.
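As a hypothetical sketch of that round trip (field names are
illustrative, not libpod's actual ones), only fields present on
the intermediate struct survive the marshal/unmarshal pass:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type InfraConfig struct {
        SecurityOpts []string `json:"security_opts"`
        Volumes      []string `json:"volumes"`
        Name         string   `json:"name"` // not inheritable
    }

    // InfraInherit carries only the options a container may inherit.
    type InfraInherit struct {
        SecurityOpts []string `json:"security_opts"`
        Volumes      []string `json:"volumes"`
    }

    type ContainerConfig struct {
        SecurityOpts []string `json:"security_opts"`
        Volumes      []string `json:"volumes"`
        Name         string   `json:"name"`
    }

    func main() {
        infra := InfraConfig{
            SecurityOpts: []string{"label=disable"},
            Volumes:      []string{"vol:/data"},
            Name:         "infra",
        }
        // Marshal the infra config and unmarshal into the
        // intermediate struct: the Name field is dropped.
        raw, _ := json.Marshal(infra)
        var inherit InfraInherit
        _ = json.Unmarshal(raw, &inherit)
        // Unmarshal the intermediate struct into the container config.
        raw, _ = json.Marshal(inherit)
        var ctr ContainerConfig
        _ = json.Unmarshal(raw, &ctr)
        fmt.Printf("%+v\n", ctr) // Name empty; opts and volumes inherited
    }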
Resolves #12173
Signed-off-by: cdoern <cdoern@redhat.com>
Make sure we create new containers in the db with the correct
structure. Also remove some unneeded code for alias handling; we
no longer need these functions.
The specgen format has not been changed for now.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Add --time flag to podman container rm
Add --time flag to podman pod rm
Add --time flag to podman volume rm
Add --time flag to podman network rm
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Added support for a volumes-from container. This flag just
required moving the volumes-from flag declaration out of the
!IsInfra block, plus minor modifications to container_create.go.
Signed-off-by: cdoern <cdoern@redhat.com>
Added the option for the user to specify a rate, in bytes per
second, at which they would like to read from the device being
added to the pod. This is the first in a line of pod device
options.
WARNING: changed the pod name JSON tag to pod_name to avoid
confusion when marshaling with the container spec's name.
Signed-off-by: cdoern <cdoern@redhat.com>
Access the container's config field directly inside of libpod instead of
calling `Config()` which in turn creates expensive JSON deep copies.
Accessing the field directly drops memory consumption of a simple
`podman run --rm busybox true` from 1245kB to 410kB.
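A minimal sketch of the pattern with illustrative types (not
libpod's actual ones): the exported accessor deep-copies via a
JSON round trip, which internal callers can now skip by reading
the field directly.

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type ContainerConfig struct {
        Name string
    }

    type Container struct {
        config *ContainerConfig
    }

    // Config returns a deep copy so external callers cannot mutate
    // the container's state - safe, but it allocates on every call.
    func (c *Container) Config() *ContainerConfig {
        raw, err := json.Marshal(c.config)
        if err != nil {
            return nil
        }
        copied := new(ContainerConfig)
        if err := json.Unmarshal(raw, copied); err != nil {
            return nil
        }
        return copied
    }

    func main() {
        ctr := &Container{config: &ContainerConfig{Name: "demo"}}
        fmt.Println(ctr.Config().Name) // deep copy: two JSON passes
        fmt.Println(ctr.config.Name)   // direct access in-package: free
    }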
[NO TESTS NEEDED]
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
Added support for pod devices. The device gets added to the
infra container and is recreated in all containers that join the
pod. This required a new container config item to keep track of
the original device passed in by the user, before the path was
parsed into the container device.
Signed-off-by: cdoern <cdoern@redhat.com>
Added support for the --volume flag in pods using the new infra
container design. Users can specify all the volume options they
can with regular containers.
Resolves #10379
Signed-off-by: cdoern <cdoern@redhat.com>
InfraContainer should go through the same creation process as
regular containers. This change spans from the cmd level down,
involving new container CLI opts and specgen creating functions.
What now happens is that both container and pod CLI options are
populated in cmd and used to create a podSpecgen and a
containerSpecgen. The process then goes as follows:
FillOutSpecGen (infra) -> MapSpec (podOpts -> infraOpts) -> PodCreate -> MakePod -> createPodOptions -> NewPod -> CompleteSpec (infra) -> MakeContainer -> NewContainer -> newContainer -> AddInfra (to pod state)
Signed-off-by: cdoern <cdoern@redhat.com>
Podman inspect has to show exposed ports to match docker. This requires
storing the exposed ports in the container config.
An exposed port is shown as `"80/tcp": null` while a forwarded
port is shown as `"80/tcp": [{"HostIp": "", "HostPort": "8080" }]`.
Also make sure to add the exposed ports to the new image when the
container is committed.
Fixes #10777
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
After the init containers PR merged, it was suggested to use
`once` instead of `oneshot` containers, as it is more aligned
with other terminology used similarly.
[NO TESTS NEEDED]
Signed-off-by: Brent Baude <bbaude@redhat.com>
Add the --userns flag to podman pod create and keep track of the
userns setting that the pod was created with, so that all
containers created within the pod will inherit that userns
setting.
Specifically we need to be able to launch a pod with
--userns=keep-id
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
This is the first pass at implementing init containers for
podman pods. Init containers were made popular by k8s as a way
to run setup for a pod before the pod's standard containers run.
Unlike k8s, we support two styles of init containers: always and
oneshot. Always means the container stays in the pod and starts
whenever the pod is started; this does not apply to pod restarts.
Oneshot means the container runs one time when the pod starts
and is then removed.
Signed-off-by: Brent Baude <bbaude@redhat.com>
Added support for the --pid flag. Users can specify ns:file,
pod, private, or host. Specifying another container
(container:id) returns an error, since you cannot point the
namespace of the pod's infra container to a container outside of
the pod.
Signed-off-by: cdoern <cdoern@redhat.com>
Added logic and handling for two new podman pod create flags.
--cpus specifies the total number of cores on which the pod can
execute; this is a combination of the period and quota for the
CPU.
--cpuset-cpus is a string value which determines which of the
available cores the pod will actually execute on.
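As a worked sketch of how --cpus maps to period and quota
(assuming the conventional CFS default period of 100000
microseconds; the real constants live in the resource backend):

    package main

    import "fmt"

    func main() {
        const period = 100000 // microseconds per CFS scheduling period
        cpus := 2.5           // e.g. a hypothetical --cpus=2.5
        quota := int64(cpus * period)
        // 2.5 CPUs -> 250000us of runtime per 100000us period.
        fmt.Printf("period=%dus quota=%dus\n", period, quota)
    }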
Signed-off-by: cdoern <cbdoer23@g.holycross.edu>
We missed bumping the go module, so let's do it now :)
* Automated go code with github.com/sirkon/go-imports-rename
* Manually via `vgrep podman/v2` the rest
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
Make a distinction between pods that are completely running (all
containers running) and those that have some containers going,
but not all, by introducing an intermediate state between Stopped
and Running called Degraded. A Degraded pod has at least one, but
not all, containers running; a Running pod has all containers
running.
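A hypothetical sketch of deriving the new state (the status
strings are illustrative, not libpod's actual constants):

    package main

    import "fmt"

    func podState(running, total int) string {
        switch {
        case running == 0:
            return "Stopped"
        case running == total:
            return "Running"
        default:
            return "Degraded" // some, but not all, containers running
        }
    }

    func main() {
        fmt.Println(podState(0, 3)) // Stopped
        fmt.Println(podState(2, 3)) // Degraded
        fmt.Println(podState(3, 3)) // Running
    }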
First step to a solution for #7213.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Make Podman pod operations that do not involve starting
containers (which needs to be done in a specific order) use the
same parallel operation code we use to make `podman stop` on
large numbers of containers fast. We were previously stopping
containers in a pod serially, which could take up to the timeout
(default 15 seconds) for each container - stopping 100 containers
that do not respond to SIGTERM would take 25 minutes.
To do this, refactor the parallel operation code a bit to remove
its dependency on libpod (damn circular import restrictions...)
and use parallel functions that just re-use the standard
container API operations - maximizes code reuse (previously each
pod handler had a separate implementation of the container
function it performed).
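A minimal sketch of the parallel pattern (not libpod's actual
parallel package): fan the standard per-container stop out to
goroutines and collect per-container errors, so the slowest
container bounds the total time rather than the sum.

    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    // stopContainer stands in for the standard container stop API.
    func stopContainer(name string) error {
        time.Sleep(100 * time.Millisecond)
        return nil
    }

    func main() {
        containers := []string{"a", "b", "c", "d"}
        errs := make(map[string]error, len(containers))
        var (
            mu sync.Mutex
            wg sync.WaitGroup
        )
        for _, name := range containers {
            wg.Add(1)
            go func(name string) {
                defer wg.Done()
                err := stopContainer(name)
                mu.Lock()
                errs[name] = err
                mu.Unlock()
            }(name)
        }
        wg.Wait()
        // Total elapsed time is ~one stop, not four serial stops.
        fmt.Printf("stopped %d containers\n", len(errs))
    }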
This is a bit of a palate cleanser after fighting CI for two
days - nice to be able to return to a land of sanity.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
This should help alleviate races where the pod is not fully
cleaned up before subsequent API calls happen.
Signed-off-by: Matthew Heon <mheon@redhat.com>
We were hard-coding two fields to false instead of grabbing
their value from the pod config, which meant that `pod inspect`
would always print the wrong value.
Fixes #6968
Signed-off-by: Matthew Heon <mheon@redhat.com>
We had a field for this in the inspect data, but it was never
being populated. Because of this, `podman pod inspect` stopped
showing port bindings (and other infra container settings). Add
code to populate the infra container inspect data, and add a test
to ensure we don't regress again.
Signed-off-by: Matthew Heon <mheon@redhat.com>
With the advent of Podman 2.0.0 we crossed the magical barrier of go
modules. While we were able to continue importing all packages inside
of the project, the project could not be vendored anymore from the
outside.
Move the go module to new major version and change all imports to
`github.com/containers/libpod/v2`. The renaming of the imports
was done via `gomove` [1].
[1] https://github.com/KSubedi/gomove
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
The infra/abi code for pods was written in a flawed way, assuming
that the map[string]error containing individual container errors
was only set when the global error for the pod function was nil;
that is not accurate, and we are actually *guaranteed* to set the
global error when any individual container errors. Thus, we'd
never actually include individual container errors, because the
infra code assumed that err being set meant everything failed and
no container operations were attempted.
We were originally setting the cause of the error to something
nonsensical ("container already exists"), so I made a new error
indicating that some containers in the pod failed. We can then
ignore that error when building the report on the pod operation
and actually return errors from individual containers.
Unfortunately, this exposed another weakness of the infra code,
which was discarding the container IDs. Errors from individual
containers are not guaranteed to identify which container they
came from, hence the use of map[string]error in the Pod API
functions. Rather than restructuring the structs we return from
pkg/infra, I just wrapped the returned errors with a message
including the ID of the container.
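A hypothetical sketch of the wrapping (error values and names
are illustrative): per-container errors stay keyed by ID, the
global error is always set when any container fails, and each
individual error is wrapped with its container's ID so the
source survives reporting.

    package main

    import (
        "errors"
        "fmt"
    )

    var errPodPartialFail = errors.New("some containers in the pod failed")

    func stopCtr(id string) error {
        if id == "ctr-bad" {
            return errors.New("container is locked")
        }
        return nil
    }

    func stopPod(ids []string) (map[string]error, error) {
        ctrErrors := make(map[string]error)
        for _, id := range ids {
            if err := stopCtr(id); err != nil {
                // Wrap so the error identifies its container even
                // after it leaves the map.
                ctrErrors[id] = fmt.Errorf("container %s: %w", id, err)
            }
        }
        if len(ctrErrors) > 0 {
            // The global error is set whenever any container
            // errored; callers still consult the per-container map.
            return ctrErrors, errPodPartialFail
        }
        return nil, nil
    }

    func main() {
        ctrErrors, err := stopPod([]string{"ctr-good", "ctr-bad"})
        if errors.Is(err, errPodPartialFail) {
            for _, e := range ctrErrors {
                fmt.Println(e)
            }
        }
    }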
Signed-off-by: Matthew Heon <matthew.heon@pm.me>