First, refactor our existing graph traversal code to improve code
sharing. There still isn't much sharing between inward traversal
(stop, remove) and outward traversal (start), but stop and remove
now share most of their code, which seems like a positive.
Second, add a new graph-traversal function to stop containers.
We already had start and remove; stop uses the newly-refactored
inward-traversal code which it shares with removal.
Third, rework the shared stop/removal inward-traversal code to
add locking. This allows parallel execution of stop and removal,
which should improve the performance of `podman pod rm` while
keeping the performance of `podman pod stop` at about its current
level.
Fourth and finally, use the new graph-based stop when possible
to solve unordered stop problems with pods - specifically, the
infra container stopping before application containers, leaving
those containers without a working network.
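As a rough sketch of the shared inward-traversal shape (hypothetical
types and names, not the actual libpod code):

    package main

    import (
        "fmt"
        "sync"
    )

    type ctrNode struct {
        mu         sync.Mutex
        id         string
        dependents []*ctrNode // containers that depend on this one
        done       bool
    }

    // traverseInward handles everything that depends on a container
    // before the container itself, so the infra container goes last.
    // Locking is per-node, which is what lets stop and removal of
    // independent branches run in parallel.
    func traverseInward(n *ctrNode, op func(*ctrNode) error) error {
        for _, dep := range n.dependents {
            if err := traverseInward(dep, op); err != nil {
                return err
            }
        }
        n.mu.Lock()
        defer n.mu.Unlock()
        if n.done { // already handled via another path through the graph
            return nil
        }
        n.done = true
        return op(n)
    }

    func main() {
        infra := &ctrNode{id: "infra"}
        app := &ctrNode{id: "app"}
        infra.dependents = []*ctrNode{app}
        stop := func(n *ctrNode) error { fmt.Println("stopping", n.id); return nil }
        _ = traverseInward(infra, stop) // stops app, then infra
    }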
Fixes https://issues.redhat.com/browse/RHEL-76827
Signed-off-by: Matt Heon <mheon@redhat.com>
The intention behind this is to stop races between
`pod stop|start` and `container stop|start` being run at the same
time. This could result in containers with no working network
(they join the still-running infra container's netns, which is
then torn down as the infra container is stopped, leaving the
container in an otherwise unused, nonfunctional, orphan netns).
Locking the pod (if present) in the public container start and
stop APIs should be sufficient to stop this.
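A minimal sketch of the locking order this implies (hypothetical
field names, assuming pod and container each carry their own lock):

    package main

    import "sync"

    type pod struct{ mu sync.Mutex }

    type container struct {
        mu  sync.Mutex
        pod *pod // nil when the container is not in a pod
    }

    // Stop takes the pod lock (when present) before the container lock,
    // so `pod stop|start` and `container stop|start` serialize instead
    // of racing over the infra container's netns.
    func (c *container) Stop() error {
        if c.pod != nil {
            c.pod.mu.Lock()
            defer c.pod.mu.Unlock()
        }
        c.mu.Lock()
        defer c.mu.Unlock()
        return c.stopLocked()
    }

    func (c *container) stopLocked() error { return nil } // real stop elided

    func main() { _ = (&container{pod: &pod{}}).Stop() }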
Signed-off-by: Matt Heon <mheon@redhat.com>
Fixes: https://github.com/containers/podman/issues/25002
Also add the ability to inspect containers for
UseImageHosts and UseImageHostname.
Finally, fixed some bugs in handling of --no-hosts for Pods,
which I discovered.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
* Add --hosts-file flag to container create, container run and pod create
* Add HostsFile field to pod inspect and container inspect results
* Test BaseHostsFile config in containers.conf
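Illustrative usage (assuming the flag takes a path to a base hosts
file, matching the BaseHostsFile config option):

    podman run --rm --hosts-file /etc/hosts.d/custom alpine cat /etc/hosts
    podman pod create --name mypod --hosts-file /etc/hosts.d/custom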
Signed-off-by: Gavin Lam <gavin.oss@tutamail.com>
The podman container cleanup process runs asynchronously, and by
the time it gets the lock it is possible another podman process
already did the cleanup and then did a new init() to start the
container again. If the cleanup process then does its cleanup
anyway, it will cause very weird things.
This can be observed in the remote start API as CI flakes.
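The fix pattern, sketched with hypothetical types: re-check the
container state after acquiring the lock and bail out if another
process already handled it:

    package main

    import "sync"

    type ctrState int

    const (
        stateRunning ctrState = iota
        stateStopped
    )

    type container struct {
        mu    sync.Mutex
        state ctrState
    }

    // cleanup runs asynchronously: by the time the lock is held, another
    // process may have cleaned up and re-inited the container. Re-check
    // the state under the lock instead of tearing down a freshly
    // started container.
    func (c *container) cleanup() {
        c.mu.Lock()
        defer c.mu.Unlock()
        if c.state != stateStopped {
            return // someone else already cleaned up and restarted it
        }
        // ... perform the actual cleanup ...
    }

    func main() { (&container{state: stateRunning}).cleanup() }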
Fixes #23754
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
When using service containers and play kube we create a complicated set
of dependencies.
First, in a pod, all conmon/container cgroups are part of one
slice; that slice will be removed when the entire pod is stopped,
resulting in systemd killing all processes that were part of it.
Now the issue here is in how stopPodIfNeeded() and
stopIfOnlyInfraRemains() work: once a container is cleaned up, the
code checks whether the pod should be stopped, depending on the
pod ExitPolicy. If so, it will stop all containers in that pod.
However, in our flaky test we called podman pod kill, which
logically killed all containers already. The cleanup logic thus
thinks it must stop the pod and calls into pod.stopWithTimeout().
There we try to stop the containers, but because they are all
already stopped it just throws errors and never gets to the point
where it would call Cleanup(). So the code does not do cleanup and
eventually calls removePodCgroup(), which will cause all conmon
and other podman cleanup processes of this pod to be killed.
Thus the podman container cleanup process was likely killed while
actually trying to do the proper cleanup, which leaves us in a bad
state.
Following commands such as podman pod rm will try to do the
cleanup again, as they see it was not completed, but then fail as
they are unable to recover from the partial cleanup state.
Long term, network cleanup needs to be more robust and ideally
should be idempotent, to handle cases where cleanup was killed in
the middle.
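For illustration, idempotent cleanup means each teardown step
treats already-gone as success, so a partially completed cleanup
can simply be re-run (illustrative sketch, not the libpod code):

    package main

    import (
        "errors"
        "fmt"
        "os"
    )

    // removeNetns tolerates a netns that is already gone, so re-running
    // cleanup after a partial (killed) run succeeds instead of erroring.
    func removeNetns(path string) error {
        if err := os.Remove(path); err != nil && !errors.Is(err, os.ErrNotExist) {
            return fmt.Errorf("removing netns %s: %w", path, err)
        }
        return nil
    }

    func main() {
        // Safe to call twice; the second call is a no-op.
        _ = removeNetns("/run/netns/example")
        _ = removeNetns("/run/netns/example")
    }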
Fixes #21569
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Moving from Go module v4 to v5 prepares us for public releases.
Move done using gomove [1] as with the v3 and v4 moves.
[1] https://github.com/KSubedi/gomove
Signed-off-by: Matt Heon <mheon@redhat.com>
Being able to easily identify what lock has been allocated to a
given Libpod object is only somewhat useful for debugging lock
issues, but it's trivial to expose and I don't see any harm in
doing so.
Signed-off-by: Matt Heon <mheon@redhat.com>
We had something like 6 different boolean options (removing a
container turns out to be rather complicated, because there are a
million-odd things that want to do it), and the function
signature was getting unreasonably large. Change to a struct to
clean things up.
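Roughly the shape of the change (hypothetical names):

    package main

    // Before: removeContainer(ctr, force, removeVolumes, removePod,
    //         ignoreDeps, noLock, timeout) - six booleans and counting.
    //
    // After: one options struct; call sites name only what they set.
    type ctrRemoveOpts struct {
        Force         bool
        RemoveVolumes bool
        RemovePod     bool
        IgnoreDeps    bool
        NoLock        bool
        Timeout       *uint
    }

    func removeContainer(ctr string, opts ctrRemoveOpts) error { return nil }

    func main() {
        _ = removeContainer("abc123", ctrRemoveOpts{Force: true, RemoveVolumes: true})
    }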
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
This allows for accurate reporting of dependency removal, but the
work is still incomplete: pods can be removed, but do not report
the containers they removed as part of said removal. Will add
this in a subsequent commit.
Major note: I made ignoring no-such-container errors automatic
once it has been determined that a container did exist in the
first place. I can't think of any case where this would not be a
TOCTOU - i.e., no reason not to ignore them. The `--ignore` option
to `podman rm` should still retain meaning as it will ignore
errors from containers that didn't exist in the first place.
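A sketch of the pattern (with a stand-in for libpod's
no-such-container error):

    package main

    import "errors"

    var errNoSuchCtr = errors.New("no such container")

    // We already confirmed the container existed, so a no-such-container
    // error here can only mean someone else removed it first (TOCTOU) -
    // exactly the outcome we wanted, so ignore it.
    func removeDependency(remove func() error) error {
        if err := remove(); err != nil && !errors.Is(err, errNoSuchCtr) {
            return err
        }
        return nil
    }

    func main() {
        _ = removeDependency(func() error { return errNoSuchCtr }) // ignored
    }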
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
This is the initial stage of implementation. The current API
functions but does not report the additional containers and pods
removed. This is necessary to properly display results to the
user after `podman rm --all`.
The existing remove-dependencies code has been removed in favor
of this more native solution.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Add --restart flag to pod create to allow users to set the
restart policy for the pod, which applies to all the containers
in the pod. This reuses the restart policy already there for
containers and has the same restart policy options.
Add "never" to the restart policy options to match k8s syntax.
It is a synonym for "no" and does exactly the same thing: the
containers are not restarted once exited.
Only the containers that have exited will be restarted based on
the restart policy; running containers will not be restarted when
an exited container is restarted in the same pod (same as is done
in k8s).
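Example usage:

    podman pod create --name p1 --restart on-failure
    podman pod create --name p2 --restart never   # synonym for "no"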
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
Init containers are removed once they exit, but podman
reports an error that the container does not exist when
it was previously removed. Stop reporting missing containers
when removing.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Originally, during pod removal, we locked every container in the
pod at once, did a number of validity checks to ensure everything
was safe, and then removed all the containers in the pod.
A deadlock was recently discovered with this approach. In brief,
we cannot lock the entire pod (or much more than a single
container at a time) without causing a deadlock. As such, we
converted to an approach where we just looped over each container
in the pod, removing them individually. Unfortunately, this
removed a lot of the validity checking of the earlier approach,
allowing for a lot of unintended bad things. Infra containers
could be removed while containers in the pod still depended on
them, for example.
There's no easy way to do validity checks while in a simple loop,
so I implemented a version of our graph-traversal logic that
currently handles pod start. This version acts in the reverse
order of startup: startup starts from containers which depend on
nothing and moves outwards, while removal acts on containers which
have nothing depending on them and moves inwards. By doing graph
traversal, we can guarantee that nothing is removed while
something that depends on it still exists - so the infra
container should be the last thing in a pod that is removed, for
example.
In the (unlikely) case that a graph of the pod's containers
cannot be built (most likely impossible without database editing)
the old method of pod removal has been retained to ensure that
even misbehaving pods can be forcibly evicted from the state.
I'm fairly confident that this resolves the problem, but there
are a lot of assumptions around dependency structure built into
the original pod removal code and I am not 100% sure I have
captured all of them.
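To make the ordering guarantee concrete, a minimal sketch of
removing containers only once nothing depends on them (hypothetical
types, not the actual libpod code):

    package main

    import "fmt"

    type ctr struct {
        id        string
        dependsOn []*ctr // e.g. app containers depend on infra
    }

    // removeInOrder removes containers with no remaining dependents
    // first; the infra container, which everything depends on, comes
    // out last.
    func removeInOrder(ctrs []*ctr) {
        dependents := make(map[*ctr]int)
        for _, c := range ctrs {
            for _, dep := range c.dependsOn {
                dependents[dep]++
            }
        }
        var queue []*ctr
        for _, c := range ctrs {
            if dependents[c] == 0 {
                queue = append(queue, c)
            }
        }
        for len(queue) > 0 {
            c := queue[0]
            queue = queue[1:]
            fmt.Println("removing", c.id)
            for _, dep := range c.dependsOn {
                dependents[dep]--
                if dependents[dep] == 0 {
                    queue = append(queue, dep)
                }
            }
        }
    }

    func main() {
        infra := &ctr{id: "infra"}
        a := &ctr{id: "a", dependsOn: []*ctr{infra}}
        b := &ctr{id: "b", dependsOn: []*ctr{infra}}
        removeInOrder([]*ctr{infra, a, b}) // removes a, b, then infra
    }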
Fixes #15526
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Podman adds an Error: to every error message. So starting an error
message with "error" ends up being reported to the user as
Error: error ...
This patch removes the stutter.
Also, ioutil.ReadFile errors report the path, so wrapping the err
message with the path causes a stutter.
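A before/after sketch:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // Bad: the CLI already prefixes "Error: ", so a message starting
        // with "error" prints as "Error: error ...".
        // Bad: os.ReadFile (formerly ioutil.ReadFile) already includes
        // the path in its error, so don't wrap with the path again.
        if _, err := os.ReadFile("/etc/containers/containers.conf"); err != nil {
            fmt.Println(fmt.Errorf("reading config: %w", err))
        }
    }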
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
added the following flags and handling for podman pod create
--memory-swap
--cpuset-mems
--device-read-bps
--device-write-bps
--blkio-weight
--blkio-weight-device
--cpu-shares
given the new backend for systemd in c/common, all of these can now be exposed to pod create.
most of the heavy lifting (nearly all) is done within c/common. However, some rewiring needed to be done here
as well!
Signed-off-by: Charlie Doern <cdoern@redhat.com>
We now use the golang error wrapping format specifier `%w` instead of
the deprecated github.com/pkg/errors package.
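For example:

    package main

    import (
        "errors"
        "fmt"
    )

    var errNotFound = errors.New("not found")

    func main() {
        // Deprecated style (github.com/pkg/errors):
        //   err := errors.Wrap(errNotFound, "looking up container")
        // Standard library style with %w keeps the chain inspectable:
        err := fmt.Errorf("looking up container: %w", errNotFound)
        fmt.Println(errors.Is(err, errNotFound)) // true
    }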
[NO NEW TESTS NEEDED]
Signed-off-by: Sascha Grunert <sgrunert@redhat.com>
add support for the --uts flag in pod create, allowing users to avoid
issues with default values in containers.conf.
uts follows the same format as other namespace flags:
--uts=private (default), --uts=host, --uts=ns:PATH
resolves #13714
Signed-off-by: Charlie Doern <cdoern@redhat.com>
using the new resource backend, implement podman pod create --memory which enables
users to modify memory.max inside of the parent cgroup (the pod),
implicitly impacting all children unless overridden
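Illustrative usage (values are examples):

    podman pod create --name mypod --memory 512m
    # memory.max of the pod's parent cgroup is set to 512MiB; every
    # container in the pod is capped by it unless it sets its own limit.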
Signed-off-by: Charlie Doern <cdoern@redhat.com>
this patch included additional host namespace checks when creating
a ctr as well as fixing the tests to check /proc/self/ns/net
see #14461
Signed-off-by: cdoern <cdoern@redhat.com>
Mostly, just removing the comments. These either have been done,
or are no longer a good idea.
No code changes. [NO NEW TESTS NEEDED] as such.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Add the notion of a "service container" to play kube. A service
container is started before the pods in play kube and is (reverse)
linked to them. The service container is stopped/removed *after*
all pods it is associated with are stopped/removed.
In other words, a service container tracks the entire life cycle
of a service started via `podman play kube`. This is required to
enable `play kube` in a systemd unit file.
The service container is only used when the `--service-container`
flag is set on the CLI. This flag has been marked as hidden as it
is not meant to be used outside the context of `play kube`. It is
further not supported on the remote client.
The wiring with systemd will be done in a later commit.
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
Add the notion of an "exit policy" to a pod. This policy controls the
behaviour when the last container of pod exits. Initially, there are
two policies:
- "continue" : the pod continues running. This is the default policy
when creating a pod.
- "stop" : stop the pod when the last container exits. This is the
default behaviour for `play kube`.
In order to implement the deferred stop of a pod, add a worker queue to
the libpod runtime. The queue will pick up work items and, in this
case, helps resolve deadlocks that would otherwise occur if we
attempted to stop a pod during container cleanup.
Note that the default restart policy of `play kube` is "Always". Hence,
in order to really solve #13464, the YAML files must set a custom
restart policy; the tests use "OnFailure".
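A minimal sketch of such a worker queue (hypothetical, channel
based; not the exact libpod implementation):

    package main

    import "sync"

    // workerQueue lets callers hand off work (e.g. "stop this pod") to a
    // background goroutine, so container cleanup never has to take pod
    // locks itself - avoiding the deadlock described above.
    type workerQueue struct {
        jobs chan func()
        wg   sync.WaitGroup
    }

    func newWorkerQueue(size int) *workerQueue {
        q := &workerQueue{jobs: make(chan func(), size)}
        q.wg.Add(1)
        go func() {
            defer q.wg.Done()
            for job := range q.jobs {
                job()
            }
        }()
        return q
    }

    func (q *workerQueue) enqueue(job func()) { q.jobs <- job }

    func (q *workerQueue) shutdown() {
        close(q.jobs)
        q.wg.Wait()
    }

    func main() {
        q := newWorkerQueue(8)
        q.enqueue(func() { /* stop pod if exit policy says so */ })
        q.shutdown()
    }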
Fixes: #13464
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
the infra Inherit function was not properly passing pod volume
information to new containers
alter the inherit function and struct to use the new `ConfigToSpec`
function used in clone
pick and choose the proper entities from a temp spec and validate
them on the specgen side rather than passing directly to a config
resolves #13548
Signed-off-by: cdoern <cbdoer23@g.holycross.edu>
Signed-off-by: cdoern <cdoern@redhat.com>
Signed-off-by: cdoern <cbdoer23@g.holycross.edu>
Added support for pod security options. These are applied to infra and passed down to the
containers as added (unless overridden).
Modified the inheritance process from infra, creating a new
function Inherit() which reads the config and marshals the
compatible options into an intermediate struct `InfraInherit`.
This is then unmarshaled into a container config and all of this
is added to the CtrCreateOptions. This removes the need (mostly)
for special additions which complicate the Container_create code
and pod creation.
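The marshal/unmarshal filtering trick, sketched with pared-down
hypothetical structs:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type infraConfig struct {
        SecurityOpts []string `json:"security_opts,omitempty"`
        Volumes      []string `json:"volumes,omitempty"`
        PID          string   `json:"pid,omitempty"` // not inheritable here
    }

    // infraInherit lists only the options containers may inherit; fields
    // absent from this struct are dropped by the JSON round trip.
    type infraInherit struct {
        SecurityOpts []string `json:"security_opts,omitempty"`
        Volumes      []string `json:"volumes,omitempty"`
    }

    func inherit(infra infraConfig) (out infraInherit, err error) {
        data, err := json.Marshal(infra)
        if err != nil {
            return out, err
        }
        err = json.Unmarshal(data, &out)
        return out, err
    }

    func main() {
        cfg, _ := inherit(infraConfig{SecurityOpts: []string{"label=disable"}, PID: "host"})
        fmt.Printf("%+v\n", cfg) // PID is filtered out
    }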
resolves #12173
Signed-off-by: cdoern <cdoern@redhat.com>
Make sure we create new containers in the db with the correct structure.
Also remove some unneeded code for alias handling. We no longer
need these functions.
The specgen format has not been changed for now.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Add --time flag to podman container rm
Add --time flag to podman pod rm
Add --time flag to podman volume rm
Add --time flag to podman network rm
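Illustrative usage (for rm, --time sets the stop timeout and is
used together with --force):

    podman pod rm --force --time 5 mypod
    podman container rm --force --time 0 myctr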
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
added support for a volumes-from container. this flag just required
moving the volumes-from flag declaration out of the !IsInfra block,
and minor modifications to container_create.go
Signed-off-by: cdoern <cdoern@redhat.com>
added the option for the user to specify a rate, in bytes, at which they would like to be able
to read from the device being added to the pod. This is the first in a line of pod device options.
WARNING: changed pod name json tag to pod_name to avoid confusion when marshaling with the containerspec's name
Signed-off-by: cdoern <cdoern@redhat.com>
Access the container's config field directly inside of libpod instead of
calling `Config()` which in turn creates expensive JSON deep copies.
Accessing the field directly drops memory consumption of a simple
`podman run --rm busybox true` from 1245kB to 410kB.
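A sketch of the tradeoff (hypothetical pared-down types):

    package main

    import "encoding/json"

    type ContainerConfig struct {
        ID   string
        Name string
        // ... many more fields in the real config ...
    }

    type Container struct {
        config *ContainerConfig
    }

    // Config returns a deep copy via a JSON round trip - safe to hand to
    // callers outside libpod, but it allocates on every call.
    func (c *Container) Config() *ContainerConfig {
        data, err := json.Marshal(c.config)
        if err != nil {
            return nil
        }
        out := new(ContainerConfig)
        if err := json.Unmarshal(data, out); err != nil {
            return nil
        }
        return out
    }

    func main() {
        c := &Container{config: &ContainerConfig{ID: "abc", Name: "test"}}
        _ = c.Config()    // expensive copy: fine at API boundaries
        _ = c.config.Name // direct access: cheap, internal use only
    }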
[NO TESTS NEEDED]
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
added support for pod devices. The device gets added to the infra container and
recreated in all containers that join the pod.
This required a new container config item to keep track of the original device passed in by the user before
the path was parsed into the container device.
Signed-off-by: cdoern <cdoern@redhat.com>
added support for the --volume flag in pods using the new infra container design.
users can specify all volume options they can with regular containers
resolves #10379
Signed-off-by: cdoern <cdoern@redhat.com>
InfraContainer should go through the same creation process as
regular containers. This change was made from the cmd level down,
involving new container CLI opts and specgen creating functions.
What now happens is that both container and pod cli options are
populated in cmd and used to create a podSpecgen and a
containerSpecgen. The process then goes as follows
FillOutSpecGen (infra) -> MapSpec (podOpts -> infraOpts) -> PodCreate -> MakePod -> createPodOptions -> NewPod -> CompleteSpec (infra) -> MakeContainer -> NewContainer -> newContainer -> AddInfra (to pod state)
Signed-off-by: cdoern <cdoern@redhat.com>
Podman inspect has to show exposed ports to match docker. This requires
storing the exposed ports in the container config.
An exposed port is shown as `"80/tcp": null` while a forwarded
port is shown as `"80/tcp": [{"HostIp": "", "HostPort": "8080" }]`.
Also make sure to add the exposed ports to the new image when the
container is committed.
Fixes #10777
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
after the init containers pr merged, it was suggested to use `once`
instead of `oneshot` containers as it is more aligned with other
terminology used similarly.
[NO TESTS NEEDED]
Signed-off-by: Brent Baude <bbaude@redhat.com>
Add the --userns flag to podman pod create and keep
track of the userns setting that pod was created with
so that all containers created within the pod will inherit
that userns setting.
Specifically we need to be able to launch a pod with
--userns=keep-id
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>