223 Commits

9dcd76e369 Ensure we generate a 'stopped' event on force-remove
When forcibly removing a container, we are initiating an explicit
stop of the container, which is not reflected in 'podman events'.
Swap to using our standard 'stop()' function instead of a custom
one for force-remove, and move the event into the internal stop
function (so internal calls also register it).

This does add one more database save() to `podman remove`. This
should not be a terribly serious performance hit, and does have
the desirable side effect of making things generally safer.

Signed-off-by: Matthew Heon <matthew.heon@pm.me>
2019-07-31 17:29:14 -04:00
ebacfbd091 podman: fix memleak caused by renaming and not deleting
the exit file

If the container exit code needs to be retained, it cannot be retained
in tmpfs, because libpod itself runs in a memcg and, with a daemon-less
design, can't leave traces behind there.

This wasn't a memleak detectable by kmemleak for example. The kernel
never lost track of the memory and there was no erroneous refcounting
either. The reference count dependencies however are not easy to track
because when a refcount is increased, there's no way to tell who's
still holding the reference. In this case it was a single page of
tmpfs pagecache holding a refcount that kept a whole hierarchy of
dying memcgs, slab kmem, cgroups, unreachable kernfs nodes and the
respective dentries and inodes pinned. Such a problem wouldn't happen if the
exit file was stored in a regular filesystem because the pagecache
could be reclaimed in such case under memory pressure. The tmpfs page
can be swapped out, but that's not enough to release the memcg with
CONFIG_MEMCG_SWAP_ENABLED=y.

No amount of more aggressive kernel slab shrinking could have solved
this. Not even assigning the slab kmem of dying cgroups to a live cgroup
would fully solve this. The only way to free the memory of a dying
cgroup when a struct page still references it, would be to loop over
all "struct page" in the kernel to find which one is associated with
the dying cgroup, which is an O(N) operation (where N is the number of
pages and can reach billions). Linking all the tmpfs pages to the
memcg would cost less during memcg offlining, but it would waste lots
of memory and CPU globally. So this can't be optimized in the kernel.

A cronjob running the following command can act as a workaround and will
allow all slab caches to be released, not just the individual tmpfs pages.

    rm -f /run/libpod/exits/*
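
For illustration only, such a cronjob could be a hypothetical
/etc/cron.d entry (the interval here is an assumption, not part of
this patch):

    */15 * * * * root rm -f /run/libpod/exits/*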

This patch solved the memleak with a reproducer, booting with
cgroup.memory=nokmem and with selinux disabled. The reason memcg kmem
and selinux were disabled while testing this fix is that kmem greatly
decreases the kernel's effectiveness in reusing partial slab objects.
cgroup.memory=nokmem is strongly recommended, at least for workstation
usage. selinux needs to be analyzed further because it causes
additional slab allocations.
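
As an illustration only (file path and regeneration command are
assumptions about a typical GRUB setup, not part of this patch), the
boot parameter could be made persistent like so:

    # /etc/default/grub -- append to any existing options
    GRUB_CMDLINE_LINUX="cgroup.memory=nokmem"
    # then regenerate the config, e.g.:
    grub2-mkconfig -o /boot/grub2/grub.cfg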

The upstream podman commit used for testing is
1fe2965e4f672674f7b66648e9973a0ed5434bb4 (v1.4.4).

The upstream kernel commit used for testing is
f16fea666898dbdd7812ce94068c76da3e3fcf1e (v5.2-rc6).

Reported-by: Michele Baldessari <michele@redhat.com>
Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>

<Applied with small tweaks to comments>
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
2019-07-31 17:28:42 -04:00
6665269ab8 Merge pull request #3233 from wking/fatal-requested-hook-directory-does-not-exist
libpod/container_internal: Make all errors loading explicitly configured hook dirs fatal
2019-07-29 16:39:08 +02:00
a1a79c08b7 Implement conmon exec
This includes:
	Implement exec -i and fix some typos in description of -i docs
	pass failed runtime status to caller
	Add resize handling for a terminal connection
	Customize exec systemd-cgroup slice
	fix healthcheck
	fix top
	add --detach-keys
	Implement podman-remote exec (jhonce)
	* Cleanup some orphaned code (jhonce)
	adapt remote exec for conmon exec (pehunt)
	Fix healthcheck and exec to match docs
		Introduce two new OCIRuntime errors to more comprehensively describe situations in which the runtime can error
		Use these different errors in branching for exit code in healthcheck and exec
	Set conmon to use new api version

Signed-off-by: Jhon Honce <jhonce@redhat.com>

Signed-off-by: Peter Hunt <pehunt@redhat.com>
2019-07-22 15:57:23 -04:00
db826d5d75 golangci-lint round #3
this is the third round of preparing to use golangci-lint on our
code base.

Signed-off-by: baude <bbaude@redhat.com>
2019-07-21 14:22:39 -05:00
a78c885397 golangci-lint pass number 2
clean up and prepare to migrate to the golangci-lint linter

Signed-off-by: baude <bbaude@redhat.com>
2019-07-11 09:13:06 -05:00
edc7f52c95 Merge pull request #3425 from adrianreber/restore-mount-label
Set correct SELinux label on restored containers
2019-07-08 20:31:59 +02:00
1d36501f96 code cleanup
clean up code identified as problematic by GoLand's inspection

Signed-off-by: baude <bbaude@redhat.com>
2019-07-08 09:18:11 -05:00
f7407f2eb5 Merge pull request #3472 from haircommander/generate-volumes
generate kube with volumes
2019-07-04 22:22:07 +02:00
fec1de6ef4 trivial cleanups from golang
the results of a code cleanup performed by the GoLand IDE.

Signed-off-by: baude <bbaude@redhat.com>
2019-07-03 15:41:33 -05:00
38c6199b80 Wipe PID and ConmonPID in state after container stops
Matches the behavior of Docker.

Signed-off-by: Matthew Heon <matthew.heon@pm.me>
2019-07-02 19:10:51 -04:00
a1bb1987cc Store Conmon's PID in our state and display in inspect
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
2019-07-02 18:52:55 -04:00
aeabc45cce Improve parsing of mounts
Specifically, we were needlessly doing a double lookup to find which
config mounts were user volumes. Improve this by refactoring a bit of
code from inspect.

Signed-off-by: Peter Hunt <pehunt@redhat.com>
2019-07-02 15:18:44 -04:00
8561b99644 libpod removal from main (phase 2)
this is phase 2 for the removal of libpod from main.

Signed-off-by: baude <bbaude@redhat.com>
2019-06-27 07:56:24 -05:00
72cf0c81e8 libpod: use pkg/cgroups instead of containerd/cgroups
use the new implementation for dealing with cgroups.

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
2019-06-26 13:17:02 +02:00
dd81a44ccf remove libpod from main
the compilation demands of having libpod in main are a burden for the
remote client compilations.  to combat this, we should move the use of
libpod structs, vars, constants, and functions into the adapter code
where it will only be compiled by the local client.

this should result in cleaner code organization and smaller binaries. it
should also help if we ever need to compile the remote client on
non-Linux operating systems natively (not cross-compiled).

Signed-off-by: baude <bbaude@redhat.com>
2019-06-25 13:51:24 -05:00
220e169cc1 Provide correct SELinux mount-label for restored container
Restoring a container from a checkpoint archive creates a completely
new root file-system. This file-system needs to have the correct SELinux
label or most things in that restored container will fail. Running
processes are not as problematic as newly exec()'d processes (internally
or via 'podman exec').

This patch tells the storage setup which label should be used to mount
the container's root file-system.

Signed-off-by: Adrian Reber <areber@redhat.com>
2019-06-25 14:55:11 +02:00
c233a12772 Add additional debugging when refreshing locks
Signed-off-by: Matthew Heon <mheon@redhat.com>
2019-06-21 16:00:39 -04:00
92bae8d308 Begin adding support for multiple OCI runtimes
Allow Podman containers to request to use a specific OCI runtime
if multiple runtimes are configured. This is the first step to
properly supporting containers in a multi-runtime environment.

The biggest changes are that all OCI runtimes are now initialized
when Podman creates its runtime, and containers now use the
runtime requested in their configuration (instead of always the
default runtime).

Signed-off-by: Matthew Heon <matthew.heon@pm.me>
2019-06-19 17:08:43 -04:00
629017bb19 When you change the storage driver we ignore the storage-options
The storage driver and the storage options in storage.conf should
match, but if you change the storage driver via the command line
then we need to nil out the default storage options from storage.conf.

If the user wants to change the storage driver and use storage options,
they need to specify them on the command line.
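
As a hedged illustration (the driver and option values here are
examples only, not mandated by this change):

    podman --storage-driver overlay --storage-opt overlay.mountopt=nodev info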

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2019-06-08 06:20:31 -04:00
0028578b43 Added support to migrate containers
This commit adds an option to the checkpoint command to export a
checkpoint into a tar.gz file, as well as an option to import a
checkpoint tar.gz file during restore. With all checkpoint artifacts
in one file it is possible to easily transfer a checkpoint, thus
enabling container migration in Podman. With the following steps it
is possible to migrate
a running container from one system (source) to another (destination).

 Source system:
  * podman container checkpoint -l -e /tmp/checkpoint.tar.gz
  * scp /tmp/checkpoint.tar.gz destination:/tmp

 Destination system:
  * podman pull 'container-image-as-on-source-system'
  * podman container restore -i /tmp/checkpoint.tar.gz

The exported tar.gz file contains the checkpoint image as created by
CRIU and a few additional JSON files describing the state of the
checkpointed container.

Now the container is running on the destination system with the same
state it had at the time of checkpointing. If the container is kept
running on the source system with the checkpoint flag '-R', the result
will be that the same container is running on two different hosts.
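
For example, to keep the container running on the source system while
exporting (a sketch combining the flags described above):

  * podman container checkpoint -l -R -e /tmp/checkpoint.tar.gz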

Signed-off-by: Adrian Reber <areber@redhat.com>
2019-06-03 22:05:12 +02:00
a05cfd24bb Added helper functions for container migration
This adds a couple of functions and structure members needed in the next
commit to make container migration actually work. This just splits off
the functions which do not modify existing code.

Signed-off-by: Adrian Reber <areber@redhat.com>
2019-06-03 22:05:12 +02:00
317a5c72c6 libpod/container_internal: Make all errors loading explicitly configured hook dirs fatal
Remove the IsNotExist check, which was added along with the rest of this
block in f6a2b6bf2b (hooks: Add pre-create hooks for runtime-config
manipulation, 2018-11-19, #1830).  Besides the obvious "hook directory
does not exist", it was swallowing the less-obvious "hook command does
not exist".  And either way, folks are likely going to want non-zero
podman exits when we fail to load a hook directory they explicitly
pointed us towards.

Signed-off-by: W. Trevor King <wking@tremily.us>
2019-05-29 20:19:41 -07:00
3788da9344 libpod: prefer WaitForFile to polling
replace two usages of kwait.ExponentialBackoff with WaitForFile, which
uses inotify when possible.

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
2019-05-21 10:07:31 +02:00
5cbb3e7e9d Use standard remove functions for removing pod ctrs
Instead of rewriting the logic, reuse the standard logic we use
for removing containers, which is much better tested.

Signed-off-by: Matthew Heon <matthew.heon@pm.me>
2019-05-10 14:14:29 -04:00
faae3a7065 When refreshing after a reboot, force lock allocation
After a reboot, when we refresh Podman's state, we retrieved the
lock from the fresh SHM instance, but we did not mark it as
allocated, so nothing prevented it from being handed out to other
containers and pods.

Provide a method for marking locks as in-use, and use it when we
refresh Podman state after a reboot.

Fixes #2900

Signed-off-by: Matthew Heon <matthew.heon@pm.me>
2019-05-06 14:17:54 -04:00
5c4fefa533 Small code fix
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
2019-05-03 11:42:34 -04:00
d7c367aa61 Address review comments on restart policy
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
2019-05-03 10:36:16 -04:00
cafb68e301 Add a restart event, and make one during restart policy
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
2019-05-03 10:36:16 -04:00
56356d7027 Restart policy should not run if a container is running
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
2019-05-03 10:36:16 -04:00
7ba1b609aa Move to using constants for valid restart policy types
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
2019-05-03 10:36:16 -04:00
f4db6d5cf6 Add support for retry count with --restart flag
The on-failure restart option supports restarting only a given
number of times. To do this, we need one additional field in the
DB to track restart count (which conveniently fills a field in
Inspect we weren't populating), plus some plumbing logic.
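
A hedged usage sketch (the container name, image, and retry count here
are examples only, not from this change):

    podman run --restart=on-failure:3 --name web nginx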

Signed-off-by: Matthew Heon <matthew.heon@pm.me>
2019-05-03 10:36:16 -04:00
0d73ee40b2 Add container restart policy to Libpod & Podman
This initial version does not support restart count, but it works
as advertised otherwise.

Signed-off-by: Matthew Heon <matthew.heon@pm.me>
2019-05-03 10:36:16 -04:00
3fb52f4fbb Add a StoppedByUser field to the DB
This field indicates that a container was explicitly stopped by
an API call, and did not exit naturally. It's used when
implementing restart policy for containers.

Signed-off-by: Matthew Heon <matthew.heon@pm.me>
2019-05-03 10:36:16 -04:00
ccf28a89bd Merge pull request #3039 from mheon/podman_init
Add podman init command
2019-05-02 20:45:44 +02:00
cc9ef4e61b container: drop rootless check
we don't need to treat the rootless case differently now that we use a
single user namespace.

Signed-off-by: Giuseppe Scrivano <giuseppe@scrivano.org>
2019-05-01 18:49:08 +02:00
0b2c9c2acc Add basic structure of podman init command
As part of this, rework the number of workers used by various
Podman tasks to match original behavior - we need an explicit
fallthrough in the switch statement for that block to work as
expected.

Also, trivial change to Podman cleanup to work on initialized
containers - we need to reset to a different state after cleaning
up the OCI runtime.

Signed-off-by: Matthew Heon <matthew.heon@pm.me>
2019-05-01 11:12:24 -04:00
f929b9e4d5 Merge pull request #2501 from mtrmac/fixed-hook-order
RFC: Make hooks sort order locale-independent
2019-04-14 03:09:41 -07:00
61fa40b256 Merge pull request #2913 from mheon/get_instead_of_lookup
Use GetContainer instead of LookupContainer for full ID
2019-04-12 09:38:48 -07:00
f7951c8776 Use GetContainer instead of LookupContainer for full ID
All IDs in libpod are stored as a full container ID. We can get a
container by full ID faster with GetContainer (which directly
retrieves) than LookupContainer (which finds a match, then
retrieves). No reason to use Lookup when we have full IDs present
and available.

Signed-off-by: Matthew Heon <matthew.heon@pm.me>
2019-04-12 10:59:00 -04:00
27d56c7f15 Expand debugging for container cleanup errors
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
2019-04-11 11:05:00 -04:00
97c9115c02 Potentially breaking: Make hooks sort order locale-independent
Don't sort OCI hooks using the locale collation order; it does not
make sense for the same system-wide directory to be interpreted differently
depending on the user's LC_COLLATE setting, and the language-specific
collation order can even change over time.

Besides, the current collation order determination code has never worked
with the most common LC_COLLATE values like en_US.UTF-8.

Ideally, we would like to just order based on Unicode code points
to be reliably stable, but the existing implementation is case-insensitive,
so we are forced to rely on the Unicode case mapping tables at least.

(This gives up on canonicalization and width-insensitivity, potentially
breaking users who rely on these previously documented properties.)

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
2019-04-09 21:08:44 +02:00
09ff62429a Implement podman-remote rm
* refactor command output to use one function
* Add new worker pool parallel operations
* Implement podman-remote umount
* Refactored podman wait to use printCmdOutput()

Signed-off-by: Jhon Honce <jhonce@redhat.com>
2019-04-09 11:55:26 -07:00
d245c6df29 Switch Libpod over to new explicit named volumes
This swaps the previous handling (parse all volume mounts on the
container and look for ones that might refer to named volumes)
for the new, explicit named volume lists stored per-container.

It also deprecates force-removing volumes that are in use. I
don't know how we want to handle this yet, but leaving containers
that depend on a volume that no longer exists is definitely not
correct.

Signed-off-by: Matthew Heon <matthew.heon@pm.me>
2019-04-04 12:26:29 -04:00
849548ffb8 userns: do not use an intermediate mount namespace
We have an issue in the current implementation where the cleanup
process is not able to umount the storage as it is running in a
separate namespace.

Simplify the implementation for user namespaces by not using an
intermediate mount namespace.  To do this, we need to relax the
permissions on the parent directories and allow browsing
them. Containers that are running without a user namespace will still
maintain mode 0700 on their directory.

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
2019-03-29 14:04:44 +01:00
bb69004b8c podman health check phase3
podman will now start a transient service and timer for healthchecks.
this handles the tracking of the timing for health checks.

added the 'started' status which represents the time that a container is
in its start-period.

the systemd timing can be disabled with an env variable of
DISABLE_HC_SYSTEMD="true".

added filter for ps where --filter health=[starting, healthy, unhealthy]
can now be used.
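
hedged usage sketches based on the above (the container name is an
example only):

    podman ps --filter health=unhealthy
    DISABLE_HC_SYSTEMD="true" podman start mycontainer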

Signed-off-by: baude <bbaude@redhat.com>
2019-03-22 14:58:44 -05:00
4ac08d3aa1 ps: fix segfault if the store is not initialized
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
2019-03-19 15:01:54 +01:00
9d81be9614 Make sure builtin volumes have the same ownership and permissions as the image
When creating a new image volume to be mounted into a container, we need to
make sure the new volume matches the ownership and permissions of the path
that it will be mounted on.

For example, if a volume inside of a container image is owned by the database
UID, we want the volume that is mounted onto that path to be owned by the
database UID.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2019-03-15 10:44:44 -04:00
508e08410b container: check containerInfo.Config before accessing it
check that containerInfo.Config is not nil before trying to access
it.

Closes: https://github.com/containers/libpod/issues/2654

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
2019-03-15 10:39:33 +01:00
3b5805d521 Add event on container death
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
2019-03-13 10:18:51 -04:00