This updates the container-device-interface dependency to v0.6.2 and renames the import to
tags.cncf.io/container-device-interface to make use of the new vanity URL.
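For reference, a minimal sketch of the import rename from a consumer's
point of view (the blank import is only there so the snippet compiles
on its own; the old path in the comment is the previous upstream
repository location):

```go
package main

import (
	// Before this change the package was imported as
	//   "github.com/container-orchestrated-devices/container-device-interface/pkg/cdi"
	// It is now pulled in through the new vanity URL:
	_ "tags.cncf.io/container-device-interface/pkg/cdi"
)

func main() {}
```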
[NO NEW TESTS NEEDED]
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Signed-off-by: Evan Lezar <elezar@nvidia.com>
Docker allows passing -1 to indicate the maximum limit allowed for the
current process.
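A minimal sketch of how a -1 value could be resolved to the hard limit
of the current process; the helper name is illustrative, not Podman's
actual code:

```go
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

// resolveNofileLimit is a hypothetical helper: a requested value of -1
// is translated into the maximum (hard) limit of the current process.
func resolveNofileLimit(requested int64) (uint64, error) {
	if requested != -1 {
		return uint64(requested), nil
	}
	var rlim unix.Rlimit
	if err := unix.Getrlimit(unix.RLIMIT_NOFILE, &rlim); err != nil {
		return 0, err
	}
	return rlim.Max, nil
}

func main() {
	limit, err := resolveNofileLimit(-1)
	if err != nil {
		panic(err)
	}
	fmt.Println("resolved nofile limit:", limit)
}
```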
Fixes: https://github.com/containers/podman/issues/19319
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
When --uts=host is provided, the expectation is to use the hostname
from the host, not the container name.
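A minimal sketch of the intended behaviour; the helper and the fallback
to the container name are illustrative, the real logic lives in
Podman's spec generation:

```go
package main

import (
	"fmt"
	"os"
)

// pickHostname is a hypothetical helper: with --uts=host the container
// shares the host UTS namespace, so the host's hostname must be used
// instead of the container name.
func pickHostname(utsIsHost bool, containerName string) (string, error) {
	if utsIsHost {
		return os.Hostname()
	}
	return containerName, nil
}

func main() {
	name, _ := pickHostname(true, "mycontainer")
	fmt.Println(name)
}
```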
Closes: https://github.com/containers/podman/issues/20448
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
This fixes `--security-opt unmask=ALL` still masking the path.
[NO NEW TESTS NEEDED] Can't easily test this as we do not have
access to it in CI.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
All `[]string`s in containers.conf have now been migrated to attributed
string slices, which requires some adjustments in Buildah and Podman.
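An illustrative sketch of the kind of adjustment needed, assuming Env
is one of the migrated containers.conf fields; callers now read the
value through Get() instead of using the field as a plain []string:

```go
package main

import (
	"fmt"

	"github.com/containers/common/pkg/config"
)

func main() {
	cfg, err := config.Default()
	if err != nil {
		panic(err)
	}
	// Before the migration the field was a plain []string:
	//   env := cfg.Containers.Env
	// With attributed string slices the value is read via Get():
	env := cfg.Containers.Env.Get()
	fmt.Println(env)
}
```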
[NO NEW TESTS NEEDED]
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
I don't really like this solution because it cannot be undone by
`--security-opt unmask=all`, but I don't see another way to make this
retroactive. We can potentially change things to do this the right way
in 5.0 (actually have it in the list of masked paths, as opposed to
adding it at spec finalization as we do now).
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Add the podman farm build command, which sends builds out to the nodes
defined in the farm, builds the images on those nodes, and pulls them
back to the local machine to create a manifest list.
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
There is no need to carry these stub implementations that just return
an error anyway. The libpod package can only ever be used on Linux and
FreeBSD, and the remote client should never import libpod directly.
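A sketch of how that restriction is typically expressed with a build
constraint instead of per-platform error stubs (illustrative file, not
actual libpod source):

```go
//go:build linux || freebsd

// Restricting the file to the supported platforms at build time removes
// the need for stub implementations that only return "not supported"
// errors on other platforms.
package libpod
```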
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
There is a potential race condition where we see a message about a
removed container; it could also be caused by a container that is not
mounted. This change should clarify which case is causing it.
Also, if the container does not exist, just warn the user instead of
reporting an error; there is not much the user can do about it.
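A sketch of the warn-instead-of-error pattern described above;
errNoSuchCtr and the remove callback are stand-ins, not Podman's actual
API:

```go
package main

import (
	"errors"

	"github.com/sirupsen/logrus"
)

// errNoSuchCtr stands in for Podman's "no such container" sentinel error.
var errNoSuchCtr = errors.New("no such container")

func cleanupContainer(id string, remove func(string) error) error {
	if err := remove(id); err != nil {
		if errors.Is(err, errNoSuchCtr) {
			// The container is already gone; warn instead of failing,
			// since there is nothing the user can do about it.
			logrus.Warnf("Container %s does not exist: %v", id, err)
			return nil
		}
		return err
	}
	return nil
}

func main() {
	_ = cleanupContainer("abc", func(string) error { return errNoSuchCtr })
}
```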
Fixes: https://github.com/containers/podman/issues/19702
[NO NEW TESTS NEEDED]
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Commit 7ade9721020468438e822b16ed7a65380cc7fbd2 introduced the change
that caused an issue in crun, since crun forces the root user session
instead of the system one when DBUS_SESSION_BUS_ADDRESS is set.
I am addressing it in crun, but for the time being let's also not pass
the variable down to conmon, since the assumption is that when running
as root the containers must be created on the system bus.
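A sketch of dropping the variable from the environment handed to conmon
when running as root; the helper name and the rootless flag are
illustrative:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// conmonEnv filters DBUS_SESSION_BUS_ADDRESS out of the environment when
// running as root, since root containers must be created on the system bus.
func conmonEnv(env []string, rootless bool) []string {
	if rootless {
		return env
	}
	filtered := make([]string, 0, len(env))
	for _, e := range env {
		if strings.HasPrefix(e, "DBUS_SESSION_BUS_ADDRESS=") {
			continue
		}
		filtered = append(filtered, e)
	}
	return filtered
}

func main() {
	fmt.Println(len(conmonEnv(os.Environ(), false)))
}
```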
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Always clean up the exec session when the command specified to "exec"
is not found.
Closes: https://github.com/containers/podman/issues/20392
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Updated the error message to suggest that the user use the --replace
option to instruct Podman to replace the existing external container
with a newly created one.
Closes: #16759
Signed-off-by: Chetan Giradkar <cgiradka@redhat.com>
When trying to connect a container to a network and the connection
already exists, an error should only be raised if the container is
already running (or is in the `ContainerStateCreated` transition)
to mimic the behavior of Docker as described here:
https://github.com/containers/podman/pull/15516#issuecomment-1229265942
For running and already-connected containers, 403 is returned, which
fixes #20365.
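An illustrative sketch of the check; the state type below is a local
stand-in for libpod's container state, not the real type:

```go
package main

import "fmt"

// ctrState is a stand-in for libpod's container state.
type ctrState int

const (
	stateConfigured ctrState = iota
	stateCreated
	stateRunning
	stateStopped
)

// errorOnDuplicateConnect mimics Docker: a duplicate network connect is
// only an error (403 in the compat API) while the container is running
// or in the "created" transition; otherwise the request is a no-op.
func errorOnDuplicateConnect(state ctrState) bool {
	return state == stateRunning || state == stateCreated
}

func main() {
	fmt.Println(errorOnDuplicateConnect(stateStopped)) // false: no error
	fmt.Println(errorOnDuplicateConnect(stateRunning)) // true: return 403
}
```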
Signed-off-by: Philipp Fruck <dev@p-fruck.de>
When a userns and netns are used, we need to let the runtime create the
netns; otherwise the netns is not owned by the right userns and thus
the capabilities would not be correct.
The current restart logic tries to reuse the netns, which is fine if no
userns is used. When one is used we set up a new netns (which is
correct) but forgot to clean up the old one. This resulted in leaked
network namespaces and, because no teardown was ever called, leaked
IPAM assignments; thus a quickly restarting container will run out of
IP space very fast.
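A sketch of the corrected restart ordering with hypothetical helpers;
the real code lives in libpod's network setup:

```go
package main

import "fmt"

// ctr is a stand-in for libpod's container type.
type ctr struct {
	id         string
	usesUserNS bool
	netnsPath  string
}

func teardownNetNS(c *ctr) error {
	// In the real code this also runs network teardown, releasing the
	// container's IPAM assignment.
	fmt.Println("tearing down old netns of", c.id)
	c.netnsPath = ""
	return nil
}

func createNetNS(c *ctr) error {
	// With a userns the OCI runtime creates the netns so it is owned by
	// the correct user namespace and the capabilities are correct.
	c.netnsPath = "/run/netns/" + c.id
	return nil
}

func restartNetNS(c *ctr) error {
	if !c.usesUserNS && c.netnsPath != "" {
		return nil // no userns: the existing netns can simply be reused
	}
	// With a userns the old netns must be torn down first, otherwise both
	// the namespace and its IPAM lease leak on every restart.
	if err := teardownNetNS(c); err != nil {
		return err
	}
	return createNetNS(c)
}

func main() {
	c := &ctr{id: "demo", usesUserNS: true, netnsPath: "/run/netns/demo"}
	if err := restartNetNS(c); err != nil {
		panic(err)
	}
	fmt.Println("new netns:", c.netnsPath)
}
```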
Fixes#18615
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Allow users to specify
podman-remote top $cid -eo "pid comm"
or
podman-remote top $cid -eo pid,comm
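A sketch of accepting both separators when parsing the descriptor
arguments; the helper name is illustrative:

```go
package main

import (
	"fmt"
	"strings"
)

// parseDescriptors splits ps(1)-style descriptor arguments on both
// commas and spaces, so "pid comm" and pid,comm yield the same list.
func parseDescriptors(args []string) []string {
	var out []string
	for _, arg := range args {
		out = append(out, strings.FieldsFunc(arg, func(r rune) bool {
			return r == ',' || r == ' '
		})...)
	}
	return out
}

func main() {
	fmt.Println(parseDescriptors([]string{"pid comm"})) // [pid comm]
	fmt.Println(parseDescriptors([]string{"pid,comm"})) // [pid comm]
}
```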
Fixes: https://github.com/containers/podman/issues/19176
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Add a new system test: test/system/085-top.bats
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
This fixes a regression caused by commit 7e6e267329; unfortunately it
was not caught during review because, for some reason, this works fine
rootless and only fails as root.
Because we set the systemd log level to notice in order to hide the
unit started/stopped messages and prevent spamming the journal, systemd
now also ignores the events we write to journald, as we send them at
info level.
To fix this, we now simply send health_status events at notice level. I
decided against sending all events at notice, as I think info is fine
for them. Whether notice is the right level is of course debatable, but
given that the event may contain the unhealthy message, I think making
it a notice should be OK.
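A minimal sketch of the fix using go-systemd's journal API; the
event-type check is illustrative and the actual wiring in Podman's
events backend differs:

```go
package main

import (
	"github.com/coreos/go-systemd/v22/journal"
)

// writeEvent sends health_status events at notice priority so that a
// unit running with LogLevelMax=notice does not drop them, while all
// other events stay at info.
func writeEvent(eventType, message string) error {
	priority := journal.PriInfo
	if eventType == "health_status" {
		priority = journal.PriNotice
	}
	return journal.Send(message, priority, nil)
}

func main() {
	_ = writeEvent("health_status", "health_status: unhealthy")
}
```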
The main reason this made it through testing is that we do not rely on
the systemd unit to fire healthchecks in the tests, as this is flaky.
There is one test where we rely on it though, and I added a check there
to make sure events are displayed correctly when triggered via systemd.
Fixes#20342
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
When containers are created with a named volume, creation can deadlock
because the create logic tried to lock all volumes in a loop. This is
fine if only a single container is ever created at any given time;
however, because multiple containers can be created at the same time,
they can deadlock on the volumes. This is because the order of the loop
is not stable; in fact, it is based on the order in which the volumes
were specified on the CLI.
So if you create two containers at the same time, one with
`-v vol1:/dir1 -v vol2:/dir2` and the other with
`-v vol2:/dir2 -v vol1:/dir1`, then there is a chance for a deadlock.
One solution could be to order the volumes to prevent the issue (see
the sketch below), but the reason for holding the lock is dubious in
the first place. The goal was to prevent the volume from being removed
in the meantime. However, that could still have happened before we
acquired the lock, so it did not protect against that.
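For reference, a sketch of that ordering alternative (not what this
commit does, which is to drop the per-volume locking entirely):
acquiring the locks in a stable, sorted order means two creates with
reversed -v lists can no longer deadlock each other.

```go
package main

import (
	"sort"
	"sync"
)

// lockVolumesSorted locks the named volume locks in sorted order and
// returns an unlock function; the map stands in for libpod's
// per-volume locks.
func lockVolumesSorted(locks map[string]*sync.Mutex, names []string) func() {
	sorted := append([]string(nil), names...)
	sort.Strings(sorted)
	for _, name := range sorted {
		locks[name].Lock()
	}
	return func() {
		for i := len(sorted) - 1; i >= 0; i-- {
			locks[sorted[i]].Unlock()
		}
	}
}

func main() {
	locks := map[string]*sync.Mutex{"vol1": {}, "vol2": {}}
	unlock := lockVolumesSorted(locks, []string{"vol2", "vol1"})
	defer unlock()
}
```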
Both boltdb and sqlite already prevent us from adding a container with
volumes that do not exist due to their internal consistency checks.
Sqlite even uses FOREIGN KEY relationships, so the schema will prevent
us from doing anything wrong.
The create code currently first checks if the volume exists and, if
not, creates it. I have checked that the db will guarantee that this
will not work:
Boltdb: `no volume with name test2 found in database when adding container xxx: no such volume`
Sqlite: `adding container volume test2 to database: FOREIGN KEY constraint failed`
Keep in mind that this error is normally not seen; only if the volume
is removed between the volume-exists check and adding the container to
the db will this message be seen, which is an acceptable race and a
pre-existing condition anyway.
[NO NEW TESTS NEEDED] Race condition, hard to test in CI.
Fixes#20313
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Use sqlite as the default, but for upgrades keep using boltdb to avoid
breaking anyone. This is done by checking if the boltdb file already
exists; if it does, we have to use it.
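A sketch of the selection logic, assuming the boltdb state file is
named bolt_state.db in the static dir; the helper and its parameters
are illustrative:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// pickDBBackend returns the database backend to use: an explicit
// request wins, an existing boltdb file means "upgrade, keep boltdb",
// and everything else defaults to sqlite.
func pickDBBackend(staticDir, requested string) string {
	if requested != "" {
		return requested
	}
	if _, err := os.Stat(filepath.Join(staticDir, "bolt_state.db")); err == nil {
		return "boltdb"
	}
	return "sqlite"
}

func main() {
	fmt.Println(pickDBBackend("/var/lib/containers/storage/libpod", ""))
}
```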
I added an e2e test to check the new logic and removed the system test
for it. The problem with the system test is that we share the storage
dir there, so all following commands without --db-backend would try to
use boltdb: a single --db-backend boltdb command will create the file,
and then all following commands will use it because of the backwards
compat. In e2e tests each test uses its own --root, so it is not an
issue there.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
When running as a service, the c.state.Mounted flag could get out of
sync if the container is cleaned up through the cleanup process.
To avoid this, always check if the mountpoint is really present before
skipping the mount.
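A sketch of the check described above; a plain stat is used here for
illustration, while the real code consults the actual mount state:

```go
package main

import (
	"fmt"
	"os"
)

// needsMount returns true when the container must be (re)mounted:
// either the recorded state says it is not mounted, or the state says
// mounted but the mountpoint is no longer present on disk.
func needsMount(stateMounted bool, mountPoint string) bool {
	if !stateMounted {
		return true
	}
	if _, err := os.Stat(mountPoint); err != nil {
		return true
	}
	return false
}

func main() {
	fmt.Println(needsMount(true, "/nonexistent/mountpoint")) // true
}
```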
[NO NEW TESTS NEEDED]
Closes: https://github.com/containers/podman/issues/17042
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Podman server logs are mostly full of healthcheck output, making them
hard to navigate. Hence, make the healthcheck service run with
LogLevelMax=notice; this removes the normal output, including the
started/stopped messages from systemd itself.
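A sketch of what this amounts to when the transient healthcheck unit is
created, assuming a systemd-run style invocation; the unit name,
interval, and podman invocation are placeholders:

```go
package main

import (
	"os/exec"
)

// healthcheckCommand builds a systemd-run invocation for a transient
// healthcheck timer; LogLevelMax=notice caps what the unit writes to
// the journal, hiding the routine healthcheck output.
func healthcheckCommand(ctrID, interval string) *exec.Cmd {
	return exec.Command("systemd-run",
		"--unit", "podman-healthcheck-"+ctrID,
		"--on-unit-inactive="+interval,
		"--timer-property=AccuracySec=1s",
		"--property=LogLevelMax=notice",
		"podman", "healthcheck", "run", ctrID)
}

func main() {
	_ = healthcheckCommand("abcdef123456", "30s")
}
```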
Fixes#17856
Signed-off-by: Chetan Giradkar <cgiradka@redhat.com>
Add --rdt-class=COS to the create and run command to enable the
assignment of a container to a Class of Service (COS). The COS
represents a part of the cache based on the Cache Allocation Technology
(CAT) feature that is part of Intel's Resource Director Technology
(Intel RDT) feature set. By assigning a container to a COS, all PIDs of
the container only have access to the cache space defined for that COS.
The COS has to be pre-configured via the resctrl kernel interface. The
cat_l2 and cat_l3 flags in /proc/cpuinfo indicate CAT support for cache
levels 2 and 3, respectively.
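A sketch of what --rdt-class maps to in the OCI runtime spec, using the
runtime-spec types; the actual wiring in Podman/Buildah differs, and
the COS must already exist under /sys/fs/resctrl:

```go
package main

import (
	"fmt"

	spec "github.com/opencontainers/runtime-spec/specs-go"
)

// setRdtClass records the Class of Service in the spec; the OCI runtime
// then places all of the container's PIDs into that resctrl class.
func setRdtClass(s *spec.Spec, cos string) {
	if s.Linux == nil {
		s.Linux = &spec.Linux{}
	}
	if s.Linux.IntelRdt == nil {
		s.Linux.IntelRdt = &spec.LinuxIntelRdt{}
	}
	s.Linux.IntelRdt.ClosID = cos
}

func main() {
	s := &spec.Spec{}
	setRdtClass(s, "COS")
	fmt.Println(s.Linux.IntelRdt.ClosID)
}
```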
Signed-off-by: Wolfgang Pross <wolfgang.pross@intel.com>
Pass the _entire_ environment to conmon instead of selectively enabling
only specific variables. The main reasoning is to make sure that conmon
and the podman-cleanup callback process operate in exactly the same
environment as the initial podman process. Some configuration files may
be passed via environment variables. Podman not passing those down to
conmon has led to subtle and hard-to-debug issues in the past, so
passing everything down will avoid such issues in the future.
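In exec.Cmd terms this boils down to something like the following
sketch; conmonPath and args are placeholders, and Podman's conmon
start-up code is more involved:

```go
package main

import (
	"os"
	"os/exec"
)

func startConmon(conmonPath string, args []string) error {
	cmd := exec.Command(conmonPath, args...)
	// Pass the entire environment of the current podman process instead
	// of a hand-picked allowlist, so conmon and the cleanup callback see
	// exactly the same configuration (e.g. config paths set via env
	// variables).
	cmd.Env = os.Environ()
	return cmd.Start()
}

func main() {
	_ = startConmon("/usr/bin/conmon", nil)
}
```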
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
The processing and setting of the static and volume directories was
scattered across the code base (including c/common) leading to subtle
errors that surfaced in #19938.
There were multiple issues that I try to summarize below:
- c/common loaded the graphroot from c/storage to set the defaults for
  static and volume dir. That ignored Podman's --root flag and
  surfaced in #19938 and other bugs. c/common does not set the
  defaults anymore, which gives Podman the ability to detect when the
  user/admin configured a custom directory (i.e., a non-empty value).
- When parsing the CLI, Podman (ab)uses containers.conf structures to
  set the defaults but also to override them in case the user specified
  a flag. The --root flag overrode the static dir, which is wrong and
  broke a couple of use cases. Now there is a dedicated field for it in
  the "PodmanConfig", which also includes a containers.conf struct.
- The defaults for static and volume dir are now being set correctly
  and adhere to --root (see the sketch below).
- The CONTAINERS_CONF_OVERRIDE env variable has not been passed to the
cleanup process. I believe that _all_ env variables should be passed
to conmon to avoid such subtle bugs.
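A sketch of the default-directory logic from the list above, assuming
the usual libpod/volumes subdirectories under the graph root; the
helper and names are illustrative:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// resolveDirs fills in the static and volume directories only when the
// user/admin left them unset, deriving them from the effective --root
// (graph root) so they adhere to it.
func resolveDirs(graphRoot, staticDir, volumeDir string) (string, string) {
	if staticDir == "" {
		staticDir = filepath.Join(graphRoot, "libpod")
	}
	if volumeDir == "" {
		volumeDir = filepath.Join(graphRoot, "volumes")
	}
	return staticDir, volumeDir
}

func main() {
	s, v := resolveDirs("/custom/root", "", "")
	fmt.Println(s, v)
}
```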
Overall I find that the code and logic are scattered and hard to
understand and follow. I refrained from larger refactorings as I really
just want to get #19938 fixed and then go back to other priorities.
https://github.com/containers/common/pull/1659 broke three pkg/machine
tests. Those have been commented out until getting fixed.
Fixes: #19938
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
This is not really an error: if the anonymous volume is still in use,
it likely means it was transferred to another container with
--volumes-from. This is what the user wants, and it is not as if the
user can act on the logged error anyway. Once the last user of the
volume is removed, it will be removed correctly.
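A sketch of the downgraded handling; errVolumeInUse and removeVolume
are stand-ins for Podman's actual error and removal code:

```go
package main

import (
	"errors"

	"github.com/sirupsen/logrus"
)

var errVolumeInUse = errors.New("volume is being used")

func removeAnonymousVolume(name string, removeVolume func(string) error) error {
	if err := removeVolume(name); err != nil {
		if errors.Is(err, errVolumeInUse) {
			// The volume was handed to another container via
			// --volumes-from; it will be removed together with its last
			// user, so only log at debug level instead of reporting an
			// error.
			logrus.Debugf("Anonymous volume %s is still in use, not removing: %v", name, err)
			return nil
		}
		return err
	}
	return nil
}

func main() {
	_ = removeAnonymousVolume("anonvol", func(string) error { return errVolumeInUse })
}
```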
see https://github.com/containers/podman/pull/19637
Signed-off-by: Paul Holzinger <pholzing@redhat.com>