commit 8b4a79a744ac3fd176ca4dc0e3ae40f75159e090 introduced
oom_score_adj clamping when the container oom_score_adj value is lower
than the current one in a rootless environment. Move the check to
init() time so it is performed every time the container starts, not
only when it is created. This makes the clamping more robust when the
oom_score_adj value changes for the current user session.
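A minimal sketch of the clamping idea, assuming the check reads the
current value from /proc/self/oom_score_adj (helper name and message
are illustrative, not podman's actual internals):

// Hypothetical helper (uses os, strconv, strings and
// github.com/sirupsen/logrus). A rootless user cannot lower
// oom_score_adj below the value inherited from the session, so clamp
// to the current value and warn instead of failing the start.
func clampOOMScoreAdj(requested int) (int, error) {
    data, err := os.ReadFile("/proc/self/oom_score_adj")
    if err != nil {
        return requested, err
    }
    current, err := strconv.Atoi(strings.TrimSpace(string(data)))
    if err != nil {
        return requested, err
    }
    if requested < current {
        logrus.Warnf("Requested oom_score_adj=%d is lower than the current value %d, using %d", requested, current, current)
        return current, nil
    }
    return requested, nil
}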
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Ongoing steps toward RUN-1907: replace Exit(0) with ExitCleanly()
Clean command-line replace, with manual tweaks to two tests:
* clone to a pod: revert to just Exit(0), because podman issues
a namespace warning
* --destroy --force: run "top" in container, not default (shell),
to avoid the 10-second SIGKILL fallback warning
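For reference, the mechanical part of the change in a typical Ginkgo
e2e test looks like this (abbreviated; podmanTest and the matchers
come from podman's e2e test utilities):

session := podmanTest.Podman([]string{"run", "--rm", ALPINE, "true"})
session.WaitWithDefaultTimeout()
// Before: only the exit status was checked.
//   Expect(session).Should(Exit(0))
// After: ExitCleanly() additionally asserts that nothing was written
// to stderr (no level=warning / level=error messages).
Expect(session).Should(ExitCleanly())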
Signed-off-by: Ed Santiago <santiago@redhat.com>
HC events were firing as part of the `exec` call, before it had
even been decided whether the HC succeeded or failed. As such,
the status was not going to be correct any time there was a
change (e.g. the first event after a container went from healthy to
unhealthy would still read healthy). Move the event into the
actual Healthcheck function and throw it in a defer to make sure
it happens at the very end, after logs are written.
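Roughly, the resulting pattern is the following (function and event
names here are illustrative, not the exact podman code):

func (c *Container) runHealthCheck() (define.HealthCheckStatus, error) {
    hcResult := define.HealthCheckDefined
    defer func() {
        // Fires exactly once, at the very end, after the HC log has
        // been written, so the event carries the final status.
        c.newContainerHealthEvent(hcResult)
    }()
    // ... run the probe, update hcResult to healthy/unhealthy ...
    return hcResult, nil
}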
Ignores several conditions that did not log previously (container
in question does not have a healthcheck, or an internal failure
that should not really happen).
Still not a perfect solution. This relies on the HC log being
written, when instead we could just get the status straight from
the function writing the event - so if we fail to write the log,
we can still report a bad status. But if the log wasn't written,
we're in bad shape regardless - `podman ps` would disagree with
the event written, for example.
Fixes #19237
Signed-off-by: Matt Heon <mheon@redhat.com>
Add support to kube play for the TerminationGracePeriodSeconds
field by passing its value to podman's stopTimeout.
Add support to kube generate to emit TerminationGracePeriodSeconds
when stopTimeout is set for a container (podman's default is ignored).
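Roughly, the mapping goes in both directions (the field access and the
defaultStopTimeout name are illustrative):

// kube play: YAML -> container spec. TerminationGracePeriodSeconds is
// a *int64 on the pod spec; stopTimeout is a number of seconds.
if podSpec.TerminationGracePeriodSeconds != nil {
    ctrSpec.StopTimeout = uint(*podSpec.TerminationGracePeriodSeconds)
}

// kube generate: container config -> YAML, only when the container's
// stop timeout differs from podman's default.
if timeout := ctr.StopTimeout(); timeout != defaultStopTimeout {
    grace := int64(timeout)
    podSpec.TerminationGracePeriodSeconds = &grace
}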
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
To avoid the error:
`Error: unable to read YAML as Kube Pod: error unmarshaling JSON: while decoding JSON: json: cannot unmarshal object into Go struct field Container.spec.containers.command of type string`
Also makes it easier to understand, as only the image parameter is needed.
Signed-off-by: Daskan <kevin81991@web.de>
If some volumes are specified in containers.conf, they are currently
added twice to the containers spec causing the container to fail:
$ head -n2 ~/.config/containers/containers.conf
[containers]
volumes = ["/tmp:/tmp"]
$ podman pod create --name foo
7ac7f97f9b74a596332483e4a13e58cb9c8d997e9c5baae46804ae0acc26cbc6
$ podman run --pod=foo alpine true
Error: "/tmp": duplicate mount destination
The fix is to ignore the setting from containers.conf when setting the
pod default configuration.
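A sketch of the fix (function and field names are hypothetical):

// When computing the pod's default configuration, skip the volumes
// that come from containers.conf: they are already applied to each
// container's own spec, so adding them here mounts every destination
// twice.
func podDefaultVolumes(conf *config.Config, forPod bool) []string {
    if forPod {
        return nil
    }
    return conf.Containers.Volumes
}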
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
This test checks that the pod cgroups are created and that the limits
set for a pod cgroup are still enforced after a reboot.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
When a container is created and it is part of a pod, we ensure the pod
cgroup exists so limits can be applied on the pod cgroup.
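As a sketch, the idea is (ensurePodCgroup is a hypothetical helper):

// During container init, make sure the pod cgroup is present before
// the container starts, so pod-level limits can be applied even if
// the cgroup has vanished in the meantime (e.g. after a reboot).
if c.config.Pod != "" {
    pod, err := c.runtime.LookupPod(c.config.Pod)
    if err != nil {
        return err
    }
    if err := ensurePodCgroup(pod); err != nil {
        return err
    }
}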
Closes: https://github.com/containers/podman/issues/19175
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
This allows using --share-parent with --infra=false, so that the
containers in the pod can share the parent cgroup.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
accept only the resources to be used by the pod, so that the function
can more easily be reused by a subsequent patch.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
move the code that removes the pod cgroup to a separate function, in
preparation for the next patch.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Ongoing steps toward RUN-1907: replace Exit(0) with ExitCleanly()
Clean command-line replace, with one manual reversion (commented)
And -- duh! -- skip the stderr check on Debian!
Signed-off-by: Ed Santiago <santiago@redhat.com>
Remove the use of the "latest" flag because it cannot be used on
Windows or macOS.
Fixes #17019
[NO NEW TESTS NEEDED]
Signed-off-by: Brent Baude <bbaude@redhat.com>
Prevent future occurrences of #19894 by making upgrade tests
run any time there's a change to system tests. That's overly
broad: upgrade tests only rely on test/system/helpers.bash,
not test/system/anything-else. IMHO the cost of CI breaking
is higher than the cost of running unnecessary jobs.
Signed-off-by: Ed Santiago <santiago@redhat.com>
When the "rmi" part of "run --rmi" fails due to image being in use
by another container (or for any reason, actually), issue a warning
message, not an error.
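A sketch of the change (not the exact podman code):

// --rmi is best-effort: the container already ran and was removed,
// so failing to remove its image afterwards is not worth an error.
if err := runtime.RemoveImage(ctx, imageName); err != nil {
    logrus.Warnf("Failed to remove image %s: %v", imageName, err)
}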
Signed-off-by: Ed Santiago <santiago@redhat.com>
Under some circumstances podman tries to kill a container
using signal 37, for which unix.SignalName() returns "".
Not helpful. So, when that happens, show "(signal number)".
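Sketch of the fallback (unix is golang.org/x/sys/unix, fmt is from
the standard library):

sigName := unix.SignalName(unix.Signal(num))
if sigName == "" {
    // e.g. real-time signals such as 37 have no name; show the number
    sigName = fmt.Sprintf("(signal %d)", num)
}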
Signed-off-by: Ed Santiago <santiago@redhat.com>
PR #19878 (checking for warnings in system tests) broke upgrade tests.
Reason: my long-ago "optimization" in which, if a PR touches only
tests in X, do not run tests in Y. Unfortunately, upgrade tests
rely on code in the system-test directory. I don't know if this
is fixable; nor if it's an acceptable tradeoff. Please discuss.
Sorry, everyone.
Signed-off-by: Ed Santiago <santiago@redhat.com>
Also add a new `StoppedByUser` field to the container-inspect state
which can be useful during debugging and is now also used in the
regression test. Note that I moved the `false` check one test up so
that we can compare against the previous Podman version, which should
just be stuck in the `wait $ctr` command since it keeps restarting.
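Abbreviated, the new inspect field looks like this (surrounding
fields omitted):

type InspectContainerState struct {
    // ... existing fields ...
    // StoppedByUser indicates that the container was explicitly
    // stopped by the user (e.g. via `podman stop`), rather than
    // exiting on its own or being restarted by the restart policy.
    StoppedByUser bool `json:"StoppedByUser"`
}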
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
The logic here makes little sense: /tmp and /var/tmp are always set
noexec, while /run is not. I don't see a reason to set any of the
three noexec by default.
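A sketch of the resulting defaults (the option lists are illustrative;
the point is only that "noexec" is no longer added):

// Build the default tmpfs mounts for /tmp, /var/tmp and /run with the
// same options; previously /tmp and /var/tmp also carried "noexec".
for _, dest := range []string{"/tmp", "/var/tmp", "/run"} {
    spec.Mounts = append(spec.Mounts, specs.Mount{
        Destination: dest,
        Type:        "tmpfs",
        Source:      "tmpfs",
        Options:     []string{"rw", "rprivate", "nosuid", "nodev", "tmpcopyup"},
    })
}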
Fixes: https://github.com/containers/podman/issues/19886
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
With few exceptions, commands that exit 0 should not emit any
messages with level=warning or =error. Let's start enforcing
that in run_podman.
Allow one-off exceptions, typically when we're testing an
actual warning condition (usual case: "podman stop" where it
times out to SIGKILL). Exceptions are specified via:
run_podman 0+w subcommand...
^^^---- or, rarely, 0+e
"0" stands for "expect exit status 0", which is the default
so it's implicit anyway. The +w / +e (or even +we) is the
new part. I have added it to tests where necessary.
And, because life is what it is, add two global exceptions:
- Debian. Because runc has too many flakes.
- kube. Ditto. Kube commands emit lots of nasty error
messages (yes, level=error) that don't seem to affect
results.
Similar to #18442
Signed-off-by: Ed Santiago <santiago@redhat.com>