As shown in #17831, WAL mode plays a role in causing `database is locked`
errors. In theory, those errors should not happen, as the DB should
busy-wait. mattn/go-sqlite3/issues/274 has some comments indicating
that the busy handler behaves differently in WAL mode, which may explain
the error.
For now, let's disable WAL mode and only re-enable it once we have a
clearer understanding of what's going on. Neither the upstream issue nor
the SQLite documentation gives me the clear guidance I would need.
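Roughly, the shape of the change (an illustrative sketch, not the actual
libpod sqlite state code; the mattn/go-sqlite3 DSN parameters and the
timeout value here are assumptions):

package main

import (
    "database/sql"

    _ "github.com/mattn/go-sqlite3"
)

// openDB opens the state database without requesting WAL mode: there is no
// "_journal_mode=WAL" in the DSN, so SQLite stays on its default rollback
// journal, and a busy timeout makes concurrent writers wait instead of
// failing with "database is locked".
func openDB(path string) (*sql.DB, error) {
    dsn := "file:" + path + "?_busy_timeout=100000"
    return sql.Open("sqlite3", dsn)
}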
[NO NEW TESTS NEEDED] - flake is only reproducible in CI.
Fixes: #18356
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
Run $QUADLET and all systemctl/journalctl commands using 'timeout'.
Nothing should ever, ever take more than the default 2 minutes.
Followup to #18514, in which quadlet tests were found to be
taking 9-10 minutes.
Signed-off-by: Ed Santiago <santiago@redhat.com>
- document env vars that can be used
- list up-to-date dependencies
- remove unnecessary GOPATH mention, no longer needed with gomodules
- use make targets to test everything (much faster due to the `-p` option)
- remove the "tests in a container" section, as `make shell` is not a valid target
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
In rootFsSize(), instead of calculating the size of the diff for every
layer of the container's base image, ask the storage library for the sum
of the values it recorded when it first wrote those layers.
In a similar fashion, teach rwSize() to use the library's
ContainerSize() method instead of trying to roll its own.
Replace calls to pkg/util.SizeOfPath() with calls to
github.com/containers/storage/pkg/directory.Size(), which does the same
thing.
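A rough sketch of what that means in code (assuming the
storage.Store.ContainerSize() and directory.Size() signatures; the function
names here are illustrative, not the actual libpod code):

package main

import (
    "github.com/containers/storage"
    "github.com/containers/storage/pkg/directory"
)

// rwSize asks the storage library for the container's read-write layer size
// instead of computing it by hand.
func rwSize(store storage.Store, containerID string) (int64, error) {
    return store.ContainerSize(containerID)
}

// sizeOfPath stands in for the old pkg/util.SizeOfPath helper;
// directory.Size performs the same walk.
func sizeOfPath(path string) (int64, error) {
    return directory.Size(path)
}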
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
Yet another case where tests expect play-kube to be synchronous.
There are probably dozens more of these.
Signed-off-by: Ed Santiago <santiago@redhat.com>
If there's a container defined in multiple directories, use the following
precedence:
$XDG_CONFIG_HOME/containers/systemd/ or ~/.config/containers/systemd/
takes precedence over /etc/containers/systemd/users/$(UID), which in turn
takes precedence over /etc/containers/systemd/users/
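A minimal sketch of that ordering (the helper name and the XDG_CONFIG_HOME
fallback are illustrative, not the actual quadlet generator code):

package main

import (
    "fmt"
    "os"
    "path/filepath"
)

// unitDirsInPrecedenceOrder returns the quadlet search directories from
// highest to lowest precedence.
func unitDirsInPrecedenceOrder(uid int) []string {
    configHome := os.Getenv("XDG_CONFIG_HOME")
    if configHome == "" {
        configHome = filepath.Join(os.Getenv("HOME"), ".config")
    }
    return []string{
        filepath.Join(configHome, "containers/systemd"),      // user's own quadlets win
        fmt.Sprintf("/etc/containers/systemd/users/%d", uid), // admin drop-in for this UID
        "/etc/containers/systemd/users",                      // admin drop-in for all users
    }
}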
Signed-off-by: Petr Lautrbach <lautrbach@redhat.com>
Fixes: https://github.com/containers/podman/issues/16354
Currently we check on the server side, which ends up generating a bad
error message.
$ podman --remote build foo/
ERRO[0000] While reading directory /home/dwalsh/go/src/github.com/containers/podman/foo: EOF
Error: stat /var/tmp/libpod_builder1249622306/build/Dockerfile: no such file or directory
With this change you will get
./bin/podman --remote build foo/
Error: Containerfile not specified and no Containerfile or Dockerfile found in context directory, /home/dwalsh/podman/foo
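The client-side check is essentially this (a sketch; the function name is
illustrative, not the actual bindings code):

package main

import (
    "fmt"
    "os"
    "path/filepath"
)

// validateContextDir fails fast if neither a Containerfile nor a Dockerfile
// exists in the build context, before anything is sent to the server.
func validateContextDir(contextDir string) error {
    for _, name := range []string{"Containerfile", "Dockerfile"} {
        if _, err := os.Stat(filepath.Join(contextDir, name)); err == nil {
            return nil
        }
    }
    return fmt.Errorf("Containerfile not specified and no Containerfile or Dockerfile found in context directory, %s", contextDir)
}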
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
- treadmill script: run root & rootless in parallel, not
sequentially. It's only four jobs, and it seems dumb
to fix root tests, repush, then discover a rootless failure.
- apply-podman-deltas: implement skip_if_rootless(), and
use it to skip a nasty longstanding flake
- bud-tests-in-podman diffs: ugly code to fix a rootless hang.
background: rootless remote tests hang
cause: stray podman server process
root cause: no idea. No clue at all. I just gave up
workaround: seek out and kill stray server processes
Rootless buildah-bud tests are not run in regular CI,
only in the buildah treadmill.
Signed-off-by: Ed Santiago <santiago@redhat.com>
The restart policy of initContainers should not be overridden by the pod;
their restart policy should always be "no".
See #16343
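A minimal sketch of the rule (the helper is illustrative, not the actual
kube play code):

package main

// restartPolicyFor: init containers always get "no", everything else
// inherits the pod-level policy.
func restartPolicyFor(isInitCtr bool, podPolicy string) string {
    if isInitCtr {
        return "no" // init containers run once and must never be restarted
    }
    return podPolicy
}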
Signed-off-by: Tony Duan <tony.duan@gapp.nthu.edu.tw>
I would like to allow admins to control quadlet containers
in users' home directories.
If an admin places a quadlet in
/etc/containers/systemd/users, then all users will run these
quadlet services when they log in.
If an admin places a quadlet in /etc/containers/systemd/users/$(USERNAME),
then only that user will run the quadlet service when they
log in.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
If the container was already cleaned up, we should not try to do it
again. Podman stop will always try to call Cleanup(); if you look at the
podman event log and just keep calling podman stop --all, you see a
cleanup event every time. This is not wanted. Also, in the case of the host
pidns, we report an error every single time; see the linked issue.
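A hedged sketch of the guard (not the actual libpod change; the type and
names are illustrative):

package main

// ctr remembers whether cleanup already ran so a second "podman stop"
// neither emits an extra cleanup event nor re-triggers the host-pidns error.
type ctr struct {
    cleanedUp bool
}

func (c *ctr) stopAndCleanup(cleanup func() error) error {
    if c.cleanedUp {
        return nil // already cleaned up, nothing more to do
    }
    if err := cleanup(); err != nil {
        return err
    }
    c.cleanedUp = true
    return nil
}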
Fixes #18460
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
The logic which checks for duplicated volumes here did not work
correctly because it used filepath.Clean(). However, the writes to the
volDestinations map did not, so the strings no longer matched when, for
example, a path included a trailing slash.
So we can either call Clean() on all paths or on none. I decided to call
it on none, because that is what we do right now; only the check used
Clean().
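A small sketch of the consistent version (names are illustrative): the buggy
check looked up filepath.Clean(dest) while inserting the raw dest, so the
two forms never matched; the fix uses the same, un-Cleaned string for both
the lookup and the write.

package main

import "fmt"

func checkDuplicateVolumes(destinations []string) error {
    volDestinations := make(map[string]bool)
    for _, dest := range destinations {
        if volDestinations[dest] {
            return fmt.Errorf("%s: duplicate mount destination", dest)
        }
        volDestinations[dest] = true
    }
    return nil
}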
Fixes #18454
Signed-off-by: Paul Holzinger <pholzing@redhat.com>