Podman could not properly read logs written to journald, due to a bug
in the tail configuration.
Added a system test to check this, since e2e tests don't like journald.
Signed-off-by: Ashley Cui <acui@redhat.com>
* --format "table {{.field..." will print fields in a table with
headings. The table keyword is stripped, and spaces between fields
are converted to tabs
* Update parse.MatchesJSONFormat()'s regex to be more inclusive
* Add report.Headers(), which obtains all the field names to be used
as column headers; a map of field names to column headers may be
provided to override the field names
* Update several commands to use the new functions (a sketch of the
table handling follows below)
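A minimal sketch of that table handling, under the assumptions stated
in the bullets (renderTable and the explicit headers argument are
invented for illustration; the real helpers live in podman's report
package):

    package main

    import (
        "os"
        "strings"
        "text/tabwriter"
        "text/template"
    )

    // renderTable mimics the behavior described above: the leading
    // "table " keyword is stripped, spaces between fields become tabs,
    // and tabwriter aligns the columns under the given headers.
    func renderTable(format, headers string, rows []interface{}) error {
        format = strings.TrimPrefix(format, "table ")
        format = strings.ReplaceAll(format, " ", "\t")
        tmpl, err := template.New("row").Parse(format + "\n")
        if err != nil {
            return err
        }
        w := tabwriter.NewWriter(os.Stdout, 8, 2, 2, ' ', 0)
        defer w.Flush()
        if _, err := w.Write([]byte(headers + "\n")); err != nil {
            return err
        }
        for _, row := range rows {
            if err := tmpl.Execute(w, row); err != nil {
                return err
            }
        }
        return nil
    }

    func main() {
        type ctr struct{ Name, Image string }
        _ = renderTable("table {{.Name}} {{.Image}}", "NAME\tIMAGE",
            []interface{}{ctr{"web", "nginx"}, ctr{"db", "postgres"}})
    }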
Signed-off-by: Jhon Honce <jhonce@redhat.com>
Check that there are enough GIDs in the user namespace before adding
supplementary GIDs from /etc/group.
Follow-up for baede7cd2776b1f722dcbb65cff6228eeab5db44
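A rough sketch of the kind of check implied here, assuming
/proc/self/gid_map is consulted to see what is mapped (the function
name is hypothetical):

    package main

    import (
        "bufio"
        "fmt"
        "os"
    )

    // gidIsMapped reports whether gid falls into one of the ranges in
    // /proc/self/gid_map, i.e. whether the GID is usable inside the
    // current user namespace.
    func gidIsMapped(gid uint64) (bool, error) {
        f, err := os.Open("/proc/self/gid_map")
        if err != nil {
            return false, err
        }
        defer f.Close()
        scanner := bufio.NewScanner(f)
        for scanner.Scan() {
            var inside, outside, count uint64
            if _, err := fmt.Sscanf(scanner.Text(), "%d %d %d", &inside, &outside, &count); err != nil {
                return false, err
            }
            if gid >= inside && gid < inside+count {
                return true, nil
            }
        }
        return false, scanner.Err()
    }

    func main() {
        // Only add a supplementary GID from /etc/group if it is mapped.
        ok, err := gidIsMapped(1000)
        fmt.Println(ok, err)
    }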
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Make sure to remove images until there's nothing left to prune.
A single iteration may not be sufficient.
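The fix amounts to looping until a prune pass removes nothing. A
minimal sketch, with the single-pass prune injected as a hypothetical
callback:

    package main

    import (
        "context"
        "fmt"
    )

    // pruneAll keeps calling pruneOnce until a pass removes nothing,
    // since removing one image can make its now-unused parent prunable.
    func pruneAll(ctx context.Context, pruneOnce func(context.Context) ([]string, error)) ([]string, error) {
        var removed []string
        for {
            ids, err := pruneOnce(ctx)
            if err != nil {
                return removed, err
            }
            if len(ids) == 0 {
                return removed, nil
            }
            removed = append(removed, ids...)
        }
    }

    func main() {
        // Fake single-pass prune: each call frees up the next layer.
        queue := [][]string{{"child"}, {"parent"}, {}}
        i := 0
        pruneOnce := func(context.Context) ([]string, error) {
            ids := queue[i]
            i++
            return ids, nil
        }
        removed, _ := pruneAll(context.Background(), pruneOnce)
        fmt.Println(removed) // [child parent]
    }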
Fixes: #7872
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
If the container uses the /dev/fuse device, attempt to load the fuse
kernel module first so that nested containers can use it.
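A hedged sketch of such a preload, shelling out to modprobe when
/dev/fuse is among the requested devices (the helper is invented;
failure is treated as non-fatal since the module may be built in or
already loaded):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // maybeLoadFuse tries to load the fuse kernel module on the host
    // if the container requests /dev/fuse, so that nested containers
    // can use it.
    func maybeLoadFuse(devices []string) {
        for _, dev := range devices {
            if dev != "/dev/fuse" {
                continue
            }
            if err := exec.Command("modprobe", "fuse").Run(); err != nil {
                fmt.Fprintf(os.Stderr, "warning: could not load fuse module: %v\n", err)
            }
            return
        }
    }

    func main() {
        maybeLoadFuse([]string{"/dev/fuse"})
    }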
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1872240
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
At the top of each generated page, add a Synopsis table with:
  - PR number/name, with a link to GitHub
  - Author name(s)
  - Test name (fedora/ubuntu, rootless, etc.)
  - Cirrus build ID (usually uninteresting)
  - Cirrus task ID (usually important), with a link to Cirrus
  - The value of $SPECIALMODE
This is all we can get from the Cirrus environment in
which logformatter runs; we can't get things like
cgroup manager or username that the test runs under.
Note that the table is at the top, which is usually
unseen because we autoscroll to the bottom on
page load. I tentatively think that top is a more
natural place for this info than bottom, but am
willing to listen to arguments against.
Also, one minor tweak: highlight podman commands in
the BATS output. The idea is to make it easier for the eye
to spot those, then copy/paste them to find a reproducer.
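Purely to illustrate the highlighting idea (logformatter itself is a
script that rewrites the log into HTML; the line pattern and class
name below are assumptions):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    // BATS echoes the commands a test runs as lines like "# $ podman ...".
    // Wrapping those in a tag makes them easy to spot and to copy/paste
    // when looking for a reproducer.
    var podmanCmd = regexp.MustCompile(`^(#\s*\$\s*)(podman\b.*)$`)

    func main() {
        scanner := bufio.NewScanner(os.Stdin)
        for scanner.Scan() {
            line := scanner.Text()
            if m := podmanCmd.FindStringSubmatch(line); m != nil {
                line = m[1] + `<b class="bats_cmd">` + m[2] + "</b>"
            }
            fmt.Println(line)
        }
    }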
And, sigh, disable the new 'podman network create'
system test. It is flaking much too much.
Signed-off-by: Ed Santiago <santiago@redhat.com>
When adding /dev to a privileged container using the compatibility
API, we need to make sure we don't pass on devices that are simply
symlinks. This was already being done by specgen, but not on the
compat side.
The entrypoint code that was recently rewritten for the compatibility
layer was also failing due to the odd inputs that Docker is willing to
accept in its JSON, specifically [] vs "". In the case of the latter,
the value was being turned into a []string of length one but with no
content. This would then be used to prefix the command to run in the
container and would fail: for example, " ls" vs "ls".
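Two tiny sketches of the fixes described above (helper names are
invented): skip symlinked /dev entries, and drop empty entrypoint
elements so they never prefix the command:

    package main

    import (
        "fmt"
        "os"
    )

    // isSymlink reports whether a /dev entry is a symlink; such entries
    // must be skipped when populating a privileged container's devices.
    func isSymlink(path string) bool {
        fi, err := os.Lstat(path)
        return err == nil && fi.Mode()&os.ModeSymlink != 0
    }

    // buildCommand joins entrypoint and cmd. Docker clients may send ""
    // or [] for the entrypoint; a []string{""} must not end up prefixing
    // the command (" ls" vs "ls"), so empty elements are dropped.
    func buildCommand(entrypoint, cmd []string) []string {
        var full []string
        for _, e := range entrypoint {
            if e != "" {
                full = append(full, e)
            }
        }
        return append(full, cmd...)
    }

    func main() {
        fmt.Println(isSymlink("/dev/stdout"))                   // true on most Linux systems
        fmt.Println(buildCommand([]string{""}, []string{"ls"})) // [ls]
    }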
Signed-off-by: baude <bbaude@redhat.com>
Extend the system tests to test `podman untag $image` without further
arguments to force removing all tags from the image.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
Fix the lookup of containers and pods in the remote client. User input
can refer to names or IDs of both containers and pods, so there is a
fair chance of collisions (e.g., a "c1" name and a "c1...." ID).
Those collisions are well handled (and battle tested) in the local
client, which directly uses the libpod backend. Hence, the remote
client should not attempt to introduce its own logic, to prevent bugs
and divergence between the local and remote clients. To prevent
collisions such as in #7837, do a container/pod inspect on the
user-provided input to find the corresponding ID and eventually do full
ID comparisons to avoid potential collisions with names.
Note that this has a cost that I am not entirely happy with. Looking at
issue #7837, the collisions are happening when removing the two
containers. Remote container removal is now very chatty with the server
as it first queries for all containers, then iterates over the provided
names or IDs and does a remote inspect to figure out the IDs and find a
matching container object. However, remote removal could just pass the
names and IDs directly to the batch removal endpoint. Querying for all
containers could be avoided if the batch removal endpoint removed them
all when the slice is empty.
In other words, the bug is fixed but there's room for performance
improvements.
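A hedged sketch of the resolution strategy (inspect stands in for the
remote inspect endpoint; all names and IDs are illustrative):

    package main

    import "fmt"

    // resolveToFullID sketches the approach: instead of client-side
    // prefix matching (which lets a "c1" name collide with an ID that
    // starts with "c1"), inspect the user input on the server and use
    // the returned full ID for all subsequent comparisons.
    func resolveToFullID(input string, inspect func(string) (string, error)) (string, error) {
        id, err := inspect(input) // stands in for a remote inspect call
        if err != nil {
            return "", fmt.Errorf("looking up %q: %w", input, err)
        }
        return id, nil
    }

    func main() {
        // Fake server-side resolution for demonstration only.
        inspect := func(nameOrID string) (string, error) {
            if nameOrID == "c1" {
                return "f1d2d2f924e986ac86fdf7b36c94bcdf32beec15b1c3f5e4d8a9f1b2c3d4e5f6", nil
            }
            return "", fmt.Errorf("no such container")
        }
        id, err := resolveToFullID("c1", inspect)
        fmt.Println(id, err)
    }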
Fixes: #7837
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
There is a risk here: if the GID does not exist
within the user namespace, the container will fail to start.
This is only likely to happen in HPC environments, and I think
we should add a field to disable it for that environment.
Added a FIXME for this issue.
We currently have this problem when running a rootful container within
a user namespace: it will fail if the GID is not available.
I looked at potentially checking the user namespace that you are
assigned to, but I believe this would be very difficult to code up and
to figure out.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
The network test created config files with random filenames,
but the network name was static. Since the tests can run in
parallel, podman was not able to distinguish the networks.
We need to make sure that each test has its own config file
and network name. This helps to prevent unnecessary flakes.
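A minimal sketch of the per-test uniqueness idea, assuming a short
random suffix is enough to keep parallel runs apart (the config path
is illustrative):

    package main

    import (
        "crypto/rand"
        "encoding/hex"
        "fmt"
    )

    // uniqueNetworkName returns a network name with a random suffix so
    // that parallel test runs never share a network or its config file.
    func uniqueNetworkName(prefix string) string {
        buf := make([]byte, 4)
        if _, err := rand.Read(buf); err != nil {
            panic(err)
        }
        return fmt.Sprintf("%s-%s", prefix, hex.EncodeToString(buf))
    }

    func main() {
        name := uniqueNetworkName("testnet")
        fmt.Println(name) // e.g. testnet-1a2b3c4d
        // Use the same unique name for the network config file:
        fmt.Println("/etc/cni/net.d/" + name + ".conflist")
    }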
Signed-off-by: Paul Holzinger <paul.holzinger@web.de>
podman volume prune -f
should just tell the prune command not to prompt for confirmation.
It should not be passing the force flag on to the API.
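A sketch of the intended split (prompt text and helper are
illustrative): -f only short-circuits the client-side prompt and is
never serialized into the API request:

    package main

    import (
        "bufio"
        "fmt"
        "io"
        "os"
        "strings"
    )

    // confirm sketches the client-side behavior: --force (-f) only
    // suppresses this prompt; it is not forwarded to the prune endpoint.
    func confirm(force bool, in io.Reader) (bool, error) {
        if force {
            return true, nil
        }
        fmt.Print("Are you sure you want to continue? [y/N] ")
        answer, err := bufio.NewReader(in).ReadString('\n')
        if err != nil {
            return false, err
        }
        return strings.EqualFold(strings.TrimSpace(answer), "y"), nil
    }

    func main() {
        ok, err := confirm(false, os.Stdin)
        if err != nil || !ok {
            return
        }
        // ...call the prune endpoint here, with no force parameter...
    }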
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
* Fix misspelled parameter
* Add http-proxy support for builds
http_proxy must be set in the podman.service unit file, for example:
Environment=http_proxy=<value>
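One conventional way to do that is a systemd drop-in (the path and
proxy URL below are placeholders):

    # /etc/systemd/system/podman.service.d/http-proxy.conf
    [Service]
    Environment=http_proxy=http://proxy.example.com:3128
    Environment=https_proxy=http://proxy.example.com:3128

After adding the drop-in, run systemctl daemon-reload and restart
podman.service for the change to take effect.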
Signed-off-by: Jhon Honce <jhonce@redhat.com>