This fixes two problems. First, if a port is both published and exposed
it should not be shown twice; it is enough to show the published one.
Second, if there is a huge range the ports were not grouped, making the
output basically unreadable. Now we group exposed ports like we do with
the normal published ports.
Fixes #23317
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
There is no good reason for the special case: kube and pod units
definitely need it. Volume and network units maybe not, but for
consistency we add it there as well. This makes the docs much easier to
write and to understand for users, as the behavior will not differ.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
As documented in the issue there is no way to wait for system units from
the user session[1]. This causes problems for rootless quadlet units as
they might be started before the network is fully up. While this was
always the case and thus was never really noticed, the main thing that
triggered a bunch of errors was the switch to pasta.
Pasta requires the network to be fully up in order to correctly select
the right "template" interface based on the routes. If it cannot find a
suitable interface it just fails and we cannot start the container,
understandably leading to a lot of frustration from users.
As there is no sign of any movement on the systemd issue we work around
it here by using our own user unit that checks whether the system
session's network-online.target is ready.
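With this in place, the rootless generator adds a dependency on that
helper unit to every generated service, roughly along these lines (the
unit name below is illustrative, not authoritative):

    [Unit]
    # proxies the system session's network-online.target into the user session
    Wants=podman-user-wait-network-online.service
    After=podman-user-wait-network-online.service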
Now for testing it is a bit complicated. While we do now correctly test
the root and rootless generators since commit ada75c0bb8, the resulting
Wants/After= lines differ between them and there is no logic in the
test files themselves to match different output for root vs. rootless.
One idea was to use `assert-key-is-rootless/root`, but that seemed like
more duplication for little reason, so we use a regex and allow both so
the test always passes. To still have some test coverage, add a check in
the system test that asks systemd whether we did indeed get the right
dependencies, where we can check for an exact root/rootless name match.
[1] https://github.com/systemd/systemd/issues/3312
Fixes #22197
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Two flakes seen in the last three months. One of them was in
August, so it's not related to ongoing criu-4.0 problems.
Suspected cause: race waiting for "podman run --rm" container
to transition from stopped to removed.
Solution: allow a 5-second grace period, retrying every second.
Also: add explanations to the Expect()s, remove unnecessary
code, and tighten up the CID check.
Signed-off-by: Ed Santiago <santiago@redhat.com>
...for debugging #24147, because "md5sum mismatch" is not
the best way to troubleshoot bytestream differences.
socat is run inside the container, so this requires building a
new testimage (20241011). Bump to new CI VMs[1] which include it.
[1] https://github.com/containers/automation_images/pull/389
Signed-off-by: Ed Santiago <santiago@redhat.com>
By default golang programs exit 2 on special exit signals that can be
caught and produce a stack trace. However, this behavior can be modified
via GOTRACEBACK=crash[1]: in that case the program does not exit(2) but
rather sends itself SIGABRT, so the parent sees the signal exit and our
test sees exit code 134, i.e. 128 + 6 (SIGABRT), like most shells report
it.
As it turns out, GOTRACEBACK=crash is the default mode on all Fedora and
RHEL rpm builds, as they patch the build with a special
"rpm_crashtraceback" go build tag.
While that change is old and has existed for a very long time, it was
never caught until commit 5e240ab1f5, which switched the old
ExitWithError() check that accepted anything > 0 to accepting only 2.
And as CI only tests upstream builds, which are built without
rpm_crashtraceback, we did not catch it in CI either. Only once a user
actually ran a distro build against the source e2e tests did it fail.
I would like to highlight that running distro builds against upstream
e2e tests is not something we really support or plan to support, but
given this is an easy fix I decided to just fix it here, as any user
with GOTRACEBACK=crash set would face the same issue.
While I am touching this test, also remove the unnecessary
RestoreArtifact() call, which is not needed at all as we do nothing with
the image and it just slows the test down for no reason.
[1] https://pkg.go.dev/runtime#section-sourcefiles
Fixes #24213
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
They no longer work in the latest image update; it is not clear why and
I do not have the time to debug that stuff. I opened #24230 to track it.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
In Debian, EST and MST7MDT are gone by default and have been moved to a
special package[1]. Instead of also installing that package in the
images, let's use different timezones in the test.
[1] 42c0008f86
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Run pasta with --trace and a log file to see if the hangs are caused by
pasta not correctly closing connections as assumed in #24219.
As the log is super verbose, do not log it by default; I added some
extra logic to make sure it is only logged when the test fails.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Quadlet inserts network-online.target Wants/After dependencies to ensure pulling works.
Those systemd statements cannot be subsequently reset.
In the cases where those dependencies are not wanted, we add a new
configuration item called `DefaultDependencies=` in a new section called
[Quadlet]. This section is shared between different unit types.
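A unit opting out of the implicit dependency would then look roughly
like this (sketch; the boolean value shown is an assumption):

    [Quadlet]
    # drop the automatic network-online.target Wants/After
    DefaultDependencies=false

    [Container]
    Image=localhost/myimage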
Fixes #24193
Signed-off-by: Farya L. Maerten <me@ltow.me>
When we are activated by systemd, the code assumed that we had a valid
URL, which was not the case, so it failed to parse the URL, causing the
info call to fail all the time.
This fixes two problems: first, add the schema to the systemd-activated
listener URL so it can be parsed correctly; second, simply do not parse
it as a URL at all, as all we care about in the info call is whether it
is unix and the file path exists.
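Roughly the shape of the new check (illustrative sketch, not the actual
Podman code; assumes "os" and "strings" are imported):

    // all the info call cares about is whether the listener is a unix
    // socket and whether its path exists
    func activeSocketPath(listenerURI string) (string, bool) {
        path, ok := strings.CutPrefix(listenerURI, "unix://")
        if !ok {
            return "", false
        }
        if _, err := os.Stat(path); err != nil {
            return "", false
        }
        return path, true
    }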
Fixes #24152
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Undoing some of my own work here from #24090 now that we have the
ExposedPorts field implemented in inspect. I considered a revert
of that patch, but it's still needed as without it we'd be
including exposed ports when --net=container which is not
correct.
Basically, exposed ports for a container should always go in the
new ExposedPorts field we added. They sometimes go in the Ports
field in NetworkSettings, but only when the container is not
net=host and not net=container. We were always including exposed
ports, which was not correct, but is an easy logical fix.
Also required is a test change to correct the expected behavior
as we were testing for incorrect behavior.
Fixes https://issues.redhat.com/browse/RHEL-60382
Signed-off-by: Matt Heon <mheon@redhat.com>
Similar to github.com/containers/buildah/pull/5761 but not
security critical as Podman does not have an expectation that
mounts are scoped (the ability to write a --mount option is
already the ability to mount arbitrary content into the container
so sneaking arbitrary options into the mount doesn't have
security implications). Still, bad practice to let users inject
anything into the mount command line so let's not do that.
Signed-off-by: Matt Heon <mheon@redhat.com>
A field we missed versus Docker. Matches the format of our
existing Ports list in the NetworkConfig, but only includes
exposed ports (and maps these to struct{}, as they never go to
real ports on the host).
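In the inspect JSON this looks roughly like Docker's format, e.g.
(illustrative values):

    "ExposedPorts": {
        "80/tcp": {},
        "8080/udp": {}
    }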
Fixes https://issues.redhat.com/browse/RHEL-60382
Signed-off-by: Matt Heon <mheon@redhat.com>
There is no reason to validate the args here. First, podman may change
the syntax, so this is just duplication that may hurt us long term. It
also added special handling of some options that just does not make
sense, i.e. removing 0.0.0.0; podman should really be the only parser
here. And more importantly, this prevents variables from being used.
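For example, a line like the following is now handed to podman as-is
instead of being rejected or rewritten by quadlet (sketch; whether and
how the variable expands follows the usual systemd environment
handling):

    [Container]
    Image=localhost/myimage
    PublishPort=${HOST_PORT}:80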
Fixes #24081
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Previously, we didn't bother including exposed ports in the
container config when creating a container with --net=host. Per
Docker this isn't really correct; host-net containers are still
considered to have exposed ports, even though that specific
container can be guaranteed to never use them.
We could just fix this for host containers, but we might as well
make it generic. This patch unconditionally adds exposed ports to
the container config - it was previously conditional on a network
namespace being configured. The behavior of `podman inspect` with
exposed ports when using `--net=container:` has also been
corrected. Previously, we used exposed ports from the container
sharing its network namespace, which was not correct. Now, we use
regular port bindings from the namespace container, but exposed
ports from our own container.
Fixes https://issues.redhat.com/browse/RHEL-60382
Signed-off-by: Matt Heon <mheon@redhat.com>
Change getUnitDirs to maintain a slice in addition to the map and return
the slice (see the sketch below)
Add helper functions to make the code more readable
Adjust unit tests
Restore system test
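A minimal sketch of the map-plus-slice pattern (helper name is made up,
not the actual code):

    // dirs keeps a deterministic order, seen keeps the cheap duplicate check
    var dirs []string
    seen := map[string]struct{}{}
    addDir := func(d string) {
        if _, ok := seen[d]; ok {
            return
        }
        seen[d] = struct{}{}
        dirs = append(dirs, d)
    }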
Signed-off-by: Ygal Blum <ygal.blum@gmail.com>
Use os.ReadDir recursively instead of filepath.WalkDir
Use a map instead of a list to easily find looped symlinks (see the
sketch below)
Update existing tests and add a more elaborate one
Update the man page
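A rough sketch of the recursion with loop protection (illustrative only;
assumes "os" and "path/filepath" are imported):

    // visited is keyed by the resolved path so a symlink loop is detected
    // the second time we reach the same directory
    func walkDirs(path string, visited map[string]struct{}) []string {
        resolved, err := filepath.EvalSymlinks(path)
        if err != nil {
            return nil
        }
        if _, seen := visited[resolved]; seen {
            return nil // already handled, possibly a symlink loop
        }
        visited[resolved] = struct{}{}
        entries, err := os.ReadDir(path)
        if err != nil {
            return nil
        }
        dirs := []string{path}
        for _, e := range entries {
            sub := filepath.Join(path, e.Name())
            if info, err := os.Stat(sub); err == nil && info.IsDir() {
                dirs = append(dirs, walkDirs(sub, visited)...)
            }
        }
        return dirs
    }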
Signed-off-by: Ygal Blum <ygal.blum@gmail.com>
There is no reason to disallow exposed sctp ports at all. As root we can
publish them fine, and as rootless it should error later anyway.
And for the case mentioned in the issue it doesn't make sense either, as
the port is not even published; it is just part of the metadata, which
is totally fine in all cases.
Fixes #23911
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Like we do in system tests, now check for netns leaks in e2e as well.
Because things run in parallel and this dir is shared, we cannot test
after each test, only once per suite. This will be a PITA to debug if
leaks happen, as the netns files do not contain the container ID and are
just random bytes (maybe we should change this?)
Fixes #23715
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
As it turns out, things are not so simple after all...
In podman-py it was reported[1] that waiting might hang. Per our docs,
wait on multiple conditions should exit once the first one is hit, not
once all of them are. However, because the new wait logic never checked
whether the context was cancelled, the goroutine kept running until
conmon exited, and because we used a waitgroup to wait for all of them
to finish, it blocked until that happened.
First, we can remove the waitgroup, as we only need to wait for one of
them anyway via the channel. While this alone fixes the hang, it would
still leak the other goroutine. As there is no way to cancel a
goroutine, all the code must check for a cancelled context in the wait
loop to not leak.
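The shape of the fix, roughly (sketch, not the actual libpod code;
conditionMet and pollInterval are stand-ins for the real names):

    // every condition waiter polls in a loop; honoring ctx.Done() means
    // the losing goroutines exit as soon as the first condition wins and
    // the caller cancels the context
    for {
        select {
        case <-ctx.Done():
            return ctx.Err()
        case <-time.After(pollInterval):
            if conditionMet() {
                return nil
            }
        }
    }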
Fixes 8a943311db ("libpod: simplify WaitForExit()")
[1] https://github.com/containers/podman-py/issues/425
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
...and remove one old skip() for older debian, but leave
two others in place and mark that they're still a problem.
Signed-off-by: Ed Santiago <santiago@redhat.com>
podman-remote events are not flushed, so order is not guaranteed.
This results in CI flakes. Only on Debian, for reasons unknown.
Make the network-connection events test more lenient when remote.
Closes: #23634 (but does not actually fix it)
Signed-off-by: Ed Santiago <santiago@redhat.com>
Minor bump. Fedora VMs now include ShellCheck, so we can
remove the 'dnf install' at CI run time.
Also, FWIW, Debian *vark are now at 1.12 (from 1.9)
VMs built in https://github.com/containers/automation_images/pull/385
Signed-off-by: Ed Santiago <santiago@redhat.com>
Creating networks in a different dir is not parallel safe when running
containers on them, as the network configs may end up using the same
bridge names, which then causes conflicts on the host.
Fixes #23876
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
The kube generate command can now generate a yaml for
the Job kind and the kube play command can create a pod
and containers with podman when passed in a Job yaml.
Add relevant tests and docs for this.
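A minimal example of the kind of Job spec this handles (illustrative
only):

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: example-job
    spec:
      template:
        spec:
          containers:
          - name: worker
            image: quay.io/example/worker:latest
            command: ["echo", "done"]
          restartPolicy: Never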
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>