Primarily, add skip_if_journald_unavailable, which is needed on RHEL.
Secondarily, fix a flipped actual/expected assertion
that made the RHEL failure difficult to understand.
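A minimal sketch of the shape this takes in the system tests (the test
body below is illustrative, not the actual diff):

```bash
@test "something that needs journald" {
    # Skip cleanly where journald is unavailable (e.g. RHEL environments)
    # instead of failing.
    skip_if_journald_unavailable

    run_podman --events-backend=journald run --rm $IMAGE echo hi
    # is() takes the actual value first and the expected value second;
    # the flipped order made the failure output on RHEL confusing.
    is "$output" "hi" "container output"
}
```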
Signed-off-by: Ed Santiago <santiago@redhat.com>
We currently name the container being created during kube play
as ctrName-podName, but this is not how it is done in k8s.
Since we can't change this at the CLI level, as that would be a breaking
change (planned for podman 5.0), add just ctrName as an alias on the
pod's network.
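For illustration (the names below are placeholders, not taken from the
actual change): with a YAML like this, the created container keeps the
combined ctrName-podName name described above, but `myctr` is now also
attached as an alias on the pod's network, so it can be reached on that
network by its plain container name.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: myctr    # "myctr" is added as an alias on the pod's network
    image: alpine
```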
Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>
We already label the issue anyway, and this results in reports without
an actual title, so remove it. This leaves more space for an actually
useful title.
ref: https://github.com/containers/podman/discussions/17431
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Both are Quadlet maintainers and active contributors.
With great power comes great responsibility.
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
- podman-remote unshare returns an error message
with exit code 125 (see the sketch below).
- RestartRemoteService() needs to be run to apply
changes to TMPDIR.
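A standalone sketch of the behaviour in the first point (this is not the
e2e test itself, just a plain Go illustration assuming podman-remote is
on PATH):

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Per the note above, `podman-remote unshare` is expected to fail;
	// here we only surface the exit code (125 in the described case).
	cmd := exec.Command("podman-remote", "unshare", "true")
	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Println("exit code:", exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("error:", err)
	}
}
```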
Signed-off-by: Toshiki Sonoda <sonoda.toshiki@fujitsu.com>
Commit 2f29639bd3aa9 added a UX improvement to clean up/tear down when
running the specified YAML fails. However, the teardown happens
unconditionally, so rerunning the same YAML file tears down the
previously created workload instead of just failing with a name-conflict
error (e.g., "pod already exists"). The regression popped up while
testing the Ansible system role with Podman v4.4.0.
For now, do not tear down at all on error to quickly fix this regression
for the upcoming Podman v4.4.1 release. The UX improvement is still
desired but must be conditional and only happen for newly created
resources, which probably requires moving it down to the backend.
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
The tests for generating username/passwd entries assume that
UID/GID 123/456 do not exist, which is not a safe assumption on
Debian. If a /etc/passwd entry with that UID/GID already exists,
the test will not add a new one with the same UID/GID, and will
fail. Change UID and GID to be 6 digits, because we're a lot less
likely to collide with UIDs and GIDs in use on the system that
way. Could also go further and randomly generate the UID/GID, but
that feels like overkill.
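To illustrate the collision concern (these commands are just a
demonstration, not part of the test; the 6-digit values are arbitrary
examples):

```bash
# Check whether a given UID/GID is already allocated on this system.
getent passwd 123    && echo "UID 123 already exists"     # common on Debian
getent group  456    && echo "GID 456 already exists"
getent passwd 123456 || echo "UID 123456 is free"         # far less likely to collide
getent group  456789 || echo "GID 456789 is free"
```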
Fixes #17366
Signed-off-by: Matt Heon <mheon@redhat.com>
When golangci-lint runs, it will only report 3 errors from the same
linter by default. This is annoying when a new linter is added and you
think there are only 3 errors, let's fix them real quick, only to notice
on the rerun that there are again 3 new errors, and so on.
In CI and locally I want to see all issues at once so I can fix them and
know how much work it is before starting.
With `max-issues-per-linter: 0` and `max-same-issues: 0` it will show
us all errors because 0 means unlimited. By default it will only show 50
per linter and 3 from the same issue.
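For reference, the corresponding `.golangci.yml` settings (these are
standard golangci-lint options):

```yaml
issues:
  # 0 means unlimited; the defaults are 50 per linter and 3 for the same issue.
  max-issues-per-linter: 0
  max-same-issues: 0
```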
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
The new version contains the ginkgolinter, which makes sure the
assertions are more helpful.
Also replace the deprecated os.SEEK_END with io.SeekEnd.
There is also a new `musttag` linter which checks that structs that are
un/marshalled all have json tags. This results in many warnings, so I
disabled the check for now. We can re-enable it if we think it is worth
it, but for now it is way too much work to fix all the reported problems.
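The seek change is mechanical and looks roughly like this (illustrative,
not the exact diff):

```go
package main

import (
	"io"
	"os"
)

func main() {
	f, err := os.Open("/etc/hostname")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Before: f.Seek(0, os.SEEK_END)   // os.SEEK_END is deprecated
	// After:
	if _, err := f.Seek(0, io.SeekEnd); err != nil {
		panic(err)
	}
}
```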
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Document the identifiers used in the journald events backend. Those can
be used to filter Podman events with journalctl and I need them to be
documented for a blog I am writing at the moment.
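For example (the field name below is my assumption here; the
documentation added in this commit is the authoritative list of
identifiers):

```bash
# Assumed field name: PODMAN_EVENT; see the added documentation for the
# identifiers actually written by the journald events backend.
journalctl PODMAN_EVENT=start --since "1 hour ago"
```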
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
Quadlet should not exit with failure if no files to process have been
found. Otherwise, even simple operations such as reloading systemd
will fail as it retriggers generators.
Fixes: #17374
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
Handle a race condition in the REST API when listing networks.
In between listing all containers and inspecting them, they may have
already been removed, so handle this case gracefully.
[NO NEW TESTS NEEDED] as it's a race condition.
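Roughly, the pattern is the following (the function names are
placeholders for the handler helpers, not real podman APIs;
define.ErrNoSuchCtr is libpod's "no such container" error):

```go
// Illustrative sketch only: listAllContainers and containerNetworks are
// placeholders standing in for the real REST handler helpers.
func networksInUse() ([]string, error) {
	var inUse []string
	for _, ctr := range listAllContainers() {
		nets, err := containerNetworks(ctr)
		if err != nil {
			// The container may have been removed between the list and
			// the inspect call; skip it instead of failing the request.
			if errors.Is(err, define.ErrNoSuchCtr) {
				continue
			}
			return nil, err
		}
		inUse = append(inUse, nets...)
	}
	return inUse, nil
}
```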
Fixes: #17341
Signed-off-by: Valentin Rothberg <vrothberg@redhat.com>
- Don't order the container unit before local-fs.target, as that creates
an ordering cycle that triggers other issues (see the snippet below).
- Use the example network in the container unit.
- Only use groups that exist by default for the volume.
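Concretely, the ordering fix amounts to dropping a dependency of this
shape from the example container unit (shown only to illustrate the
cycle; it is not the full example):

```ini
[Unit]
# Ordering the container unit before local-fs.target created the
# ordering cycle mentioned above, so the example no longer carries this:
Before=local-fs.target
```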
Signed-off-by: Timothée Ravier <tim@siosm.fr>