Podman machine list now supports a new option, --all-providers, which lists all machines from all providers.
Signed-off-by: Ashley Cui <acui@redhat.com>
The podman container cleanup process runs asynchronously, and by the time
it gets the lock it is possible that another podman process has already
done the cleanup and then called init() to start the container again. If
the cleanup process gets the lock at that point it will cause very weird
behavior.
This can be observed as CI flakes in the remote start API.
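As a minimal, self-contained sketch of the guard this implies (hypothetical names, not the actual Libpod code), the cleanup path needs to re-check the container state after taking the lock and bail out if another process already re-initialized it:

```go
package main

import (
	"fmt"
	"sync"
)

type container struct {
	mu      sync.Mutex
	running bool // set again when a concurrent podman process calls init()
}

// cleanupIfStillNeeded re-checks the state under the lock, since another
// podman process may have already done the cleanup and re-initialized the
// container before we got the lock.
func (c *container) cleanupIfStillNeeded(cleanup func() error) error {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.running {
		// Someone else restarted the container; running cleanup now
		// would leave it in a very weird state, so do nothing.
		return nil
	}
	return cleanup()
}

func main() {
	c := &container{running: true}
	fmt.Println(c.cleanupIfStillNeeded(func() error { return nil }))
}
```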
Fixes #23754
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
When we use passthrough logging, the --rmi option should not try to
delete the image right away. Simply speaking, passthrough only means do
not print the container id; we should never try to delete the image here,
as this will be done in the cleanup process.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Since commit 458ba5a8af the cleanup process removes the image as well,
so the removal is racy and can cause an error here.
The code tried to ignore the error with errors.Is(), but this never works
across the remote API. However, the API already has an ignore option, so
just use that and fix the error message so that we can easily find the
root cause and do not have to guess where the log was written.
Fixes #23719
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
This started off as an attempt to make `podman stop` on a
container started with `--rm` actually remove the container,
instead of just cleaning it up and waiting for the cleanup
process to finish the removal.
In the process, I realized that `podman run --rmi` was rather
broken. It was only done as part of the Podman CLI, not the
cleanup process (meaning it only worked with attached containers)
and the way it was wired meant that I was fairly confident that
it wouldn't work if I did a `podman stop` on an attached
container run with `--rmi`. I rewired it to use the same
mechanism that `podman run --rm` uses, so it should be a lot more
durable now, and I also wired it into `podman inspect` so you can
tell that a container will remove its image.
Tests have been added for the changes to `podman run --rmi`. No
tests for `stop` on a `run --rm` container as that would be racy.
Fixes #22852
Fixes RHEL-39513
Signed-off-by: Matt Heon <mheon@redhat.com>
The new golangci-lint version 1.60.1 has problems with typecheck when
linting remote files. We have certain packages that should never be
included in remote builds, but the typecheck tries to compile all of
them, which never works, and it seems to ignore the exclude files we
gave it.
The proper way to fix this is to mark all packages we only use locally
with !remote build tags. This is a bit ugly but more correct. I also moved
the DecodeChanges() code around, as it is called from the client, so the
handlers package, which should only be remote, doesn't really fit anyway.
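For illustration, marking a local-only package looks like this at the top of each of its files (the package name here is just a placeholder):

```go
//go:build !remote

// Package localonly is compiled only for local (non-remote) podman builds,
// so golangci-lint's typecheck skips it when linting with the remote tag.
package localonly
```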
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Rootless units placed in `users` would be loaded for root when
`/etc/containers/systemd` is a symlink. In this case, since
`UnitDirAdmin` is hardcoded, `userLevelFilter` always returns `true`.
If `/etc/containers/systemd/users` is a symlink, any user would load
other users' units.
Fix the above two problems.
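One way to make such checks robust against symlinked unit directories (a sketch of the general idea with assumed helper names, not necessarily the exact change made here) is to compare fully resolved paths:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// isUnderDir reports whether path lives under dir, resolving symlinks first
// so a symlinked /etc/containers/systemd cannot defeat the comparison.
func isUnderDir(path, dir string) (bool, error) {
	resolvedDir, err := filepath.EvalSymlinks(dir)
	if err != nil {
		return false, err
	}
	resolvedPath, err := filepath.EvalSymlinks(path)
	if err != nil {
		return false, err
	}
	rel, err := filepath.Rel(resolvedDir, resolvedPath)
	if err != nil {
		return false, err
	}
	return rel != ".." && !strings.HasPrefix(rel, ".."+string(filepath.Separator)), nil
}

func main() {
	// Both paths must exist for EvalSymlinks to succeed.
	ok, err := isUnderDir("/tmp", "/")
	fmt.Println(ok, err)
}
```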
Fixes: #23483
Signed-off-by: Uzinn Kagurazaka <uzinn.kagurazaka@11555511.xyz>
Add support for the ServiceName key for all unit types
Extend the PodInfo struct into UnitInfo to consolidate all prepopulated data into a single map
Use the NodesInfo map instead of the resourceName
Update the UnitInfo in the convert function instead of returning it
No need to replace the extension anymore, just remove it
All e2e tests with dependencies on other Quadlet files moved to a separate section
Add the capability of overriding the service name in the test
Add e2e tests for the new functionality
Adjust integration tests
Update the MAN page
Signed-off-by: Ygal Blum <ygal.blum@gmail.com>
Fixes: e62c928642ad ("Make podman-compose refer to podman-compose(1) when using an external provider")
- test: add coverage for PODMAN_COMPOSE_WARNING_LOGS
Signed-off-by: Petter Mikkelsen <43xhyr9m@anonaddy.me>
Using the ExactArgs(1) function is better because we have less
duplication of the error text, and the ValidArgsFunction uses that to
suggest shell completion. Before this commit the command would suggest
connection names even if there was already one arg set on the cli.
However, because there is the --all option, we still must exclude that
case first.
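Roughly what that looks like with cobra (a simplified, self-contained sketch; the command name and completion values are placeholders, not the actual podman code):

```go
package main

import (
	"fmt"

	"github.com/spf13/cobra"
)

func main() {
	cmd := &cobra.Command{
		Use: "remove [CONNECTION]",
		// --all takes no positional argument, so check it before falling
		// back to ExactArgs(1), which supplies the error text for us.
		Args: func(cmd *cobra.Command, args []string) error {
			if all, _ := cmd.Flags().GetBool("all"); all {
				return cobra.NoArgs(cmd, args)
			}
			return cobra.ExactArgs(1)(cmd, args)
		},
		// Shell completion: stop suggesting connection names once one arg is set.
		ValidArgsFunction: func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {
			if len(args) == 0 {
				return []string{"dev", "staging"}, cobra.ShellCompDirectiveNoFileComp
			}
			return nil, cobra.ShellCompDirectiveNoFileComp
		},
		RunE: func(cmd *cobra.Command, args []string) error {
			fmt.Println("args:", args)
			return nil
		},
	}
	cmd.Flags().Bool("all", false, "remove all connections")
	_ = cmd.Execute()
}
```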
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Change the warning message at runtime to refer to the man page of podman-compose instead of "the documentation"
Add instructions in the man page on how to disable the warning emitted by podman-compose when using an external compose provider
Signed-off-by: marinmo <bugzilla@marinmo.org>
The events code makes use of two channels, one for the events and one
for the resulting error. Then in the main file we have a loop reading
from both channels that should exit on the first error it gets.
However, in case the event channel is closed before the error channel
contains the error, it could cause an early exit because it looked like
all events were done. Commit c46884aa93 fixed that somewhat by checking
for an error in the error channel before exiting. This was still racy,
however, as it added a default case to the select, which makes the
channel check non-blocking; thus the error might not have been sent into
the channel yet.
To fix this we should make it a blocking read to wait for the error in
the channel. Also, the err != nil check can be removed, as we either
return err or nil anyway.
And as a last step, make sure the error channel is closed; that prevents
us from blocking forever in case the main select already processed the
nil error.
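In sketch form (self-contained toy code, not the actual podman events loop), the difference is a blocking read of the error channel once the event channel is closed, plus closing the error channel on the producer side:

```go
package main

import (
	"errors"
	"fmt"
)

type event struct{ name string }

func main() {
	eventCh := make(chan event)
	errCh := make(chan error, 1)

	// Producer: send events, then the final error (or nil), then close both
	// channels. Closing errCh guarantees the consumer never blocks forever
	// if it already handled the nil error.
	go func() {
		defer close(errCh)
		defer close(eventCh)
		eventCh <- event{"start"}
		errCh <- errors.New("stream broke")
	}()

	for {
		select {
		case ev, ok := <-eventCh:
			if !ok {
				// Event channel closed: block until we know whether the
				// producer reported an error, instead of peeking with a
				// non-blocking default case.
				fmt.Println("done, err:", <-errCh)
				return
			}
			fmt.Println("event:", ev.name)
		case err := <-errCh:
			fmt.Println("error:", err)
			return
		}
	}
}
```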
Fixes #23165
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
The driver is now hardcoded again, and there can only be
one type of mount at a time (which one changes over time).
Revert "Make it possible to select the volume driver"
This reverts commit 6630e5cf66cf76aefcfe9caebe5df4f37dd0bdd5.
Signed-off-by: Anders F Björklund <anders.f.bjorklund@gmail.com>
Close a loophole that would allow assigning more memory to a podman
machine than the system has.
Fixes: #18206
Signed-off-by: Brent Baude <bbaude@redhat.com>
Podman machine reset now removes and resets machines from all providers available on the platform.
On Windows, if the user does not have admin privileges, machine reset will only reset WSL, but will emit a warning that it is unable to remove Hyper-V machines without elevated privileges.
Signed-off-by: Ashley Cui <acui@redhat.com>
When the user specifies a Containerfile or Dockerfile with the -f flag in podman build, and the file does not exist, the error should be intuitive to the user.
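For example (a sketch of the idea, not the actual build code), an up-front stat check can surface a clear message:

```go
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

// checkContainerfile gives an intuitive error when the file passed via -f
// does not exist, instead of surfacing a low-level failure later on.
func checkContainerfile(path string) error {
	if _, err := os.Stat(path); errors.Is(err, fs.ErrNotExist) {
		return fmt.Errorf("the specified Containerfile or Dockerfile does not exist, %s: %w", path, err)
	} else if err != nil {
		return err
	}
	return nil
}

func main() {
	if err := checkContainerfile("./Containerfile.missing"); err != nil {
		fmt.Println("Error:", err)
	}
}
```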
Fixed: #22940
Signed-off-by: Kevin Cui <bh@bugs.cc>
The pod was set after we checked the namespace, and the namespace code
only checked the --pod flag but didn't consider the --pod-id-file option.
As such, fix the check to first set the pod option on the spec and then
use that for the namespace. Also make sure we always use an empty default,
otherwise it would be impossible in the backend to know whether a user
requested a specific userns or not; i.e. even with PODMAN_USERNS set in
the environment, a container should still get the userns from the pod
and not use the variable in that case. Therefore unset it from the default
cli value.
There are more issues here around --pod-id-file and cli validation, which
does not consider the option as conflicting with --userns like --pod
does, but I decided to fix the bug at hand and not try to untangle the
entire mess, which would most likely take days.
Fixes #22931
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
When a user specifies an invalid connection in CONTAINER_CONNECTION,
podman should return a proper error saying so. Currently it ignored the
error and rootFlags() just exited early without defining any flags. This
then caused a panic when trying to use the flags later.
In order to address this, first store the connection error in the
PodmanConfig struct instead of aborting right away during flag setup.
This is important as the user might have specified a valid remote
connection via a flag. As such we check all flags and only when none
were given do we return the connection error.
Also, while at it, I noticed that the default connection reported via
podman --help was wrong, as it only used the old containers.conf field
and did not consider the podman-connections.json default.
New regression tests have been added to make sure it behaves correctly.
This fixes the problem reported in PR #22997.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Add a new flag that allows overriding the pull options configured in
the storage.conf file.
E.g. --pull-option="enable_partial_images=false" can be passed to
Podman to disable partial pulls even if they are enabled in the config.
Leave it as a hidden configuration flag for now since the API itself
is marked as experimental in c/storage.
Currently c/storage doesn't honor the overrides; this is being fixed with
https://github.com/containers/storage/pull/1966
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
This is the same as what --squash-all does, and we already support
--squash with --layers=true since that is the default.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>