Switch the defaults for --layers, --force-rm, and --pull-always
from Buildah's to Podman's.
Only the default values are overridden.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
We should just bind mount the original container's /etc/resolv.conf and
/etc/hosts into the new container. Changes to resolv.conf and hosts are
then seen by all containers; this matches Docker's behaviour.
To make this work, these files need to carry a shared SELinux label.
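A minimal sketch of the idea, not libpod's actual wiring; it assumes the
golang.org/x/sys/unix and opencontainers go-selinux packages and an
illustrative shared label:

```go
package network

import (
	"github.com/opencontainers/selinux/go-selinux"
	"golang.org/x/sys/unix"
)

// shareResolvConf gives the original container's resolv.conf a shared
// SELinux label, then bind mounts it into the new container so edits
// stay visible to every container joining the namespace.
func shareResolvConf(origResolvConf, targetResolvConf, sharedLabel string) error {
	// e.g. sharedLabel = "system_u:object_r:container_file_t:s0"
	if err := selinux.Chcon(origResolvConf, sharedLabel, false); err != nil {
		return err
	}
	return unix.Mount(origResolvConf, targetResolvConf, "", unix.MS_BIND, "")
}
```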
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
This seems to be a needless restriction. We make a copy of the
host's /etc/resolv.conf file, so these changes do not modify the
host.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
If the user specifies a network namespace and /etc/netns/XXX/resolv.conf
exists, we should use it rather than /etc/resolv.conf.
Also fail more cleanly if the user specifies an invalid network namespace.
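A sketch of the lookup order, assuming named namespaces are bind-mounted
under /run/netns as ip netns does; the function and error wording are
illustrative:

```go
package network

import (
	"fmt"
	"os"
	"path/filepath"
)

// resolvConfPath picks the resolv.conf to copy for a container joining
// the named network namespace: ip netns writes per-namespace config under
// /etc/netns/<name>/, which should win over the host's /etc/resolv.conf.
func resolvConfPath(nsName string) (string, error) {
	// Fail cleanly if the namespace itself does not exist.
	if _, err := os.Stat(filepath.Join("/run/netns", nsName)); err != nil {
		return "", fmt.Errorf("invalid network namespace %q: %w", nsName, err)
	}
	nsResolv := filepath.Join("/etc/netns", nsName, "resolv.conf")
	if _, err := os.Stat(nsResolv); err == nil {
		return nsResolv, nil
	}
	return "/etc/resolv.conf", nil
}
```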
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
When joining an existing namespace, we were not maintaining the
current working directory, causing commands like export -o to fail
when they did not use absolute paths.
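A hedged sketch of the fix's shape: capture the working directory before
the join and restore it afterwards (preserveCwd and fn are illustrative,
not libpod's code):

```go
package nsjoin

import "os"

// preserveCwd runs fn (which may join another namespace and lose the
// current directory) and restores the working directory afterwards, so
// relative paths such as "export -o out.tar" keep working.
func preserveCwd(fn func() error) error {
	cwd, err := os.Getwd()
	if err != nil {
		return err
	}
	if err := fn(); err != nil {
		return err
	}
	return os.Chdir(cwd)
}
```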
Closes: https://github.com/containers/libpod/issues/2381
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>
Vendor Buildah 1.7 into Podman.
Also vendor the latest imagebuilder and the changes needed for
`build --target`.
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>
In the case of the remote client, it was decided to hide the --latest
flag to avoid confusing end users about what the "last" container,
volume, or pod is.
Signed-off-by: baude <bbaude@redhat.com>
The remote client is currently weak at carrying error messages
over the varlink interface and displaying something useful to users
and developers for debugging. This is a starting point
to improve that user experience.
Signed-off-by: baude <bbaude@redhat.com>
If --runtime is specified, it takes priority over the
runtime_path option, which was added for backward compatibility.
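Illustratively, the precedence looks like the toy function below; the
names are made up, and runtime_path here models the list-valued config
option:

```go
package config

// effectiveRuntime applies the precedence described above: an explicit
// --runtime flag wins over the legacy runtime_path config option, which
// in turn wins over the built-in default.
func effectiveRuntime(flagRuntime string, runtimePath []string, defaultRuntime string) string {
	if flagRuntime != "" {
		return flagRuntime
	}
	if len(runtimePath) > 0 {
		return runtimePath[0]
	}
	return defaultRuntime
}
```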
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Enable the remote client to inspect a pod. As a bonus, enable the
podman pod exists command, which returns 0 or 1 depending on
whether the given pod exists.
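For example, a caller can rely on the exit status alone; this small Go
program (the pod name is assumed) shells out and maps the status back to
a boolean:

```go
package main

import (
	"fmt"
	"os/exec"
)

// podExists shells out to `podman pod exists`, which prints nothing and
// communicates purely through its exit status: 0 if the pod exists, 1 if not.
func podExists(name string) (bool, error) {
	err := exec.Command("podman", "pod", "exists", name).Run()
	if err == nil {
		return true, nil
	}
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
		return false, nil
	}
	return false, err
}

func main() {
	exists, err := podExists("mypod")
	if err != nil {
		panic(err)
	}
	fmt.Println("pod exists:", exists)
}
```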
Signed-off-by: baude <bbaude@redhat.com>
There is no native package for the required Go version, so the packaged
version must stay installed; otherwise all of its supporting dependencies
(like go-md2man) would be removed as well. Fix this by installing Go from
the Google-released tarball into /usr/local/go and setting $GOROOT to
point there.
Also, include a small fix for hack/get_ci_vm.sh not installing
testing dependencies because of an outdated assumption.
***CIRRUS: REBUILD IMAGES***
Signed-off-by: Chris Evich <cevich@redhat.com>
Tests are running slower than their normal slow pace; bump the timeout
to allow them to pass until a better solution (for the slow Ubuntu tests)
can be found.
Signed-off-by: Chris Evich <cevich@redhat.com>
Introduce a new section in the contribution guide
that explains how to start hacking on libpod:
- setting up the environment
- installing tools
- using make
- building podman
- testing your changes locally
Signed-off-by: Hervé Beraud <hberaud@redhat.com>
The original intent behind the requirement was to ensure that, if
two SHM lock structs were open at the same time, we would not
make such a runtime available to the user, and would clean it up
instead.
It turns out that we don't even need to open a second SHM lock
struct: if we get an error mapping the first one due to a lock
count mismatch, we can just delete it, as it cleans itself up
when it errors. So there's no reason not to return a valid
runtime.
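A self-contained sketch of the resulting control flow, with stand-in
constructors rather than libpod's real ones:

```go
package lock

import "errors"

// Manager is a pared-down stand-in for libpod's SHM lock manager; the two
// constructors below approximate, but are not, the real ones in libpod/lock.
type Manager struct{ path string }

func openSHMLockManager(path string, numLocks uint32) (*Manager, error) {
	// The real code mmaps the existing segment and fails (cleaning up as
	// it does so) when the stored lock count does not match numLocks.
	return nil, errors.New("lock count mismatch")
}

func newSHMLockManager(path string, numLocks uint32) (*Manager, error) {
	// The real code creates a fresh segment sized for numLocks.
	return &Manager{path: path}, nil
}

// openOrRecreate captures the new behaviour: a failed open has already
// torn the old segment down, so just create a fresh one instead of
// refusing to return a runtime.
func openOrRecreate(path string, numLocks uint32) (*Manager, error) {
	if m, err := openSHMLockManager(path, numLocks); err == nil {
		return m, nil
	}
	return newSHMLockManager(path, numLocks)
}
```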
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
This command allows renumbering Podman locks after upgrading to a
Podman with SHM locks from a 1.0 or earlier branch, or after the
number of available locks has changed.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
When we're renumbering locks, we're destroying all existing
allocations anyway, so destroying the old lock struct is not a
particularly big deal. Existing long-lived libpod instances will
continue to use the old locks, but that will be solved in a
follow-on.
Also, solve an issue with returning error values in the C code.
There were a few places where we returned errno without it being
set, so make them return actual error codes.
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
We can't do renumbering after init - we need to open a
potentially invalid locks file (too many/too few locks), and then
potentially delete the old locks and make new ones.
We need to be in init to bypass the checks that would otherwise
make this impossible.
This leaves us with two choices: make RenumberLocks a separate
entrypoint from NewRuntime, duplicating a lot of configuration
loading code (we need to know where the locks live, how many there
are, etc.) - or modify NewRuntime to allow renumbering during it.
Previous experience says the first is not really a viable option
and produces massive code bloat, so the second it is.
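Sketched with illustrative names (not libpod's actual API), the second
option looks like a functional option consumed inside NewRuntime:

```go
package libpod

// Runtime and RuntimeOption mirror libpod's functional-option pattern for
// NewRuntime; everything here is illustrative.
type Runtime struct {
	doRenumber bool
	numLocks   uint32
}

type RuntimeOption func(*Runtime) error

// WithRenumber asks NewRuntime to skip the usual lock-count validation
// and rebuild the SHM segment before any containers take locks.
func WithRenumber() RuntimeOption {
	return func(r *Runtime) error {
		r.doRenumber = true
		return nil
	}
}

func NewRuntime(options ...RuntimeOption) (*Runtime, error) {
	r := &Runtime{numLocks: 2048}
	for _, opt := range options {
		if err := opt(r); err != nil {
			return nil, err
		}
	}
	if r.doRenumber {
		// Reuse the configuration already loaded here (lock path, lock
		// count, ...) instead of duplicating it in a second entrypoint.
	}
	return r, nil
}
```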
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
I was looking into why we have locks in volumes, and I'm fairly
convinced they're unnecessary.
We don't have a state whose accesses we need to guard with locks
and syncs. The only real purpose for the lock was to prevent
concurrent removal of the same volume.
Looking at the code, concurrent removal ought to be fine with a
bit of reordering - one or the other might fail, but we will
successfully evict the volume from the state.
Also, remove the 'prune' bool from RemoveVolume. None of our
other API functions accept it, and it only served to toggle off
more verbose error messages.
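A sketch of the reordering, with illustrative types standing in for
libpod's state: evict from the state first, so the loser of a concurrent
race fails harmlessly while the volume is still guaranteed to be gone:

```go
package volumes

import (
	"errors"
	"sync"
)

var errNoSuchVolume = errors.New("no such volume")

type state struct {
	mu      sync.Mutex
	volumes map[string]struct{}
}

// evict removes the volume from the state, reporting whether it was there.
func (s *state) evict(name string) bool {
	s.mu.Lock()
	defer s.mu.Unlock()
	if _, ok := s.volumes[name]; !ok {
		return false
	}
	delete(s.volumes, name)
	return true
}

// RemoveVolume needs no per-volume lock: evict first, so with two
// concurrent removers one fails with errNoSuchVolume and the other
// proceeds to clean up the disk.
func (s *state) RemoveVolume(name string, removeFromDisk func(string) error) error {
	if !s.evict(name) {
		return errNoSuchVolume
	}
	return removeFromDisk(name)
}
```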
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
Renumber is a way of renumbering container locks after the number
of locks available has changed.
For now, renumber only works with containers.
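In outline (illustrative types, not libpod's API), the renumbering pass
walks the containers and assigns each a lock from the new segment:

```go
package lock

// container and lockManager are stand-ins for libpod's types.
type container struct {
	id     string
	lockID uint32
}

type lockManager interface {
	AllocateLock() (uint32, error)
}

// renumberLocks hands every container a fresh lock from the resized
// segment and persists the new index so future runtimes pick it up.
func renumberLocks(mgr lockManager, ctrs []*container, save func(*container) error) error {
	for _, ctr := range ctrs {
		newID, err := mgr.AllocateLock()
		if err != nil {
			return err
		}
		ctr.lockID = newID
		if err := save(ctr); err != nil {
			return err
		}
	}
	return nil
}
```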
Signed-off-by: Matthew Heon <matthew.heon@pm.me>