podman/libpod/image/pull.go
Jhon Honce d924494f56 Initial commit on compatible API
Signed-off-by: Jhon Honce <jhonce@redhat.com>

Create service command

Build with:

$ cd cmd/service && go build .

Run under socket activation and query an endpoint:

$ systemd-socket-activate -l 8081 cmd/service/service &
$ curl http://localhost:8081/v1.24/images/json

Signed-off-by: Jhon Honce <jhonce@redhat.com>

Correct Makefile

Signed-off-by: Jhon Honce <jhonce@redhat.com>

Two more stragglers

Signed-off-by: Jhon Honce <jhonce@redhat.com>

Report errors back as http headers

Signed-off-by: Jhon Honce <jhonce@redhat.com>

Split out handlers, updated output

Output aligned to docker structures

Signed-off-by: Jhon Honce <jhonce@redhat.com>

Refactored routing, added more endpoints and types

* Encapsulated all the routing information in the handler_* files.
* Added more serviceapi/types, including podman additions. See Info

Signed-off-by: Jhon Honce <jhonce@redhat.com>

Cleaned up code, implemented info content

* Move Content-Type check into serviceHandler
* Custom 404 handler showing the URL, mostly for debugging
* Refactored images: better method names and explicit HTTP codes
* Added content to /info
* Added podman fields to Info struct
* Added Container struct

Signed-off-by: Jhon Honce <jhonce@redhat.com>

Add a bunch of endpoints

containers: stop, pause, unpause, wait, rm
images: tag, rmi, create (pull only)

Signed-off-by: baude <bbaude@redhat.com>

Add even more handlers

* Add serviceapi/Error() to improve error handling (a rough sketch follows
  this list)
* Better support for API return payloads
* Renamed unimplemented to unsupported; these are generic endpoints
  we don't intend to ever support.  Swarm is broken out since it uses
  different HTTP codes to signal that the node is not in a swarm.
* Added more types
* API Version broken out so it can be validated in the future
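
The serviceapi/Error() helper itself is not shown in this log. As a rough
sketch of the idea, assuming a docker-style {"message": ...} error body and
illustrative names (apiError is not the real type), it could look like:

package serviceapi

import (
	"encoding/json"
	"net/http"

	"github.com/sirupsen/logrus"
)

// apiError mirrors the docker engine's JSON error body: {"message": "..."}.
// The type and function shown here are illustrative, not the actual code.
type apiError struct {
	Message string `json:"message"`
}

// Error logs the failure and writes a docker-compatible JSON error payload
// with an explicit HTTP status code.
func Error(w http.ResponseWriter, code int, err error) {
	logrus.Errorf("request failed: %v", err)
	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(code)
	if encErr := json.NewEncoder(w).Encode(apiError{Message: err.Error()}); encErr != nil {
		logrus.Errorf("unable to write JSON error: %v", encErr)
	}
}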

Signed-off-by: Jhon Honce <jhonce@redhat.com>

Refactor to introduce ServiceWriter

Signed-off-by: Jhon Honce <jhonce@redhat.com>

populate pods endpoints

/libpod/pods/..

exists, kill, pause, prune, restart, remove, start, stop, unpause

Signed-off-by: baude <bbaude@redhat.com>

Add components to Version, fix Error body

Signed-off-by: Jhon Honce <jhonce@redhat.com>

Add images pull output, fix swarm routes

* docker-py tests/integration/api_client_test.py pass 100%
* docker-py tests/integration/api_image_test.py pass 4/16
  + Remaining test failures involve services podman does not support

Signed-off-by: Jhon Honce <jhonce@redhat.com>

pods endpoint submission 2

add create and others; only top and stats are left.

Signed-off-by: baude <bbaude@redhat.com>

Update pull image to work from empty registry

Signed-off-by: Jhon Honce <jhonce@redhat.com>

pod create and container create

First pass at pod and container create. The container create does not
quite work yet, but it is very close; pod create needs a partial
rewrite. Also broke the DELETE (rm/rmi) handling out into specific handler funcs.

Signed-off-by: baude <bbaude@redhat.com>

Add docker-py demos, GET .../containers/json

* Update serviceapi/types to reflect libpod not podman
* Refactored removeImage() to provide non-streaming return

Signed-off-by: Jhon Honce <jhonce@redhat.com>

create container part2

Finished the minimal config needed for container create. Started demo.py
for an upcoming talk.

Signed-off-by: baude <bbaude@redhat.com>

Stop server after honoring request

* Remove casting for method calls
* Improve WriteResponse()
* Update the Container API type to match the docker API (a partial sketch
  follows this list)
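
For reference, the docker API's container-list entries carry fields like the
ones below. This is only a trimmed-down sketch of such a Container type, not
necessarily the exact struct used here:

package handlers

// Container mirrors a subset of the fields docker returns for
// GET /containers/json entries; the real type carries many more.
type Container struct {
	ID      string            `json:"Id"`
	Names   []string          `json:"Names"`
	Image   string            `json:"Image"`
	ImageID string            `json:"ImageID"`
	Command string            `json:"Command"`
	Created int64             `json:"Created"`
	State   string            `json:"State"`
	Status  string            `json:"Status"`
	Labels  map[string]string `json:"Labels"`
}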

Signed-off-by: Jhon Honce <jhonce@redhat.com>

fix namespace assumptions

cleaned up namespace issues with libpod.

Signed-off-by: baude <bbaude@redhat.com>

wip

Signed-off-by: baude <bbaude@redhat.com>

Add sliding window when shutting down server

* Added a Timeout rather than closing down the service after each call
* Added the gorilla/schema dependency for Decode'ing query parameters
  (a sketch of the pattern follows this list)
* Improved error handling
* Container logs returned and multiplexed for stdout and stderr
  * .../containers/{name}/logs?stdout=True&stderr=True
* Container stats
  * .../containers/{name}/stats
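
The gorilla/schema pattern referenced above looks roughly like this (a minimal
sketch; logsQuery and logsHandler are illustrative names, not the service's
actual code):

package handlers

import (
	"net/http"

	"github.com/gorilla/schema"
)

var decoder = schema.NewDecoder()

// logsQuery collects the query parameters for .../containers/{name}/logs.
type logsQuery struct {
	Stdout bool `schema:"stdout"`
	Stderr bool `schema:"stderr"`
}

func logsHandler(w http.ResponseWriter, r *http.Request) {
	query := logsQuery{}
	// Decode ?stdout=...&stderr=... directly into the struct.
	if err := decoder.Decode(&query, r.URL.Query()); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	if !query.Stdout && !query.Stderr {
		http.Error(w, "at least one of stdout or stderr must be set", http.StatusBadRequest)
		return
	}
	// ... fetch the container logs and multiplex stdout/stderr here ...
}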

Signed-off-by: Jhon Honce <jhonce@redhat.com>

Improve error handling

* Add check for at least one std stream required for /containers/{id}/logs
* Add check for state in /containers/{id}/top
* Fill in more fields for /info
* Fixed error checking in service start code

Signed-off-by: Jhon Honce <jhonce@redhat.com>

get the rest of the image tests to pass

Signed-off-by: baude <bbaude@redhat.com>

linting our content

Signed-off-by: baude <bbaude@redhat.com>

more linting

Signed-off-by: baude <bbaude@redhat.com>

more linting

Signed-off-by: baude <bbaude@redhat.com>

pruning

Signed-off-by: baude <bbaude@redhat.com>

[CI:DOCS]apiv2 pods

Migrate from using args in the URL to using a JSON struct in the body for
pod create; a minimal sketch of the handler side follows.
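
A minimal sketch of decoding such a body, assuming a hypothetical
podCreateBody type (the real input struct carries many more fields):

package handlers

import (
	"encoding/json"
	"net/http"
)

// podCreateBody stands in for the real pod-create input type; only two
// illustrative fields are shown here.
type podCreateBody struct {
	Name   string            `json:"name"`
	Labels map[string]string `json:"labels,omitempty"`
}

func podCreateHandler(w http.ResponseWriter, r *http.Request) {
	input := podCreateBody{}
	// Read the pod options from the JSON request body instead of URL args.
	if err := json.NewDecoder(r.Body).Decode(&input); err != nil {
		http.Error(w, "failed to decode pod create body: "+err.Error(), http.StatusBadRequest)
		return
	}
	// ... build the pod spec from input and create the pod here ...
}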

Signed-off-by: baude <bbaude@redhat.com>

fix handler_images prune

prune's API changed slightly to deal with filters.

Signed-off-by: baude <bbaude@redhat.com>

[CI:DOCS]enabled base container create tests

Enable the base container create tests, which allow us to get further into
the stop, kill, etc. tests. Many new tests now pass.

Signed-off-by: baude <bbaude@redhat.com>

serviceapi errors: append error message to API message

I dearly hope this does not break any other tests, but debugging
"Internal Server Error" is not helpful to any user.  In case it
breaks tests, we can revert the commit - that's why it's a small one.

Signed-off-by: Valentin Rothberg <rothberg@redhat.com>

serviceAPI: add containers/prune endpoint

Signed-off-by: Valentin Rothberg <rothberg@redhat.com>

add `service` make target

Also remove the non-functional sub-Makefile.

Signed-off-by: Valentin Rothberg <rothberg@redhat.com>

add make targets for testing the service

 * `sudo make run-service` for running the service.

 * `DOCKERPY_TEST="tests/integration/api_container_test.py::ListContainersTest" \
 	make run-docker-py-tests`
   for running a specific test.  Run all tests by leaving the env
   variable empty.

Signed-off-by: Valentin Rothberg <rothberg@redhat.com>

Split handlers and server packages

The files were split to help contain bloat. The api/server package will
contain all code related to the functioning of the server, while
api/handlers will have all the code related to implementing the
endpoints.

api/server/register_* will contain the methods for registering
endpoints.  Additionally, they will have the comments for generating the
swagger spec file.

See api/handlers/version.go for a small example handler;
api/handlers/containers.go contains much more complex handlers. A
compressed sketch of the pattern follows.
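
The referenced files are not reproduced in this log. Compressed into a single
illustrative file (in the tree the handler would live in api/handlers and the
registration, with its swagger comments, in api/server/register_*.go; all
names below are simplified assumptions), the pattern looks like:

package server

import (
	"encoding/json"
	"net/http"

	"github.com/gorilla/mux"
)

// versionHandler stands in for a handler that would live in api/handlers.
func versionHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")
	_ = json.NewEncoder(w).Encode(map[string]string{"ApiVersion": "1.24"})
}

// registerVersionHandlers shows the api/server/register_* pattern: the route
// wiring and the swagger comment used for spec generation sit together.
func registerVersionHandlers(r *mux.Router) error {
	// swagger:operation GET /version compat getVersion
	// ---
	// summary: Component versions
	// produces:
	// - application/json
	// responses:
	//   '200':
	//     description: version payload
	r.HandleFunc("/version", versionHandler).Methods(http.MethodGet)
	return nil
}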

Signed-off-by: Jhon Honce <jhonce@redhat.com>

[CI:DOCS]enabled more tests

Signed-off-by: baude <bbaude@redhat.com>

[CI:DOCS]libpod endpoints

Small refactor for libpod inclusion; began adding endpoints.

Signed-off-by: baude <bbaude@redhat.com>

Implement /build and /events

* Include crypto libraries for future ssh work

Signed-off-by: Jhon Honce <jhonce@redhat.com>

[CI:DOCS]more image implementations

convert to using query structs (among other changes), including
new endpoints.

Signed-off-by: baude <bbaude@redhat.com>

[CI:DOCS]add bindings for golang

Signed-off-by: baude <bbaude@redhat.com>

[CI:DOCS]add volume endpoints for libpod

create, inspect, ls, prune, and rm

Signed-off-by: baude <bbaude@redhat.com>

[CI:DOCS]apiv2 healthcheck enablement

wire up container healthchecks for the api.

Signed-off-by: baude <bbaude@redhat.com>

[CI:DOCS]Add mount endpoints

Via the API, allow the ability to mount a container and list container
mounts.

Signed-off-by: baude <bbaude@redhat.com>

[CI:DOCS]Add search endpoint

add search endpoint with golang bindings

Signed-off-by: baude <bbaude@redhat.com>

[CI:DOCS]more apiv2 development

misc population of methods, etc

Signed-off-by: baude <bbaude@redhat.com>

rebase cleanup and epoch reset

Signed-off-by: baude <bbaude@redhat.com>

[CI:DOCS]add more network endpoints

also, add some initial error handling and convenience functions for
standard endpoints.

Signed-off-by: baude <bbaude@redhat.com>

[CI:DOCS]use helper funcs for bindings

use the methods developed to make writing bindings less duplicative and
easier to use.

Signed-off-by: baude <bbaude@redhat.com>

[CI:DOCS]add return info for prereview

begin to add return info and status codes for errors so that we can
review the apiv2

Signed-off-by: baude <bbaude@redhat.com>

[CI:DOCS]first pass at adding swagger docs for api

Signed-off-by: baude <bbaude@redhat.com>
2020-01-10 09:41:39 -06:00


package image

import (
	"context"
	"fmt"
	"io"
	"path/filepath"
	"strings"

	cp "github.com/containers/image/v5/copy"
	"github.com/containers/image/v5/directory"
	"github.com/containers/image/v5/docker"
	dockerarchive "github.com/containers/image/v5/docker/archive"
	"github.com/containers/image/v5/docker/tarfile"
	ociarchive "github.com/containers/image/v5/oci/archive"
	oci "github.com/containers/image/v5/oci/layout"
	is "github.com/containers/image/v5/storage"
	"github.com/containers/image/v5/transports"
	"github.com/containers/image/v5/transports/alltransports"
	"github.com/containers/image/v5/types"
	"github.com/containers/libpod/libpod/events"
	"github.com/containers/libpod/pkg/registries"
	"github.com/hashicorp/go-multierror"
	"github.com/opentracing/opentracing-go"
	"github.com/pkg/errors"
	"github.com/sirupsen/logrus"
)

var (
	// DockerArchive is the transport we prepend to an image name
	// when saving to docker-archive
	DockerArchive = dockerarchive.Transport.Name()
	// OCIArchive is the transport we prepend to an image name
	// when saving to oci-archive
	OCIArchive = ociarchive.Transport.Name()
	// DirTransport is the transport for pushing and pulling
	// images to and from a directory
	DirTransport = directory.Transport.Name()
	// DockerTransport is the transport for docker registries
	DockerTransport = docker.Transport.Name()
	// OCIDirTransport is the transport for pushing and pulling
	// images to and from a directory containing an OCI image
	OCIDirTransport = oci.Transport.Name()
	// AtomicTransport is the transport for atomic registries
	AtomicTransport = "atomic"
	// DefaultTransport is a prefix that we apply to an image name
	// NOTE: This is a string prefix, not actually a transport name usable for transports.Get();
	// and because syntaxes of image names are transport-dependent, the prefix is not really interchangeable;
	// each user implicitly assumes the appended string is a Docker-like reference.
	DefaultTransport = DockerTransport + "://"
	// DefaultLocalRegistry is the default local registry for local image operations
	// Remote pulls will still use defined registries
	DefaultLocalRegistry = "localhost"
)

// pullRefPair records a pair of prepared image references to pull.
type pullRefPair struct {
	image  string
	srcRef types.ImageReference
	dstRef types.ImageReference
}

// pullGoal represents the prepared image references and decided behavior to be executed by imagePull
type pullGoal struct {
	refPairs             []pullRefPair
	pullAllPairs         bool     // Pull all refPairs instead of stopping on first success.
	usedSearchRegistries bool     // refPairs construction has depended on registries.GetRegistries()
	searchedRegistries   []string // The list of search registries used; set only if usedSearchRegistries
}

// singlePullRefPairGoal returns a no-frills pull goal for the specified reference pair.
func singlePullRefPairGoal(rp pullRefPair) *pullGoal {
	return &pullGoal{
		refPairs:             []pullRefPair{rp},
		pullAllPairs:         false, // Does not really make a difference.
		usedSearchRegistries: false,
		searchedRegistries:   nil,
	}
}

func (ir *Runtime) getPullRefPair(srcRef types.ImageReference, destName string) (pullRefPair, error) {
	decomposedDest, err := decompose(destName)
	if err == nil && !decomposedDest.hasRegistry {
		// If the image doesn't have a registry, set it as the default repo
		ref, err := decomposedDest.referenceWithRegistry(DefaultLocalRegistry)
		if err != nil {
			return pullRefPair{}, err
		}
		destName = ref.String()
	}

	reference := destName
	if srcRef.DockerReference() != nil {
		reference = srcRef.DockerReference().String()
	}
	destRef, err := is.Transport.ParseStoreReference(ir.store, reference)
	if err != nil {
		return pullRefPair{}, errors.Wrapf(err, "error parsing dest reference name %#v", destName)
	}
	return pullRefPair{
		image:  destName,
		srcRef: srcRef,
		dstRef: destRef,
	}, nil
}

// getSinglePullRefPairGoal calls getPullRefPair with the specified parameters, and returns a single-pair goal for the return value.
func (ir *Runtime) getSinglePullRefPairGoal(srcRef types.ImageReference, destName string) (*pullGoal, error) {
	rp, err := ir.getPullRefPair(srcRef, destName)
	if err != nil {
		return nil, err
	}
	return singlePullRefPairGoal(rp), nil
}

// pullGoalFromImageReference returns a pull goal for a single ImageReference, depending on the used transport.
func (ir *Runtime) pullGoalFromImageReference(ctx context.Context, srcRef types.ImageReference, imgName string, sc *types.SystemContext) (*pullGoal, error) {
	span, _ := opentracing.StartSpanFromContext(ctx, "pullGoalFromImageReference")
	defer span.Finish()

	// supports pulling from docker-archive, oci, and registries
	switch srcRef.Transport().Name() {
	case DockerArchive:
		archivePath := srcRef.StringWithinTransport()
		tarSource, err := tarfile.NewSourceFromFile(archivePath)
		if err != nil {
			return nil, err
		}
		manifest, err := tarSource.LoadTarManifest()
		if err != nil {
			return nil, errors.Wrapf(err, "error retrieving manifest.json")
		}
		// to pull the first image stored in the tar file
		if len(manifest) == 0 {
			// use the hex of the digest if no manifest is found
			reference, err := getImageDigest(ctx, srcRef, sc)
			if err != nil {
				return nil, err
			}
			return ir.getSinglePullRefPairGoal(srcRef, reference)
		}
		if len(manifest[0].RepoTags) == 0 {
			// If the input image has no repotags, we need to feed it a dest anyways
			digest, err := getImageDigest(ctx, srcRef, sc)
			if err != nil {
				return nil, err
			}
			return ir.getSinglePullRefPairGoal(srcRef, digest)
		}
		// Need to load in all the repo tags from the manifest
		res := []pullRefPair{}
		for _, dst := range manifest[0].RepoTags {
			// check if image exists and gives a warning of untagging
			localImage, err := ir.NewFromLocal(dst)
			imageID := strings.TrimSuffix(manifest[0].Config, ".json")
			if err == nil && imageID != localImage.ID() {
				logrus.Errorf("the image %s already exists, renaming the old one with ID %s to empty string", dst, localImage.ID())
			}
			pullInfo, err := ir.getPullRefPair(srcRef, dst)
			if err != nil {
				return nil, err
			}
			res = append(res, pullInfo)
		}
		return &pullGoal{
			refPairs:             res,
			pullAllPairs:         true,
			usedSearchRegistries: false,
			searchedRegistries:   nil,
		}, nil

	case OCIArchive:
		// retrieve the manifest from index.json to access the image name
		manifest, err := ociarchive.LoadManifestDescriptor(srcRef)
		if err != nil {
			return nil, errors.Wrapf(err, "error loading manifest for %q", srcRef)
		}
		var dest string
		if manifest.Annotations == nil || manifest.Annotations["org.opencontainers.image.ref.name"] == "" {
			// If the input image has no image.ref.name, we need to feed it a dest anyways
			// use the hex of the digest
			dest, err = getImageDigest(ctx, srcRef, sc)
			if err != nil {
				return nil, errors.Wrapf(err, "error getting image digest; image reference not found")
			}
		} else {
			dest = manifest.Annotations["org.opencontainers.image.ref.name"]
		}
		return ir.getSinglePullRefPairGoal(srcRef, dest)

	case DirTransport:
		image := toLocalImageName(srcRef.StringWithinTransport())
		return ir.getSinglePullRefPairGoal(srcRef, image)

	case OCIDirTransport:
		split := strings.SplitN(srcRef.StringWithinTransport(), ":", 2)
		image := toLocalImageName(split[0])
		return ir.getSinglePullRefPairGoal(srcRef, image)

	default:
		return ir.getSinglePullRefPairGoal(srcRef, imgName)
	}
}

// toLocalImageName converts an image name into a 'localhost/' prefixed one
func toLocalImageName(imageName string) string {
	return fmt.Sprintf(
		"%s/%s",
		DefaultLocalRegistry,
		strings.TrimLeft(imageName, "/"),
	)
}

// pullImageFromHeuristicSource pulls an image based on inputName, which is heuristically parsed and may involve configured registries.
// Use pullImageFromReference if the source is known precisely.
func (ir *Runtime) pullImageFromHeuristicSource(ctx context.Context, inputName string, writer io.Writer, authfile, signaturePolicyPath string, signingOptions SigningOptions, dockerOptions *DockerRegistryOptions, label *string) ([]string, error) {
	span, _ := opentracing.StartSpanFromContext(ctx, "pullImageFromHeuristicSource")
	defer span.Finish()

	var goal *pullGoal
	sc := GetSystemContext(signaturePolicyPath, authfile, false)
	if dockerOptions != nil {
		sc.OSChoice = dockerOptions.OSChoice
		sc.ArchitectureChoice = dockerOptions.ArchitectureChoice
	}
	sc.BlobInfoCacheDir = filepath.Join(ir.store.GraphRoot(), "cache")
	srcRef, err := alltransports.ParseImageName(inputName)
	if err != nil {
		// We might be pulling with an unqualified image reference in which case
		// we need to make sure that we're not using any other transport.
		srcTransport := alltransports.TransportFromImageName(inputName)
		if srcTransport != nil && srcTransport.Name() != DockerTransport {
			return nil, err
		}
		goal, err = ir.pullGoalFromPossiblyUnqualifiedName(inputName)
		if err != nil {
			return nil, errors.Wrap(err, "error getting default registries to try")
		}
	} else {
		goal, err = ir.pullGoalFromImageReference(ctx, srcRef, inputName, sc)
		if err != nil {
			return nil, errors.Wrapf(err, "error determining pull goal for image %q", inputName)
		}
	}
	return ir.doPullImage(ctx, sc, *goal, writer, signingOptions, dockerOptions, label)
}

// pullImageFromReference pulls an image from a types.imageReference.
func (ir *Runtime) pullImageFromReference(ctx context.Context, srcRef types.ImageReference, writer io.Writer, authfile, signaturePolicyPath string, signingOptions SigningOptions, dockerOptions *DockerRegistryOptions) ([]string, error) {
	span, _ := opentracing.StartSpanFromContext(ctx, "pullImageFromReference")
	defer span.Finish()

	sc := GetSystemContext(signaturePolicyPath, authfile, false)
	if dockerOptions != nil {
		sc.OSChoice = dockerOptions.OSChoice
		sc.ArchitectureChoice = dockerOptions.ArchitectureChoice
	}
	goal, err := ir.pullGoalFromImageReference(ctx, srcRef, transports.ImageName(srcRef), sc)
	if err != nil {
		return nil, errors.Wrapf(err, "error determining pull goal for image %q", transports.ImageName(srcRef))
	}
	return ir.doPullImage(ctx, sc, *goal, writer, signingOptions, dockerOptions, nil)
}

func cleanErrorMessage(err error) string {
	errMessage := strings.TrimPrefix(errors.Cause(err).Error(), "errors:\n")
	errMessage = strings.Split(errMessage, "\n")[0]
	return fmt.Sprintf(" %s\n", errMessage)
}

// doPullImage is an internal helper interpreting pullGoal. Almost everyone should call one of the callers of doPullImage instead.
func (ir *Runtime) doPullImage(ctx context.Context, sc *types.SystemContext, goal pullGoal, writer io.Writer, signingOptions SigningOptions, dockerOptions *DockerRegistryOptions, label *string) ([]string, error) {
	span, _ := opentracing.StartSpanFromContext(ctx, "doPullImage")
	defer span.Finish()

	policyContext, err := getPolicyContext(sc)
	if err != nil {
		return nil, err
	}
	defer func() {
		if err := policyContext.Destroy(); err != nil {
			logrus.Errorf("failed to destroy policy context: %q", err)
		}
	}()

	systemRegistriesConfPath := registries.SystemRegistriesConfPath()

	var (
		images     []string
		pullErrors *multierror.Error
	)
	for _, imageInfo := range goal.refPairs {
		copyOptions := getCopyOptions(sc, writer, dockerOptions, nil, signingOptions, "", nil)
		copyOptions.SourceCtx.SystemRegistriesConfPath = systemRegistriesConfPath // FIXME: Set this more globally. Probably no reason not to have it in every types.SystemContext, and to compute the value just once in one place.
		// Print the following statement only when pulling from a docker or atomic registry
		if writer != nil && (imageInfo.srcRef.Transport().Name() == DockerTransport || imageInfo.srcRef.Transport().Name() == AtomicTransport) {
			if _, err := io.WriteString(writer, fmt.Sprintf("Trying to pull %s...\n", imageInfo.image)); err != nil {
				return nil, err
			}
		}
		// If the label is not nil, check if the label exists and if not, return err
		if label != nil {
			if err := checkRemoteImageForLabel(ctx, *label, imageInfo, sc); err != nil {
				return nil, err
			}
		}
		_, err = cp.Image(ctx, policyContext, imageInfo.dstRef, imageInfo.srcRef, copyOptions)
		if err != nil {
			pullErrors = multierror.Append(pullErrors, err)
			logrus.Debugf("Error pulling image ref %s: %v", imageInfo.srcRef.StringWithinTransport(), err)
			if writer != nil {
				_, _ = io.WriteString(writer, cleanErrorMessage(err))
			}
		} else {
			if !goal.pullAllPairs {
				ir.newImageEvent(events.Pull, "")
				return []string{imageInfo.image}, nil
			}
			images = append(images, imageInfo.image)
		}
	}
	// If no image was found, we should handle. Lets be nicer to the user and see if we can figure out why.
	if len(images) == 0 {
		if goal.usedSearchRegistries && len(goal.searchedRegistries) == 0 {
			return nil, errors.Errorf("image name provided is a short name and no search registries are defined in the registries config file.")
		}
		// If the image passed in was fully-qualified, we will have 1 refpair. Bc the image is fq'd, we don't need to yap about registries.
		if !goal.usedSearchRegistries {
			if pullErrors != nil && len(pullErrors.Errors) > 0 { // this should always be true
				return nil, errors.Wrap(pullErrors.Errors[0], "unable to pull image")
			}
			return nil, errors.Errorf("unable to pull image, or you do not have pull access")
		}
		return nil, pullErrors
	}
	if len(images) > 0 {
		ir.newImageEvent(events.Pull, images[0])
	}
	return images, nil
}

// pullGoalFromPossiblyUnqualifiedName looks at inputName and determines the possible
// image references to try pulling in combination with the registries.conf file as well
func (ir *Runtime) pullGoalFromPossiblyUnqualifiedName(inputName string) (*pullGoal, error) {
	decomposedImage, err := decompose(inputName)
	if err != nil {
		return nil, err
	}
	if decomposedImage.hasRegistry {
		srcRef, err := docker.ParseReference("//" + inputName)
		if err != nil {
			return nil, errors.Wrapf(err, "unable to parse '%s'", inputName)
		}
		return ir.getSinglePullRefPairGoal(srcRef, inputName)
	}

	searchRegistries, err := registries.GetRegistries()
	if err != nil {
		return nil, err
	}
	var refPairs []pullRefPair
	for _, registry := range searchRegistries {
		ref, err := decomposedImage.referenceWithRegistry(registry)
		if err != nil {
			return nil, err
		}
		imageName := ref.String()
		srcRef, err := docker.ParseReference("//" + imageName)
		if err != nil {
			return nil, errors.Wrapf(err, "unable to parse '%s'", imageName)
		}
		ps, err := ir.getPullRefPair(srcRef, imageName)
		if err != nil {
			return nil, err
		}
		refPairs = append(refPairs, ps)
	}
	return &pullGoal{
		refPairs:             refPairs,
		pullAllPairs:         false,
		usedSearchRegistries: true,
		searchedRegistries:   searchRegistries,
	}, nil
}

// checkRemoteImageForLabel checks if the remote image has a specific label. if the label exists, we
// return nil, else we return an error
func checkRemoteImageForLabel(ctx context.Context, label string, imageInfo pullRefPair, sc *types.SystemContext) error {
	labelImage, err := imageInfo.srcRef.NewImage(ctx, sc)
	if err != nil {
		return err
	}
	remoteInspect, err := labelImage.Inspect(ctx)
	if err != nil {
		return err
	}
	// Labels are case insensitive; so we iterate instead of simple lookup
	for k := range remoteInspect.Labels {
		if strings.ToLower(label) == strings.ToLower(k) {
			return nil
		}
	}
	return errors.Errorf("%s has no label %s in %q", imageInfo.image, label, remoteInspect.Labels)
}
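
For orientation only, a caller elsewhere in package image could drive the pull
path above roughly as follows. This sketch is not part of pull.go, and it
assumes the usual imports ("context", "os", logrus) are available in the file
that holds it:

// examplePull is an illustrative sketch of invoking the heuristic pull path.
func examplePull(ctx context.Context, ir *Runtime) error {
	// An unqualified name such as "alpine:latest" is resolved through
	// pullGoalFromPossiblyUnqualifiedName and the configured search registries;
	// a transport-prefixed name (docker://, oci:, dir:, docker-archive:) goes
	// through pullGoalFromImageReference instead.
	images, err := ir.pullImageFromHeuristicSource(ctx, "alpine:latest", os.Stdout,
		"", "", SigningOptions{}, nil, nil)
	if err != nil {
		return err
	}
	logrus.Infof("pulled %v", images)
	return nil
}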