feat: Add new gcloud commands, API clients, and third-party libraries across various services.

2026-01-01 20:26:35 +01:00
parent 5e23cbece0
commit a19e592eb7
25221 changed files with 8324611 additions and 0 deletions


@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@@ -0,0 +1,18 @@
# Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
x = sys.modules['containerregistry']


@@ -0,0 +1,30 @@
# Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys

x = sys.modules['containerregistry.client']

from containerregistry.client import docker_name_
setattr(x, 'docker_name', docker_name_)
from containerregistry.client import docker_creds_
setattr(x, 'docker_creds', docker_creds_)
from containerregistry.client import monitor_
setattr(x, 'monitor', monitor_)
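The underscore-suffixed modules above are re-exported under their public names via `sys.modules` and `setattr`. A minimal, self-contained sketch of the same pattern, using hypothetical `pkg` module names rather than the real containerregistry packages:

```python
import sys
import types

# Create a stand-in package and a "private" implementation module, then
# expose the implementation under a public attribute name, as the
# __init__ above does with docker_name_ / docker_name.
pkg = types.ModuleType('pkg')
impl = types.ModuleType('pkg.docker_name_')
impl.DEFAULT_TAG = 'latest'

sys.modules['pkg'] = pkg
sys.modules['pkg.docker_name_'] = impl
setattr(pkg, 'docker_name', impl)

# Consumers can now write `pkg.docker_name` without the trailing underscore.
print(pkg.docker_name.DEFAULT_TAG)  # -> latest
```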


@@ -0,0 +1,297 @@
# Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This package exposes credentials for talking to a Docker registry."""

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import abc
import base64
import errno
import io
import json
import logging
import os
import subprocess

from containerregistry.client import docker_name
import httplib2
from oauth2client import client as oauth2client
import six


class Provider(six.with_metaclass(abc.ABCMeta, object)):
  """Interface for providing User Credentials for use with a Docker Registry."""

  # pytype: disable=bad-return-type
  @abc.abstractmethod
  def Get(self):
    """Produces a value suitable for use in the Authorization header."""
  # pytype: enable=bad-return-type


class Anonymous(Provider):
  """Implementation for anonymous access."""

  def Get(self):
    """Implement anonymous authentication."""
    return ''


class SchemeProvider(Provider):
  """Implementation for providing a challenge response credential."""

  def __init__(self, scheme):
    self._scheme = scheme

  # pytype: disable=bad-return-type
  @property
  @abc.abstractmethod
  def suffix(self):
    """Returns the authentication payload to follow the auth scheme."""
  # pytype: enable=bad-return-type

  def Get(self):
    """Gets the credential in a form suitable for an Authorization header."""
    return u'%s %s' % (self._scheme, self.suffix)


class Basic(SchemeProvider):
  """Implementation for providing a username/password-based creds."""

  def __init__(self, username, password):
    super(Basic, self).__init__('Basic')
    self._username = username
    self._password = password

  @property
  def username(self):
    return self._username

  @property
  def password(self):
    return self._password

  @property
  def suffix(self):
    u = self.username.encode('utf8')
    p = self.password.encode('utf8')
    return base64.b64encode(u + b':' + p).decode('utf8')
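`Basic.Get()` ultimately produces a standard HTTP Basic `Authorization` header value. A standalone sketch of that encoding (the helper function name is ours, not the library's):

```python
import base64

def basic_auth_value(username, password):
  # Mirrors Basic.suffix above: base64("username:password"), prefixed
  # with the 'Basic' scheme as SchemeProvider.Get() does.
  u = username.encode('utf8')
  p = password.encode('utf8')
  return 'Basic ' + base64.b64encode(u + b':' + p).decode('utf8')

print(basic_auth_value('user', 'pass'))  # -> Basic dXNlcjpwYXNz
```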


_USERNAME = '_token'


class OAuth2(Basic):
  """Base class for turning OAuth2Credentials into suitable GCR credentials."""

  def __init__(self, creds, transport):
    """Constructor.

    Args:
      creds: the credentials from which to retrieve access tokens.
      transport: the http transport to use for token exchanges.
    """
    super(OAuth2, self).__init__(_USERNAME, 'does not matter')
    self._creds = creds
    self._transport = transport

  @property
  def password(self):
    # WORKAROUND...
    # The python oauth2client library only loads the credential from an
    # on-disk cache the first time 'refresh()' is called, and doesn't
    # actually 'Force a refresh of access_token' as advertised.
    # This call will load the credential, and the call below will refresh
    # it as needed.  If the credential is unexpired, the call below will
    # simply return a cache of this refresh.
    unused_at = self._creds.get_access_token(http=self._transport)

    # Most useful API ever:
    # https://www.googleapis.com/oauth2/v1/tokeninfo?access_token={at}
    return self._creds.get_access_token(http=self._transport).access_token


_MAGIC_NOT_FOUND_MESSAGE = 'credentials not found in native keychain'


class Helper(Basic):
  """This provider wraps a particularly named credential helper."""

  def __init__(self, name, registry):
    """Constructor.

    Args:
      name: the name of the helper, as it appears in the Docker config.
      registry: the registry for which we're invoking the helper.
    """
    super(Helper, self).__init__('does not matter', 'does not matter')
    self._name = name
    self._registry = registry.registry

  def Get(self):
    # Invokes:
    #   echo -n {self._registry} | docker-credential-{self._name} get
    # The resulting JSON blob will have 'Username' and 'Secret' fields.
    bin_name = 'docker-credential-{name}'.format(name=self._name)
    logging.info('Invoking %r to obtain Docker credentials.', bin_name)
    try:
      p = subprocess.Popen(
          [bin_name, 'get'],
          stdout=subprocess.PIPE,
          stdin=subprocess.PIPE,
          stderr=subprocess.STDOUT)
    except OSError as e:
      if e.errno == errno.ENOENT:
        raise Exception('executable not found: ' + bin_name)
      raise

    # Some keychains expect a scheme:
    # https://github.com/bazelbuild/rules_docker/issues/111
    stdout = p.communicate(
        input=('https://' + self._registry).encode('utf-8'))[0]

    # stdout is bytes; compare against the encoded magic message.
    if stdout.strip() == _MAGIC_NOT_FOUND_MESSAGE.encode('utf-8'):
      # Use empty auth when no auth is found.
      logging.info('Credentials not found, falling back to anonymous auth.')
      return Anonymous().Get()

    if p.returncode != 0:
      raise Exception('Error fetching credential for %s, exit status: %d\n%s' %
                      (self._name, p.returncode, stdout))

    blob = json.loads(stdout.decode('utf-8'))
    logging.info('Successfully obtained Docker credentials.')
    return Basic(blob['Username'], blob['Secret']).Get()
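`Helper.Get()` speaks the Docker credential-helper protocol: the registry URL goes to the helper's stdin, and a JSON blob with `Username` and `Secret` fields comes back on stdout. A sketch of just the response handling, with the subprocess call factored out (function and constant names here are ours):

```python
import base64
import json

_NOT_FOUND = b'credentials not found in native keychain'

def auth_from_helper_stdout(stdout):
  """Turn a credential helper's stdout (bytes) into an Authorization value."""
  if stdout.strip() == _NOT_FOUND:
    return ''  # anonymous fallback, as Helper.Get() does
  blob = json.loads(stdout.decode('utf-8'))
  raw = (blob['Username'] + ':' + blob['Secret']).encode('utf8')
  return 'Basic ' + base64.b64encode(raw).decode('utf8')

fake = b'{"Username": "_token", "Secret": "s3cret"}'
print(auth_from_helper_stdout(fake))
print(repr(auth_from_helper_stdout(_NOT_FOUND)))  # -> ''
```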


class Keychain(six.with_metaclass(abc.ABCMeta, object)):
  """Interface for resolving an image reference to a credential."""

  # pytype: disable=bad-return-type
  @abc.abstractmethod
  def Resolve(self, name):
    """Resolves the appropriate credential for the given registry.

    Args:
      name: the registry for which we need a credential.

    Returns:
      a Provider suitable for use with registry operations.
    """
  # pytype: enable=bad-return-type


_FORMATS = [
    # Allow naked domains
    '%s',
    # Allow scheme-prefixed.
    'https://%s',
    'http://%s',
    # Allow scheme-prefixes with version in url path.
    'https://%s/v1/',
    'http://%s/v1/',
    'https://%s/v2/',
    'http://%s/v2/',
]


def _GetUserHomeDir():
  if os.name == 'nt':
    # %HOME% has precedence over %USERPROFILE% for os.path.expanduser('~')
    # The Docker config resides under %USERPROFILE% on Windows
    return os.path.expandvars('%USERPROFILE%')
  else:
    return os.path.expanduser('~')


def _GetConfigDirectory():
  # Return the value of $DOCKER_CONFIG, if it exists, otherwise ~/.docker
  # see https://github.com/docker/docker/blob/master/cliconfig/config.go
  if os.environ.get('DOCKER_CONFIG') is not None:
    return os.environ.get('DOCKER_CONFIG')
  else:
    return os.path.join(_GetUserHomeDir(), '.docker')


class _DefaultKeychain(Keychain):
  """This implements the default docker credential resolution."""

  def __init__(self):
    # Store a custom directory to get the Docker configuration JSON from
    self._config_dir = None
    # Name of the docker configuration JSON file to look for in the
    # configuration directory
    self._config_file = 'config.json'

  def setCustomConfigDir(self, config_dir):
    # Override the configuration directory where the docker configuration
    # JSON is searched for
    if not os.path.isdir(config_dir):
      raise Exception('Attempting to override docker configuration directory'
                      ' to invalid directory: {}'.format(config_dir))
    self._config_dir = config_dir

  def Resolve(self, name):
    # TODO(user): Consider supporting .dockercfg, which was used prior
    # to Docker 1.7 and consisted of just the contents of 'auths' below.
    logging.info('Loading Docker credentials for repository %r', str(name))
    config_file = None
    if self._config_dir is not None:
      config_file = os.path.join(self._config_dir, self._config_file)
    else:
      config_file = os.path.join(_GetConfigDirectory(), self._config_file)
    try:
      with io.open(config_file, u'r', encoding='utf8') as reader:
        cfg = json.loads(reader.read())
    except IOError:
      # If the file doesn't exist, fallback on anonymous auth.
      return Anonymous()

    # Per-registry credential helpers take precedence.
    cred_store = cfg.get('credHelpers', {})
    for form in _FORMATS:
      if form % name.registry in cred_store:
        return Helper(cred_store[form % name.registry], name)

    # A global credential helper is next in precedence.
    if 'credsStore' in cfg:
      return Helper(cfg['credsStore'], name)

    # Lastly, the 'auths' section directly contains basic auth entries.
    auths = cfg.get('auths', {})
    for form in _FORMATS:
      if form % name.registry in auths:
        entry = auths[form % name.registry]
        if 'auth' in entry:
          decoded = base64.b64decode(entry['auth']).decode('utf8')
          username, password = decoded.split(':', 1)
          return Basic(username, password)
        elif 'username' in entry and 'password' in entry:
          return Basic(entry['username'], entry['password'])
        else:
          # TODO(user): Support identitytoken
          # TODO(user): Support registrytoken
          raise Exception(
              'Unsupported entry in "auth" section of Docker config: ' +
              json.dumps(entry))

    return Anonymous()


# pylint: disable=invalid-name
DefaultKeychain = _DefaultKeychain()
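`_DefaultKeychain.Resolve()` checks credential sources in a fixed precedence order: per-registry `credHelpers` entries win, then a global `credsStore`, then `auths`, and finally anonymous access. A self-contained sketch of just that precedence logic, operating on a plain config dict instead of the Docker config file (function name is ours):

```python
def resolve_credential_source(cfg, registry):
  """Return which credential source Resolve() would pick for a registry.

  A simplified mirror of _DefaultKeychain.Resolve(): per-registry helpers
  in 'credHelpers' take precedence, then a global 'credsStore', then
  'auths' entries, else anonymous.
  """
  formats = ['%s', 'https://%s', 'http://%s',
             'https://%s/v1/', 'http://%s/v1/',
             'https://%s/v2/', 'http://%s/v2/']
  cred_store = cfg.get('credHelpers', {})
  for form in formats:
    if form % registry in cred_store:
      return ('helper', cred_store[form % registry])
  if 'credsStore' in cfg:
    return ('helper', cfg['credsStore'])
  for form in formats:
    if form % registry in cfg.get('auths', {}):
      return ('basic', form % registry)
  return ('anonymous', None)

cfg = {'credHelpers': {'gcr.io': 'gcloud'}, 'credsStore': 'desktop'}
print(resolve_credential_source(cfg, 'gcr.io'))      # -> ('helper', 'gcloud')
print(resolve_credential_source({}, 'example.com'))  # -> ('anonymous', None)
```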


@@ -0,0 +1,318 @@
# Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This package defines Tag, a way of representing an image URI."""

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import os
import sys

import six.moves.urllib.parse


class BadNameException(Exception):
  """Exceptions when a bad docker name is supplied."""


_REPOSITORY_CHARS = 'abcdefghijklmnopqrstuvwxyz0123456789_-./'
_TAG_CHARS = 'abcdefghijklmnopqrstuvwxyz0123456789_-.ABCDEFGHIJKLMNOPQRSTUVWXYZ'

# These have the form: sha256:<hex string>
_DIGEST_CHARS = 'sh:0123456789abcdef'

# TODO(b/73235733): Add a flag to allow specifying custom app name to be
# appended to useragent.
_APP = os.path.basename(sys.argv[0]) if sys.argv[0] else 'console'
USER_AGENT = '//containerregistry/client:%s' % _APP

DEFAULT_DOMAIN = 'index.docker.io'
DEFAULT_TAG = 'latest'


def _check_element(name, element, characters, min_len, max_len):
  """Checks a given named element matches character and length restrictions.

  Args:
    name: the name of the element being validated
    element: the actual element being checked
    characters: acceptable characters for this element, or None
    min_len: minimum element length, or None
    max_len: maximum element length, or None

  Raises:
    BadNameException: one of the restrictions was not met.
  """
  length = len(element)
  if min_len and length < min_len:
    raise BadNameException('Invalid %s: %s, must be at least %s characters' %
                           (name, element, min_len))

  if max_len and length > max_len:
    raise BadNameException('Invalid %s: %s, must be at most %s characters' %
                           (name, element, max_len))

  if element.strip(characters):
    raise BadNameException('Invalid %s: %s, acceptable characters include: %s' %
                           (name, element, characters))


def _check_repository(repository):
  _check_element('repository', repository, _REPOSITORY_CHARS, 2, 255)


def _check_tag(tag):
  _check_element('tag', tag, _TAG_CHARS, 1, 127)


def _check_digest(digest):
  _check_element('digest', digest, _DIGEST_CHARS, 7 + 64, 7 + 64)


def _check_registry(registry):
  # Per RFC 3986, netlocs (authorities) are required to be prefixed with '//'
  parsed_hostname = six.moves.urllib.parse.urlparse('//' + registry)

  # If urlparse doesn't recognize the given registry as a netloc, fail
  # validation.
  if registry != parsed_hostname.netloc:
    raise BadNameException('Invalid registry: %s' % (registry))


class Registry(object):
  """Stores a docker registry name in a structured form."""

  def __init__(self, name, strict=True):
    if strict:
      if not name:
        raise BadNameException('A Docker registry domain must be specified.')
      _check_registry(name)

    self._registry = name

  @property
  def registry(self):
    return self._registry or DEFAULT_DOMAIN

  def __str__(self):
    return self._registry

  def __repr__(self):
    return self.__str__()

  def __eq__(self, other):
    return (bool(other) and
            # pylint: disable=unidiomatic-typecheck
            type(self) == type(other) and
            self.registry == other.registry)

  def __ne__(self, other):
    return not self.__eq__(other)

  def __hash__(self):
    return hash(self.registry)

  def scope(self, unused_action):
    # The only resource under 'registry' is 'catalog'. http://goo.gl/N9cN9Z
    return 'registry:catalog:*'


class Repository(Registry):
  """Stores a docker repository name in a structured form."""

  def __init__(self, name, strict=True):
    if not name:
      raise BadNameException('A Docker image name must be specified')

    domain = ''
    repo = name

    parts = name.split('/', 1)
    if len(parts) == 2:
      # The first part of the repository is treated as the registry domain
      # iff it contains a '.' or ':' character, otherwise it is all repository
      # and the domain defaults to DockerHub.
      if '.' in parts[0] or ':' in parts[0]:
        domain = parts[0]
        repo = parts[1]

    super(Repository, self).__init__(domain, strict=strict)

    self._repository = repo
    _check_repository(self._repository)

  def _validation_exception(self, name):
    return BadNameException('Docker image name must have the form: '
                            'registry/repository, saw: %s' % name)

  @property
  def repository(self):
    return self._repository

  def __str__(self):
    base = super(Repository, self).__str__()
    if base:
      return '{registry}/{repository}'.format(
          registry=base, repository=self._repository)
    else:
      return self._repository

  def __eq__(self, other):
    return (bool(other) and
            # pylint: disable=unidiomatic-typecheck
            type(self) == type(other) and
            self.registry == other.registry and
            self.repository == other.repository)

  def __ne__(self, other):
    return not self.__eq__(other)

  def __hash__(self):
    return hash((self.registry, self.repository))

  def scope(self, action):
    return 'repository:{resource}:{action}'.format(
        resource=self._repository,
        action=action)


class Tag(Repository):
  """Stores a docker repository tag in a structured form."""

  def __init__(self, name, strict=True):
    # A ':' whose remainder contains a '/' is a registry port, not a tag.
    parts = name.rsplit(':', 1)
    if len(parts) != 2 or '/' in parts[1]:
      base = name
      tag = ''
    else:
      base = parts[0]
      tag = parts[1]

    self._tag = tag
    # We don't require a tag, but if we get one check it's valid,
    # even when not being strict.
    # If we are being strict, we want to validate the tag regardless in case
    # it's empty.
    if self._tag or strict:
      _check_tag(self._tag)

    # Parse the (base) repository portion of the name.
    super(Tag, self).__init__(base, strict=strict)

  @property
  def tag(self):
    return self._tag or DEFAULT_TAG

  def __str__(self):
    base = super(Tag, self).__str__()
    if self._tag:
      return '{base}:{tag}'.format(base=base, tag=self._tag)
    else:
      return base

  def as_repository(self):
    # Construct a new Repository object from the string representation
    # our parent class (Repository) produces.  This is a convenience
    # method to allow consumers to stringify the repository portion of
    # a tag or digest without their own format string.
    # We have already validated, and we don't persist strictness.
    return Repository(super(Tag, self).__str__(), strict=False)

  def __eq__(self, other):
    return (bool(other) and
            # pylint: disable=unidiomatic-typecheck
            type(self) == type(other) and
            self.registry == other.registry and
            self.repository == other.repository and
            self.tag == other.tag)

  def __ne__(self, other):
    return not self.__eq__(other)

  def __hash__(self):
    return hash((self.registry, self.repository, self.tag))


class Digest(Repository):
  """Stores a docker repository digest in a structured form."""

  def __init__(self, name, strict=True):
    parts = name.split('@')
    if len(parts) != 2:
      raise self._validation_exception(name)

    self._digest = parts[1]
    _check_digest(self._digest)

    # Check if there is a tag.
    try:
      tag = Tag(parts[0], strict=strict)
      super(Digest, self).__init__(tag.as_repository().__str__(), strict=strict)
    except BadNameException:
      super(Digest, self).__init__(parts[0], strict=strict)

  def _validation_exception(self, name):
    return BadNameException('Docker image name must be fully qualified (e.g. '
                            'registry/repository@digest), saw: %s' % name)

  @property
  def digest(self):
    return self._digest

  def __str__(self):
    base = super(Digest, self).__str__()
    return '{base}@{digest}'.format(base=base, digest=self.digest)

  def as_repository(self):
    # Construct a new Repository object from the string representation
    # our parent class (Repository) produces.  This is a convenience
    # method to allow consumers to stringify the repository portion of
    # a tag or digest without their own format string.
    # We have already validated, and we don't persist strictness.
    return Repository(super(Digest, self).__str__(), strict=False)

  def __eq__(self, other):
    return (bool(other) and
            # pylint: disable=unidiomatic-typecheck
            type(self) == type(other) and
            self.registry == other.registry and
            self.repository == other.repository and
            self.digest == other.digest)

  def __ne__(self, other):
    return not self.__eq__(other)

  def __hash__(self):
    return hash((self.registry, self.repository, self.digest))


def from_string(name):
  """Parses the given name string.

  Parsing is done strictly; registry is required and a Tag will only be
  returned if specified explicitly in the given name string.

  Args:
    name: The name to convert.

  Returns:
    The parsed name.

  Raises:
    BadNameException: The name could not be parsed.
  """
  for name_type in [Digest, Tag, Repository, Registry]:
    # Re-uses validation logic in the name classes themselves.
    try:
      return name_type(name, strict=True)
    except BadNameException:
      pass
  raise BadNameException("'%s' is not a valid Tag, Digest, Repository or "
                         'Registry' % (name))
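`from_string` tries the name classes from most to least specific: `Digest`, then `Tag`, then `Repository`, then `Registry`. A simplified, self-contained sketch of the underlying string rules (this omits the character and length validation the real classes perform; the function name is ours):

```python
def classify_image_name(name):
  """Loosely classify a Docker image reference string.

  A '@' marks a digest, a ':' after the last '/' marks a tag, a '/'
  marks a repository, and anything else is treated as a bare registry.
  """
  if '@' in name:
    return 'Digest'
  _, sep, last = name.rpartition('/')
  if ':' in last:
    return 'Tag'
  if sep:
    return 'Repository'
  return 'Registry'

print(classify_image_name('gcr.io/project/image@sha256:' + '0' * 64))  # -> Digest
print(classify_image_name('gcr.io/project/image:v1'))                  # -> Tag
print(classify_image_name('gcr.io/project/image'))                     # -> Repository
print(classify_image_name('gcr.io'))                                   # -> Registry
```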


@@ -0,0 +1,60 @@
# Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This module contains utilities for monitoring client side calls."""

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import abc

import six


class Context(six.with_metaclass(abc.ABCMeta, object)):
  """Interface for implementations of client monitoring context manager.

  All client operations are executed inside this context.
  """

  @abc.abstractmethod
  def __init__(self, operation):
    pass

  @abc.abstractmethod
  def __enter__(self):
    return self

  @abc.abstractmethod
  def __exit__(self, exc_type, exc_value, traceback):
    pass


class Nop(Context):
  """Default implementation of Context that does nothing."""

  # pylint: disable=useless-super-delegation
  def __init__(self, operation):
    super(Nop, self).__init__(operation)

  def __enter__(self):
    return self

  def __exit__(self, exc_type, exc_value, traceback):
    pass
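Since every client operation runs inside a `Context`, a non-trivial implementation only needs `__init__(operation)`, `__enter__`, and `__exit__`. A standalone sketch of a timing monitor in the same shape (the class is hypothetical, not part of the library):

```python
import time

class TimingContext(object):
  """Sketch of a non-trivial monitor.Context: time each client operation."""

  def __init__(self, operation):
    self._operation = operation
    self._start = None

  def __enter__(self):
    self._start = time.monotonic()
    return self

  def __exit__(self, exc_type, exc_value, traceback):
    elapsed = time.monotonic() - self._start
    print('%s took %.3fs' % (self._operation, elapsed))

# A client operation would run inside the with-block.
with TimingContext('docker_http.Transport'):
  pass
```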


@@ -0,0 +1,38 @@
# Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
x = sys.modules['containerregistry.client.v1']
from containerregistry.client.v1 import docker_creds_
setattr(x, 'docker_creds', docker_creds_)
from containerregistry.client.v1 import docker_http_
setattr(x, 'docker_http', docker_http_)
from containerregistry.client.v1 import docker_image_
setattr(x, 'docker_image', docker_image_)
from containerregistry.client.v1 import docker_session_
setattr(x, 'docker_session', docker_session_)
from containerregistry.client.v1 import save_
setattr(x, 'save', save_)
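The trailing-underscore imports above exist so that the public attribute name (e.g. `docker_creds`) does not collide with the submodule's own file name during package initialization. The same re-export trick in miniature, using made-up module names:

```python
import sys
import types

# Stand-in for sys.modules['containerregistry.client.v1'] above.
pkg = types.ModuleType('demo_pkg')
sys.modules['demo_pkg'] = pkg

# A submodule whose file name carries a trailing underscore...
impl = types.ModuleType('demo_pkg.docker_creds_')
impl.GREETING = 'hello'

# ...is re-exported under the public, underscore-free name.
setattr(pkg, 'docker_creds', impl)

import demo_pkg
assert demo_pkg.docker_creds.GREETING == 'hello'
```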

View File

@@ -0,0 +1,32 @@
# Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This package exposes credentials for talking to a Docker registry."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from containerregistry.client import docker_creds
class Token(docker_creds.SchemeProvider):
"""Implementation for providing a transaction's X-Docker-Token as creds."""
def __init__(self, token):
super(Token, self).__init__('Token')
self._token = token
@property
def suffix(self):
return self._token
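The base `docker_creds.SchemeProvider` is not included in this diff; from its use as `credentials.Get()` in `docker_http.Request` below, it plausibly joins the scheme with the subclass's `suffix` to form an Authorization header value. A self-contained sketch under that assumption:

```python
class SchemeProvider(object):
    """Sketch of the assumed base class: scheme + suffix -> header value."""

    def __init__(self, scheme):
        self._scheme = scheme

    @property
    def suffix(self):
        raise NotImplementedError()

    def Get(self):
        # Produces an Authorization header value such as 'Token abc123'.
        return '%s %s' % (self._scheme, self.suffix)


class Token(SchemeProvider):
    """Mirrors the Token class above, against the sketched base."""

    def __init__(self, token):
        super(Token, self).__init__('Token')
        self._token = token

    @property
    def suffix(self):
        return self._token


assert Token('abc123').Get() == 'Token abc123'
```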

View File

@@ -0,0 +1,92 @@
# Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This package facilitates HTTP/REST requests to the registry."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from containerregistry.client import docker_creds
from containerregistry.client import docker_name
import httplib2
class BadStatusException(Exception):
"""Exceptions when an unexpected HTTP status is returned."""
def __init__(self, resp, content):
message = 'Response:\n{resp}\nContent:\n{content}'.format(
resp=resp, content=content)
super(BadStatusException, self).__init__(message)
self._resp = resp
self._content = content
@property
def resp(self):
return self._resp
@property
def status(self):
return self._resp.status
@property
def content(self):
return self._content
# pylint: disable=invalid-name
def Request(transport,
url,
credentials,
accepted_codes=None,
body=None,
content_type=None):
"""Wrapper containing much of the boilerplate REST logic for Registry calls.
Args:
transport: the HTTP transport to use for requesting url
url: the URL to which to talk
credentials: the source of the Authorization header
accepted_codes: the list of acceptable http status codes
body: the body to pass into the PUT request (or None for GET)
content_type: the mime-type of the request (or None for JSON)
Raises:
BadStatusException: the status code wasn't among the acceptable set.
Returns:
The response of the HTTP request, and its contents.
"""
headers = {
'content-type': content_type if content_type else 'application/json',
'Authorization': credentials.Get(),
'X-Docker-Token': 'true',
'user-agent': docker_name.USER_AGENT,
}
resp, content = transport.request(
url, 'PUT' if body else 'GET', body=body, headers=headers)
if resp.status not in (accepted_codes or [six.moves.http_client.OK]):
# Use the content returned by GCR as the error message.
raise BadStatusException(resp, content)
return resp, content
def Scheme(endpoint):
"""Returns https scheme for all the endpoints except localhost."""
if endpoint.startswith('localhost:'):
return 'http'
else:
return 'https'
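Two of the decisions `Request()` makes can be restated standalone: the URL scheme (plain HTTP only for a `localhost:` endpoint) and the HTTP verb (PUT when a body is supplied, GET otherwise). A quick sketch:

```python
def Scheme(endpoint):
    """Same rule as Scheme() above: https everywhere except localhost."""
    return 'http' if endpoint.startswith('localhost:') else 'https'


def method_for(body):
    """Same verb selection as Request() above."""
    return 'PUT' if body else 'GET'


assert Scheme('localhost:5000') == 'http'
assert Scheme('gcr.io') == 'https'
assert Scheme('mylocalhost:5000') == 'https'  # only the exact prefix matches
assert method_for(None) == 'GET'
assert method_for('[]') == 'PUT'
```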

View File

@@ -0,0 +1,476 @@
# Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This package provides DockerImage for examining docker_build outputs."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import abc
import gzip
import io
import json
import os
import string
import subprocess
import sys
import tarfile
import tempfile
import threading
from containerregistry.client import docker_creds
from containerregistry.client import docker_name
from containerregistry.client.v1 import docker_creds as v1_creds
from containerregistry.client.v1 import docker_http
import httplib2
import six
from six.moves import range # pylint: disable=redefined-builtin
import six.moves.http_client
class DockerImage(six.with_metaclass(abc.ABCMeta, object)):
"""Interface for implementations that interact with Docker images."""
# pytype: disable=bad-return-type
@abc.abstractmethod
def top(self):
"""The layer id of the topmost layer."""
# pytype: enable=bad-return-type
# pytype: disable=bad-return-type
@abc.abstractmethod
def repositories(self):
"""The json blob of tags, loaded as a dict."""
pass
# pytype: enable=bad-return-type
def parent(self, layer_id):
"""The layer of id of the parent of the provided layer, or None.
Args:
layer_id: the id of the layer whose parentage we're asking
Returns:
The identity of the parent layer, or None if the root.
"""
metadata = json.loads(self.json(layer_id))
if 'parent' not in metadata:
return None
return metadata['parent']
# pytype: disable=bad-return-type
@abc.abstractmethod
def json(self, layer_id):
"""The JSON metadata of the provided layer.
Args:
layer_id: the id of the layer whose metadata we're asking
Returns:
The raw json string of the layer.
"""
pass
# pytype: enable=bad-return-type
# pytype: disable=bad-return-type
@abc.abstractmethod
def layer(self, layer_id):
"""The layer.tar.gz blob of the provided layer id.
Args:
layer_id: the id of the layer for whose layer blob we're asking
Returns:
The raw blob string of the layer.
"""
pass
# pytype: enable=bad-return-type
def uncompressed_layer(self, layer_id):
"""Same as layer() but uncompressed."""
zipped = self.layer(layer_id)
buf = io.BytesIO(zipped)
f = gzip.GzipFile(mode='rb', fileobj=buf)
unzipped = f.read()
return unzipped
def diff_id(self, digest):
"""diff_id only exist in schema v22."""
return None
# pytype: disable=bad-return-type
@abc.abstractmethod
def ancestry(self, layer_id):
"""The ancestry of the given layer, base layer first.
Args:
layer_id: the id of the layer whose ancestry we're asking
Returns:
The list of ancestor IDs, base first, layer_id last.
"""
pass
# pytype: enable=bad-return-type
# __enter__ and __exit__ allow use as a context manager.
@abc.abstractmethod
def __enter__(self):
pass
@abc.abstractmethod
def __exit__(self, unused_type, unused_value, unused_traceback):
pass
# Gzip injects a timestamp into its output, which makes its output and digest
# non-deterministic. To get reproducible pushes, freeze time.
# This approach is based on the following StackOverflow answer:
# http://stackoverflow.com/
# questions/264224/setting-the-gzip-timestamp-from-python
class _FakeTime(object):
def time(self):
return 1225856967.109
gzip.time = _FakeTime()
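On Python versions where `gzip.GzipFile` accepts an `mtime` argument (2.7 and later), the same reproducibility can be had without monkey-patching the module clock, by pinning the timestamp that gzip writes into its header. A sketch of that alternative:

```python
import gzip
import io


def deterministic_gzip(data):
    """Compress data with a fixed mtime so output bytes are reproducible."""
    buf = io.BytesIO()
    # Pinning mtime serves the same goal as the gzip.time patch above.
    with gzip.GzipFile(mode='wb', fileobj=buf, mtime=1225856967) as f:
        f.write(data)
    return buf.getvalue()


# Identical input now yields byte-identical (and digest-identical) output.
assert deterministic_gzip(b'hello layer') == deterministic_gzip(b'hello layer')
```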
class FromShardedTarball(DockerImage):
"""This decodes the sharded image tarballs from docker_build."""
def __init__(self,
layer_to_tarball,
top,
name=None,
compresslevel=9):
self._layer_to_tarball = layer_to_tarball
self._top = top
self._compresslevel = compresslevel
self._memoize = {}
self._lock = threading.Lock()
self._name = name
def _content(self, layer_id, name, memoize=True):
"""Fetches a particular path's contents from the tarball."""
# Check our cache
if memoize:
with self._lock:
if name in self._memoize:
return self._memoize[name]
# tarfile is inherently single-threaded:
# https://mail.python.org/pipermail/python-bugs-list/2015-March/265999.html
# so instead of locking, just open the tarfile for each file
# we want to read.
with tarfile.open(name=self._layer_to_tarball(layer_id), mode='r:') as tar:
try:
content = tar.extractfile(name).read() # pytype: disable=attribute-error
except KeyError:
content = tar.extractfile('./' + name).read() # pytype: disable=attribute-error
# Populate our cache.
if memoize:
with self._lock:
self._memoize[name] = content
return content
def top(self):
"""Override."""
return self._top
def repositories(self):
"""Override."""
return json.loads(self._content(self.top(), 'repositories').decode('utf8'))
def json(self, layer_id):
"""Override."""
return self._content(layer_id, layer_id + '/json').decode('utf8')
# Large, do not memoize.
def uncompressed_layer(self, layer_id):
"""Override."""
return self._content(layer_id, layer_id + '/layer.tar', memoize=False)
# Large, do not memoize.
def layer(self, layer_id):
"""Override."""
unzipped = self.uncompressed_layer(layer_id)
buf = io.BytesIO()
f = gzip.GzipFile(mode='wb', compresslevel=self._compresslevel, fileobj=buf)
try:
f.write(unzipped)
finally:
f.close()
zipped = buf.getvalue()
return zipped
def ancestry(self, layer_id):
"""Override."""
p = self.parent(layer_id)
if not p:
return [layer_id]
return [layer_id] + self.ancestry(p)
# __enter__ and __exit__ allow use as a context manager.
def __enter__(self):
return self
def __exit__(self, unused_type, unused_value, unused_traceback):
pass
def _get_top(tarball, name=None):
"""Get the topmost layer in the image tarball."""
with tarfile.open(name=tarball, mode='r:') as tar:
reps = tar.extractfile('repositories') or tar.extractfile('./repositories')
if reps is None:
raise ValueError('Tarball must contain a repositories file')
repositories = json.loads(reps.read().decode('utf8'))
if name:
key = str(name.as_repository())
return repositories[key][name.tag]
if len(repositories) != 1:
raise ValueError('Tarball must contain a single repository, '
'or a name must be specified to FromTarball.')
for (unused_repo, tags) in six.iteritems(repositories):
if len(tags) != 1:
raise ValueError('Tarball must contain a single tag, '
'or a name must be specified to FromTarball.')
for (unused_tag, layer_id) in six.iteritems(tags):
return layer_id
raise Exception('Unreachable code in _get_top()')
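The `repositories` file that `_get_top()` parses is a small JSON map of repository name to tag-to-layer-id mappings. A sketch of its shape and of the single-repository, single-tag resolution path (the repository name and layer id here are made up):

```python
import json

# Shape of the v1 'repositories' file, as parsed by _get_top() above.
repositories = json.loads(json.dumps({
    'gcr.io/some-project/some-image': {
        'latest': 'deadbeef' * 8,  # a 64-hex-char layer id (made up)
    },
}))

# With exactly one repository holding exactly one tag, the sole layer id
# is the image's top layer.
(tags,) = repositories.values()
(top,) = tags.values()
assert top == 'deadbeef' * 8
```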
class FromTarball(FromShardedTarball):
"""This decodes the image tarball output of docker_build for upload."""
def __init__(self,
tarball,
name=None,
compresslevel=9):
super(FromTarball, self).__init__(
lambda unused_id: tarball,
_get_top(tarball, name),
name=name,
compresslevel=compresslevel)
class FromRegistry(DockerImage):
"""This accesses a docker image hosted on a registry (non-local)."""
def __init__(
self,
name,
basic_creds,
transport):
self._name = name
self._creds = basic_creds
self._transport = transport
# Set up in __enter__
self._tags = {}
self._response = {}
def top(self):
"""Override."""
assert isinstance(self._name, docker_name.Tag)
return self._tags[self._name.tag]
def repositories(self):
"""Override."""
return {self._name.repository: self._tags}
def tags(self):
"""Lists the tags present in the remote repository."""
return list(self.raw_tags().keys())
def raw_tags(self):
"""Dictionary of tag to image id."""
return self._tags
def _content(self, suffix):
if suffix not in self._response:
_, self._response[suffix] = docker_http.Request(
self._transport, '{scheme}://{endpoint}/v1/images/{suffix}'.format(
scheme=docker_http.Scheme(self._endpoint),
endpoint=self._endpoint,
suffix=suffix), self._creds, [six.moves.http_client.OK])
return self._response[suffix]
def json(self, layer_id):
"""Override."""
# GET server1/v1/images/IMAGEID/json
return self._content(layer_id + '/json').decode('utf8')
# Large, do not memoize.
def layer(self, layer_id):
"""Override."""
# GET server1/v1/images/IMAGEID/layer
return self._content(layer_id + '/layer')
def ancestry(self, layer_id):
"""Override."""
# GET server1/v1/images/IMAGEID/ancestry
return json.loads(self._content(layer_id + '/ancestry').decode('utf8'))
# __enter__ and __exit__ allow use as a context manager.
def __enter__(self):
# This initiates the pull by issuing:
# GET H:P/v1/repositories/R/images
resp, unused_content = docker_http.Request(
self._transport,
'{scheme}://{registry}/v1/repositories/{repository_name}/images'.format(
scheme=docker_http.Scheme(self._name.registry),
registry=self._name.registry,
repository_name=self._name.repository), self._creds,
[six.moves.http_client.OK])
# The response should have an X-Docker-Token header, which
# we should extract and annotate subsequent requests with:
# Authorization: Token {extracted value}
self._creds = v1_creds.Token(resp['x-docker-token'])
self._endpoint = resp['x-docker-endpoints']
# TODO(user): Consider also supporting cookies, which are
# used by Quay.io for authenticated sessions.
# Next, fetch the set of tags in this repository.
# GET server1/v1/repositories/R/tags
resp, content = docker_http.Request(
self._transport,
'{scheme}://{endpoint}/v1/repositories/{repository_name}/tags'.format(
scheme=docker_http.Scheme(self._endpoint),
endpoint=self._endpoint,
repository_name=self._name.repository), self._creds,
[six.moves.http_client.OK])
self._tags = json.loads(content.decode('utf8'))
return self
def __exit__(self, unused_type, unused_value, unused_traceback):
pass
class Random(DockerImage):
"""This generates an image with Random properties.
We ensure basic consistency of the generated docker
image.
"""
# TODO(b/36589467): Add function arg for creating blob.
def __init__(self,
sample,
num_layers=5,
layer_byte_size=64,
blobs=None):
# Generate the image.
self._ancestry = []
self._layers = {}
num_layers = len(blobs) if blobs else num_layers
for i in range(num_layers):
# Avoid repetitions.
while True:
layer_id = self._next_id(sample)
if layer_id not in self._ancestry:
self._ancestry += [layer_id]
blob = blobs[i] if blobs else None
self._layers[layer_id] = self._next_layer(
sample, layer_byte_size, blob)
break
def top(self):
"""Override."""
return self._ancestry[0]
def repositories(self):
"""Override."""
return {'random/image': {'latest': self.top(),}}
def json(self, layer_id):
"""Override."""
metadata = {'id': layer_id}
ancestry = self.ancestry(layer_id)
if len(ancestry) != 1:
metadata['parent'] = ancestry[1]
return json.dumps(metadata, sort_keys=True)
def layer(self, layer_id):
"""Override."""
return self._layers[layer_id]
def ancestry(self, layer_id):
"""Override."""
assert layer_id in self._ancestry
index = self._ancestry.index(layer_id)
return self._ancestry[index:]
def _next_id(self, sample):
return sample(b'0123456789abcdef', 64).decode('utf8')
# pylint: disable=missing-docstring
def _next_layer(self, sample,
layer_byte_size, blob):
buf = io.BytesIO()
# TODO(user): Consider doing something more creative...
with tarfile.open(fileobj=buf, mode='w:gz') as tar:
if blob:
info = tarfile.TarInfo(name='./'+self._next_id(sample))
info.size = len(blob)
tar.addfile(info, fileobj=io.BytesIO(blob))
# Linux optimization, use dd for data file creation.
elif sys.platform.startswith('linux') and layer_byte_size >= 1024 * 1024:
# Floor division: true division is in effect via the __future__ import.
mb = layer_byte_size // (1024 * 1024)
tempdir = tempfile.mkdtemp()
data_filename = os.path.join(tempdir, 'a.bin')
if os.path.exists(data_filename):
os.remove(data_filename)
process = subprocess.Popen([
'dd', 'if=/dev/urandom',
'of=%s' % data_filename, 'bs=1M',
'count=%d' % mb
])
process.wait()
with io.open(data_filename, u'rb') as fd:
info = tar.gettarinfo(name=data_filename)
tar.addfile(info, fileobj=fd)
os.remove(data_filename)
os.rmdir(tempdir)
else:
data = sample(string.printable.encode('utf8'), layer_byte_size)
info = tarfile.TarInfo(name='./' + self._next_id(sample))
info.size = len(data)
tar.addfile(info, fileobj=io.BytesIO(data))
return buf.getvalue()
# __enter__ and __exit__ allow use as a context manager.
def __enter__(self):
return self
def __exit__(self, unused_type, unused_value, unused_traceback):
pass
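The fallback branch of `_next_layer()` above builds a one-file gzipped tarball entirely in memory. That pattern can be isolated and round-tripped as follows (file name and contents here are arbitrary):

```python
import io
import tarfile


def make_layer(filename, data):
    """One file, written into an in-memory gzipped tarball."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode='w:gz') as tar:
        info = tarfile.TarInfo(name='./' + filename)
        info.size = len(data)
        tar.addfile(info, fileobj=io.BytesIO(data))
    return buf.getvalue()


blob = make_layer('a.bin', b'x' * 64)
# Round-trip: the member can be read back intact.
with tarfile.open(fileobj=io.BytesIO(blob), mode='r:gz') as tar:
    assert tar.extractfile('./a.bin').read() == b'x' * 64
```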

View File

@@ -0,0 +1,201 @@
# Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This package manages interaction sessions with the docker registry.
'Push' implements the go/docker:push session.
'Pull' is not implemented (go/docker:pull).
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import logging
from containerregistry.client import docker_creds
from containerregistry.client import docker_name
from containerregistry.client.v1 import docker_creds as v1_creds
from containerregistry.client.v1 import docker_http
from containerregistry.client.v1 import docker_image
import httplib2
import six.moves.http_client
class Push(object):
"""Push encapsulates a go/docker:push session."""
def __init__(self, name, creds,
transport):
"""Constructor.
Args:
name: the fully-qualified name of the tag to push.
creds: provider for authorizing requests.
transport: the http transport to use for sending requests.
Raises:
TypeError: an incorrectly typed argument was supplied.
"""
self._name = name
self._basic_creds = creds
self._transport = transport
self._top = None
# __enter__ and __exit__ allow use as a context manager.
def __enter__(self):
# This initiates the upload by issuing:
# PUT H:P/v1/repositories/R/
# In that request, we specify the headers:
# Content-Type: application/json
# Authorization: Basic {base64 encoded auth token}
# X-Docker-Token: true
resp, unused_content = docker_http.Request(
self._transport,
'{scheme}://{registry}/v1/repositories/{repository}/'.format(
scheme=docker_http.Scheme(self._name.registry),
registry=self._name.registry,
repository=self._name.repository),
self._basic_creds,
accepted_codes=[
six.moves.http_client.OK, six.moves.http_client.CREATED
],
body='[]') # pytype: disable=wrong-arg-types
# The response should have an X-Docker-Token header, which
# we should extract and annotate subsequent requests with:
# Authorization: Token {extracted value}
self._token_creds = v1_creds.Token(resp['x-docker-token'])
self._endpoint = resp['x-docker-endpoints']
# TODO(user): Consider also supporting cookies, which are
# used by Quay.io for authenticated sessions.
logging.info('Initiated upload of: %s', self._name)
return self
def _exists(self, layer_id):
"""Check the remote for the given layer."""
resp, unused_content = docker_http.Request(
self._transport,
'{scheme}://{endpoint}/v1/images/{layer}/json'.format(
scheme=docker_http.Scheme(self._endpoint),
endpoint=self._endpoint,
layer=layer_id),
self._token_creds,
accepted_codes=[
six.moves.http_client.OK, six.moves.http_client.NOT_FOUND
])
return resp.status == six.moves.http_client.OK
def _put_json(self, image, layer_id):
"""Upload the json for a single layer."""
docker_http.Request(
self._transport,
'{scheme}://{endpoint}/v1/images/{layer}/json'.format(
scheme=docker_http.Scheme(self._endpoint),
endpoint=self._endpoint,
layer=layer_id),
self._token_creds,
accepted_codes=[six.moves.http_client.OK],
body=image.json(layer_id).encode('utf8'))
def _put_layer(self, image, layer_id):
"""Upload the aufs tarball for a single layer."""
# TODO(user): We should stream this instead of loading
# it into memory.
docker_http.Request(
self._transport,
'{scheme}://{endpoint}/v1/images/{layer}/layer'.format(
scheme=docker_http.Scheme(self._endpoint),
endpoint=self._endpoint,
layer=layer_id),
self._token_creds,
accepted_codes=[six.moves.http_client.OK],
body=image.layer(layer_id),
content_type='application/octet-stream')
def _put_checksum(self, image,
layer_id):
"""Upload the checksum for a single layer."""
# GCR doesn't use this for anything today,
# so no point in implementing it.
pass
def _upload_one(self, image,
layer_id):
"""Upload a single layer, after checking whether it exists already."""
if self._exists(layer_id):
logging.info('Layer %s exists, skipping', layer_id)
return
# TODO(user): This ordering is consistent with the docker client,
# however, only the json needs to be uploaded serially. We can upload
# the blobs in parallel. Today, GCR allows the layer to be uploaded
# first.
self._put_json(image, layer_id)
self._put_layer(image, layer_id)
self._put_checksum(image, layer_id)
logging.info('Layer %s pushed.', layer_id)
def upload(self, image):
"""Upload the layers of the given image.
Args:
image: the image tarball to upload.
"""
self._top = image.top()
for layer in reversed(image.ancestry(self._top)):
self._upload_one(image, layer)
def _put_tag(self):
"""Upload the new value of the tag we are pushing."""
docker_http.Request(
self._transport,
'{scheme}://{endpoint}/v1/repositories/{repository}/tags/{tag}'.format(
scheme=docker_http.Scheme(self._endpoint),
endpoint=self._endpoint,
repository=self._name.repository,
tag=self._name.tag),
self._token_creds,
accepted_codes=[six.moves.http_client.OK],
body=('"%s"' % self._top).encode('utf8'))
def _put_images(self):
"""Close the session by putting to the .../images endpoint."""
docker_http.Request(
self._transport,
'{scheme}://{registry}/v1/repositories/{repository}/images'.format(
scheme=docker_http.Scheme(self._name.registry),
registry=self._name.registry,
repository=self._name.repository),
self._basic_creds,
accepted_codes=[six.moves.http_client.NO_CONTENT],
body=b'[]')
def __exit__(self, exception_type, unused_value, unused_traceback):
if exception_type:
logging.error('Error during upload of: %s', self._name)
return
# This should complete the upload by issuing:
# PUT server1/v1/repositories/R/tags/T
# for each tag, with token auth talking to endpoint.
self._put_tag()
# Then issuing:
# PUT H:P/v1/repositories/R/images
# to complete the transaction, with basic auth talking to registry.
self._put_images()
logging.info('Finished upload of: %s', self._name)

View File

@@ -0,0 +1,100 @@
# Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This package provides tools for saving docker images."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import io
import json
import tarfile
from containerregistry.client import docker_name
from containerregistry.client.v1 import docker_image
import six
def multi_image_tarball(
tag_to_image,
tar):
"""Produce a "docker save" compatible tarball from the DockerImages.
Args:
tag_to_image: A dictionary of tags to the images they label.
tar: the open tarfile into which we are writing the image tarball.
"""
def add_file(filename, contents):
info = tarfile.TarInfo(filename)
info.size = len(contents)
tar.addfile(tarinfo=info, fileobj=io.BytesIO(contents))
seen = set()
repositories = {}
# Each layer is encoded as a directory in the larger tarball of the form:
# {layer_id}\
# layer.tar
# VERSION
# json
for (tag, image) in six.iteritems(tag_to_image):
# Add this image's repositories entry.
repo = str(tag.as_repository())
tags = repositories.get(repo, {})
tags[tag.tag] = image.top()
repositories[repo] = tags
for layer_id in image.ancestry(image.top()):
# Add each layer_id exactly once.
if layer_id in seen or json.loads(image.json(layer_id)).get('throwaway'):
continue
seen.add(layer_id)
# VERSION generally seems to contain 1.0, not entirely sure
# what the point of this is.
add_file(layer_id + '/VERSION', b'1.0')
# Add the unzipped layer tarball
content = image.uncompressed_layer(layer_id)
add_file(layer_id + '/layer.tar', content)
# Now the json metadata
add_file(layer_id + '/json', image.json(layer_id).encode('utf8'))
# Add the metadata tagging the top layer.
add_file('repositories',
json.dumps(repositories, sort_keys=True).encode('utf8'))
def tarball(name, image,
tar):
"""Produce a "docker save" compatible tarball from the DockerImage.
Args:
name: The tag name to write into the repositories file.
image: a docker image to save.
tar: the open tarfile into which we are writing the image tarball.
"""
def add_file(filename, contents):
info = tarfile.TarInfo(filename)
info.size = len(contents)
tar.addfile(tarinfo=info, fileobj=io.BytesIO(contents))
multi_image_tarball({name: image}, tar)
# Add our convenience file with the top layer's ID.
add_file('top', image.top().encode('utf8'))
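The per-layer directory layout that `multi_image_tarball()` writes can be reproduced without a `DockerImage`, which makes the format easier to see. A sketch with one fake layer (the layer id, repository name, and empty `layer.tar` are placeholders):

```python
import io
import json
import tarfile

layer_id = 'ab' * 32  # made-up 64-char layer id
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode='w') as tar:
    def add_file(filename, contents):
        info = tarfile.TarInfo(filename)
        info.size = len(contents)
        tar.addfile(tarinfo=info, fileobj=io.BytesIO(contents))
    # Each layer directory holds VERSION, layer.tar, and json.
    add_file(layer_id + '/VERSION', b'1.0')
    add_file(layer_id + '/layer.tar', b'')  # would be the uncompressed layer
    add_file(layer_id + '/json', json.dumps({'id': layer_id}).encode('utf8'))
    # Plus the top-level repositories file tagging the top layer.
    add_file('repositories',
             json.dumps({'random/image': {'latest': layer_id}}).encode('utf8'))

with tarfile.open(fileobj=io.BytesIO(buf.getvalue()), mode='r') as tar:
    names = tar.getnames()
assert layer_id + '/VERSION' in names
assert 'repositories' in names
```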

View File

@@ -0,0 +1,50 @@
# Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
x = sys.modules['containerregistry.client.v2']
from containerregistry.client.v2 import docker_creds_
setattr(x, 'docker_creds', docker_creds_)
from containerregistry.client.v2 import docker_http_
setattr(x, 'docker_http', docker_http_)
from containerregistry.client.v2 import util_
setattr(x, 'util', util_)
from containerregistry.client.v2 import docker_digest_
setattr(x, 'docker_digest', docker_digest_)
from containerregistry.client.v2 import docker_image_
setattr(x, 'docker_image', docker_image_)
from containerregistry.client.v2 import v1_compat_
setattr(x, 'v1_compat', v1_compat_)
from containerregistry.client.v2 import docker_session_
setattr(x, 'docker_session', docker_session_)
from containerregistry.client.v2 import append_
setattr(x, 'append', append_)

View File

@@ -0,0 +1,106 @@
# Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This package provides DockerImage for examining docker_build outputs."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import binascii
import json
import os
from containerregistry.client.v2 import docker_digest
from containerregistry.client.v2 import docker_image
from containerregistry.client.v2 import util
# _EMPTY_LAYER_TAR_ID is the sha256 of an empty tarball.
_EMPTY_LAYER_TAR_ID = 'sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4' # pylint: disable=line-too-long
class Layer(docker_image.DockerImage):
"""Appends a new layer on top of a base image.
This augments a base docker image with new files from a gzipped tarball,
adds environment variables and exposes a port.
"""
def __init__(self, base, tar_gz,
port, *envs):
"""Creates a new layer on top of a base with optional tar.gz, port or envs.
Args:
base: a base DockerImage for a new layer.
tar_gz: an optional gzipped tarball passed as a string with filesystem
changeset.
port: an optional port to be exposed, passed as a string. For example:
'8080/tcp'.
*envs: environment variables passed as strings in the format:
'ENV_ONE=val', 'ENV_TWO=val2'.
"""
self._base = base
unsigned_manifest, unused_signatures = util.DetachSignatures(
self._base.manifest())
manifest = json.loads(unsigned_manifest)
v1_compat = json.loads(manifest['history'][0]['v1Compatibility'])
if tar_gz:
self._blob = tar_gz
self._blob_sum = docker_digest.SHA256(self._blob)
v1_compat['throwaway'] = False
else:
self._blob_sum = _EMPTY_LAYER_TAR_ID
self._blob = b''
v1_compat['throwaway'] = True
manifest['fsLayers'].insert(0, {'blobSum': self._blob_sum})
v1_compat['parent'] = v1_compat['id']
v1_compat['id'] = binascii.hexlify(os.urandom(32)).decode('utf8')
config = v1_compat.get('config', {}) or {}
envs = list(envs)
if envs:
env_keys = [env.split('=')[0] for env in envs]
old_envs = config.get('Env', []) or []
old_envs = [env for env in old_envs if env.split('=')[0] not in env_keys]
config['Env'] = old_envs + envs
if port is not None:
old_ports = config.get('ExposedPorts', {}) or {}
old_ports[port] = {}
config['ExposedPorts'] = old_ports
v1_compat['config'] = config
manifest['history'].insert(
0, {'v1Compatibility': json.dumps(v1_compat, sort_keys=True)})
self._manifest = util.Sign(json.dumps(manifest, sort_keys=True))
def manifest(self):
"""Override."""
return self._manifest
def blob(self, digest):
"""Override."""
if digest == self._blob_sum:
return self._blob
return self._base.blob(digest)
# __enter__ and __exit__ allow use as a context manager.
def __enter__(self):
"""Override."""
return self
def __exit__(self, unused_type, unused_value, unused_traceback):
"""Override."""
return
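The environment-merge step in `Layer.__init__` above (a new `KEY=value` entry displaces any existing entry with the same key, while order is otherwise preserved) can be restated as a small standalone function:

```python
def merge_envs(old_envs, new_envs):
    """Same replacement rule as Layer.__init__ above."""
    new_keys = [env.split('=')[0] for env in new_envs]
    kept = [env for env in old_envs if env.split('=')[0] not in new_keys]
    return kept + new_envs


assert merge_envs(['PATH=/bin', 'FOO=old'], ['FOO=new']) == [
    'PATH=/bin', 'FOO=new']
```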

View File

@@ -0,0 +1,28 @@
# Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This package exposes credentials for talking to a Docker registry."""
from containerregistry.client import docker_creds
class Bearer(docker_creds.SchemeProvider):
"""Implementation for providing a transaction's Bearer token as creds."""
def __init__(self, bearer_token):
super(Bearer, self).__init__('Bearer')
self._bearer_token = bearer_token
@property
def suffix(self):
return self._bearer_token

View File

@@ -0,0 +1,34 @@
# Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This package holds a handful of utilities for calculating digests."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import hashlib
from containerregistry.client.v2 import util
def SHA256(content, prefix='sha256:'):
"""Return 'sha256:' + hex(sha256(content))."""
return prefix + hashlib.sha256(content).hexdigest()
def SignedManifestToSHA256(manifest):
"""Return 'sha256:' + hex(sha256(manifest - signatures))."""
unsigned_manifest, unused_signatures = util.DetachSignatures(manifest)
return SHA256(unsigned_manifest.encode('utf8'))

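The digest helpers above are thin wrappers over `hashlib`. A standalone sketch of the `SHA256` helper, checked against the well-known digest of the empty byte string:

```python
import hashlib


def sha256_digest(content, prefix='sha256:'):
    # Mirrors docker_digest.SHA256: the registry's content addresses are
    # 'sha256:' + the lowercase hex SHA-256 of the raw bytes.
    return prefix + hashlib.sha256(content).hexdigest()


# The SHA-256 of the empty input is a well-known constant:
print(sha256_digest(b''))
# sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
```

`SignedManifestToSHA256` is the same operation applied after stripping the JWS signatures, which is why the manifest must be re-serialized byte-for-byte before hashing.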
View File

@@ -0,0 +1,415 @@
# Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This package facilitates HTTP/REST requests to the registry."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import json
import re
import threading
from containerregistry.client import docker_creds
from containerregistry.client import docker_name
from containerregistry.client.v2 import docker_creds as v2_creds
import httplib2
import six.moves.http_client
import six.moves.urllib.parse
# Options for docker_http.Transport actions
PULL = 'pull'
PUSH = 'push,pull'
# For now DELETE is PUSH, which is the read/write ACL.
DELETE = PUSH
CATALOG = 'catalog'
ACTIONS = [PULL, PUSH, DELETE, CATALOG]
class Diagnostic(object):
"""Diagnostic encapsulates a Registry v2 diagnostic message.
This captures one of the "errors" from a v2 Registry error response
message, as outlined here:
https://github.com/docker/distribution/blob/master/docs/spec/api.md#errors
Args:
error: the decoded JSON of the "errors" array element.
"""
def __init__(self, error):
self._error = error
def __eq__(self, other):
return (self.code == other.code and self.message == other.message and
self.detail == other.detail)
@property
def code(self):
return self._error.get('code', 'UNKNOWN')
@property
def message(self):
return self._error.get('message', '<no message specified>')
@property
def detail(self):
return self._error.get('detail', '<no details provided>')
def _DiagnosticsFromContent(content):
"""Extract and return the diagnostics from content."""
try:
content = content.decode('utf8')
except: # pylint: disable=bare-except
# Assume it's already decoded. Defensive coding for old py2 habits that
# are hard to break. Passing does not make the problem worse.
pass
try:
o = json.loads(content)
return [Diagnostic(d) for d in o.get('errors', [])]
except: # pylint: disable=bare-except
return [Diagnostic({
'code': 'UNKNOWN',
'message': content,
})]
class V2DiagnosticException(Exception):
"""Exceptions when an unexpected HTTP status is returned."""
def __init__(self, resp, content):
self._resp = resp
self._diagnostics = _DiagnosticsFromContent(content)
message = '\n'.join(
['response: %s' % resp] +
['%s: %s' % (d.message, d.detail) for d in self._diagnostics])
super(V2DiagnosticException, self).__init__(message)
@property
def diagnostics(self):
return self._diagnostics
@property
def response(self):
return self._resp
@property
def status(self):
return self._resp.status
class BadStateException(Exception):
"""Exceptions when we have entered an unexpected state."""
class TokenRefreshException(BadStateException):
"""Exception when token refresh fails."""
def _CheckState(predicate, message=None):
if not predicate:
raise BadStateException(message if message else 'Unknown')
_ANONYMOUS = ''
_BASIC = 'Basic'
_BEARER = 'Bearer'
_REALM_PFX = 'realm='
_SERVICE_PFX = 'service='
class Transport(object):
"""HTTP Transport abstraction to handle automatic v2 reauthentication.
In the v2 Registry protocol, all of the API endpoints expect to receive
'Bearer' authentication. These Bearer tokens are generated by exchanging
'Basic' or 'Anonymous' authentication with an authentication endpoint
designated by the opening ping request.
The Bearer tokens are scoped to a resource (typically repository), and
are generated with a set of capabilities embedded (e.g. push, pull).
The Docker client has a baked-in 60-second expiration for Bearer tokens,
and upon expiration, registries can reject any request with a 401. The
transport should automatically refresh the Bearer token and reissue the
request.
Args:
name: the structured name of the docker resource being referenced.
creds: the basic authentication credentials to use for authentication
challenge exchanges.
transport: the HTTP transport to use under the hood.
action: One of docker_http.ACTIONS, for which we plan to use this transport
"""
def __init__(self, name,
creds,
transport, action):
self._name = name
self._basic_creds = creds
self._transport = transport
self._action = action
self._lock = threading.Lock()
_CheckState(action in ACTIONS,
'Invalid action supplied to docker_http.Transport: %s' % action)
# Ping once to establish realm, and then get a good credential
# for use with this transport.
self._Ping()
if self._authentication == _BEARER:
self._Refresh()
elif self._authentication == _BASIC:
self._creds = self._basic_creds
else:
self._creds = docker_creds.Anonymous()
def _Ping(self):
"""Ping the v2 Registry.
Only called during transport construction, this pings the listed
v2 registry. The point of this ping is to establish the "realm"
and "service" to use for Basic for Bearer-Token exchanges.
"""
# This initiates the pull by issuing a v2 ping:
# GET H:P/v2/
headers = {
'content-type': 'application/json',
'user-agent': docker_name.USER_AGENT,
}
resp, content = self._transport.request(
'{scheme}://{registry}/v2/'.format(
scheme=Scheme(self._name.registry), registry=self._name.registry),
'GET',
body=None,
headers=headers)
# We expect a www-authenticate challenge.
_CheckState(
resp.status in [
six.moves.http_client.OK, six.moves.http_client.UNAUTHORIZED
], 'Unexpected response pinging the registry: {}\nBody: {}'.format(
resp.status, content or '<empty>'))
# The registry is authenticated iff we have an authentication challenge.
if resp.status == six.moves.http_client.OK:
self._authentication = _ANONYMOUS
self._service = 'none'
self._realm = 'none'
return
challenge = resp['www-authenticate']
_CheckState(' ' in challenge,
'Unexpected "www-authenticate" header form: %s' % challenge)
(self._authentication, remainder) = challenge.split(' ', 1)
# Normalize the authentication scheme to have exactly the first letter
# capitalized. Scheme matching is required to be case insensitive:
# https://tools.ietf.org/html/rfc7235#section-2.1
self._authentication = self._authentication.capitalize()
_CheckState(self._authentication in [_BASIC, _BEARER],
'Unexpected "www-authenticate" challenge type: %s' %
self._authentication)
# Default "_service" to the registry
self._service = self._name.registry
tokens = remainder.split(',')
for t in tokens:
if t.startswith(_REALM_PFX):
self._realm = t[len(_REALM_PFX):].strip('"')
elif t.startswith(_SERVICE_PFX):
self._service = t[len(_SERVICE_PFX):].strip('"')
# Make sure these got set.
_CheckState(self._realm, 'Expected a "%s" in "www-authenticate" '
'header: %s' % (_REALM_PFX, challenge))
def _Scope(self):
"""Construct the resource scope to pass to a v2 auth endpoint."""
return self._name.scope(self._action)
def _Refresh(self):
"""Refreshes the Bearer token credentials underlying this transport.
This utilizes the "realm" and "service" established during _Ping to
set up _creds with up-to-date credentials, by passing the
client-provided _basic_creds to the authorization realm.
This is generally called under two circumstances:
1) When the transport is created (eagerly)
2) When a request fails on a 401 Unauthorized
Raises:
TokenRefreshException: Error during token exchange.
"""
headers = {
'content-type': 'application/json',
'user-agent': docker_name.USER_AGENT,
'Authorization': self._basic_creds.Get()
}
parameters = {
'scope': self._Scope(),
'service': self._service,
}
resp, content = self._transport.request(
# 'realm' includes scheme and path
'{realm}?{query}'.format(
realm=self._realm,
query=six.moves.urllib.parse.urlencode(parameters)),
'GET',
body=None,
headers=headers)
if resp.status != six.moves.http_client.OK:
raise TokenRefreshException('Bad status during token exchange: %d\n%s' %
(resp.status, content))
try:
content = content.decode('utf8')
except: # pylint: disable=bare-except
# Assume it's already decoded. Defensive coding for old py2 habits that
# are hard to break. Passing does not make the problem worse.
pass
wrapper_object = json.loads(content)
token = wrapper_object.get('token') or wrapper_object.get('access_token')
_CheckState(token is not None,
'Malformed JSON response: %s' % content)
with self._lock:
# We have successfully reauthenticated.
self._creds = v2_creds.Bearer(token)
# pylint: disable=invalid-name
def Request(
self,
url,
accepted_codes=None,
method=None,
body=None,
content_type=None):
"""Wrapper containing much of the boilerplate REST logic for Registry calls.
Args:
url: the URL to which to talk
accepted_codes: the list of acceptable http status codes
method: the HTTP method to use (defaults to GET/PUT depending on
whether body is provided)
body: the body to pass into the PUT request (or None for GET)
content_type: the mime-type of the request (or None for JSON).
content_type is ignored when body is None.
Raises:
BadStateException: an unexpected internal state has been encountered.
V2DiagnosticException: an error has occurred interacting with v2.
Returns:
The response of the HTTP request, and its contents.
"""
if not method:
method = 'GET' if not body else 'PUT'
# If the first request fails on a 401 Unauthorized, then refresh the
# Bearer token and retry, if the authentication mode is bearer.
for retry in [self._authentication == _BEARER, False]:
# self._creds may be changed by self._Refresh(), so do
# not hoist this.
headers = {
'user-agent': docker_name.USER_AGENT,
}
auth = self._creds.Get()
if auth:
headers['Authorization'] = auth
if body: # Requests w/ bodies should have content-type.
headers['content-type'] = (
content_type if content_type else 'application/json')
# POST/PUT requests require a content-length even when no body is supplied.
if method in ('POST', 'PUT') and not body:
headers['content-length'] = '0'
resp, content = self._transport.request(
url, method, body=body, headers=headers)
if resp.status != six.moves.http_client.UNAUTHORIZED:
break
elif retry:
# On Unauthorized, refresh the credential and retry.
self._Refresh()
if resp.status not in accepted_codes:
# Use the content returned by GCR as the error message.
raise V2DiagnosticException(resp, content)
return resp, content
def PaginatedRequest(self,
url,
accepted_codes=None,
method=None,
body=None,
content_type=None
):
"""Wrapper around Request that follows Link headers if they exist.
Args:
url: the URL to which to talk
accepted_codes: the list of acceptable http status codes
method: the HTTP method to use (defaults to GET/PUT depending on
whether body is provided)
body: the body to pass into the PUT request (or None for GET)
content_type: the mime-type of the request (or None for JSON)
Yields:
The return value of calling Request for each page of results.
"""
next_page = url
while next_page:
resp, content = self.Request(next_page, accepted_codes, method, body,
content_type)
yield resp, content
next_page = ParseNextLinkHeader(resp)
def ParseNextLinkHeader(resp):
"""Returns "next" link from RFC 5988 Link header or None if not present."""
link = resp.get('link')
if not link:
return None
m = re.match(r'.*<(.+)>;\s*rel="next".*', link)
if not m:
return None
return m.group(1)
def Scheme(endpoint):
"""Returns https scheme for all the endpoints except localhost."""
if endpoint.startswith('localhost:'):
return 'http'
elif re.match(r'.*\.local(?:host)?(?::\d{1,5})?$', endpoint):
return 'http'
else:
return 'https'

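Two pieces of `docker_http` above lend themselves to standalone sketches: the `www-authenticate` challenge parsing in `Transport._Ping`, and the RFC 5988 `Link` header parsing in `ParseNextLinkHeader`. The helper names below are illustrative; like the original, the challenge parser naively splits on commas and so assumes no commas inside the quoted values:

```python
import re

_REALM_PFX = 'realm='
_SERVICE_PFX = 'service='


def parse_challenge(challenge, default_service):
    # Mirrors the parsing in Transport._Ping: split off the scheme, then
    # scan comma-separated tokens for realm= and service=.
    scheme, remainder = challenge.split(' ', 1)
    scheme = scheme.capitalize()  # scheme matching is case-insensitive (RFC 7235)
    realm, service = None, default_service
    for t in remainder.split(','):
        if t.startswith(_REALM_PFX):
            realm = t[len(_REALM_PFX):].strip('"')
        elif t.startswith(_SERVICE_PFX):
            service = t[len(_SERVICE_PFX):].strip('"')
    return scheme, realm, service


def parse_next_link(link):
    # Mirrors ParseNextLinkHeader's regex for the rel="next" target.
    m = re.match(r'.*<(.+)>;\s*rel="next".*', link)
    return m.group(1) if m else None


print(parse_challenge(
    'Bearer realm="https://auth.docker.io/token",service="registry.docker.io"',
    'index.docker.io'))
# ('Bearer', 'https://auth.docker.io/token', 'registry.docker.io')
print(parse_next_link('</v2/_catalog?n=100&last=busybox>; rel="next"'))
# /v2/_catalog?n=100&last=busybox
```

The realm and service recovered here are exactly what `_Refresh` later passes to the token endpoint, along with the `scope` for the repository and action.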
View File

@@ -0,0 +1,319 @@
# Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This package provides DockerImage for examining docker_build outputs."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import abc
import gzip
import io
import json
import os
import tarfile
from typing import Any, Dict, Iterator, List, Set, Text, Union # pylint: disable=g-multiple-import,unused-import
from containerregistry.client import docker_creds
from containerregistry.client import docker_name
from containerregistry.client.v2 import docker_digest
from containerregistry.client.v2 import docker_http
import httplib2
import six
import six.moves.http_client
class DigestMismatchedError(Exception):
"""Exception raised when a digest mismatch is encountered."""
class DockerImage(six.with_metaclass(abc.ABCMeta, object)):
"""Interface for implementations that interact with Docker images."""
def fs_layers(self):
"""The ordered collection of filesystem layers that comprise this image."""
manifest = json.loads(self.manifest())
return [x['blobSum'] for x in manifest['fsLayers']]
def blob_set(self):
"""The unique set of blobs that compose to create the filesystem."""
return set(self.fs_layers())
def digest(self):
"""The digest of the manifest."""
return docker_digest.SignedManifestToSHA256(self.manifest())
# pytype: disable=bad-return-type
@abc.abstractmethod
def manifest(self):
"""The JSON manifest referenced by the tag/digest.
Returns:
The raw json manifest
"""
# pytype: enable=bad-return-type
def blob_size(self, digest):
"""The byte size of the raw blob."""
return len(self.blob(digest))
# pytype: disable=bad-return-type
@abc.abstractmethod
def blob(self, digest):
"""The raw blob of the layer.
Args:
digest: the 'algo:digest' of the layer being addressed.
Returns:
The raw blob bytes of the layer.
"""
# pytype: enable=bad-return-type
def uncompressed_blob(self, digest):
"""Same as blob() but uncompressed."""
buf = io.BytesIO(self.blob(digest))
f = gzip.GzipFile(mode='rb', fileobj=buf)
return f.read()
def diff_id(self, digest):
"""diff_id only exist in schema v22."""
return None
# __enter__ and __exit__ allow use as a context manager.
@abc.abstractmethod
def __enter__(self):
"""Open the image for reading."""
@abc.abstractmethod
def __exit__(self, unused_type, unused_value, unused_traceback):
"""Close the image."""
def __str__(self):
"""A human-readable representation of the image."""
return str(type(self))
class FromRegistry(DockerImage):
"""This accesses a docker image hosted on a registry (non-local)."""
def __init__(self, name,
basic_creds,
transport):
super().__init__()
self._name = name
self._creds = basic_creds
self._original_transport = transport
self._response = {}
def _content(self, suffix, cache=True):
"""Fetches content of the resources from registry by http calls."""
if isinstance(self._name, docker_name.Repository):
suffix = '{repository}/{suffix}'.format(
repository=self._name.repository, suffix=suffix)
if suffix in self._response:
return self._response[suffix]
_, content = self._transport.Request(
'{scheme}://{registry}/v2/{suffix}'.format(
scheme=docker_http.Scheme(self._name.registry),
registry=self._name.registry,
suffix=suffix),
accepted_codes=[six.moves.http_client.OK])
if cache:
self._response[suffix] = content
return content
def _tags(self):
# See //cloud/containers/registry/proto/v2/tags.proto
# for the full response structure.
return json.loads(self._content('tags/list').decode('utf8'))
def tags(self):
return self._tags().get('tags', [])
def digest(self):
"""The digest of the manifest."""
if isinstance(self._name, docker_name.Digest):
return self._name.digest
return super().digest()
def manifests(self):
payload = self._tags()
if 'manifest' not in payload:
# Only GCR supports this schema.
return {}
return payload['manifest']
def children(self):
payload = self._tags()
if 'child' not in payload:
# Only GCR supports this schema.
return []
return payload['child']
def exists(self):
try:
self.manifest(validate=False)
return True
except docker_http.V2DiagnosticException as err:
if err.status == six.moves.http_client.NOT_FOUND:
return False
raise
def manifest(self, validate=True):
"""Override."""
# GET server1/v2/<name>/manifests/<tag_or_digest>
if isinstance(self._name, docker_name.Tag):
return self._content('manifests/' + self._name.tag).decode('utf8')
else:
assert isinstance(self._name, docker_name.Digest)
c = self._content('manifests/' + self._name.digest).decode('utf8')
# v2 removes signatures before computing the manifest digest; this is hard.
computed = docker_digest.SignedManifestToSHA256(c)
if validate and computed != self._name.digest:
raise DigestMismatchedError(
'The returned manifest\'s digest did not match requested digest, '
'%s vs. %s' % (self._name.digest, computed))
return c
def blob_size(self, digest):
"""The byte size of the raw blob."""
suffix = 'blobs/' + digest
if isinstance(self._name, docker_name.Repository):
suffix = '{repository}/{suffix}'.format(
repository=self._name.repository, suffix=suffix)
resp, unused_content = self._transport.Request(
'{scheme}://{registry}/v2/{suffix}'.format(
scheme=docker_http.Scheme(self._name.registry),
registry=self._name.registry,
suffix=suffix),
method='HEAD',
accepted_codes=[six.moves.http_client.OK])
return int(resp['content-length'])
# Large, do not memoize.
def blob(self, digest):
"""Override."""
# GET server1/v2/<name>/blobs/<digest>
c = self._content('blobs/' + digest, cache=False)
computed = docker_digest.SHA256(c)
if digest != computed:
raise DigestMismatchedError(
'The returned content\'s digest did not match its content-address, '
'%s vs. %s' % (digest, computed if c else '(content was empty)'))
return c
def catalog(self, page_size=100):
# TODO(user): Handle docker_name.Repository for /v2/<name>/_catalog
if isinstance(self._name, docker_name.Repository):
raise ValueError('Expected docker_name.Registry for "name"')
url = '{scheme}://{registry}/v2/_catalog?n={page_size}'.format(
scheme=docker_http.Scheme(self._name.registry),
registry=self._name.registry,
page_size=page_size)
for _, content in self._transport.PaginatedRequest(
url, accepted_codes=[six.moves.http_client.OK]):
wrapper_object = json.loads(content)
if 'repositories' not in wrapper_object:
raise docker_http.BadStateException(
'Malformed JSON response: %s' % content)
for repo in wrapper_object['repositories']:
# TODO(user): This should return docker_name.Repository instead.
yield repo
# __enter__ and __exit__ allow use as a context manager.
def __enter__(self):
# Create a v2 transport to use for making authenticated requests.
self._transport = docker_http.Transport(
self._name, self._creds, self._original_transport, docker_http.PULL)
return self
def __exit__(self, unused_type, unused_value, unused_traceback):
pass
def __str__(self):
return '<docker_image.FromRegistry name: {}>'.format(str(self._name))
def _in_whiteout_dir(fs, name):
while name:
dirname = os.path.dirname(name)
if name == dirname:
break
if fs.get(dirname):
return True
name = dirname
return False
_WHITEOUT_PREFIX = '.wh.'
def extract(image, tar):
"""Extract the final filesystem from the image into tar.
Args:
image: a docker image whose final filesystem to construct.
tar: the open tarfile into which we are writing the final filesystem.
"""
# Maps all of the files we have already added (and should never add again)
# to whether they are a tombstone or not.
fs = {}
# Walk the layers, topmost first and add files. If we've seen them in a
# higher layer then we skip them.
for layer in image.fs_layers():
buf = io.BytesIO(image.blob(layer))
with tarfile.open(mode='r:gz', fileobj=buf) as layer_tar:
for member in layer_tar.getmembers():
# If we see a whiteout file, then don't add anything to the tarball
# but ensure that any lower layers don't add a file with the whited
# out name.
basename = os.path.basename(member.name)
dirname = os.path.dirname(member.name)
tombstone = basename.startswith(_WHITEOUT_PREFIX)
if tombstone:
basename = basename[len(_WHITEOUT_PREFIX):]
# Before adding a file, check to see whether it (or its whiteout) has
# been seen before.
name = os.path.normpath(os.path.join('.', dirname, basename))
if name in fs:
continue
# Check for a whited out parent directory
if _in_whiteout_dir(fs, name):
continue
# Mark this file as handled by adding its name.
# A non-directory implicitly tombstones any entries with
# a matching (or child) name.
fs[name] = tombstone or not member.isdir()
if not tombstone:
if member.isfile():
tar.addfile(member, fileobj=layer_tar.extractfile(member.name))
else:
tar.addfile(member, fileobj=None)

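The whiteout handling in `extract` above hinges on `_in_whiteout_dir`: once a directory has been tombstoned (or replaced by a non-directory) in a higher layer, nothing beneath it from lower layers may be added. A self-contained sketch of that parent-walk, with the function name adapted for standalone use:

```python
import os


def in_whiteout_dir(fs, name):
    # Walk up the directory chain of `name`. A truthy entry in `fs` marks
    # a tombstoned (or non-directory) ancestor, which shadows `name`.
    while name:
        dirname = os.path.dirname(name)
        if name == dirname:
            break  # reached the root; no ancestor shadows this path
        if fs.get(dirname):
            return True
        name = dirname
    return False


# Suppose a higher layer contained '.wh.app', tombstoning './app':
fs = {'./app': True}
print(in_whiteout_dir(fs, './app/config.yml'))  # True  (shadowed)
print(in_whiteout_dir(fs, './etc/passwd'))      # False (untouched subtree)
```

This is why `extract` records `fs[name] = tombstone or not member.isdir()`: a plain file at some path implicitly whites out any same-named directory tree in lower layers.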
View File

@@ -0,0 +1,335 @@
# Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This package manages pushes to and deletes from a v2 docker registry."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import logging
import concurrent.futures
from containerregistry.client import docker_creds
from containerregistry.client import docker_name
from containerregistry.client.v2 import docker_http
from containerregistry.client.v2 import docker_image
import httplib2
import six.moves.http_client
import six.moves.urllib.parse
def _tag_or_digest(name):
if isinstance(name, docker_name.Tag):
return name.tag
else:
assert isinstance(name, docker_name.Digest)
return name.digest
class Push(object):
"""Push encapsulates a Registry v2 Docker push session."""
def __init__(self,
name,
creds,
transport,
mount=None,
threads=1):
"""Constructor.
If multiple threads are used, the caller *must* ensure that the provided
transport is thread-safe, as well as the image that is being uploaded.
It is notable that tarfile and httplib2.Http in Python are NOT thread-safe.
Args:
name: the fully-qualified name of the tag to push
creds: provider for authorizing requests
transport: the http transport to use for sending requests
mount: list of repos from which to mount blobs.
threads: the number of threads to use for uploads.
Raises:
ValueError: an incorrectly typed argument was supplied.
"""
self._name = name
self._transport = docker_http.Transport(name, creds, transport,
docker_http.PUSH)
self._mount = mount
self._threads = threads
def name(self):
return self._name
def _scheme_and_host(self):
return '{scheme}://{registry}'.format(
scheme=docker_http.Scheme(self._name.registry),
registry=self._name.registry)
def _base_url(self):
return self._scheme_and_host() + '/v2/{repository}'.format(
repository=self._name.repository)
def _get_absolute_url(self, location):
# If 'location' is an absolute URL (includes host), this will be a no-op.
return six.moves.urllib.parse.urljoin(
base=self._scheme_and_host(), url=location)
def blob_exists(self, digest):
"""Check the remote for the given layer."""
# HEAD the blob, and check for a 200
resp, unused_content = self._transport.Request(
'{base_url}/blobs/{digest}'.format(
base_url=self._base_url(), digest=digest),
method='HEAD',
accepted_codes=[
six.moves.http_client.OK, six.moves.http_client.NOT_FOUND
])
return resp.status == six.moves.http_client.OK # pytype: disable=attribute-error
def manifest_exists(self, image):
"""Check the remote for the given manifest by digest."""
# GET the manifest by digest, and check for 200
resp, unused_content = self._transport.Request(
'{base_url}/manifests/{digest}'.format(
base_url=self._base_url(), digest=image.digest()),
method='GET',
accepted_codes=[
six.moves.http_client.OK, six.moves.http_client.NOT_FOUND
])
return resp.status == six.moves.http_client.OK # pytype: disable=attribute-error
def _monolithic_upload(self, image,
digest):
self._transport.Request(
'{base_url}/blobs/uploads/?digest={digest}'.format(
base_url=self._base_url(), digest=digest),
method='POST',
body=image.blob(digest),
accepted_codes=[six.moves.http_client.CREATED])
def _add_digest(self, url, digest):
scheme, netloc, path, query_string, fragment = (
six.moves.urllib.parse.urlsplit(url))
qs = six.moves.urllib.parse.parse_qs(query_string)
qs['digest'] = [digest]
query_string = six.moves.urllib.parse.urlencode(qs, doseq=True)
return six.moves.urllib.parse.urlunsplit((scheme, netloc, path, # pytype: disable=bad-return-type
query_string, fragment))
def _put_upload(self, image, digest):
mounted, location = self._start_upload(digest, self._mount)
if mounted:
logging.info('Layer %s mounted.', digest)
return
location = self._add_digest(location, digest)
self._transport.Request(
location,
method='PUT',
body=image.blob(digest),
accepted_codes=[six.moves.http_client.CREATED])
# pylint: disable=missing-docstring
def patch_upload(self, source,
digest):
mounted, location = self._start_upload(digest, self._mount)
if mounted:
logging.info('Layer %s mounted.', digest)
return
location = self._get_absolute_url(location)
blob = source
if isinstance(source, docker_image.DockerImage):
blob = source.blob(digest)
resp, unused_content = self._transport.Request(
location,
method='PATCH',
body=blob,
content_type='application/octet-stream',
accepted_codes=[
six.moves.http_client.NO_CONTENT, six.moves.http_client.ACCEPTED,
six.moves.http_client.CREATED
])
location = self._add_digest(resp['location'], digest)
location = self._get_absolute_url(location)
self._transport.Request(
location,
method='PUT',
body=None,
accepted_codes=[six.moves.http_client.CREATED])
def _put_blob(self, image, digest):
"""Upload the aufs .tgz for a single layer."""
# We have a few choices for unchunked uploading:
# POST to /v2/<name>/blobs/uploads/?digest=<digest>
# Fastest, but not supported by many registries.
# self._monolithic_upload(image, digest)
#
# or:
# POST /v2/<name>/blobs/uploads/ (no body*)
# PUT /v2/<name>/blobs/uploads/<uuid> (full body)
# Next fastest, but there is a mysterious bad interaction
# with Bintray. This pattern also hasn't been used in
# clients since 1.8, when they switched to the 3-stage
# method below.
# self._put_upload(image, digest)
# or:
# POST /v2/<name>/blobs/uploads/ (no body*)
# PATCH /v2/<name>/blobs/uploads/<uuid> (full body)
# PUT /v2/<name>/blobs/uploads/<uuid> (no body)
#
# * We attempt to perform a cross-repo mount if any repositories are
# specified in the "mount" parameter. This does a fast copy from a
# repository that is known to contain this blob and skips the upload.
self.patch_upload(image, digest)
def _remote_tag_digest(self):
"""Check the remote for the given manifest by digest."""
# GET the tag we're pushing
resp, unused_content = self._transport.Request(
'{base_url}/manifests/{tag}'.format(
base_url=self._base_url(),
tag=self._name.tag), # pytype: disable=attribute-error
method='GET',
accepted_codes=[
six.moves.http_client.OK, six.moves.http_client.NOT_FOUND
])
if resp.status == six.moves.http_client.NOT_FOUND: # pytype: disable=attribute-error
return None
return resp.get('docker-content-digest')
def put_manifest(self, image):
"""Upload the manifest for this image."""
self._transport.Request(
'{base_url}/manifests/{tag_or_digest}'.format(
base_url=self._base_url(),
tag_or_digest=_tag_or_digest(self._name)),
method='PUT',
body=image.manifest().encode('utf8'),
accepted_codes=[
six.moves.http_client.OK, six.moves.http_client.CREATED,
six.moves.http_client.ACCEPTED
])
def _start_upload(self,
digest,
mount=None
):
"""POST to begin the upload process with optional cross-repo mount param."""
if not mount:
# Do a normal POST to initiate an upload if mount is missing.
url = '{base_url}/blobs/uploads/'.format(base_url=self._base_url())
accepted_codes = [six.moves.http_client.ACCEPTED]
else:
# If we have a mount parameter, try to mount the blob from another repo.
mount_from = '&'.join([
'from=' + six.moves.urllib.parse.quote(repo.repository, '')
for repo in self._mount
])
url = '{base_url}/blobs/uploads/?mount={digest}&{mount_from}'.format(
base_url=self._base_url(), digest=digest, mount_from=mount_from)
accepted_codes = [
six.moves.http_client.CREATED, six.moves.http_client.ACCEPTED
]
resp, unused_content = self._transport.Request(
url, method='POST', body=None, accepted_codes=accepted_codes)
# pytype: disable=attribute-error,bad-return-type
return resp.status == six.moves.http_client.CREATED, resp.get('location')
# pytype: enable=attribute-error,bad-return-type
def _upload_one(self, image, digest):
"""Upload a single layer, after checking whether it exists already."""
if self.blob_exists(digest):
logging.info('Layer %s exists, skipping', digest)
return
self._put_blob(image, digest)
logging.info('Layer %s pushed.', digest)
def upload(self, image):
"""Upload the layers of the given image.
Args:
image: the image to upload.
"""
# If the manifest (by digest) exists, then avoid N layer existence
# checks (they must exist).
if self.manifest_exists(image):
if isinstance(self._name, docker_name.Tag):
if self._remote_tag_digest() == image.digest():
logging.info('Tag points to the right manifest, skipping push.')
return
logging.info('Manifest exists, skipping blob uploads and pushing tag.')
else:
logging.info('Manifest exists, skipping upload.')
elif self._threads == 1:
for digest in image.blob_set():
self._upload_one(image, digest)
else:
with concurrent.futures.ThreadPoolExecutor(
max_workers=self._threads) as executor:
future_to_params = {
executor.submit(self._upload_one, image, digest): (image, digest)
for digest in image.blob_set()
}
for future in concurrent.futures.as_completed(future_to_params):
future.result()
# This should complete the upload by uploading the manifest.
self.put_manifest(image)
# __enter__ and __exit__ allow use as a context manager.
def __enter__(self):
return self
def __exit__(self, exception_type, unused_value, unused_traceback):
if exception_type:
logging.error('Error during upload of: %s', self._name)
return
logging.info('Finished upload of: %s', self._name)
# pylint: disable=invalid-name
def Delete(name,
creds, transport):
"""Delete a tag or digest.
Args:
name: a tag or digest to be deleted.
creds: the credentials to use for deletion.
transport: the transport to use to contact the registry.
"""
docker_transport = docker_http.Transport(name, creds, transport,
docker_http.DELETE)
_, unused_content = docker_transport.Request(
'{scheme}://{registry}/v2/{repository}/manifests/{entity}'.format(
scheme=docker_http.Scheme(name.registry),
registry=name.registry,
repository=name.repository,
entity=_tag_or_digest(name)),
method='DELETE',
accepted_codes=[six.moves.http_client.OK, six.moves.http_client.ACCEPTED])

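`Push._add_digest` above performs a small but easy-to-get-wrong URL manipulation: appending (or overwriting) the `digest` query parameter on the upload location the registry returned, while preserving everything else. A standalone sketch using Python 3's `urllib.parse` directly instead of `six` (the URL and digest values are made up for illustration):

```python
from urllib.parse import parse_qs, urlencode, urlsplit, urlunsplit


def add_digest(url, digest):
    # Mirrors Push._add_digest: decompose the URL, set the 'digest' query
    # parameter, and reassemble without disturbing other parameters.
    scheme, netloc, path, query, fragment = urlsplit(url)
    qs = parse_qs(query)
    qs['digest'] = [digest]
    return urlunsplit((scheme, netloc, path, urlencode(qs, doseq=True), fragment))


print(add_digest(
    'https://gcr.io/v2/proj/img/blobs/uploads/abc?state=xyz',
    'sha256:deadbeef'))
```

Note that `urlencode` percent-encodes the `:` in the digest (as `%3A`), which registries accept; the registry-supplied `state` parameter survives untouched.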
View File

@@ -0,0 +1,141 @@
# Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This package holds a handful of utilities for manipulating manifests."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import base64
import json
import os
import subprocess
from containerregistry.client import docker_name
class BadManifestException(Exception):
"""Exception type raised when a malformed manifest is encountered."""
def _JoseBase64UrlDecode(message):
"""Perform a JOSE-style base64 decoding of the supplied message.
This is based on the docker/libtrust version of the similarly named
function found here:
https://github.com/docker/libtrust/blob/master/util.go
Args:
message: a JOSE-style base64 url-encoded message.
Raises:
BadManifestException: a malformed message was supplied.
Returns:
The decoded message.
"""
bytes_msg = message.encode('utf8')
l = len(bytes_msg)
if l % 4 == 0:
pass
elif l % 4 == 2:
bytes_msg += b'=='
elif l % 4 == 3:
bytes_msg += b'='
else:
raise BadManifestException('Malformed JOSE Base64 encoding.')
return base64.urlsafe_b64decode(bytes_msg).decode('utf8')
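The padding rule implemented above can be exercised standalone. This is a minimal sketch (the name `jose_b64_decode` is ours, not part of this module):

```python
import base64

def jose_b64_decode(message):
    # JOSE strips '=' padding; restore it so urlsafe_b64decode accepts it.
    pad = -len(message) % 4
    if pad == 3:
        # A length of 4k+1 can never occur in valid base64.
        raise ValueError('Malformed JOSE Base64 encoding.')
    return base64.urlsafe_b64decode(message + '=' * pad)

# 'eyJmb28iOiJiYXIifQ' is the unpadded encoding of b'{"foo":"bar"}'.
print(jose_b64_decode('eyJmb28iOiJiYXIifQ'))  # b'{"foo":"bar"}'
```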
def _ExtractProtectedRegion(signature):
"""Extract the length and encoded suffix denoting the protected region."""
protected = json.loads(_JoseBase64UrlDecode(signature['protected']))
return (protected['formatLength'], protected['formatTail'])
def _ExtractCommonProtectedRegion(
signatures):
"""Verify that the signatures agree on the protected region and return one."""
p = _ExtractProtectedRegion(signatures[0])
for sig in signatures[1:]:
if p != _ExtractProtectedRegion(sig):
raise BadManifestException('Signatures disagree on protected region')
return p
def DetachSignatures(manifest):
"""Detach the signatures from the signed manifest and return the two halves.
Args:
manifest: a signed JSON manifest.
Raises:
BadManifestException: the provided manifest was improperly signed.
Returns:
a pair consisting of the manifest with the signature removed and a list of
the removed signatures.
"""
# First, decode the manifest to extract the list of signatures.
json_manifest = json.loads(manifest)
# Next, extract the signatures that have signed a portion of the manifest.
signatures = json_manifest['signatures']
# Do some basic validation of the signature input.
if len(signatures) < 1:
raise BadManifestException('Expected a signed manifest.')
for sig in signatures:
if 'protected' not in sig:
raise BadManifestException('Signature is missing "protected" key')
# Establish the protected region and extract it from our original string.
(format_length, format_tail) = _ExtractCommonProtectedRegion(signatures)
suffix = _JoseBase64UrlDecode(format_tail)
unsigned_manifest = manifest[0:format_length] + suffix
return (unsigned_manifest, signatures)
def Sign(unsigned_manifest):
# TODO(user): Implement v2 signing in Python.
return unsigned_manifest
def _AttachSignatures(manifest,
signatures):
"""Attach the provided signatures to the provided naked manifest."""
(format_length, format_tail) = _ExtractCommonProtectedRegion(signatures)
prefix = manifest[0:format_length]
suffix = _JoseBase64UrlDecode(format_tail)
return '{prefix},"signatures":{signatures}{suffix}'.format(
prefix=prefix,
signatures=json.dumps(signatures, sort_keys=True),
suffix=suffix)
def Rename(manifest, name):
"""Rename this signed manifest to the provided name, and resign it."""
unsigned_manifest, unused_signatures = DetachSignatures(manifest)
json_manifest = json.loads(unsigned_manifest)
# Rewrite the name fields.
json_manifest['name'] = name.repository
json_manifest['tag'] = name.tag
# Reserialize the json to a string.
updated_unsigned_manifest = json.dumps(
json_manifest, sort_keys=True, indent=2)
# Sign the updated manifest
return Sign(updated_unsigned_manifest)
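The protected-region mechanics behind `_ExtractProtectedRegion` and `DetachSignatures` can be sketched end to end with a toy manifest; the manifest content below is invented for illustration:

```python
import base64
import json

def jose_encode(raw):
    # JOSE-style base64url without padding.
    return base64.urlsafe_b64encode(raw).rstrip(b'=').decode('utf8')

def jose_decode(message):
    return base64.urlsafe_b64decode(message + '=' * (-len(message) % 4))

manifest = '{"schemaVersion":1,"tag":"latest"}'
# A schema-1 signer records how much of the serialized manifest it covered:
# a prefix length plus the encoded suffix that closes the JSON object.
format_length = len(manifest) - 1
format_tail = jose_encode(manifest[format_length:].encode('utf8'))
protected = {'formatLength': format_length, 'formatTail': format_tail}
# Detaching reverses the process: prefix + decoded tail == original manifest.
recovered = (manifest[:protected['formatLength']] +
             jose_decode(protected['formatTail']).decode('utf8'))
assert recovered == manifest
```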


@@ -0,0 +1,188 @@
# Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This package provides compatibility interfaces for v1/v2."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import json
from containerregistry.client.v1 import docker_image as v1_image
from containerregistry.client.v2 import docker_digest
from containerregistry.client.v2 import docker_image as v2_image
from containerregistry.client.v2 import util
from six.moves import zip # pylint: disable=redefined-builtin
class V1FromV2(v1_image.DockerImage):
"""This compatibility interface serves the v1 interface from a v2 image."""
def __init__(self, v2_img):
"""Constructor.
Args:
v2_img: a v2 DockerImage on which __enter__ has already been called.
"""
self._v2_image = v2_img
self._ComputeLayerMapping()
def _ComputeLayerMapping(self):
"""Parse the v2 manifest and extract indices to efficiently answer v1 apis.
This reads the v2 manifest, correlating the v1 compatibility and v2 fsLayer
arrays and creating three indices for efficiently answering v1 queries:
self._v1_to_v2: dict, maps from v1 layer id to v2 digest
self._v1_json: dict, maps from v1 layer id to v1 json
self._v1_ancestry: list, the order of the v1 layers
"""
raw_manifest = self._v2_image.manifest()
manifest = json.loads(raw_manifest)
v2_ancestry = [fs_layer['blobSum'] for fs_layer in manifest['fsLayers']]
v1_jsons = [v1_layer['v1Compatibility'] for v1_layer in manifest['history']]
def ExtractId(v1_json):
v1_metadata = json.loads(v1_json)
return v1_metadata['id']
# Iterate once using the maps to deduplicate.
self._v1_to_v2 = {}
self._v1_json = {}
self._v1_ancestry = []
for (v1_json, v2_digest) in zip(v1_jsons, v2_ancestry):
v1_id = ExtractId(v1_json)
if v1_id in self._v1_to_v2:
assert self._v1_to_v2[v1_id] == v2_digest
assert self._v1_json[v1_id] == v1_json
continue
self._v1_to_v2[v1_id] = v2_digest
self._v1_json[v1_id] = v1_json
self._v1_ancestry.append(v1_id)
# Already effectively memoized.
def top(self):
"""Override."""
return self._v1_ancestry[0]
def repositories(self):
"""Override."""
# TODO(user): This is only used in v1-specific test code.
pass
def parent(self, layer_id):
"""Override."""
ancestry = self.ancestry(layer_id)
if len(ancestry) == 1:
return None
return ancestry[1]
# Already effectively memoized.
def json(self, layer_id):
"""Override."""
return self._v1_json.get(layer_id, '{}')
# Large, don't memoize
def uncompressed_layer(self, layer_id):
"""Override."""
v2_digest = self._v1_to_v2.get(layer_id)
return self._v2_image.uncompressed_blob(v2_digest)
# Large, don't memoize
def layer(self, layer_id):
"""Override."""
v2_digest = self._v1_to_v2.get(layer_id)
return self._v2_image.blob(v2_digest)
def diff_id(self, digest): # pytype: disable=signature-mismatch # overriding-return-type-checks
"""Override."""
return self._v2_image.diff_id(self._v1_to_v2.get(digest))
def ancestry(self, layer_id):
"""Override."""
index = self._v1_ancestry.index(layer_id)
return self._v1_ancestry[index:]
# __enter__ and __exit__ allow use as a context manager.
def __enter__(self):
return self
def __exit__(self, unused_type, unused_value, unused_traceback):
pass
class V2FromV1(v2_image.DockerImage):
"""This compatibility interface serves the v2 interface from a v1 image."""
def __init__(self, v1_img):
"""Constructor.
Args:
v1_img: a v1 DockerImage on which __enter__ has already been called.
Raises:
ValueError: an incorrectly typed argument was supplied.
"""
self._v1_image = v1_img
# Construct a manifest from the v1 image, including establishing mappings
# from v2 layer digests to v1 layer ids.
self._ProcessImage()
def _ProcessImage(self):
fs_layers = []
self._layer_map = {}
for layer_id in self._v1_image.ancestry(self._v1_image.top()):
blob = self._v1_image.layer(layer_id)
digest = docker_digest.SHA256(blob)
fs_layers += [{'blobSum': digest}]
self._layer_map[digest] = layer_id
self._manifest = util.Sign(
json.dumps(
{
'schemaVersion':
1,
'name':
'unused',
'tag':
'unused',
'architecture':
'amd64',
'fsLayers':
fs_layers,
'history': [{
'v1Compatibility': self._v1_image.json(layer_id)
} for layer_id in self._v1_image.ancestry(self._v1_image.top())
],
},
sort_keys=True))
def manifest(self):
"""Override."""
return self._manifest
def uncompressed_blob(self, digest):
"""Override."""
return self._v1_image.uncompressed_layer(self._layer_map[digest])
def blob(self, digest):
"""Override."""
return self._v1_image.layer(self._layer_map[digest])
# __enter__ and __exit__ allow use as a context manager.
def __enter__(self):
return self
def __exit__(self, unused_type, unused_value, unused_traceback):
pass
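The deduplicating correlation performed by `V1FromV2._ComputeLayerMapping` can be sketched with a toy schema-1-style manifest (all ids and digests below are invented):

```python
import json

manifest = {
    # fsLayers and history are parallel arrays, topmost layer first; v1
    # layer entries may repeat, so duplicates are collapsed while indexing.
    'fsLayers': [{'blobSum': 'sha256:bbb'},
                 {'blobSum': 'sha256:aaa'},
                 {'blobSum': 'sha256:aaa'}],
    'history': [{'v1Compatibility': '{"id": "child"}'},
                {'v1Compatibility': '{"id": "base"}'},
                {'v1Compatibility': '{"id": "base"}'}],
}
v1_to_v2 = {}
v1_ancestry = []
for history, fs_layer in zip(manifest['history'], manifest['fsLayers']):
    v1_id = json.loads(history['v1Compatibility'])['id']
    if v1_id in v1_to_v2:
        continue  # already recorded; repeated entries must agree
    v1_to_v2[v1_id] = fs_layer['blobSum']
    v1_ancestry.append(v1_id)
print(v1_ancestry)       # ['child', 'base']
print(v1_to_v2['base'])  # sha256:aaa
```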


@@ -0,0 +1,58 @@
# Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
x = sys.modules['containerregistry.client.v2_2']
from containerregistry.client.v2_2 import docker_creds_
setattr(x, 'docker_creds', docker_creds_)
from containerregistry.client.v2_2 import docker_digest_
setattr(x, 'docker_digest', docker_digest_)
from containerregistry.client.v2_2 import docker_http_
setattr(x, 'docker_http', docker_http_)
from containerregistry.client.v2_2 import docker_image_
setattr(x, 'docker_image', docker_image_)
from containerregistry.client.v2_2 import append_
setattr(x, 'append', append_)
from containerregistry.client.v2_2 import docker_image_list_
setattr(x, 'docker_image_list', docker_image_list_)
from containerregistry.client.v2_2 import oci_compat_
setattr(x, 'oci_compat', oci_compat_)
from containerregistry.client.v2_2 import v2_compat_
setattr(x, 'v2_compat', v2_compat_)
from containerregistry.client.v2_2 import docker_session_
setattr(x, 'docker_session', docker_session_)
from containerregistry.client.v2_2 import save_
setattr(x, 'save', save_)
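The pattern above publishes underscore-suffixed implementation modules under their public names on the package object. Stripped to its essentials, it looks like this (the package and module names here are invented):

```python
import sys
import types

# Stand in for a package the way an __init__.py finds itself registered.
pkg = types.ModuleType('examplepkg')
sys.modules['examplepkg'] = pkg

# Import-and-alias: expose the private module 'impl_' under the public
# attribute name 'impl' on the package.
impl_ = types.ModuleType('examplepkg.impl_')
setattr(pkg, 'impl', impl_)

assert sys.modules['examplepkg'].impl is impl_
```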


@@ -0,0 +1,108 @@
# Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This package provides tools for appending layers to docker images."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import json
from containerregistry.client import docker_name
from containerregistry.client.v2_2 import docker_digest
from containerregistry.client.v2_2 import docker_http
from containerregistry.client.v2_2 import docker_image
from containerregistry.transform.v2_2 import metadata
# _EMPTY_LAYER_TAR_ID is the sha256 of an empty tarball.
_EMPTY_LAYER_TAR_ID = 'sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4'
class Layer(docker_image.DockerImage):
"""Appends a new layer on top of a base image.
This augments a base docker image with new files from a gzipped tarball,
adds environment variables and exposes a port.
"""
def __init__(self,
base,
tar_gz,
diff_id = None,
overrides = None):
"""Creates a new layer on top of a base with optional tar.gz.
Args:
base: a base DockerImage for a new layer.
tar_gz: an optional gzipped tarball, passed as bytes, containing the
filesystem changeset.
diff_id: an optional string containing the digest of the
uncompressed tar_gz.
overrides: an optional metadata.Overrides object of properties to override
on the base image.
"""
self._base = base
manifest = json.loads(self._base.manifest())
config_file = json.loads(self._base.config_file())
overrides = overrides or metadata.Overrides()
overrides = overrides.Override(created_by=docker_name.USER_AGENT)
if tar_gz:
self._blob = tar_gz
self._blob_sum = docker_digest.SHA256(self._blob)
manifest['layers'].append({
'digest': self._blob_sum,
'mediaType': docker_http.LAYER_MIME,
'size': len(self._blob),
})
if not diff_id:
diff_id = docker_digest.SHA256(self.uncompressed_blob(self._blob_sum))
# Takes naked hex.
overrides = overrides.Override(layers=[diff_id[len('sha256:'):]])
else:
# The empty layer.
overrides = overrides.Override(layers=[docker_digest.SHA256(b'', '')])
config_file = metadata.Override(config_file, overrides)
self._config_file = json.dumps(config_file, sort_keys=True)
utf8_encoded_config = self._config_file.encode('utf8')
manifest['config']['digest'] = docker_digest.SHA256(utf8_encoded_config)
manifest['config']['size'] = len(utf8_encoded_config)
self._manifest = json.dumps(manifest, sort_keys=True)
def manifest(self):
"""Override."""
return self._manifest
def config_file(self):
"""Override."""
return self._config_file
def blob(self, digest):
"""Override."""
if digest == self._blob_sum:
return self._blob
return self._base.blob(digest)
# __enter__ and __exit__ allow use as a context manager.
def __enter__(self):
"""Override."""
return self
def __exit__(self, unused_type, unused_value, unused_traceback):
"""Override."""
return
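The bookkeeping done in the `Layer` constructor above reduces to two steps: record the new compressed blob in the manifest's layer list, then re-serialize the config and repoint the manifest at it. A sketch with placeholder bytes (nothing below is a real layer or config):

```python
import hashlib
import json

def sha256(content):
    return 'sha256:' + hashlib.sha256(content).hexdigest()

# A minimal schema-2 manifest skeleton; digests start out unset.
manifest = {'schemaVersion': 2, 'config': {'digest': '', 'size': 0},
            'layers': []}
tar_gz = b'pretend-gzipped-layer-bytes'

# 1) Record the new compressed layer by digest and size.
manifest['layers'].append({
    'digest': sha256(tar_gz),
    'mediaType': 'application/vnd.docker.image.rootfs.diff.tar.gzip',
    'size': len(tar_gz),
})
# 2) Re-serialize the (updated) config and point the manifest at it.
config_file = json.dumps({'rootfs': {'diff_ids': []}}, sort_keys=True)
utf8_config = config_file.encode('utf8')
manifest['config']['digest'] = sha256(utf8_config)
manifest['config']['size'] = len(utf8_config)

print(len(manifest['layers']))  # 1
```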


@@ -0,0 +1,32 @@
# Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This package exposes credentials for talking to a Docker registry."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from containerregistry.client import docker_creds
class Bearer(docker_creds.SchemeProvider):
"""Implementation for providing a transaction's Bearer token as creds."""
def __init__(self, bearer_token):
super(Bearer, self).__init__('Bearer')
self._bearer_token = bearer_token
@property
def suffix(self):
return self._bearer_token


@@ -0,0 +1,26 @@
# Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This package holds a handful of utilities for calculating digests."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import hashlib
def SHA256(content, prefix='sha256:'):
"""Return 'sha256:' + hex(sha256(content))."""
return prefix + hashlib.sha256(content).hexdigest()
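A quick check of the helper above against a known value (the name `sha256_hex` is ours; the behavior mirrors `SHA256`):

```python
import hashlib

def sha256_hex(content, prefix='sha256:'):
    # Mirrors SHA256() above: prefix + hex digest of the raw bytes.
    return prefix + hashlib.sha256(content).hexdigest()

# The digest of the empty byte string is a well-known constant.
assert sha256_hex(b'') == ('sha256:e3b0c44298fc1c149afbf4c8996fb9'
                           '2427ae41e4649b934ca495991b7852b855')
# prefix='' yields the "naked hex" form some call sites expect.
assert ':' not in sha256_hex(b'', '')
```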


@@ -0,0 +1,450 @@
# Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This package facilitates HTTP/REST requests to the registry."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import json
import re
import threading
from containerregistry.client import docker_creds
from containerregistry.client import docker_name
from containerregistry.client.v2_2 import docker_creds as v2_2_creds
import httplib2
import six.moves.http_client
import six.moves.urllib.parse
# Options for docker_http.Transport actions
PULL = 'pull'
PUSH = 'push,pull'
# For now DELETE is PUSH, which is the read/write ACL.
DELETE = PUSH
CATALOG = 'catalog'
ACTIONS = [PULL, PUSH, DELETE, CATALOG]
MANIFEST_SCHEMA1_MIME = 'application/vnd.docker.distribution.manifest.v1+json'
MANIFEST_SCHEMA1_SIGNED_MIME = 'application/vnd.docker.distribution.manifest.v1+prettyjws'  # pylint: disable=line-too-long
MANIFEST_SCHEMA2_MIME = 'application/vnd.docker.distribution.manifest.v2+json'
MANIFEST_LIST_MIME = 'application/vnd.docker.distribution.manifest.list.v2+json'
LAYER_MIME = 'application/vnd.docker.image.rootfs.diff.tar.gzip'
FOREIGN_LAYER_MIME = 'application/vnd.docker.image.rootfs.foreign.diff.tar.gzip'
CONFIG_JSON_MIME = 'application/vnd.docker.container.image.v1+json'
OCI_MANIFEST_MIME = 'application/vnd.oci.image.manifest.v1+json'
OCI_IMAGE_INDEX_MIME = 'application/vnd.oci.image.index.v1+json'
OCI_LAYER_MIME = 'application/vnd.oci.image.layer.v1.tar'
OCI_GZIP_LAYER_MIME = 'application/vnd.oci.image.layer.v1.tar+gzip'
OCI_NONDISTRIBUTABLE_LAYER_MIME = 'application/vnd.oci.image.layer.nondistributable.v1.tar'  # pylint: disable=line-too-long
OCI_NONDISTRIBUTABLE_GZIP_LAYER_MIME = 'application/vnd.oci.image.layer.nondistributable.v1.tar+gzip'  # pylint: disable=line-too-long
OCI_CONFIG_JSON_MIME = 'application/vnd.oci.image.config.v1+json'
MANIFEST_SCHEMA1_MIMES = [MANIFEST_SCHEMA1_MIME, MANIFEST_SCHEMA1_SIGNED_MIME]
MANIFEST_SCHEMA2_MIMES = [MANIFEST_SCHEMA2_MIME]
OCI_MANIFEST_MIMES = [OCI_MANIFEST_MIME]
# OCI and Schema2 are compatible formats.
SUPPORTED_MANIFEST_MIMES = [OCI_MANIFEST_MIME, MANIFEST_SCHEMA2_MIME]
# OCI Image Index and Manifest List are compatible formats.
MANIFEST_LIST_MIMES = [OCI_IMAGE_INDEX_MIME, MANIFEST_LIST_MIME]
# Docker & OCI layer mime types indicating foreign/non-distributable layers.
NON_DISTRIBUTABLE_LAYER_MIMES = [
FOREIGN_LAYER_MIME, OCI_NONDISTRIBUTABLE_LAYER_MIME,
OCI_NONDISTRIBUTABLE_GZIP_LAYER_MIME
]
class Diagnostic(object):
"""Diagnostic encapsulates a Registry v2 diagnostic message.
This captures one of the "errors" from a v2 Registry error response
message, as outlined here:
https://github.com/docker/distribution/blob/master/docs/spec/api.md#errors
Args:
error: the decoded JSON of the "errors" array element.
"""
def __init__(self, error):
self._error = error
def __eq__(self, other):
return (self.code == other.code and
self.message == other.message and
self.detail == other.detail)
@property
def code(self):
return self._error.get('code')
@property
def message(self):
return self._error.get('message')
@property
def detail(self):
return self._error.get('detail')
def _DiagnosticsFromContent(content):
"""Extract and return the diagnostics from content."""
try:
content = content.decode('utf8')
except: # pylint: disable=bare-except
# Assume it's already decoded. Defensive coding for old py2 habits that
# are hard to break. Passing does not make the problem worse.
pass
try:
o = json.loads(content)
return [Diagnostic(d) for d in o.get('errors', [])]
except: # pylint: disable=bare-except
return [Diagnostic({
'code': 'UNKNOWN',
'message': content,
})]
class V2DiagnosticException(Exception):
"""Exceptions when an unexpected HTTP status is returned."""
def __init__(self, resp, content):
self._resp = resp
self._diagnostics = _DiagnosticsFromContent(content)
message = '\n'.join(
['response: %s' % resp] +
['%s: %s' % (d.message, d.detail) for d in self._diagnostics])
super(V2DiagnosticException, self).__init__(message)
@property
def diagnostics(self):
return self._diagnostics
@property
def response(self):
return self._resp
@property
def status(self):
return self._resp.status
class BadStateException(Exception):
"""Exceptions when we have entered an unexpected state."""
class TokenRefreshException(BadStateException):
"""Exception when token refresh fails."""
def _CheckState(predicate, message = None):
if not predicate:
raise BadStateException(message if message else 'Unknown')
_ANONYMOUS = ''
_BASIC = 'Basic'
_BEARER = 'Bearer'
_REALM_PFX = 'realm='
_SERVICE_PFX = 'service='
class Transport(object):
"""HTTP Transport abstraction to handle automatic v2 reauthentication.
In the v2 Registry protocol, all of the API endpoints expect to receive
'Bearer' authentication. These Bearer tokens are generated by exchanging
'Basic' or 'Anonymous' authentication with an authentication endpoint
designated by the opening ping request.
The Bearer tokens are scoped to a resource (typically repository), and
are generated with a set of capabilities embedded (e.g. push, pull).
The Docker client has a baked-in 60-second expiration for Bearer tokens,
and upon expiration, registries can reject any request with a 401. The
transport should automatically refresh the Bearer token and reissue the
request.
Args:
name: the structured name of the docker resource being referenced.
creds: the basic authentication credentials to use for authentication
challenge exchanges.
transport: the HTTP transport to use under the hood.
action: One of docker_http.ACTIONS, for which we plan to use this transport
"""
def __init__(self, name,
creds,
transport, action):
self._name = name
self._basic_creds = creds
self._transport = transport
self._action = action
self._lock = threading.Lock()
_CheckState(action in ACTIONS,
'Invalid action supplied to docker_http.Transport: %s' % action)
# Ping once to establish realm, and then get a good credential
# for use with this transport.
self._Ping()
if self._authentication == _BEARER:
self._Refresh()
elif self._authentication == _BASIC:
self._creds = self._basic_creds
else:
self._creds = docker_creds.Anonymous()
def _Ping(self):
"""Ping the v2 Registry.
Only called during transport construction, this pings the listed
v2 registry. The point of this ping is to establish the "realm"
and "service" to use when exchanging Basic credentials for Bearer tokens.
"""
# This initiates the pull by issuing a v2 ping:
# GET H:P/v2/
headers = {
'content-type': 'application/json',
'user-agent': docker_name.USER_AGENT,
}
resp, content = self._transport.request(
'{scheme}://{registry}/v2/'.format(
scheme=Scheme(self._name.registry), registry=self._name.registry),
'GET',
body=None,
headers=headers)
# We expect a www-authenticate challenge.
_CheckState(
resp.status in [
six.moves.http_client.OK, six.moves.http_client.UNAUTHORIZED
], 'Unexpected response pinging the registry: {}\nBody: {}'.format(
resp.status, content or '<empty>'))
# The registry is authenticated iff we have an authentication challenge.
if resp.status == six.moves.http_client.OK:
self._authentication = _ANONYMOUS
self._service = 'none'
self._realm = 'none'
return
challenge = resp['www-authenticate']
_CheckState(' ' in challenge,
'Unexpected "www-authenticate" header form: %s' % challenge)
(self._authentication, remainder) = challenge.split(' ', 1)
# Normalize the authentication scheme to have exactly the first letter
# capitalized. Scheme matching is required to be case insensitive:
# https://tools.ietf.org/html/rfc7235#section-2.1
self._authentication = self._authentication.capitalize()
_CheckState(self._authentication in [_BASIC, _BEARER],
'Unexpected "www-authenticate" challenge type: %s' %
self._authentication)
# Default "_service" to the registry
self._service = self._name.registry
tokens = remainder.split(',')
for t in tokens:
if t.startswith(_REALM_PFX):
self._realm = t[len(_REALM_PFX):].strip('"')
elif t.startswith(_SERVICE_PFX):
self._service = t[len(_SERVICE_PFX):].strip('"')
# Make sure these got set.
_CheckState(self._realm, 'Expected a "%s" in "www-authenticate" '
'header: %s' % (_REALM_PFX, challenge))
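The challenge parsing performed by `_Ping` can be exercised standalone. A sketch (the name `parse_challenge` is ours, and the registry values are only examples):

```python
REALM_PFX = 'realm='
SERVICE_PFX = 'service='

def parse_challenge(header):
    # 'Bearer realm="...",service="..."' -> (scheme, realm, service)
    scheme, remainder = header.split(' ', 1)
    realm = service = None
    for token in remainder.split(','):
        token = token.strip()
        if token.startswith(REALM_PFX):
            realm = token[len(REALM_PFX):].strip('"')
        elif token.startswith(SERVICE_PFX):
            service = token[len(SERVICE_PFX):].strip('"')
    # RFC 7235: scheme matching is case-insensitive, so normalize it.
    return scheme.capitalize(), realm, service

print(parse_challenge(
    'bearer realm="https://auth.docker.io/token",service="registry.docker.io"'))
# ('Bearer', 'https://auth.docker.io/token', 'registry.docker.io')
```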
def _Scope(self):
"""Construct the resource scope to pass to a v2 auth endpoint."""
return self._name.scope(self._action)
def _Refresh(self):
"""Refreshes the Bearer token credentials underlying this transport.
This utilizes the "realm" and "service" established during _Ping to
set up _creds with up-to-date credentials, by passing the
client-provided _basic_creds to the authorization realm.
This is generally called under two circumstances:
1) When the transport is created (eagerly)
2) When a request fails on a 401 Unauthorized
Raises:
TokenRefreshException: Error during token exchange.
"""
headers = {
'content-type': 'application/json',
'user-agent': docker_name.USER_AGENT,
'Authorization': self._basic_creds.Get()
}
parameters = {
'scope': self._Scope(),
'service': self._service,
}
resp, content = self._transport.request(
# 'realm' includes scheme and path
'{realm}?{query}'.format(
realm=self._realm,
query=six.moves.urllib.parse.urlencode(parameters)),
'GET',
body=None,
headers=headers)
if resp.status != six.moves.http_client.OK:
raise TokenRefreshException('Bad status during token exchange: %d\n%s' %
(resp.status, content))
try:
content = content.decode('utf8')
except: # pylint: disable=bare-except
# Assume it's already decoded. Defensive coding for old py2 habits that
# are hard to break. Passing does not make the problem worse.
pass
wrapper_object = json.loads(content)
token = wrapper_object.get('token') or wrapper_object.get('access_token')
_CheckState(token is not None, 'Malformed JSON response: %s' % content)
with self._lock:
# We have successfully reauthenticated.
self._creds = v2_2_creds.Bearer(token)
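The token-exchange URL built by `_Refresh` is just the realm plus an encoded query. A sketch using the standard library (the realm, scope, and service values are examples, not live endpoints this code promises to work against):

```python
from urllib.parse import urlencode

# Values of this shape come from the ping challenge and _Scope().
realm = 'https://auth.docker.io/token'
parameters = {
    'scope': 'repository:library/ubuntu:pull',
    'service': 'registry.docker.io',
}
# 'realm' already includes scheme and path, so only the query is appended.
url = '{realm}?{query}'.format(realm=realm, query=urlencode(parameters))
print(url)
```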
# pylint: disable=invalid-name
def Request(self,
url,
accepted_codes = None,
method = None,
body = None,
content_type = None,
accepted_mimes = None
):
"""Wrapper containing much of the boilerplate REST logic for Registry calls.
Args:
url: the URL to which to talk
accepted_codes: the list of acceptable http status codes
method: the HTTP method to use (defaults to GET/PUT depending on
whether body is provided)
body: the body to pass into the PUT request (or None for GET)
content_type: the mime-type of the request (or None for JSON).
content_type is ignored when body is None.
accepted_mimes: the list of acceptable mime-types
Raises:
BadStateException: an unexpected internal state has been encountered.
V2DiagnosticException: an error has occurred interacting with v2.
Returns:
The response of the HTTP request, and its contents.
"""
if not method:
method = 'GET' if not body else 'PUT'
# If the first request fails on a 401 Unauthorized, then refresh the
# Bearer token and retry, if the authentication mode is bearer.
for retry_unauthorized in [self._authentication == _BEARER, False]:
# self._creds may be changed by self._Refresh(), so do
# not hoist this.
headers = {
'user-agent': docker_name.USER_AGENT,
}
auth = self._creds.Get()
if auth:
headers['Authorization'] = auth
if body: # Requests w/ bodies should have content-type.
headers['content-type'] = (
content_type if content_type else 'application/json')
if accepted_mimes is not None:
headers['Accept'] = ','.join(accepted_mimes)
# POST/PUT require a content-length even when no body is supplied.
if method in ('POST', 'PUT') and not body:
headers['content-length'] = '0'
resp, content = self._transport.request(
url, method, body=body, headers=headers)
if (retry_unauthorized and
resp.status == six.moves.http_client.UNAUTHORIZED):
# On Unauthorized, refresh the credential and retry.
self._Refresh()
continue
break
if resp.status not in accepted_codes:
# Use the content returned by GCR as the error message.
raise V2DiagnosticException(resp, content)
return resp, content
def PaginatedRequest(self,
url,
accepted_codes = None,
method = None,
body = None,
content_type = None
):
"""Wrapper around Request that follows Link headers if they exist.
Args:
url: the URL to which to talk
accepted_codes: the list of acceptable http status codes
method: the HTTP method to use (defaults to GET/PUT depending on
whether body is provided)
body: the body to pass into the PUT request (or None for GET)
content_type: the mime-type of the request (or None for JSON)
Yields:
The return value of calling Request for each page of results.
"""
next_page = url
while next_page:
resp, content = self.Request(next_page, accepted_codes, method, body,
content_type)
yield resp, content
next_page = ParseNextLinkHeader(resp)
def ParseNextLinkHeader(resp):
"""Returns "next" link from RFC 5988 Link header or None if not present."""
link = resp.get('link')
if not link:
return None
m = re.match(r'.*<(.+)>;\s*rel="next".*', link)
if not m:
return None
return m.group(1)
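The Link-header handling that drives `PaginatedRequest` can be checked in isolation. A sketch using the same RFC 5988 pattern (the catalog URL below is an invented example):

```python
import re

def parse_next_link(link_header):
    # Same pattern as ParseNextLinkHeader above.
    m = re.match(r'.*<(.+)>;\s*rel="next".*', link_header)
    return m.group(1) if m else None

link = '</v2/_catalog?last=lib%2Fubuntu&n=100>; rel="next"'
print(parse_next_link(link))  # /v2/_catalog?last=lib%2Fubuntu&n=100
print(parse_next_link('</v2/_catalog>; rel="prev"'))  # None
```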
def Scheme(endpoint):
"""Returns https scheme for all the endpoints except localhost."""
if endpoint.startswith('localhost:'):
return 'http'
elif re.match(r'.*\.local(?:host)?(?::\d{1,5})?$', endpoint):
return 'http'
else:
return 'https'


@@ -0,0 +1,893 @@
# Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This package provides DockerImage for examining docker_build outputs."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import abc
import gzip
import io
import json
import os
import tarfile
import threading
from containerregistry.client import docker_creds
from containerregistry.client import docker_name
from containerregistry.client.v2_2 import docker_digest
from containerregistry.client.v2_2 import docker_http
import httplib2
import six
from six.moves import zip # pylint: disable=redefined-builtin
import six.moves.http_client
class DigestMismatchedError(Exception):
"""Exception raised when a digest mismatch is encountered."""
class DockerImage(six.with_metaclass(abc.ABCMeta, object)):
"""Interface for implementations that interact with Docker images."""
def fs_layers(self):
"""The ordered collection of filesystem layers that comprise this image."""
manifest = json.loads(self.manifest())
return [x['digest'] for x in reversed(manifest['layers'])]
def diff_ids(self):
"""The ordered list of uncompressed layer hashes (matches fs_layers)."""
cfg = json.loads(self.config_file())
return list(reversed(cfg.get('rootfs', {}).get('diff_ids', [])))
def config_blob(self):
manifest = json.loads(self.manifest())
return manifest['config']['digest']
def blob_set(self):
"""The unique set of blobs that compose to create the filesystem."""
return set(self.fs_layers() + [self.config_blob()])
def distributable_blob_set(self):
"""The unique set of blobs which are distributable."""
manifest = json.loads(self.manifest())
distributable_blobs = {
x['digest']
for x in reversed(manifest['layers'])
if x['mediaType'] not in docker_http.NON_DISTRIBUTABLE_LAYER_MIMES
}
distributable_blobs.add(self.config_blob())
return distributable_blobs
def digest(self):
"""The digest of the manifest."""
return docker_digest.SHA256(self.manifest().encode('utf8'))
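For reference, `docker_digest.SHA256` produces the registry-style `sha256:<hex>` string over the exact manifest bytes. A minimal standalone sketch using only the standard library (the manifest payload here is a hypothetical stand-in, not a real image manifest):

```python
import hashlib
import json

# Hypothetical manifest payload; a real one comes from DockerImage.manifest().
manifest_json = json.dumps({'schemaVersion': 2, 'layers': []}, sort_keys=True)

# Registry digests are the sha256 of the serialized manifest bytes, prefixed
# with the algorithm name.
digest = 'sha256:' + hashlib.sha256(manifest_json.encode('utf8')).hexdigest()
```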
def media_type(self):
"""The media type of the manifest."""
manifest = json.loads(self.manifest())
# Since 'mediaType' is optional for OCI images, assume OCI if it's missing.
return manifest.get('mediaType', docker_http.OCI_MANIFEST_MIME)
# pytype: disable=bad-return-type
@abc.abstractmethod
def manifest(self):
"""The JSON manifest referenced by the tag/digest.
Returns:
The raw json manifest
"""
# pytype: enable=bad-return-type
# pytype: disable=bad-return-type
@abc.abstractmethod
def config_file(self):
"""The raw blob bytes of the config file."""
# pytype: enable=bad-return-type
def blob_size(self, digest):
"""The byte size of the raw blob."""
return len(self.blob(digest))
# pytype: disable=bad-return-type
@abc.abstractmethod
def blob(self, digest):
"""The raw blob of the layer.
Args:
digest: the 'algo:digest' of the layer being addressed.
Returns:
The raw blob bytes of the layer.
"""
# pytype: enable=bad-return-type
def uncompressed_blob(self, digest):
"""Same as blob() but uncompressed."""
zipped = self.blob(digest)
buf = io.BytesIO(zipped)
f = gzip.GzipFile(mode='rb', fileobj=buf)
unzipped = f.read()
return unzipped
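`uncompressed_blob()` is a plain gzip decompression of the raw blob; the compress/decompress pair can be sketched with the standard library alone (the payload is a stand-in for real layer tar bytes):

```python
import gzip
import io

payload = b'layer tar bytes'  # stand-in for a real .tar layer

# Compress roughly the way FromTarball does it: GzipFile over a BytesIO.
buf = io.BytesIO()
with gzip.GzipFile(mode='wb', fileobj=buf) as zipped:
    zipped.write(payload)
blob = buf.getvalue()

# Decompress the way uncompressed_blob() does.
unzipped = gzip.GzipFile(mode='rb', fileobj=io.BytesIO(blob)).read()
```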
def _diff_id_to_digest(self, diff_id):
for (this_digest, this_diff_id) in six.moves.zip(self.fs_layers(),
self.diff_ids()):
if this_diff_id == diff_id:
return this_digest
raise ValueError('Unmatched "diff_id": "%s"' % diff_id)
def digest_to_diff_id(self, digest):
for (this_digest, this_diff_id) in six.moves.zip(self.fs_layers(),
self.diff_ids()):
if this_digest == digest:
return this_diff_id
raise ValueError('Unmatched "digest": "%s"' % digest)
def layer(self, diff_id):
"""Like `blob()`, but accepts the `diff_id` instead.
The `diff_id` is the name for the digest of the uncompressed layer.
Args:
diff_id: the 'algo:digest' of the layer being addressed.
Returns:
The raw compressed blob bytes of the layer.
"""
return self.blob(self._diff_id_to_digest(diff_id))
def uncompressed_layer(self, diff_id):
"""Same as layer() but uncompressed."""
return self.uncompressed_blob(self._diff_id_to_digest(diff_id))
# __enter__ and __exit__ allow use as a context manager.
@abc.abstractmethod
def __enter__(self):
"""Open the image for reading."""
@abc.abstractmethod
def __exit__(self, unused_type, unused_value, unused_traceback):
"""Close the image."""
def __str__(self):
"""A human-readable representation of the image."""
return str(type(self))
class Delegate(DockerImage):
"""Forwards calls to the underlying image."""
def __init__(self, image):
"""Constructor.
Args:
image: a DockerImage on which __enter__ has already been called.
"""
    super(Delegate, self).__init__()
self._image = image
def manifest(self):
"""Override."""
return self._image.manifest()
def media_type(self):
"""Override."""
return self._image.media_type()
def diff_ids(self):
"""Override."""
return self._image.diff_ids()
def fs_layers(self):
"""Override."""
return self._image.fs_layers()
def config_blob(self):
"""Override."""
return self._image.config_blob()
def blob_set(self):
"""Override."""
return self._image.blob_set()
def config_file(self):
"""Override."""
return self._image.config_file()
def blob_size(self, digest):
"""Override."""
return self._image.blob_size(digest)
def blob(self, digest):
"""Override."""
return self._image.blob(digest)
def uncompressed_blob(self, digest):
"""Override."""
return self._image.uncompressed_blob(digest)
def layer(self, diff_id):
"""Override."""
return self._image.layer(diff_id)
def uncompressed_layer(self, diff_id):
"""Override."""
return self._image.uncompressed_layer(diff_id)
def __str__(self):
"""Override."""
return str(self._image)
class FromRegistry(DockerImage):
"""This accesses a docker image hosted on a registry (non-local)."""
def __init__(self,
name,
basic_creds,
transport,
accepted_mimes = docker_http.MANIFEST_SCHEMA2_MIMES):
    super(FromRegistry, self).__init__()
self._name = name
self._creds = basic_creds
self._original_transport = transport
self._accepted_mimes = accepted_mimes
self._response = {}
def _content(self,
suffix,
accepted_mimes = None,
cache = True):
"""Fetches content of the resources from registry by http calls."""
if isinstance(self._name, docker_name.Repository):
suffix = '{repository}/{suffix}'.format(
repository=self._name.repository, suffix=suffix)
if suffix in self._response:
return self._response[suffix]
_, content = self._transport.Request(
'{scheme}://{registry}/v2/{suffix}'.format(
scheme=docker_http.Scheme(self._name.registry),
registry=self._name.registry,
suffix=suffix),
accepted_codes=[six.moves.http_client.OK],
accepted_mimes=accepted_mimes)
if cache:
self._response[suffix] = content
return content
def check_usage_only(self):
# See //cloud/containers/registry/proto/v2/registry_usage.proto
# for the full response structure.
response = json.loads(
self._content('tags/list?check_usage_only=true').decode('utf8')
)
if 'usage' not in response:
raise docker_http.BadStateException(
'Malformed JSON response: {}. Missing "usage" field'.format(response)
)
return response.get('usage')
def _tags(self):
# See //cloud/containers/registry/proto/v2/tags.proto
# for the full response structure.
return json.loads(self._content('tags/list').decode('utf8'))
def tags(self):
return self._tags().get('tags', [])
def manifests(self):
payload = self._tags()
if 'manifest' not in payload:
# Only GCR supports this schema.
return {}
return payload['manifest']
def children(self):
payload = self._tags()
if 'child' not in payload:
# Only GCR supports this schema.
return []
return payload['child']
def exists(self):
try:
manifest = json.loads(self.manifest(validate=False))
return (manifest['schemaVersion'] == 2 and 'layers' in manifest and
self.media_type() in self._accepted_mimes)
except docker_http.V2DiagnosticException as err:
if err.status == six.moves.http_client.NOT_FOUND:
return False
raise
def digest(self):
"""The digest of the manifest."""
if isinstance(self._name, docker_name.Digest):
return self._name.digest
    return super(FromRegistry, self).digest()
def manifest(self, validate=True):
"""Override."""
# GET server1/v2/<name>/manifests/<tag_or_digest>
if isinstance(self._name, docker_name.Tag):
path = 'manifests/' + self._name.tag
return self._content(path, self._accepted_mimes).decode('utf8')
else:
assert isinstance(self._name, docker_name.Digest)
c = self._content('manifests/' + self._name.digest, self._accepted_mimes)
computed = docker_digest.SHA256(c)
if validate and computed != self._name.digest:
raise DigestMismatchedError(
'The returned manifest\'s digest did not match requested digest, '
'%s vs. %s' % (self._name.digest, computed))
return c.decode('utf8')
def config_file(self):
"""Override."""
return self.blob(self.config_blob()).decode('utf8')
def blob_size(self, digest):
"""The byte size of the raw blob."""
suffix = 'blobs/' + digest
if isinstance(self._name, docker_name.Repository):
suffix = '{repository}/{suffix}'.format(
repository=self._name.repository, suffix=suffix)
resp, unused_content = self._transport.Request(
'{scheme}://{registry}/v2/{suffix}'.format(
scheme=docker_http.Scheme(self._name.registry),
registry=self._name.registry,
suffix=suffix),
method='HEAD',
accepted_codes=[six.moves.http_client.OK])
return int(resp['content-length'])
# Large, do not memoize.
def blob(self, digest):
"""Override."""
# GET server1/v2/<name>/blobs/<digest>
c = self._content('blobs/' + digest, cache=False)
computed = docker_digest.SHA256(c)
if digest != computed:
raise DigestMismatchedError(
'The returned content\'s digest did not match its content-address, '
'%s vs. %s' % (digest, computed if c else '(content was empty)'))
return c
def catalog(self, page_size = 100):
# TODO(user): Handle docker_name.Repository for /v2/<name>/_catalog
if isinstance(self._name, docker_name.Repository):
raise ValueError('Expected docker_name.Registry for "name"')
url = '{scheme}://{registry}/v2/_catalog?n={page_size}'.format(
scheme=docker_http.Scheme(self._name.registry),
registry=self._name.registry,
page_size=page_size)
for _, content in self._transport.PaginatedRequest(
url, accepted_codes=[six.moves.http_client.OK]):
wrapper_object = json.loads(content.decode('utf8'))
if 'repositories' not in wrapper_object:
raise docker_http.BadStateException(
'Malformed JSON response: %s' % content)
# TODO(user): This should return docker_name.Repository
for repo in wrapper_object['repositories']:
yield repo
# __enter__ and __exit__ allow use as a context manager.
def __enter__(self):
# Create a v2 transport to use for making authenticated requests.
self._transport = docker_http.Transport(
self._name, self._creds, self._original_transport, docker_http.PULL)
return self
def __exit__(self, unused_type, unused_value, unused_traceback):
pass
def __str__(self):
return '<docker_image.FromRegistry name: {}>'.format(str(self._name))
# Gzip injects a timestamp into its output, which makes its output and digest
# non-deterministic. To get reproducible pushes, freeze time.
# This approach is based on the following StackOverflow answer:
# http://stackoverflow.com/questions/264224/setting-the-gzip-timestamp-from-python
class _FakeTime(object):
def time(self):
return 1225856967.109
gzip.time = _FakeTime()
# Checks a blob's leading bytes for the gzip magic number.
def is_compressed(contents):
  return contents[0:2] == b'\x1f\x8b'
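The two bytes checked above are the gzip magic number, and the `_FakeTime` patch exists because gzip embeds an mtime in its header, which would otherwise make digests non-reproducible. On Python 3 the same determinism can be obtained without monkey-patching via `GzipFile`'s `mtime` argument (a standalone sketch, independent of this module):

```python
import gzip
import io

def deterministic_gzip(data):
    # mtime=0 pins the 4-byte timestamp field in the gzip header, so
    # identical input always yields identical output bytes.
    buf = io.BytesIO()
    with gzip.GzipFile(mode='wb', fileobj=buf, mtime=0) as f:
        f.write(data)
    return buf.getvalue()

out1 = deterministic_gzip(b'hello')
out2 = deterministic_gzip(b'hello')
```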
class FromTarball(DockerImage):
"""This decodes the image tarball output of docker_build for upload."""
def __init__(
self,
tarball,
name = None,
compresslevel = 9,
):
    super(FromTarball, self).__init__()
self._tarball = tarball
self._compresslevel = compresslevel
self._memoize = {}
self._lock = threading.Lock()
self._name = name
self._manifest = None
self._blob_names = None
self._config_blob = None
# Layers can come in two forms, as an uncompressed tar in a directory
# or as a gzipped tar. We need to account for both options, and be able
# to return both uncompressed and compressed data.
def _content(self,
name,
memoize = True,
should_be_compressed = False):
"""Fetches a particular path's contents from the tarball."""
# Check our cache
if memoize:
with self._lock:
if (name, should_be_compressed) in self._memoize:
return self._memoize[(name, should_be_compressed)]
# tarfile is inherently single-threaded:
# https://mail.python.org/pipermail/python-bugs-list/2015-March/265999.html
# so instead of locking, just open the tarfile for each file
# we want to read.
with tarfile.open(name=self._tarball, mode='r') as tar:
try:
# If the layer is compressed and we need to return compressed
# or if it's uncompressed and we need to return uncompressed
# then return the contents as is.
f = tar.extractfile(str(name))
content = f.read() # pytype: disable=attribute-error
except KeyError:
content = tar.extractfile(
str('./' + name)).read() # pytype: disable=attribute-error
# We need to compress before returning. Use gzip.
if should_be_compressed and not is_compressed(content):
buf = io.BytesIO()
zipped = gzip.GzipFile(
mode='wb', compresslevel=self._compresslevel, fileobj=buf)
try:
zipped.write(content)
finally:
zipped.close()
content = buf.getvalue()
# The layer is gzipped but we need to return the uncompressed content
# Open up the gzip and read the contents after.
elif not should_be_compressed and is_compressed(content):
buf = io.BytesIO(content)
raw = gzip.GzipFile(mode='rb', fileobj=buf)
content = raw.read()
# Populate our cache.
if memoize:
with self._lock:
self._memoize[(name, should_be_compressed)] = content
return content
def _gzipped_content(self, name):
"""Returns the result of _content with gzip applied."""
return self._content(name, memoize=False, should_be_compressed=True)
def _populate_manifest_and_blobs(self):
"""Populates self._manifest and self._blob_names."""
config_blob = docker_digest.SHA256(self.config_file().encode('utf8'))
manifest = {
'mediaType': docker_http.MANIFEST_SCHEMA2_MIME,
'schemaVersion': 2,
'config': {
'digest': config_blob,
'mediaType': docker_http.CONFIG_JSON_MIME,
'size': len(self.config_file())
},
'layers': [
# Populated below
]
}
blob_names = {}
config = json.loads(self.config_file())
diff_ids = config['rootfs']['diff_ids']
for i, layer in enumerate(self._layers):
name = None
diff_id = diff_ids[i]
media_type = docker_http.LAYER_MIME
size = 0
urls = []
if diff_id in self._layer_sources:
# _layer_sources contains foreign layers from the base image
name = self._layer_sources[diff_id]['digest']
media_type = self._layer_sources[diff_id]['mediaType']
size = self._layer_sources[diff_id]['size']
if 'urls' in self._layer_sources[diff_id]:
urls = self._layer_sources[diff_id]['urls']
else:
content = self._gzipped_content(layer)
name = docker_digest.SHA256(content)
size = len(content)
blob_names[name] = layer
layer_manifest = {
'digest': name,
'mediaType': media_type,
'size': size,
}
if urls:
layer_manifest['urls'] = urls
manifest['layers'].append(layer_manifest)
with self._lock:
self._manifest = manifest
self._blob_names = blob_names
self._config_blob = config_blob
def manifest(self):
"""Override."""
if not self._manifest:
self._populate_manifest_and_blobs()
return json.dumps(self._manifest, sort_keys=True)
def config_file(self):
"""Override."""
return self._content(self._config_file).decode('utf8')
# Could be large, do not memoize
def uncompressed_blob(self, digest):
"""Override."""
if not self._blob_names:
self._populate_manifest_and_blobs()
assert self._blob_names is not None
return self._content(
self._blob_names[digest],
memoize=False,
should_be_compressed=False)
# Could be large, do not memoize
def blob(self, digest):
"""Override."""
if not self._blob_names:
self._populate_manifest_and_blobs()
if digest == self._config_blob:
return self.config_file().encode('utf8')
assert self._blob_names is not None
return self._gzipped_content(
self._blob_names[digest])
# Could be large, do not memoize
def uncompressed_layer(self, diff_id):
"""Override."""
for (layer, this_diff_id) in zip(reversed(self._layers), self.diff_ids()):
if diff_id == this_diff_id:
return self._content(layer, memoize=False, should_be_compressed=False)
raise ValueError('Unmatched "diff_id": "%s"' % diff_id)
def _resolve_tag(self):
"""Resolve the singleton tag this tarball contains using legacy methods."""
repo_bytes = self._content('repositories', memoize=False)
repositories = json.loads(repo_bytes.decode('utf8'))
if len(repositories) != 1:
raise ValueError('Tarball must contain a single repository, '
'or a name must be specified to FromTarball.')
for (repo, tags) in six.iteritems(repositories):
if len(tags) != 1:
raise ValueError('Tarball must contain a single tag, '
'or a name must be specified to FromTarball.')
for (tag, unused_layer) in six.iteritems(tags):
return '{repository}:{tag}'.format(repository=repo, tag=tag)
raise Exception('unreachable')
# __enter__ and __exit__ allow use as a context manager.
def __enter__(self):
manifest_json = self._content('manifest.json').decode('utf8')
manifest_list = json.loads(manifest_json)
config = None
layers = []
layer_sources = []
# Find the right entry, either:
# 1) We were supplied with an image name, which we must find in an entry's
# RepoTags, or
# 2) We were not supplied with an image name, and this must have a single
# image defined.
if len(manifest_list) != 1:
if not self._name:
# If we run into this situation, fall back on the legacy repositories
# file to tell us the single tag. We do this because Bazel will apply
# build targets as labels, so each layer will be labelled, but only
# the final label will appear in the resulting repositories file.
self._name = self._resolve_tag()
for entry in manifest_list:
if not self._name or str(self._name) in (entry.get('RepoTags') or []):
config = entry.get('Config')
layers = entry.get('Layers', [])
layer_sources = entry.get('LayerSources', {})
if not config:
raise ValueError('Unable to find %s in provided tarball.' % self._name)
# Metadata from the tarball's configuration we need to construct the image.
self._config_file = config
self._layers = layers
self._layer_sources = layer_sources
# We populate "manifest" and "blobs" lazily for two reasons:
# 1) Allow use of this library for reading the config_file() from the image
# layer shards Bazel produces.
# 2) Performance of the case where all we read is the config_file().
return self
def __exit__(self, unused_type, unused_value, unused_traceback):
pass
class FromDisk(DockerImage):
"""This accesses a more efficient on-disk format than FromTarball.
FromDisk reads an on-disk format optimized for use with push and pull.
It is expected that the number of layers in config_file's rootfs.diff_ids
matches: count(legacy_base.layers) + len(layers).
Layers are drawn from legacy_base first (it is expected to be the base),
and then from layers.
This is effectively the dual of the save.fast method, and is intended for use
with Bazel's rules_docker.
Args:
config_file: the contents of the config file.
layers: a list of pairs. The first element is the path to a file containing
the second element's sha256. The second element is the .tar.gz of a
filesystem layer. These are ordered as they'd appear in the manifest.
uncompressed_layers: Optionally, a list of pairs. The first element is the
path to a file containing the second element's sha256.
The second element is the .tar of a filesystem layer.
legacy_base: Optionally, the path to a legacy base image in FromTarball form
foreign_layers_manifest: Optionally a tar manifest from the base
image that describes the ForeignLayers needed by this image.
"""
def __init__(self,
config_file,
layers,
uncompressed_layers = None,
legacy_base = None,
foreign_layers_manifest = None):
    super(FromDisk, self).__init__()
self._config = config_file
self._manifest = None
self._foreign_layers_manifest = foreign_layers_manifest
self._layers = []
self._layer_to_filename = {}
for (name_file, content_file) in layers:
with io.open(name_file, u'r') as reader:
layer_name = 'sha256:' + reader.read()
self._layers.append(layer_name)
self._layer_to_filename[layer_name] = content_file
self._uncompressed_layers = []
self._uncompressed_layer_to_filename = {}
if uncompressed_layers:
for (name_file, content_file) in uncompressed_layers:
with io.open(name_file, u'r') as reader:
layer_name = 'sha256:' + reader.read()
self._uncompressed_layers.append(layer_name)
self._uncompressed_layer_to_filename[layer_name] = content_file
self._legacy_base = None
if legacy_base:
with FromTarball(legacy_base) as base:
self._legacy_base = base
def _get_foreign_layers(self):
foreign_layers = []
if self._foreign_layers_manifest:
manifest = json.loads(self._foreign_layers_manifest)
if 'layers' in manifest:
for layer in manifest['layers']:
if layer['mediaType'] == docker_http.FOREIGN_LAYER_MIME:
foreign_layers.append(layer)
return foreign_layers
def _get_foreign_layer_by_digest(self, digest):
for foreign_layer in self._get_foreign_layers():
if foreign_layer['digest'] == digest:
return foreign_layer
return None
def _populate_manifest(self):
base_layers = []
if self._legacy_base:
base_layers = json.loads(self._legacy_base.manifest())['layers']
elif self._foreign_layers_manifest:
# Manifest files found in tar files are actually a json list.
# This code iterates through that collection and appends any foreign
# layers described in the order found in the config file.
base_layers += self._get_foreign_layers()
# TODO(user): Update mimes here for oci_compat.
    self._manifest = json.dumps(
        {
            'schemaVersion': 2,
            'mediaType': docker_http.MANIFEST_SCHEMA2_MIME,
            'config': {
                'mediaType': docker_http.CONFIG_JSON_MIME,
                'size': len(self.config_file()),
                'digest': docker_digest.SHA256(self.config_file().encode('utf8'))
            },
            'layers': base_layers + [{
                'mediaType': docker_http.LAYER_MIME,
                'size': self.blob_size(digest),
                'digest': digest
            } for digest in self._layers]
        },
        sort_keys=True)
def manifest(self):
"""Override."""
if not self._manifest:
self._populate_manifest()
assert self._manifest is not None
return self._manifest
def config_file(self):
"""Override."""
return self._config
# Could be large, do not memoize
def uncompressed_blob(self, digest):
"""Override."""
if digest not in self._layer_to_filename:
if self._get_foreign_layer_by_digest(digest):
return bytes([])
else:
# Leverage the FromTarball fast-path.
return self._checked_legacy_base.uncompressed_blob(digest)
return super(FromDisk, self).uncompressed_blob(digest)
def uncompressed_layer(self, diff_id):
if diff_id in self._uncompressed_layer_to_filename:
with io.open(
self._uncompressed_layer_to_filename[diff_id], 'rb'
) as reader:
# TODO(b/118349036): Remove the disable once the pytype bug is fixed.
return reader.read() # pytype: disable=bad-return-type
if self._legacy_base and diff_id in self._legacy_base.diff_ids():
return self._legacy_base.uncompressed_layer(diff_id)
return super(FromDisk, self).uncompressed_layer(diff_id)
# Could be large, do not memoize
def blob(self, digest):
"""Override."""
if digest not in self._layer_to_filename:
return self._checked_legacy_base.blob(digest)
with open(self._layer_to_filename[digest], 'rb') as reader:
return reader.read()
def blob_size(self, digest):
"""Override."""
if digest not in self._layer_to_filename:
return self._checked_legacy_base.blob_size(digest)
info = os.stat(self._layer_to_filename[digest])
return info.st_size
# __enter__ and __exit__ allow use as a context manager.
def __enter__(self):
return self
def __exit__(self, unused_type, unused_value, unused_traceback):
pass
@property
def _checked_legacy_base(self):
if self._legacy_base is None:
raise ValueError(
'self._legacy_base is None. set legacy_base in constructor.'
)
return self._legacy_base
def _in_whiteout_dir(fs, name):
while name:
dirname = os.path.dirname(name)
if name == dirname:
break
if fs.get(dirname):
return True
name = dirname
return False
_WHITEOUT_PREFIX = '.wh.'
def extract(image, tar):
"""Extract the final filesystem from the image into tar.
Args:
image: a docker image whose final filesystem to construct.
tar: the tarfile into which we are writing the final filesystem.
"""
# Maps all of the files we have already added (and should never add again)
# to whether they are a tombstone or not.
fs = {}
# Walk the layers, topmost first and add files. If we've seen them in a
# higher layer then we skip them
for layer in image.diff_ids():
buf = io.BytesIO(image.uncompressed_layer(layer))
with tarfile.open(mode='r:', fileobj=buf) as layer_tar:
for tarinfo in layer_tar:
# If we see a whiteout file, then don't add anything to the tarball
# but ensure that any lower layers don't add a file with the whited
# out name.
basename = os.path.basename(tarinfo.name)
dirname = os.path.dirname(tarinfo.name)
tombstone = basename.startswith(_WHITEOUT_PREFIX)
if tombstone:
basename = basename[len(_WHITEOUT_PREFIX):]
# Before adding a file, check to see whether it (or its whiteout) have
# been seen before.
name = os.path.normpath(os.path.join('.', dirname, basename))
if name in fs:
continue
# Check for a whited out parent directory
if _in_whiteout_dir(fs, name):
continue
# Mark this file as handled by adding its name.
# A non-directory implicitly tombstones any entries with
# a matching (or child) name.
fs[name] = tombstone or not tarinfo.isdir()
if not tombstone:
if tarinfo.isfile():
tar.addfile(tarinfo, fileobj=layer_tar.extractfile(tarinfo))
else:
tar.addfile(tarinfo, fileobj=None)
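The whiteout handling in `extract()` can be modeled in isolation: a `.wh.<name>` entry in an upper layer hides `<name>` in all lower layers. A minimal standalone sketch over plain path lists (it omits the `_in_whiteout_dir` parent-directory check and directory semantics, and is not the tarfile-based implementation above):

```python
import os

_WHITEOUT_PREFIX = '.wh.'

def effective_files(layers_topmost_first):
    """Given per-layer file lists (topmost layer first), return survivors."""
    seen = {}    # normalized name -> already handled (file or tombstone)
    result = []
    for layer in layers_topmost_first:
        for entry in layer:
            basename = os.path.basename(entry)
            dirname = os.path.dirname(entry)
            tombstone = basename.startswith(_WHITEOUT_PREFIX)
            if tombstone:
                basename = basename[len(_WHITEOUT_PREFIX):]
            name = os.path.normpath(os.path.join('.', dirname, basename))
            if name in seen:
                continue  # shadowed (or whited out) by a higher layer
            seen[name] = True
            if not tombstone:
                result.append(name)
    return result

# The upper layer whites out etc/old.conf, hiding the lower layer's copy.
files = effective_files([
    ['etc/.wh.old.conf', 'etc/new.conf'],
    ['etc/old.conf', 'bin/sh'],
])
```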

# Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This package provides DockerImageList for examining Manifest Lists."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import abc
import json
from containerregistry.client import docker_creds
from containerregistry.client import docker_name
from containerregistry.client.v2_2 import docker_digest
from containerregistry.client.v2_2 import docker_http
from containerregistry.client.v2_2 import docker_image as v2_2_image
import httplib2
import six
import six.moves.http_client
class DigestMismatchedError(Exception):
"""Exception raised when a digest mismatch is encountered."""
class InvalidMediaTypeError(Exception):
"""Exception raised when an invalid media type is encountered."""
class Platform(object):
"""Represents runtime requirements for an image.
See: https://docs.docker.com/registry/spec/manifest-v2-2/#manifest-list
"""
def __init__(self, content = None):
self._content = content or {}
def architecture(self):
return self._content.get('architecture', 'amd64')
def os(self):
return self._content.get('os', 'linux')
def os_version(self):
return self._content.get('os.version')
def os_features(self):
return set(self._content.get('os.features', []))
def variant(self):
return self._content.get('variant')
def features(self):
return set(self._content.get('features', []))
def can_run(self, required):
"""Returns True if this platform can run the 'required' platform."""
if not required:
# Some images don't specify 'platform', assume they can always run.
return True
# Required fields.
if required.architecture() != self.architecture():
return False
if required.os() != self.os():
return False
# Optional fields.
if required.os_version() and required.os_version() != self.os_version():
return False
if required.variant() and required.variant() != self.variant():
return False
# Verify any required features are a subset of this platform's features.
if required.os_features() and not required.os_features().issubset(
self.os_features()):
return False
if required.features() and not required.features().issubset(
self.features()):
return False
return True
def compatible_with(self, target):
"""Returns True if this platform can run on the 'target' platform."""
return target.can_run(self)
def __iter__(self):
# Ensure architecture and os are set (for default platform).
self._content['architecture'] = self.architecture()
self._content['os'] = self.os()
return iter(six.iteritems(self._content))
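The `can_run` rules above reduce to: architecture and os must match exactly (defaulting to amd64/linux), optional fields must match only when the requirement specifies them, and required feature sets must be subsets of the host's. A condensed standalone sketch over plain platform dicts (a hypothetical helper, not part of this module):

```python
def can_run(platform, required):
    """Can an image requiring `required` run on `platform`? (dict form)."""
    if not required:
        return True  # images without a 'platform' entry match anything
    # Required fields, with the same defaults as the Platform class.
    if required.get('architecture', 'amd64') != platform.get('architecture', 'amd64'):
        return False
    if required.get('os', 'linux') != platform.get('os', 'linux'):
        return False
    # Optional exact-match fields.
    for key in ('os.version', 'variant'):
        if required.get(key) and required.get(key) != platform.get(key):
            return False
    # Feature sets must be subsets of the target's.
    for key in ('os.features', 'features'):
        if not set(required.get(key, [])) <= set(platform.get(key, [])):
            return False
    return True

ok = can_run({'architecture': 'arm64', 'os': 'linux', 'variant': 'v8'},
             {'architecture': 'arm64', 'os': 'linux'})
```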
class DockerImageList(six.with_metaclass(abc.ABCMeta, object)):
"""Interface for implementations that interact with Docker manifest lists."""
def digest(self):
"""The digest of the manifest."""
return docker_digest.SHA256(self.manifest().encode('utf8'))
def media_type(self):
"""The media type of the manifest."""
manifest = json.loads(self.manifest())
# Since 'mediaType' is optional for OCI images, assume OCI if it's missing.
return manifest.get('mediaType', docker_http.OCI_IMAGE_INDEX_MIME)
# pytype: disable=bad-return-type
@abc.abstractmethod
def manifest(self):
"""The JSON manifest referenced by the tag/digest.
Returns:
The raw json manifest
"""
# pytype: enable=bad-return-type
# pytype: disable=bad-return-type
@abc.abstractmethod
def resolve_all(
self, target = None):
"""Resolves a manifest list to a list of compatible manifests.
Args:
target: the platform to check for compatibility. If omitted, the target
platform defaults to linux/amd64.
Returns:
A list of images that can be run on the target platform. The images are
sorted by their digest.
"""
# pytype: enable=bad-return-type
def resolve(self,
target = None):
"""Resolves a manifest list to a compatible manifest.
Args:
target: the platform to check for compatibility. If omitted, the target
platform defaults to linux/amd64.
Raises:
Exception: no manifests were compatible with the target platform.
Returns:
An image that can run on the target platform.
"""
if not target:
target = Platform()
images = self.resolve_all(target)
if not images:
raise Exception('Could not resolve manifest list to compatible manifest')
return images[0]
# __enter__ and __exit__ allow use as a context manager.
@abc.abstractmethod
def __enter__(self):
"""Open the image for reading."""
@abc.abstractmethod
def __exit__(self, unused_type, unused_value, unused_traceback):
"""Close the image."""
@abc.abstractmethod
def __iter__(self):
"""Iterate over this manifest list's children."""
class Delegate(DockerImageList):
"""Forwards calls to the underlying image."""
def __init__(self, image):
"""Constructor.
Args:
image: a DockerImageList on which __enter__ has already been called.
"""
self._image = image
super(Delegate, self).__init__()
def manifest(self):
"""Override."""
return self._image.manifest()
def media_type(self):
"""Override."""
return self._image.media_type()
def resolve_all(
self, target = None):
"""Override."""
return self._image.resolve_all(target)
def resolve(self,
target = None):
"""Override."""
return self._image.resolve(target)
def __iter__(self):
"""Override."""
return iter(self._image)
def __str__(self):
"""Override."""
return str(self._image)
class FromRegistry(DockerImageList):
"""This accesses a docker image list hosted on a registry (non-local)."""
def __init__(
self,
name,
basic_creds,
transport,
accepted_mimes = docker_http.MANIFEST_LIST_MIMES):
self._name = name
self._creds = basic_creds
self._original_transport = transport
self._accepted_mimes = accepted_mimes
self._response = {}
super(FromRegistry, self).__init__()
def _content(self,
suffix,
accepted_mimes = None,
cache = True):
"""Fetches content of the resources from registry by http calls."""
if isinstance(self._name, docker_name.Repository):
suffix = '{repository}/{suffix}'.format(
repository=self._name.repository, suffix=suffix)
if suffix in self._response:
return self._response[suffix]
_, content = self._transport.Request(
'{scheme}://{registry}/v2/{suffix}'.format(
scheme=docker_http.Scheme(self._name.registry),
registry=self._name.registry,
suffix=suffix),
accepted_codes=[six.moves.http_client.OK],
accepted_mimes=accepted_mimes)
if cache:
self._response[suffix] = content
return content
def _tags(self):
# See //cloud/containers/registry/proto/v2/tags.proto
# for the full response structure.
return json.loads(self._content('tags/list').decode('utf8'))
def tags(self):
return self._tags().get('tags', [])
def manifests(self):
payload = self._tags()
if 'manifest' not in payload:
# Only GCR supports this schema.
return {}
return payload['manifest']
def children(self):
payload = self._tags()
if 'child' not in payload:
# Only GCR supports this schema.
return []
return payload['child']
def images(self):
"""Returns a list of tuples whose elements are (name, platform, image).
Raises:
InvalidMediaTypeError: a child with an unexpected media type was found.
"""
manifests = json.loads(self.manifest())['manifests']
results = []
for entry in manifests:
digest = entry['digest']
base = self._name.as_repository() # pytype: disable=attribute-error
name = docker_name.Digest('{base}@{digest}'.format(
base=base, digest=digest))
media_type = entry['mediaType']
if media_type in docker_http.MANIFEST_LIST_MIMES:
image = FromRegistry(name, self._creds, self._original_transport)
elif media_type in docker_http.SUPPORTED_MANIFEST_MIMES:
image = v2_2_image.FromRegistry(name, self._creds,
self._original_transport, [media_type])
else:
raise InvalidMediaTypeError('Invalid media type: ' + media_type)
platform = Platform(entry['platform']) if 'platform' in entry else None
results.append((name, platform, image))
return results
def resolve_all(
self, target = None):
results = list(self.resolve_all_unordered(target).items())
    # Sort by name (equivalent to sorting by digest) for deterministic output.
    # We could have resolve_all_unordered() return only a list of images and
    # use image.digest() as the sort key, but FromRegistry.digest() would
    # trigger another round-trip to the registry, and that inefficiency grows
    # with the number of child images. So resolve_all_unordered() returns both
    # image names and images.
results.sort(key=lambda name_image: str(name_image[0]))
return [image for (_, image) in results]
def resolve_all_unordered(
self, target = None
):
    """Resolves a manifest list to a mapping from digest name to image.
    Args:
      target: the platform to check for compatibility. If omitted, the target
        platform defaults to linux/amd64.
    Returns:
      A dict mapping each image's digest name to an image that can be run on
      the target platform.
"""
target = target or Platform()
results = {}
images = self.images()
for name, platform, image in images:
# Recurse on manifest lists.
if isinstance(image, FromRegistry):
with image:
results.update(image.resolve_all_unordered(target))
elif target.can_run(platform):
results[name] = image
return results
def exists(self):
try:
manifest = json.loads(self.manifest(validate=False))
return manifest['schemaVersion'] == 2 and 'manifests' in manifest
except docker_http.V2DiagnosticException as err:
if err.status == six.moves.http_client.NOT_FOUND:
return False
raise
def manifest(self, validate=True):
"""Override."""
# GET server1/v2/<name>/manifests/<tag_or_digest>
if isinstance(self._name, docker_name.Tag):
return self._content('manifests/' + self._name.tag,
self._accepted_mimes).decode('utf8')
else:
assert isinstance(self._name, docker_name.Digest)
c = self._content('manifests/' + self._name.digest, self._accepted_mimes)
computed = docker_digest.SHA256(c)
if validate and computed != self._name.digest:
raise DigestMismatchedError(
'The returned manifest\'s digest did not match requested digest, '
'%s vs. %s' % (self._name.digest, computed))
return c.decode('utf8')
# __enter__ and __exit__ allow use as a context manager.
def __enter__(self):
# Create a v2 transport to use for making authenticated requests.
self._transport = docker_http.Transport(
self._name, self._creds, self._original_transport, docker_http.PULL)
return self
def __exit__(self, unused_type, unused_value, unused_traceback):
pass
def __str__(self):
return '<docker_image_list.FromRegistry name: {}>'.format(str(self._name))
def __iter__(self):
return iter([(platform, image) for (_, platform, image) in self.images()])
class FromList(DockerImageList):
"""This synthesizes a Manifest List from a list of images."""
def __init__(self, images):
self._images = images
super(FromList, self).__init__()
def manifest(self):
list_body = {
'mediaType': docker_http.MANIFEST_LIST_MIME,
'schemaVersion': 2,
'manifests': []
}
for (platform, manifest) in self._images:
manifest_body = {
'digest': manifest.digest(),
'mediaType': manifest.media_type(),
'size': len(manifest.manifest())
}
if platform:
manifest_body['platform'] = dict(platform)
list_body['manifests'].append(manifest_body)
return json.dumps(list_body, sort_keys=True)
def resolve_all(
self, target = None):
"""Resolves a manifest list to a list of compatible manifests.
Args:
target: the platform to check for compatibility. If omitted, the target
platform defaults to linux/amd64.
Returns:
A list of images that can be run on the target platform.
"""
target = target or Platform()
results = []
for (platform, image) in self._images:
if isinstance(image, DockerImageList):
with image:
results.extend(image.resolve_all(target))
elif target.can_run(platform):
results.append(image)
    # Use a dictionary keyed by digest to dedupe.
    dgst_img_dict = {img.digest(): img for img in results}
    # Sorting by digest keeps the output deterministic. PyType complains about
    # the return type being List[DockerImageList] here; TODO(b/67895498)
    # tracks removing the workaround.
return [dgst_img_dict[dgst] for dgst in sorted(dgst_img_dict.keys())]
# __enter__ and __exit__ allow use as a context manager.
def __enter__(self):
return self
def __exit__(self, unused_type, unused_value, unused_traceback):
pass
def __iter__(self):
return iter(self._images)
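FromList.manifest() above synthesizes a schema-2 manifest list body from child descriptors. A self-contained sketch of the same construction (the media-type strings are the standard Docker values, restated here because docker_http is not imported; the sample platform, digest, and size are placeholders):

```python
import json

# Media-type constants restated so the sketch is self-contained; the real
# module reads them from docker_http.
MANIFEST_LIST_MIME = 'application/vnd.docker.distribution.manifest.list.v2+json'
MANIFEST_SCHEMA2_MIME = 'application/vnd.docker.distribution.manifest.v2+json'


def synthesize_manifest_list(entries):
  """Build a schema-2 manifest list from (platform, digest, size) tuples."""
  list_body = {
      'mediaType': MANIFEST_LIST_MIME,
      'schemaVersion': 2,
      'manifests': [],
  }
  for platform, digest, size in entries:
    manifest_body = {
        'digest': digest,
        'mediaType': MANIFEST_SCHEMA2_MIME,
        'size': size,
    }
    if platform:
      manifest_body['platform'] = platform
    list_body['manifests'].append(manifest_body)
  # sort_keys=True keeps the serialized bytes (and hence the digest) stable.
  return json.dumps(list_body, sort_keys=True)


body = synthesize_manifest_list([
    ({'os': 'linux', 'architecture': 'amd64'}, 'sha256:' + '0' * 64, 1234),
])
parsed = json.loads(body)
```

As in the real method, entries without a platform simply omit the `platform` key rather than writing an empty one.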


@@ -0,0 +1,366 @@
# Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This package manages pushes to and deletes from a v2 docker registry."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import logging
import concurrent.futures
from containerregistry.client import docker_creds
from containerregistry.client import docker_name
from containerregistry.client.v2_2 import docker_http
from containerregistry.client.v2_2 import docker_image
from containerregistry.client.v2_2 import docker_image_list as image_list
import httplib2
import six.moves.http_client
import six.moves.urllib.parse
def _tag_or_digest(name):
if isinstance(name, docker_name.Tag):
return name.tag
else:
assert isinstance(name, docker_name.Digest)
return name.digest
class Push(object):
"""Push encapsulates a Registry v2.2 Docker push session."""
def __init__(self,
name,
creds,
transport,
mount = None,
threads = 1):
"""Constructor.
    If multiple threads are used, the caller *must* ensure that both the
    provided transport and the image being uploaded are thread-safe. Notably,
    tarfile and httplib2.Http in Python are NOT thread-safe.
Args:
name: the fully-qualified name of the tag to push
creds: credential provider for authorizing requests
transport: the http transport to use for sending requests
mount: list of repos from which to mount blobs.
threads: the number of threads to use for uploads.
Raises:
ValueError: an incorrectly typed argument was supplied.
"""
self._name = name
self._transport = docker_http.Transport(name, creds, transport,
docker_http.PUSH)
self._mount = mount
self._threads = threads
def name(self):
return self._name
def _scheme_and_host(self):
return '{scheme}://{registry}'.format(
scheme=docker_http.Scheme(self._name.registry),
registry=self._name.registry)
def _base_url(self):
return self._scheme_and_host() + '/v2/{repository}'.format(
repository=self._name.repository)
def _get_absolute_url(self, location):
# If 'location' is an absolute URL (includes host), this will be a no-op.
return six.moves.urllib.parse.urljoin(
base=self._scheme_and_host(), url=location)
def blob_exists(self, digest):
"""Check the remote for the given layer."""
# HEAD the blob, and check for a 200
resp, unused_content = self._transport.Request(
'{base_url}/blobs/{digest}'.format(
base_url=self._base_url(), digest=digest),
method='HEAD',
accepted_codes=[
six.moves.http_client.OK, six.moves.http_client.NOT_FOUND
])
return resp.status == six.moves.http_client.OK # pytype: disable=attribute-error
def manifest_exists(
self, image
):
"""Check the remote for the given manifest by digest."""
# GET the manifest by digest, and check for 200
resp, unused_content = self._transport.Request(
'{base_url}/manifests/{digest}'.format(
base_url=self._base_url(), digest=image.digest()),
method='GET',
accepted_codes=[
six.moves.http_client.OK, six.moves.http_client.NOT_FOUND
],
accepted_mimes=[image.media_type()])
return resp.status == six.moves.http_client.OK # pytype: disable=attribute-error
def _get_blob(self, image, digest):
if digest == image.config_blob():
return image.config_file().encode('utf8')
return image.blob(digest)
def _monolithic_upload(self, image,
digest):
self._transport.Request(
'{base_url}/blobs/uploads/?digest={digest}'.format(
base_url=self._base_url(), digest=digest),
method='POST',
body=self._get_blob(image, digest),
accepted_codes=[six.moves.http_client.CREATED])
def _add_digest(self, url, digest):
scheme, netloc, path, query_string, fragment = (
six.moves.urllib.parse.urlsplit(url))
qs = six.moves.urllib.parse.parse_qs(query_string)
qs['digest'] = [digest]
query_string = six.moves.urllib.parse.urlencode(qs, doseq=True)
return six.moves.urllib.parse.urlunsplit((scheme, netloc, path, # pytype: disable=bad-return-type
query_string, fragment))
def _put_upload(self, image, digest):
mounted, location = self._start_upload(digest, self._mount)
if mounted:
logging.info('Layer %s mounted.', digest)
return
location = self._add_digest(location, digest)
self._transport.Request(
location,
method='PUT',
body=self._get_blob(image, digest),
accepted_codes=[six.moves.http_client.CREATED])
# pylint: disable=missing-docstring
def patch_upload(self, source,
digest):
mounted, location = self._start_upload(digest, self._mount)
if mounted:
logging.info('Layer %s mounted.', digest)
return
location = self._get_absolute_url(location)
blob = source
if isinstance(source, docker_image.DockerImage):
blob = self._get_blob(source, digest)
resp, unused_content = self._transport.Request(
location,
method='PATCH',
body=blob,
content_type='application/octet-stream',
accepted_codes=[
six.moves.http_client.NO_CONTENT, six.moves.http_client.ACCEPTED,
six.moves.http_client.CREATED
])
location = self._add_digest(resp['location'], digest)
location = self._get_absolute_url(location)
self._transport.Request(
location,
method='PUT',
body=None,
accepted_codes=[six.moves.http_client.CREATED])
def _put_blob(self, image, digest):
"""Upload the aufs .tgz for a single layer."""
# We have a few choices for unchunked uploading:
# POST to /v2/<name>/blobs/uploads/?digest=<digest>
# Fastest, but not supported by many registries.
# self._monolithic_upload(image, digest)
#
# or:
# POST /v2/<name>/blobs/uploads/ (no body*)
# PUT /v2/<name>/blobs/uploads/<uuid> (full body)
# Next fastest, but there is a mysterious bad interaction
# with Bintray. This pattern also hasn't been used in
# clients since 1.8, when they switched to the 3-stage
# method below.
# self._put_upload(image, digest)
# or:
# POST /v2/<name>/blobs/uploads/ (no body*)
# PATCH /v2/<name>/blobs/uploads/<uuid> (full body)
# PUT /v2/<name>/blobs/uploads/<uuid> (no body)
#
# * We attempt to perform a cross-repo mount if any repositories are
# specified in the "mount" parameter. This does a fast copy from a
# repository that is known to contain this blob and skips the upload.
self.patch_upload(image, digest)
def _remote_tag_digest(
self, image
):
    """Fetch the digest of the manifest that our tag currently points to."""
# GET the tag we're pushing
resp, unused_content = self._transport.Request(
'{base_url}/manifests/{tag}'.format(
base_url=self._base_url(),
tag=self._name.tag), # pytype: disable=attribute-error
method='GET',
accepted_codes=[
six.moves.http_client.OK, six.moves.http_client.NOT_FOUND
],
accepted_mimes=[image.media_type()])
if resp.status == six.moves.http_client.NOT_FOUND: # pytype: disable=attribute-error
return None
return resp.get('docker-content-digest')
def put_manifest(
self,
image,
use_digest = False):
"""Upload the manifest for this image."""
if use_digest:
tag_or_digest = image.digest()
else:
tag_or_digest = _tag_or_digest(self._name)
self._transport.Request(
'{base_url}/manifests/{tag_or_digest}'.format(
base_url=self._base_url(), tag_or_digest=tag_or_digest),
method='PUT',
body=image.manifest(),
content_type=image.media_type(),
accepted_codes=[
six.moves.http_client.OK, six.moves.http_client.CREATED,
six.moves.http_client.ACCEPTED # pytype: disable=wrong-arg-types
])
def _start_upload(self,
digest,
mount = None
):
"""POST to begin the upload process with optional cross-repo mount param."""
if not mount:
# Do a normal POST to initiate an upload if mount is missing.
url = '{base_url}/blobs/uploads/'.format(base_url=self._base_url())
accepted_codes = [six.moves.http_client.ACCEPTED]
else:
# If we have a mount parameter, try to mount the blob from another repo.
mount_from = '&'.join([
'from=' + six.moves.urllib.parse.quote(repo.repository, '')
for repo in self._mount
])
url = '{base_url}/blobs/uploads/?mount={digest}&{mount_from}'.format(
base_url=self._base_url(), digest=digest, mount_from=mount_from)
accepted_codes = [
six.moves.http_client.CREATED, six.moves.http_client.ACCEPTED
]
resp, unused_content = self._transport.Request(
url, method='POST', body=None, accepted_codes=accepted_codes)
# pytype: disable=attribute-error,bad-return-type
return resp.status == six.moves.http_client.CREATED, resp.get('location')
# pytype: enable=attribute-error,bad-return-type
def _upload_one(self, image, digest):
"""Upload a single layer, after checking whether it exists already."""
if self.blob_exists(digest):
logging.info('Layer %s exists, skipping', digest)
return
self._put_blob(image, digest)
logging.info('Layer %s pushed.', digest)
def upload(self,
image,
use_digest = False):
"""Upload the layers of the given image.
Args:
image: the image to upload.
use_digest: use the manifest digest (i.e. not tag) as the image reference.
"""
# If the manifest (by digest) exists, then avoid N layer existence
# checks (they must exist).
if self.manifest_exists(image):
if isinstance(self._name, docker_name.Tag):
if self._remote_tag_digest(image) == image.digest():
logging.info('Tag points to the right manifest, skipping push.')
return
logging.info('Manifest exists, skipping blob uploads and pushing tag.')
else:
logging.info('Manifest exists, skipping upload.')
elif isinstance(image, image_list.DockerImageList):
for _, child in image:
# TODO(user): Refactor so that the threadpool is shared.
with child:
self.upload(child, use_digest=True)
elif self._threads == 1:
for digest in image.distributable_blob_set():
self._upload_one(image, digest)
else:
with concurrent.futures.ThreadPoolExecutor(
max_workers=self._threads) as executor:
future_to_params = {
executor.submit(self._upload_one, image, digest): (image, digest)
for digest in image.distributable_blob_set()
}
for future in concurrent.futures.as_completed(future_to_params):
future.result()
# This should complete the upload by uploading the manifest.
self.put_manifest(image, use_digest=use_digest)
# __enter__ and __exit__ allow use as a context manager.
def __enter__(self):
return self
def __exit__(self, exception_type, unused_value, unused_traceback):
if exception_type:
logging.error('Error during upload of: %s', self._name)
return
logging.info('Finished upload of: %s', self._name)
# pylint: disable=invalid-name
def Delete(
name,
creds,
transport
):
"""Delete a tag or digest.
Args:
name: a tag or digest to be deleted.
creds: the creds to use for deletion.
transport: the transport to use to contact the registry.
"""
docker_transport = docker_http.Transport(name, creds, transport,
docker_http.DELETE)
_, unused_content = docker_transport.Request(
'{scheme}://{registry}/v2/{repository}/manifests/{entity}'.format(
scheme=docker_http.Scheme(name.registry),
registry=name.registry,
repository=name.repository,
entity=_tag_or_digest(name)),
method='DELETE',
accepted_codes=[six.moves.http_client.OK, six.moves.http_client.ACCEPTED])
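The `_add_digest()` helper above rebuilds the upload URL's query string rather than naively appending `&digest=...`, which would drop or duplicate parameters (such as upload state) that the registry already placed in the Location header. A standalone sketch of that logic using only the standard library (the example URL is hypothetical):

```python
from urllib.parse import parse_qs, urlencode, urlsplit, urlunsplit


def add_digest(url, digest):
  """Add (or overwrite) the 'digest' query parameter on an upload URL.

  Rebuilding the query string preserves any parameters the registry already
  put there, e.g. an opaque upload 'state' token.
  """
  scheme, netloc, path, query_string, fragment = urlsplit(url)
  qs = parse_qs(query_string)
  qs['digest'] = [digest]
  return urlunsplit((scheme, netloc, path, urlencode(qs, doseq=True), fragment))


# Hypothetical upload URL of the shape a registry returns in Location.
url = add_digest(
    'https://registry.example.com/v2/repo/blobs/uploads/uuid?state=abc',
    'sha256:' + 'f' * 64)
```

Note that `urlencode` percent-escapes the `sha256:` prefix to `sha256%3A`, which registries accept.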


@@ -0,0 +1,171 @@
# Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This package provides compatibility interfaces for OCI."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import json
from containerregistry.client.v2_2 import docker_http
from containerregistry.client.v2_2 import docker_image
from containerregistry.client.v2_2 import docker_image_list
class OCIFromV22(docker_image.Delegate):
"""This compatibility interface serves an OCI image from a v2_2 image."""
def manifest(self):
"""Override."""
manifest = json.loads(self._image.manifest())
manifest['mediaType'] = docker_http.OCI_MANIFEST_MIME
manifest['config']['mediaType'] = docker_http.OCI_CONFIG_JSON_MIME
for layer in manifest['layers']:
layer['mediaType'] = docker_http.OCI_LAYER_MIME
return json.dumps(manifest, sort_keys=True)
def media_type(self):
"""Override."""
return docker_http.OCI_MANIFEST_MIME
def __enter__(self):
return self
def __exit__(self, unused_type, unused_value, unused_traceback):
"""Override."""
pass
class V22FromOCI(docker_image.Delegate):
"""This compatibility interface serves a v2_2 image from an OCI image."""
def manifest(self):
"""Override."""
manifest = json.loads(self._image.manifest())
manifest['mediaType'] = docker_http.MANIFEST_SCHEMA2_MIME
manifest['config']['mediaType'] = docker_http.CONFIG_JSON_MIME
for layer in manifest['layers']:
layer['mediaType'] = docker_http.LAYER_MIME
return json.dumps(manifest, sort_keys=True)
def media_type(self):
"""Override."""
return docker_http.MANIFEST_SCHEMA2_MIME
def __enter__(self):
return self
def __exit__(self, unused_type, unused_value, unused_traceback):
"""Override."""
pass
class IndexFromList(docker_image_list.Delegate):
"""This compatibility interface serves an Image Index from a Manifest List."""
def __init__(self,
image,
recursive = True):
"""Constructor.
Args:
image: a DockerImageList on which __enter__ has already been called.
recursive: whether to recursively convert child manifests to OCI types.
"""
super(IndexFromList, self).__init__(image)
self._recursive = recursive
def manifest(self):
"""Override."""
manifest = json.loads(self._image.manifest())
manifest['mediaType'] = docker_http.OCI_IMAGE_INDEX_MIME
return json.dumps(manifest, sort_keys=True)
def media_type(self):
"""Override."""
return docker_http.OCI_IMAGE_INDEX_MIME
def __enter__(self):
if not self._recursive:
return self
converted = []
for platform, child in self._image:
if isinstance(child, docker_image_list.DockerImageList):
with IndexFromList(child) as index:
converted.append((platform, index))
else:
assert isinstance(child, docker_image.DockerImage)
with OCIFromV22(child) as oci:
converted.append((platform, oci))
with docker_image_list.FromList(converted) as index:
self._image = index
return self
def __exit__(self, unused_type, unused_value, unused_traceback):
"""Override."""
pass
class ListFromIndex(docker_image_list.Delegate):
"""This compatibility interface serves a Manifest List from an Image Index."""
def __init__(self,
image,
recursive = True):
"""Constructor.
Args:
image: a DockerImageList on which __enter__ has already been called.
recursive: whether to recursively convert child manifests to Docker types.
"""
super(ListFromIndex, self).__init__(image)
self._recursive = recursive
def manifest(self):
"""Override."""
manifest = json.loads(self._image.manifest())
manifest['mediaType'] = docker_http.MANIFEST_LIST_MIME
return json.dumps(manifest, sort_keys=True)
def media_type(self):
"""Override."""
return docker_http.MANIFEST_LIST_MIME
def __enter__(self):
if not self._recursive:
return self
converted = []
for platform, child in self._image:
if isinstance(child, docker_image_list.DockerImageList):
with ListFromIndex(child) as image_list:
converted.append((platform, image_list))
else:
assert isinstance(child, docker_image.DockerImage)
with V22FromOCI(child) as v22:
converted.append((platform, v22))
with docker_image_list.FromList(converted) as image_list:
self._image = image_list
return self
def __exit__(self, unused_type, unused_value, unused_traceback):
"""Override."""
pass
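The OCIFromV22/V22FromOCI delegates above show that converting between Docker schema 2 and OCI is purely a media-type rewrite: the digests and sizes of the referenced blobs are untouched. A self-contained sketch of one direction (media-type constants restated from the specs; the sample digests are placeholders):

```python
import json

# Standard Docker schema-2 and OCI media types, restated for self-containment.
DOCKER_MANIFEST = 'application/vnd.docker.distribution.manifest.v2+json'
DOCKER_CONFIG = 'application/vnd.docker.container.image.v1+json'
DOCKER_LAYER = 'application/vnd.docker.image.rootfs.diff.tar.gzip'
OCI_MANIFEST = 'application/vnd.oci.image.manifest.v1+json'
OCI_CONFIG = 'application/vnd.oci.image.config.v1+json'
OCI_LAYER = 'application/vnd.oci.image.layer.v1.tar+gzip'


def to_oci(manifest_json):
  """Rewrite a Docker schema-2 manifest's media types to OCI equivalents."""
  manifest = json.loads(manifest_json)
  manifest['mediaType'] = OCI_MANIFEST
  manifest['config']['mediaType'] = OCI_CONFIG
  for layer in manifest['layers']:
    layer['mediaType'] = OCI_LAYER
  return json.dumps(manifest, sort_keys=True)


docker_manifest = json.dumps({
    'schemaVersion': 2,
    'mediaType': DOCKER_MANIFEST,
    'config': {'mediaType': DOCKER_CONFIG,
               'digest': 'sha256:' + 'a' * 64, 'size': 2},
    'layers': [{'mediaType': DOCKER_LAYER,
                'digest': 'sha256:' + 'b' * 64, 'size': 3}],
})
oci = json.loads(to_oci(docker_manifest))
```

Because only `mediaType` fields change, re-serializing with `sort_keys=True` yields a new, stable digest for the converted manifest.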


@@ -0,0 +1,342 @@
# Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This package provides tools for saving docker images."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import errno
import io
import json
import os
import tarfile
import concurrent.futures
from containerregistry.client import docker_name
from containerregistry.client.v1 import docker_image as v1_image
from containerregistry.client.v1 import save as v1_save
from containerregistry.client.v2 import v1_compat
from containerregistry.client.v2_2 import docker_digest
from containerregistry.client.v2_2 import docker_http
from containerregistry.client.v2_2 import docker_image as v2_2_image
from containerregistry.client.v2_2 import v2_compat
import six
def _diff_id(v1_img, blob):
try:
return v1_img.diff_id(blob)
except ValueError:
unzipped = v1_img.uncompressed_layer(blob)
return docker_digest.SHA256(unzipped)
def multi_image_tarball(
tag_to_image,
tar,
tag_to_v1_image = None
):
"""Produce a "docker save" compatible tarball from the DockerImages.
Args:
tag_to_image: A dictionary of tags to the images they label.
tar: the open tarfile into which we are writing the image tarball.
tag_to_v1_image: A dictionary of tags to the v1 form of the images
they label. If this isn't provided, the image is simply converted.
"""
def add_file(filename, contents):
contents_bytes = contents.encode('utf8')
info = tarfile.TarInfo(filename)
info.size = len(contents_bytes)
tar.addfile(tarinfo=info, fileobj=io.BytesIO(contents_bytes))
tag_to_v1_image = tag_to_v1_image or {}
# The manifest.json file contains a list of the images to load
# and how to tag them. Each entry consists of three fields:
# - Config: the name of the image's config_file() within the
# saved tarball.
# - Layers: the list of filenames for the blobs constituting
# this image. The order is the reverse of the v1
# ancestry ordering.
# - RepoTags: the list of tags to apply to this image once it
# is loaded.
manifests = []
for (tag, image) in six.iteritems(tag_to_image):
# The config file is stored in a blob file named with its digest.
digest = docker_digest.SHA256(image.config_file().encode('utf8'), '')
add_file(digest + '.json', image.config_file())
cfg = json.loads(image.config_file())
diffs = set(cfg.get('rootfs', {}).get('diff_ids', []))
v1_img = tag_to_v1_image.get(tag)
if not v1_img:
v2_img = v2_compat.V2FromV22(image)
v1_img = v1_compat.V1FromV2(v2_img)
tag_to_v1_image[tag] = v1_img
# Add the manifests entry for this image.
manifest = {
'Config':
digest + '.json',
'Layers': [
layer_id + '/layer.tar'
            # We can't simply exclude the empty tar, because its diff_id is
            # left in the set when the image comes through v2_compat.V22FromV2.
for layer_id in reversed(v1_img.ancestry(v1_img.top()))
if _diff_id(v1_img, layer_id) in diffs and
not json.loads(v1_img.json(layer_id)).get('throwaway')
],
'RepoTags': [str(tag)]
}
layer_sources = {}
input_manifest = json.loads(image.manifest())
input_layers = input_manifest['layers']
for input_layer in input_layers:
if input_layer['mediaType'] == docker_http.FOREIGN_LAYER_MIME:
diff_id = image.digest_to_diff_id(input_layer['digest'])
layer_sources[diff_id] = input_layer
if layer_sources:
manifest['LayerSources'] = layer_sources
manifests.append(manifest)
# v2.2 tarballs are a superset of v1 tarballs, so delegate
# to v1 to save itself.
v1_save.multi_image_tarball(tag_to_v1_image, tar)
add_file('manifest.json', json.dumps(manifests, sort_keys=True))
def tarball(name, image,
tar):
"""Produce a "docker save" compatible tarball from the DockerImage.
Args:
name: The tag name to write into repositories and manifest.json
image: a docker image to save.
tar: the open tarfile into which we are writing the image tarball.
"""
multi_image_tarball({name: image}, tar, {})
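The `add_file` helper and the manifest.json structure described in `multi_image_tarball()` can be exercised standalone: the snippet below writes a minimal (zero-layer) "docker save"-style tarball in memory. The `deadbeef.json` config name and the repo tag are hypothetical placeholders for `<config digest>.json` and a real tag.

```python
import io
import json
import tarfile


def add_file(tar, filename, contents):
  """Write an in-memory string into the tarball, as in multi_image_tarball."""
  data = contents.encode('utf8')
  info = tarfile.TarInfo(filename)
  info.size = len(data)
  tar.addfile(tarinfo=info, fileobj=io.BytesIO(data))


buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode='w') as tar:
  config = json.dumps({'rootfs': {'type': 'layers', 'diff_ids': []}})
  # 'deadbeef.json' stands in for '<config digest>.json'.
  add_file(tar, 'deadbeef.json', config)
  manifest = [{
      'Config': 'deadbeef.json',
      'Layers': [],  # '<layer id>/layer.tar' entries, base to top
      'RepoTags': ['example.com/repo:latest'],  # hypothetical tag
  }]
  add_file(tar, 'manifest.json', json.dumps(manifest, sort_keys=True))

buf.seek(0)
with tarfile.open(fileobj=buf, mode='r') as tar:
  names = tar.getnames()
  loaded = json.loads(tar.extractfile('manifest.json').read().decode('utf8'))
```

A real image would also contain one `<layer id>/layer.tar` member per entry in `Layers`.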
def fast(
image,
directory,
threads = 1,
cache_directory = None
):
"""Produce a FromDisk compatible file layout under the provided directory.
After calling this, the following filesystem will exist:
directory/
config.json <-- only *.json, the image's config
digest <-- sha256 digest of the image's manifest
manifest.json <-- the image's manifest
    001.tar.gz <-- the first layer's .tar.gz filesystem delta
    001.sha256 <-- the sha256 of 001.tar.gz with a "sha256:" prefix.
    ...
    NNN.tar.gz <-- the NNNth layer's .tar.gz filesystem delta
    NNN.sha256 <-- the sha256 of NNN.tar.gz with a "sha256:" prefix.
We pad layer indices to only 3 digits because of a known ceiling on the number
of filesystem layers Docker supports.
Args:
image: a docker image to save.
directory: an existing empty directory under which to save the layout.
threads: the number of threads to use when performing the upload.
cache_directory: directory that stores file cache.
Returns:
A tuple whose first element is the path to the config file, and whose second
element is an ordered list of tuples whose elements are the filenames
containing: (.sha256, .tar.gz) respectively.
"""
def write_file(name, accessor,
arg):
with io.open(name, u'wb') as f:
f.write(accessor(arg))
def write_file_and_store(name, accessor,
arg, cached_layer):
write_file(cached_layer, accessor, arg)
link(cached_layer, name)
def link(source, dest):
    """Creates a symbolic link at dest pointing to source.
    Unlinks dest first to replace "old" layers when needed. For example, image
    A:latest has layers 1, 2 and 3; after a while it has layers 1, 2 and 3'.
    Since in both cases the layers are named 001, 002 and 003, unlinking
    ensures that the correct layers are linked in the image directory.
    Args:
      source: the link target (e.g. the cached layer file).
      dest: the path of the symlink to create.
"""
try:
os.symlink(source, dest)
except OSError as e:
if e.errno == errno.EEXIST:
os.unlink(dest)
os.symlink(source, dest)
else:
raise e
def valid(cached_layer, digest):
with io.open(cached_layer, u'rb') as f:
current_digest = docker_digest.SHA256(f.read(), '')
return current_digest == digest
with concurrent.futures.ThreadPoolExecutor(max_workers=threads) as executor:
future_to_params = {}
config_file = os.path.join(directory, 'config.json')
f = executor.submit(write_file, config_file,
lambda unused: image.config_file().encode('utf8'),
'unused')
future_to_params[f] = config_file
executor.submit(write_file, os.path.join(directory, 'digest'),
lambda unused: image.digest().encode('utf8'), 'unused')
executor.submit(write_file, os.path.join(directory, 'manifest.json'),
lambda unused: image.manifest().encode('utf8'),
'unused')
idx = 0
layers = []
for blob in reversed(image.fs_layers()):
# Create a local copy
layer_name = os.path.join(directory, '%03d.tar.gz' % idx)
digest_name = os.path.join(directory, '%03d.sha256' % idx)
# Strip the sha256: prefix
digest = blob[7:].encode('utf8')
f = executor.submit(
write_file,
digest_name,
lambda blob: blob[7:].encode('utf8'),
blob)
future_to_params[f] = digest_name
      # Keep the text form (without the "sha256:" prefix) for cache lookups;
      # str() of the encoded bytes would yield "b'...'" on Python 3.
      digest_str = blob[7:]
if cache_directory:
# Search for a local cached copy
cached_layer = os.path.join(cache_directory, digest_str)
if os.path.exists(cached_layer) and valid(cached_layer, digest_str):
f = executor.submit(link, cached_layer, layer_name)
future_to_params[f] = layer_name
else:
f = executor.submit(write_file_and_store, layer_name, image.blob,
blob, cached_layer)
future_to_params[f] = layer_name
else:
f = executor.submit(write_file, layer_name, image.blob, blob)
future_to_params[f] = layer_name
layers.append((digest_name, layer_name))
idx += 1
# Wait for completion.
for future in concurrent.futures.as_completed(future_to_params):
future.result()
return (config_file, layers)
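The cache-linking path in `fast()` relies on `link()` being safe to repeat: re-saving a changed image must repoint `001.tar.gz` at the new cached layer. A minimal standalone sketch of that unlink-then-symlink pattern (paths are temporary placeholders; assumes a platform where `os.symlink` is available):

```python
import errno
import os
import tempfile


def link(source, dest):
  """Create a symlink at dest, replacing a stale one from an earlier save."""
  try:
    os.symlink(source, dest)
  except OSError as e:
    if e.errno == errno.EEXIST:
      os.unlink(dest)
      os.symlink(source, dest)
    else:
      raise


tmp = tempfile.mkdtemp()
first = os.path.join(tmp, 'layer_v1')
second = os.path.join(tmp, 'layer_v2')
for path in (first, second):
  with open(path, 'w') as f:
    f.write(path)

dest = os.path.join(tmp, '001.tar.gz')
link(first, dest)
link(second, dest)  # re-linking replaces the stale target in place
```

Catching EEXIST instead of unlinking up front keeps the common (fresh directory) case to a single syscall.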
def uncompressed(image,
directory,
threads = 1):
"""Produce a format similar to `fast()`, but with uncompressed blobs.
After calling this, the following filesystem will exist:
directory/
config.json <-- only *.json, the image's config
digest <-- sha256 digest of the image's manifest
manifest.json <-- the image's manifest
001.tar <-- the first layer's .tar filesystem delta
001.sha256 <-- the sha256 of 001.tar with a "sha256:" prefix.
...
NNN.tar <-- the NNNth layer's .tar filesystem delta
NNN.sha256 <-- the sha256 of NNN.tar with a "sha256:" prefix.
We pad layer indices to only 3 digits because of a known ceiling on the number
of filesystem layers Docker supports.
Args:
image: a docker image to save.
directory: an existing empty directory under which to save the layout.
threads: the number of threads to use when performing the upload.
Returns:
A tuple whose first element is the path to the config file, and whose second
element is an ordered list of tuples whose elements are the filenames
containing: (.sha256, .tar) respectively.
"""
def write_file(name, accessor,
arg):
with io.open(name, u'wb') as f:
f.write(accessor(arg))
with concurrent.futures.ThreadPoolExecutor(max_workers=threads) as executor:
future_to_params = {}
config_file = os.path.join(directory, 'config.json')
f = executor.submit(write_file, config_file,
lambda unused: image.config_file().encode('utf8'),
'unused')
future_to_params[f] = config_file
executor.submit(write_file, os.path.join(directory, 'digest'),
lambda unused: image.digest().encode('utf8'), 'unused')
executor.submit(write_file, os.path.join(directory, 'manifest.json'),
lambda unused: image.manifest().encode('utf8'),
'unused')
idx = 0
layers = []
for diff_id in reversed(image.diff_ids()):
# Create a local copy
digest_name = os.path.join(directory, '%03d.sha256' % idx)
f = executor.submit(
write_file,
digest_name,
# Strip the sha256: prefix
lambda diff_id: diff_id[7:].encode('utf8'),
diff_id)
future_to_params[f] = digest_name
layer_name = os.path.join(directory, '%03d.tar' % idx)
f = executor.submit(write_file, layer_name, image.uncompressed_layer,
diff_id)
future_to_params[f] = layer_name
layers.append((digest_name, layer_name))
idx += 1
# Wait for completion.
for future in concurrent.futures.as_completed(future_to_params):
future.result()
return (config_file, layers)

View File

@@ -0,0 +1,321 @@
# Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This package provides compatibility interfaces for v1/v2."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import json
from containerregistry.client.v2 import docker_image as v2_image
from containerregistry.client.v2 import util as v2_util
from containerregistry.client.v2_2 import docker_digest
from containerregistry.client.v2_2 import docker_http
from containerregistry.client.v2_2 import docker_image as v2_2_image
class BadDigestException(Exception):
"""Exceptions when a bad digest is supplied."""
EMPTY_TAR_DIGEST = 'sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4'
EMPTY_TAR_BYTES = b'\x1f\x8b\x08\x00\x00\tn\x88\x00\xffb\x18\x05\xa3`\x14\x8cX\x00\x08\x00\x00\xff\xff.\xaf\xb5\xef\x00\x04\x00\x00' # pylint: disable=line-too-long
# Expose a way of constructing the config file given just the v1 compat list
# and a list of diff ids. This is used for compatibility with v2 images
# (below), but it is also useful when handling 'docker save' tarballs, since
# those don't know their v2/v2.2 blob names, and gzipping the layers just to
# compute them is wasteful when we are only going to re-save the image. While
# we don't provide it here, this can be used to synthesize a v2.2 config_file
# directly from a v1.docker_image.DockerImage.
def config_file(v1_compats,
diff_ids):
"""Compute the v2.2 config file given the history and diff ids."""
# We want the final v1 compatibility entry (the topmost layer, since
# callers pass the list base-to-top), from which we draw additional fields.
v1_compatibility = {}
histories = []
for v1_compat in v1_compats:
v1_compatibility = v1_compat
# created_by in history is the command that was run to create the layer.
# Cmd in the container config may be an empty array.
history = {}
if 'container_config' in v1_compatibility:
container_config = v1_compatibility.get('container_config')
if container_config.get('Cmd'): # pytype: disable=attribute-error
history['created_by'] = container_config['Cmd'][0]
if 'created' in v1_compatibility:
history['created'] = v1_compatibility.get('created')
histories += [history]
config = {
'history': histories,
'rootfs': {
'diff_ids': diff_ids,
'type': 'layers'
}
}
for key in [
'architecture', 'config', 'container', 'container_config',
'docker_version', 'os'
]:
if key in v1_compatibility:
config[key] = v1_compatibility[key]
if 'created' in v1_compatibility:
config['created'] = v1_compatibility.get('created')
return json.dumps(config, sort_keys=True)
class V22FromV2(v2_2_image.DockerImage):
"""This compatibility interface serves the v2 interface from a v2_2 image."""
def __init__(self, v2_img):
"""Constructor.
Args:
v2_img: a v2 DockerImage on which __enter__ has already been called.
Raises:
ValueError: an incorrectly typed argument was supplied.
"""
self._v2_image = v2_img
self._ProcessImage()
def _ProcessImage(self):
"""Constructs schema 2 manifest from schema 1 manifest."""
raw_manifest_schema1 = self._v2_image.manifest()
manifest_schema1 = json.loads(raw_manifest_schema1)
self._config_file = config_file([
json.loads(history.get('v1Compatibility', '{}'))
for history in reversed(manifest_schema1.get('history', []))
], [
self._GetDiffId(digest)
for digest in reversed(self._v2_image.fs_layers())
])
config_bytes = self._config_file.encode('utf8')
config_descriptor = {
'mediaType': docker_http.CONFIG_JSON_MIME,
'size': len(config_bytes),
'digest': docker_digest.SHA256(config_bytes)
}
manifest_schema2 = {
'schemaVersion': 2,
'mediaType': docker_http.MANIFEST_SCHEMA2_MIME,
'config': config_descriptor,
'layers': [
{
'mediaType': docker_http.LAYER_MIME,
'size': self._v2_image.blob_size(digest),
'digest': digest
}
for digest in reversed(self._v2_image.fs_layers())
]
}
self._manifest = json.dumps(manifest_schema2, sort_keys=True)
def _GetDiffId(self, digest):
"""Hash the uncompressed layer blob."""
return docker_digest.SHA256(self._v2_image.uncompressed_blob(digest))
def manifest(self):
"""Override."""
return self._manifest
def config_file(self):
"""Override."""
return self._config_file
def uncompressed_blob(self, digest):
"""Override."""
return self._v2_image.uncompressed_blob(digest)
def blob(self, digest):
"""Override."""
return self._v2_image.blob(digest)
# __enter__ and __exit__ allow use as a context manager.
def __enter__(self):
return self
def __exit__(self, unused_type, unused_value, unused_traceback):
pass
class V2FromV22(v2_image.DockerImage):
"""This compatibility interface serves the v2 interface from a v2_2 image."""
def __init__(self, v2_2_img):
"""Constructor.
Args:
v2_2_img: a v2_2 DockerImage on which __enter__ has already been called.
Raises:
ValueError: an incorrectly typed argument was supplied.
"""
self._v2_2_image = v2_2_img
self._ProcessImage()
def _ProcessImage(self):
"""Constructs schema 1 manifest from schema 2 manifest and config file."""
manifest_schema2 = json.loads(self._v2_2_image.manifest())
raw_config = self._v2_2_image.config_file()
config = json.loads(raw_config)
parent = ''
histories = config.get('history', [])
layer_count = len(histories)
v2_layer_index = 0
layers = manifest_schema2.get('layers', [])
# from base to top
fs_layers = []
v1_histories = []
for v1_layer_index, history in enumerate(histories):
digest, media_type, v2_layer_index = self._GetSchema1LayerDigest(
history, layers, v1_layer_index, v2_layer_index)
if v1_layer_index != layer_count - 1:
layer_id = self._GenerateV1LayerId(digest, parent)
v1_compatibility = self._BuildV1Compatibility(layer_id, parent, history)
else:
layer_id = self._GenerateV1LayerId(digest, parent, raw_config)
v1_compatibility = self._BuildV1CompatibilityForTopLayer(
layer_id, parent, history, config)
parent = layer_id
fs_layers = [{'blobSum': digest, 'mediaType': media_type}] + fs_layers
v1_histories = [{'v1Compatibility': v1_compatibility}] + v1_histories
manifest_schema1 = {
'schemaVersion': 1,
'name': 'unused',
'tag': 'unused',
'fsLayers': fs_layers,
'history': v1_histories
}
if 'architecture' in config:
manifest_schema1['architecture'] = config['architecture']
self._manifest = v2_util.Sign(json.dumps(manifest_schema1, sort_keys=True))
def _GenerateV1LayerId(self,
digest,
parent,
raw_config = None):
parts = digest.rsplit(':', 1)
if len(parts) != 2:
raise BadDigestException('Invalid digest: %s, must be in '
'algorithm:blobSumHex format.' % digest)
data = parts[1] + ' ' + parent
if raw_config:
data += ' ' + raw_config
return docker_digest.SHA256(data.encode('utf8'), '')
def _BuildV1Compatibility(self, layer_id, parent,
history):
v1_compatibility = {'id': layer_id}
if parent:
v1_compatibility['parent'] = parent
if 'empty_layer' in history:
v1_compatibility['throwaway'] = True
if 'created_by' in history:
v1_compatibility['container_config'] = {'Cmd': [history['created_by']]}
for key in ['created', 'comment', 'author']:
if key in history:
v1_compatibility[key] = history[key]
return json.dumps(v1_compatibility, sort_keys=True)
def _BuildV1CompatibilityForTopLayer(self, layer_id, parent,
history,
config):
v1_compatibility = {'id': layer_id}
if parent:
v1_compatibility['parent'] = parent
if 'empty_layer' in history:
v1_compatibility['throwaway'] = True
for key in [
'architecture', 'container', 'docker_version', 'os', 'config',
'container_config', 'created'
]:
if key in config:
v1_compatibility[key] = config[key]
return json.dumps(v1_compatibility, sort_keys=True)
def _GetSchema1LayerDigest(
self, history, layers,
v1_layer_index, v2_layer_index):
if 'empty_layer' in history:
return (EMPTY_TAR_DIGEST, docker_http.LAYER_MIME, v2_layer_index)
else:
return (
layers[v2_layer_index]['digest'],
layers[v2_layer_index]['mediaType'],
v2_layer_index + 1
)
def manifest(self):
"""Override."""
return self._manifest
def uncompressed_blob(self, digest):
"""Override."""
if digest == EMPTY_TAR_DIGEST:
# See comment in blob().
return super(V2FromV22, self).uncompressed_blob(EMPTY_TAR_DIGEST)
return self._v2_2_image.uncompressed_blob(digest)
def diff_id(self, digest):
"""Gets v22 diff_id."""
return self._v2_2_image.digest_to_diff_id(digest)
def blob(self, digest):
"""Override."""
if digest == EMPTY_TAR_DIGEST:
# We added this blobsum for 'empty_layer' annotated layers, but the
# underlying v2.2 image doesn't necessarily expose them. So
# when we get a request for this special layer, return the raw
# bytes ourselves.
return EMPTY_TAR_BYTES
return self._v2_2_image.blob(digest)
# __enter__ and __exit__ allow use as a context manager.
def __enter__(self):
return self
def __exit__(self, unused_type, unused_value, unused_traceback):
pass
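
The schema 1 to schema 2 config synthesis performed by config_file can be sketched as a standalone helper. The names below are illustrative, not part of the library's API, and assume the v1 compatibility entries arrive ordered base-to-top as the callers arrange:

```python
import json

def build_config(v1_compats, diff_ids):
    """Sketch of config_file(): v1 compatibility entries plus diff_ids
    become a v2.2 config blob."""
    histories = []
    v1_compatibility = {}
    for v1_compat in v1_compats:
        v1_compatibility = v1_compat  # the loop leaves the topmost entry
        history = {}
        container_config = v1_compat.get('container_config') or {}
        if container_config.get('Cmd'):
            history['created_by'] = container_config['Cmd'][0]
        if 'created' in v1_compat:
            history['created'] = v1_compat['created']
        histories.append(history)
    config = {'history': histories,
              'rootfs': {'diff_ids': diff_ids, 'type': 'layers'}}
    for key in ('architecture', 'config', 'container', 'container_config',
                'docker_version', 'os', 'created'):
        if key in v1_compatibility:
            config[key] = v1_compatibility[key]
    return json.dumps(config, sort_keys=True)
```

Note there is one history entry per layer; an empty entry is legal for a layer whose v1 compatibility blob carried neither Cmd nor created.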


@@ -0,0 +1,62 @@
# Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
x = sys.modules['containerregistry.tools']
from containerregistry.tools import patched_
setattr(x, 'patched', patched_)
from containerregistry.tools import platform_args_
setattr(x, 'platform_args', platform_args_)
from containerregistry.tools import logging_setup_
setattr(x, 'logging_setup', logging_setup_)
from containerregistry.tools import docker_appender_
setattr(x, 'docker_appender', docker_appender_)
from containerregistry.tools import docker_puller_
setattr(x, 'docker_puller', docker_puller_)
from containerregistry.tools import docker_pusher_
setattr(x, 'docker_pusher', docker_pusher_)
from containerregistry.tools import fast_puller_
setattr(x, 'fast_puller', fast_puller_)
from containerregistry.tools import fast_flatten_
setattr(x, 'fast_flatten', fast_flatten_)
from containerregistry.tools import fast_importer_
setattr(x, 'fast_importer', fast_importer_)
from containerregistry.tools import fast_pusher_
setattr(x, 'fast_pusher', fast_pusher_)
from containerregistry.tools import image_digester_
setattr(x, 'image_digester', image_digester_)
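
The setattr pattern above re-exports each trailing-underscore module under its public name. The same trick can be demonstrated with synthetic modules (examplepkg is a made-up name used purely for illustration):

```python
import sys
import types

# Build a fake package and a 'patched_'-style implementation module.
pkg = types.ModuleType('examplepkg')
sys.modules['examplepkg'] = pkg
impl = types.ModuleType('examplepkg.patched_')
impl.greeting = 'hello'
sys.modules['examplepkg.patched_'] = impl

# Re-export the implementation under the underscore-free public name.
setattr(pkg, 'patched', impl)
```

After this, `from examplepkg import patched` resolves to the `patched_` module object.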


@@ -0,0 +1,86 @@
# Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This package appends a tarball to an image in a Docker Registry."""
from __future__ import absolute_import
from __future__ import print_function
import argparse
import logging
from containerregistry.client import docker_creds
from containerregistry.client import docker_name
from containerregistry.client.v2_2 import append
from containerregistry.client.v2_2 import docker_image as v2_2_image
from containerregistry.client.v2_2 import docker_session
from containerregistry.tools import logging_setup
from containerregistry.tools import patched
from containerregistry.transport import transport_pool
import httplib2
parser = argparse.ArgumentParser(
description='Append tarballs to an image in a Docker Registry.')
parser.add_argument(
'--src-image',
action='store',
help='The name of the docker image to append to.',
required=True)
parser.add_argument('--tarball', action='store', help='The tarball to append.',
required=True)
parser.add_argument(
'--dst-image', action='store', help='The name of the new image.',
required=True)
_THREADS = 8
def main():
logging_setup.DefineCommandLineArgs(parser)
args = parser.parse_args()
logging_setup.Init(args=args)
transport = transport_pool.Http(httplib2.Http, size=_THREADS)
# This library can support push-by-digest, but the likelihood of a user
# correctly providing us with the digest without using this library
# directly is essentially nil.
src = docker_name.Tag(args.src_image)
dst = docker_name.Tag(args.dst_image)
# Resolve the appropriate credential to use based on the standard Docker
# client logic.
creds = docker_creds.DefaultKeychain.Resolve(src)
logging.info('Pulling v2.2 image from %r ...', src)
with v2_2_image.FromRegistry(src, creds, transport) as src_image:
with open(args.tarball, 'rb') as f:
new_img = append.Layer(src_image, f.read())
creds = docker_creds.DefaultKeychain.Resolve(dst)
with docker_session.Push(dst, creds, transport, threads=_THREADS,
mount=[src.as_repository()]) as session:
logging.info('Starting upload ...')
session.upload(new_img)
digest = new_img.digest()
print(('{name} was published with digest: {digest}'.format(
name=dst, digest=digest)))
if __name__ == '__main__':
with patched.Httplib2():
main()
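
Conceptually, appending a tarball layer to an image touches the config in two places: a new entry in rootfs.diff_ids (the SHA-256 of the uncompressed layer) and a new history entry. A minimal sketch of that bookkeeping, under the assumption that append.Layer does this plus manifest updates omitted here:

```python
import gzip
import hashlib
import json

def append_layer_to_config(config_json, layer_tar_gz):
    # Record the new layer's uncompressed digest (its diff_id) and a
    # history entry; the manifest's layer list would be updated too.
    config = json.loads(config_json)
    diff_id = 'sha256:' + hashlib.sha256(
        gzip.decompress(layer_tar_gz)).hexdigest()
    config['rootfs']['diff_ids'].append(diff_id)
    config['history'].append({'created_by': 'appended layer'})
    return json.dumps(config, sort_keys=True)
```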


@@ -0,0 +1,139 @@
# Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This package pulls images from a Docker Registry."""
import argparse
import logging
import sys
import tarfile
from containerregistry.client import docker_creds
from containerregistry.client import docker_name
from containerregistry.client.v2 import docker_image as v2_image
from containerregistry.client.v2_2 import docker_http
from containerregistry.client.v2_2 import docker_image as v2_2_image
from containerregistry.client.v2_2 import docker_image_list as image_list
from containerregistry.client.v2_2 import save
from containerregistry.client.v2_2 import v2_compat
from containerregistry.tools import logging_setup
from containerregistry.tools import patched
from containerregistry.tools import platform_args
from containerregistry.transport import retry
from containerregistry.transport import transport_pool
import httplib2
parser = argparse.ArgumentParser(
description='Pull images from a Docker Registry.')
parser.add_argument(
'--name',
action='store',
help=('The name of the docker image to pull and save. '
'Supports fully-qualified tag or digest references.'),
required=True)
parser.add_argument(
'--tarball', action='store', help='Where to save the image tarball.',
required=True)
platform_args.AddArguments(parser)
_DEFAULT_TAG = 'i-was-a-digest'
# Today save.tarball expects a tag, which is emitted into one or more files
# in the resulting tarball. If we don't translate the digest into a tag then
# the tarball format leaves us no good way to represent this information and
# folks are left having to tag the resulting image ID (yuck). As a datapoint
# `docker save -o /tmp/foo.tar bar@sha256:deadbeef` omits the v1 "repositories"
# file and emits `null` for the `RepoTags` key in "manifest.json". By doing
# this we leave a trivial breadcrumb of what the image was named (and the digest
# is recoverable once the image is loaded), which is a strictly better UX IMO.
# We do not need to worry about collisions by doing this here because this tool
# only packages a single image, so this is preferable to doing something similar
# in save.py itself.
def _make_tag_if_digest(
name):
if isinstance(name, docker_name.Tag):
return name
return docker_name.Tag('{repo}:{tag}'.format(
repo=str(name.as_repository()), tag=_DEFAULT_TAG))
def main():
logging_setup.DefineCommandLineArgs(parser)
args = parser.parse_args()
logging_setup.Init(args=args)
retry_factory = retry.Factory()
retry_factory = retry_factory.WithSourceTransportCallable(httplib2.Http)
transport = transport_pool.Http(retry_factory.Build, size=8)
if '@' in args.name:
name = docker_name.Digest(args.name)
else:
name = docker_name.Tag(args.name)
# OCI Image Manifest is compatible with Docker Image Manifest Version 2,
# Schema 2. We indicate support for both formats by passing both media types
# as 'Accept' headers.
#
# For reference:
# OCI: https://github.com/opencontainers/image-spec
# Docker: https://docs.docker.com/registry/spec/manifest-v2-2/
accept = docker_http.SUPPORTED_MANIFEST_MIMES
# Resolve the appropriate credential to use based on the standard Docker
# client logic.
try:
creds = docker_creds.DefaultKeychain.Resolve(name)
# pylint: disable=broad-except
except Exception as e:
logging.fatal('Error resolving credentials for %s: %s', name, e)
sys.exit(1)
try:
with tarfile.open(name=args.tarball, mode='w:') as tar:
logging.info('Pulling manifest list from %r ...', name)
with image_list.FromRegistry(name, creds, transport) as img_list:
if img_list.exists():
platform = platform_args.FromArgs(args)
# pytype: disable=wrong-arg-types
with img_list.resolve(platform) as default_child:
save.tarball(_make_tag_if_digest(name), default_child, tar)
return
# pytype: enable=wrong-arg-types
logging.info('Pulling v2.2 image from %r ...', name)
with v2_2_image.FromRegistry(name, creds, transport, accept) as v2_2_img:
if v2_2_img.exists():
save.tarball(_make_tag_if_digest(name), v2_2_img, tar)
return
logging.info('Pulling v2 image from %r ...', name)
with v2_image.FromRegistry(name, creds, transport) as v2_img:
with v2_compat.V22FromV2(v2_img) as v2_2_img:
save.tarball(_make_tag_if_digest(name), v2_2_img, tar)
return
# pylint: disable=broad-except
except Exception as e:
logging.fatal('Error pulling and saving image %s: %s', name, e)
sys.exit(1)
if __name__ == '__main__':
with patched.Httplib2():
main()
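
The digest-to-placeholder-tag translation described above is simple to sketch on plain reference strings (the real code operates on docker_name objects, not strings):

```python
DEFAULT_TAG = 'i-was-a-digest'

def make_tag_if_digest(name):
    # Tag references pass through; digest references are rewritten to a
    # placeholder tag so the saved tarball gets a RepoTags entry.
    if '@' not in name:
        return name
    repo, _ = name.split('@', 1)
    return '{repo}:{tag}'.format(repo=repo, tag=DEFAULT_TAG)
```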


@@ -0,0 +1,125 @@
# Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This package pushes images to a Docker Registry."""
from __future__ import absolute_import
from __future__ import print_function
import argparse
import logging
import sys
from containerregistry.client import docker_creds
from containerregistry.client import docker_name
from containerregistry.client.v2_2 import docker_image as v2_2_image
from containerregistry.client.v2_2 import docker_session
from containerregistry.client.v2_2 import oci_compat
from containerregistry.tools import logging_setup
from containerregistry.tools import patched
from containerregistry.transport import retry
from containerregistry.transport import transport_pool
import httplib2
parser = argparse.ArgumentParser(
description='Push images to a Docker Registry.')
parser.add_argument(
'--name', action='store', help='The name of the docker image to push.',
required=True)
parser.add_argument(
'--tarball', action='store', help='Where to load the image tarball.',
required=True)
parser.add_argument(
'--stamp-info-file',
action='append',
required=False,
help=('A list of files from which to read substitutions '
'to make in the provided --name, e.g. {BUILD_USER}'))
parser.add_argument(
'--oci', action='store_true', help='Push the image with an OCI Manifest.')
_THREADS = 8
def Tag(name, files):
"""Perform substitutions in the provided tag name."""
format_args = {}
for infofile in files or []:
with open(infofile) as info:
for line in info:
line = line.strip('\n')
key, value = line.split(' ', 1)
if key in format_args:
print(('WARNING: Duplicate value for key "%s": '
'using "%s"' % (key, value)))
format_args[key] = value
formatted_name = name.format(**format_args)
return docker_name.Tag(formatted_name)
def main():
logging_setup.DefineCommandLineArgs(parser)
args = parser.parse_args()
logging_setup.Init(args=args)
retry_factory = retry.Factory()
retry_factory = retry_factory.WithSourceTransportCallable(httplib2.Http)
transport = transport_pool.Http(retry_factory.Build, size=_THREADS)
# This library can support push-by-digest, but the likelihood of a user
# correctly providing us with the digest without using this library
# directly is essentially nil.
name = Tag(args.name, args.stamp_info_file)
logging.info('Reading v2.2 image from tarball %r', args.tarball)
with v2_2_image.FromTarball(args.tarball) as v2_2_img:
# Resolve the appropriate credential to use based on the standard Docker
# client logic.
try:
creds = docker_creds.DefaultKeychain.Resolve(name)
# pylint: disable=broad-except
except Exception as e:
logging.fatal('Error resolving credentials for %s: %s', name, e)
sys.exit(1)
try:
with docker_session.Push(
name, creds, transport, threads=_THREADS) as session:
logging.info('Starting upload ...')
if args.oci:
with oci_compat.OCIFromV22(v2_2_img) as oci_img:
session.upload(oci_img)
digest = oci_img.digest()
else:
session.upload(v2_2_img)
digest = v2_2_img.digest()
print(('{name} was published with digest: {digest}'.format(
name=name, digest=digest)))
# pylint: disable=broad-except
except Exception as e:
logging.fatal('Error publishing %s: %s', name, e)
sys.exit(1)
if __name__ == '__main__':
with patched.Httplib2():
main()
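
The --stamp-info-file substitution performed by Tag() boils down to str.format over 'KEY value' lines; a standalone sketch:

```python
def stamp_tag(name, stamp_lines):
    # Each stamp line is 'KEY value'; later entries win on duplicate
    # keys, matching the warn-and-overwrite behavior above.
    format_args = {}
    for line in stamp_lines:
        key, value = line.rstrip('\n').split(' ', 1)
        format_args[key] = value
    return name.format(**format_args)
```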


@@ -0,0 +1,104 @@
# Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This package flattens image metadata into a single tarball."""
from __future__ import absolute_import
from __future__ import print_function
import argparse
import logging
import tarfile
from containerregistry.client.v2_2 import docker_image as v2_2_image
from containerregistry.tools import logging_setup
from six.moves import zip # pylint: disable=redefined-builtin
parser = argparse.ArgumentParser(description='Flatten container images.')
# The name of this flag was chosen for compatibility with docker_pusher.py
parser.add_argument(
'--tarball', action='store', help='An optional legacy base image tarball.')
parser.add_argument(
'--config',
action='store',
help='The path to the file storing the image config.')
parser.add_argument(
'--digest',
action='append',
help='The list of layer digest filenames in order.')
parser.add_argument(
'--layer',
action='append',
help='The list of compressed layer filenames in order.')
parser.add_argument(
'--uncompressed_layer',
action='append',
help='The list of uncompressed layer filenames in order.')
parser.add_argument(
'--diff_id', action='append', help='The list of diff_ids in order.')
# Output arguments.
parser.add_argument(
'--filesystem',
action='store',
help='The name of where to write the filesystem tarball.')
parser.add_argument(
'--metadata',
action='store',
help=('The name of where to write the container '
'startup metadata.'))
def main():
logging_setup.DefineCommandLineArgs(parser)
args = parser.parse_args()
logging_setup.Init(args=args)
# If config is specified, use that. Otherwise, fall back on reading
# the config from the tarball.
if args.config:
logging.info('Reading config from %r', args.config)
with open(args.config, 'r') as reader:
config = reader.read()
elif args.tarball:
logging.info('Reading config from tarball %r', args.tarball)
with v2_2_image.FromTarball(args.tarball) as base:
config = base.config_file()
else:
raise ValueError('Either --config or --tarball must be specified.')
layers = list(zip(args.digest or [], args.layer or []))
uncompressed_layers = list(
zip(args.diff_id or [], args.uncompressed_layer or []))
logging.info('Loading v2.2 image From Disk ...')
with v2_2_image.FromDisk(
config_file=config,
layers=layers,
uncompressed_layers=uncompressed_layers,
legacy_base=args.tarball) as v2_2_img:
with tarfile.open(args.filesystem, 'w:', encoding='utf-8') as tar:
v2_2_image.extract(v2_2_img, tar)
with open(args.metadata, 'w') as f:
f.write(v2_2_img.config_file())
if __name__ == '__main__':
main()
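
The --digest/--layer flags are positional pairs joined with zip(); a sketch that adds the length check zip() would silently skip (the validation is an assumption, not the tool's actual behavior):

```python
def pair_layers(digests, layers):
    # zip() truncates to the shorter list on a mismatch, so validate
    # the lengths before pairing digest files with layer files.
    digests = digests or []
    layers = layers or []
    if len(digests) != len(layers):
        raise ValueError('--digest and --layer must come in pairs')
    return list(zip(digests, layers))
```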


@@ -0,0 +1,67 @@
# Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This package imports images from a 'docker save' tarball.
Unlike 'docker save', the format this tool uses is proprietary.
"""
import argparse
import logging
from containerregistry.client.v2_2 import docker_image as v2_2_image
from containerregistry.client.v2_2 import save
from containerregistry.tools import logging_setup
from containerregistry.tools import patched
parser = argparse.ArgumentParser(
description='Import images from a tarball into our faaaaaast format.')
parser.add_argument(
'--tarball',
action='store',
help=('The tarball containing the docker image to rewrite '
'into our fast on-disk format.'),
required=True)
parser.add_argument(
'--format',
action='store',
default='tar',
choices=['tar', 'tar.gz'],
help='The form in which to save layers.')
parser.add_argument(
'--directory', action='store', help='Where to save the image\'s files.',
required=True)
_THREADS = 32
def main():
logging_setup.DefineCommandLineArgs(parser)
args = parser.parse_args()
logging_setup.Init(args=args)
method = save.uncompressed
if args.format == 'tar.gz':
method = save.fast
logging.info('Reading v2.2 image from tarball %r', args.tarball)
with v2_2_image.FromTarball(args.tarball) as v2_2_img:
method(v2_2_img, args.directory, threads=_THREADS)
if __name__ == '__main__':
with patched.Httplib2():
main()


@@ -0,0 +1,146 @@
# Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This package pulls images from a Docker Registry.
Unlike docker_puller, the format this tool uses is proprietary.
"""
import argparse
import logging
import sys
from containerregistry.client import docker_creds
from containerregistry.client import docker_name
from containerregistry.client.v2 import docker_image as v2_image
from containerregistry.client.v2_2 import docker_http
from containerregistry.client.v2_2 import docker_image as v2_2_image
from containerregistry.client.v2_2 import docker_image_list as image_list
from containerregistry.client.v2_2 import save
from containerregistry.client.v2_2 import v2_compat
from containerregistry.tools import logging_setup
from containerregistry.tools import patched
from containerregistry.tools import platform_args
from containerregistry.transport import retry
from containerregistry.transport import transport_pool
import httplib2
parser = argparse.ArgumentParser(
description='Pull images from a Docker Registry, faaaaast.')
parser.add_argument(
'--name',
action='store',
help=('The name of the docker image to pull and save. '
'Supports fully-qualified tag or digest references.'),
required=True)
parser.add_argument(
'--directory', action='store', help='Where to save the image\'s files.',
required=True)
platform_args.AddArguments(parser)
parser.add_argument(
'--client-config-dir',
action='store',
help='The path to the directory where the client configuration files are '
'located. Overrides the value from DOCKER_CONFIG.')
parser.add_argument(
'--cache', action='store', help='Image\'s files cache directory.')
_THREADS = 8
def main():
logging_setup.DefineCommandLineArgs(parser)
args = parser.parse_args()
logging_setup.Init(args=args)
retry_factory = retry.Factory()
retry_factory = retry_factory.WithSourceTransportCallable(httplib2.Http)
transport = transport_pool.Http(retry_factory.Build, size=_THREADS)
if '@' in args.name:
name = docker_name.Digest(args.name)
else:
name = docker_name.Tag(args.name)
# If the user provided a client config directory, instruct the keychain
# resolver to use it to look for the docker client config
if args.client_config_dir is not None:
docker_creds.DefaultKeychain.setCustomConfigDir(args.client_config_dir)
# OCI Image Manifest is compatible with Docker Image Manifest Version 2,
# Schema 2. We indicate support for both formats by passing both media types
# as 'Accept' headers.
#
# For reference:
# OCI: https://github.com/opencontainers/image-spec
# Docker: https://docs.docker.com/registry/spec/manifest-v2-2/
accept = docker_http.SUPPORTED_MANIFEST_MIMES
# Resolve the appropriate credential to use based on the standard Docker
# client logic.
try:
creds = docker_creds.DefaultKeychain.Resolve(name)
# pylint: disable=broad-except
except Exception as e:
logging.fatal('Error resolving credentials for %s: %s', name, e)
sys.exit(1)
try:
logging.info('Pulling manifest list from %r ...', name)
with image_list.FromRegistry(name, creds, transport) as img_list:
if img_list.exists():
platform = platform_args.FromArgs(args)
# pytype: disable=wrong-arg-types
with img_list.resolve(platform) as default_child:
save.fast(
default_child,
args.directory,
threads=_THREADS,
cache_directory=args.cache)
return
# pytype: enable=wrong-arg-types
logging.info('Pulling v2.2 image from %r ...', name)
with v2_2_image.FromRegistry(name, creds, transport, accept) as v2_2_img:
if v2_2_img.exists():
save.fast(
v2_2_img,
args.directory,
threads=_THREADS,
cache_directory=args.cache)
return
logging.info('Pulling v2 image from %r ...', name)
with v2_image.FromRegistry(name, creds, transport) as v2_img:
with v2_compat.V22FromV2(v2_img) as v2_2_img:
save.fast(
v2_2_img,
args.directory,
threads=_THREADS,
cache_directory=args.cache)
return
# pylint: disable=broad-except
except Exception as e:
logging.fatal('Error pulling and saving image %s: %s', name, e)
sys.exit(1)
if __name__ == '__main__':
with patched.Httplib2():
main()


@@ -0,0 +1,199 @@
# Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This package pushes images to a Docker Registry.
The format this tool *expects* to deal with is (unlike docker_pusher)
proprietary; however, unlike {fast,docker}_puller, the signature of this tool
is compatible with docker_pusher.
"""
from __future__ import absolute_import
from __future__ import print_function
import argparse
import logging
import sys
from containerregistry.client import docker_creds
from containerregistry.client import docker_name
from containerregistry.client.v2_2 import docker_image as v2_2_image
from containerregistry.client.v2_2 import docker_session
from containerregistry.client.v2_2 import oci_compat
from containerregistry.tools import logging_setup
from containerregistry.tools import patched
from containerregistry.transport import retry
from containerregistry.transport import transport_pool
import httplib2
from six.moves import zip # pylint: disable=redefined-builtin
parser = argparse.ArgumentParser(
description='Push images to a Docker Registry, faaaaaast.')
parser.add_argument(
'--name', action='store', help='The name of the docker image to push.',
required=True)
# The name of this flag was chosen for compatibility with docker_pusher.py
parser.add_argument(
'--tarball', action='store', help='An optional legacy base image tarball.')
parser.add_argument(
'--config',
action='store',
help='The path to the file storing the image config.')
parser.add_argument(
'--manifest',
action='store',
required=False,
help='The path to the file storing the image manifest.')
parser.add_argument(
'--digest',
action='append',
help='The list of layer digest filenames in order.')
parser.add_argument(
'--layer', action='append', help='The list of layer filenames in order.')
parser.add_argument(
'--stamp-info-file',
action='append',
required=False,
help=('A list of files from which to read substitutions '
'to make in the provided --name, e.g. {BUILD_USER}'))
parser.add_argument(
'--oci', action='store_true', help='Push the image with an OCI Manifest.')
parser.add_argument(
'--client-config-dir',
action='store',
help='The path to the directory where the client configuration files are '
    'located. Overrides the value from DOCKER_CONFIG')
_THREADS = 8
def Tag(name, files):
"""Perform substitutions in the provided tag name."""
format_args = {}
for infofile in files or []:
with open(infofile) as info:
for line in info:
line = line.strip('\n')
key, value = line.split(' ', 1)
if key in format_args:
print(('WARNING: Duplicate value for key "%s": '
'using "%s"' % (key, value)))
format_args[key] = value
formatted_name = name.format(**format_args)
if files:
print(('{name} was resolved to {fname}'.format(
name=name, fname=formatted_name)))
return docker_name.Tag(formatted_name)
def main():
logging_setup.DefineCommandLineArgs(parser)
args = parser.parse_args()
logging_setup.Init(args=args)
# This library can support push-by-digest, but the likelihood of a user
# correctly providing us with the digest without using this library
# directly is essentially nil.
name = Tag(args.name, args.stamp_info_file)
if not args.config and (args.layer or args.digest):
logging.fatal(
'Using --layer or --digest requires --config to be specified.')
sys.exit(1)
if not args.config and not args.tarball:
logging.fatal('Either --config or --tarball must be specified.')
sys.exit(1)
# If config is specified, use that. Otherwise, fallback on reading
# the config from the tarball.
config = args.config
manifest = args.manifest
if args.config:
logging.info('Reading config from %r', args.config)
with open(args.config, 'r') as reader:
config = reader.read()
elif args.tarball:
logging.info('Reading config from tarball %r', args.tarball)
with v2_2_image.FromTarball(args.tarball) as base:
config = base.config_file()
if args.manifest:
with open(args.manifest, 'r') as reader:
manifest = reader.read()
if len(args.digest or []) != len(args.layer or []):
logging.fatal('--digest and --layer must have matching lengths.')
sys.exit(1)
# If the user provided a client config directory, instruct the keychain
# resolver to use it to look for the docker client config
if args.client_config_dir is not None:
docker_creds.DefaultKeychain.setCustomConfigDir(args.client_config_dir)
retry_factory = retry.Factory()
retry_factory = retry_factory.WithSourceTransportCallable(httplib2.Http)
transport = transport_pool.Http(retry_factory.Build, size=_THREADS)
logging.info('Loading v2.2 image from disk ...')
with v2_2_image.FromDisk(
config,
list(zip(args.digest or [], args.layer or [])),
legacy_base=args.tarball,
foreign_layers_manifest=manifest) as v2_2_img:
# Resolve the appropriate credential to use based on the standard Docker
# client logic.
try:
creds = docker_creds.DefaultKeychain.Resolve(name)
# pylint: disable=broad-except
except Exception as e:
logging.fatal('Error resolving credentials for %s: %s', name, e)
sys.exit(1)
try:
with docker_session.Push(
name, creds, transport, threads=_THREADS) as session:
logging.info('Starting upload ...')
if args.oci:
with oci_compat.OCIFromV22(v2_2_img) as oci_img:
session.upload(oci_img)
digest = oci_img.digest()
else:
session.upload(v2_2_img)
digest = v2_2_img.digest()
print(('{name} was published with digest: {digest}'.format(
name=name, digest=digest)))
# pylint: disable=broad-except
except Exception as e:
logging.fatal('Error publishing %s: %s', name, e)
sys.exit(1)
if __name__ == '__main__':
with patched.Httplib2():
main()
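The `Tag` helper above drives the `--stamp-info-file` substitution. The snippet below is a standalone sketch of that core logic (the stamp keys shown are illustrative):

```python
# Sketch of the stamp substitution Tag() performs: each stamp-info line
# has the form "KEY value", and "{KEY}" placeholders in the image name
# are filled in via str.format.

def substitute(name, stamp_lines):
    format_args = {}
    for line in stamp_lines:
        key, value = line.strip('\n').split(' ', 1)
        format_args[key] = value
    return name.format(**format_args)

print(substitute('gcr.io/my-project/app:{BUILD_USER}',
                 ['BUILD_USER alice\n', 'BUILD_EMBED_LABEL v2\n']))
# gcr.io/my-project/app:alice
```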


@@ -0,0 +1,126 @@
# Copyright 2018 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This package calculates the digest of an image.
The format this tool *expects* to deal with is proprietary.
Image digests are not stable across gzip implementations/configurations,
so this tool is only expected to be self-consistent.
"""
from __future__ import absolute_import
from __future__ import print_function
import argparse
import logging
import sys
from containerregistry.client.v2_2 import docker_image as v2_2_image
from containerregistry.client.v2_2 import oci_compat
from containerregistry.tools import logging_setup
from six.moves import zip # pylint: disable=redefined-builtin
parser = argparse.ArgumentParser(
description='Calculate digest for a container image.')
parser.add_argument(
'--tarball', action='store', help='An optional legacy base image tarball.')
parser.add_argument(
'--output-digest',
required=True,
action='store',
help='Filename to store digest in.')
parser.add_argument(
'--config',
action='store',
help='The path to the file storing the image config.')
parser.add_argument(
'--manifest',
action='store',
help='The path to the file storing the image manifest.')
parser.add_argument(
'--digest',
action='append',
help='The list of layer digest filenames in order.')
parser.add_argument(
'--layer', action='append', help='The list of layer filenames in order.')
parser.add_argument(
'--oci', action='store_true', help='Image has an OCI Manifest.')
def main():
logging_setup.DefineCommandLineArgs(parser)
args = parser.parse_args()
logging_setup.Init(args=args)
if not args.config and (args.layer or args.digest):
logging.fatal(
'Using --layer or --digest requires --config to be specified.')
sys.exit(1)
if not args.config and not args.tarball:
logging.fatal('Either --config or --tarball must be specified.')
sys.exit(1)
# If config is specified, use that. Otherwise, fallback on reading
# the config from the tarball.
config = args.config
manifest = args.manifest
if args.config:
logging.info('Reading config from %r', args.config)
with open(args.config, 'r') as reader:
config = reader.read()
elif args.tarball:
logging.info('Reading config from tarball %r', args.tarball)
with v2_2_image.FromTarball(args.tarball) as base:
config = base.config_file()
if args.manifest:
with open(args.manifest, 'r') as reader:
manifest = reader.read()
if len(args.digest or []) != len(args.layer or []):
logging.fatal('--digest and --layer must have matching lengths.')
sys.exit(1)
logging.info('Loading v2.2 image from disk ...')
with v2_2_image.FromDisk(
config,
list(zip(args.digest or [], args.layer or [])),
legacy_base=args.tarball,
foreign_layers_manifest=manifest) as v2_2_img:
try:
if args.oci:
with oci_compat.OCIFromV22(v2_2_img) as oci_img:
digest = oci_img.digest()
else:
digest = v2_2_img.digest()
with open(args.output_digest, 'w+') as digest_file:
digest_file.write(digest)
# pylint: disable=broad-except
except Exception as e:
logging.fatal('Error getting digest: %s', e)
sys.exit(1)
if __name__ == '__main__':
main()


@@ -0,0 +1,61 @@
# Copyright 2017 Stripe Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This package sets up the Python logging system."""
import logging
import sys
# Based on glog.
FORMAT = ('%(shortlevel)s%(asctime)s.%(time_millis)06d %(process_str)s '
'%(filename)s:%(lineno)d] %(message)s')
# Based on glog.
TIMESTAMP_FORMAT = '%m%d %H:%M:%S'
def DefineCommandLineArgs(argparser):
argparser.add_argument(
'--stderrthreshold',
action='store',
help=('Write log events at or above this level to stderr.'))
def Init(args=None):
handler = logging.StreamHandler(stream=sys.stderr)
handler.setFormatter(Formatter())
logging.root.addHandler(handler)
if args is not None:
if args.stderrthreshold is not None:
logging.root.setLevel(args.stderrthreshold)
class Formatter(logging.Formatter):
def __init__(self):
super(Formatter, self).__init__(fmt=FORMAT, datefmt=TIMESTAMP_FORMAT)
def format(self, record):
    # Injecting fields into the record seems to be fine; it's how the upstream
# logging.Formatter adds timestamps and such.
if record.levelname == 'CRITICAL':
record.shortlevel = 'F' # FATAL
else:
record.shortlevel = record.levelname[0]
record.time_millis = (record.created - int(record.created)) * 1000000
if record.process is None:
record.process_str = '???????'
else:
record.process_str = '% 7d' % (record.process,)
return super(Formatter, self).format(record)
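The `Formatter` above works by injecting synthetic fields into each `LogRecord` before delegating to `logging.Formatter`. A minimal, self-contained sketch of the same technique:

```python
import logging

class ShortLevelFormatter(logging.Formatter):
    """Injects a one-letter glog-style level code into each record,
    using the same field-injection technique as the Formatter above."""

    def format(self, record):
        # CRITICAL maps to 'F' (FATAL); other levels use their first letter.
        record.shortlevel = ('F' if record.levelname == 'CRITICAL'
                             else record.levelname[0])
        return super(ShortLevelFormatter, self).format(record)

fmt = ShortLevelFormatter(fmt='%(shortlevel)s %(message)s')
record = logging.LogRecord('demo', logging.WARNING, __file__, 1,
                           'disk almost full', None, None)
print(fmt.format(record))  # W disk almost full
```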


@@ -0,0 +1,55 @@
# Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Context managers for patching libraries for use in PAR files."""
import os
import pkgutil
import shutil
import tempfile
import httplib2
def _monkey_patch_httplib2(extract_dir):
"""Patch things so that httplib2 works properly in a PAR.
Manually extract certificates to file to make OpenSSL happy and avoid error:
ssl.SSLError: [Errno 185090050] _ssl.c:344: error:0B084002:x509 ...
Args:
extract_dir: the directory into which we extract the necessary files.
"""
if os.path.isfile(httplib2.CA_CERTS):
# Not inside of a PAR file, so don't bother.
return
cacerts_contents = pkgutil.get_data('httplib2', 'cacerts.txt')
cacerts_filename = os.path.join(extract_dir, 'cacerts.txt')
with open(cacerts_filename, 'wb') as f:
f.write(cacerts_contents)
httplib2.CA_CERTS = cacerts_filename
class Httplib2(object):
def __init__(self):
self._tmpdir = tempfile.mkdtemp()
# __enter__ and __exit__ allow use as a context manager.
def __enter__(self):
_monkey_patch_httplib2(self._tmpdir)
return self
def __exit__(self, unused_type, unused_value, unused_traceback):
shutil.rmtree(self._tmpdir)
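`Httplib2` above is an instance of an extract-then-clean-up context manager. Below is a generic sketch of that pattern, with a hypothetical `patch_fn` callback standing in for `_monkey_patch_httplib2`:

```python
import os
import shutil
import tempfile

class TempDirPatch(object):
    """Create a scratch directory on entry, apply a patch callback against
    it, and remove the directory on exit (mirroring Httplib2 above)."""

    def __init__(self, patch_fn):
        self._patch_fn = patch_fn  # callable(extract_dir) applying the patch
        self._tmpdir = None

    def __enter__(self):
        self._tmpdir = tempfile.mkdtemp()
        self._patch_fn(self._tmpdir)
        return self

    def __exit__(self, unused_type, unused_value, unused_traceback):
        shutil.rmtree(self._tmpdir)

seen = []
with TempDirPatch(seen.append):
    print(os.path.isdir(seen[0]))  # True
print(os.path.exists(seen[0]))  # False
```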


@@ -0,0 +1,77 @@
# Copyright 2018 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This package defines a few functions to add and parse platforms arguments.
These arguments are used to select the image to pull when given a Docker
manifest list.
"""
import argparse
from containerregistry.client.v2_2 import docker_image_list
def AddArguments(parser):
"""Adds command-line arguments for platform fields.
Args:
parser: argparser.ArgumentParser object.
"""
parser.add_argument(
'--os',
help=('For multi-platform manifest lists, specifies the operating '
'system.'))
parser.add_argument(
'--os-version',
help=('For multi-platform manifest lists, specifies the operating system '
'version.'))
parser.add_argument(
'--os-features',
nargs='*',
help=('For multi-platform manifest lists, specifies operating system '
'features.'))
parser.add_argument(
'--architecture',
help=('For multi-platform manifest lists, specifies the CPU '
'architecture.'))
parser.add_argument(
'--variant',
help='For multi-platform manifest lists, specifies the CPU variant.')
parser.add_argument(
'--features',
nargs='*',
help='For multi-platform manifest lists, specifies CPU features.')
def FromArgs(args):
"""Populates a docker_image_list.Platform object from the provided args."""
p = {}
def _SetField(k, v):
if v is not None:
p[k] = v
_SetField('os', args.os)
_SetField('os.version', args.os_version)
_SetField('os.features', args.os_features)
_SetField('architecture', args.architecture)
_SetField('variant', args.variant)
_SetField('features', args.features)
return docker_image_list.Platform(p)
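`FromArgs` builds a sparse platform dict, skipping fields the user did not set. A standalone sketch of that skip-`None` merge (field names are simplified here; the real code uses dotted keys such as `os.version`):

```python
def sparse_platform(**fields):
    """Mirror the _SetField helper in FromArgs above: keep only the
    platform fields that were actually provided."""
    return {k: v for k, v in fields.items() if v is not None}

platform = sparse_platform(os='linux', architecture='arm64',
                           variant='v8', os_version=None)
print(platform)  # {'os': 'linux', 'architecture': 'arm64', 'variant': 'v8'}
```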


@@ -0,0 +1,18 @@
# Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
x = sys.modules['containerregistry.transform']


@@ -0,0 +1,22 @@
# Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
x = sys.modules['containerregistry.transform.v1']
from containerregistry.transform.v1 import metadata_
setattr(x, 'metadata', metadata_)


@@ -0,0 +1,200 @@
# Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This package manipulates Docker image metadata."""
from __future__ import absolute_import
from __future__ import print_function
from collections import namedtuple
import copy
import os
import six
_OverridesT = namedtuple('OverridesT', [
'name', 'parent', 'size', 'entrypoint', 'cmd', 'env', 'labels', 'ports',
'volumes', 'workdir', 'user'
])
class Overrides(_OverridesT):
"""Docker image layer metadata options."""
def __new__(cls,
name = None,
parent = None,
size = None,
entrypoint = None,
cmd = None,
user = None,
labels = None,
env = None,
ports = None,
volumes = None,
workdir = None):
"""Constructor."""
return super(Overrides, cls).__new__(
cls,
name=name,
parent=parent,
size=size,
entrypoint=entrypoint,
cmd=cmd,
user=user,
labels=labels,
env=env,
ports=ports,
volumes=volumes,
workdir=workdir)
# NOT THREADSAFE
def _Resolve(value, environment):
"""Resolves environment variables embedded in the given value."""
outer_env = os.environ
try:
os.environ = environment
return os.path.expandvars(value)
finally:
os.environ = outer_env
# TODO(user): Use a typing.Generic?
def _DeepCopySkipNull(data):
"""Do a deep copy, skipping null entry."""
if type(data) == type(dict()): # pylint: disable=unidiomatic-typecheck
return dict((_DeepCopySkipNull(k), _DeepCopySkipNull(v))
for k, v in six.iteritems(data)
if v is not None)
return copy.deepcopy(data)
def _KeyValueToDict(pair):
"""Converts an iterable object of key=value pairs to dictionary."""
d = dict()
for kv in pair:
(k, v) = kv.split('=', 1)
d[k] = v
return d
def _DictToKeyValue(d):
return ['%s=%s' % (k, d[k]) for k in sorted(d.keys())]
def Override(data,
options,
docker_version = '1.5.0',
architecture = 'amd64',
operating_system = 'linux'):
"""Rewrite and return a copy of the input data according to options.
Args:
data: The dict of Docker image layer metadata we're copying and rewriting.
    options: The changes this layer makes to the overall image's metadata,
      which first appear in this layer's version of the metadata.
    docker_version: The version of docker to write in the metadata (default: 1.5.0)
architecture: The architecture to write in the metadata (default: amd64)
operating_system: The os to write in the metadata (default: linux)
Returns:
A deep copy of data, which has been updated to reflect the metadata
additions of this layer.
Raises:
Exception: a required option was missing.
"""
output = _DeepCopySkipNull(data)
if not options.name:
raise Exception('Missing required option: name')
output['id'] = options.name
if options.parent:
output['parent'] = options.parent
elif data:
raise Exception(
'Expected empty input object when parent is omitted, got: %s' % data)
if options.size:
output['Size'] = options.size
elif 'Size' in output:
del output['Size']
if 'config' not in output:
output['config'] = {}
if options.entrypoint:
output['config']['Entrypoint'] = options.entrypoint
if options.cmd:
output['config']['Cmd'] = options.cmd
if options.user:
output['config']['User'] = options.user
output['docker_version'] = docker_version
output['architecture'] = architecture
output['os'] = operating_system
if options.env:
# Build a dictionary of existing environment variables (used by _Resolve).
environ_dict = _KeyValueToDict(output['config'].get('Env', []))
# Merge in new environment variables, resolving references.
for k, v in six.iteritems(options.env):
# _Resolve handles scenarios like "PATH=$PATH:...".
environ_dict[k] = _Resolve(v, environ_dict)
output['config']['Env'] = _DictToKeyValue(environ_dict)
if options.labels:
label_dict = _KeyValueToDict(output['config'].get('Label', []))
for k, v in six.iteritems(options.labels):
label_dict[k] = v
output['config']['Label'] = _DictToKeyValue(label_dict)
if options.ports:
if 'ExposedPorts' not in output['config']:
output['config']['ExposedPorts'] = {}
for p in options.ports:
if '/' in p:
# The port spec has the form 80/tcp, 1234/udp
# so we simply use it as the key.
output['config']['ExposedPorts'][p] = {}
else:
# Assume tcp for naked ports.
output['config']['ExposedPorts'][p + '/tcp'] = {}
if options.volumes:
if 'Volumes' not in output['config']:
output['config']['Volumes'] = {}
for p in options.volumes:
output['config']['Volumes'][p] = {}
if options.workdir:
output['config']['WorkingDir'] = options.workdir
# TODO(user): comment, created, container_config
# container_config contains information about the container
# that was used to create this layer, so it shouldn't
# propagate from the parent to child. This is where we would
  # annotate information that can be extracted by tools like Blubber
# or Quay.io's UI to gain insight into the source that generated
# the layer. A Dockerfile might produce something like:
# # (nop) /bin/sh -c "apt-get update"
# We might consider encoding the fully-qualified bazel build target:
# //tools/build_defs/docker:image
# However, we should be sensitive to leaking data through this field.
if 'container_config' in output:
del output['container_config']
return output
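`_Resolve` above makes `os.path.expandvars` see only the image's environment by temporarily swapping `os.environ`. A self-contained sketch of the trick:

```python
import os

def resolve(value, environment):
    """Mirror _Resolve above: expand $VAR references in `value` against
    `environment` by temporarily swapping os.environ (not threadsafe)."""
    outer_env = os.environ
    try:
        os.environ = environment
        return os.path.expandvars(value)
    finally:
        os.environ = outer_env

# "PATH=$PATH:..." style merging, as done in the env handling above:
print(resolve('$PATH:/opt/bin', {'PATH': '/usr/local/bin'}))
# /usr/local/bin:/opt/bin
```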


@@ -0,0 +1,22 @@
# Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
x = sys.modules['containerregistry.transform.v2_2']
from containerregistry.transform.v2_2 import metadata_
setattr(x, 'metadata', metadata_)


@@ -0,0 +1,231 @@
# Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This package manipulates v2.2 image configuration metadata."""
from __future__ import absolute_import
from __future__ import print_function
from collections import namedtuple
import copy
import hashlib
import os
import six
_OverridesT = namedtuple('OverridesT', [
'layers', 'entrypoint', 'cmd', 'env', 'labels', 'ports', 'volumes',
'workdir', 'user', 'author', 'created_by', 'creation_time'
])
# Unix epoch 0, representable in 32 bits.
_DEFAULT_TIMESTAMP = '1970-01-01T00:00:00Z'
_EMPTY_LAYER = hashlib.sha256(b'').hexdigest()
class Overrides(_OverridesT):
"""Docker image configuration options."""
def __new__(cls,
layers = None,
entrypoint = None,
cmd = None,
user = None,
labels = None,
env = None,
ports = None,
volumes = None,
workdir = None,
author = None,
created_by = None,
creation_time = None):
"""Constructor."""
return super(Overrides, cls).__new__(
cls,
layers=layers,
entrypoint=entrypoint,
cmd=cmd,
user=user,
labels=labels,
env=env,
ports=ports,
volumes=volumes,
workdir=workdir,
author=author,
created_by=created_by,
creation_time=creation_time)
def Override(self,
layers = None,
entrypoint = None,
cmd = None,
user = None,
labels = None,
env = None,
ports = None,
volumes = None,
workdir = None,
author = None,
created_by = None,
creation_time = None):
return Overrides(
layers=layers or self.layers,
entrypoint=entrypoint or self.entrypoint,
cmd=cmd or self.cmd,
user=user or self.user,
labels=labels or self.labels,
env=env or self.env,
ports=ports or self.ports,
volumes=volumes or self.volumes,
workdir=workdir or self.workdir,
author=author or self.author,
created_by=created_by or self.created_by,
creation_time=creation_time or self.creation_time) # pytype: disable=bad-return-type # b/228241343
# NOT THREADSAFE
def _Resolve(value, environment):
"""Resolves environment variables embedded in the given value."""
outer_env = os.environ
try:
os.environ = environment
return os.path.expandvars(value)
finally:
os.environ = outer_env
# TODO(user): Use a typing.Generic?
def _DeepCopySkipNull(data):
"""Do a deep copy, skipping null entry."""
if isinstance(data, dict):
return dict((_DeepCopySkipNull(k), _DeepCopySkipNull(v))
for k, v in six.iteritems(data)
if v is not None)
return copy.deepcopy(data)
def _KeyValueToDict(pair):
"""Converts an iterable object of key=value pairs to dictionary."""
d = dict()
for kv in pair:
(k, v) = kv.split('=', 1)
d[k] = v
return d
def _DictToKeyValue(d):
return ['%s=%s' % (k, d[k]) for k in sorted(d.keys())]
def Override(data,
options,
architecture = 'amd64',
operating_system = 'linux'):
"""Create an image config possibly based on an existing one.
Args:
data: A dict of Docker image config to base on top of.
options: Options specific to this image which will be merged with any
existing data
architecture: The architecture to write in the metadata (default: amd64)
operating_system: The os to write in the metadata (default: linux)
Returns:
Image config for the new image
"""
defaults = _DeepCopySkipNull(data)
  # don't propagate non-spec keys
output = dict()
output['created'] = options.creation_time or _DEFAULT_TIMESTAMP
output['author'] = options.author or 'Unknown'
output['architecture'] = architecture
output['os'] = operating_system
if 'os.version' in defaults:
output['os.version'] = defaults['os.version']
output['config'] = defaults.get('config', {})
# pytype: disable=attribute-error,unsupported-operands
if options.entrypoint:
output['config']['Entrypoint'] = options.entrypoint
if options.cmd:
output['config']['Cmd'] = options.cmd
if options.user:
output['config']['User'] = options.user
if options.env:
# Build a dictionary of existing environment variables (used by _Resolve).
environ_dict = _KeyValueToDict(output['config'].get('Env', []))
# Merge in new environment variables, resolving references.
for k, v in six.iteritems(options.env):
# Resolve handles scenarios like "PATH=$PATH:...".
environ_dict[k] = _Resolve(v, environ_dict)
output['config']['Env'] = _DictToKeyValue(environ_dict)
# TODO(user) Label is currently docker specific
if options.labels:
label_dict = output['config'].get('Labels', {})
for k, v in six.iteritems(options.labels):
label_dict[k] = v
output['config']['Labels'] = label_dict
if options.ports:
if 'ExposedPorts' not in output['config']:
output['config']['ExposedPorts'] = {}
for p in options.ports:
if '/' in p:
# The port spec has the form 80/tcp, 1234/udp
# so we simply use it as the key.
output['config']['ExposedPorts'][p] = {}
else:
# Assume tcp
output['config']['ExposedPorts'][p + '/tcp'] = {}
if options.volumes:
if 'Volumes' not in output['config']:
output['config']['Volumes'] = {}
for p in options.volumes:
output['config']['Volumes'][p] = {}
if options.workdir:
output['config']['WorkingDir'] = options.workdir
# pytype: enable=attribute-error,unsupported-operands
# diff_ids are ordered from bottom-most to top-most
diff_ids = defaults.get('rootfs', {}).get('diff_ids', [])
  layers = options.layers or []
  diff_ids += ['sha256:%s' % l for l in layers if l != _EMPTY_LAYER]
output['rootfs'] = {
'type': 'layers',
'diff_ids': diff_ids,
}
  # Each layer gets a history entry; entries for empty layers are flagged
  # with 'empty_layer' and contribute no diff_id.
history = defaults.get('history', [])
for l in layers:
cfg = {
'created': options.creation_time or _DEFAULT_TIMESTAMP,
'created_by': options.created_by or 'Unknown',
'author': options.author or 'Unknown'
}
if l == _EMPTY_LAYER:
cfg['empty_layer'] = True
history.insert(0, cfg)
output['history'] = history
return output


@@ -0,0 +1,30 @@
# Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
x = sys.modules['containerregistry.transport']
from containerregistry.transport import nested_
setattr(x, 'nested', nested_)
from containerregistry.transport import retry_
setattr(x, 'retry', retry_)
from containerregistry.transport import transport_pool_
setattr(x, 'transport_pool', transport_pool_)


@@ -0,0 +1,44 @@
# Copyright 2018 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""An httplib2.Http extending and composing an inner httplib2.Http transport.
"""
import httplib2
class NestedTransport(httplib2.Http):
"""Extends and composes an inner httplib2.Http transport."""
def __init__(self, source_transport):
self.source_transport = source_transport
def __getstate__(self):
raise NotImplementedError()
def __setstate__(self, state):
    # Don't want to bother reflectively instantiating the source_transport.
    # Don't serialize your transports.
raise NotImplementedError()
def add_credentials(self, *args, **kwargs):
self.source_transport.add_credentials(*args, **kwargs)
def add_certificate(self, *args, **kwargs):
self.source_transport.add_certificate(*args, **kwargs)
def clear_credentials(self):
self.source_transport.clear_credentials()
def request(self, *args, **kwargs):
return self.source_transport.request(*args, **kwargs)


@@ -0,0 +1,114 @@
# Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This package facilitates retries for HTTP/REST requests to the registry."""
import logging
import time
from containerregistry.transport import nested
import httplib2
import six.moves.http_client
DEFAULT_SOURCE_TRANSPORT_CALLABLE = httplib2.Http
DEFAULT_MAX_RETRIES = 2
DEFAULT_BACKOFF_FACTOR = 0.5
if six.PY3:
import builtins # pylint: disable=g-import-not-at-top,import-error
BrokenPipeError = builtins.BrokenPipeError
RETRYABLE_EXCEPTION_TYPES = [
BrokenPipeError,
six.moves.http_client.IncompleteRead,
six.moves.http_client.ResponseNotReady
]
else:
RETRYABLE_EXCEPTION_TYPES = [
six.moves.http_client.IncompleteRead,
six.moves.http_client.ResponseNotReady
]
def ShouldRetry(err):
for exception_type in RETRYABLE_EXCEPTION_TYPES:
if isinstance(err, exception_type):
return True
return False
class Factory(object):
"""A factory for creating RetryTransports."""
def __init__(self):
self.kwargs = {}
self.source_transport_callable = DEFAULT_SOURCE_TRANSPORT_CALLABLE
def WithSourceTransportCallable(self, source_transport_callable):
self.source_transport_callable = source_transport_callable
return self
def WithMaxRetries(self, max_retries):
self.kwargs['max_retries'] = max_retries
return self
def WithBackoffFactor(self, backoff_factor):
self.kwargs['backoff_factor'] = backoff_factor
return self
def WithShouldRetryFunction(self, should_retry_fn):
self.kwargs['should_retry_fn'] = should_retry_fn
return self
def Build(self):
"""Returns a RetryTransport constructed with the given values."""
return RetryTransport(self.source_transport_callable(), **self.kwargs)
class RetryTransport(nested.NestedTransport):
"""A wrapper for the given transport which automatically retries errors."""
def __init__(self,
source_transport,
max_retries = DEFAULT_MAX_RETRIES,
backoff_factor = DEFAULT_BACKOFF_FACTOR,
should_retry_fn = ShouldRetry):
super(RetryTransport, self).__init__(source_transport)
self._max_retries = max_retries
self._backoff_factor = backoff_factor
self._should_retry = should_retry_fn
def request(self, *args, **kwargs):
"""Does the request, exponentially backing off and retrying as appropriate.
Backoff is backoff_factor * (2 ^ (retry #)) seconds.
Args:
*args: The sequence of positional arguments to forward to the source
transport.
**kwargs: The keyword arguments to forward to the source transport.
Returns:
The response of the HTTP request, and its contents.
"""
retries = 0
while True:
try:
return self.source_transport.request(*args, **kwargs)
except Exception as err: # pylint: disable=broad-except
if retries >= self._max_retries or not self._should_retry(err):
raise
logging.error('Retrying after exception %s.', err)
retries += 1
time.sleep(self._backoff_factor * (2**retries))
continue


@@ -0,0 +1,64 @@
# Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""A threadsafe pool of httplib2.Http handlers."""
from __future__ import absolute_import
from __future__ import print_function
import threading
import httplib2
from six.moves import range # pylint: disable=redefined-builtin
class Http(httplib2.Http):
"""A threadsafe pool of httplib2.Http transports."""
def __init__(self, transport_factory, size=2):
self._condition = threading.Condition(threading.Lock())
self._transports = [transport_factory() for _ in range(size)]
def _get_transport(self):
with self._condition:
while True:
if self._transports:
return self._transports.pop()
# Nothing is available, wait until it is.
# This releases the lock until a notification occurs.
self._condition.wait()
def _return_transport(self, transport):
with self._condition:
self._transports.append(transport)
# We returned an item, notify a waiting thread.
self._condition.notify(n=1)
def request(self, *args, **kwargs):
"""This awaits a transport and delegates the request call.
Args:
*args: arguments to request.
**kwargs: named arguments to request.
Returns:
tuple of response and content.
"""
transport = self._get_transport()
try:
return transport.request(*args, **kwargs)
finally:
self._return_transport(transport)
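The same checkout/check-in pattern can be sketched standalone, without httplib2 (`TransportPool`, `acquire`, `release`, and `worker` are illustrative names, not this module's API). Each caller pops a transport under the condition's lock, blocks in `wait()` when the pool is empty, and `notify(1)` wakes exactly one waiter when a transport is returned:

```python
import threading

class TransportPool:
  """A fixed-size, threadsafe pool guarded by a Condition."""

  def __init__(self, transport_factory, size=2):
    self._condition = threading.Condition()
    self._transports = [transport_factory() for _ in range(size)]

  def acquire(self):
    with self._condition:
      while not self._transports:
        self._condition.wait()  # Releases the lock until notified.
      return self._transports.pop()

  def release(self, transport):
    with self._condition:
      self._transports.append(transport)
      self._condition.notify(1)  # Wake one waiting thread.

def worker(pool, results):
  """Repeatedly checks a transport out and back in, recording each use."""
  for _ in range(100):
    t = pool.acquire()
    try:
      results.append(t)
    finally:
      pool.release(t)
```

Because `release` always runs in a `finally` block, the pool returns to its original size even when many more threads than transports contend for it.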