Commit 8efca395 authored by hinoka@google.com's avatar hinoka@google.com

Added gsutil/gslib to depot_tools/third_party

This is needed for https://chromiumcodereview.appspot.com/12042069/,
which uses gsutil to download objects from Google Storage based on SHA1 sums.

Continuation of: https://chromiumcodereview.appspot.com/12317103/
Rietveld didn't like a giant CL with all of gsutil (it kept crashing on upload),
so the CL is being split into three parts.

Related:
https://chromiumcodereview.appspot.com/12755026 (gsutil/boto)
https://codereview.chromium.org/12685009/ (gsutil/)

BUG=

Review URL: https://codereview.chromium.org/12685010

git-svn-id: svn://svn.chromium.org/chrome/trunk/tools/depot_tools@188842 0039d316-1c4b-4281-b951-d872f2087c98
parent 50f1d2a1
This directory contains library code used by gsutil. Users are cautioned not
to write programs that call the internal interfaces defined in here; these
interfaces were defined only for use by gsutil, and are subject to change
without notice. Moreover, Google supports this library only when used by
gsutil, not when the library interfaces are called directly by other programs.
# Copyright 2010 Google Inc. All Rights Reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish, dis-
# tribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the fol-
# lowing conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
"""Package marker file."""
# Copyright 2012 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Package marker file."""
# Copyright 2012 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from gslib.help_provider import HELP_NAME
from gslib.help_provider import HELP_NAME_ALIASES
from gslib.help_provider import HELP_ONE_LINE_SUMMARY
from gslib.help_provider import HelpProvider
from gslib.help_provider import HELP_TEXT
from gslib.help_provider import HelpType
from gslib.help_provider import HELP_TYPE
_detailed_help_text = ("""
<B>OVERVIEW</B>
Access Control Lists (ACLs) allow you to control who can read and write
your data, and who can read and write the ACLs themselves.
If no ACL is specified at the time an object is uploaded (e.g., via the gsutil cp
-a option), the object will be created with the default object ACL set on the
bucket (see "gsutil help setdefacl"). You can replace the ACL on an object
or bucket using the gsutil setacl command (see "gsutil help setacl"), or
modify the existing ACL using the gsutil chacl command (see "gsutil help
chacl").
<B>BUCKET VS OBJECT ACLS</B>
In Google Cloud Storage, the bucket ACL works as follows:
- Users granted READ access are allowed to list the bucket contents.
- Users granted WRITE access are allowed READ access and also are
allowed to write and delete objects in that bucket -- including
overwriting previously written objects.
- Users granted FULL_CONTROL access are allowed WRITE access and also
are allowed to read and write the bucket's ACL.
The object ACL works as follows:
- Users granted READ access are allowed to read the object's data and
metadata.
- Users granted FULL_CONTROL access are allowed READ access and also
are allowed to read and write the object's ACL.
A couple of points that sometimes surprise users are worth noting:
1. There is no WRITE access for objects; attempting to set an ACL with WRITE
permission for an object will result in an error.
2. The bucket ACL plays no role in determining who can read objects; only the
object ACL matters for that purpose. This is different from how things
work in Linux file systems, where both the file and directory permission
control file read access. It also means, for example, that someone with
FULL_CONTROL over the bucket may not have read access to objects in
the bucket. This is by design, and supports useful cases. For example,
you might want to set up bucket ownership so that a small group of
administrators have FULL_CONTROL on the bucket (with the ability to
delete data to control storage costs), but not grant those users read
access to the object data (which might be sensitive data that should
only be accessed by a different specific group of users).
<B>CANNED ACLS</B>
The simplest way to set an ACL on a bucket or object is using a "canned
ACL". The available canned ACLs are:
project-private Gives permission to the project team based on their
roles. Anyone who is part of the team has READ
permission, and project owners and project editors
have FULL_CONTROL permission. This is the default
ACL for newly created buckets. This is also the
default ACL for newly created objects unless the
default object ACL for that bucket has been
changed. For more details see
"gsutil help projects".
private Gives the requester (and only the requester)
FULL_CONTROL permission for a bucket or object.
public-read Gives the requester FULL_CONTROL permission and
gives all users READ permission. When you apply
this to an object, anyone on the Internet can
read the object without authenticating.
public-read-write Gives the requester FULL_CONTROL permission and
gives all users READ and WRITE permission. This
ACL applies only to buckets.
authenticated-read Gives the requester FULL_CONTROL permission and
gives all authenticated Google account holders
READ permission.
bucket-owner-read Gives the requester FULL_CONTROL permission and
gives the bucket owner READ permission. This is
used only with objects.
bucket-owner-full-control Gives the requester FULL_CONTROL permission and
gives the bucket owner FULL_CONTROL
permission. This is used only with objects.
<B>ACL XML</B>
When you use a canned ACL, it is translated into an XML representation
that can later be retrieved and edited to specify more fine-grained
detail about who can read and write buckets and objects. By running
the gsutil getacl command you can retrieve the ACL XML, and edit it to
customize the permissions.
As an example, if you create an object in a bucket that has no default
object ACL set and then retrieve the ACL on the object, it will look
something like this:
<AccessControlList>
<Owner>
<ID>
00b4903a9740e42c29800f53bd5a9a62a2f96eb3f64a4313a115df3f3a776bf7
</ID>
</Owner>
<Entries>
<Entry>
<Scope type="GroupById">
<ID>
00b4903a9740e42c29800f53bd5a9a62a2f96eb3f64a4313a115df3f3a776bf7
</ID>
</Scope>
<Permission>
FULL_CONTROL
</Permission>
</Entry>
<Entry>
<Scope type="GroupById">
<ID>
00b4903a977fd817e9da167bc81306489181a110456bb635f466d71cf90a0d51
</ID>
</Scope>
<Permission>
FULL_CONTROL
</Permission>
</Entry>
<Entry>
<Scope type="GroupById">
<ID>
00b4903a974898cc8fc309f2f2835308ba3d3df1b889d3fc7e33e187d52d8e71
</ID>
</Scope>
<Permission>
READ
</Permission>
</Entry>
</Entries>
</AccessControlList>
The ACL consists of an Owner element and a collection of Entry elements,
each of which specifies a Scope and a Permission. Scopes are the way you
specify an individual or group of individuals, and Permissions specify what
access they're permitted.
This particular ACL grants FULL_CONTROL to two groups (which means members
of those groups are allowed to read the object and read and write the ACL),
and READ permission to a third group. The project groups are (in order)
the owners group, editors group, and viewers group.
The 64 digit hex identifiers used in this ACL are called canonical IDs,
and are used to identify predefined groups associated with the project that
owns the bucket. For more information about project groups, see "gsutil
help projects".
Here's an example of an ACL specified using the GroupByEmail and GroupByDomain
scopes:
<AccessControlList>
<Entries>
<Entry>
<Permission>
FULL_CONTROL
</Permission>
<Scope type="GroupByEmail">
<EmailAddress>travel-companion-owners@googlegroups.com</EmailAddress>
</Scope>
</Entry>
<Entry>
<Permission>
READ
</Permission>
<Scope type="GroupByDomain">
<Domain>example.com</Domain>
</Scope>
</Entry>
</Entries>
</AccessControlList>
This ACL grants members of an email group FULL_CONTROL, and grants READ
access to any user in a domain (which must be a Google Apps for Business
domain). By applying email group grants to a collection of objects
you can edit access control for large numbers of objects at once via
http://groups.google.com. That way, for example, you can easily and quickly
change access to a group of company objects when employees join and leave
your company (i.e., without having to individually change ACLs across
potentially millions of objects).
<B>SHARING SCENARIOS</B>
For more detailed examples of how to achieve various useful sharing use
cases, see https://developers.google.com/storage/docs/collaboration
""")
class CommandOptions(HelpProvider):
"""Additional help about Access Control Lists."""
help_spec = {
# Name of command or auxiliary help info for which this help applies.
HELP_NAME : 'acls',
# List of help name aliases.
HELP_NAME_ALIASES : ['acl', 'ACL', 'access control', 'access control list',
'authorization', 'canned', 'canned acl'],
# Type of help:
HELP_TYPE : HelpType.ADDITIONAL_HELP,
# One line summary of this help.
HELP_ONE_LINE_SUMMARY : 'Working with Access Control Lists',
# The full help text.
HELP_TEXT : _detailed_help_text,
}
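The ACL XML shown in this help text is ordinary XML, so it can be inspected with standard tools. Below is an illustrative sketch (not part of gsutil; the element names are taken from the examples above, and the sample document is hypothetical) that uses Python's xml.etree.ElementTree to pull out each Entry's scope type and permission:

```python
import xml.etree.ElementTree as ET

# Hypothetical ACL document using the element names shown in the help text.
ACL_XML = """
<AccessControlList>
  <Entries>
    <Entry>
      <Scope type="GroupByEmail">
        <EmailAddress>travel-companion-owners@googlegroups.com</EmailAddress>
      </Scope>
      <Permission>FULL_CONTROL</Permission>
    </Entry>
    <Entry>
      <Scope type="GroupByDomain">
        <Domain>example.com</Domain>
      </Scope>
      <Permission>READ</Permission>
    </Entry>
  </Entries>
</AccessControlList>
"""

def summarize_acl(xml_text):
    """Return (scope_type, permission) pairs from an ACL XML document."""
    root = ET.fromstring(xml_text)
    pairs = []
    for entry in root.findall('./Entries/Entry'):
        scope = entry.find('Scope')
        permission = entry.find('Permission')
        pairs.append((scope.get('type'), permission.text.strip()))
    return pairs
```

A summary like this is handy for auditing which groups hold FULL_CONTROL across many objects before editing the XML and applying it with setacl.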
# Copyright 2012 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from gslib.help_provider import HELP_NAME
from gslib.help_provider import HELP_NAME_ALIASES
from gslib.help_provider import HELP_ONE_LINE_SUMMARY
from gslib.help_provider import HelpProvider
from gslib.help_provider import HELP_TEXT
from gslib.help_provider import HelpType
from gslib.help_provider import HELP_TYPE
_detailed_help_text = ("""
<B>OVERVIEW</B>
gsutil users can access publicly readable data without obtaining
credentials. For example, the gs://uspto-pair bucket contains a number
of publicly readable objects, so any user can run the following command
without first obtaining credentials:
gsutil ls gs://uspto-pair/applications/0800401*
Users can similarly download objects they find via the above gsutil ls
command.
If a user without credentials attempts to access protected data using gsutil,
they will be prompted to run "gsutil config" to obtain credentials.
See "gsutil help acls" for more details about data protection.
""")
class CommandOptions(HelpProvider):
"""Additional help about Access Control Lists."""
help_spec = {
# Name of command or auxiliary help info for which this help applies.
HELP_NAME : 'anon',
# List of help name aliases.
HELP_NAME_ALIASES : ['anonymous', 'public'],
# Type of help:
HELP_TYPE : HelpType.ADDITIONAL_HELP,
# One line summary of this help.
HELP_ONE_LINE_SUMMARY :
'Accessing public data without obtaining credentials',
# The full help text.
HELP_TEXT : _detailed_help_text,
}
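Publicly readable objects can also be fetched over plain HTTPS without any credentials, by constructing a URL from the bucket and object names. A minimal sketch follows; the endpoint host and the sample object name are assumptions for illustration, so check the Google Cloud Storage documentation for the current public endpoint:

```python
import urllib.parse

def public_object_url(bucket, object_name,
                      host='storage.googleapis.com'):
    """Build an unauthenticated URL for a publicly readable object.

    The host default is an assumption; percent-quoting handles any
    special characters while preserving '/' separators in object names.
    """
    return 'https://%s/%s/%s' % (
        host,
        urllib.parse.quote(bucket),
        urllib.parse.quote(object_name, safe='/'))
```

Any HTTP client (a browser, curl, urllib) can then GET that URL, provided the object's ACL grants READ to all users.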
# Copyright 2012 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from gslib.help_provider import HELP_NAME
from gslib.help_provider import HELP_NAME_ALIASES
from gslib.help_provider import HELP_ONE_LINE_SUMMARY
from gslib.help_provider import HelpProvider
from gslib.help_provider import HELP_TEXT
from gslib.help_provider import HelpType
from gslib.help_provider import HELP_TYPE
_detailed_help_text = ("""
<B>SYNOPSIS</B>
Top-level gsutil Options
<B>DESCRIPTION</B>
gsutil supports separate options for the top-level gsutil command and
the individual sub-commands (like cp, rm, etc.). The top-level options
control behavior of gsutil that apply across commands. For example, in
the command:
gsutil -m cp -p file gs://bucket/obj
the -m option applies to gsutil, while the -p option applies to the cp
sub-command.
<B>OPTIONS</B>
-d Shows HTTP requests/headers.
-D Shows HTTP requests/headers plus additional debug info needed when
posting support requests.
-DD Shows HTTP requests/headers plus additional debug info plus HTTP
upstream payload.
-h Allows you to specify additional HTTP headers, for example:
gsutil -h "Cache-Control:public,max-age=3600" \\
-h "Content-Type:text/html" cp ...
Note that you need to quote the headers/values that
contain spaces (such as "Content-Disposition: attachment;
filename=filename.ext"), to avoid having the shell split them
into separate arguments.
Note that because the -h option allows you to specify any HTTP
header, it is both powerful and potentially dangerous:
- It is powerful because it allows you to specify headers that
gsutil doesn't currently know about (e.g., to request
service features from a different storage service provider
than Google); or to override the values gsutil would normally
send with different values.
- It is potentially dangerous because you can specify headers
that cause gsutil to send invalid requests, or that in
other ways change the behavior of requests.
Thus, you should be sure you understand the underlying storage
service HTTP API (and what impact the headers you specify will
have) before using the gsutil -h option.
See also "gsutil help setmeta" for the ability to set metadata
fields on objects after they have been uploaded.
-m Causes supported operations (cp, mv, rm, setacl, setmeta) to run
in parallel. This can significantly improve performance if you are
uploading, downloading, moving, removing, or changing ACLs on
a large number of files over a fast network connection.
gsutil performs the specified operation using a combination of
multi-threading and multi-processing, using a number of threads
and processors determined by the parallel_thread_count and
parallel_process_count values set in the boto configuration
file. You might want to experiment with these values, as the
best value can vary based on a number of factors, including
network speed, number of CPUs, and available memory.
Using the -m option may make your performance worse if you
are using a slower network, such as the typical network speeds
offered by non-business home network plans.
If a download or upload operation using parallel transfer fails
before the entire transfer is complete (e.g. failing after 300 of
1000 files have been transferred), you will need to restart the
entire transfer.
-s Tells gsutil to use a simulated storage provider (for testing).
""")
class CommandOptions(HelpProvider):
"""Additional help about gsutil command-level options."""
help_spec = {
# Name of command or auxiliary help info for which this help applies.
HELP_NAME : 'options',
# List of help name aliases.
HELP_NAME_ALIASES : ['arg', 'args', 'cli', 'opt', 'opts'],
# Type of help:
HELP_TYPE : HelpType.ADDITIONAL_HELP,
# One line summary of this help.
HELP_ONE_LINE_SUMMARY : 'gsutil-level command line options',
# The full help text.
HELP_TEXT : _detailed_help_text,
}
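The effect of the -m option can be pictured with a thread-pool sketch. This is illustrative only, not gsutil's implementation: the thread_count parameter stands in for the parallel_thread_count boto configuration value, and copy_one stands in for whatever per-file operation (cp, rm, setacl, ...) is being parallelized:

```python
from concurrent.futures import ThreadPoolExecutor

def copy_all(files, copy_one, thread_count=4):
    """Apply copy_one to each file in parallel, akin to gsutil -m cp.

    thread_count stands in for the parallel_thread_count boto config
    value; results come back in input order, and list() forces all
    work to finish, re-raising any exception from a worker.
    """
    with ThreadPoolExecutor(max_workers=thread_count) as pool:
        return list(pool.map(copy_one, files))
```

As the help text notes, whether parallelism helps depends on network speed, CPU count, and memory, so the right worker count is found by experiment.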
# Copyright 2012 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from gslib.help_provider import HELP_NAME
from gslib.help_provider import HELP_NAME_ALIASES
from gslib.help_provider import HELP_ONE_LINE_SUMMARY
from gslib.help_provider import HelpProvider
from gslib.help_provider import HELP_TEXT
from gslib.help_provider import HelpType
from gslib.help_provider import HELP_TYPE
_detailed_help_text = ("""
<B>OVERVIEW</B>
We're open to incorporating gsutil code changes authored by users. Here
are some guidelines:
1. Before we can accept code submissions, we have to jump a couple of legal
hurdles. Please fill out either the individual or corporate Contributor
License Agreement:
- If you are an individual writing original source code and you're
sure you own the intellectual property,
then you'll need to sign an individual CLA
(http://code.google.com/legal/individual-cla-v1.0.html).
- If you work for a company that wants to allow you to contribute your
work to gsutil, then you'll need to sign a corporate CLA
(http://code.google.com/legal/corporate-cla-v1.0.html)
Follow either of the two links above to access the appropriate CLA and
instructions for how to sign and return it. Once we receive it, we'll
add you to the official list of contributors and be able to accept
your patches.
2. If you found a bug or have an idea for a feature enhancement, we suggest
you check http://code.google.com/p/gsutil/issues/list to see if it has
already been reported by another user. From there you can also add yourself
to the Cc list for an issue, so you will find out about any developments.
3. It's usually worthwhile to send email to gs-team@google.com about your
idea before sending actual code. Often we can discuss the idea and help
propose things that could save you later revision work.
4. We tend to avoid adding command line options that are of use to only
a very small fraction of users, especially if there's some other way
to accommodate such needs. Adding such options complicates the code and
also adds overhead to users having to read through an "alphabet soup"
list of option documentation.
5. While gsutil has a number of features specific to Google Cloud Storage,
it can also be used with other cloud storage providers. We're open to
including changes for making gsutil support features specific to other
providers, as long as those changes don't make gsutil work worse for Google
Cloud Storage. If you do make such changes we recommend including someone
with knowledge of the specific provider as a code reviewer (see below).
6. You can check out the gsutil code from svn - see
http://code.google.com/p/gsutil/source/checkout. Then change directories
into gsutil/src, and check out the boto code from github:
git clone git://github.com/boto/boto.git
7. Please make sure to run all tests against your modified code. To
do this, change directories into the gsutil top-level directory and run:
./gsutil test
The above tests take a long time to run because they send many requests to
the production service. The gsutil test command has a -u argument that will
only run unit tests. These run quickly, as they are executed with an
in-memory mock storage service implementation. To run only the unit tests,
run:
./gsutil test -u
If you made mods to boto please run the boto tests. For these tests you
need to use HMAC credentials (from gsutil config -a), because the current
boto test suite doesn't import the OAuth2 handler. You'll also need to
install some python modules: change directories into the top-level gsutil
directory and run:
pip install -qr boto/requirements.txt
(You probably need to run this command using sudo.)
Make sure each of the individual installations succeeded. If they don't
you may need to run individual ones again, e.g.,
pip install unittest2
Then ensure your .boto file has HMAC credentials defined (the boto tests
don't load the OAUTH2 plugin), and then change directories into boto/tests
and run:
python test.py unit
python test.py -t s3 -t gs -t ssl
8. Please consider contributing test code for your change, especially if the
change impacts any of the core gsutil code (like the gsutil cp command).
9. When it's time to send us code, please use the Rietveld code review tool
rather than simply sending us a code patch. Do this as follows:
- check out the gsutil code from
http://code.google.com/p/gsutil/source/checkout and apply your changes
in the checked out directory.
- download the "upload.py" script from
http://code.google.com/p/rietveld/wiki/UploadPyUsage
- run upload.py from the above gsutil svn directory.
- click the codereview.appspot.com link it generates, click "Edit Issue",
and add mfschwartz@google.com as a reviewer, and Cc gs-team@google.com.
- click Publish+Mail Comments.
""")
class CommandOptions(HelpProvider):
"""Additional help about Access Control Lists."""
help_spec = {
# Name of command or auxiliary help info for which this help applies.
HELP_NAME : 'dev',
# List of help name aliases.
HELP_NAME_ALIASES : ['development', 'developer', 'code', 'mods',
'software'],
# Type of help:
HELP_TYPE : HelpType.ADDITIONAL_HELP,
# One line summary of this help.
HELP_ONE_LINE_SUMMARY : 'Making modifications to gsutil',
# The full help text.
HELP_TEXT : _detailed_help_text,
}
# Copyright 2012 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from gslib.help_provider import HELP_NAME
from gslib.help_provider import HELP_NAME_ALIASES
from gslib.help_provider import HELP_ONE_LINE_SUMMARY
from gslib.help_provider import HelpProvider
from gslib.help_provider import HELP_TEXT
from gslib.help_provider import HelpType
from gslib.help_provider import HELP_TYPE
_detailed_help_text = ("""
<B>OVERVIEW OF METADATA</B>
Objects can have associated metadata, which control aspects of how
GET requests are handled, including Content-Type, Cache-Control,
Content-Disposition, and Content-Encoding (discussed in more detail in
the subsections below). In addition, you can set custom metadata that
can be used by applications (e.g., tagging that particular objects possess
some property).
There are two ways to set metadata on objects:
- at upload time you can specify one or more headers to associate with
objects, using the gsutil -h option. For example, the following command
would cause gsutil to set the Content-Type and Cache-Control for each
of the files being uploaded:
gsutil -h "Content-Type:text/html" -h "Cache-Control:public, max-age=3600" cp -r images gs://bucket/images
Note that -h is an option on the gsutil command, not the cp sub-command.
- You can set or remove metadata fields from already uploaded objects using
the gsutil setmeta command. See "gsutil help setmeta".
More details about specific pieces of metadata are discussed below.
<B>CONTENT TYPE</B>
The most commonly set metadata is Content-Type (also known as MIME type),
which allows browsers to render the object properly.
gsutil sets the Content-Type
automatically at upload time, based on each filename extension. For
example, uploading files with names ending in .txt will set Content-Type
to text/plain. If you're running gsutil on Linux or MacOS and would prefer
to have content type set based on naming plus content examination, see the
use_magicfile configuration variable in the gsutil/boto configuration file
(See also "gsutil help config"). In general, using use_magicfile is more
robust and configurable, but is not available on Windows.
If you specify a -h header when uploading content (like the example gsutil
command given in the previous section), it overrides the Content-Type that
would have been set based on filename extension or content. This can be
useful if the Content-Type detection algorithm doesn't work as desired
for some of your files.
You can also completely suppress content type detection in gsutil, by
specifying an empty string on the Content-Type header:
gsutil -h 'Content-Type:' cp -r images gs://bucket/images
In this case, the Google Cloud Storage service will attempt to detect
the content type. In general this approach will work better than using
filename extension-based content detection in gsutil, because the list of
filename extensions is kept more current in the server-side content detection
system than in the Python library upon which gsutil content type detection
depends. (For example, at the time of writing this, the filename extension
".webp" was recognized by the server-side content detection system, but
not by gsutil.)
<B>CACHE-CONTROL</B>
Another commonly set piece of metadata is Cache-Control, which allows
you to control whether and for how long browser and Internet caches are
allowed to cache your objects. Cache-Control only applies to objects with
a public-read ACL. Non-public data are not cacheable.
Here's an example of uploading an object set to allow caching:
gsutil -h "Cache-Control:public,max-age=3600" cp -a public-read -r html gs://bucket/html
This command would upload all files in the html directory (and subdirectories)
and make them publicly readable and cacheable, with cache expiration of
one hour.
Note that if you allow caching, at download time you may see older versions
of objects after uploading a newer replacement object. Note also that because
objects can be cached at various places on the Internet there is no way to
force a cached object to expire globally (unlike the way you can force your
browser to refresh its cache).
<B>CONTENT-ENCODING</B>
You can specify Content-Encoding to indicate that an object is compressed,
using a command like:
gsutil -h "Content-Encoding:gzip" cp *.gz gs://bucket/compressed
Note that Google Cloud Storage does not compress or decompress objects. If
you use this header to specify a compression type or compression algorithm
(for example, deflate), Google Cloud Storage preserves the header but does
not compress or decompress the object. Instead, you need to ensure that
the files have been compressed using the specified Content-Encoding before
using gsutil to upload them.
For compressible content, using Content-Encoding:gzip saves network and
storage costs, and improves content serving performance (since most browsers
are able to decompress objects served this way).
Note also that gsutil provides an easy way to cause content to be compressed
and stored with Content-Encoding:gzip: see the -z option in "gsutil help cp".
<B>CONTENT-DISPOSITION</B>
You can set Content-Disposition on your objects, to specify presentation
information about the data being transmitted. Here's an example:
gsutil -h 'Content-Disposition:attachment; filename=filename.ext' \\
cp -r attachments gs://bucket/attachments
Setting the Content-Disposition allows you to control presentation style
of the content, for example determining whether an attachment should be
automatically displayed vs should require some form of action from the user to
open it. See http://www.w3.org/Protocols/rfc2616/rfc2616-sec19.html#sec19.5.1
for more details about the meaning of Content-Disposition.
<B>CUSTOM METADATA</B>
You can add your own custom metadata (e.g., for use by your application)
to an object by setting a header that starts with "x-goog-meta", for example:
gsutil -h x-goog-meta-reviewer:jane cp mycode.java gs://bucket/reviews
You can add multiple differently named custom metadata fields to each object.
<B>SETTABLE FIELDS; FIELD VALUES</B>
You can't set some metadata fields, such as ETag and Content-Length. The
fields you can set are:
- Cache-Control
- Content-Disposition
- Content-Encoding
- Content-Language
- Content-MD5
- Content-Type
- Any field starting with X-GOOG-META- (i.e., custom metadata).
Header names are case-insensitive.
X-GOOG-META- fields can have data set to arbitrary Unicode values. All
other fields must have ASCII values.
<B>VIEWING CURRENTLY SET METADATA</B>
You can see what metadata is currently set on an object by using:
gsutil ls -L gs://the_bucket/the_object
""")
class CommandOptions(HelpProvider):
"""Additional help about object metadata."""
help_spec = {
# Name of command or auxiliary help info for which this help applies.
HELP_NAME : 'metadata',
# List of help name aliases.
HELP_NAME_ALIASES : ['cache-control', 'caching', 'content type',
'mime type', 'mime', 'type'],
# Type of help:
HELP_TYPE : HelpType.ADDITIONAL_HELP,
# One line summary of this help.
HELP_ONE_LINE_SUMMARY : 'Working with object metadata',
# The full help text.
HELP_TEXT : _detailed_help_text,
}
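The extension-based Content-Type detection described above can be seen with Python's standard mimetypes module. This sketch mirrors the behavior, but it is an assumption that it matches gsutil's detection exactly (gsutil may consult additional sources, such as the magic file when use_magicfile is enabled):

```python
import mimetypes

def guess_content_type(filename, default='application/octet-stream'):
    """Guess a Content-Type from a filename extension.

    Falls back to a generic binary type when the extension is
    unknown or the name has no extension at all.
    """
    ctype, _encoding = mimetypes.guess_type(filename)
    return ctype or default
```

This also illustrates why a -h "Content-Type:..." override is sometimes needed: the module's extension table cannot know about every file type you upload.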
# Copyright 2012 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from gslib.help_provider import HELP_NAME
from gslib.help_provider import HELP_NAME_ALIASES
from gslib.help_provider import HELP_ONE_LINE_SUMMARY
from gslib.help_provider import HelpProvider
from gslib.help_provider import HELP_TEXT
from gslib.help_provider import HelpType
from gslib.help_provider import HELP_TYPE
_detailed_help_text = ("""
<B>BUCKET NAME REQUIREMENTS</B>
Google Cloud Storage has a single namespace, so you will not be allowed
to create a bucket with a name already in use by another user. You can,
however, carve out parts of the bucket name space corresponding to your
company's domain name (see "DOMAIN NAMED BUCKETS").
Bucket names must conform to standard DNS naming conventions. This is
because a bucket name can appear in a DNS record as part of a CNAME
redirect. In addition to meeting DNS naming requirements, Google Cloud
Storage imposes other requirements on bucket naming. At a minimum, your
bucket names must meet the following requirements:
- Bucket names must contain only lowercase letters, numbers, dashes (-), and
dots (.).
- Bucket names must start and end with a number or letter.
- Bucket names must contain 3 to 63 characters. Names containing dots can
contain up to 222 characters, but each dot-separated component can be
no longer than 63 characters.
- Bucket names cannot be represented as an IPv4 address in dotted-decimal
notation (for example, 192.168.5.4).
- Bucket names cannot begin with the "goog" prefix.
- For DNS compliance, you should not have a period adjacent to another
period or dash. For example, ".." or "-." or ".-" are not acceptable.
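The rules above can be sketched as a simple validator. This is a hypothetical
helper, not part of gsutil; Google Cloud Storage performs the authoritative
check server-side:

```python
import re

def is_valid_bucket_name(name):
    """Check a bucket name against the rules listed above.

    Illustrative sketch only; the service performs the authoritative
    validation when the bucket is created.
    """
    # Only lowercase letters, numbers, dashes, and dots (rejects '' too).
    if not re.fullmatch(r'[a-z0-9.-]+', name):
        return False
    # Must start and end with a number or letter.
    if not (name[0].isalnum() and name[-1].isalnum()):
        return False
    # Length limits: 3-63 characters, or up to 222 for dotted names whose
    # dot-separated components are each at most 63 characters.
    if '.' in name:
        if not 3 <= len(name) <= 222:
            return False
        if any(not 1 <= len(part) <= 63 for part in name.split('.')):
            return False
    elif not 3 <= len(name) <= 63:
        return False
    # No period adjacent to another period or dash.
    if '..' in name or '.-' in name or '-.' in name:
        return False
    # Not an IPv4 address in dotted-decimal notation.
    if re.fullmatch(r'(\d{1,3}\.){3}\d{1,3}', name):
        return False
    # Must not begin with the "goog" prefix.
    return not name.startswith('goog')

assert is_valid_bucket_name('example.com')
assert not is_valid_bucket_name('192.168.5.4')
```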
<B>OBJECT NAME REQUIREMENTS</B>
Object names can contain any sequence of Unicode characters, of length 1-1024
bytes when UTF-8 encoded. Object names must not contain CarriageReturn,
CarriageReturnLineFeed, or the XML-disallowed surrogate blocks (xFFFE
or xFFFF).
We highly recommend that you avoid using control characters that are illegal
in XML 1.0 in your object names. These characters will cause XML listing
issues when you try to list your objects.
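A minimal sketch of the object-name rules above (again a hypothetical helper,
not part of gsutil):

```python
def is_valid_object_name(name):
    """Check an object name against the rules above: 1-1024 bytes when
    UTF-8 encoded, and no CarriageReturn (which also excludes
    CarriageReturnLineFeed) or the XML-disallowed U+FFFE / U+FFFF.

    Illustrative only; the service enforces these rules itself.
    """
    if not 1 <= len(name.encode('utf-8')) <= 1024:
        return False
    return not any(bad in name for bad in ('\r', '\ufffe', '\uffff'))

assert is_valid_object_name('abc/def/ghi.txt')
assert not is_valid_object_name('bad\r\nname')
```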
<B>DOMAIN NAMED BUCKETS</B>
You can carve out parts of the Google Cloud Storage bucket name space
by creating buckets with domain names (like "example.com").
When you create a bucket whose name contains one or more '.' characters,
the following rules apply:
- If the name is a syntactically valid DNS name ending with a
currently-recognized top-level domain (such as .com), you will be required
to verify domain ownership.
- Otherwise you will be disallowed from creating the bucket.
If your project needs to use a domain-named bucket, you need to have
a team member both verify the domain and create the bucket. This is
because Google Cloud Storage checks for domain ownership against the
user who creates the bucket, so the user who creates the bucket must
also be verified as an owner or manager of the domain.
To verify as the owner or manager of a domain, use the Google Webmaster
Tools verification process. The Webmaster Tools verification process
provides three methods for verifying an owner or manager of a domain:
1. Adding a special Meta tag to a site's homepage.
2. Uploading a special HTML file to a site.
3. Adding a DNS TXT record to a domain's DNS configuration.
Meta tag verification and HTML file verification are easier to perform and
are probably adequate for most situations. DNS TXT record verification is
a domain-based verification method that is useful in situations where a
site wants to tightly control who can create domain-named buckets. Once
a site creates a DNS TXT record to verify ownership of a domain, it takes
precedence over meta tag and HTML file verification. For example, you might
have two IT staff members who are responsible for managing your site, called
"example.com." If they complete the DNS TXT record verification, only they
would be able to create buckets called "example.com", "reports.example.com",
"downloads.example.com", and other domain-named buckets.
Site-Based Verification
If you have administrative control over the HTML files that make up a site,
you can use one of the site-based verification methods to verify that you
control or own a site. When you do this, Google Cloud Storage lets you
create buckets representing the verified site and any sub-sites - provided
nobody has used the DNS TXT record method to verify domain ownership of a
parent of the site.
As an example, assume that nobody has used the DNS TXT record method to verify
ownership of the following domains: abc.def.example.com, def.example.com,
and example.com. In this case, Google Cloud Storage lets you create a bucket
named abc.def.example.com if you verify that you own or control any of the
following sites:
http://abc.def.example.com
http://def.example.com
http://example.com
Domain-Based Verification
If you have administrative control over a domain's DNS configuration, you can
use the DNS TXT record verification method to verify that you own or control a
domain. When you use the domain-based verification method to verify that you
own or control a domain, Google Cloud Storage lets you create buckets that
represent any subdomain under the verified domain. Furthermore, Google Cloud
Storage prevents anybody else from creating buckets under that domain unless
you add their name to the list of verified domain owners or they have verified
their domain ownership by using the DNS TXT record verification method.
For example, if you use the DNS TXT record verification method to verify your
ownership of the domain example.com, Google Cloud Storage will let you create
bucket names that represent any subdomain under the example.com domain, such
as abc.def.example.com, example.com/music/jazz, or abc.example.com/music/jazz.
Using the DNS TXT record method to verify domain ownership supersedes
verification by site-based verification methods. For example, if you
use the Meta tag method or HTML file method to verify domain ownership
of http://example.com, but someone else uses the DNS TXT record method
to verify ownership of the example.com domain, Google Cloud Storage will
not allow you to create a bucket named example.com. To create the bucket
example.com, the domain owner who used the DNS TXT method to verify domain
ownership must add you to the list of verified domain owners for example.com.
The DNS TXT record verification method is particularly useful if you manage
a domain for a large organization that has numerous subdomains because it
lets you control who can create buckets representing those domain names.
Note: If you use the DNS TXT record verification method to verify ownership of
a domain, you cannot create a CNAME record for that domain. RFC 1034 disallows
inclusion of any other resource records if there is a CNAME resource record
present. If you want to create a CNAME resource record for a domain, you must
use the Meta tag verification method or the HTML file verification method.
""")
class CommandOptions(HelpProvider):
"""Additional help about gsutil object and bucket naming."""
help_spec = {
# Name of command or auxiliary help info for which this help applies.
HELP_NAME : 'naming',
# List of help name aliases.
HELP_NAME_ALIASES : ['domain', 'limits', 'name', 'names'],
# Type of help:
HELP_TYPE : HelpType.ADDITIONAL_HELP,
# One line summary of this help.
HELP_ONE_LINE_SUMMARY : 'Object and bucket naming',
# The full help text.
HELP_TEXT : _detailed_help_text,
}
# Copyright 2012 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from gslib.help_provider import HELP_NAME
from gslib.help_provider import HELP_NAME_ALIASES
from gslib.help_provider import HELP_ONE_LINE_SUMMARY
from gslib.help_provider import HelpProvider
from gslib.help_provider import HELP_TEXT
from gslib.help_provider import HelpType
from gslib.help_provider import HELP_TYPE
_detailed_help_text = ("""
<B>OVERVIEW</B>
If you use gsutil in large production tasks (such as uploading or
downloading many GB of data each night), there are a number of things
you can do to help ensure success. Specifically, this section discusses
how to script large production tasks around gsutil's resumable transfer
mechanism.
<B>BACKGROUND ON RESUMABLE TRANSFERS</B>
First, it's helpful to understand gsutil's resumable transfer mechanism,
and how your script needs to be implemented around this mechanism to work
reliably. gsutil uses the resumable transfer support in the boto library
when you attempt to upload or download a file larger than a configurable
threshold (by default, this threshold is 1MB). When a transfer fails
partway through (e.g., because of an intermittent network problem),
boto uses a randomized binary exponential backoff-and-retry strategy:
wait a random period between [0..1] seconds and retry; if that fails,
wait a random period between [0..2] seconds and retry; and if that
fails, wait a random period between [0..4] seconds, and so on, up to a
configurable number of times (the default is 6 times). Thus, the retry
actually spans a randomized period up to 1+2+4+8+16+32=63 seconds.
If the transfer fails each of these attempts with no intervening
progress, gsutil gives up on the transfer, but keeps a "tracker" file
for it in a configurable location (the default location is ~/.gsutil/,
in a file named by a combination of the SHA1 hash of the name of the
bucket and object being transferred and the last 16 characters of the
file name). When transfers fail in this fashion, you can rerun gsutil
at some later time (e.g., after the networking problem has been
resolved), and the resumable transfer picks up where it left off.
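The backoff-and-retry schedule described above can be modeled as follows.
This is an illustrative sketch of the behavior, not the actual boto code;
the `sleep` parameter is introduced here only so the sketch can be exercised
without real waits:

```python
import random
import time

def call_with_backoff(fn, num_retries=6, sleep=time.sleep):
    """Retry fn() with randomized binary exponential backoff: before
    retry n, wait a random period in [0, 2**(n-1)] seconds, up to
    num_retries retries (so the waits span at most 1+2+4+8+16+32 = 63
    seconds for the default of 6).
    """
    for attempt in range(num_retries + 1):
        try:
            return fn()
        except IOError:
            if attempt == num_retries:
                raise  # give up; gsutil would leave its tracker file
            sleep(random.uniform(0, 2 ** attempt))
```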
<B>SCRIPTING DATA TRANSFER TASKS</B>
To script large production data transfer tasks around this mechanism,
you can implement a script that runs periodically, determines which file
transfers have not yet succeeded, and runs gsutil to copy them. Below,
we offer a number of suggestions about how this type of scripting should
be implemented:
1. When resumable transfers fail without any progress 6 times in a row
over the course of up to 63 seconds, it probably won't work to simply
retry the transfer immediately. A more successful strategy would be to
have a cron job that runs every 30 minutes, determines which transfers
need to be run, and runs them. If the network experiences intermittent
problems, the script picks up where it left off and will eventually
succeed (once the network problem has been resolved).
2. If your business depends on timely data transfer, you should consider
implementing some network monitoring. For example, you can implement
a task that attempts a small download every few minutes and raises an
alert if the attempt fails for several attempts in a row (or more or less
frequently depending on your requirements), so that your IT staff can
investigate problems promptly. As usual with monitoring implementations,
you should experiment with the alerting thresholds, to avoid false
positive alerts that cause your staff to begin ignoring the alerts.
3. There are a variety of ways you can determine what files remain to be
transferred. We recommend that you avoid attempting to get a complete
listing of a bucket containing many objects (e.g., tens of thousands
or more). One strategy is to structure your object names in a way that
represents your transfer process, and use gsutil prefix wildcards to
request partial bucket listings. For example, if your periodic process
involves downloading the current day's objects, you could name objects
using a year-month-day-object-ID format and then find today's objects by
using a command like gsutil ls gs://bucket/2011-09-27-*. Note that it
is more efficient to have a non-wildcard prefix like this than to use
something like gsutil ls gs://bucket/*-2011-09-27. The latter command
actually requests a complete bucket listing and then filters in gsutil,
while the former asks Google Storage to return the subset of objects
whose names start with everything up to the *.
For data uploads, another technique would be to move local files from a "to
be processed" area to a "done" area as your script successfully copies files
to the cloud. You can do this in parallel batches by using a command like:
gsutil -m cp -R to_upload/subdir_$i gs://bucket/subdir_$i
where i is a shell loop variable. Make sure to check that the shell $status
variable is 0 after each gsutil cp command, to detect if some of the copies
failed, and rerun the affected copies.
With this strategy, the file system keeps track of all remaining work to
be done.
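The "to be processed" / "done" pattern above can be sketched like this. The
helper name and its `copy` parameter are assumptions made for illustration;
by default the sketch shells out to gsutil cp, and the parameter exists so
the logic can be tested without gsutil installed:

```python
import os
import shutil
import subprocess

def upload_and_mark_done(to_upload_dir, done_dir, bucket_url, copy=None):
    """Copy each file in to_upload_dir to the cloud, moving it into
    done_dir only after the copy succeeds, so the file system itself
    tracks the remaining work across runs.
    """
    if copy is None:
        copy = lambda src, dst: subprocess.check_call(
            ['gsutil', 'cp', src, dst])
    os.makedirs(done_dir, exist_ok=True)
    for name in sorted(os.listdir(to_upload_dir)):
        src = os.path.join(to_upload_dir, name)
        try:
            copy(src, bucket_url + '/' + name)
        except Exception:
            continue  # leave the file in place; a later run retries it
        shutil.move(src, os.path.join(done_dir, name))
```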
4. If you have really large numbers of objects in a single bucket
(say hundreds of thousands or more), you should consider tracking your
objects in a database instead of using bucket listings to enumerate
the objects. For example this database could track the state of your
downloads, so you can determine what objects need to be downloaded by
your periodic download script by querying the database locally instead
of performing a bucket listing.
5. Make sure you don't delete partially downloaded files after a transfer
fails: gsutil picks up where it left off (and performs an MD5 check of
the final downloaded content to ensure data integrity), so deleting
partially transferred files will cause you to lose progress and make
more wasteful use of your network. You should also make sure whatever
process is waiting to consume the downloaded data doesn't get pointed
at the partially downloaded files. One way to do this is to download
into a staging directory and then move successfully downloaded files to
a directory where consumer processes will read them.
6. If you have a fast network connection, you can speed up the transfer of
large numbers of files by using the gsutil -m (multi-threading /
multi-processing) option. Be aware, however, that gsutil doesn't attempt to
keep track of which files were downloaded successfully in cases where some
files failed to download. For example, if you use multi-threaded transfers
to download 100 files and 3 failed to download, it is up to your scripting
process to determine which transfers didn't succeed, and retry them. A
periodic check-and-run approach like the one outlined earlier would handle
this case.
If you use parallel transfers (gsutil -m) you might want to experiment with
the number of threads being used (via the parallel_thread_count setting
in the .boto config file). By default, gsutil uses 24 threads. Depending
on your network speed, available memory, CPU load, and other conditions,
this may or may not be optimal. Try experimenting with higher or lower
numbers of threads, to find the best number of threads for your environment.
""")
class CommandOptions(HelpProvider):
"""Additional help about using gsutil for production tasks."""
help_spec = {
# Name of command or auxiliary help info for which this help applies.
HELP_NAME : 'prod',
# List of help name aliases.
HELP_NAME_ALIASES : ['production', 'resumable', 'resumable upload',
'resumable transfer', 'resumable download',
'scripts', 'scripting'],
# Type of help:
HELP_TYPE : HelpType.ADDITIONAL_HELP,
# One line summary of this help.
HELP_ONE_LINE_SUMMARY : 'Scripting production data transfers with gsutil',
# The full help text.
HELP_TEXT : _detailed_help_text,
}
# Copyright 2012 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from gslib.help_provider import HELP_NAME
from gslib.help_provider import HELP_NAME_ALIASES
from gslib.help_provider import HELP_ONE_LINE_SUMMARY
from gslib.help_provider import HelpProvider
from gslib.help_provider import HELP_TEXT
from gslib.help_provider import HelpType
from gslib.help_provider import HELP_TYPE
_detailed_help_text = ("""
<B>OVERVIEW</B>
This section discusses how to work with projects in Google Cloud Storage.
For more information about using the Google APIs Console to administer
project memberships (which are automatically included in ACLs for buckets
you create) see https://code.google.com/apis/console#:storage:access.
<B>PROJECT MEMBERS AND PERMISSIONS</B>
There are three groups of users associated with each project:
- Project Owners are allowed to list, create, and delete buckets,
and can also perform administrative tasks like adding and removing team
members and changing billing. The project owners group is the owner
of all buckets within a project, regardless of who may be the original
bucket creator.
- Project Editors are allowed to list, create, and delete buckets.
- All Project Team Members are allowed to list buckets within a project.
These projects make it easy to set up a bucket and start uploading objects
with access control appropriate for a project at your company, as the three
group memberships can be configured by your administrative staff. Control
over projects and their associated memberships is provided by the Google
APIs Console (https://code.google.com/apis/console).
<B>HOW PROJECT MEMBERSHIP IS REFLECTED IN BUCKET ACLS</B>
When you create a bucket without specifying an ACL the bucket is given a
"project-private" ACL, which grants the permissions described in the previous
section. Here's an example of such an ACL:
<AccessControlList>
<Owner>
<ID>
00b4903a9740e42c29800f53bd5a9a62a2f96eb3f64a4313a115df3f3a776bf7
</ID>
</Owner>
<Entries>
<Entry>
<Scope type="GroupById">
<ID>
00b4903a9740e42c29800f53bd5a9a62a2f96eb3f64a4313a115df3f3a776bf7
</ID>
</Scope>
<Permission>
FULL_CONTROL
</Permission>
</Entry>
<Entry>
<Scope type="GroupById">
<ID>
00b4903a977fd817e9da167bc81306489181a110456bb635f466d71cf90a0d51
</ID>
</Scope>
<Permission>
FULL_CONTROL
</Permission>
</Entry>
<Entry>
<Scope type="GroupById">
<ID>
00b4903a974898cc8fc309f2f2835308ba3d3df1b889d3fc7e33e187d52d8e71
</ID>
</Scope>
<Permission>
READ
</Permission>
</Entry>
</Entries>
</AccessControlList>
The three "GroupById" scopes are the canonical IDs for the Project Owners,
Project Editors, and All Project Team Members groups.
You can edit the bucket ACL if you want to (see "gsutil help setacl"),
but for many cases you'll never need to, and instead can change group
membership via the APIs console.
<B>IDENTIFYING PROJECTS WHEN CREATING AND LISTING BUCKETS</B>
When you create a bucket or list your buckets, you need to provide the
ID of the project under which to create or list buckets (using the gsutil
mb -p option or the gsutil ls -p option, respectively). The project's name
shown in the
Google APIs Console is a user-friendly name that you can choose; this is
not the project ID required by the gsutil mb and ls commands. To find the
project ID, go to the Storage Access pane in the Google APIs Console. Your
project ID is listed under Identifying your project.
""")
class CommandOptions(HelpProvider):
"""Additional help about Access Control Lists."""
help_spec = {
# Name of command or auxiliary help info for which this help applies.
HELP_NAME : 'projects',
# List of help name aliases.
HELP_NAME_ALIASES : ['apis console', 'console', 'dev console', 'project',
'proj', 'project-id'],
# Type of help:
HELP_TYPE : HelpType.ADDITIONAL_HELP,
# One line summary of this help.
HELP_ONE_LINE_SUMMARY : 'Working with projects',
# The full help text.
HELP_TEXT : _detailed_help_text,
}
# Copyright 2012 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from gslib.help_provider import HELP_NAME
from gslib.help_provider import HELP_NAME_ALIASES
from gslib.help_provider import HELP_ONE_LINE_SUMMARY
from gslib.help_provider import HelpProvider
from gslib.help_provider import HELP_TEXT
from gslib.help_provider import HelpType
from gslib.help_provider import HELP_TYPE
_detailed_help_text = ("""
<B>OVERVIEW</B>
This section provides details about how subdirectories work in gsutil.
Most users probably don't need to know these details, and can simply use
the commands (like cp -R) that work with subdirectories. We provide this
additional documentation to help users understand how gsutil handles
subdirectories differently than most GUI / web-based tools (e.g., why
those other tools create "dir_$folder$" objects), and also to explain cost and
performance implications of the gsutil approach, for those interested in such
details.
gsutil provides the illusion of a hierarchical file tree atop the "flat"
name space supported by the Google Cloud Storage service. To the service,
the object gs://bucket/abc/def/ghi.txt is just an object that happens to have
"/" characters in its name. There are no "abc" or "abc/def" directories;
just a single object with the given name.
gsutil achieves the hierarchical file tree illusion by applying a variety of
rules, to try to make naming work the way users would expect. For example, in
order to determine whether to treat a destination URI as an object name or the
root of a directory under which objects should be copied, gsutil uses these
rules:
1. If the destination object ends with a "/", gsutil treats it as a directory.
For example, if you run the command:
gsutil cp file gs://bucket/abc/
gsutil will create the object gs://bucket/abc/file.
2. If you attempt to copy multiple source files to a destination URI, gsutil
treats the destination URI as a directory. For example, if you run
the command:
gsutil cp -R dir gs://bucket/abc
gsutil will create objects like gs://bucket/abc/dir/file1, etc. (assuming
file1 is a file under the source dir).
3. If neither of the above rules applies, gsutil performs a bucket listing to
determine if the target of the operation is a prefix match to the
specified string. For example, if you run the command:
gsutil cp file gs://bucket/abc
gsutil will make a bucket listing request for the named bucket, using
delimiter="/" and prefix="abc". It will then examine the bucket listing
results and determine whether there are objects in the bucket whose path
starts with gs://bucket/abc/, to determine whether to treat the target as
an object name or a directory name. In turn this impacts the name of the
object you create: If the above check indicates there is an "abc" directory
you will end up with the object gs://bucket/abc/file; otherwise you will
end up with the object gs://bucket/abc. (See "HOW NAMES ARE CONSTRUCTED"
under "gsutil help cp" for more details.)
This rule-based approach stands in contrast to the way many tools work, which
create objects to mark the existence of folders (such as "dir_$folder$").
gsutil understands several conventions used by such tools but does not
require such marker objects to implement naming behavior consistent with
UNIX commands.
A downside of the gsutil approach is it requires an extra bucket listing
before performing the needed cp or mv command. However those listings are
relatively inexpensive, because they use delimiter and prefix parameters to
limit result data. Moreover, gsutil makes only one bucket listing request
per cp/mv command, and thus amortizes the bucket listing cost across all
transferred objects (e.g., when performing a recursive copy of a directory
to the cloud).
""")
class CommandOptions(HelpProvider):
"""Additional help about subdirectory handling in gsutil."""
help_spec = {
# Name of command or auxiliary help info for which this help applies.
HELP_NAME : 'subdirs',
# List of help name aliases.
HELP_NAME_ALIASES : ['dirs', 'directory', 'directories', 'folder',
'folders', 'hierarchy', 'subdir', 'subdirectory',
'subdirectories'],
# Type of help:
HELP_TYPE : HelpType.ADDITIONAL_HELP,
# One line summary of this help.
HELP_ONE_LINE_SUMMARY : 'How subdirectories work in gsutil',
# The full help text.
HELP_TEXT : _detailed_help_text,
}
# Copyright 2012 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from gslib.help_provider import HELP_NAME
from gslib.help_provider import HELP_NAME_ALIASES
from gslib.help_provider import HELP_ONE_LINE_SUMMARY
from gslib.help_provider import HelpProvider
from gslib.help_provider import HELP_TEXT
from gslib.help_provider import HelpType
from gslib.help_provider import HELP_TYPE
_detailed_help_text = ("""
<B>TECHNICAL SUPPORT</B>
If you have any questions or encounter any problems with Google Cloud Storage,
please first read the FAQ at https://developers.google.com/storage/docs/faq.
If you still need help, please post your question to the gs-discussion forum
(https://developers.google.com/storage/forum) or to Stack Overflow with the
Google Cloud Storage tag
(http://stackoverflow.com/questions/tagged/google-cloud-storage). Our support
team actively monitors these forums and we'll do our best to respond. To help
us diagnose any issues you encounter, please provide these details in addition
to the description of your problem:
- The resource you are attempting to access (bucket name, object name)
- The operation you attempted (GET, PUT, etc.)
- The time and date (including timezone) at which you encountered the problem
- The tool or library you use to interact with Google Cloud Storage
- If you can use gsutil to reproduce your issue, specify the -D option to
display your request's HTTP details. Provide these details with your post
to the forum as they can help us further troubleshoot your issue.
Warning: The gsutil -D, -d, and -DD options will also print the authentication
header with authentication credentials for your Google Cloud Storage account.
Make sure to remove any "Authorization:" headers before you post HTTP details
to the forum.
If you make any local modifications to gsutil, please make sure to use
a released copy of gsutil (instead of your locally modified copy) when
providing the gsutil -D output noted above. We cannot support versions
of gsutil that include local modifications. (However, we're open to user
contributions; see "gsutil help dev".)
As an alternative to posting to the gs-discussion forum, we also
actively monitor http://stackoverflow.com for questions tagged with
"google-cloud-storage".
<B>BILLING AND ACCOUNT QUESTIONS</B>
For questions about billing or account issues, please visit
http://code.google.com/apis/console-help/#billing. If you want to cancel
billing, you can do so on the Billing pane of the Google APIs Console. For
more information, see
http://code.google.com/apis/console-help/#BillingCancelled. Caution: When you
disable billing, you also disable the Google Cloud Storage service. Make sure
you want to disable the Google Cloud Storage service before you disable
billing.
""")
class CommandOptions(HelpProvider):
"""Additional help about tech and billing support."""
help_spec = {
# Name of command or auxiliary help info for which this help applies.
HELP_NAME : 'support',
# List of help name aliases.
HELP_NAME_ALIASES : ['techsupport', 'tech support', 'technical support',
'billing', 'faq', 'questions'],
# Type of help:
HELP_TYPE : HelpType.ADDITIONAL_HELP,
# One line summary of this help.
HELP_ONE_LINE_SUMMARY : 'How to get Google Cloud Storage support',
# The full help text.
HELP_TEXT : _detailed_help_text,
}
# Copyright 2012 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from gslib.help_provider import HELP_NAME
from gslib.help_provider import HELP_NAME_ALIASES
from gslib.help_provider import HELP_ONE_LINE_SUMMARY
from gslib.help_provider import HelpProvider
from gslib.help_provider import HELP_TEXT
from gslib.help_provider import HelpType
from gslib.help_provider import HELP_TYPE
_detailed_help_text = ("""
<B>DESCRIPTION</B>
gsutil supports URI wildcards. For example, the command:
gsutil cp gs://bucket/data/abc* .
will copy all objects that start with gs://bucket/data/abc followed by any
number of characters within that subdirectory.
<B>DIRECTORY BY DIRECTORY VS RECURSIVE WILDCARDS</B>
The "*" wildcard only matches up to the end of a path within
a subdirectory. For example, if bucket contains objects
named gs://bucket/data/abcd, gs://bucket/data/abcdef,
and gs://bucket/data/abcxyx, as well as an object in a sub-directory
(gs://bucket/data/abc/def) the above gsutil cp command would match the
first 3 object names but not the last one.
If you want matches to span directory boundaries, use a '**' wildcard:
gsutil cp gs://bucket/data/abc** .
will match all four objects above.
Note that gsutil supports the same wildcards for both objects and file names.
Thus, for example:
gsutil cp data/abc* gs://bucket
will match all names in the local file system. Most command shells also
support wildcarding, so if you run the above command, your shell is probably
expanding the matches before running gsutil. However, most shells do not
support recursive wildcards ('**'), and you can cause gsutil's wildcarding
support to work for such shells by single-quoting the arguments so they
don't get interpreted by the shell before being passed to gsutil:
gsutil cp 'data/abc**' gs://bucket
<B>BUCKET WILDCARDS</B>
You can specify wildcards for bucket names. For example:
gsutil ls gs://data*.example.com
will list the contents of all buckets whose name starts with "data" and
ends with ".example.com".
You can also combine bucket and object name wildcards. For example this
command will remove all ".txt" files in any of your Google Cloud Storage
buckets:
gsutil rm gs://*/**.txt
<B>OTHER WILDCARD CHARACTERS</B>
In addition to '*', you can use these wildcards:
? Matches a single character. For example "gs://bucket/??.txt"
only matches objects with two characters followed by .txt.
[chars] Match any of the specified characters. For example
"gs://bucket/[aeiou].txt" matches objects that contain a single vowel
character followed by .txt.
[char range] Match any of the range of characters. For example
"gs://bucket/[a-m].txt" matches objects that contain letters
a, b, c, ... or m, and end with .txt.
You can combine wildcards to provide more powerful matches, for example:
gs://bucket/[a-m]??.j*g
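The wildcard semantics above (with '*' and '?' stopping at '/' boundaries
and '**' spanning them) can be sketched as a translation to a regular
expression. This is an illustrative re-implementation, not gsutil's actual
wildcard iterator:

```python
import re

def wildcard_to_regex(pattern):
    """Translate a gsutil-style wildcard into a compiled regex: '**'
    matches any characters including '/', '*' stops at '/', '?' matches
    one non-'/' character, and [...] classes pass through unchanged.
    """
    out = []
    i = 0
    while i < len(pattern):
        if pattern[i:i + 2] == '**':
            out.append('.*')
            i += 2
        elif pattern[i] == '*':
            out.append('[^/]*')
            i += 1
        elif pattern[i] == '?':
            out.append('[^/]')
            i += 1
        elif pattern[i] == '[':
            j = pattern.index(']', i)  # sketch: assumes a closing ']'
            out.append(pattern[i:j + 1])
            i = j + 1
        else:
            out.append(re.escape(pattern[i]))
            i += 1
    return re.compile(''.join(out) + '$')
```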
<B>EFFICIENCY CONSIDERATION: USING WILDCARDS OVER MANY OBJECTS</B>
It is more efficient, faster, and less network traffic-intensive
to use wildcards that have a non-wildcard object-name prefix, like:
gs://bucket/abc*.txt
than it is to use wildcards as the first part of the object name, like:
gs://bucket/*abc.txt
This is because the request for "gs://bucket/abc*.txt" asks the server
to send back the subset of results whose object names start with "abc",
and then gsutil filters the result list for objects whose name ends with
".txt". In contrast, "gs://bucket/*abc.txt" asks the server for the complete
list of objects in the bucket and then filters for those objects whose name
ends with "abc.txt". This efficiency consideration becomes increasingly
noticeable when you use buckets containing thousands or more objects. It is
sometimes possible to set up the names of your objects to fit with expected
wildcard matching patterns, to take advantage of the efficiency of doing
server-side prefix requests. See, for example "gsutil help prod" for a
concrete use case example.
<B>EFFICIENCY CONSIDERATION: USING MID-PATH WILDCARDS</B>
Suppose you have a bucket with these objects:
gs://bucket/obj1
gs://bucket/obj2
gs://bucket/obj3
gs://bucket/obj4
gs://bucket/dir1/obj5
gs://bucket/dir2/obj6
If you run the command:
gsutil ls gs://bucket/*/obj5
gsutil will perform a /-delimited top-level bucket listing and then one bucket
listing for each subdirectory, for a total of 3 bucket listings:
GET /bucket/?delimiter=/
GET /bucket/?prefix=dir1/obj5&delimiter=/
GET /bucket/?prefix=dir2/obj5&delimiter=/
The more bucket listings your wildcard requires, the slower and more expensive
it will be. The number of bucket listings required grows as:
- the number of wildcard components (e.g., "gs://bucket/a??b/c*/*/d"
has 3 wildcard components);
- the number of subdirectories that match each component; and
- the number of results (pagination is implemented using one GET
request per 1000 results, specifying markers for each).
If you want to use a mid-path wildcard, you might try instead using a
recursive wildcard, for example:
gsutil ls gs://bucket/**/obj5
This will match more objects than gs://bucket/*/obj5 (since it spans
directories), but is implemented using a delimiter-less bucket listing
request (which means fewer bucket requests, though it will list the entire
bucket and filter locally, so that could require a non-trivial amount of
network traffic).
""")
class CommandOptions(HelpProvider):
"""Additional help about wildcards."""
help_spec = {
# Name of command or auxiliary help info for which this help applies.
HELP_NAME : 'wildcards',
# List of help name aliases.
HELP_NAME_ALIASES : ['wildcard', '*', '**'],
# Type of help:
HELP_TYPE : HelpType.ADDITIONAL_HELP,
# One line summary of this help.
HELP_ONE_LINE_SUMMARY : 'Wildcard support',
# The full help text.
HELP_TEXT : _detailed_help_text,
}
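The prefix-efficiency point in the help text above can be sketched with a toy in-memory "bucket" (the helper names and the fnmatch-based filtering below are illustrative, not gsutil internals):

```python
import fnmatch

def list_bucket(objects, prefix=''):
    # Simulates a server-side listing: only names starting with the
    # prefix are sent back over the network.
    return [name for name in objects if name.startswith(prefix)]

objects = ['abc1.txt', 'abc2.txt', 'xabc.txt', 'photo.jpg']

# gs://bucket/abc*.txt: the server narrows results to the 'abc' prefix,
# and the client filters only that subset.
candidates = list_bucket(objects, prefix='abc')
matches_prefixed = fnmatch.filter(candidates, 'abc*.txt')

# gs://bucket/*abc.txt: no usable prefix, so the complete listing is
# transferred and filtered client-side.
candidates_full = list_bucket(objects)
matches_leading = fnmatch.filter(candidates_full, '*abc.txt')
```

With thousands of objects, the difference between `len(candidates)` and `len(candidates_full)` is what makes the prefixed form cheaper.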
# Copyright 2012 Google Inc. All Rights Reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish, dis-
# tribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the fol-
# lowing conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
import time
class BucketListingRef(object):
"""
Container that holds a reference to one result from a bucket listing, allowing
polymorphic iteration over wildcard-iterated URIs, Keys, or Prefixes. At a
minimum, every reference contains a StorageUri. If the reference came from a
bucket listing (as opposed to a manually instantiated ref that might populate
only the StorageUri), it will additionally contain either a Key or a Prefix,
depending on whether it was a reference to an object or was just a prefix of a
path (i.e., bucket subdirectory). The latter happens when the bucket was
listed using delimiter='/'.
Note that Keys are shallow-populated, based on the contents extracted from
parsing a bucket listing. This includes name, length, and other fields
(basically, the info listed by gsutil ls -l), but does not include information
like ACL and location (which require separate server requests, which is why
there's a separate gsutil ls -L option to get this more detailed info).
"""
def __init__(self, uri, key=None, prefix=None, headers=None):
"""Instantiate BucketListingRef from uri and (if available) key or prefix.
Args:
uri: StorageUri for the object (required).
key: Key for the object, or None if not available.
prefix: Prefix for the subdir, or None if not available.
headers: Dictionary containing optional HTTP headers to pass to boto
(which happens when GetKey() is called on a BucketListingRef that
has no constructor-populated Key), or None if not available.
At most one of key and prefix can be populated.
"""
assert key is None or prefix is None
self.uri = uri
self.key = key
self.prefix = prefix
self.headers = headers or {}
def GetUri(self):
"""Get URI form of listed URI.
Returns:
StorageUri.
"""
return self.uri
def GetUriString(self):
"""Get string URI form of listed URI.
Returns:
String.
"""
return self.uri.uri
def NamesBucket(self):
"""Determines if this BucketListingRef names a bucket.
Returns:
bool indicator.
"""
return self.key is None and self.prefix is None and self.uri.names_bucket()
def IsLatest(self):
"""Determines if this BucketListingRef names the latest version of an
object.
Returns:
bool indicator.
"""
return hasattr(self.uri, 'is_latest') and self.uri.is_latest
def GetRStrippedUriString(self):
"""Get string URI form of listed URI, stripped of any right trailing
delims, and without version string.
Returns:
String.
"""
return self.uri.versionless_uri.rstrip('/')
def HasKey(self):
"""Return bool indicator of whether this BucketListingRef has a Key."""
return bool(self.key)
def HasPrefix(self):
"""Return bool indicator of whether this BucketListingRef has a Prefix."""
return bool(self.prefix)
def GetKey(self):
"""Get Key form of listed URI.
Returns:
Subclass of boto.s3.key.Key.
Raises:
BucketListingRefException: for bucket-only uri.
"""
# For gsutil ls -l gs://bucket self.key will be populated from (boto)
# parsing the bucket listing. But as noted and handled below there are
# cases where self.key isn't populated.
if not self.key:
if not self.uri.names_object():
raise BucketListingRefException(
'Attempt to call GetKey() on Key-less BucketListingRef (uri=%s) ' %
self.uri)
# This case happens when we do gsutil ls -l on an object name-ful
# StorageUri with no object-name wildcard. Since the ls command
# implementation only reads bucket info we need to read the object
# for this case.
self.key = self.uri.get_key(validate=False, headers=self.headers)
# When we retrieve the object this way its last_modified timestamp
# is formatted in RFC 1123 format, which is different from when we
# retrieve from the bucket listing (which uses ISO 8601 format), so
# convert so we consistently return ISO 8601 format.
tuple_time = (time.strptime(self.key.last_modified,
'%a, %d %b %Y %H:%M:%S %Z'))
self.key.last_modified = time.strftime('%Y-%m-%dT%H:%M:%S', tuple_time)
return self.key
def GetPrefix(self):
"""Get Prefix form of listed URI.
Returns:
boto.s3.prefix.Prefix.
Raises:
BucketListingRefException: if this object has no Prefix.
"""
if not self.prefix:
raise BucketListingRefException(
'Attempt to call GetPrefix() on Prefix-less BucketListingRef '
'(uri=%s)' % self.uri)
return self.prefix
def __repr__(self):
"""Returns string representation of BucketListingRef."""
return 'BucketListingRef(%s, HasKey=%s, HasPrefix=%s)' % (
self.uri, self.HasKey(), self.HasPrefix())
class BucketListingRefException(StandardError):
"""Exception thrown for invalid BucketListingRef requests."""
def __init__(self, reason):
StandardError.__init__(self)
self.reason = reason
def __repr__(self):
return 'BucketListingRefException: %s' % self.reason
def __str__(self):
return 'BucketListingRefException: %s' % self.reason
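GetKey() above normalizes last_modified from RFC 1123 (direct object GET) to ISO 8601 (bucket listing) via time.strptime/strftime; the conversion in isolation, with a made-up sample timestamp:

```python
import time

# RFC 1123 format, as returned when the object is fetched directly.
rfc1123 = 'Fri, 01 Mar 2013 12:34:56 GMT'
tuple_time = time.strptime(rfc1123, '%a, %d %b %Y %H:%M:%S %Z')

# Re-rendered in ISO 8601, matching what a bucket listing reports.
iso8601 = time.strftime('%Y-%m-%dT%H:%M:%S', tuple_time)
```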
#!/usr/bin/env python
# coding=utf8
# Copyright 2011 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Class that runs a named gsutil command."""
import boto
import os
from boto.storage_uri import BucketStorageUri
from gslib.command import Command
from gslib.command import COMMAND_NAME
from gslib.command import COMMAND_NAME_ALIASES
from gslib.exception import CommandException
class CommandRunner(object):
def __init__(self, gsutil_bin_dir, boto_lib_dir, config_file_list,
gsutil_ver, bucket_storage_uri_class=BucketStorageUri):
"""
Args:
gsutil_bin_dir: Bin dir from which gsutil is running.
boto_lib_dir: Lib dir where boto runs.
config_file_list: Config file list returned by _GetBotoConfigFileList().
gsutil_ver: Version string of currently running gsutil command.
bucket_storage_uri_class: Class to instantiate for cloud StorageUris.
Settable for testing/mocking.
"""
self.gsutil_bin_dir = gsutil_bin_dir
self.boto_lib_dir = boto_lib_dir
self.config_file_list = config_file_list
self.gsutil_ver = gsutil_ver
self.bucket_storage_uri_class = bucket_storage_uri_class
self.command_map = self._LoadCommandMap()
def _LoadCommandMap(self):
"""Returns dict mapping each command_name to implementing class."""
# Walk gslib/commands and find all commands.
commands_dir = os.path.join(self.gsutil_bin_dir, 'gslib', 'commands')
for f in os.listdir(commands_dir):
# Handles no-extension files, etc.
(module_name, ext) = os.path.splitext(f)
if ext == '.py':
__import__('gslib.commands.%s' % module_name)
command_map = {}
# Only include Command subclasses in the dict.
for command in Command.__subclasses__():
command_map[command.command_spec[COMMAND_NAME]] = command
for command_name_aliases in command.command_spec[COMMAND_NAME_ALIASES]:
command_map[command_name_aliases] = command
return command_map
def RunNamedCommand(self, command_name, args=None, headers=None, debug=0,
parallel_operations=False, test_method=None):
"""Runs the named command. Used by gsutil main, commands built atop
other commands, and tests .
Args:
command_name: The name of the command being run.
args: Command-line args (arg0 is the first actual arg, not the command name as in bash).
headers: Dictionary containing optional HTTP headers to pass to boto.
debug: Debug level to pass in to boto connection (range 0..3).
parallel_operations: Should command operations be executed in parallel?
test_method: Optional general purpose method for testing purposes.
Application and semantics of this method will vary by
command and test type.
Raises:
CommandException: if errors encountered.
"""
if not args:
args = []
# Include api_version header in all commands.
api_version = boto.config.get_value('GSUtil', 'default_api_version', '1')
if not headers:
headers = {}
headers['x-goog-api-version'] = api_version
if command_name not in self.command_map:
raise CommandException('Invalid command "%s".' % command_name)
command_class = self.command_map[command_name]
command_inst = command_class(self, args, headers, debug,
parallel_operations, self.gsutil_bin_dir,
self.boto_lib_dir, self.config_file_list,
self.gsutil_ver, self.bucket_storage_uri_class,
test_method)
return command_inst.RunCommand()
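_LoadCommandMap above relies on the side effect of importing each module: every Command subclass then appears in Command.__subclasses__(). A minimal, self-contained sketch of that registration pattern (the class names and spec keys here are invented for illustration):

```python
class Command(object):
    command_spec = {}

class CatCommand(Command):
    command_spec = {'name': 'cat', 'aliases': []}

class RemoveCommand(Command):
    command_spec = {'name': 'rm', 'aliases': ['del', 'remove']}

def load_command_map():
    # Map each command name, plus every alias, to its implementing class.
    command_map = {}
    for command in Command.__subclasses__():
        command_map[command.command_spec['name']] = command
        for alias in command.command_spec['aliases']:
            command_map[alias] = command
    return command_map

command_map = load_command_map()
```

In gsutil the subclasses are registered implicitly by the `__import__` loop over gslib/commands; here they are simply defined inline.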
# Copyright 2011 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Package marker file."""
# Copyright 2011 Google Inc. All Rights Reserved.
# Copyright 2011, Nexenta Systems Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
from gslib.command import Command
from gslib.command import COMMAND_NAME
from gslib.command import COMMAND_NAME_ALIASES
from gslib.command import CONFIG_REQUIRED
from gslib.command import FILE_URIS_OK
from gslib.command import MAX_ARGS
from gslib.command import MIN_ARGS
from gslib.command import PROVIDER_URIS_OK
from gslib.command import SUPPORTED_SUB_ARGS
from gslib.command import URIS_START_ARG
from gslib.exception import CommandException
from gslib.help_provider import HELP_NAME
from gslib.help_provider import HELP_NAME_ALIASES
from gslib.help_provider import HELP_ONE_LINE_SUMMARY
from gslib.help_provider import HELP_TEXT
from gslib.help_provider import HelpType
from gslib.help_provider import HELP_TYPE
from gslib.util import NO_MAX
from gslib.wildcard_iterator import ContainsWildcard
_detailed_help_text = ("""
<B>SYNOPSIS</B>
gsutil cat [-h] uri...
<B>DESCRIPTION</B>
The cat command outputs the contents of one or more URIs to stdout.
It is equivalent to doing:
gsutil cp uri... -
(The final '-' causes gsutil to stream the output to stdout.)
<B>OPTIONS</B>
-h Prints short header for each object. For example:
gsutil cat -h gs://bucket/meeting_notes/2012_Feb/*.txt
""")
class CatCommand(Command):
"""Implementation of gsutil cat command."""
# Command specification (processed by parent class).
command_spec = {
# Name of command.
COMMAND_NAME : 'cat',
# List of command name aliases.
COMMAND_NAME_ALIASES : [],
# Min number of args required by this command.
MIN_ARGS : 0,
# Max number of args required by this command, or NO_MAX.
MAX_ARGS : NO_MAX,
# Getopt-style string specifying acceptable sub args.
SUPPORTED_SUB_ARGS : 'hv',
# True if file URIs acceptable for this command.
FILE_URIS_OK : False,
# True if provider-only URIs acceptable for this command.
PROVIDER_URIS_OK : False,
# Index in args of first URI arg.
URIS_START_ARG : 0,
# True if must configure gsutil before running command.
CONFIG_REQUIRED : True,
}
help_spec = {
# Name of command or auxiliary help info for which this help applies.
HELP_NAME : 'cat',
# List of help name aliases.
HELP_NAME_ALIASES : [],
# Type of help:
HELP_TYPE : HelpType.COMMAND_HELP,
# One line summary of this help.
HELP_ONE_LINE_SUMMARY : 'Concatenate object content to stdout',
# The full help text.
HELP_TEXT : _detailed_help_text,
}
# Command entry point.
def RunCommand(self):
show_header = False
if self.sub_opts:
for o, unused_a in self.sub_opts:
if o == '-h':
show_header = True
elif o == '-v':
self.THREADED_LOGGER.info('WARNING: The %s -v option is no longer'
' needed, and will eventually be removed.\n'
% self.command_name)
printed_one = False
# Swap out stdout so that all data other than the object contents
# goes to stderr.
cat_outfd = sys.stdout
sys.stdout = sys.stderr
did_some_work = False
for uri_str in self.args:
for uri in self.WildcardIterator(uri_str).IterUris():
if not uri.names_object():
raise CommandException('"%s" command must specify objects.' %
self.command_name)
did_some_work = True
if show_header:
if printed_one:
print
print '==> %s <==' % uri
printed_one = True
key = uri.get_key(False, self.headers)
key.get_file(cat_outfd, self.headers)
sys.stdout = cat_outfd
if not did_some_work:
raise CommandException('No URIs matched')
return 0
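The stdout/stderr swap in RunCommand can be isolated into a small sketch (the function and object names below are illustrative): the saved file object keeps receiving the payload, while anything printed in the meantime is diverted to stderr.

```python
import io
import sys

def cat_payload(chunks, show_header=False):
    cat_outfd = sys.stdout    # payload keeps going to the original stdout
    sys.stdout = sys.stderr   # prints from here on land on stderr instead
    try:
        for name, data in chunks:
            if show_header:
                cat_outfd.write('==> %s <==\n' % name)
            cat_outfd.write(data)
    finally:
        sys.stdout = cat_outfd  # always restore

# Demo: capture "stdout" in a StringIO to observe what the user would see.
out = io.StringIO()
saved = sys.stdout
sys.stdout = out
cat_payload([('gs://bucket/a.txt', 'hello\n')], show_header=True)
sys.stdout = saved
# out.getvalue() now holds the header line followed by the payload.
```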
# Copyright 2011 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from gslib.command import Command
from gslib.command import COMMAND_NAME
from gslib.command import COMMAND_NAME_ALIASES
from gslib.command import CONFIG_REQUIRED
from gslib.command import FILE_URIS_OK
from gslib.command import MAX_ARGS
from gslib.command import MIN_ARGS
from gslib.command import PROVIDER_URIS_OK
from gslib.command import SUPPORTED_SUB_ARGS
from gslib.command import URIS_START_ARG
from gslib.exception import CommandException
from gslib.help_provider import HELP_NAME
from gslib.help_provider import HELP_NAME_ALIASES
from gslib.help_provider import HELP_ONE_LINE_SUMMARY
from gslib.help_provider import HELP_TEXT
from gslib.help_provider import HelpType
from gslib.help_provider import HELP_TYPE
from gslib.util import NO_MAX
_detailed_help_text = ("""
<B>SYNOPSIS</B>
gsutil disablelogging uri...
<B>DESCRIPTION</B>
This command will disable access logging of the buckets named by the
specified uris. All URIs must name buckets (e.g., gs://bucket).
No logging data is removed from the log buckets when you disable logging,
but Google Cloud Storage will stop delivering new logs once you have
run this command.
""")
class DisableLoggingCommand(Command):
"""Implementation of disablelogging command."""
# Command specification (processed by parent class).
command_spec = {
# Name of command.
COMMAND_NAME : 'disablelogging',
# List of command name aliases.
COMMAND_NAME_ALIASES : [],
# Min number of args required by this command.
MIN_ARGS : 1,
# Max number of args required by this command, or NO_MAX.
MAX_ARGS : NO_MAX,
# Getopt-style string specifying acceptable sub args.
SUPPORTED_SUB_ARGS : '',
# True if file URIs acceptable for this command.
FILE_URIS_OK : False,
# True if provider-only URIs acceptable for this command.
PROVIDER_URIS_OK : False,
# Index in args of first URI arg.
URIS_START_ARG : 0,
# True if must configure gsutil before running command.
CONFIG_REQUIRED : True,
}
help_spec = {
# Name of command or auxiliary help info for which this help applies.
HELP_NAME : 'disablelogging',
# List of help name aliases.
HELP_NAME_ALIASES : [],
# Type of help:
HELP_TYPE : HelpType.COMMAND_HELP,
# One line summary of this help.
HELP_ONE_LINE_SUMMARY : 'Disable logging on buckets',
# The full help text.
HELP_TEXT : _detailed_help_text,
}
# Command entry point.
def RunCommand(self):
did_some_work = False
for uri_str in self.args:
for uri in self.WildcardIterator(uri_str).IterUris():
if uri.names_object():
raise CommandException('disablelogging cannot be applied to objects')
did_some_work = True
print 'Disabling logging on %s...' % uri
self.proj_id_handler.FillInProjectHeaderIfNeeded('disablelogging',
uri, self.headers)
uri.disable_logging(False, self.headers)
if not did_some_work:
raise CommandException('No URIs matched')
return 0
# Copyright 2011 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from gslib.command import Command
from gslib.command import COMMAND_NAME
from gslib.command import COMMAND_NAME_ALIASES
from gslib.command import CONFIG_REQUIRED
from gslib.command import FILE_URIS_OK
from gslib.command import MAX_ARGS
from gslib.command import MIN_ARGS
from gslib.command import PROVIDER_URIS_OK
from gslib.command import SUPPORTED_SUB_ARGS
from gslib.command import URIS_START_ARG
from gslib.exception import CommandException
from gslib.help_provider import HELP_NAME
from gslib.help_provider import HELP_NAME_ALIASES
from gslib.help_provider import HELP_ONE_LINE_SUMMARY
from gslib.help_provider import HELP_TEXT
from gslib.help_provider import HelpType
from gslib.help_provider import HELP_TYPE
from gslib.util import NO_MAX
_detailed_help_text = ("""
<B>SYNOPSIS</B>
gsutil enablelogging -b logging_bucket [-o log_object_prefix] uri...
<B>DESCRIPTION</B>
Google Cloud Storage offers access logs and storage data in the form of
CSV files that you can download and view. Access logs provide information
for all of the requests made on a specified bucket during the last 24
hours, while storage logs provide information about the storage consumption
of that bucket over the same period. The log and storage data files are
automatically created as new objects in a bucket that you specify, at
24-hour intervals.
The gsutil enablelogging command will enable access logging of the
buckets named by the specified uris, outputting log files in the specified
logging_bucket. logging_bucket must already exist, and all URIs must name
buckets (e.g., gs://bucket). For example, the command:
gsutil enablelogging -b gs://my_logging_bucket -o AccessLog \\
gs://my_bucket1 gs://my_bucket2
will cause all read and write activity to objects in gs://my_bucket1 and
gs://my_bucket2 to be logged to objects prefixed with the name "AccessLog",
with those log objects written to the bucket gs://my_logging_bucket.
Note that log data may contain sensitive information, so you should make
sure to set an appropriate default bucket ACL to protect that data. (See
"gsutil help setdefacl".)
You can check logging status using the gsutil getlogging command. For log
format details see "gsutil help getlogging".
<B>OPTIONS</B>
-b bucket Specifies the log bucket.
-o prefix Specifies the prefix for log object names. Default value
is the bucket name.
""")
class EnableLoggingCommand(Command):
"""Implementation of gsutil enablelogging command."""
# Command specification (processed by parent class).
command_spec = {
# Name of command.
COMMAND_NAME : 'enablelogging',
# List of command name aliases.
COMMAND_NAME_ALIASES : [],
# Min number of args required by this command.
MIN_ARGS : 1,
# Max number of args required by this command, or NO_MAX.
MAX_ARGS : NO_MAX,
# Getopt-style string specifying acceptable sub args.
SUPPORTED_SUB_ARGS : 'b:o:',
# True if file URIs acceptable for this command.
FILE_URIS_OK : False,
# True if provider-only URIs acceptable for this command.
PROVIDER_URIS_OK : False,
# Index in args of first URI arg.
URIS_START_ARG : 0,
# True if must configure gsutil before running command.
CONFIG_REQUIRED : True,
}
help_spec = {
# Name of command or auxiliary help info for which this help applies.
HELP_NAME : 'enablelogging',
# List of help name aliases.
HELP_NAME_ALIASES : ['logging', 'logs', 'log'],
# Type of help:
HELP_TYPE : HelpType.COMMAND_HELP,
# One line summary of this help.
HELP_ONE_LINE_SUMMARY : 'Enable logging on buckets',
# The full help text.
HELP_TEXT : _detailed_help_text,
}
# Command entry point.
def RunCommand(self):
# Disallow multi-provider enablelogging calls, because the schemas
# differ.
storage_uri = self.UrisAreForSingleProvider(self.args)
if not storage_uri:
raise CommandException('enablelogging command spanning providers not '
'allowed.')
target_bucket_uri = None
target_prefix = None
for opt, opt_arg in self.sub_opts:
if opt == '-b':
target_bucket_uri = self.suri_builder.StorageUri(opt_arg)
if opt == '-o':
target_prefix = opt_arg
if not target_bucket_uri:
raise CommandException('enablelogging requires \'-b <log_bucket>\' '
'option')
if not target_bucket_uri.names_bucket():
raise CommandException('-b option must specify a bucket uri')
did_some_work = False
for uri_str in self.args:
for uri in self.WildcardIterator(uri_str).IterUris():
if uri.names_object():
raise CommandException('enablelogging cannot be applied to objects')
did_some_work = True
print 'Enabling logging on %s...' % uri
self.proj_id_handler.FillInProjectHeaderIfNeeded(
'enablelogging', storage_uri, self.headers)
uri.enable_logging(target_bucket_uri.bucket_name, target_prefix, False,
self.headers)
if not did_some_work:
raise CommandException('No URIs matched')
return 0
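SUPPORTED_SUB_ARGS ('b:o:') is a standard getopt spec: each colon marks an option that takes a value. The option parsing reduced to a standalone call (the argument values below are made up):

```python
import getopt

argv = ['-b', 'gs://my_logging_bucket', '-o', 'AccessLog', 'gs://my_bucket1']

# sub_opts pairs each flag with its value; remaining_args holds the URIs.
sub_opts, remaining_args = getopt.getopt(argv, 'b:o:')
```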
# Copyright 2011 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from gslib.command import Command
from gslib.command import COMMAND_NAME
from gslib.command import COMMAND_NAME_ALIASES
from gslib.command import CONFIG_REQUIRED
from gslib.command import FILE_URIS_OK
from gslib.command import MAX_ARGS
from gslib.command import MIN_ARGS
from gslib.command import PROVIDER_URIS_OK
from gslib.command import SUPPORTED_SUB_ARGS
from gslib.command import URIS_START_ARG
from gslib.help_provider import HELP_NAME
from gslib.help_provider import HELP_NAME_ALIASES
from gslib.help_provider import HELP_ONE_LINE_SUMMARY
from gslib.help_provider import HELP_TEXT
from gslib.help_provider import HelpType
from gslib.help_provider import HELP_TYPE
_detailed_help_text = ("""
<B>SYNOPSIS</B>
gsutil getacl uri
<B>DESCRIPTION</B>
Gets ACL XML for a bucket or object, which you can save and edit for the
setacl command.
""")
class GetAclCommand(Command):
"""Implementation of gsutil getacl command."""
# Command specification (processed by parent class).
command_spec = {
# Name of command.
COMMAND_NAME : 'getacl',
# List of command name aliases.
COMMAND_NAME_ALIASES : [],
# Min number of args required by this command.
MIN_ARGS : 1,
# Max number of args required by this command, or NO_MAX.
MAX_ARGS : 1,
# Getopt-style string specifying acceptable sub args.
SUPPORTED_SUB_ARGS : 'v',
# True if file URIs acceptable for this command.
FILE_URIS_OK : False,
# True if provider-only URIs acceptable for this command.
PROVIDER_URIS_OK : False,
# Index in args of first URI arg.
URIS_START_ARG : 0,
# True if must configure gsutil before running command.
CONFIG_REQUIRED : True,
}
help_spec = {
# Name of command or auxiliary help info for which this help applies.
HELP_NAME : 'getacl',
# List of help name aliases.
HELP_NAME_ALIASES : [],
# Type of help:
HELP_TYPE : HelpType.COMMAND_HELP,
# One line summary of this help.
HELP_ONE_LINE_SUMMARY : 'Get ACL XML for a bucket or object',
# The full help text.
HELP_TEXT : _detailed_help_text,
}
# Command entry point.
def RunCommand(self):
self.GetAclCommandHelper()
return 0
# Copyright 2012 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
import xml.dom.minidom
from gslib.command import Command
from gslib.command import COMMAND_NAME
from gslib.command import COMMAND_NAME_ALIASES
from gslib.command import CONFIG_REQUIRED
from gslib.command import FILE_URIS_OK
from gslib.command import MAX_ARGS
from gslib.command import MIN_ARGS
from gslib.command import PROVIDER_URIS_OK
from gslib.command import SUPPORTED_SUB_ARGS
from gslib.command import URIS_START_ARG
from gslib.exception import CommandException
from gslib.help_provider import HELP_NAME
from gslib.help_provider import HELP_NAME_ALIASES
from gslib.help_provider import HELP_ONE_LINE_SUMMARY
from gslib.help_provider import HELP_TEXT
from gslib.help_provider import HelpType
from gslib.help_provider import HELP_TYPE
_detailed_help_text = ("""
<B>SYNOPSIS</B>
gsutil getcors uri
<B>DESCRIPTION</B>
Gets the Cross-Origin Resource Sharing (CORS) configuration for a given
bucket. This command is supported for buckets only, not objects, and you
can get the CORS settings for only one bucket at a time. The output from
getcors can be redirected into a file, edited and then updated via the
setcors sub-command. The CORS configuration is expressed by an XML document
with the following structure:
<?xml version="1.0" ?>
<CorsConfig>
<Cors>
<Origins>
<Origin>origin1.example.com</Origin>
</Origins>
<Methods>
<Method>GET</Method>
</Methods>
<ResponseHeaders>
<ResponseHeader>Content-Type</ResponseHeader>
</ResponseHeaders>
</Cors>
</CorsConfig>
For more info about CORS, see http://www.w3.org/TR/cors/.
""")
class GetCorsCommand(Command):
"""Implementation of gsutil getcors command."""
# Command specification (processed by parent class).
command_spec = {
# Name of command.
COMMAND_NAME : 'getcors',
# List of command name aliases.
COMMAND_NAME_ALIASES : [],
# Min number of args required by this command.
MIN_ARGS : 1,
# Max number of args required by this command, or NO_MAX.
MAX_ARGS : 1,
# Getopt-style string specifying acceptable sub args.
SUPPORTED_SUB_ARGS : '',
# True if file URIs acceptable for this command.
FILE_URIS_OK : False,
# True if provider-only URIs acceptable for this command.
PROVIDER_URIS_OK : False,
# Index in args of first URI arg.
URIS_START_ARG : 0,
# True if must configure gsutil before running command.
CONFIG_REQUIRED : True,
}
help_spec = {
# Name of command or auxiliary help info for which this help applies.
HELP_NAME : 'getcors',
# List of help name aliases.
HELP_NAME_ALIASES : [],
# Type of help:
HELP_TYPE : HelpType.COMMAND_HELP,
# One line summary of this help.
HELP_ONE_LINE_SUMMARY : 'Get a bucket\'s CORS XML document',
# The full help text.
HELP_TEXT : _detailed_help_text,
}
# Command entry point.
def RunCommand(self):
# Wildcarding is allowed but must resolve to just one bucket.
uris = list(self.WildcardIterator(self.args[0]).IterUris())
if len(uris) == 0:
raise CommandException('No URIs matched')
if len(uris) != 1:
raise CommandException('%s matched more than one URI, which is not\n'
'allowed by the %s command' % (self.args[0], self.command_name))
uri = uris[0]
if not uri.names_bucket():
raise CommandException('"%s" command must specify a bucket' %
self.command_name)
cors = uri.get_cors(False, self.headers)
# Pretty-print the XML to make it more easily human editable.
parsed_xml = xml.dom.minidom.parseString(cors.to_xml().encode('utf-8'))
sys.stdout.write(parsed_xml.toprettyxml(indent=' '))
return 0
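The pretty-printing step at the end of RunCommand uses xml.dom.minidom directly; in isolation (the CORS document below has the same shape as the example in the help text):

```python
import xml.dom.minidom

cors_xml = ('<CorsConfig><Cors><Origins><Origin>origin1.example.com</Origin>'
            '</Origins><Methods><Method>GET</Method></Methods></Cors>'
            '</CorsConfig>')
parsed = xml.dom.minidom.parseString(cors_xml)

# toprettyxml emits an XML declaration and indents each nested element,
# making the document easy to edit by hand before feeding it to setcors.
pretty = parsed.toprettyxml(indent='  ')
```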
# Copyright 2011 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from gslib.command import Command
from gslib.command import COMMAND_NAME
from gslib.command import COMMAND_NAME_ALIASES
from gslib.command import CONFIG_REQUIRED
from gslib.command import FILE_URIS_OK
from gslib.command import MAX_ARGS
from gslib.command import MIN_ARGS
from gslib.command import PROVIDER_URIS_OK
from gslib.command import SUPPORTED_SUB_ARGS
from gslib.command import URIS_START_ARG
from gslib.exception import CommandException
from gslib.help_provider import HELP_NAME
from gslib.help_provider import HELP_NAME_ALIASES
from gslib.help_provider import HELP_ONE_LINE_SUMMARY
from gslib.help_provider import HELP_TEXT
from gslib.help_provider import HelpType
from gslib.help_provider import HELP_TYPE
_detailed_help_text = ("""
<B>SYNOPSIS</B>
  gsutil getdefacl uri


<B>DESCRIPTION</B>
  Gets the default ACL XML for a bucket, which you can save and edit
  for use with the setdefacl command.
""")
class GetDefAclCommand(Command):
  """Implementation of gsutil getdefacl command."""

  # Command specification (processed by parent class).
  command_spec = {
    # Name of command.
    COMMAND_NAME : 'getdefacl',
    # List of command name aliases.
    COMMAND_NAME_ALIASES : [],
    # Min number of args required by this command.
    MIN_ARGS : 1,
    # Max number of args required by this command, or NO_MAX.
    MAX_ARGS : 1,
    # Getopt-style string specifying acceptable sub args.
    SUPPORTED_SUB_ARGS : '',
    # True if file URIs acceptable for this command.
    FILE_URIS_OK : False,
    # True if provider-only URIs acceptable for this command.
    PROVIDER_URIS_OK : False,
    # Index in args of first URI arg.
    URIS_START_ARG : 0,
    # True if must configure gsutil before running command.
    CONFIG_REQUIRED : True,
  }
  help_spec = {
    # Name of command or auxiliary help info for which this help applies.
    HELP_NAME : 'getdefacl',
    # List of help name aliases.
    HELP_NAME_ALIASES : [],
    # Type of help:
    HELP_TYPE : HelpType.COMMAND_HELP,
    # One line summary of this help.
    HELP_ONE_LINE_SUMMARY : 'Get default ACL XML for a bucket',
    # The full help text.
    HELP_TEXT : _detailed_help_text,
  }

  # Command entry point.
  def RunCommand(self):
    if not self.suri_builder.StorageUri(self.args[-1]).names_bucket():
      raise CommandException('URI must name a bucket for the %s command' %
                             self.command_name)
    self.GetAclCommandHelper()
    return 0
# Copyright 2011 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from gslib.command import Command
from gslib.command import COMMAND_NAME
from gslib.command import COMMAND_NAME_ALIASES
from gslib.command import CONFIG_REQUIRED
from gslib.command import FILE_URIS_OK
from gslib.command import MAX_ARGS
from gslib.command import MIN_ARGS
from gslib.command import PROVIDER_URIS_OK
from gslib.command import SUPPORTED_SUB_ARGS
from gslib.command import URIS_START_ARG
from gslib.help_provider import HELP_NAME
from gslib.help_provider import HELP_NAME_ALIASES
from gslib.help_provider import HELP_ONE_LINE_SUMMARY
from gslib.help_provider import HELP_TEXT
from gslib.help_provider import HelpType
from gslib.help_provider import HELP_TYPE
_detailed_help_text = ("""
<B>SYNOPSIS</B>
  gsutil getlogging uri


<B>DESCRIPTION</B>
  If logging is enabled for the specified bucket uri, the server responds
  with a <Logging> XML element that looks something like this:

    <?xml version="1.0" ?>
    <Logging>
      <LogBucket>
        logs-bucket
      </LogBucket>
      <LogObjectPrefix>
        my-logs-enabled-bucket
      </LogObjectPrefix>
    </Logging>

  If logging is not enabled, an empty <Logging> element is returned.

  You can download log data from your log bucket using the gsutil cp command.


<B>ACCESS LOG FIELDS</B>
  Field               Type     Description
  time_micros         integer  The time that the request was completed, in
                               microseconds since the Unix epoch.
  c_ip                string   The IP address from which the request was
                               made. The "c" prefix indicates that this is
                               information about the client.
  c_ip_type           integer  The type of IP in the c_ip field:
                               A value of 1 indicates an IPV4 address.
                               A value of 2 indicates an IPV6 address.
  c_ip_region         string   Reserved for future use.
  cs_method           string   The HTTP method of this request. The "cs"
                               prefix indicates that this information was
                               sent from the client to the server.
  cs_uri              string   The URI of the request.
  sc_status           integer  The HTTP status code the server sent in
                               response. The "sc" prefix indicates that this
                               information was sent from the server to the
                               client.
  cs_bytes            integer  The number of bytes sent in the request.
  sc_bytes            integer  The number of bytes sent in the response.
  time_taken_micros   integer  The time it took to serve the request in
                               microseconds.
  cs_host             string   The host in the original request.
  cs_referrer         string   The HTTP referrer for the request.
  cs_user_agent       string   The User-Agent of the request.
  s_request_id        string   The request identifier.
  cs_operation        string   The Google Cloud Storage operation, e.g.
                               GET_Object.
  cs_bucket           string   The bucket specified in the request. If this
                               is a list buckets request, this can be null.
  cs_object           string   The object specified in this request. This
                               can be null.


<B>STORAGE DATA FIELDS</B>
  Field               Type     Description
  bucket              string   The name of the bucket.
  storage_byte_hours  integer  The average size of the bucket, in byte-hours.
""")
class GetLoggingCommand(Command):
  """Implementation of gsutil getlogging command."""

  # Command specification (processed by parent class).
  command_spec = {
    # Name of command.
    COMMAND_NAME : 'getlogging',
    # List of command name aliases.
    COMMAND_NAME_ALIASES : [],
    # Min number of args required by this command.
    MIN_ARGS : 1,
    # Max number of args required by this command, or NO_MAX.
    MAX_ARGS : 1,
    # Getopt-style string specifying acceptable sub args.
    SUPPORTED_SUB_ARGS : '',
    # True if file URIs acceptable for this command.
    FILE_URIS_OK : False,
    # True if provider-only URIs acceptable for this command.
    PROVIDER_URIS_OK : False,
    # Index in args of first URI arg.
    URIS_START_ARG : 0,
    # True if must configure gsutil before running command.
    CONFIG_REQUIRED : True,
  }
  help_spec = {
    # Name of command or auxiliary help info for which this help applies.
    HELP_NAME : 'getlogging',
    # List of help name aliases.
    HELP_NAME_ALIASES : [],
    # Type of help:
    HELP_TYPE : HelpType.COMMAND_HELP,
    # One line summary of this help.
    HELP_ONE_LINE_SUMMARY : 'Get logging configuration for a bucket',
    # The full help text.
    HELP_TEXT : _detailed_help_text,
  }

  # Command entry point.
  def RunCommand(self):
    self.GetXmlSubresource('logging', self.args[0])
    return 0
# Copyright 2012 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from gslib.command import Command
from gslib.command import COMMAND_NAME
from gslib.command import COMMAND_NAME_ALIASES
from gslib.command import CONFIG_REQUIRED
from gslib.command import FILE_URIS_OK
from gslib.command import MAX_ARGS
from gslib.command import MIN_ARGS
from gslib.command import PROVIDER_URIS_OK
from gslib.command import SUPPORTED_SUB_ARGS
from gslib.command import URIS_START_ARG
from gslib.exception import CommandException
from gslib.help_provider import HELP_NAME
from gslib.help_provider import HELP_NAME_ALIASES
from gslib.help_provider import HELP_ONE_LINE_SUMMARY
from gslib.help_provider import HELP_TEXT
from gslib.help_provider import HelpType
from gslib.help_provider import HELP_TYPE
_detailed_help_text = ("""
<B>SYNOPSIS</B>
  gsutil getversioning bucket_uri


<B>DESCRIPTION</B>
  The Versioning Configuration feature enables you to configure a Google
  Cloud Storage bucket to keep old versions of objects.

  The gsutil getversioning command gets the versioning configuration for a
  bucket, and displays an XML representation of the configuration.

  In Google Cloud Storage, this would look like:

    <?xml version="1.0" ?>
    <VersioningConfiguration>
      <Status>
        Enabled
      </Status>
    </VersioningConfiguration>
""")
class GetVersioningCommand(Command):
  """Implementation of gsutil getversioning command."""

  # Command specification (processed by parent class).
  command_spec = {
    # Name of command.
    COMMAND_NAME : 'getversioning',
    # List of command name aliases.
    COMMAND_NAME_ALIASES : [],
    # Min number of args required by this command.
    MIN_ARGS : 1,
    # Max number of args required by this command, or NO_MAX.
    MAX_ARGS : 1,
    # Getopt-style string specifying acceptable sub args.
    SUPPORTED_SUB_ARGS : '',
    # True if file URIs acceptable for this command.
    FILE_URIS_OK : False,
    # True if provider-only URIs acceptable for this command.
    PROVIDER_URIS_OK : False,
    # Index in args of first URI arg.
    URIS_START_ARG : 1,
    # True if must configure gsutil before running command.
    CONFIG_REQUIRED : True,
  }
  help_spec = {
    # Name of command or auxiliary help info for which this help applies.
    HELP_NAME : 'getversioning',
    # List of help name aliases.
    HELP_NAME_ALIASES : [],
    # Type of help:
    HELP_TYPE : HelpType.COMMAND_HELP,
    # One line summary of this help.
    HELP_ONE_LINE_SUMMARY : ('Get the versioning configuration '
                             'for one or more buckets'),
    # The full help text.
    HELP_TEXT : _detailed_help_text,
  }

  # Command entry point.
  def RunCommand(self):
    uri_args = self.args

    # Iterate over URIs, expanding wildcards, and getting the versioning
    # configuration on each.
    some_matched = False
    for uri_str in uri_args:
      for blr in self.WildcardIterator(uri_str):
        uri = blr.GetUri()
        if not uri.names_bucket():
          raise CommandException('URI %s must name a bucket for the %s command'
                                 % (str(uri), self.command_name))
        some_matched = True
        uri_str = '%s://%s' % (uri.scheme, uri.bucket_name)
        if uri.get_versioning_config():
          print '%s: Enabled' % uri_str
        else:
          print '%s: Suspended' % uri_str
    if not some_matched:
      raise CommandException('No URIs matched')
    return 0