awscli 1.38.7__py3-none-any.whl → 1.38.9__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.

Potentially problematic release.
Files changed (62)
  1. awscli/__init__.py +1 -1
  2. awscli/examples/cloudformation/_package_description.rst +1 -1
  3. awscli/examples/codecommit/get-merge-commit.rst +1 -2
  4. awscli/examples/cognito-idp/get-identity-provider-by-identifier.rst +35 -0
  5. awscli/examples/cognito-idp/get-log-delivery-configuration.rst +32 -0
  6. awscli/examples/cognito-idp/get-signing-certificate.rst +14 -13
  7. awscli/examples/cognito-idp/get-ui-customization.rst +22 -19
  8. awscli/examples/cognito-idp/get-user-attribute-verification-code.rst +19 -0
  9. awscli/examples/cognito-idp/get-user-auth-factors.rst +20 -0
  10. awscli/examples/cognito-idp/get-user-pool-mfa-config.rst +33 -0
  11. awscli/examples/cognito-idp/get-user.rst +56 -0
  12. awscli/examples/cognito-idp/global-sign-out.rst +10 -0
  13. awscli/examples/cognito-idp/initiate-auth.rst +27 -0
  14. awscli/examples/cognito-idp/list-devices.rst +23 -25
  15. awscli/examples/cognito-idp/list-groups.rst +32 -0
  16. awscli/examples/cognito-idp/list-identity-providers.rst +29 -0
  17. awscli/examples/cognito-idp/list-resource-servers.rst +43 -0
  18. awscli/examples/cognito-idp/list-tags-for-resource.rst +17 -0
  19. awscli/examples/cognito-idp/list-user-import-jobs.rst +61 -57
  20. awscli/examples/cognito-idp/list-user-pool-clients.rst +32 -0
  21. awscli/examples/cognito-idp/list-user-pools.rst +48 -22
  22. awscli/examples/cognito-idp/list-users.rst +98 -35
  23. awscli/examples/cognito-idp/list-web-authn-credentials.rst +22 -0
  24. awscli/examples/cognito-idp/respond-to-auth-challenge.rst +78 -27
  25. awscli/examples/cognito-idp/revoke-token.rst +11 -0
  26. awscli/examples/cognito-idp/set-log-delivery-configuration.rst +33 -0
  27. awscli/examples/cognito-idp/set-risk-configuration.rst +136 -23
  28. awscli/examples/cognito-idp/set-ui-customization.rst +45 -18
  29. awscli/examples/cognito-idp/set-user-mfa-preference.rst +6 -5
  30. awscli/examples/cognito-idp/set-user-pool-mfa-config.rst +38 -0
  31. awscli/examples/cognito-idp/start-user-import-job.rst +27 -29
  32. awscli/examples/cognito-idp/start-web-authn-registration.rst +47 -0
  33. awscli/examples/cognito-idp/stop-user-import-job.rst +29 -31
  34. awscli/examples/ecs/create-cluster.rst +46 -42
  35. awscli/examples/ecs/put-account-setting.rst +8 -5
  36. awscli/examples/ecs/update-cluster-settings.rst +6 -6
  37. awscli/examples/ecs/update-service.rst +235 -7
  38. awscli/examples/emr/add-steps.rst +8 -8
  39. awscli/examples/emr/create-cluster-examples.rst +5 -5
  40. awscli/examples/rds/cancel-export-task.rst +22 -22
  41. awscli/examples/rds/describe-export-tasks.rst +40 -40
  42. awscli/examples/rds/restore-db-cluster-from-s3.rst +64 -64
  43. awscli/examples/rds/start-export-task.rst +26 -26
  44. awscli/examples/s3/_concepts.rst +2 -2
  45. awscli/examples/s3/cp.rst +30 -30
  46. awscli/examples/s3/ls.rst +7 -7
  47. awscli/examples/s3/mb.rst +6 -6
  48. awscli/examples/s3/mv.rst +21 -21
  49. awscli/examples/s3/rb.rst +8 -8
  50. awscli/examples/s3/rm.rst +12 -12
  51. awscli/examples/s3/sync.rst +27 -27
  52. awscli/examples/s3api/get-bucket-policy.rst +2 -2
  53. {awscli-1.38.7.dist-info → awscli-1.38.9.dist-info}/METADATA +2 -2
  54. {awscli-1.38.7.dist-info → awscli-1.38.9.dist-info}/RECORD +62 -44
  55. {awscli-1.38.7.data → awscli-1.38.9.data}/scripts/aws +0 -0
  56. {awscli-1.38.7.data → awscli-1.38.9.data}/scripts/aws.cmd +0 -0
  57. {awscli-1.38.7.data → awscli-1.38.9.data}/scripts/aws_bash_completer +0 -0
  58. {awscli-1.38.7.data → awscli-1.38.9.data}/scripts/aws_completer +0 -0
  59. {awscli-1.38.7.data → awscli-1.38.9.data}/scripts/aws_zsh_completer.sh +0 -0
  60. {awscli-1.38.7.dist-info → awscli-1.38.9.dist-info}/LICENSE.txt +0 -0
  61. {awscli-1.38.7.dist-info → awscli-1.38.9.dist-info}/WHEEL +0 -0
  62. {awscli-1.38.7.dist-info → awscli-1.38.9.dist-info}/top_level.txt +0 -0
awscli/examples/rds/restore-db-cluster-from-s3.rst CHANGED
@@ -1,64 +1,64 @@
- **To restore an Amazon Aurora DB cluster from Amazon S3**
-
- The following ``restore-db-cluster-from-s3`` example restores an Amazon Aurora MySQL version 5.7-compatible DB cluster from a MySQL 5.7 DB backup file in Amazon S3. ::
-
-     aws rds restore-db-cluster-from-s3 \
-         --db-cluster-identifier cluster-s3-restore \
-         --engine aurora-mysql \
-         --master-username admin \
-         --master-user-password mypassword \
-         --s3-bucket-name mybucket \
-         --s3-prefix test-backup \
-         --s3-ingestion-role-arn arn:aws:iam::123456789012:role/service-role/TestBackup \
-         --source-engine mysql \
-         --source-engine-version 5.7.28
-
- Output::
-
-     {
-         "DBCluster": {
-             "AllocatedStorage": 1,
-             "AvailabilityZones": [
-                 "us-west-2c",
-                 "us-west-2a",
-                 "us-west-2b"
-             ],
-             "BackupRetentionPeriod": 1,
-             "DBClusterIdentifier": "cluster-s3-restore",
-             "DBClusterParameterGroup": "default.aurora-mysql5.7",
-             "DBSubnetGroup": "default",
-             "Status": "creating",
-             "Endpoint": "cluster-s3-restore.cluster-co3xyzabc123.us-west-2.rds.amazonaws.com",
-             "ReaderEndpoint": "cluster-s3-restore.cluster-ro-co3xyzabc123.us-west-2.rds.amazonaws.com",
-             "MultiAZ": false,
-             "Engine": "aurora-mysql",
-             "EngineVersion": "5.7.12",
-             "Port": 3306,
-             "MasterUsername": "admin",
-             "PreferredBackupWindow": "11:15-11:45",
-             "PreferredMaintenanceWindow": "thu:12:19-thu:12:49",
-             "ReadReplicaIdentifiers": [],
-             "DBClusterMembers": [],
-             "VpcSecurityGroups": [
-                 {
-                     "VpcSecurityGroupId": "sg-########",
-                     "Status": "active"
-                 }
-             ],
-             "HostedZoneId": "Z1PVIF0EXAMPLE",
-             "StorageEncrypted": false,
-             "DbClusterResourceId": "cluster-SU5THYQQHOWCXZZDGXREXAMPLE",
-             "DBClusterArn": "arn:aws:rds:us-west-2:123456789012:cluster:cluster-s3-restore",
-             "AssociatedRoles": [],
-             "IAMDatabaseAuthenticationEnabled": false,
-             "ClusterCreateTime": "2020-07-27T14:22:08.095Z",
-             "EngineMode": "provisioned",
-             "DeletionProtection": false,
-             "HttpEndpointEnabled": false,
-             "CopyTagsToSnapshot": false,
-             "CrossAccountClone": false,
-             "DomainMemberships": []
-         }
-     }
-
- For more information, see `Migrating Data from MySQL by Using an Amazon S3 Bucket <https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Migrating.ExtMySQL.html#AuroraMySQL.Migrating.ExtMySQL.S3>`__ in the *Amazon Aurora User Guide*.
+ **To restore an Amazon Aurora DB cluster from Amazon S3**
+
+ The following ``restore-db-cluster-from-s3`` example restores an Amazon Aurora MySQL version 5.7-compatible DB cluster from a MySQL 5.7 DB backup file in Amazon S3. ::
+
+     aws rds restore-db-cluster-from-s3 \
+         --db-cluster-identifier cluster-s3-restore \
+         --engine aurora-mysql \
+         --master-username admin \
+         --master-user-password mypassword \
+         --s3-bucket-name amzn-s3-demo-bucket \
+         --s3-prefix test-backup \
+         --s3-ingestion-role-arn arn:aws:iam::123456789012:role/service-role/TestBackup \
+         --source-engine mysql \
+         --source-engine-version 5.7.28
+
+ Output::
+
+     {
+         "DBCluster": {
+             "AllocatedStorage": 1,
+             "AvailabilityZones": [
+                 "us-west-2c",
+                 "us-west-2a",
+                 "us-west-2b"
+             ],
+             "BackupRetentionPeriod": 1,
+             "DBClusterIdentifier": "cluster-s3-restore",
+             "DBClusterParameterGroup": "default.aurora-mysql5.7",
+             "DBSubnetGroup": "default",
+             "Status": "creating",
+             "Endpoint": "cluster-s3-restore.cluster-co3xyzabc123.us-west-2.rds.amazonaws.com",
+             "ReaderEndpoint": "cluster-s3-restore.cluster-ro-co3xyzabc123.us-west-2.rds.amazonaws.com",
+             "MultiAZ": false,
+             "Engine": "aurora-mysql",
+             "EngineVersion": "5.7.12",
+             "Port": 3306,
+             "MasterUsername": "admin",
+             "PreferredBackupWindow": "11:15-11:45",
+             "PreferredMaintenanceWindow": "thu:12:19-thu:12:49",
+             "ReadReplicaIdentifiers": [],
+             "DBClusterMembers": [],
+             "VpcSecurityGroups": [
+                 {
+                     "VpcSecurityGroupId": "sg-########",
+                     "Status": "active"
+                 }
+             ],
+             "HostedZoneId": "Z1PVIF0EXAMPLE",
+             "StorageEncrypted": false,
+             "DbClusterResourceId": "cluster-SU5THYQQHOWCXZZDGXREXAMPLE",
+             "DBClusterArn": "arn:aws:rds:us-west-2:123456789012:cluster:cluster-s3-restore",
+             "AssociatedRoles": [],
+             "IAMDatabaseAuthenticationEnabled": false,
+             "ClusterCreateTime": "2020-07-27T14:22:08.095Z",
+             "EngineMode": "provisioned",
+             "DeletionProtection": false,
+             "HttpEndpointEnabled": false,
+             "CopyTagsToSnapshot": false,
+             "CrossAccountClone": false,
+             "DomainMemberships": []
+         }
+     }
+
+ For more information, see `Migrating Data from MySQL by Using an Amazon S3 Bucket <https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Migrating.ExtMySQL.html#AuroraMySQL.Migrating.ExtMySQL.S3>`__ in the *Amazon Aurora User Guide*.
awscli/examples/rds/start-export-task.rst CHANGED
@@ -1,26 +1,26 @@
- **To export a snapshot to Amazon S3**
-
- The following ``start-export-task`` example exports a DB snapshot named ``db5-snapshot-test`` to the Amazon S3 bucket named ``mybucket``. ::
-
-     aws rds start-export-task \
-         --export-task-identifier my-s3-export \
-         --source-arn arn:aws:rds:us-west-2:123456789012:snapshot:db5-snapshot-test \
-         --s3-bucket-name mybucket \
-         --iam-role-arn arn:aws:iam::123456789012:role/service-role/ExportRole \
-         --kms-key-id arn:aws:kms:us-west-2:123456789012:key/abcd0000-7fca-4128-82f2-aabbccddeeff
-
- Output::
-
-     {
-         "ExportTaskIdentifier": "my-s3-export",
-         "SourceArn": "arn:aws:rds:us-west-2:123456789012:snapshot:db5-snapshot-test",
-         "SnapshotTime": "2020-03-27T20:48:42.023Z",
-         "S3Bucket": "mybucket",
-         "IamRoleArn": "arn:aws:iam::123456789012:role/service-role/ExportRole",
-         "KmsKeyId": "arn:aws:kms:us-west-2:123456789012:key/abcd0000-7fca-4128-82f2-aabbccddeeff",
-         "Status": "STARTING",
-         "PercentProgress": 0,
-         "TotalExtractedDataInGB": 0
-     }
-
- For more information, see `Exporting a Snapshot to an Amazon S3 Bucket <https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ExportSnapshot.html#USER_ExportSnapshot.Exporting>`__ in the *Amazon RDS User Guide*.
+ **To export a snapshot to Amazon S3**
+
+ The following ``start-export-task`` example exports a DB snapshot named ``db5-snapshot-test`` to the Amazon S3 bucket named ``amzn-s3-demo-bucket``. ::
+
+     aws rds start-export-task \
+         --export-task-identifier my-s3-export \
+         --source-arn arn:aws:rds:us-west-2:123456789012:snapshot:db5-snapshot-test \
+         --s3-bucket-name amzn-s3-demo-bucket \
+         --iam-role-arn arn:aws:iam::123456789012:role/service-role/ExportRole \
+         --kms-key-id arn:aws:kms:us-west-2:123456789012:key/abcd0000-7fca-4128-82f2-aabbccddeeff
+
+ Output::
+
+     {
+         "ExportTaskIdentifier": "my-s3-export",
+         "SourceArn": "arn:aws:rds:us-west-2:123456789012:snapshot:db5-snapshot-test",
+         "SnapshotTime": "2020-03-27T20:48:42.023Z",
+         "S3Bucket": "amzn-s3-demo-bucket",
+         "IamRoleArn": "arn:aws:iam::123456789012:role/service-role/ExportRole",
+         "KmsKeyId": "arn:aws:kms:us-west-2:123456789012:key/abcd0000-7fca-4128-82f2-aabbccddeeff",
+         "Status": "STARTING",
+         "PercentProgress": 0,
+         "TotalExtractedDataInGB": 0
+     }
+
+ For more information, see `Exporting a Snapshot to an Amazon S3 Bucket <https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ExportSnapshot.html#USER_ExportSnapshot.Exporting>`__ in the *Amazon RDS User Guide*.
awscli/examples/s3/_concepts.rst CHANGED
@@ -14,13 +14,13 @@ are two types of path arguments: ``LocalPath`` and ``S3Uri``.
  written as an absolute path or relative path.
 
  ``S3Uri``: represents the location of a S3 object, prefix, or bucket. This
- must be written in the form ``s3://mybucket/mykey`` where ``mybucket`` is
+ must be written in the form ``s3://amzn-s3-demo-bucket/mykey`` where ``amzn-s3-demo-bucket`` is
  the specified S3 bucket, ``mykey`` is the specified S3 key. The path argument
  must begin with ``s3://`` in order to denote that the path argument refers to
  a S3 object. Note that prefixes are separated by forward slashes. For
  example, if the S3 object ``myobject`` had the prefix ``myprefix``, the
  S3 key would be ``myprefix/myobject``, and if the object was in the bucket
- ``mybucket``, the ``S3Uri`` would be ``s3://mybucket/myprefix/myobject``.
+ ``amzn-s3-demo-bucket``, the ``S3Uri`` would be ``s3://amzn-s3-demo-bucket/myprefix/myobject``.
 
  ``S3Uri`` also supports S3 access points. To specify an access point, this
  value must be of the form ``s3://<access-point-arn>/<key>``. For example if
awscli/examples/s3/cp.rst CHANGED
@@ -3,67 +3,67 @@
  The following ``cp`` command copies a single file to a specified
  bucket and key::
 
-     aws s3 cp test.txt s3://mybucket/test2.txt
+     aws s3 cp test.txt s3://amzn-s3-demo-bucket/test2.txt
 
  Output::
 
-     upload: test.txt to s3://mybucket/test2.txt
+     upload: test.txt to s3://amzn-s3-demo-bucket/test2.txt
 
  **Example 2: Copying a local file to S3 with an expiration date**
 
  The following ``cp`` command copies a single file to a specified
  bucket and key that expires at the specified ISO 8601 timestamp::
 
-     aws s3 cp test.txt s3://mybucket/test2.txt \
+     aws s3 cp test.txt s3://amzn-s3-demo-bucket/test2.txt \
          --expires 2014-10-01T20:30:00Z
 
  Output::
 
-     upload: test.txt to s3://mybucket/test2.txt
+     upload: test.txt to s3://amzn-s3-demo-bucket/test2.txt
 
  **Example 3: Copying a file from S3 to S3**
 
  The following ``cp`` command copies a single s3 object to a specified bucket and key::
 
-     aws s3 cp s3://mybucket/test.txt s3://mybucket/test2.txt
+     aws s3 cp s3://amzn-s3-demo-bucket/test.txt s3://amzn-s3-demo-bucket/test2.txt
 
  Output::
 
-     copy: s3://mybucket/test.txt to s3://mybucket/test2.txt
+     copy: s3://amzn-s3-demo-bucket/test.txt to s3://amzn-s3-demo-bucket/test2.txt
 
  **Example 4: Copying an S3 object to a local file**
 
  The following ``cp`` command copies a single object to a specified file locally::
 
-     aws s3 cp s3://mybucket/test.txt test2.txt
+     aws s3 cp s3://amzn-s3-demo-bucket/test.txt test2.txt
 
  Output::
 
-     download: s3://mybucket/test.txt to test2.txt
+     download: s3://amzn-s3-demo-bucket/test.txt to test2.txt
 
  **Example 5: Copying an S3 object from one bucket to another**
 
  The following ``cp`` command copies a single object to a specified bucket while retaining its original name::
 
-     aws s3 cp s3://mybucket/test.txt s3://amzn-s3-demo-bucket2/
+     aws s3 cp s3://amzn-s3-demo-bucket/test.txt s3://amzn-s3-demo-bucket2/
 
  Output::
 
-     copy: s3://mybucket/test.txt to s3://amzn-s3-demo-bucket2/test.txt
+     copy: s3://amzn-s3-demo-bucket/test.txt to s3://amzn-s3-demo-bucket2/test.txt
 
  **Example 6: Recursively copying S3 objects to a local directory**
 
  When passed with the parameter ``--recursive``, the following ``cp`` command recursively copies all objects under a
- specified prefix and bucket to a specified directory. In this example, the bucket ``mybucket`` has the objects
+ specified prefix and bucket to a specified directory. In this example, the bucket ``amzn-s3-demo-bucket`` has the objects
  ``test1.txt`` and ``test2.txt``::
 
-     aws s3 cp s3://mybucket . \
+     aws s3 cp s3://amzn-s3-demo-bucket . \
          --recursive
 
  Output::
 
-     download: s3://mybucket/test1.txt to test1.txt
-     download: s3://mybucket/test2.txt to test2.txt
+     download: s3://amzn-s3-demo-bucket/test1.txt to test1.txt
+     download: s3://amzn-s3-demo-bucket/test2.txt to test2.txt
 
  **Example 7: Recursively copying local files to S3**
 
@@ -71,51 +71,51 @@ When passed with the parameter ``--recursive``, the following ``cp`` command rec
  specified directory to a specified bucket and prefix while excluding some files by using an ``--exclude`` parameter. In
  this example, the directory ``myDir`` has the files ``test1.txt`` and ``test2.jpg``::
 
-     aws s3 cp myDir s3://mybucket/ \
+     aws s3 cp myDir s3://amzn-s3-demo-bucket/ \
          --recursive \
          --exclude "*.jpg"
 
  Output::
 
-     upload: myDir/test1.txt to s3://mybucket/test1.txt
+     upload: myDir/test1.txt to s3://amzn-s3-demo-bucket/test1.txt
 
  **Example 8: Recursively copying S3 objects to another bucket**
 
  When passed with the parameter ``--recursive``, the following ``cp`` command recursively copies all objects under a
  specified bucket to another bucket while excluding some objects by using an ``--exclude`` parameter. In this example,
- the bucket ``mybucket`` has the objects ``test1.txt`` and ``another/test1.txt``::
+ the bucket ``amzn-s3-demo-bucket`` has the objects ``test1.txt`` and ``another/test1.txt``::
 
-     aws s3 cp s3://mybucket/ s3://amzn-s3-demo-bucket2/ \
+     aws s3 cp s3://amzn-s3-demo-bucket/ s3://amzn-s3-demo-bucket2/ \
          --recursive \
          --exclude "another/*"
 
  Output::
 
-     copy: s3://mybucket/test1.txt to s3://amzn-s3-demo-bucket2/test1.txt
+     copy: s3://amzn-s3-demo-bucket/test1.txt to s3://amzn-s3-demo-bucket2/test1.txt
 
  You can combine ``--exclude`` and ``--include`` options to copy only objects that match a pattern, excluding all others::
 
-     aws s3 cp s3://mybucket/logs/ s3://amzn-s3-demo-bucket2/logs/ \
+     aws s3 cp s3://amzn-s3-demo-bucket/logs/ s3://amzn-s3-demo-bucket2/logs/ \
          --recursive \
          --exclude "*" \
          --include "*.log"
 
  Output::
 
-     copy: s3://mybucket/logs/test/test.log to s3://amzn-s3-demo-bucket2/logs/test/test.log
-     copy: s3://mybucket/logs/test3.log to s3://amzn-s3-demo-bucket2/logs/test3.log
+     copy: s3://amzn-s3-demo-bucket/logs/test/test.log to s3://amzn-s3-demo-bucket2/logs/test/test.log
+     copy: s3://amzn-s3-demo-bucket/logs/test3.log to s3://amzn-s3-demo-bucket2/logs/test3.log
 
  **Example 9: Setting the Access Control List (ACL) while copying an S3 object**
 
  The following ``cp`` command copies a single object to a specified bucket and key while setting the ACL to
  ``public-read-write``::
 
-     aws s3 cp s3://mybucket/test.txt s3://mybucket/test2.txt \
+     aws s3 cp s3://amzn-s3-demo-bucket/test.txt s3://amzn-s3-demo-bucket/test2.txt \
          --acl public-read-write
 
  Output::
 
-     copy: s3://mybucket/test.txt to s3://mybucket/test2.txt
+     copy: s3://amzn-s3-demo-bucket/test.txt to s3://amzn-s3-demo-bucket/test2.txt
 
  Note that if you're using the ``--acl`` option, ensure that any associated IAM
  policies include the ``"s3:PutObjectAcl"`` action::
@@ -138,7 +138,7 @@ Output::
              "s3:PutObjectAcl"
          ],
          "Resource": [
-             "arn:aws:s3:::mybucket/*"
+             "arn:aws:s3:::amzn-s3-demo-bucket/*"
          ],
          "Effect": "Allow",
          "Sid": "Stmt1234567891234"
@@ -152,11 +152,11 @@ Output::
  The following ``cp`` command illustrates the use of the ``--grants`` option to grant read access to all users identified
  by URI and full control to a specific user identified by their Canonical ID::
 
-     aws s3 cp file.txt s3://mybucket/ --grants read=uri=http://acs.amazonaws.com/groups/global/AllUsers full=id=79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be
+     aws s3 cp file.txt s3://amzn-s3-demo-bucket/ --grants read=uri=http://acs.amazonaws.com/groups/global/AllUsers full=id=79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be
 
  Output::
 
-     upload: file.txt to s3://mybucket/file.txt
+     upload: file.txt to s3://amzn-s3-demo-bucket/file.txt
 
  **Example 11: Uploading a local file stream to S3**
 
@@ -164,13 +164,13 @@ Output::
 
  The following ``cp`` command uploads a local file stream from standard input to a specified bucket and key::
 
-     aws s3 cp - s3://mybucket/stream.txt
+     aws s3 cp - s3://amzn-s3-demo-bucket/stream.txt
 
  **Example 12: Uploading a local file stream that is larger than 50GB to S3**
 
  The following ``cp`` command uploads a 51GB local file stream from standard input to a specified bucket and key. The ``--expected-size`` option must be provided, or the upload may fail when it reaches the default part limit of 10,000::
 
-     aws s3 cp - s3://mybucket/stream.txt --expected-size 54760833024
+     aws s3 cp - s3://amzn-s3-demo-bucket/stream.txt --expected-size 54760833024
 
  **Example 13: Downloading an S3 object as a local file stream**
 
@@ -178,7 +178,7 @@ The following ``cp`` command uploads a 51GB local file stream from standard inpu
 
  The following ``cp`` command downloads an S3 object locally as a stream to standard output. Downloading as a stream is not currently compatible with the ``--recursive`` parameter::
 
-     aws s3 cp s3://mybucket/stream.txt -
+     aws s3 cp s3://amzn-s3-demo-bucket/stream.txt -
 
  **Example 14: Uploading to an S3 access point**
 
awscli/examples/s3/ls.rst CHANGED
@@ -1,19 +1,19 @@
  **Example 1: Listing all user owned buckets**
 
- The following ``ls`` command lists all of the bucket owned by the user. In this example, the user owns the buckets ``mybucket`` and ``amzn-s3-demo-bucket2``. The timestamp is the date the bucket was created, shown in your machine's time zone. This date can change when making changes to your bucket, such as editing its bucket policy. Note if ``s3://`` is used for the path argument ``<S3Uri>``, it will list all of the buckets as well. ::
+ The following ``ls`` command lists all of the bucket owned by the user. In this example, the user owns the buckets ``amzn-s3-demo-bucket`` and ``amzn-s3-demo-bucket2``. The timestamp is the date the bucket was created, shown in your machine's time zone. This date can change when making changes to your bucket, such as editing its bucket policy. Note if ``s3://`` is used for the path argument ``<S3Uri>``, it will list all of the buckets as well. ::
 
      aws s3 ls
 
  Output::
 
-     2013-07-11 17:08:50 mybucket
+     2013-07-11 17:08:50 amzn-s3-demo-bucket
      2013-07-24 14:55:44 amzn-s3-demo-bucket2
 
  **Example 2: Listing all prefixes and objects in a bucket**
 
- The following ``ls`` command lists objects and common prefixes under a specified bucket and prefix. In this example, the user owns the bucket ``mybucket`` with the objects ``test.txt`` and ``somePrefix/test.txt``. The ``LastWriteTime`` and ``Length`` are arbitrary. Note that since the ``ls`` command has no interaction with the local filesystem, the ``s3://`` URI scheme is not required to resolve ambiguity and may be omitted. ::
+ The following ``ls`` command lists objects and common prefixes under a specified bucket and prefix. In this example, the user owns the bucket ``amzn-s3-demo-bucket`` with the objects ``test.txt`` and ``somePrefix/test.txt``. The ``LastWriteTime`` and ``Length`` are arbitrary. Note that since the ``ls`` command has no interaction with the local filesystem, the ``s3://`` URI scheme is not required to resolve ambiguity and may be omitted. ::
 
-     aws s3 ls s3://mybucket
+     aws s3 ls s3://amzn-s3-demo-bucket
 
  Output::
 
@@ -24,7 +24,7 @@ Output::
 
  The following ``ls`` command lists objects and common prefixes under a specified bucket and prefix. However, there are no objects nor common prefixes under the specified bucket and prefix. ::
 
-     aws s3 ls s3://mybucket/noExistPrefix
+     aws s3 ls s3://amzn-s3-demo-bucket/noExistPrefix
 
  Output::
 
@@ -34,7 +34,7 @@ Output::
 
  The following ``ls`` command will recursively list objects in a bucket. Rather than showing ``PRE dirname/`` in the output, all the content in a bucket will be listed in order. ::
 
-     aws s3 ls s3://mybucket \
+     aws s3 ls s3://amzn-s3-demo-bucket \
          --recursive
 
  Output::
@@ -54,7 +54,7 @@ Output::
 
  The following ``ls`` command demonstrates the same command using the --human-readable and --summarize options. --human-readable displays file size in Bytes/MiB/KiB/GiB/TiB/PiB/EiB. --summarize displays the total number of objects and total size at the end of the result listing::
 
-     aws s3 ls s3://mybucket \
+     aws s3 ls s3://amzn-s3-demo-bucket \
          --recursive \
          --human-readable \
          --summarize
awscli/examples/s3/mb.rst CHANGED
@@ -1,22 +1,22 @@
  **Example 1: Create a bucket**
 
- The following ``mb`` command creates a bucket. In this example, the user makes the bucket ``mybucket``. The bucket is
+ The following ``mb`` command creates a bucket. In this example, the user makes the bucket ``amzn-s3-demo-bucket``. The bucket is
  created in the region specified in the user's configuration file::
 
-     aws s3 mb s3://mybucket
+     aws s3 mb s3://amzn-s3-demo-bucket
 
  Output::
 
-     make_bucket: s3://mybucket
+     make_bucket: s3://amzn-s3-demo-bucket
 
  **Example 2: Create a bucket in the specified region**
 
  The following ``mb`` command creates a bucket in a region specified by the ``--region`` parameter. In this example, the
- user makes the bucket ``mybucket`` in the region ``us-west-1``::
+ user makes the bucket ``amzn-s3-demo-bucket`` in the region ``us-west-1``::
 
-     aws s3 mb s3://mybucket \
+     aws s3 mb s3://amzn-s3-demo-bucket \
          --region us-west-1
 
  Output::
 
-     make_bucket: s3://mybucket
+     make_bucket: s3://amzn-s3-demo-bucket
awscli/examples/s3/mv.rst CHANGED
@@ -2,55 +2,55 @@
 
  The following ``mv`` command moves a single file to a specified bucket and key. ::
 
-     aws s3 mv test.txt s3://mybucket/test2.txt
+     aws s3 mv test.txt s3://amzn-s3-demo-bucket/test2.txt
 
  Output::
 
-     move: test.txt to s3://mybucket/test2.txt
+     move: test.txt to s3://amzn-s3-demo-bucket/test2.txt
 
  **Example 2: Move an object to the specified bucket and key**
 
  The following ``mv`` command moves a single s3 object to a specified bucket and key. ::
 
-     aws s3 mv s3://mybucket/test.txt s3://mybucket/test2.txt
+     aws s3 mv s3://amzn-s3-demo-bucket/test.txt s3://amzn-s3-demo-bucket/test2.txt
 
  Output::
 
-     move: s3://mybucket/test.txt to s3://mybucket/test2.txt
+     move: s3://amzn-s3-demo-bucket/test.txt to s3://amzn-s3-demo-bucket/test2.txt
 
  **Example 3: Move an S3 object to the local directory**
 
  The following ``mv`` command moves a single object to a specified file locally. ::
 
-     aws s3 mv s3://mybucket/test.txt test2.txt
+     aws s3 mv s3://amzn-s3-demo-bucket/test.txt test2.txt
 
  Output::
 
-     move: s3://mybucket/test.txt to test2.txt
+     move: s3://amzn-s3-demo-bucket/test.txt to test2.txt
 
  **Example 4: Move an object with it's original name to the specified bucket**
 
  The following ``mv`` command moves a single object to a specified bucket while retaining its original name::
 
-     aws s3 mv s3://mybucket/test.txt s3://amzn-s3-demo-bucket2/
+     aws s3 mv s3://amzn-s3-demo-bucket/test.txt s3://amzn-s3-demo-bucket2/
 
  Output::
 
-     move: s3://mybucket/test.txt to s3://amzn-s3-demo-bucket2/test.txt
+     move: s3://amzn-s3-demo-bucket/test.txt to s3://amzn-s3-demo-bucket2/test.txt
 
  **Example 5: Move all objects and prefixes in a bucket to the local directory**
 
  When passed with the parameter ``--recursive``, the following ``mv`` command recursively moves all objects under a
- specified prefix and bucket to a specified directory. In this example, the bucket ``mybucket`` has the objects
+ specified prefix and bucket to a specified directory. In this example, the bucket ``amzn-s3-demo-bucket`` has the objects
  ``test1.txt`` and ``test2.txt``. ::
 
-     aws s3 mv s3://mybucket . \
+     aws s3 mv s3://amzn-s3-demo-bucket . \
          --recursive
 
  Output::
 
-     move: s3://mybucket/test1.txt to test1.txt
-     move: s3://mybucket/test2.txt to test2.txt
+     move: s3://amzn-s3-demo-bucket/test1.txt to test1.txt
+     move: s3://amzn-s3-demo-bucket/test2.txt to test2.txt
 
  **Example 6: Move all objects and prefixes in a bucket to the local directory, except ``.jpg`` files**
 
@@ -58,7 +58,7 @@ When passed with the parameter ``--recursive``, the following ``mv`` command rec
  specified directory to a specified bucket and prefix while excluding some files by using an ``--exclude`` parameter. In
  this example, the directory ``myDir`` has the files ``test1.txt`` and ``test2.jpg``. ::
 
-     aws s3 mv myDir s3://mybucket/ \
+     aws s3 mv myDir s3://amzn-s3-demo-bucket/ \
          --recursive \
          --exclude "*.jpg"
 
@@ -70,39 +70,39 @@ Output::
 
  When passed with the parameter ``--recursive``, the following ``mv`` command recursively moves all objects under a
  specified bucket to another bucket while excluding some objects by using an ``--exclude`` parameter. In this example,
- the bucket ``mybucket`` has the objects ``test1.txt`` and ``another/test1.txt``. ::
+ the bucket ``amzn-s3-demo-bucket`` has the objects ``test1.txt`` and ``another/test1.txt``. ::
 
-     aws s3 mv s3://mybucket/ s3://amzn-s3-demo-bucket2/ \
+     aws s3 mv s3://amzn-s3-demo-bucket/ s3://amzn-s3-demo-bucket2/ \
          --recursive \
-         --exclude "mybucket/another/*"
+         --exclude "amzn-s3-demo-bucket/another/*"
 
  Output::
 
-     move: s3://mybucket/test1.txt to s3://amzn-s3-demo-bucket2/test1.txt
+     move: s3://amzn-s3-demo-bucket/test1.txt to s3://amzn-s3-demo-bucket2/test1.txt
 
  **Example 8: Move an object to the specified bucket and set the ACL**
 
  The following ``mv`` command moves a single object to a specified bucket and key while setting the ACL to
  ``public-read-write``. ::
 
-     aws s3 mv s3://mybucket/test.txt s3://mybucket/test2.txt \
+     aws s3 mv s3://amzn-s3-demo-bucket/test.txt s3://amzn-s3-demo-bucket/test2.txt \
          --acl public-read-write
 
  Output::
 
-     move: s3://mybucket/test.txt to s3://mybucket/test2.txt
+     move: s3://amzn-s3-demo-bucket/test.txt to s3://amzn-s3-demo-bucket/test2.txt
 
  **Example 9: Move a local file to the specified bucket and grant permissions**
 
  The following ``mv`` command illustrates the use of the ``--grants`` option to grant read access to all users and full
  control to a specific user identified by their email address. ::
 
-     aws s3 mv file.txt s3://mybucket/ \
+     aws s3 mv file.txt s3://amzn-s3-demo-bucket/ \
          --grants read=uri=http://acs.amazonaws.com/groups/global/AllUsers full=emailaddress=user@example.com
 
  Output::
 
-     move: file.txt to s3://mybucket/file.txt
+     move: file.txt to s3://amzn-s3-demo-bucket/file.txt
 
  **Example 10: Move a file to an S3 access point**
 
awscli/examples/s3/rb.rst CHANGED
@@ -1,24 +1,24 @@
  **Example 1: Delete a bucket**
 
- The following ``rb`` command removes a bucket. In this example, the user's bucket is ``mybucket``. Note that the bucket must be empty in order to remove::
+ The following ``rb`` command removes a bucket. In this example, the user's bucket is ``amzn-s3-demo-bucket``. Note that the bucket must be empty in order to remove::
 
-     aws s3 rb s3://mybucket
+     aws s3 rb s3://amzn-s3-demo-bucket
 
  Output::
 
-     remove_bucket: mybucket
+     remove_bucket: amzn-s3-demo-bucket
 
  **Example 2: Force delete a bucket**
 
  The following ``rb`` command uses the ``--force`` parameter to first remove all of the objects in the bucket and then
- remove the bucket itself. In this example, the user's bucket is ``mybucket`` and the objects in ``mybucket`` are
+ remove the bucket itself. In this example, the user's bucket is ``amzn-s3-demo-bucket`` and the objects in ``amzn-s3-demo-bucket`` are
  ``test1.txt`` and ``test2.txt``::
 
-     aws s3 rb s3://mybucket \
+     aws s3 rb s3://amzn-s3-demo-bucket \
          --force
 
  Output::
 
-     delete: s3://mybucket/test1.txt
-     delete: s3://mybucket/test2.txt
-     remove_bucket: mybucket
+     delete: s3://amzn-s3-demo-bucket/test1.txt
+     delete: s3://amzn-s3-demo-bucket/test2.txt
+     remove_bucket: amzn-s3-demo-bucket