How to change the owner of a Rados Gateway bucket?
Environment
- Red Hat Ceph Storage 1.2.z
- Red Hat Ceph Storage 1.3.z
Issue
- How to change the owner of a Rados Gateway bucket?
- How to grant a new user access to a bucket owned by another user in Rados Gateway?
Resolution
- Use the radosgw-admin bucket unlink and radosgw-admin bucket link commands to change the owner of a bucket.
- Due to Ceph upstream bug #11076 (tracker.ceph.com), the real bucket owner is not changed; the bucket stats command merely reports the new owner. This is the root cause of the issue: the new user will not be able to access the bucket.
- Red Hat Ceph Storage Bugzilla #1324497 tracks the same issue downstream.
Workaround:
- Use the add_user_grant() boto function.
- First test these steps with two test users and one test bucket before applying them to your production users and buckets.
- add_user_grant() definition from the boto source:
# Method with same signature as boto.s3.bucket.Bucket.add_user_grant(),
# to allow polymorphic treatment at application layer.
def add_user_grant(self, permission, user_id, recursive=False,
headers=None):
"""
Convenience method that provides a quick way to add a canonical user
grant to a bucket. This method retrieves the current ACL, creates a new
grant based on the parameters passed in, adds that grant to the ACL and
then PUTs the new ACL back to GCS.
:type permission: string
:param permission: The permission being granted. Should be one of:
(READ|WRITE|FULL_CONTROL)
:type user_id: string
:param user_id: The canonical user id associated with the GS account
you are granting the permission to.
:type recursive: bool
:param recursive: A boolean value to controls whether the call <============ Important Part
will apply the grant to all keys within the bucket
or not. The default value is False. By passing a
True value, the call will iterate through all keys
in the bucket and apply the same grant to each key.
CAUTION: If you have a lot of keys, this could take
a long time!
- As noted above, if the bucket whose owner is being changed contains a very large number of objects, applying the grant recursively can take a long time.
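Per the docstring, add_user_grant() retrieves the current ACL, appends the new grant, PUTs the ACL back, and with recursive=True repeats the grant for every key, which is why its cost scales with the object count. A minimal stand-in sketch of that flow (the FakeBucket class here is illustrative only, not the real boto API):

```python
# Stand-in model of add_user_grant() behaviour per its docstring;
# FakeBucket is illustrative only, not the real boto class.
class FakeBucket(object):
    def __init__(self, keys):
        self.acl = []                            # bucket-level grants
        self.key_acls = {k: [] for k in keys}    # per-object grants

    def add_user_grant(self, permission, user_id, recursive=False):
        # 1. get the current ACL, 2. append the new grant, 3. "PUT" it back
        self.acl.append((user_id, permission))
        if recursive:
            # recursive=True applies the same grant to every key, so the
            # cost grows with num_objects -- slow for large buckets
            for grants in self.key_acls.values():
                grants.append((user_id, permission))

bucket = FakeBucket(['hello.txt', 'world.txt'])
bucket.add_user_grant('FULL_CONTROL', 'testuser1', recursive=True)
print(bucket.acl)                    # [('testuser1', 'FULL_CONTROL')]
print(bucket.key_acls['hello.txt'])  # [('testuser1', 'FULL_CONTROL')]
```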
- First, user testuser is the owner of this bucket:
# radosgw-admin bucket stats --bucket=my-new-bucket
{
    "bucket": "my-new-bucket",
    "pool": ".rgw.buckets",
    "index_pool": ".rgw.buckets.index",
    "id": "default.9474362.1",
    "marker": "default.9474362.1",
    "owner": "testuser", <==================================
    "ver": "0#7",
    "master_ver": "0#0",
    "mtime": "2016-04-09 16:47:33.000000",
    "max_marker": "0#",
    "usage": {
        "rgw.main": {
            "size_kb": 1,
            "size_kb_actual": 4,
            "num_objects": 1
        }
    },
    "bucket_quota": {
        "enabled": false,
        "max_size_kb": -1,
        "max_objects": -1
    }
}
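The owner field can also be read programmatically, since radosgw-admin emits JSON; a short sketch that parses a trimmed copy of the stats output shown above (pasted inline here instead of invoking radosgw-admin):

```python
import json

# Trimmed output of `radosgw-admin bucket stats --bucket=my-new-bucket`
stats_json = '''
{
    "bucket": "my-new-bucket",
    "id": "default.9474362.1",
    "owner": "testuser",
    "usage": {"rgw.main": {"num_objects": 1}}
}
'''

stats = json.loads(stats_json)
print(stats["owner"])                             # testuser
print(stats["usage"]["rgw.main"]["num_objects"])  # 1
```

Checking num_objects first is a quick way to gauge how long a recursive add_user_grant() call will take.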
- Test script s3test.py:
import boto
import boto.s3.connection

access_key = 'QN2J8UZ9V9BLI1UO6F2F'
secret_key = 'bCkgBFF3DaxoVrEdQO8McCbryYpLDJ5PP3NI9YW7'

conn = boto.connect_s3(
    aws_access_key_id = access_key,
    aws_secret_access_key = secret_key,
    host = 'radosgw1.example.com',
    is_secure=False,
    calling_format = boto.s3.connection.OrdinaryCallingFormat(),
)

bucket = conn.create_bucket('my-new-bucket')
for bucket in conn.get_all_buckets():
    print "{name}\t{created}".format(
        name = bucket.name,
        created = bucket.creation_date,
    )

bucket.add_user_grant('FULL_CONTROL', 'testuser1', True, None) <======= Add grant for new user for this bucket

key = bucket.new_key('hello.txt')
key.set_contents_from_string('Hello World !!!!')
- Run the script:
# python s3test.py
Result: my-new-bucket 2016-04-09T11:17:33.000Z
- Unlink the bucket from the old user testuser and link it to the new user testuser1:
# radosgw-admin bucket unlink --bucket=my-new-bucket --uid=testuser
# radosgw-admin bucket link --bucket=my-new-bucket --uid=testuser1 --bucket-id=default.9474362.1
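If the relink step is scripted, the same two commands can be driven from Python; the sketch below only builds the argument lists (hand them to subprocess.check_call on a node that has radosgw-admin and admin credentials — they are not executed here):

```python
# The two radosgw-admin invocations as argument lists, ready for
# subprocess.check_call() on the RGW node; not executed in this sketch.
unlink_cmd = ['radosgw-admin', 'bucket', 'unlink',
              '--bucket=my-new-bucket', '--uid=testuser']
link_cmd = ['radosgw-admin', 'bucket', 'link',
            '--bucket=my-new-bucket', '--uid=testuser1',
            '--bucket-id=default.9474362.1']

print(' '.join(unlink_cmd))
print(' '.join(link_cmd))
```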
# radosgw-admin bucket stats --bucket=my-new-bucket
{
    "bucket": "my-new-bucket",
    "pool": ".rgw.buckets",
    "index_pool": ".rgw.buckets.index",
    "id": "default.9474362.1",
    "marker": "default.9474362.1",
    "owner": "testuser1", <=========== New Bucket owner : testuser1
    "ver": "0#9",
    "master_ver": "0#0",
    "mtime": "2016-04-09 16:47:33.000000",
    "max_marker": "0#",
    "usage": {
        "rgw.main": {
            "size_kb": 1,
            "size_kb_actual": 4,
            "num_objects": 1
        }
    },
    "bucket_quota": {
        "enabled": false,
        "max_size_kb": -1,
        "max_objects": -1
    }
}
- Now run the test script s3test.py with testuser1's access and secret keys:
import boto
import boto.s3.connection

access_key = 'FQ702DEZSUM6L9V0KCSE'
secret_key = 'TcQCh5EsmdZYXQwL72H1y3gaIGslFPI6GHbq15uV'

conn = boto.connect_s3(
    aws_access_key_id = access_key,
    aws_secret_access_key = secret_key,
    host = 'radosgw1.example.com',
    is_secure=False,
    calling_format = boto.s3.connection.OrdinaryCallingFormat(),
)

bucket = conn.create_bucket('my-new-bucket')
for bucket in conn.get_all_buckets():
    print "{name}\t{created}".format(
        name = bucket.name,
        created = bucket.creation_date,
    )

key = bucket.new_key('hello.txt')
key.set_contents_from_string('Hello World !!!!')
# python s3test.py
Result: my-new-bucket 2016-04-09T11:17:33.000Z
- It now works: testuser1 can access the bucket.
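Putting the workaround together: because bug #11076 leaves the real ACL untouched when the bucket is relinked, it is the grant added beforehand that actually authorizes testuser1. A toy model of the sequence (a plain dict stands in for RGW state; not real RGW behavior):

```python
# Toy model: per bug #11076 the reported stats owner and the real ACL diverge.
bucket = {'stats_owner': 'testuser', 'acl': {'testuser': 'FULL_CONTROL'}}

def can_access(user):
    # access is decided by the real ACL, not by the stats owner
    return user in bucket['acl']

# Step 1: grant FULL_CONTROL to the new user first (add_user_grant)
bucket['acl']['testuser1'] = 'FULL_CONTROL'

# Step 2: bucket unlink/link updates only the reported stats owner
bucket['stats_owner'] = 'testuser1'

print(can_access('testuser1'))   # True -- because of step 1, not step 2
```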
This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.