2016-10-27 10:26:42
Volodymyr Litovka
Description:
I'm trying to set up an S3 test bench for developers using Devstack (stable/newton branch in the Devstack configuration file, local.conf). While I can browse containers and objects using the CLI (openstack container / object, swift), I can't access containers using s3curl. In the logs (full log is attached) I see two different URLs at the final stage:
------------- “openstack container list” command, issued locally
[ ... ]
proxy-server: Using identity: {'service_roles': [], 'roles': [u'admin'], 'project_domain': (u'default', u'Default'), 'auth_version': 3, 'user': (u'eac0298a83e44b12b2c08aa98e9b1c9a', u'admin'), 'user_domain': (u'default', u'Default'), 'tenant': (u'2d7365b17c8147e9aead99f870125d31', u'admin')} (txn: txda7984e9e1f04b7792920-005811ca49)
[ ... ]
proxy-server: de.vs.ta.ck de.vs.ta.ck 27/Oct/2016/09/35/05 GET /v1/AUTH_2d7365b17c8147e9aead99f870125d31%3Fformat%3Djson HTTP/1.0 200 - osc-lib%20keystoneauth1/2.14.0%20python-requests/2.11.1%20CPython/2.7.12 a5ef5769d7ef... - 42 - txda7984e9e1f04b7792920-005811ca49 - 0.0881 - - 1477560905.352745056 1477560905.440839052 -
The URL in the request above is correct.
------------- S3 session using s3curl from re.mo.te.host
[ ... ]
proxy-server: Using identity: {'service_roles': [], 'roles': [u'admin'], 'project_domain': (u'default', u'Default'), 'auth_version': 3, 'user': (u'eac0298a83e44b12b2c08aa98e9b1c9a', u'admin'), 'user_domain': (u'default', u'Default'), 'tenant': (u'2d7365b17c8147e9aead99f870125d31', u'admin')} (txn: tx61f057911f3e475eb1962-005811c95a)
[ ... ]
proxy-server: re.mo.te.host re.mo.te.host 27/Oct/2016/09/31/07 GET / HTTP/1.0 200 - curl/7.43.0 - - 219 - tx61f057911f3e475eb1962-005811c95a - 0.2074 - - 1477560666.966339111 1477560667.173743010 -
The URL above is malformed and, of course, returns nothing. It seems something may be wrong with the proxy-server: given the same identity information, it produces different request URLs for different kinds of access (Swift client access vs. remote S3 access).
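For completeness, the failing request was issued with s3curl roughly as follows. This is a sketch, not the exact invocation from my session: the endpoint hostname and port 8080 come from the proxy config below, and the id/key pair mirrors the EC2 credential blob created below. s3curl reads credentials from a Perl hash in ~/.s3curl:

```shell
# ~/.s3curl -- credentials file that s3curl.pl reads;
# id/key here mirror the EC2 credential blob created below
cat > ~/.s3curl <<'EOF'
%awsSecretAccessKeys = (
    admin => {
        id  => 'admin',
        key => 'adm1n0',
    },
);
EOF
chmod 600 ~/.s3curl

# List buckets (GET /) against the Swift proxy on port 8080
./s3curl.pl --id admin -- -v http://de.vs.ta.ck:8080/
```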
For S3 access I created EC2 credentials:
/opt# openstack credential create --type ec2 --project admin admin '{"access" : "admin", "secret" : "adm1n0"}'
+------------+------------------------------------------------------------------+
| Field | Value |
+------------+------------------------------------------------------------------+
| blob | {"access" : "admin", "secret" : "adm1n0"} |
| id | 8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918 |
| project_id | 2d7365b17c8147e9aead99f870125d31 |
| type | ec2 |
| user_id | eac0298a83e44b12b2c08aa98e9b1c9a |
+------------+------------------------------------------------------------------+
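As a sanity check on these credentials: swift3 expects requests signed with AWS Signature V2, so the Authorization header s3curl sends can be reproduced by hand. Below is a minimal sketch of that computation; the date and resource values are illustrative, and the access/secret pair is the one from the EC2 blob above. (The "Using identity" lines and 200 status in the log suggest authentication itself is already succeeding; a signature mismatch would show up as a 403 instead.)

```python
import base64
import hashlib
import hmac

def s3_v2_auth_header(access, secret, method, date, resource,
                      content_md5="", content_type=""):
    """Build the AWS Signature V2 Authorization header that s3curl
    sends and swift3/s3token verify (no x-amz-* headers assumed)."""
    # StringToSign = Verb\nContent-MD5\nContent-Type\nDate\nCanonicalizedResource
    string_to_sign = "\n".join([method, content_md5, content_type, date]) \
        + "\n" + resource
    digest = hmac.new(secret.encode("utf-8"),
                      string_to_sign.encode("utf-8"),
                      hashlib.sha1).digest()
    signature = base64.b64encode(digest).decode("ascii")
    return "AWS %s:%s" % (access, signature)

# The GET / request from the log, signed with the EC2 blob above:
print(s3_v2_auth_header("admin", "adm1n0", "GET",
                        "Thu, 27 Oct 2016 09:31:07 +0000", "/"))
```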
and, of course, containers and objects already exist for admin/admin:
/opt# openstack object list c0
+----------+
| Name |
+----------+
| list.txt |
+----------+
Any ideas on what is wrong and how to proceed? I'm ready to answer any questions and provide any additional information. The full log of the proxy-server is attached; the config is below:
======== proxy-server.conf =========
[DEFAULT]
bind_ip = 0.0.0.0
bind_port = 8080
swift_dir = /etc/swift
user = stack
workers = 1
log_level = DEBUG
[pipeline:main]
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk tempurl ratelimit crossdomain swift3 s3token authtoken keystoneauth tempauth formpost staticweb copy container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server
[app:proxy-server]
allow_account_management = true
account_autocreate = true
conn_timeout = 20
node_timeout = 120
use = egg:swift#proxy
[filter:tempauth]
user_swiftprojecttest1_swiftusertest3 = testing3 .admin
user_swiftprojecttest2_swiftusertest2 = testing2 .admin
user_swiftprojecttest1_swiftusertest1 = testing .admin
use = egg:swift#tempauth
reseller_prefix = TEMPAUTH
user_admin_admin = admin .admin .reseller_admin
user_test_tester = testing .admin
user_test2_tester2 = testing2 .admin
user_test_tester3 = testing3
user_test5_tester5 = testing5 service
[filter:healthcheck]
use = egg:swift#healthcheck
[filter:cache]
use = egg:swift#memcache
[filter:ratelimit]
use = egg:swift#ratelimit
[filter:domain_remap]
use = egg:swift#domain_remap
[filter:catch_errors]
use = egg:swift#catch_errors
[filter:cname_lookup]
use = egg:swift#cname_lookup
[filter:staticweb]
use = egg:swift#staticweb
[filter:tempurl]
use = egg:swift#tempurl
[filter:formpost]
use = egg:swift#formpost
[filter:name_check]
use = egg:swift#name_check
[filter:list-endpoints]
use = egg:swift#list_endpoints
[filter:proxy-logging]
reveal_sensitive_prefix = 12
use = egg:swift#proxy_logging
[filter:bulk]
use = egg:swift#bulk
[filter:slo]
use = egg:swift#slo
[filter:dlo]
use = egg:swift#dlo
[filter:container-quotas]
use = egg:swift#container_quotas
[filter:account-quotas]
use = egg:swift#account_quotas
[filter:gatekeeper]
use = egg:swift#gatekeeper
[filter:container_sync]
use = egg:swift#container_sync
[filter:xprofile]
use = egg:swift#xprofile
[filter:versioned_writes]
use = egg:swift#versioned_writes
[filter:copy]
use = egg:swift#copy
[filter:keymaster]
use = egg:swift#keymaster
encryption_root_secret = changeme
[filter:encryption]
use = egg:swift#encryption
[filter:crossdomain]
use = egg:swift#crossdomain
[filter:authtoken]
include_service_catalog = False
cache = swift.cache
delay_auth_decision = 1
memcached_servers = de.vs.ta.ck:11211
signing_dir = /var/cache/swift
cafile = /opt/stack/data/ca-bundle.pem
auth_uri = http://de.vs.ta.ck/identity
project_domain_name = Default
project_name = service
user_domain_name = Default
password = paut1n0
username = swift
auth_url = http://de.vs.ta.ck/identity_admin
auth_type = password
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
log_name = swift
[filter:keystoneauth]
operator_roles = Member, admin
use = egg:swift#keystoneauth
[filter:s3token]
paste.filter_factory = keystonemiddleware.s3_token:filter_factory
auth_uri = http://de.vs.ta.ck/identity_admin
cafile = /opt/stack/data/ca-bundle.pem
admin_user = swift
admin_tenant_name = service
admin_password = paut1n0
[filter:swift3]
use = egg:swift3#swift3
location = RegionOne
Thank you!