s3api - 500 return code when listing a container + wrong date + swift CLI getting 403
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
OpenStack Object Storage (swift) | New | Undecided | Unassigned |
Bug Description
Hello,
We are trying to enable the s3api module on our proxy servers, which were recently upgraded to the Ussuri release.
We are having the following issues:
- getting a 500 error code when trying to list a bucket/container with the aws CLI
- getting 403 errors when trying to use curl or the swift CLI to manage a project while the s3api module is enabled
- all buckets are listed with a creation date of "2009-02-03 17:45:09".
We need your help on this, as we cannot see what is wrong with the S3 listing of a bucket, or why it prevents the swift CLI from being used.
Thank you in advance for your input and merry Xmas to you :)
#######
Here is some more information:
- We have followed the documentation provided:
https:/
- We have successfully created the EC2 credentials (roughly the commands used are shown just after this list).
- We are testing with s3cmd and the aws CLI (with a custom endpoint), as suggested here: https:/
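For reference, the EC2 credentials were created with the standard OpenStack client, roughly as follows (generic commands; the real values are redacted):
openstack ec2 credentials create
openstack ec2 credentials list
The access/secret values shown by the list command are what go into aws_access_key_id and aws_secret_access_key in .aws/config.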
We are able to:
- list containers/buckets inside a swift project
aws s3 ls
2009-02-03 17:45:09 container
2009-02-03 17:45:09 container12
2009-02-03 17:45:09 container123456
2009-02-03 17:45:09 container_dev
2009-02-03 17:45:09 container_s3
2009-02-03 17:45:09 container_test_s3
2009-02-03 17:45:09 containerlol
2009-02-03 17:45:09 sergetest
2009-02-03 17:45:09 test-bucket
2009-02-03 17:45:09 test123
2009-02-03 17:45:09 testbucket
2009-02-03 17:45:09 testbucket2
- create a container/bucket
aws --profile default s3 mb s3://bucketopendev
make_bucket: bucketopendev
- upload an object to a container/bucket
aws --profile default s3 cp test s3://bucketopendev
upload: ./test to s3://bucketopen
- remove an object inside a container/bucket
aws --profile default s3 rm s3://bucketopen
delete: s3://bucketopen
- remove a container/bucket
aws --profile default s3 rb s3://bucketopendev
remove_bucket: bucketopendev
.aws/config file:
cat .aws/config
[plugins]
endpoint = awscli_
[default]
aws_access_key_id = 5eXXXXX33981e4
aws_secret_
s3 =
endpoint_url = https:/
signature_version = s3v4
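Several of the lines above were truncated when pasting. For context, the general shape of this file (using the awscli endpoint plugin, presumably awscli_plugin_endpoint, with placeholder values instead of our real key and URL) is roughly:
[plugins]
endpoint = awscli_plugin_endpoint
[default]
aws_access_key_id = <EC2 access key>
aws_secret_access_key = <EC2 secret key>
s3 =
    endpoint_url = https://<swift-endpoint>
    signature_version = s3v4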
- proxy configuration that might be useful:
[pipeline:main]
pipeline = catch_errors healthcheck cache authtoken s3api s3token keystoneauth proxy-logging bulk account-quotas container-quotas proxy-server
[app:proxy-server]
use = egg:swift#proxy
allow_account_
account_autocreate = true
request_node_count = 2 * replicas
[filter:authtoken]
paste.filter_
admin_tenant_name = XW
admin_user = swift
admin_password = X
auth_host = api.ks
auth_port = 443
auth_protocol = https
identity_uri = https:/
auth_uri = https:/
signing_dir = /tmp/keystone-
delay_auth_decision = True
[filter:s3api]
use = egg:swift#s3api
s3_acl = false
dns_compliant_
check_bucket_owner = false
force_swift_
storage_domain = api.dev.cdn.net
[filter:s3token]
use = egg:swift#s3token
reseller_prefix = AUTH_
delay_auth_decision = True
auth_uri = https:/
http_timeout = 10.0
log_name = s3token
Issues seen:
- we are now unable to use the swift CLI, getting a 403. It looks like s3api is handling ALL the requests, as you can see when trying to authenticate to Swift after getting a Keystone token:
SWIFT CALL FAILED: $VAR1 = {
'url' => 'https:/
'status' => '403',
'headers' => {
'success' => '',
'reason' => 'Forbidden'
};
When s3api is disabled we are able to use the swift CLI again.
- error 500 when trying to list a container:
I've run a debug command in case it helps, trying to list the bucket that contains an object I've just uploaded:
24-12-2020 10:43:20 :] fcoutard@fcoutard /home/fcoutard : aws --profile default s3 mb s3://bucketopendev
make_bucket: bucketopendev
24-12-2020 10:43:34 :] fcoutard@fcoutard /home/fcoutard : aws s3 ls
2009-02-03 17:45:09 bucketopendev
2009-02-03 17:45:09 container
24-12-2020 10:43:37 :] fcoutard@fcoutard /home/fcoutard : vi test
24-12-2020 10:43:57 :] fcoutard@fcoutard /home/fcoutard : aws --profile default s3 cp test s3://bucketopendev
upload: ./test to s3://bucketopen
s3cmd's debug mode is more verbose, so I'm using that one.
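The output below is from an invocation along the lines of "s3cmd --debug ls s3://bucketopendev" (the bucket name is truncated in the pasted log, but it is the bucket created above):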
DEBUG: Command: ls
DEBUG: Bucket 's3://bucketope
DEBUG: CreateRequest: resource[uri]=/
DEBUG: ===== SEND Inner request to determine the bucket region =====
DEBUG: CreateRequest: resource[uri]=/
DEBUG: Using signature v4
DEBUG: get_hostname(
DEBUG: canonical_headers = host:api.
x-amz-content-
x-amz-date:
DEBUG: Canonical Request:
GET
/bucketopendev/
location=
host:api.
x-amz-content-
x-amz-date:
host;x-
e3b0c44298fc1c1
-------
DEBUG: signature-v4 headers: {'x-amz-date': '20201224T101500Z', 'Authorization': 'AWS4-HMAC-SHA256 Credential=
DEBUG: Processing request, please wait...
DEBUG: get_hostname(
DEBUG: ConnMan.get(): creating new connection: https:/
DEBUG: Using ca_certs_file None
DEBUG: httplib.
DEBUG: non-proxied HTTPSConnection
DEBUG: format_uri(): /bucketopendev/
DEBUG: Sending request method_
DEBUG: ConnMan.put(): connection put back to pool (https:/
DEBUG: Response:
{'data': b"<?xml version='1.0' encoding=
b'="http://
'headers': {'connection': 'keep-alive',
'reason': 'OK',
'status': 200}
DEBUG: ===== SUCCESS Inner request to determine the bucket region ('us-east-1') =====
DEBUG: Using signature v4
DEBUG: get_hostname(
DEBUG: canonical_headers = host:api.
x-amz-content-
x-amz-date:
DEBUG: Canonical Request:
GET
/bucketopendev/
delimiter=%2F
host:api.
x-amz-content-
x-amz-date:
host;x-
e3b0c44298fc1c1
-------
DEBUG: signature-v4 headers: {'x-amz-date': '20201224T101500Z', 'Authorization': 'AWS4-HMAC-SHA256 Credential=
DEBUG: Processing request, please wait...
DEBUG: get_hostname(
DEBUG: ConnMan.get(): re-using connection: https:/
DEBUG: format_uri(): /bucketopendev/
DEBUG: Sending request method_
DEBUG: ConnMan.put(): connection put back to pool (https:/
DEBUG: Response:
{'data': b"<?xml version='1.0' encoding=
'headers': {'connection': 'keep-alive',
'reason': 'Internal Server Error',
'status': 500}
DEBUG: S3Error: 500 (Internal Server Error)
DEBUG: HttpHeader: server: nginx/1.14.0 (Ubuntu)
DEBUG: HttpHeader: date: Thu, 24 Dec 2020 10:15:00 GMT
DEBUG: HttpHeader: content-type: application/xml
DEBUG: HttpHeader: transfer-encoding: chunked
DEBUG: HttpHeader: connection: keep-alive
DEBUG: HttpHeader: x-amz-id-2: tx4be0e45646954
DEBUG: HttpHeader: x-amz-request-id: tx4be0e45646954
DEBUG: HttpHeader: x-trans-id: tx4be0e45646954
DEBUG: HttpHeader: x-openstack-
DEBUG: ErrorXML: Code: 'InternalError'
DEBUG: ErrorXML: Message: 'We encountered an internal error. Please try again.'
DEBUG: ErrorXML: RequestId: 'tx4be0e4564695
DEBUG: ErrorXML: Reason: 'Expecting value [[test\ntest1\n]]: line 1 column 1 (char 0)'
WARNING: Retrying failed request: /?delimiter=%2F (500 (InternalError): We encountered an internal error. Please try again.)
WARNING: Waiting 3 sec...
By printing the error string in the Python code we were able to see that 'Expecting value: line 1 column 1 (char 0)' was actually:
'Reason>Expecting value [[test\ntest1\n]]: line 1 column 1 (char 0)'
so we can see it contains the 2 objects, but they are not displayed properly.
On the proxy we are getting the following error output:
Expecting value: line 1 column 1 (char 0):
Traceback (most recent call last):
File "/usr/lib/
File "/usr/lib/
File "/usr/lib/
File "/usr/lib/
File "/usr/lib/
File "/usr/lib/
Thank you for the detailed bug report! The problem seems to be our middleware auto-insertion, for listing_formats in particular. I expect that when your proxy starts you see log lines like:
Adding required filter versioned_writes to pipeline at position 8
Adding required filter dlo to pipeline at position 8
Adding required filter copy to pipeline at position 8
Adding required filter listing_formats to pipeline at position 8
Adding required filter gatekeeper to pipeline at position 1
Pipeline was modified. New pipeline is "catch_errors gatekeeper healthcheck cache authtoken s3api s3token keystoneauth proxy-logging listing_formats copy dlo versioned_writes bulk account-quotas container-quotas proxy-server".
The specific issue is how far to the right we insert the listing_formats middleware; we want it pretty close to the client so other middleware can rely on receiving JSON listings. The reason comes down to the limitations of how we do auto-insertion and the fact that there's only one instance of proxy-logging in the pipeline. We've recommended two for a while (see https://github.com/openstack/swift/commit/a622349) -- but as long as old configs were working, I completely understand the desire to not muck with them.
I think a pipeline like
pipeline = catch_errors healthcheck proxy-logging cache authtoken s3api s3token keystoneauth bulk account-quotas container-quotas proxy-logging proxy-server
should get you squared away; there will be some auto-insertion going on, but it should arrive at much better placement. FWIW, we've had some patches (like https://review.opendev.org/c/openstack/swift/+/635040 and https://review.opendev.org/c/openstack/swift/+/504472) to try to help with this general problem, but we haven't polished any of them to the point of being able to land them on master :-/
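If it helps to confirm the fix, restart the proxy and check its startup log for the new "Pipeline was modified" line (the path below is just an example; use wherever your proxy-server logs actually go):
grep "Pipeline was modified" /var/log/swift/proxy-server.log
You should see listing_formats land to the left of s3api this time, so s3api gets JSON listings from the backend again.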