Deployment failed with Upload cirros "TestVM" image failed

Bug #1320123 reported by Andrey Sledzinskiy
This bug affects 2 people
Affects: Fuel for OpenStack
Status: Fix Released
Importance: High
Assigned to: Fuel Library (Deprecated)

Bug Description

http://jenkins-product.srt.mirantis.net:8080/view/0_0_swarm/job/master_fuelmain.system_test.centos.thread_1/51/testReport/junit/%28root%29/ceph_multinode_with_cinder/ceph_multinode_with_cinder/

Steps:
1. Create cluster - CentOS, Simple, Flat Nova-network, Cinder for volumes and Ceph for images, 1 controller, 1 compute, 2 cinder+ceph nodes
2. Run deploy

Actual result - deploy failed with the following error in astute.log:
2014-05-15T23:10:18 err: [399] MCollective agents '3' didn't respond within the allotted time.

2014-05-15T23:10:18 debug: [399] 42f5bc8f-5895-4505-9e12-ca955c6f1d4f: cmd: /usr/bin/glance -N http://127.0.0.1:5000/v2.0/ -T admin -I admin -K admin image-create --name 'TestVM' --is-public true --container-format='bare' --disk-format='qcow2' --min-ram=64 --property murano_image_info='{"title": "Murano Demo", "type": "cirros.demo"}' --file '/opt/vm/cirros-x86_64-disk.img'
                                               mcollective error: 42f5bc8f-5895-4505-9e12-ca955c6f1d4f: MCollective agents '3' didn't respond within the allotted time.

2014-05-15T23:10:18 err: [399] 42f5bc8f-5895-4505-9e12-ca955c6f1d4f: Upload cirros "TestVM" image failed
2014-05-15T23:10:18 debug: [399] Data received by DeploymentProxyReporter to report it up: {"nodes"=>[{"uid"=>"3", "status"=>"error", "error_type"=>"deploy", "role"=>"controller"}]}
2014-05-15T23:10:18 debug: [399] Data send by DeploymentProxyReporter to report it up: {"nodes"=>[{"uid"=>"3", "status"=>"error", "error_type"=>"deploy", "role"=>"controller"}]}
2014-05-15T23:10:18 info: [399] Casting message to fuel: {"method"=>"deploy_resp", "args"=>{"task_uuid"=>"42f5bc8f-5895-4505-9e12-ca955c6f1d4f", "nodes"=>[{"uid"=>"3", "status"=>"error", "error_type"=>"deploy", "role"=>"controller"}]}}
2014-05-15T23:10:18 err: [399] Error running RPC method deploy: Upload cirros "TestVM" image failed, trace: ["/usr/lib64/ruby/gems/2.1.0/gems/astute-0.0.2/lib/astute/post_deploy_actions/upload_cirros_image.rb:91:in `process'", "/usr/lib64/ruby/gems/2.1.0/gems/astute-0.0.2/lib/astute/post_deploy_actions.rb:29:in `block in process'", "/usr/lib64/ruby/gems/2.1.0/gems/astute-0.0.2/lib/astute/post_deploy_actions.rb:29:in `each'", "/usr/lib64/ruby/gems/2.1.0/gems/astute-0.0.2/lib/astute/post_deploy_actions.rb:29:in `process'", "/usr/lib64/ruby/gems/2.1.0/gems/astute-0.0.2/lib/astute/orchestrator.rb:46:in `deploy'", "/usr/lib64/ruby/gems/2.1.0/gems/astute-0.0.2/lib/astute/server/dispatcher.rb:103:in `deploy'", "/usr/lib64/ruby/gems/2.1.0/gems/astute-0.0.2/lib/astute/server/server.rb:126:in `dispatch_message'", "/usr/lib64/ruby/gems/2.1.0/gems/astute-0.0.2/lib/astute/server/server.rb:89:in `block in dispatch'", "/usr/lib64/ruby/gems/2.1.0/gems/astute-0.0.2/lib/astute/server/task_queue.rb:64:in `call'", "/usr/lib64/ruby/gems/2.1.0/gems/astute-0.0.2/lib/astute/server/task_queue.rb:64:in `block in each'", "/usr/lib64/ruby/gems/2.1.0/gems/astute-0.0.2/lib/astute/server/task_queue.rb:56:in `each'", "/usr/lib64/ruby/gems/2.1.0/gems/astute-0.0.2/lib/astute/server/task_queue.rb:56:in `each'", "/usr/lib64/ruby/gems/2.1.0/gems/astute-0.0.2/lib/astute/server/server.rb:87:in `each_with_index'", "/usr/lib64/ruby/gems/2.1.0/gems/astute-0.0.2/lib/astute/server/server.rb:87:in `dispatch'", "/usr/lib64/ruby/gems/2.1.0/gems/astute-0.0.2/lib/astute/server/server.rb:72:in `block in perform_main_job'"]

Logs are attached
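
For context, the failing step is Astute's post-deploy action that registers the TestVM image in Glance by running the glance CLI on a controller node through MCollective (upload_cirros_image.rb in the trace above). A minimal Ruby sketch of that flow, using a local stand-in for the MCollective call (hypothetical helper names, not the actual Astute code):

    require 'open3'

    # Stand-in for Astute's "run shell command via MCollective" call. Here the
    # command runs locally; in Astute it is dispatched to the controller's
    # MCollective agent, which is where "agents didn't respond" timeouts occur.
    def run_on_controller(cmd)
      out, status = Open3.capture2e(cmd)
      { stdout: out, exit_code: status.exitstatus }
    end

    GLANCE = "/usr/bin/glance -N http://127.0.0.1:5000/v2.0/ " \
             "-T admin -I admin -K admin"

    def upload_cirros
      # Skip the upload if the image is already registered.
      return if run_on_controller("#{GLANCE} index | grep TestVM")[:exit_code] == 0

      cmd = "#{GLANCE} image-create --name 'TestVM' --is-public true " \
            "--container-format='bare' --disk-format='qcow2' --min-ram=64 " \
            "--file '/opt/vm/cirros-x86_64-disk.img'"
      result = run_on_controller(cmd)
      raise 'Upload cirros "TestVM" image failed' unless result[:exit_code] == 0
    end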

Revision history for this message
Andrey Sledzinskiy (asledzinskiy) wrote :
Mike Scherbakov (mihgen)
Changed in fuel:
assignee: Fuel Astute Team (fuel-astute) → Vladimir Sharshov (vsharshov)
Revision history for this message
Dmitry Ilyin (idv1985) wrote :

I can't confirm this.

Deployment with 1 controller and 2 dedicated Ceph nodes, Ceph for both Glance and Cinder, Ubuntu HA, ISO #206.

Provisioning and deployment went normally; the image was uploaded and creating instances works as expected.

Revision history for this message
Vladimir Sharshov (vsharshov) wrote :

I haven't found anything about problems in MCollective, but I have found a potential Ceph problem which can affect this.

I will try to reproduce it.

2014-05-15T20:21:40.199629+00:00 info: (/Stage[main]/Ceph::Mon/Exec[ceph-deploy mon create]) Starting to evaluate the resource
2014-05-15T20:21:40.200930+00:00 debug: (Exec[ceph-deploy mon create](provider=posix)) Executing check 'ceph mon stat | grep 10.108.7.4'
2014-05-15T20:21:40.201708+00:00 debug: Executing 'ceph mon stat | grep 10.108.7.4'
2014-05-15T20:21:40.507230+00:00 debug: (/Stage[main]/Ceph::Mon/Exec[ceph-deploy mon create]/unless) 2014-05-15 20:21:40.469280 7f8aa2e3c700 -1 monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication
2014-05-15T20:21:40.507370+00:00 debug: (/Stage[main]/Ceph::Mon/Exec[ceph-deploy mon create]/unless) 2014-05-15 20:21:40.469333 7f8aa2e3c700 0 librados: client.admin initialization error (2) No such file or directory
2014-05-15T20:21:40.507370+00:00 debug: (/Stage[main]/Ceph::Mon/Exec[ceph-deploy mon create]/unless) Error connecting to cluster: ObjectNotFound
2014-05-15T20:21:40.507370+00:00 debug: (Exec[ceph-deploy mon create](provider=posix)) Executing 'ceph-deploy mon create node-3:10.108.7.4'
2014-05-15T20:21:40.507813+00:00 debug: Executing 'ceph-deploy mon create node-3:10.108.7.4'
2014-05-15T20:21:46.403993+00:00 notice: (/Stage[main]/Ceph::Mon/Exec[ceph-deploy mon create]/returns) [ceph_deploy.cli][INFO ] Invoked (1.2.7): /usr/bin/ceph-deploy mon create node-3:10.108.7.4
2014-05-15T20:21:46.403993+00:00 notice: (/Stage[main]/Ceph::Mon/Exec[ceph-deploy mon create]/returns) [ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts node-3:10.108.7.4
2014-05-15T20:21:46.403993+00:00 notice: (/Stage[main]/Ceph::Mon/Exec[ceph-deploy mon create]/returns) [ceph_deploy.mon][DEBUG ] detecting platform for host node-3 ...
2014-05-15T20:21:46.403993+00:00 notice: (/Stage[main]/Ceph::Mon/Exec[ceph-deploy mon create]/returns) [ceph_deploy.sudo_pushy][DEBUG ] will use a remote connection without sudo
2014-05-15T20:21:46.403993+00:00 notice: (/Stage[main]/Ceph::Mon/Exec[ceph-deploy mon create]/returns) [ceph_deploy.mon][INFO ] distro info: CentOS 6.5 Final
2014-05-15T20:21:46.403993+00:00 notice: (/Stage[main]/Ceph::Mon/Exec[ceph-deploy mon create]/returns) [node-3][DEBUG ] determining if provided host has same hostname in remote
2014-05-15T20:21:46.403993+00:00 notice: (/Stage[main]/Ceph::Mon/Exec[ceph-deploy mon create]/returns) [node-3][DEBUG ] deploying mon to node-3
2014-05-15T20:21:46.403993+00:00 notice: (/Stage[main]/Ceph::Mon/Exec[ceph-deploy mon create]/returns) [node-3][DEBUG ] remote hostname: node-3
2014-05-15T20:21:46.404142+00:00 notice: (/Stage[main]/Ceph::Mon/Exec[ceph-deploy mon create]/returns) [node-3][INFO ] write cluster configuration to /etc/ceph/{cluster}.conf
2014-05-15T20:21:46.404142+00:00 notice: (/Stage[main]/Ceph::Mon/Exec[ceph-deploy mon create]/returns) [node-3][ERROR ] Traceback (most recent call last...

Changed in fuel:
status: New → Triaged
Revision history for this message
Vladimir Sharshov (vsharshov) wrote :

Reproduced on #211. Investigating.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to fuel-astute (master)

Related fix proposed to branch: master
Review: https://review.openstack.org/94164

Revision history for this message
Vladimir Sharshov (vsharshov) wrote :

This fix does not directly solve the problem, but it decreases the number of potential problem places in the code. Today I deployed a cluster for this bug 3 times and it reproduced only once (the first deploy). After this fix I will try to find the problem elsewhere, if the bug reproduces again.

Revision history for this message
Dmitry Ilyin (idv1985) wrote :

I have reproduced this:

2014-05-19 14:57:28 ERR
[412] Error running RPC method deploy: Upload cirros "TestVM" image failed, trace: ["/usr/lib64/ruby/gems/2.1.0/gems/astute-0.0.2/lib/astute/post_deploy_actions/upload_cirros_image.rb:91:in `process'", "/usr/lib64/ruby/gems/2.1.0/gems/astute-0.0.2/lib/astute/post_deploy_actions.rb:29:in `block in process'", "/usr/lib64/ruby/gems/2.1.0/gems/astute-0.0.2/lib/astute/post_deploy_actions.rb:29:in `each'", "/usr/lib64/ruby/gems/2.1.0/gems/astute-0.0.2/lib/astute/post_deploy_actions.rb:29:in `process'", "/usr/lib64/ruby/gems/2.1.0/gems/astute-0.0.2/lib/astute/orchestrator.rb:46:in `deploy'", "/usr/lib64/ruby/gems/2.1.0/gems/astute-0.0.2/lib/astute/server/dispatcher.rb:105:in `deploy'", "/usr/lib64/ruby/gems/2.1.0/gems/astute-0.0.2/lib/astute/server/server.rb:126:in `dispatch_message'", "/usr/lib64/ruby/gems/2.1.0/gems/astute-0.0.2/lib/astute/server/server.rb:89:in `block in dispatch'", "/usr/lib64/ruby/gems/2.1.0/gems/astute-0.0.2/lib/astute/server/task_queue.rb:64:in `call'", "/usr/lib64/ruby/gems/2.1.0/gems/astute-0.0.2/lib/astute/server/task_queue.rb:64:in `block in each'", "/usr/lib64/ruby/gems/2.1.0/gems/astute-0.0.2/lib/astute/server/task_queue.rb:56:in `each'", "/usr/lib64/ruby/gems/2.1.0/gems/astute-0.0.2/lib/astute/server/task_queue.rb:56:in `each'", "/usr/lib64/ruby/gems/2.1.0/gems/astute-0.0.2/lib/astute/server/server.rb:87:in `each_with_index'", "/usr/lib64/ruby/gems/2.1.0/gems/astute-0.0.2/lib/astute/server/server.rb:87:in `dispatch'", "/usr/lib64/ruby/gems/2.1.0/gems/astute-0.0.2/lib/astute/server/server.rb:72:in `block in perform_main_job'"]
2014-05-19 14:57:28 ERR
[412] 851b46e2-69d4-441f-8e18-2afd1e7ea68c: Upload cirros "TestVM" image failed
2014-05-19 14:57:28 ERR
[412] MCollective agents '3' didn't respond within the allotted time.
2014-05-19 13:05:48 ERR
[421] Exception during worker initialization: #<AMQP::TCPConnectionFailed: Could not estabilish TCP connection to 10.20.0.2:5672>, trace: ["/usr/lib64/ruby/gems/2.1.0/gems/amq-client-0.9.12/lib/amq/client/async/adapters/event_machine.rb:162:in `block in initialize'", "/usr/lib64/ruby/gems/2.1.0/gems/amq-client-0.9.12/lib/amq/client/async/adapter.rb:306:in `call'", "/usr/lib64/ruby/gems/2.1.0/gems/amq-client-0.9.12/lib/amq/client/async/adapter.rb:306:in `tcp_connection_failed'", "/usr/lib64/ruby/gems/2.1.0/gems/amq-client-0.9.12/lib/amq/client/async/adapters/event_machine.rb:290:in `unbind'", "/usr/lib64/ruby/gems/2.1.0/gems/eventmachine-1.0.3/lib/eventmachine.rb:1438:in `event_callback'", "/usr/lib64/ruby/gems/2.1.0/gems/eventmachine-1.0.3/lib/eventmachine.rb:187:in `run_machine'", "/usr/lib64/ruby/gems/2.1.0/gems/eventmachine-1.0.3/lib/eventmachine.rb:187:in `run'", "/usr/lib64/ruby/gems/2.1.0/gems/astute-0.0.2/lib/astute/server/worker.rb:38:in `run'", "/usr/lib64/ruby/gems/2.1.0/gems/raemon-0.3.0/lib/raemon/master.rb:343:in `worker_loop!'", "/usr/lib64/ruby/gems/2.1.0/gems/raemon-0.3.0/lib/raemon/master.rb:266:in `block (2 levels) in spawn_workers'", "/usr/lib64/ruby/gems/2.1.0/gems/raemon-0.3.0/lib/raemon/master.rb:266:in `fork'", "/usr/lib64/ruby/gems/2.1.0/gems/raemon-0.3.0/lib/raemon/master.rb:266:in `block in spawn_worke...


Revision history for this message
Dmitry Ilyin (idv1985) wrote :

Looks like this problem has nothing to do with MCollective and Astute. It's Glance/Ceph that don't work and hang while trying to either upload an image or even get a list of images. Perhaps this bug is related too: https://bugs.launchpad.net/fuel/+bug/1295717
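
If Glance/Ceph hang rather than fail fast, the hang surfaces only indirectly as the MCollective timeout. A bounded local check makes it visible directly; a minimal diagnostic sketch, assuming the same glance CLI invocation quoted in the logs:

    require 'open3'
    require 'timeout'

    GLANCE = "/usr/bin/glance -N http://192.168.0.2:5000/v2.0/ " \
             "-T admin -I admin -K admin"

    # True if 'glance index' answers within the deadline; false if it exits
    # non-zero or hangs (as described above). On timeout the child process is
    # abandoned, which is acceptable only for a one-off diagnostic like this.
    def glance_responsive?(deadline = 60)
      Timeout.timeout(deadline) do
        _out, status = Open3.capture2e("#{GLANCE} index")
        status.success?
      end
    rescue Timeout::Error
      false
    end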

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to fuel-astute (master)

Reviewed: https://review.openstack.org/94164
Committed: https://git.openstack.org/cgit/stackforge/fuel-astute/commit/?id=b110b31dcbcdbadf8eae0e0e4d526cfa0a7e4f19
Submitter: Jenkins
Branch: master

commit b110b31dcbcdbadf8eae0e0e4d526cfa0a7e4f19
Author: Vladimir Sharshov <email address hidden>
Date: Mon May 19 14:41:09 2014 +0400

    Use recommended method to execute shell commands

    Use default way from MCollective to execute shell commands.
    We use latest version in all systems(Ubuntu and CentOS)
    which most stable at now moment.
    It can possible prevent problem with unexpected timeout.

    Change-Id: If5a900e046ee0b0e487376df14349c56b05779e2
    Related-Bug: #1320123
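
For illustration, MCollective's recommended way to execute shell commands inside an agent is the framework's built-in run helper, which captures stdout/stderr into the reply and returns the exit code instead of hand-rolled process spawning. A sketch of the idea (not the actual fuel-astute patch; see the review above):

    # Sketch of an MCollective SimpleRPC agent action using the built-in
    # 'run' helper (illustrative only).
    module MCollective
      module Agent
        class Execute < RPC::Agent
          action "cmd" do
            # run() stores output under the given reply keys and returns
            # the command's exit code.
            reply[:exit_code] = run(request[:cmd],
                                    :stdout => :stdout,
                                    :stderr => :stderr,
                                    :chomp  => true)
          end
        end
      end
    end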

Changed in fuel:
status: Triaged → In Progress
Revision history for this message
Mike Scherbakov (mihgen) wrote :

Needs additional confirmation and research. It is likely an issue somewhere between Cinder and Ceph; services may not be up at the time we try to upload an image.
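
If that hypothesis holds, gating the upload on a readiness check would avoid the failure. A hypothetical retry sketch (not Astute code):

    require 'open3'

    # Poll 'glance index' until Glance answers, then give the go-ahead for
    # the image upload. Hypothetical helper illustrating the "services may
    # not be up yet" hypothesis above.
    def wait_for_glance(glance_cmd, attempts = 10, delay = 30)
      attempts.times do
        _out, status = Open3.capture2e("#{glance_cmd} index")
        return true if status.success?
        sleep delay
      end
      false
    end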

Changed in fuel:
assignee: Vladimir Sharshov (vsharshov) → Fuel Library Team (fuel-library)
status: In Progress → Confirmed
Mike Scherbakov (mihgen)
Changed in fuel:
assignee: Fuel Library Team (fuel-library) → Dmitry Ilyin (idv1985)
Revision history for this message
Aleksey Kasatkin (alekseyk-ru) wrote :

ISO #214.

2014-05-21 09:58:55 ERR
[412] Error running RPC method deploy: Upload cirros "TestVM" image failed, trace: ["/usr/lib64/ruby/gems/2.1.0/gems/astute-0.0.2/lib/astute/post_deploy_actions/upload_cirros_image.rb:91:in `process'", "/usr/lib64/ruby/gems/2.1.0/gems/astute-0.0.2/lib/astute/post_deploy_actions.rb:29:in `block in process'", "/usr/lib64/ruby/gems/2.1.0/gems/astute-0.0.2/lib/astute/post_deploy_actions.rb:29:in `each'", "/usr/lib64/ruby/gems/2.1.0/gems/astute-0.0.2/lib/astute/post_deploy_actions.rb:29:in `process'", "/usr/lib64/ruby/gems/2.1.0/gems/astute-0.0.2/lib/astute/orchestrator.rb:46:in `deploy'", "/usr/lib64/ruby/gems/2.1.0/gems/astute-0.0.2/lib/astute/server/dispatcher.rb:105:in `deploy'", "/usr/lib64/ruby/gems/2.1.0/gems/astute-0.0.2/lib/astute/server/server.rb:126:in `dispatch_message'", "/usr/lib64/ruby/gems/2.1.0/gems/astute-0.0.2/lib/astute/server/server.rb:89:in `block in dispatch'", "/usr/lib64/ruby/gems/2.1.0/gems/astute-0.0.2/lib/astute/server/task_queue.rb:64:in `call'", "/usr/lib64/ruby/gems/2.1.0/gems/astute-0.0.2/lib/astute/server/task_queue.rb:64:in `block in each'", "/usr/lib64/ruby/gems/2.1.0/gems/astute-0.0.2/lib/astute/server/task_queue.rb:56:in `each'", "/usr/lib64/ruby/gems/2.1.0/gems/astute-0.0.2/lib/astute/server/task_queue.rb:56:in `each'", "/usr/lib64/ruby/gems/2.1.0/gems/astute-0.0.2/lib/astute/server/server.rb:87:in `each_with_index'", "/usr/lib64/ruby/gems/2.1.0/gems/astute-0.0.2/lib/astute/server/server.rb:87:in `dispatch'", "/usr/lib64/ruby/gems/2.1.0/gems/astute-0.0.2/lib/astute/server/server.rb:72:in `block in perform_main_job'"]
2014-05-21 09:58:55 ERR
[412] c9c73cf2-ae03-43a6-a376-5e8777b3a413: Upload cirros "TestVM" image failed
2014-05-21 09:58:55 ERR
[412] c9c73cf2-ae03-43a6-a376-5e8777b3a413: cmd: /usr/bin/glance -N http://192.168.0.2:5000/v2.0/ -T admin -I admin -K admin image-create --name 'TestVM' --is-public true --container-format='bare' --disk-format='qcow2' --min-ram=64 --property murano_image_info='{"title": "Murano Demo", "type": "cirros.demo"}' --file '/opt/vm/cirros-x86_64-disk.img'
                                               mcollective error: c9c73cf2-ae03-43a6-a376-5e8777b3a413: MCollective agents '1' didn't respond within the allotted time.
2014-05-21 09:58:55 ERR
[412] MCollective agents '1' didn't respond within the allotted time.
2014-05-21 09:56:54 ERR
[412] c9c73cf2-ae03-43a6-a376-5e8777b3a413: cmd: /usr/bin/glance -N http://192.168.0.2:5000/v2.0/ -T admin -I admin -K admin index && (/usr/bin/glance -N http://192.168.0.2:5000/v2.0/ -T admin -I admin -K admin index | grep TestVM)
                                               mcollective error: c9c73cf2-ae03-43a6-a376-5e8777b3a413: MCollective agents '1' didn't respond within the allotted time.
2014-05-21 09:56:54 ERR
[412] MCollecti...


Revision history for this message
Aleksey Kasatkin (alekseyk-ru) wrote :
Revision history for this message
Aleksey Kasatkin (alekseyk-ru) wrote :

Second try has led to the same result.

Revision history for this message
Mike Scherbakov (mihgen) wrote :

Alexey's env:
HA: 2 controllers, FlatDHCP, 1 mongo, 1 compute

The Swift logs contain errors:
2014-05-21T09:26:34.101808+00:00 err: ERROR with Account server 192.168.1.3:6002/1 re: Trying to GET /v1/AUTH_cb1dd3f67f1e40c4a715c56589fd7cf2: ConnectionTimeout (0.5s) (txn: tx1a83d7a74e72488a9f971-00537c7133) (client_ip: 172.16.0.2)
2014-05-21T09:26:34.274523+00:00 err: STDOUT: ERROR:root:Error connecting to memcached: node-2:11211#012Traceback (most recent call last):#012 File "/usr/lib/python2.6/site-packages/swift/common/memcached.py", line 239, in _get_conns#012 fp, sock = self._client_cache[server].get()#012 File "/usr/lib/python2.6/site-packages/swift/common/memcached.py", line 135, in get#012 fp, sock = self.create()#012 File "/usr/lib/python2.6/site-packages/swift/common/memcached.py", line 128, in create#012 sock.connect((host, int(port)))#012 File "/usr/lib/python2.6/site-packages/eventlet/greenio.py", line 192, in connect#012 socket_checkerr(fd)#012 File "/usr/lib/python2.6/site-packages/eventlet/greenio.py", line 46, in socket_checkerr#012 raise socket.error(err, errno.errorcode[err])#012error: [Errno 113] EHOSTUNREACH (txn: tx1a83d7a74e72488a9f971-00537c7133) (client_ip: 172.16.0.2)
2014-05-21T09:26:34.935341+00:00 err: ERROR with Account server 192.168.1.3:6002/1 re: Trying to GET /v1/AUTH_de3fef055aca4d378891859119f8bb0a: ConnectionTimeout (0.5s) (txn: txcd208d0404ea41daa98df-00537c714a) (client_ip: 172.16.0.2)
2014-05-21T09:26:35.335924+00:00 err: ERROR with Account server 192.168.1.3:6002/2 re: Trying to GET /v1/AUTH_de3fef055aca4d378891859119f8bb0a: Host unreachable (txn: txcd208d0404ea41daa98df-00537c714a) (client_ip: 172.16.0.2)
2014-05-21T09:26:35.891709+00:00 err: ERROR with Account server 192.168.1.3:6002/2 re: Trying to HEAD /v1/AUTH_cb1dd3f67f1e40c4a715c56589fd7cf2: ConnectionTimeout (0.5s) (txn: txfc6fb5add9d34d4da1bc6-00537c714b) (client_ip: 172.16.0.2)
2014-05-21T09:26:36.392949+00:00 err: ERROR with Account server 192.168.1.3:6002/1 re: Trying to HEAD /v1/AUTH_cb1dd3f67f1e40c4a715c56589fd7cf2: ConnectionTimeout (0.5s) (txn: txfc6fb5add9d34d4da1bc6-00537c714b) (client_ip: 172.16.0.2)
2014-05-21T09:26:36.693036+00:00 err: STDOUT: ERROR:root:Timeout connecting to memcached: node-2:11211 (txn: txfc6fb5add9d34d4da1bc6-00537c714b) (client_ip: 172.16.0.2)
2014-05-21T09:26:37.231120+00:00 err: ERROR with Account server 192.168.1.3:6002/1 re: Trying to HEAD /v1/AUTH_de3fef055aca4d378891859119f8bb0a: ConnectionTimeout (0.5s) (txn: txb7a1a8fe421e4870a5310-00537c714c) (client_ip: 172.16.0.2)
2014-05-21T09:26:37.768836+00:00 err: ERROR with Account server 192.168.1.3:6002/2 re: Trying to HEAD /v1/AUTH_de3fef055aca4d378891859119f8bb0a: ConnectionTimeout (0.5s) (txn: txb7a1a8fe421e4870a5310-00537c714c) (client_ip: 172.16.0.2)

Revision history for this message
Mike Scherbakov (mihgen) wrote :

I was not able to reproduce the issue. I believe the env was overloaded, so it exceeded the allowed timeout of the image upload process.
I had 2 controllers, FlatDHCP CentOS, 1 compute node, and the deployment passed fine; the image was uploaded on ISO #209.

Closing as Incomplete for now. We need precise steps to reproduce and a particular environment to identify the issue.

Changed in fuel:
status: Confirmed → Incomplete
Revision history for this message
Vladimir Kuklin (vkuklin) wrote :

Discussed the bug with Alexey - it really seems like a performance issue. We need a reproducer from someone else. Also, the bug description is very generic - an image upload can fail for a thousand reasons.

Revision history for this message
Vladimir Kuklin (vkuklin) wrote :
Changed in fuel:
assignee: Dmitry Ilyin (idv1985) → Fuel OSCI Team (fuel-osci)
Changed in fuel:
status: Incomplete → Confirmed
Changed in fuel:
status: Confirmed → Fix Committed
Revision history for this message
Andrey Sledzinskiy (asledzinskiy) wrote :

Got the same error during deployment with bonding in system test results:
http://jenkins-product.srt.mirantis.net:8080/view/5.0_swarm/job/5.0_fuelmain.system_test.ubuntu.bonding/4/testReport/junit/%28root%29/deploy_bonding_ha_balance_slb/deploy_bonding_ha_balance_slb/

Errors in astute logs:
[418] Error running RPC method deploy: Upload cirros "TestVM" image failed, trace: ["/usr/lib64/ruby/gems/2.1.0/gems/astute-0.0.2/lib/astute/post_deploy_actions/upload_cirros_image.rb:91:in `process'", "/usr/lib64/ruby/gems/2.1.0/gems/astute-0.0.2/lib/astute/post_deploy_actions.rb:29:in `block in process'", "/usr/lib64/ruby/gems/2.1.0/gems/astute-0.0.2/lib/astute/post_deploy_actions.rb:29:in `each'", "/usr/lib64/ruby/gems/2.1.0/gems/astute-0.0.2/lib/astute/post_deploy_actions.rb:29:in `process'", "/usr/lib64/ruby/gems/2.1.0/gems/astute-0.0.2/lib/astute/orchestrator.rb:46:in `deploy'", "/usr/lib64/ruby/gems/2.1.0/gems/astute-0.0.2/lib/astute/server/dispatcher.rb:105:in `deploy'", "/usr/lib64/ruby/gems/2.1.0/gems/astute-0.0.2/lib/astute/server/server.rb:126:in `dispatch_message'", "/usr/lib64/ruby/gems/2.1.0/gems/astute-0.0.2/lib/astute/server/server.rb:89:in `block in dispatch'", "/usr/lib64/ruby/gems/2.1.0/gems/astute-0.0.2/lib/astute/server/task_queue.rb:64:in `call'", "/usr/lib64/ruby/gems/2.1.0/gems/astute-0.0.2/lib/astute/server/task_queue.rb:64:in `block in each'", "/usr/lib64/ruby/gems/2.1.0/gems/astute-0.0.2/lib/astute/server/task_queue.rb:56:in `each'", "/usr/lib64/ruby/gems/2.1.0/gems/astute-0.0.2/lib/astute/server/task_queue.rb:56:in `each'", "/usr/lib64/ruby/gems/2.1.0/gems/astute-0.0.2/lib/astute/server/server.rb:87:in `each_with_index'", "/usr/lib64/ruby/gems/2.1.0/gems/astute-0.0.2/lib/astute/server/server.rb:87:in `dispatch'", "/usr/lib64/ruby/gems/2.1.0/gems/astute-0.0.2/lib/astute/server/server.rb:72:in `block in perform_main_job'"]

Cluster configuration - Ubuntu, HA, Neutron GRE, 3 controllers, 2 compute, balance-slb bonding for all interfaces

Logs are attached

Revision history for this message
Andrey Sledzinskiy (asledzinskiy) wrote :
Changed in fuel:
status: Fix Committed → Confirmed
assignee: Fuel OSCI Team (fuel-osci) → Fuel Library Team (fuel-library)
Revision history for this message
Vladimir Kuklin (vkuklin) wrote :

Andrey,

1) we needed a reproducer from a different environment, at least;
2) you have a specific configuration for bonding - network connectivity problems. I see this in the glance-api logs:

2014-05-26T00:29:47.460216+00:00 debug: 2014-05-26 00:29:40.149 26860 ERROR swiftclient [e0ec78b9-472d-430b-b2a3-46558fca50ff 535a6b9342e8465ba54c5cb125d3ef1c 9c9688e5dd7740cf925af6a3327ae0a8 - - -] [Errno 113] EHOSTUNREACH
2014-05-26T00:29:47.460216+00:00 debug: 2014-05-26 00:29:40.149 26860 TRACE swiftclient Traceback (most recent call last):
2014-05-26T00:29:47.460216+00:00 debug: 2014-05-26 00:29:40.149 26860 TRACE swiftclient File "/usr/lib/python2.7/dist-packages/swiftclient/client.py", line 1110, in _retry
2014-05-26T00:29:47.460216+00:00 debug: 2014-05-26 00:29:40.149 26860 TRACE swiftclient rv = func(self.url, self.token, *args, **kwargs)
2014-05-26T00:29:47.460216+00:00 debug: 2014-05-26 00:29:40.149 26860 TRACE swiftclient File "/usr/lib/python2.7/dist-packages/swiftclient/client.py", line 570, in head_container
2014-05-26T00:29:47.460216+00:00 debug: 2014-05-26 00:29:40.149 26860 TRACE swiftclient conn.request(method, path, '', req_headers)
2014-05-26T00:29:47.460216+00:00 debug: 2014-05-26 00:29:40.149 26860 TRACE swiftclient File "/usr/lib/python2.7/dist-packages/swiftclient/client.py", line 168, in request_escaped
2014-05-26T00:29:47.460216+00:00 debug: 2014-05-26 00:29:40.149 26860 TRACE swiftclient func(method, url, body=body, headers=headers or {})
2014-05-26T00:29:47.460216+00:00 debug: 2014-05-26 00:29:40.149 26860 TRACE swiftclient File "/usr/lib/python2.7/httplib.py", line 958, in request
2014-05-26T00:29:47.460216+00:00 debug: 2014-05-26 00:29:40.149 26860 TRACE swiftclient self._send_request(method, url, body, headers)
2014-05-26T00:29:47.460216+00:00 debug: 2014-05-26 00:29:40.149 26860 TRACE swiftclient File "/usr/lib/python2.7/httplib.py", line 992, in _send_request
2014-05-26T00:29:47.460216+00:00 debug: 2014-05-26 00:29:40.149 26860 TRACE swiftclient self.endheaders(body)
2014-05-26T00:29:47.460216+00:00 debug: 2014-05-26 00:29:40.149 26860 TRACE swiftclient File "/usr/lib/python2.7/httplib.py", line 954, in endheaders
2014-05-26T00:29:47.460216+00:00 debug: 2014-05-26 00:29:40.149 26860 TRACE swiftclient self._send_output(message_body)
2014-05-26T00:29:47.460216+00:00 debug: 2014-05-26 00:29:40.149 26860 TRACE swiftclient File "/usr/lib/python2.7/httplib.py", line 814, in _send_output
2014-05-26T00:29:47.460216+00:00 debug: 2014-05-26 00:29:40.149 26860 TRACE swiftclient self.send(msg)
2014-05-26T00:29:47.460216+00:00 debug: 2014-05-26 00:29:40.149 26860 TRACE swiftclient File "/usr/lib/python2.7/httplib.py", line 776, in send
2014-05-26T00:29:47.460216+00:00 debug: 2014-05-26 00:29:40.149 26860 TRACE swiftclient self.connect()
2014-05-26T00:29:47.460216+00:00 debug: 2014-05-26 00:29:40.149 26860 TRACE swiftclient File "/usr/lib/python2.7/httplib.py", line 757, in connect
2014-05-26T00:29:47.460216+00:00 debug: 2014-05-26 00:29:40.149 26860 TRACE swiftclient self.timeout, self.source_address)
2014-05-26T00:29:47.460216+00:00 debug: 2014-05-26 00:29:40.149 268...


Changed in fuel:
status: Confirmed → Fix Committed
Revision history for this message
OSCI Robot (oscirobot) wrote :

Package keystone has been built from changeset: https://review.fuel-infra.org/84
RPM Repository URL: http://osci-obs.vm.mirantis.net:82/centos-fuel-6.0-stable-84/centos

Revision history for this message
OSCI Robot (oscirobot) wrote :

Package keystone has been built from changeset: https://review.fuel-infra.org/84
DEB Repository URL: http://osci-obs.vm.mirantis.net:82/ubuntu-fuel-6.0-stable-84/ubuntu

Revision history for this message
OSCI Robot (oscirobot) wrote :

Package keystone has been built from changeset: https://review.fuel-infra.org/84
RPM Repository URL: http://osci-obs.vm.mirantis.net:82/centos-fuel-6.0-stable/centos

Revision history for this message
OSCI Robot (oscirobot) wrote :

Package keystone has been built from changeset: https://review.fuel-infra.org/84
DEB Repository URL: http://osci-obs.vm.mirantis.net:82/ubuntu-fuel-6.0-stable/ubuntu

Changed in fuel:
status: Fix Committed → Fix Released
Revision history for this message
Fuel Devops McRobotson (fuel-devops-robot) wrote : Fix proposed to openstack/keystone (openstack-ci/fuel-7.0/2015.1.0)

Fix proposed to branch: openstack-ci/fuel-7.0/2015.1.0
Change author: Alexander Makarov <email address hidden>
Review: https://review.fuel-infra.org/8167

Revision history for this message
Fuel Devops McRobotson (fuel-devops-robot) wrote : Change abandoned on openstack/keystone (openstack-ci/fuel-7.0/2015.1.0)

Change abandoned by Alexander Makarov <email address hidden> on branch: openstack-ci/fuel-7.0/2015.1.0
Review: https://review.fuel-infra.org/8167

Revision history for this message
Fuel Devops McRobotson (fuel-devops-robot) wrote : Fix proposed to openstack/keystone (openstack-ci/fuel-6.1/2014.2)

Fix proposed to branch: openstack-ci/fuel-6.1/2014.2
Change author: Alexander Makarov <email address hidden>
Review: https://review.fuel-infra.org/15988

Revision history for this message
Fuel Devops McRobotson (fuel-devops-robot) wrote : Change abandoned on openstack/keystone (openstack-ci/fuel-6.1/2014.2)

Change abandoned by Sergii Rizvan <email address hidden> on branch: openstack-ci/fuel-6.1/2014.2
Review: https://review.fuel-infra.org/15988
