Lost connection to MySQL during query

Bug #1691951 reported by Michele Baldessari
This bug affects 4 people
Affects   Status        Importance   Assigned to           Milestone
tripleo   Fix Released  Medium       Michele Baldessari
zaqar     Fix Released  Medium       Unassigned

Bug Description

Sporadically I will get the following error when deploying an overcloud. The error comes from the undercloud, as if MySQL there has a short hiccup:
+ openstack overcloud deploy --templates /home/stack/tripleo-heat-templates --libvirt-type qemu --control-flavor oooq_control --compute-flavor oooq_compute --ceph-storage-flavor oooq_ceph --block-storage-flavor oooq_blockstorage --swift-storage-flavor oooq_objectstorage --timeout 90 -e /home/stack/cloud-names.yaml -e /home/stack/tripleo-heat-templates/environments/network-isolation.yaml -e /home/stack/tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml -e /home/stack/network-environment.yaml -e /home/stack/tripleo-heat-templates/environments/puppet-pacemaker.yaml -e /home/stack/tripleo-heat-templates/environments/low-memory-usage.yaml --validation-warnings-fatal --control-scale 1 --ntp-server pool.ntp.org -e /home/stack/tripleo-heat-templates/environments/config-debug.yaml -e /home/stack/tripleo-heat-templates/environments/ha-docker.yaml -r /home/stack/custom_roles.yaml
Started Mistral Workflow tripleo.validations.v1.check_pre_deployment_validations. Execution ID: eed7c3b1-6f6b-4712-825b-8541fcce645f
{u'body': {u'exception': u"(pymysql.err.OperationalError) (2013, 'Lost connection to MySQL server during query') [SQL: u'INSERT INTO `Queues` (project, name, metadata) VALUES (%(project)s, %(name)s, %(metadata)s)'] [parameters: {'project': u'5ac518c14589482da9d97c4ca4a58b6a', 'name': u'2eff06e4-2712-4f78-8ab6-6b934c852aef', 'metadata': bytearray(b'{}')}]", u'error': u'Unexpected error.'}, u'headers': {u'status': 500}, u'request': {u'action': u'queue_create', u'body': {u'queue_name': u'2eff06e4-2712-4f78-8ab6-6b934c852aef'}, u'api': u'v2', u'headers': {u'Client-ID': u'dcab4333-7ba2-42e3-a0d1-01b31bf97f75', u'X-Project-ID': u'5ac518c14589482da9d97c4ca4a58b6a'}}}
+ status_code=1
+ openstack stack list

So the error is at the following time:
2017-05-19 06:29:42.015 31478 INFO workflow_trace [-] Task 'send_message' (de1c9cc5-2916-4459-8c9b-8c219d4e965d) [RUNNING -> ERROR, msg=Failed to run action [action_ex_id=814164d4-7d79-4861-a962-394c7d402c39, action_cls='<class 'mistral.actions.action_factory.ZaqarAction'>', attributes='{u'client_method_name': u'queue_post'}', params='{u'queue_name': u'2eff06e4-2712-4f78-8ab6-6b934c852aef', u'messages': {u'body': {u'type': u'tripleo.validations.v1.check_boot_images', u'payload': {u'status': u'SUCCESS', u'errors': [], u'warnings': [], u'kernel_id': u'eedfbbeb-023a-4717-96ac-842376273c85', u'ramdisk_id': u'd42345d8-8a69-4cdd-9c83-e423555690ad', u'message': u'', u'execution': {u'name': u'tripleo.validations.v1.check_boot_images', u'created_at': u'2017-05-19 06:29:40', u'id': u'708639de-3a8f-4536-888c-31783f345757', u'params': {u'index': 0, u'task_execution_id': u'7a97b920-cdfc-4d57-b893-1f4f37d0ba31'}, u'input': {u'deploy_ramdisk_name': u'bm-deploy-ramdisk', u'deploy_kernel_name': u'bm-deploy-kernel', u'queue_name': u'2eff06e4-2712-4f78-8ab6-6b934c852aef', u'run_validations': True}, u'spec': {u'input': [{u'deploy_kernel_name': u'bm-deploy-kernel'}, {u'deploy_ramdisk_name': u'bm-deploy-ramdisk'}, {u'run_validations': True}, {u'queue_name': u'tripleo'}], u'tasks': {u'send_message': {u'retry': u'count=5 delay=1', u'name': u'send_message', u'on-success': [{u'fail': u'<% $.get(\'status\') = "FAILED" %>'}], u'version': u'2.0', u'action': u'zaqar.queue_post', u'input': {u'queue_name': u'<% $.queue_name %>', u'messages': {u'body': {u'type': u'tripleo.validations.v1.check_boot_images', u'payload': {u'status': u"<% $.get('status', 'SUCCESS') %>", u'errors': u'<% $.errors %>', u'warnings': u'<% $.warnings %>', u'kernel_id': u'<% $.kernel_id %>', u'ramdisk_id': u'<% $.ramdisk_id %>', u'message': u"<% $.get('message', '') %>", u'execution': u'<% execution() %>'}}}}, u'type': u'direct'}, u'fail_check_images': {u'version': u'2.0', u'type': u'direct', u'name': u'fail_check_images', u'publish': {u'status': u'FAILED', u'message': u'<% task(check_images).result %>'}, u'on-success': u'send_message'}, u'check_images': {u'name': u'check_images', u'on-error': u'fail_check_images', u'on-success': u'send_message', u'publish-on-error': {u'kernel_id': u'<% task(check_images).result.kernel_id %>', u'ramdisk_id': u'<% task(check_images).result.ramdisk_id %>', u'errors': u'<% task(check_images).result.errors %>', u'warnings': u'<% task(check_images).result.warnings %>'}, u'publish': {u'kernel_id': u'<% task(check_images).result.kernel_id %>', u'ramdisk_id': u'<% task(check_images).result.ramdisk_id %>', u'errors': u'<% task(check_images).result.errors %>', u'warnings': u'<% task(check_images).result.warnings %>'}, u'version': u'2.0', u'action': u'tripleo.validations.check_boot_images', u'input': {u'images': u'<% $.images %>', u'deploy_kernel_name': u'<% $.deploy_kernel_name %>', u'deploy_ramdisk_name': u'<% $.deploy_ramdisk_name %>'}, u'type': u'direct'}, u'check_run_validations': {u'on-complete': [{u'get_images': u'<% $.run_validations %>'}, {u'send_message': u'<% not $.run_validations %>'}], u'version': u'2.0', u'type': u'direct', u'name': u'check_run_validations'}, u'get_images': {u'name': u'get_images', u'on-success': u'check_images', u'publish': {u'images': u'<% task(get_images).result %>'}, u'version': u'2.0', u'action': u'glance.images_list', u'type': u'direct'}}, u'name': u'check_boot_images', u'version': u'2.0', u'output': {u'kernel_id': u'<% $.kernel_id %>', u'errors': u'<% $.errors %>', u'ramdisk_id': u'<% 
$.ramdisk_id %>', u'warnings': u'<% $.warnings %>'}}}}}}}']
 ZaqarAction.queue_post failed: <class 'zaqarclient.transport.errors.InternalServerError'>: Error response from Zaqar. Code: 500. Title: Internal server error. Description: (pymysql.err.OperationalError) (2013, 'Lost connection to MySQL server during query') [SQL: u'SELECT `Queues`.metadata \nFROM `Queues` \nWHERE `Queues`.project = %(project_1)s AND `Queues`.name = %(name_1)s'] [parameters: {u'name_1': '2eff06e4-2712-4f78-8ab6-6b934c852aef', u'project_1': u'5ac518c14589482da9d97c4ca4a58b6a'}].] (execution_id=708639de-3a8f-4536-888c-31783f345757)
2017-05-19 06:29:42.020 31478 INFO workflow_trace [-] Task 'send_message' [ERROR -> DELAYED, delay = 1 sec] (execution_id=708639de-3a8f-4536-888c-31783f345757 task_id=de1c9cc5-2916-4459-8c9b-8c219d4e965d)

Yet there is nothing in /var/log/mariadb/mariadb.log on the undercloud.

Revision history for this message
Michele Baldessari (michele) wrote :

Full sosreport of the undercloud here: http://people.redhat.com/mbaldess/lp1691951/sosreport-undercloud-20170519063650.tar.xz

Note that the undercloud had been installed many hours before the deployment that hit the error mentioned in this bug.

Revision history for this message
Martin André (mandre) wrote :

I've also seen this in my environment. Note that re-running the same deploy command usually succeeds on the second try.

Revision history for this message
Mike Bayer (zzzeek) wrote :

Copying from my mail to Michele:

The original thinking for this is that something is disrupting network connectivity while the query is in flight. Examples could be haproxy being restarted, VIPs being moved, neutron fiddling with routes, something like that.

but that this is the undercloud is of course a little strange, because that's a one-node, stable system and there shouldn't be any such changes occurring.

The other cause given by MySQL's docs and by googling is a timeout condition, which per them is a "millions of rows" situation; but if the undercloud is a VM being taxed, perhaps this condition can be triggered more frequently. The solution there is to increase net_read_timeout to a number like 60 seconds or higher in the my.cnf files:

https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_net_read_timeout

The error message stating "during query" tells us that it's a read, not a write. I think this would mean it's waiting too long for the INSERT to return a response.

If you normally see this happen a few times a day then maybe try changing that parameter and see if the frequency decreases. Also, is the undercloud a VM, and what is the config for the VM? In particular, we've observed that the disk cache settings affect MySQL performance, particularly if innodb_file_per_table is on; we usually set cache="unsafe", see the patch at https://review.openstack.org/#/c/432444/.

Revision history for this message
Mike Bayer (zzzeek) wrote :

Ah, one wrinkle: we are using pymysql, which also raises this error on any kind of socket timeout during a read, and it isn't using this server setting. It has its own "read_timeout" setting for the socket, which defaults to None (so, system defaults), and this isn't exposed through oslo.db as of yet (we can add that).

I'd look at the VM disk cache setting first and I'll try to see what ways we might get pymysql's read_timeout to be inserted.

Revision history for this message
Mike Bayer (zzzeek) wrote :

Ah, I remember. Where the database URL is configured, look for the "mysql" URL and add the parameter to the query string:

connection = mysql+pymysql://user:pass@hostname/dbname?read_timeout=60

I'm not sure how zaqar configures this; in a local undercloud I have here, zaqar is not using the SQL transport. I'd assume it's under "sqlalchemy" or something like that in zaqar.conf.
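
For illustration, if the engine is built programmatically rather than from a URL string in a config file, the same option can be passed as a DBAPI connect argument. A minimal sketch, assuming plain SQLAlchemy on top of the PyMySQL driver; the URL below is illustrative, not zaqar's actual configuration:

    # Minimal sketch; the URL is illustrative, not zaqar's actual configuration.
    # connect_args is handed straight to pymysql.connect(), whose read_timeout
    # is a per-socket read timeout in seconds (default None = system default).
    from sqlalchemy import create_engine

    engine = create_engine(
        "mysql+pymysql://user:pass@hostname/dbname",
        connect_args={"read_timeout": 60},
    )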

Revision history for this message
Michele Baldessari (michele) wrote :

Thanks Mike!

The rate at which I am hitting this is fairly low, so I won't be able to say with 100% certainty whether the changes you mention fix it or not.

Can you think of a way to log this failure on the server (or client) side? I am thinking there might be an option to log this exact failure and help us understand better what is going on. The VM is really doing nothing when I get the error, and the error shows up super quickly (I can't really observe any longish timeout), so I wonder if something else is messing with us.

Revision history for this message
Mike Bayer (zzzeek) wrote :

There should be a stack trace within the log files of the service that actually had the problem. If zaqar is the one with the issue, then it would be in /var/log/zaqar/zaqar.log.

Revision history for this message
Mike Bayer (zzzeek) wrote :

So it's valuable to look at the PyMySQL driver to see where it throws this error - these are not necessarily the same conditions that the main MySQL documentation describes, since those refer to the native client libraries.

It is thrown exactly in these places:

In _read_bytes, where there is an IOError or OSError during the read() call. We know the error above is not this one, because in that case it also appends the errno:

            try:
                data = self._rfile.read(num_bytes)
                break
            except (IOError, OSError) as e:
                if e.errno == errno.EINTR:
                    continue
                self._force_close()
                raise err.OperationalError(
                    CR.CR_SERVER_LOST,
                    "Lost connection to MySQL server during query (%s)" % (e,))

Then it is thrown right below, if the MySQL server did not send as much data as the protocol header said it would:

        if len(data) < num_bytes:
            self._force_close()
            raise err.OperationalError(
                CR.CR_SERVER_LOST, "Lost connection to MySQL server during query")

Finally, it is thrown in a case that is documented specifically for MariaDB, when the server is being shut down (but clearly it is still running, because this is data sent from the server):

                if packet_number == 0:
                    # MariaDB sends error packet with seqno==0 when shutdown
                    raise err.OperationalError(
                        CR.CR_SERVER_LOST,
                        "Lost connection to MySQL server during query")

This is all in pure Python, so the stack trace in the log file should show exactly which of these places the error is coming from.

Revision history for this message
Mike Bayer (zzzeek) wrote :

OK, based on the package I see we are using (0.7.9 with some patches) and lining that up with the stack trace we have at http://paste.openstack.org/show/610572/, the error is thrown on line 1008, where MySQL is not sending the expected number of bytes:

        if len(data) < num_bytes:
            raise err.OperationalError(
                2013, "Lost connection to MySQL server during query")

So this would not necessarily be a timeout. It means that when the client calls file.read(N) on the filehandle off the socket, it returns with fewer than N bytes read. N here is determined by the MySQL protocol and is sent within the packet header. This indicates the stream has been shut down prematurely (because otherwise, if the server just wasn't sending enough bytes, the read() call would keep waiting until we hit the timeout condition), so it indicates connectivity is being lost and/or the server is being killed abruptly.
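
As a standalone illustration of that last point (this is not zaqar or pymysql code, just plain sockets): once the peer has closed the connection, a read returns immediately with a short (here empty) result instead of blocking for the full timeout:

    # Standalone sketch, plain sockets only: after the peer closes, recv()
    # returns b'' right away rather than waiting out the timeout -- which is
    # why a short read implies the stream is gone, not that the server is slow.
    import socket

    server = socket.socket()
    server.bind(("127.0.0.1", 0))
    server.listen(1)

    client = socket.create_connection(server.getsockname())
    peer, _ = server.accept()
    peer.close()                      # the server side goes away abruptly

    client.settimeout(5)
    print(repr(client.recv(4)))       # prints b'' immediately, no 5 second wait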

Revision history for this message
Mike Bayer (zzzeek) wrote :

In fact the trace shows this is happening in the call that reads just the packet header itself:

line 971: packet_header = self._read_bytes(4)

That's failing with less data read. I'm going to guess the actual number of bytes read is zero, i.e. the filehandle is already dead by the time we get there.

Revision history for this message
Mike Bayer (zzzeek) wrote :

OK, try this setting in /etc/my.cnf.d/mariadb-server.cnf :

[server]
max_allowed_packet=32M

https://dev.mysql.com/doc/refman/5.7/en/packet-too-large.html

Revision history for this message
Michele Baldessari (michele) wrote :

I changed it on my undercloud. Will report here how it goes.

Revision history for this message
Michele Baldessari (michele) wrote :

For the record the full trace was here:
http://paste.openstack.org/show/610572/

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to instack-undercloud (master)

Fix proposed to branch: master
Review: https://review.openstack.org/467707

Changed in tripleo:
assignee: nobody → Michele Baldessari (michele)
status: Triaged → In Progress
Revision history for this message
Michele Baldessari (michele) wrote :

So still not entirely gold. I got this today:
Started Mistral Workflow tripleo.validations.v1.check_pre_deployment_validations. Execution ID: e1904824-33e7-40a4-9938-8bbab39e0b7d
{u'body': {u'exception': u'(pymysql.err.OperationalError) (2006, "MySQL server has gone away (error(32, \'Broken pipe\'))") [SQL: u\'INSERT INTO `Queues` (project, name, metadata) VALUES (%(project)s, %(name)s, %(metadata)s)\'] [parameters: {\'project\': u\'5ac518c14589482da9d97c4ca4a58b6a\', \'name\': u\'d9eecd49-a283-4bcb-86f9-bfd2a33e9857\', \'metadata\': bytearray(b\'{}\')}]', u'error': u'Unexpected error.'}, u'headers': {u'status': 500}, u'request': {u'action': u'queue_create', u'body': {u'queue_name': u'd9eecd49-a283-4bcb-86f9-bfd2a33e9857'}, u'api': u'v2', u'headers': {u'Client-ID': u'a355b207-54e7-4fcf-8468-94175313ba5f', u'X-Project-ID': u'5ac518c14589482da9d97c4ca4a58b6a'}}}

And my setting was already changed server-side:
MariaDB [(none)]> show variables like 'max_allowed_packet';
+--------------------+----------+
| Variable_name      | Value    |
+--------------------+----------+
| max_allowed_packet | 33554432 |
+--------------------+----------+
1 row in set (0.00 sec)

Maybe the size is not enough?

The zaqar.log for today's failure is:
2017-05-25 08:16:57.149 23412 ERROR zaqar.transport.wsgi.driver tmp = target(*args, **kwargs)
2017-05-25 08:16:57.149 23412 ERROR zaqar.transport.wsgi.driver File "/usr/lib/python2.7/site-packages/zaqar/storage/sqlalchemy/queues.py", line 66, in get_metadata
2017-05-25 08:16:57.149 23412 ERROR zaqar.transport.wsgi.driver queue = self.driver.run(sel).fetchone()
2017-05-25 08:16:57.149 23412 ERROR zaqar.transport.wsgi.driver File "/usr/lib/python2.7/site-packages/zaqar/storage/sqlalchemy/driver.py", line 81, in run
2017-05-25 08:16:57.149 23412 ERROR zaqar.transport.wsgi.driver return self.connection.execute(*args, **kwargs)
2017-05-25 08:16:57.149 23412 ERROR zaqar.transport.wsgi.driver File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 1046, in execute
2017-05-25 08:16:57.149 23412 ERROR zaqar.transport.wsgi.driver bind, close_with_result=True).execute(clause, params or {})
2017-05-25 08:16:57.149 23412 ERROR zaqar.transport.wsgi.driver File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 914, in execute
2017-05-25 08:16:57.149 23412 ERROR zaqar.transport.wsgi.driver return meth(self, multiparams, params)
2017-05-25 08:16:57.149 23412 ERROR zaqar.transport.wsgi.driver File "/usr/lib64/python2.7/site-packages/sqlalchemy/sql/elements.py", line 323, in _execute_on_connection ...


Revision history for this message
Mike Bayer (zzzeek) wrote :

Ah ha! Now that's a different condition. Note the error:

(2006, "MySQL server has gone away (error(32, \'Broken pipe\'))")

And this time it's failing on the "write" side. It's the same table and columns; however, we can see from the error that this is not a large set of data.

The code that is crashing here is:

    def _write_bytes(self, data):
        self._sock.settimeout(self._write_timeout)
        try:
            self._sock.sendall(data)
        except IOError as e:
            raise err.OperationalError(2006, "MySQL server has gone away (%r)" % (e,))

Unfortunately I'm not finding any other option besides max_allowed_packet in the discussion of this. But per the error, the actual INSERT statement is not inserting a large row:

SQL: u\'INSERT INTO `Queues` (project, name, metadata) VALUES (%(project)s, %(name)s, %(metadata)s)\'] [parameters: {\'project\': u\'5ac518c14589482da9d97c4ca4a58b6a\', \'name\': u\'d9eecd49-a283-4bcb-86f9-bfd2a33e9857\', \'metadata\': bytearray(b\'{}\')}]'

that's an empty bytearray.

I would ask these questions:

1. Does this error *always* involve zaqar and this `Queues` table? I.e. it is never any other table, never any other OpenStack application?

2. Does this error happen on completely different OpenStack installs elsewhere? Or, otherwise, is it possible that this `Queues` table has some kind of problem?

3. Do you have innodb_file_per_table turned on or off? What happens if we change the setting to the opposite of what it is? I'm wondering if there's some strange issue with the `Queues` table itself.

4. Has this been reported to the Zaqar project already? Do other people observe this with Zaqar?

5. If you change the Zaqar backend to not be the "SQL" backend (on my undercloud here Zaqar does not seem to use the database), do all problems go away?

The next step here, if it were me, would be to turn on SQL logging for Zaqar and watch the whole SQL conversation happen - however, that's a lot of logging, and it may even indirectly make the error go away if it is a race-oriented condition, because it will slow the app down a bit.
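
For reference, one way to capture that conversation is through Python logging against SQLAlchemy's standard logger names, assuming zaqar's process logging configuration can be adjusted; these are SQLAlchemy loggers, not zaqar options:

    # Sketch only: SQLAlchemy's standard logger names; assumes the process
    # logging configuration can be adjusted (these are not zaqar options).
    import logging

    logging.basicConfig()
    logging.getLogger("sqlalchemy.engine").setLevel(logging.INFO)   # SQL statements + parameters
    logging.getLogger("sqlalchemy.pool").setLevel(logging.DEBUG)    # checkouts / invalidations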

I'd be curious at this point however if the Zaqar project has seen this problem.

Revision history for this message
Michele Baldessari (michele) wrote :

I am adding zaqar to this bug as it does seem to be something very zaqar-specific happening here, and we could do with a bit of help from the zaqar folks.

Changed in tripleo:
status: In Progress → Triaged
Revision history for this message
Mike Bayer (zzzeek) wrote :

The other day I took a look into zaqar and as we see in https://review.openstack.org/#/c/469916/ I simplified their connectivity pattern a bit, noting however that Zaqar made the curious design choice to *not* use oslo.db. Today I noticed that this even means I can't turn on SQL logging without hand-patching the code; doing that on an undercloud, I failed to restart the service correctly, ran an overcloud deploy, and got the error immediately, which made me realize why we are getting this error.

The very likely cause here is that Zaqar is not properly handling stale connections. MySQL will kill any connections that are idle for more than eight hours, and SQLAlchemy's connection pool won't do anything about that unless you also tell it to discard connections that are older than a certain time. Going beyond that, best practice in modern use is to "pre-ping" the server upon every connection checkout, and recycle the connection if it's shown to be stale. None of this is automatic, and I usually never think to look for it because oslo.db solved all of these issues a long time ago; they are in general very basic SQLAlchemy-related tasks, since SQLAlchemy doesn't assume these settings.

The short answer here is that zaqar needs to implement "pre-ping", which with the current SQLAlchemy release looks like http://docs.sqlalchemy.org/en/rel_1_1/core/pooling.html#disconnect-handling-pessimistic, or, much more easily, to replace "from sqlalchemy import create_engine" with "from oslo_db.sqlalchemy.engines import create_engine", which adds this in for free.
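
A rough sketch of both options follows (the connection URL is illustrative, not zaqar's actual configuration; the second variant is condensed from the recipe linked above):

    # Option 1: let oslo.db build the engine; it applies stale-connection
    # detection, connection recycling and the other defaults zaqar is
    # currently missing. (URL is illustrative.)
    from oslo_db.sqlalchemy import engines

    engine = engines.create_engine("mysql+pymysql://zaqar:secret@undercloud/zaqar")

    # Option 2: plain SQLAlchemy with the documented "pessimistic" pre-ping,
    # condensed from the recipe linked above.
    from sqlalchemy import create_engine, event, exc, select

    engine = create_engine("mysql+pymysql://zaqar:secret@undercloud/zaqar")

    @event.listens_for(engine, "engine_connect")
    def ping_connection(connection, branch):
        if branch:
            return  # sub-connection of an already-pinged connection
        save = connection.should_close_with_result
        connection.should_close_with_result = False
        try:
            # cheap round trip; raises DBAPIError if the pooled connection is stale
            connection.scalar(select([1]))
        except exc.DBAPIError as err:
            if err.connection_invalidated:
                # the dead connection was invalidated; retrying transparently
                # establishes a fresh one and refreshes the pool
                connection.scalar(select([1]))
            else:
                raise
        finally:
            connection.should_close_with_result = save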

Changed in tripleo:
milestone: pike-2 → pike-3
Revision history for this message
Mike Bayer (zzzeek) wrote :

After navigating the learning curve of how to get a keystone token and make a zaqar request, I can confirm the error occurs when the connection is killed from the server side, or when the server has been restarted. If the connection was killed, as we'd get with the 8-hour timeout, we get the "Lost connection to MySQL server" error, and if we just stop the database, we get "MySQL server has gone away".

# STEP 1 - do a successful zaqar request - it will have a new database connection

(undercloud) [stack@undercloud ~]$ curl -i -X PUT https://192.168.24.2:13888/v2/queues/newqueue -H "Content-type: application/json" -H "Client-ID: e58668
fc-26eb-11e3-8270-5b3128d43830" -H "X-Auth-Token: gAAAAABZQBEG1WFsSChhqmi-EzZrGfV_v7tWM_vfVL2jB_BgkXyM8uaTYX_Dg2eML8EuItTBtuUyLUUmBV_n72HuVFtszDUCw0AY1Wh
lqO6FyVehllm9SlgXGzJ5Ai8dkJemgud9UkaplAM-ffKfN5sZCjoPjfDigwdRuEVkgbtzof84p3t_txM" -H "Accept: application/json" -H "X-Project-Id: 34ce252c76ae4e3b9c2a936
0a3c6eaf0"
HTTP/1.1 204 No Content
Date: Tue, 13 Jun 2017 16:32:05 GMT
Server: Apache
location: /v2/queues/newqueue
Content-Type: text/plain

# STEP 2 - log into mysql and kill off all zaqar processes

(undercloud) [stack@undercloud ~]$ mysql -u root
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 70
Server version: 10.1.20-MariaDB MariaDB Server

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> select * from information_schema.processlist where user='zaqar';
+----+-------+--------------------+-------+---------+------+-------+------+-----------+-------+-----------+----------+-------------+---------------+----------+-------------+-------+
| ID | USER  | HOST               | DB    | COMMAND | TIME | STATE | INFO | TIME_MS   | STAGE | MAX_STAGE | PROGRESS | MEMORY_USED | EXAMINED_ROWS | QUERY_ID | INFO_BINARY | TID   |
+----+-------+--------------------+-------+---------+------+-------+------+-----------+-------+-----------+----------+-------------+---------------+----------+-------------+-------+
| 68 | zaqar | 192.168.24.1:50696 | zaqar | Sleep   | 15   |       | NULL | 15097.283 | 0     | 0         | 0.000    | 69480       | 0             | 7960     | NULL        | 17399 |
+----+-------+--------------------+-------+---------+------+-------+------+-----------+-------+-----------+----------+-------------+---------------+----------+-------------+-------+
1 row in set (0.00 sec)
MariaDB [(none)]> kill connection 68;
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> ^DBye

# STEP 3 - do the same request. 503 error

(undercloud) [stack@undercloud ~]$ curl -i -X PUT https://192.168.24.2:13888/v2/queues/newqueue -H "Content-type: application/json" -H "Client-ID: e58668
fc-26eb-11e3-8270-5b3128d43830" -H "X-Auth-Token: gAAAAABZQBEG1WFsSChhqmi-EzZrGfV_v7tWM_vfVL2jB_BgkXyM8uaTYX_Dg2eML8EuItTBtuUyLUUmBV_n72HuVFtszDUCw0AY1Wh
lqO6FyVehllm9SlgXGzJ5Ai8dkJemgud9UkaplAM-ffKfN5sZCjoPjfDigwdRuEVkgbtzof84p3t_txM" -H "Accept: application/json" -H "...


Changed in tripleo:
status: Triaged → In Progress
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Change abandoned on instack-undercloud (master)

Change abandoned by Michele Baldessari (<email address hidden>) on branch: master
Review: https://review.openstack.org/467707
Reason: zzzeek is working on a proper fix in zaqar's db code

Changed in tripleo:
milestone: pike-3 → pike-rc1
Changed in tripleo:
milestone: pike-rc1 → queens-1
Revision history for this message
Thomas Herve (therve) wrote :

I think it has been fixed by https://review.openstack.org/#/c/473612/

Changed in zaqar:
status: New → Fix Released
Changed in tripleo:
status: In Progress → Fix Released
Changed in zaqar:
importance: Undecided → Medium
