SSHTimeout in tempest trying to verify that computes are actually functioning

Bug #1298472 reported by Ihar Hrachyshka
Affects                    Status         Importance   Assigned to      Milestone
Cinder                     Invalid        Undecided    Unassigned
OpenStack Compute (nova)   Fix Released   Critical     Unassigned
tempest                    Invalid        Undecided    Matt Riedemann

Bug Description

tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern.test_volume_boot_pattern failed at least once with the following traceback when trying to connect via SSH:

Traceback (most recent call last):
  File "tempest/scenario/test_volume_boot_pattern.py", line 156, in test_volume_boot_pattern
    ssh_client = self._ssh_to_server(instance_from_snapshot, keypair)
  File "tempest/scenario/test_volume_boot_pattern.py", line 100, in _ssh_to_server
    private_key=keypair.private_key)
  File "tempest/scenario/manager.py", line 466, in get_remote_client
    return RemoteClient(ip, username, pkey=private_key)
  File "tempest/common/utils/linux/remote_client.py", line 47, in __init__
    if not self.ssh_client.test_connection_auth():
  File "tempest/common/ssh.py", line 149, in test_connection_auth
    connection = self._get_ssh_connection()
  File "tempest/common/ssh.py", line 65, in _get_ssh_connection
    timeout=self.channel_timeout, pkey=self.pkey)
  File "/usr/local/lib/python2.7/dist-packages/paramiko/client.py", line 236, in connect
    retry_on_signal(lambda: sock.connect(addr))
  File "/usr/local/lib/python2.7/dist-packages/paramiko/util.py", line 279, in retry_on_signal
    return function()
  File "/usr/local/lib/python2.7/dist-packages/paramiko/client.py", line 236, in <lambda>
    retry_on_signal(lambda: sock.connect(addr))
  File "/usr/lib/python2.7/socket.py", line 224, in meth
    return getattr(self._sock,name)(*args)
  File "/usr/local/lib/python2.7/dist-packages/fixtures/_fixtures/timeout.py", line 52, in signal_handler
    raise TimeoutException()
TimeoutException

Logs can be found at: http://logs.openstack.org/86/82786/1/gate/gate-tempest-dsvm-neutron-pg/1eaadd0/
The review that triggered the issue is: https://review.openstack.org/#/c/82786/

Matt Riedemann (mriedem)
tags: added: gate-failure testing volumes
Revision history for this message
Matt Riedemann (mriedem) wrote :

The scenario test does the following:

1. Creates a security group rule in the default security group to allow ssh and ping.
2. Creates a volume from the configured image.
3. Boots instance A from the volume.
4. Writes some random text to a tmp file on instance A which is written to the backing volume.
5. Deletes instance A.
6. Create instance B from the volume.
7. Uses ssh to cat out the contents of the file written in step 4 using instance B and verifies the text is the same.
8. Makes a snapshot of the volume.
9. Creates a volume from the snapshot image.
10. Create instance C from the volume created from the snapshot.
11. Uses instance C to check the content of the text file written to the original volume in step 4.

We fail in step 11 with the ssh timeout; a rough sketch of what that SSH verification step amounts to is below.
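
This is not tempest code, just a minimal standalone sketch of what the verification in steps 7 and 11 boils down to: connect over SSH with a retry deadline, cat the file written in step 4, and compare the contents. The host, user, key path, and file path are placeholders; tempest's RemoteClient/ssh.Client wrap the same paramiko calls, and the TimeoutException in the traceback above fires when the retry deadline expires.

    import socket
    import time

    import paramiko


    def verify_file_contents(host, username, key_file, path, expected,
                             deadline=300, retry_interval=5):
        """Connect over SSH, retrying until `deadline` seconds pass, then
        compare the remote file's contents with the expected text."""
        end = time.time() + deadline
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        while True:
            try:
                client.connect(host, username=username,
                               key_filename=key_file, timeout=10)
                break
            except (socket.error, paramiko.SSHException):
                if time.time() > end:
                    # This is roughly the point where tempest gives up and
                    # the test fails with the SSH timeout seen above.
                    raise
                time.sleep(retry_interval)
        try:
            _, stdout, _ = client.exec_command("cat %s" % path)
            return stdout.read().decode().strip() == expected
        finally:
            client.close()

    # Hypothetical usage against the instance booted from the snapshot volume:
    # verify_file_contents("172.24.4.1", "cirros", "id_rsa",
    #                      "/tmp/text_file", "text written in step 4")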

Revision history for this message
Matt Riedemann (mriedem) wrote :

Note that TestVolumeBootPattern is for cinder v1 and there is a TestVolumeBootPatternV2 for cinder v2 to handle the different block device mapping format for cinder v2.

There is also some cleanup at the end of the test case, but that should probably be done with addCleanup calls instead; otherwise, if the test fails partway through, the cleanups never run on tear down and we're potentially leaking resources (a generic sketch of the addCleanup pattern follows the snippet):

        # NOTE(gfidente): ensure resources are in clean state for
        # deletion operations to succeed
        self._stop_instances([instance_2nd, instance_from_snapshot])
        self._detach_volumes([volume_origin, volume])
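
A generic unittest sketch of that addCleanup pattern, not tempest's actual helpers; the list append/remove calls here just stand in for the real nova/cinder create and delete calls:

    import unittest


    class VolumeBootPatternCleanupSketch(unittest.TestCase):
        """Register cleanups at creation time instead of at the end.

        addCleanup callbacks run in LIFO order during tearDown even when
        the test body fails, so earlier-created resources are not leaked
        the way trailing delete calls at the end of the method can be.
        """

        def setUp(self):
            super(VolumeBootPatternCleanupSketch, self).setUp()
            self.created = []

        def _create_resource(self, name):
            # Stand-in for a nova/cinder create call.
            self.created.append(name)
            # Register the matching delete immediately, not at the end.
            self.addCleanup(self.created.remove, name)
            return name

        def test_cleanups_registered_at_creation(self):
            self._create_resource("volume_origin")
            self._create_resource("instance_from_snapshot")
            # Even if an assertion below failed, both cleanups still run.
            self.assertEqual(["volume_origin", "instance_from_snapshot"],
                             self.created)


    if __name__ == "__main__":
        unittest.main()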

Revision history for this message
Matt Riedemann (mriedem) wrote :

Looks like mtreinish has an old patch to try and fix the cleanup issues in the scenario tests:

https://review.openstack.org/#/c/62101/

Revision history for this message
Matt Riedemann (mriedem) wrote :
Revision history for this message
Matt Riedemann (mriedem) wrote :

79 hits in 7 days, all failures, check and gate queues, started spiking on 6/5.

Revision history for this message
Matt Riedemann (mriedem) wrote :
Revision history for this message
Joe Gordon (jogo) wrote :

First hit in the gate queue: 2014-06-05T22:48:32.806+00:00

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to tempest (master)

Related fix proposed to branch: master
Review: https://review.openstack.org/98497

Changed in tempest:
assignee: nobody → Matt Riedemann (mriedem)
status: New → In Progress
Revision history for this message
Matt Riedemann (mriedem) wrote : Re: tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern.test_volume_boot_pattern fails with TimeoutException on SSH connection
Changed in nova:
importance: Undecided → Critical
Revision history for this message
Matt Riedemann (mriedem) wrote :
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to tempest (master)

Related fix proposed to branch: master
Review: https://review.openstack.org/98632

Matt Riedemann (mriedem)
summary: - tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern.test_volume_boot_pattern
- fails with TimeoutException on SSH connection
+ SSHTimeout in tempest scenario tests using nova-network
tags: added: nova-network
removed: volumes
Changed in tempest:
importance: Undecided → Critical
Revision history for this message
Matt Riedemann (mriedem) wrote : Re: SSHTimeout in tempest scenario tests using nova-network

The revert of this change has passed Jenkins 4 times now without hitting the ssh timeout failure:

https://review.openstack.org/#/c/97842/

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to tempest (master)

Reviewed: https://review.openstack.org/98632
Committed: https://git.openstack.org/cgit/openstack/tempest/commit/?id=2da58590dab4bc7e56928edf08d02a96fb933ee7
Submitter: Jenkins
Branch: master

commit 2da58590dab4bc7e56928edf08d02a96fb933ee7
Author: Matt Riedemann <email address hidden>
Date: Sun Jun 8 12:46:02 2014 +0000

    Revert "don't reuse servers in test_server_actions"

    This reverts commit b7f648b31c9044e3a7c25c0fb1dcc31a81e0da9f.

    Change-Id: I31672c8b131fec4a916c4d703f6819c22d605876
    Related-Bug: #1298472

Revision history for this message
Attila Fazekas (afazekas) wrote : Re: SSHTimeout in tempest scenario tests using nova-network
Revision history for this message
Attila Fazekas (afazekas) wrote :
Revision history for this message
Attila Fazekas (afazekas) wrote :

Actually it was retried for 3-4 minutes, so 98212 wouldn't help.

Revision history for this message
Matt Riedemann (mriedem) wrote :

The revert in tempest seems to have cleared this up for now.

Changed in tempest:
status: In Progress → Fix Committed
Changed in nova:
importance: Critical → Undecided
Revision history for this message
Matt Riedemann (mriedem) wrote :

Related nova change to improve nova-network logging:

https://review.openstack.org/#/c/99002/

Revision history for this message
Matt Riedemann (mriedem) wrote :

Wondering if there is something racy in the ec2 boto third-party tests, or if the ec2 API is not doing a good job of cleaning up networks:

http://goo.gl/6f1dfw

Revision history for this message
cyli (cyli) wrote :

Seems to be the same problem with check-grenade-dsvm. It fails at step 11 because SSH times out with 'no route to host': http://logs.openstack.org/70/99470/5/check/check-grenade-dsvm/10135fa/console.html.gz

Revision history for this message
cyli (cyli) wrote :
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to nova (master)

Reviewed: https://review.openstack.org/99002
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=80c5c87fa24e7007b3d372a0028a5148cd06c66b
Submitter: Jenkins
Branch: master

commit 80c5c87fa24e7007b3d372a0028a5148cd06c66b
Author: Michael Still <email address hidden>
Date: Tue Jun 10 19:17:00 2014 +1000

    Add more logging to nova-network.

    As requested to help debug gate issues.

    Related-Bug: #1298472

    Change-Id: I5640df721345fe9a878c7c6f8e1c13cfed484112

Revision history for this message
Matt Riedemann (mriedem) wrote : Re: SSHTimeout in tempest scenario tests using nova-network

Going back to the ec2 tests and comment 19, dims pointed out that the boto package has some connection pooling, and there might be bugs there causing issues with networking.

I noticed that boto 2.29.1 was released on 5/30 and that's what we're running with. There was a connection-pooling-related change in that release, and I'm wondering if it caused a regression:

https://github.com/boto/boto/commit/fb3a7b407488c8b2374502d10a90d431daf0aef9

The various duplicate bugs for this ssh timeout failure in tempest showed up around 6/6, which is why we reverted the tempest change that added more server load to the runs. But maybe that change was just exposing a limitation in the boto connection pooling code? I'm not really sure, but it's a theory at least.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to tempest (master)

Related fix proposed to branch: master
Review: https://review.openstack.org/102633

Revision history for this message
jiang, yunhong (yunhong-jiang) wrote : Re: SSHTimeout in tempest scenario tests using nova-network

I'm not sure if I'm hitting the same issue, since according to the status it's 'Fix Committed'.

The error on my side is:
2014-07-02 00:23:17.219 | ==============================
2014-07-02 00:23:17.219 | Failed 1 tests - output below:
2014-07-02 00:23:17.219 | ==============================
2014-07-02 00:23:17.219 |
2014-07-02 00:23:17.219 | tempest.scenario.test_minimum_basic.TestMinimumBasicScenario.test_minimum_basic_scenario[compute,image,network,volume]
2014-07-02 00:23:17.219 | ----------------------------------------------------------------------------------------------------------------------
2014-07-02 00:23:17.220 |
2014-07-02 00:23:17.220 | Captured traceback:
2014-07-02 00:23:17.220 | ~~~~~~~~~~~~~~~~~~~
2014-07-02 00:23:17.220 | Traceback (most recent call last):
2014-07-02 00:23:17.220 | File "tempest/test.py", line 128, in wrapper
2014-07-02 00:23:17.220 | return f(self, *func_args, **func_kwargs)
2014-07-02 00:23:17.220 | File "tempest/scenario/test_minimum_basic.py", line 138, in test_minimum_basic_scenario
2014-07-02 00:23:17.221 | self.ssh_to_server()
2014-07-02 00:23:17.221 | File "tempest/scenario/test_minimum_basic.py", line 95, in ssh_to_server
2014-07-02 00:23:17.221 | self.linux_client = self.get_remote_client(self.floating_ip.ip)
2014-07-02 00:23:17.221 | File "tempest/scenario/manager.py", line 415, in get_remote_client
2014-07-02 00:23:17.221 | linux_client.validate_authentication()
2014-07-02 00:23:17.221 | File "tempest/common/utils/linux/remote_client.py", line 53, in validate_authentication
2014-07-02 00:23:17.221 | self.ssh_client.test_connection_auth()
2014-07-02 00:23:17.222 | File "tempest/common/ssh.py", line 150, in test_connection_auth
2014-07-02 00:23:17.222 | connection = self._get_ssh_connection()
2014-07-02 00:23:17.222 | File "tempest/common/ssh.py", line 87, in _get_ssh_connection
2014-07-02 00:23:17.222 | password=self.password)
2014-07-02 00:23:17.222 | SSHTimeout: Connection to the 172.24.4.1 via SSH timed out.
2014-07-02 00:23:17.222 | User: cirros, Password: None
2014-07-02 00:23:17.223 |
2014-07-02 00:23:17.223 |

And the log is at http://logs.openstack.org/55/101355/4/check/check-grenade-dsvm-partial-ncpu/53a2af3/console.html

Revision history for this message
Sean Dague (sdague) wrote :

We've disabled the ec2 tests in smoke runs, and are still seeing this in grenade runs, so I think boto is probably not to blame. This is a more fundamental issue.

Changed in nova:
importance: Undecided → Critical
Revision history for this message
Parashuram Hallur (hallur-p-t) wrote :

I'm also seeing a TimeoutException; it looks in line with what is being observed here:

11:47:06 tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern.test_volume_boot_pattern[compute,image,volume]
11:47:06 --------------------------------------------------------------------------------------------------------------
11:47:06
11:47:06 Captured traceback:
11:47:06 ~~~~~~~~~~~~~~~~~~~
11:47:06 Traceback (most recent call last):
11:47:06 File "tempest/test.py", line 128, in wrapper
11:47:06 return f(self, *func_args, **func_kwargs)
11:47:06 File "tempest/scenario/test_volume_boot_pattern.py", line 157, in test_volume_boot_pattern
11:47:06 keypair)
11:47:06 File "tempest/scenario/test_volume_boot_pattern.py", line 64, in _boot_instance_from_volume
11:47:06 return self.create_server(image='', create_kwargs=create_kwargs)
11:47:06 File "tempest/scenario/manager.py", line 348, in create_server
11:47:06 self.status_timeout(client.servers, server.id, 'ACTIVE')
11:47:06 File "tempest/scenario/manager.py", line 192, in status_timeout
11:47:06 not_found_exception=not_found_exception)
11:47:06 File "tempest/scenario/manager.py", line 252, in _status_timeout
11:47:06 CONF.compute.build_interval):
11:47:06 File "tempest/test.py", line 625, in call_until_true
11:47:06 time.sleep(sleep_for)
11:47:06 File "/usr/local/lib/python2.7/dist-packages/fixtures/_fixtures/timeout.py", line 52, in signal_handler
11:47:06 raise TimeoutException()
11:47:06 TimeoutException

I don't think this timeout issue is completely fixed. It happens only during the tempest run from Jenkins; if this test is run manually, it runs through with the status "OK".

Revision history for this message
Sergey Skripnick (eyerediskin) wrote :

I have the same as jiang yunhong:

http://logs.openstack.org/23/104123/2/check/check-grenade-dsvm-partial-ncpu/5e39806/console.html#_2014-07-02_10_17_43_290

2014-07-02 10:17:43.291 | SSHTimeout: Connection to the 172.24.4.1 via SSH timed out.
2014-07-02 10:17:43.291 | User: cirros, Password: None

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to tempest (master)

Related fix proposed to branch: master
Review: https://review.openstack.org/104220

Revision history for this message
Kashyap Chamarthy (kashyapc) wrote : Re: SSHTimeout in tempest scenario tests using nova-network
Revision history for this message
Kashyap Chamarthy (kashyapc) wrote :

Err, please disregard my above comment. It does not belong here; I was supposed to add it to a different bug. (Too many bug windows.) Sorry for the noise.

Revision history for this message
Matt Riedemann (mriedem) wrote :

Sounds like bug 1182131 is probably the root issue of the race.

Revision history for this message
Parashuram Hallur (hallur-p-t) wrote :

Matt,
Bug 1182131 is already in 'Fix Committed' state. Are you saying it may not have resolved the issue completely?

Revision history for this message
Sean Dague (sdague) wrote :

I'm continuing to think that this is a firewall synchronization issue. In this log (a failure that happened after Dan's KeyError change landed): http://logs.openstack.org/21/102721/1/check/check-grenade-dsvm/c454f85/console.html#_2014-07-03_09_18_20_351

We have the following in the iptables rules:

2014-07-03 09:18:20.351 | -A nova-compute-local -d 10.1.0.4/32 -c 33 3504 -j nova-compute-inst-50
2014-07-03 09:18:20.351 | -A nova-compute-local -d 10.1.0.2/32 -c 4 1360 -j nova-compute-inst-32
2014-07-03 09:18:20.351 | -A nova-compute-local -d 10.1.0.4/32 -c 0 0 -j nova-compute-inst-61

I believe that nova-compute-inst-61 is the guest that is not working (direct mapping is all kinds of fun). However, we see two jump rules for the same destination, and only one of them is the one we want. The earlier failure for ssh had a similar overlap between rules 50 and 58.
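
A minimal sketch (not from this thread) of the check being done by eye here: scan an iptables dump for destination IPs that have more than one nova-compute-inst jump rule. The sample rules are copied from the excerpt above with the counters trimmed.

    import re
    from collections import defaultdict

    RULES = """
    -A nova-compute-local -d 10.1.0.4/32 -j nova-compute-inst-50
    -A nova-compute-local -d 10.1.0.2/32 -j nova-compute-inst-32
    -A nova-compute-local -d 10.1.0.4/32 -j nova-compute-inst-61
    """


    def find_duplicate_jumps(rules_text):
        """Return {destination: [jump targets]} for destinations that
        have more than one nova-compute-inst jump rule."""
        jumps = defaultdict(list)
        pattern = re.compile(r"-d\s+(\S+)\s.*-j\s+(nova-compute-inst-\d+)")
        for line in rules_text.splitlines():
            match = pattern.search(line)
            if match:
                dest, target = match.groups()
                jumps[dest].append(target)
        return dict((d, t) for d, t in jumps.items() if len(t) > 1)


    if __name__ == "__main__":
        for dest, targets in find_duplicate_jumps(RULES).items():
            print("duplicate jump rules for %s: %s" % (dest, ", ".join(targets)))
        # Prints: duplicate jump rules for 10.1.0.4/32: nova-compute-inst-50, nova-compute-inst-61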

Revision history for this message
Sean Dague (sdague) wrote :

Another instance here, where there is also a doubled-up -d => -j rule for the IP that failed to work: http://logs.openstack.org/25/103325/1/check/check-tempest-dsvm-full/7c03d85/console.html#_2014-07-03_07_41_33_661

Revision history for this message
Sean Dague (sdague) wrote :
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to tempest (master)

Reviewed: https://review.openstack.org/104220
Committed: https://git.openstack.org/cgit/openstack/tempest/commit/?id=8976c9dd664f3bab831c1add283c34fc01c034f6
Submitter: Jenkins
Branch: master

commit 8976c9dd664f3bab831c1add283c34fc01c034f6
Author: Sean Dague <email address hidden>
Date: Wed Jul 2 11:12:26 2014 -0400

    update iptables rules for more useful debugging

    As discussed with dansmith in #openstack-nova,
    iptables --line-numbers -L -nv is a more useful representation of
    the iptables dump for determining network issues.

    Related-Bug: #1298472

    Change-Id: Ibae97f7a0cf29105e3601eca8ce24b8271a3a13d

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to nova (master)

Related fix proposed to branch: master
Review: https://review.openstack.org/104530

Revision history for this message
Sean Dague (sdague) wrote : Re: SSHTimeout in tempest scenario tests using nova-network

As the current indicators trend towards being related to iptables rules, I think we're able to cross cinder off the list of possible causes.

Changed in nova:
status: New → Confirmed
Changed in cinder:
status: New → Invalid
summary: - SSHTimeout in tempest scenario tests using nova-network
+ SSHTimeout in tempest trying to verify that computes are actually
+ functioning
Revision history for this message
Sean Dague (sdague) wrote :

Also, from what we know, this happens with both n-net and neutron, and the failure rates aren't dramatically different between them, so the network backend is not the cause.

Revision history for this message
Matt Riedemann (mriedem) wrote :

Bug 1338844 might be related to, or caused by, the recent related nova change for this bug; the trends in logstash seem to indicate that, at least.

Musaab Jameel (mosaabjm)
Changed in tempest:
status: Fix Committed → Incomplete
Revision history for this message
Joe Gordon (jogo) wrote :

We are now seeing this bug mainly on icehouse code, so it looks like this has been fixed on master and needs to be backported. Not sure which patch fixed this bug though.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to nova (master)

Reviewed: https://review.openstack.org/104530
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=d7ce7cccbcd98aa17515d9fd449c88807cb6f0bd
Submitter: Jenkins
Branch: master

commit d7ce7cccbcd98aa17515d9fd449c88807cb6f0bd
Author: Sean Dague <email address hidden>
Date: Thu Jul 3 08:00:39 2014 -0400

    change the firewall debugging for clarity

    When we are building rules ensure we log the instance['id'] so
    we can actually correlate the iptables output to UUID for the
    instance.

    Also bundle up the security group to iptables translation to a
    final view of the world instead of the piecemeal rule at a time
    view.

    Display what rules are being skipped in the add process, as the
    skips seem to happen a lot. If this is completely normal we should
    probably delete the bit entirely at some later point.

    Related-Bug: #1298472

    Change-Id: I0e90c3af9bf908b733ed895ad7c204b0a95ef786

Revision history for this message
Dan Smith (danms) wrote :

For posterity, here are the reviews that didn't get reported here and that seemed to help:

For master:
https://review.openstack.org/104581
https://review.openstack.org/104325

For icehouse:
https://review.openstack.org/104335
https://review.openstack.org/106792

Revision history for this message
Matt Riedemann (mriedem) wrote :

This removes the elastic-recheck query: https://review.openstack.org/#/c/107839/

Changed in nova:
status: Confirmed → Fix Committed
milestone: none → juno-2
Revision history for this message
Zhikun Liu (zhikunliu) wrote :

Found this problem in tempest.scenario.test_minimum_basic.TestMinimumBasicScenario.test_minimum_basic_scenario

router and iptables: http://logs.openstack.org/93/100193/9/check/check-grenade-dsvm-partial-ncpu/b3603c4/console.html#_2014-07-21_17_55_38_683

Changed in nova:
status: Fix Committed → Fix Released
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Change abandoned on tempest (master)

Change abandoned by Matt Riedemann (<email address hidden>) on branch: master
Review: https://review.openstack.org/98497

Revision history for this message
Yaniv Kaul (yaniv-kaul) wrote :

I'm quite sure that in some cases the problem is that the VM did not boot: for some reason it was trying to boot via PXE, having failed to boot from its disk. This of course caused it not to respond via SSH.
(I'm testing Cinder block storage with Icehouse.)

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to tempest (master)

Related fix proposed to branch: master
Review: https://review.openstack.org/117520

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Change abandoned on tempest (master)

Change abandoned by Sean Dague (<email address hidden>) on branch: master
Review: https://review.openstack.org/102633

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to tempest (master)

Related fix proposed to branch: master
Review: https://review.openstack.org/123477

Thierry Carrez (ttx)
Changed in nova:
milestone: juno-2 → 2014.2
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Change abandoned on tempest (master)

Change abandoned by Sean Dague (<email address hidden>) on branch: master
Review: https://review.openstack.org/123477
Reason: This review is > 4 weeks without comment, and failed Jenkins the last time it was checked. We are abandoning this for now. Feel free to reactivate the review by pressing the restore button and leaving a 'recheck' comment to get fresh test results.

Revision history for this message
Chet Burgess (cfb-n) wrote :

So it looks like I hit this or a version of this today with https://review.openstack.org/#/c/136477/4

Specifically, I'm seeing the following:

http://logs.openstack.org/77/136477/4/gate/gate-grenade-dsvm-partial-ncpu/8bd3974/logs/grenade.sh.txt.gz

Revision history for this message
Lianhao Lu (lianhao-lu) wrote :
Revision history for this message
Silvan Kaiser (2-silvan) wrote :

I am hitting this in the tempest volume tests; an example log is here (in the lower third of the log): http://176.9.127.22:8081/refs-changes-35-155735-231/console.log.out

Changed in tempest:
status: Incomplete → New
Revision history for this message
Attila Fazekas (afazekas) wrote :

What is the actionable item on the tempest side?

IMHO tempest should be removed from the affected component list.

Changed in tempest:
status: New → Incomplete
Revision history for this message
Dmitry Guryanov (dguryanov) wrote :

I think the actionable item is to investigate what the problem is: is it in the product or in tempest? Our vzstorage CI is constantly hitting this bug. It can also be reproduced in several hours by running this test case in a loop.

I did some investigation but haven't reached any conclusions. There are no problems with the network or boot, and the virtual machine becomes accessible via ssh after the test.

There is only one theoretical problem with this test case on remotefs-based cinder drivers: snapshots are not safe because the qemu guest agent is missing from the cirros image. But since the VM log shows that the VM booted successfully, that is not the reason for the failures.

ZhaoHangbo (497492840-9)
information type: Public → Public Security
information type: Public Security → Private Security
Jeremy Stanley (fungi)
information type: Private Security → Public
Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

Change abandoned by Ken'ichi Ohmichi (<email address hidden>) on branch: master
Review: https://review.openstack.org/117520
Reason: As http://lists.openstack.org/pipermail/openstack-dev/2016-May/096204.html , we start abandoning patches have gotten old without any updating after negative feedback. Please restore if necessary to restart this again.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

Change abandoned by Ken'ichi Ohmichi (<email address hidden>) on branch: master
Review: https://review.openstack.org/123477
Reason: As http://lists.openstack.org/pipermail/openstack-dev/2016-May/096204.html , we start abandoning patches have gotten old without any updating after negative feedback. Please restore if necessary to restart this again.

Revision history for this message
Anna Babich (ababich) wrote :

We hit this problem from time to time on our CI when running tests from the tempest.api.compute.volumes.test_attach_volume suite. The details are in this bug: https://bugs.launchpad.net/mos/+bug/1606218

Changed in tempest:
status: Incomplete → New
Jason (kouych)
Changed in cinder:
status: Invalid → Opinion
status: Opinion → Invalid
Revision history for this message
Andrea Frittoli (andrea-frittoli) wrote :

This is a very old bug. Anna set it back to New in Aug 2016 as it may be related to https://bugs.launchpad.net/mos/+bug/1606218, which has since been fixed.
Hence I will set this to Invalid. If someone hits an ssh bug in the gate again, please file a new bug.

Changed in tempest:
status: New → Invalid
importance: Critical → Undecided