NVMe/FC connections fail to reestablish after controller is reset

Bug #1874270 reported by Jennifer Duong
This bug affects 2 people
Affects: nvme-cli (Ubuntu)
Status: Triaged
Importance: Undecided
Assigned to: Unassigned

Bug Description

My FC host can't reestablish NVMe/FC connections after one of my E-Series controllers is reset. This is with Ubuntu 20.04, kernel 5.4.0-25-generic, and nvme-cli 1.9-1. I'm seeing this on both my fabric-attached and direct-connect systems. These are the HBAs I'm running with:

Emulex LPe16002B-M6 FV12.4.243.11 DV12.6.0.4 HN:ICTM1610S01H1 OS:Linux
Emulex LPe16002B-M6 FV12.4.243.11 DV12.6.0.4 HN:ICTM1610S01H1 OS:Linux
Emulex LPe32002-M2 FV12.4.243.17 DV12.6.0.4 HN:ICTM1610S01H1 OS:Linux
Emulex LPe32002-M2 FV12.4.243.17 DV12.6.0.4 HN:ICTM1610S01H1 OS:Linux
Emulex LPe35002-M2 FV12.4.243.23 DV12.6.0.4 HN:ICTM1610S01H1 OS:Linux
Emulex LPe35002-M2 FV12.4.243.23 DV12.6.0.4 HN:ICTM1610S01H1 OS:Linux

QLE2742 FW:v8.08.231 DVR:v10.01.00.19-k
QLE2742 FW:v8.08.231 DVR:v10.01.00.19-k
QLE2692 FW:v8.08.231 DVR:v10.01.00.19-k
QLE2692 FW:v8.08.231 DVR:v10.01.00.19-k

ProblemType: Bug
DistroRelease: Ubuntu 20.04
Package: nvme-cli 1.9-1
ProcVersionSignature: Ubuntu 5.4.0-25.29-generic 5.4.30
Uname: Linux 5.4.0-25-generic x86_64
ApportVersion: 2.20.11-0ubuntu27
Architecture: amd64
CasperMD5CheckResult: skip
Date: Wed Apr 22 09:26:00 2020
InstallationDate: Installed on 2020-04-13 (8 days ago)
InstallationMedia: Ubuntu-Server 20.04 LTS "Focal Fossa" - Alpha amd64 (20200124)
ProcEnviron:
 TERM=xterm
 PATH=(custom, no user)
 XDG_RUNTIME_DIR=<set>
 LANG=en_US.UTF-8
 SHELL=/bin/bash
SourcePackage: nvme-cli
UpgradeStatus: No upgrade log present (probably fresh install)
modified.conffile..etc.nvme.hostnqn: ictm1610s01h1-hostnqn
mtime.conffile..etc.nvme.hostnqn: 2020-04-14T16:02:14.512816

Revision history for this message
Jennifer Duong (jduong) wrote :

I've attached logs.

Revision history for this message
Jennifer Duong (jduong) wrote :

Also, it does not look like Broadcom's website has an autoconnect script that supports Ubuntu.

Revision history for this message
Jennifer Duong (jduong) wrote :

I am still seeing this with Ubuntu 20.04 LTS.

Revision history for this message
Launchpad Janitor (janitor) wrote :

Status changed to 'Confirmed' because the bug affects multiple users.

Changed in nvme-cli (Ubuntu):
status: New → Confirmed
Revision history for this message
Dan Streetman (ddstreet) wrote :

You'll need to check the specific output/log of the nvmf-connect@ services that are failing, e.g.:

Apr 21 16:48:53 ICTM1610S01H2 systemd-udevd[2946]: fc_udev_device: Process 'systemctl --no-block start nvmf-connect@--device=none\t--transport=fc\t--traddr=nn-0x200400a098d85eb4:pn-0x202400a098d85eb4\t--trsvcid=none\t--host-traddr=nn-0x20000024ff1877fb:pn-0x21000024ff1877fb.service' failed with exit code 1.
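
For reference, a quick way to find which of these template instances failed (a sketch; both are standard systemctl usage, and the glob pattern follows the instance naming shown above):

list failed nvmf-connect instances:
$ systemctl list-units --failed 'nvmf-connect@*'

show status and recent log output for them:
$ systemctl status 'nvmf-connect@*' --no-pager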

Changed in nvme-cli (Ubuntu):
status: Confirmed → Incomplete
Revision history for this message
Launchpad Janitor (janitor) wrote :

[Expired for nvme-cli (Ubuntu) because there has been no activity for 60 days.]

Changed in nvme-cli (Ubuntu):
status: Incomplete → Expired
Changed in nvme-cli (Ubuntu):
status: Expired → Incomplete
Revision history for this message
Jennifer Duong (jduong) wrote :

Dan, where can I find the location of these logs?

Revision history for this message
Dan Streetman (ddstreet) wrote :

> Dan, where can I find the location of these logs?

They should be in /var/log/syslog and/or the journal, which you can check with:

for all logs:
$ journalctl

just logs for this boot:
$ journalctl -b

just logs for the nvmf-connect unit(s):
$ journalctl -u 'nvmf-connect*'

You probably only really need to check the last one, to get the logs specific to the nvmf-connect service failures.

Revision history for this message
Jennifer Duong (jduong) wrote :

At the time a storage controller is failed, /var/log/syslog and journalctl look identical:

Apr 7 11:45:28 ICTM1608S01H1 kernel: [586649.657080] lpfc 0000:af:00.1: 5:(0):6172 NVME rescanned DID x3d0a00 port_state x2
Apr 7 11:45:28 ICTM1608S01H1 kernel: [586649.657268] lpfc 0000:18:00.1: 1:(0):6172 NVME rescanned DID x3d0a00 port_state x2
Apr 7 11:45:28 ICTM1608S01H1 kernel: [586649.658064] nvme nvme5: NVME-FC{4}: controller connectivity lost. Awaiting Reconnect
Apr 7 11:45:28 ICTM1608S01H1 kernel: [586649.659036] nvme nvme1: NVME-FC{0}: controller connectivity lost. Awaiting Reconnect
Apr 7 11:45:28 ICTM1608S01H1 systemd-udevd[2895178]: fc_udev_device: Process 'systemctl --no-block start nvmf-connect@--device=none\t--transport=fc\t--traddr=nn-0x200200a098d8580e:pn-0x202200a098d8580e\t--trsvcid=none\t--host-traddr=nn-0x20000090fadcc5ce:pn-0x10000090fadcc5ce.service' failed with exit code 1.
Apr 7 11:45:28 ICTM1608S01H1 systemd-udevd[2895178]: fc_udev_device: Process 'systemctl --no-block start nvmf-connect@--device=none\t--transport=fc\t--traddr=nn-0x200200a098d8580e:pn-0x202200a098d8580e\t--trsvcid=none\t--host-traddr=nn-0x200000109b8f2b8e:pn-0x100000109b8f2b8e.service' failed with exit code 1.
Apr 7 11:45:28 ICTM1608S01H1 kernel: [586649.680671] nvme nvme5: NVME-FC{4}: io failed due to lldd error 6
Apr 7 11:45:28 ICTM1608S01H1 kernel: [586649.703918] nvme nvme1: NVME-FC{0}: io failed due to lldd error 6
Apr 7 11:45:29 ICTM1608S01H1 kernel: [586650.469693] lpfc 0000:af:00.0: 4:(0):6172 NVME rescanned DID x011400 port_state x2
Apr 7 11:45:29 ICTM1608S01H1 kernel: [586650.469715] lpfc 0000:18:00.0: 0:(0):6172 NVME rescanned DID x011400 port_state x2
Apr 7 11:45:29 ICTM1608S01H1 kernel: [586650.470629] nvme nvme4: NVME-FC{1}: controller connectivity lost. Awaiting Reconnect
Apr 7 11:45:29 ICTM1608S01H1 kernel: [586650.471611] nvme nvme8: NVME-FC{5}: controller connectivity lost. Awaiting Reconnect
Apr 7 11:45:29 ICTM1608S01H1 systemd-udevd[2895178]: fc_udev_device: Process 'systemctl --no-block start nvmf-connect@--device=none\t--transport=fc\t--traddr=nn-0x200200a098d8580e:pn-0x201200a098d8580e\t--trsvcid=none\t--host-traddr=nn-0x20000090fadcc5cd:pn-0x10000090fadcc5cd.service' failed with exit code 1.
Apr 7 11:45:29 ICTM1608S01H1 systemd-udevd[2895178]: fc_udev_device: Process 'systemctl --no-block start nvmf-connect@--device=none\t--transport=fc\t--traddr=nn-0x200200a098d8580e:pn-0x201200a098d8580e\t--trsvcid=none\t--host-traddr=nn-0x200000109b8f2b8d:pn-0x100000109b8f2b8d.service' failed with exit code 1.
Apr 7 11:45:29 ICTM1608S01H1 kernel: [586650.493222] nvme nvme4: NVME-FC{1}: io failed due to lldd error 6
Apr 7 11:45:29 ICTM1608S01H1 kernel: [586650.516848] nvme nvme8: NVME-FC{5}: io failed due to lldd error 6
Apr 7 11:45:59 ICTM1608S01H1 kernel: [586680.663369] rport-10:0-9: blocked FC remote port time out: removing rport
Apr 7 11:45:59 ICTM1608S01H1 kernel: [586680.663373] rport-16:0-9: blocked FC remote port time out: removing rport
Apr 7 11:45:59 ICTM1608S01H1 kernel: [586680.663377] rport-15:0-9: blocked FC remote port time out: removing rport
Apr 7 11:45:59 ICTM1608S01H1 kernel: [586680.663383] rport...


Revision history for this message
Dan Streetman (ddstreet) wrote :

It looks like the service is failing because your controller is in the process of resetting, which appears to take several minutes. I'm not sure how the nvme-cli tools are designed to handle such a long reset time, but my first guess would be to increase the kernel rport timeout, which from the log output appears to be around 30 seconds. For your hardware, it seems that timeout should be more than 180 seconds.

Apr 07 11:45:10 ICTM1608S01H1 root[2894793]: JD: Resetting controller A
Apr 07 11:45:28 ICTM1608S01H1 kernel: lpfc 0000:af:00.1: 5:(0):6172 NVME rescanned DID x3d0a00 port_state x2
Apr 07 11:45:28 ICTM1608S01H1 kernel: lpfc 0000:18:00.1: 1:(0):6172 NVME rescanned DID x3d0a00 port_state x2
Apr 07 11:45:28 ICTM1608S01H1 kernel: nvme nvme5: NVME-FC{4}: controller connectivity lost. Awaiting Reconnect
Apr 07 11:45:28 ICTM1608S01H1 kernel: nvme nvme1: NVME-FC{0}: controller connectivity lost. Awaiting Reconnect
Apr 07 11:45:28 ICTM1608S01H1 systemd-udevd[2895178]: fc_udev_device: Process 'systemctl --no-block start nvmf-connect@--device=none\t--transp>
Apr 07 11:45:28 ICTM1608S01H1 systemd-udevd[2895178]: fc_udev_device: Process 'systemctl --no-block start nvmf-connect@--device=none\t--transp>
Apr 07 11:45:28 ICTM1608S01H1 kernel: nvme nvme5: NVME-FC{4}: io failed due to lldd error 6
Apr 07 11:45:28 ICTM1608S01H1 kernel: nvme nvme1: NVME-FC{0}: io failed due to lldd error 6
Apr 07 11:45:29 ICTM1608S01H1 kernel: lpfc 0000:af:00.0: 4:(0):6172 NVME rescanned DID x011400 port_state x2
Apr 07 11:45:29 ICTM1608S01H1 kernel: lpfc 0000:18:00.0: 0:(0):6172 NVME rescanned DID x011400 port_state x2
Apr 07 11:45:29 ICTM1608S01H1 kernel: nvme nvme4: NVME-FC{1}: controller connectivity lost. Awaiting Reconnect
Apr 07 11:45:29 ICTM1608S01H1 kernel: nvme nvme8: NVME-FC{5}: controller connectivity lost. Awaiting Reconnect
Apr 07 11:45:29 ICTM1608S01H1 systemd-udevd[2895178]: fc_udev_device: Process 'systemctl --no-block start nvmf-connect@--device=none\t--transp>
Apr 07 11:45:29 ICTM1608S01H1 systemd-udevd[2895178]: fc_udev_device: Process 'systemctl --no-block start nvmf-connect@--device=none\t--transp>
Apr 07 11:45:29 ICTM1608S01H1 kernel: nvme nvme4: NVME-FC{1}: io failed due to lldd error 6
Apr 07 11:45:29 ICTM1608S01H1 kernel: nvme nvme8: NVME-FC{5}: io failed due to lldd error 6
Apr 07 11:45:59 ICTM1608S01H1 kernel: rport-10:0-9: blocked FC remote port time out: removing rport
Apr 07 11:45:59 ICTM1608S01H1 kernel: rport-16:0-9: blocked FC remote port time out: removing rport
Apr 07 11:45:59 ICTM1608S01H1 kernel: rport-15:0-9: blocked FC remote port time out: removing rport
Apr 07 11:45:59 ICTM1608S01H1 kernel: rport-12:0-9: blocked FC remote port time out: removing rport
Apr 07 11:46:28 ICTM1608S01H1 kernel: nvme nvme5: NVME-FC{4}: dev_loss_tmo (60) expired while waiting for remoteport connectivity.
Apr 07 11:46:28 ICTM1608S01H1 kernel: nvme nvme5: Removing ctrl: NQN "nqn.1992-08.com.netapp:5700.600a098000d8580e000000005c0136a2"
Apr 07 11:46:28 ICTM1608S01H1 kernel: nvme nvme1: NVME-FC{0}: dev_loss_tmo (60) expired while waiting for remoteport connectivity.
Apr 07 11:46:28 ICTM1608S01H1 kernel: nvme nvme1: Removing ctrl: NQN "nqn.1992...


Revision history for this message
Jennifer Duong (jduong) wrote :

Dan, where do I change the kernel rport timeout? And how can I go about changing the timeout on a server with Emulex cards installed versus Qlogic?
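
(For reference, a hedged sketch: the rport timeout Dan mentions is the FC transport class dev_loss_tmo attribute, exposed through sysfs the same way for both Emulex/lpfc and QLogic/qla2xxx HBAs. The 180-second value is only an assumption based on the reset time observed above, not a verified fix.)

show the current timeout for each FC remote port:
$ grep . /sys/class/fc_remote_ports/rport-*/dev_loss_tmo

raise it to 180 seconds on one rport, as root (rport name taken from the log above):
$ echo 180 > /sys/class/fc_remote_ports/rport-10:0-9/dev_loss_tmo

The NVMe-FC controller-loss window can also be set at connect time with connect-all's --ctrl-loss-tmo option, which appears in the help text later in this thread.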

Revision history for this message
Dan Streetman (ddstreet) wrote :

Hmm, I looked at the udev rule and the service it calls a bit more closely, and I'm not sure that udev rule has *ever* worked; it attempts to pass a full list of parameters as the template value on the command line, and while I can understand the intention, it's not a good way to implement it.

$ sudo systemctl --no-block start nvmf-connect@--device=none\t--transport=fc\t--traddr=nn-0x200400a098d85eb4:pn-0x203400a098d85eb4\t--trsvcid=none\t--host-traddr=nn-0x20000090fadcc57dpn-0x10000090fadcc57d.service
Invalid unit name "nvmf-connect@--device=nonet--transport=fct--traddr=nn-0x200400a098d85eb4:pn-0x203400a098d85eb4t--trsvcid=nonet--host-traddr=nn-0x20000090fadcc57dpn-0x10000090fadcc57d.service" escaped as "nvmf-connect@--device\x3dnonet--transport\x3dfct--traddr\x3dnn-0x200400a098d85eb4:pn-0x203400a098d85eb4t--trsvcid\x3dnonet--host-traddr\x3dnn-0x20000090fadcc57dpn-0x10000090fadcc57d.service" (maybe you should use systemd-escape?).

The problem here is that systemctl doesn't allow the "=" character to be included in the unit template data.
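
For illustration, systemd ships an escaping helper; a sketch of what a correctly escaped instance would look like (the exact invocation is an assumption, but the escaped form matches the unit names that appear in the logs further down):

$ systemd-escape --template=nvmf-connect@.service -- '--device=none --transport=fc --traddr=nn-0x200400a098d85eb4:pn-0x203400a098d85eb4'
nvmf-connect@\x2d\x2ddevice\x3dnone\x20\x2d\x2dtransport\x3dfc\x20\x2d\x2dtraddr\x3dnn\x2d0x200400a098d85eb4:pn\x2d0x203400a098d85eb4.service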

Revision history for this message
Dan Streetman (ddstreet) wrote :

@jduong, can you test with the nvme-cli package from this PPA:
https://launchpad.net/~ddstreet/+archive/ubuntu/lp1874270

Revision history for this message
Jennifer Duong (jduong) wrote :

Dan, I've attached /var/log/syslog and journalctl logs of a recreate after installing nvme-cli_1.9-1ubuntu0.1+bug1874270v20210408b1_amd64 and rebooting the host. It looks like connect-all didn't recognize the "--matching" flag.

Apr 8 11:48:45 ICTM1608S01H1 root: JD: Resetting controller B
Apr 8 11:49:39 ICTM1608S01H1 kernel: [ 545.652088] lpfc 0000:af:00.1: 5:(0):6172 NVME rescanned DID x3d3800 port_state x2
Apr 8 11:49:39 ICTM1608S01H1 kernel: [ 545.652166] nvme nvme2: NVME-FC{2}: controller connectivity lost. Awaiting Reconnect
Apr 8 11:49:39 ICTM1608S01H1 kernel: [ 545.652203] lpfc 0000:18:00.1: 1:(0):6172 NVME rescanned DID x3d3800 port_state x2
Apr 8 11:49:39 ICTM1608S01H1 kernel: [ 545.652276] nvme nvme6: NVME-FC{6}: controller connectivity lost. Awaiting Reconnect
Apr 8 11:49:39 ICTM1608S01H1 kernel: [ 545.673853] nvme nvme2: NVME-FC{2}: io failed due to lldd error 6
Apr 8 11:49:39 ICTM1608S01H1 systemd[1]: Started NVMf auto-connect scan upon nvme discovery controller Events.
Apr 8 11:49:39 ICTM1608S01H1 nvme[7329]: connect-all: unrecognized option '--matching'
Apr 8 11:49:39 ICTM1608S01H1 nvme[7329]: Discover NVMeoF subsystems and connect to them [ --transport=<LIST>, -t <LIST> ] --- transport type
Apr 8 11:49:39 ICTM1608S01H1 nvme[7329]: [ --traddr=<LIST>, -a <LIST> ] --- transport address
Apr 8 11:49:39 ICTM1608S01H1 nvme[7329]: [ --trsvcid=<LIST>, -s <LIST> ] --- transport service id (e.g. IP
Apr 8 11:49:39 ICTM1608S01H1 nvme[7329]: port)
Apr 8 11:49:39 ICTM1608S01H1 nvme[7329]: [ --host-traddr=<LIST>, -w <LIST> ] --- host traddr (e.g. FC WWN's)
Apr 8 11:49:39 ICTM1608S01H1 nvme[7329]: [ --hostnqn=<LIST>, -q <LIST> ] --- user-defined hostnqn (if default
Apr 8 11:49:39 ICTM1608S01H1 nvme[7329]: not used)
Apr 8 11:49:39 ICTM1608S01H1 nvme[7329]: [ --hostid=<LIST>, -I <LIST> ] --- user-defined hostid (if default
Apr 8 11:49:39 ICTM1608S01H1 nvme[7329]: not used)
Apr 8 11:49:39 ICTM1608S01H1 nvme[7329]: [ --raw=<LIST>, -r <LIST> ] --- raw output file
Apr 8 11:49:39 ICTM1608S01H1 nvme[7329]: [ --device=<LIST>, -d <LIST> ] --- use existing discovery controller
Apr 8 11:49:39 ICTM1608S01H1 nvme[7329]: device
Apr 8 11:49:39 ICTM1608S01H1 nvme[7329]: [ --keep-alive-tmo=<LIST>, -k <LIST> ] --- keep alive timeout period in
Apr 8 11:49:39 ICTM1608S01H1 nvme[7329]: seconds
Apr 8 11:49:39 ICTM1608S01H1 nvme[7329]: [ --reconnect-delay=<LIST>, -c <LIST> ] --- reconnect timeout period in
Apr 8 11:49:39 ICTM1608S01H1 nvme[7329]: seconds
Apr 8 11:49:39 ICTM1608S01H1 nvme[7329]: [ --ctrl-loss-tmo=<LIST>, -l <LIST> ] --- controller loss timeout period in
Apr 8 11:49:39 ICTM1608S01H1 nvme[7329]: seconds
Apr 8 11:49:39 ICTM1608S01H1 nvme[7329]: [ --hdr_digest, -g ] --- enable transport protocol header
Apr 8 11:49:39 ICTM1608S01H1 ...


Revision history for this message
Dan Streetman (ddstreet) wrote :

> It looks like connect-all didn't recognize the "matching" flag

Ah sorry, that param is only used in later versions. I uploaded a new build to the PPA; can you try that one? It just started building, so it might take some time to finish and get published.

Changed in nvme-cli (Ubuntu):
status: Incomplete → Triaged
Revision history for this message
Jennifer Duong (jduong) wrote :

Dan, I've attached /var/log/syslog and journalctl logs of a recreate after installing nvme-cli_1.9-1ubuntu0.1+bug1874270v20210408b2_amd64 and rebooting the host.

Apr 8 14:44:00 ICTM1608S01H1 root: JD: Resetting controller B
Apr 8 14:44:09 ICTM1608S01H1 kernel: [ 196.190003] lpfc 0000:af:00.1: 5:(0):6172 NVME rescanned DID x3d3800 port_state x2
Apr 8 14:44:09 ICTM1608S01H1 kernel: [ 196.190082] nvme nvme2: NVME-FC{2}: controller connectivity lost. Awaiting Reconnect
Apr 8 14:44:09 ICTM1608S01H1 kernel: [ 196.190176] lpfc 0000:18:00.1: 1:(0):6172 NVME rescanned DID x3d3800 port_state x2
Apr 8 14:44:09 ICTM1608S01H1 kernel: [ 196.190268] nvme nvme6: NVME-FC{6}: controller connectivity lost. Awaiting Reconnect
Apr 8 14:44:09 ICTM1608S01H1 kernel: [ 196.211805] nvme nvme2: NVME-FC{2}: io failed due to lldd error 6
Apr 8 14:44:09 ICTM1608S01H1 systemd[1]: Started NVMf auto-connect scan upon nvme discovery controller Events.
Apr 8 14:44:09 ICTM1608S01H1 systemd[1]: nvmf-connect@\x2d\x2ddevice\x3dnone\x20\x2d\x2dtransport\x3dfc\x20\x2d\x2dtraddr\x3dnn\x2d0x200200a098d8580e:pn\x2d0x202300a098d8580e\x20\x2d\x2dtrsvcid\x3dnone\x20\x2d\x2dhost\x2dtraddr\x3dnn\x2d0x20000090fadcc5ce:pn\x2d0x10000090fadcc5ce.service: Succeeded.
Apr 8 14:44:09 ICTM1608S01H1 systemd-udevd[2827]: filp(639:nvmf-connect@\x2d\x2ddevice\x3dnone\x20\x2d\x2dtransport\x3dfc\x20\x2d\x2dtraddr\x3dnn\x2d0x200200a098d8580e:pn\x2d0x202300a098d8580e\x20\x2d\x2dtrsvcid\x3dnone\x20\x2d\x2dhost\x2dtraddr\x3dnn\x2d0x20000090fadcc5ce:pn\x2d0x10000090fadcc5ce.service): Failed to process device, ignoring: File name too long
Apr 8 14:44:09 ICTM1608S01H1 systemd-udevd[2828]: dentry(639:nvmf-connect@\x2d\x2ddevice\x3dnone\x20\x2d\x2dtransport\x3dfc\x20\x2d\x2dtraddr\x3dnn\x2d0x200200a098d8580e:pn\x2d0x202300a098d8580e\x20\x2d\x2dtrsvcid\x3dnone\x20\x2d\x2dhost\x2dtraddr\x3dnn\x2d0x20000090fadcc5ce:pn\x2d0x10000090fadcc5ce.service): Failed to process device, ignoring: File name too long
Apr 8 14:44:09 ICTM1608S01H1 systemd-udevd[2827]: pid(639:nvmf-connect@\x2d\x2ddevice\x3dnone\x20\x2d\x2dtransport\x3dfc\x20\x2d\x2dtraddr\x3dnn\x2d0x200200a098d8580e:pn\x2d0x202300a098d8580e\x20\x2d\x2dtrsvcid\x3dnone\x20\x2d\x2dhost\x2dtraddr\x3dnn\x2d0x20000090fadcc5ce:pn\x2d0x10000090fadcc5ce.service): Failed to process device, ignoring: File name too long
Apr 8 14:44:09 ICTM1608S01H1 systemd-udevd[2828]: inode_cache(639:nvmf-connect@\x2d\x2ddevice\x3dnone\x20\x2d\x2dtransport\x3dfc\x20\x2d\x2dtraddr\x3dnn\x2d0x200200a098d8580e:pn\x2d0x202300a098d8580e\x20\x2d\x2dtrsvcid\x3dnone\x20\x2d\x2dhost\x2dtraddr\x3dnn\x2d0x20000090fadcc5ce:pn\x2d0x10000090fadcc5ce.service): Failed to process device, ignoring: File name too long
Apr 8 14:44:09 ICTM1608S01H1 systemd-udevd[2829]: kmalloc-rcl-512(639:nvmf-connect@\x2d\x2ddevice\x3dnone\x20\x2d\x2dtransport\x3dfc\x20\x2d\x2dtraddr\x3dnn\x2d0x200200a098d8580e:pn\x2d0x202300a098d8580e\x20\x2d\x2dtrsvcid\x3dnone\x20\x2d\x2dhost\x2dtraddr\x3dnn\x2d0x20000090fadcc5ce:pn\x2d0x10000090fadcc5ce.service): Failed to process device, ignoring: File name too long
Apr 8 14:44:09 ICTM1608S01H1 systemd-udevd[2829]: PING(639:nvmf-connect@\x2d\x2ddevice\x3dnone\x20\x2d\x2dtr...


Revision history for this message
Dan Streetman (ddstreet) wrote :

Sigh, well, that illustrates why this design was a bad idea.

I changed the params to use their short forms, which should keep the service filename length under the limit.

To be totally clear, this is in no way a correct fix; I just want to confirm this is actually the problem. Then it'll have to be fixed upstream in a 'correct' way.
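
For illustration, using the short options from the connect-all help text quoted above, with WWNs copied from the earlier log, the instance data becomes roughly:

-d none -t fc -a nn-0x200200a098d8580e:pn-0x202300a098d8580e -s none -w nn-0x20000090fadcc5ce:pn-0x10000090fadcc5ce

which, even after escaping, should stay under the 255-byte filename limit (presumably NAME_MAX) behind the "File name too long" errors above.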
