Activity log for bug #1717257

Date Who What changed Old value New value Message
2017-09-14 13:15:29 DSUZUKI bug added bug
2017-09-14 13:16:04 DSUZUKI description

Here is another proposal regarding a bug report I posted earlier (rejected):
https://bugs.launchpad.net/ubuntu/+source/glibc/+bug/1716816
I reported that glibc 2.26-0ubuntu1 (adopted in artful-proposed on Sep 5) causes compilation errors when using NVIDIA's CUDA 8.0 and 9.0 RC; under glibc 2.24 they work. The cause appears to be glibc 2.26's new feature, "128-bit floating point as defined by ISO/IEC/IEEE 60559:2011 (IEEE 754-2008) and ISO/IEC TS 18661-3:2015" (evidence at bottom). I proposed the patch below to /usr/include/x86_64-linux-gnu/bits/floatn.h, which lets NVCC (CUDA's compiler) avoid __float128 (neither CUDA 8.0 nor 9.0 supports it). Can the patch be merged into the glibc adopted in artful or later?
-------------------------------------------------------------------------------------
*** floatn.h-dist 2017-09-04 16:34:21.000000000 +0900
--- floatn.h 2017-09-14 21:46:15.334033614 +0900
***************
*** 28,34 ****
     support, for x86_64 and x86. */
  #if (defined __x86_64__ \
       ? __GNUC_PREREQ (4, 3) \
!      : (defined __GNU__ ? __GNUC_PREREQ (4, 5) : __GNUC_PREREQ (4, 4)))
  # define __HAVE_FLOAT128 1
  #else
  # define __HAVE_FLOAT128 0
--- 28,35 ----
     support, for x86_64 and x86. */
  #if (defined __x86_64__ \
       ? __GNUC_PREREQ (4, 3) \
!      : (defined __GNU__ ? __GNUC_PREREQ (4, 5) : __GNUC_PREREQ (4, 4))) \
!     && !defined(__CUDACC__)
  # define __HAVE_FLOAT128 1
  #else
  # define __HAVE_FLOAT128 0
-------------------------------------------------------------------------------------
(evidence)
1. Part of the error output while compiling TensorFlow with CUDA on Ubuntu 17.10 beta with proposed components:
--------------
typedef _Complex float __cfloat128 __attribute__ ((__mode__ (__TC__)));
                        ^
INFO: From Compiling external/nccl_archive/src/broadcast.cu.cc:
/usr/include/x86_64-linux-gnu/bits/floatn.h(61): error: invalid argument to attribute "__mode__"
/usr/include/x86_64-linux-gnu/bits/floatn.h(73): error: identifier "__float128" is undefined
--------------
2. NVIDIA forum thread raising the same problems around glibc 2.26:
https://devtalk.nvidia.com/default/topic/1023776/cuda-programming-and-performance/-request-add-nvcc-compatibility-with-glibc-2-26/
3. This bug has already been discussed in Arch Linux, and the same patch was proposed:
https://www.reddit.com/r/archlinux/comments/6zrmn1/torch_on_arch/
2017-09-14 16:29:26 DSUZUKI description: reworded the evidence section (dropped the Intel forum mention, reworded the Arch Linux note); body otherwise identical to the description above.
2017-09-14 16:32:41 DSUZUKI description: added header fields DistroRelease: Ubuntu 17.10 (Proposed), Package: glibc 2.26-0ubuntu1, Architecture: amd64; body otherwise identical to the description above.
2017-09-14 16:33:47 DSUZUKI description: added a note that extra packages not included in Ubuntu (NVIDIA-DRIVER, CUDA) are involved; body otherwise identical to the description above.
2017-09-15 04:29:46 DSUZUKI summary "proposal of patch to avoid erros in compiping NVCC" → "proposal of patch for glibc to avoid erros in compiping NVCC"
2017-09-17 11:04:37 DSUZUKI summary "proposal of patch for glibc to avoid erros in compiping NVCC" → "[bug / patch ]patch for glibc 2.26 to avoid errors in compiling with CUDA(NVCC)"
2017-10-10 08:15:20 Launchpad Janitor glibc (Ubuntu): status New → Confirmed
2017-10-10 08:15:29 Steffen Röcker bug added subscriber Steffen Röcker
2017-10-10 08:25:29 Steffen Röcker bug watch added https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=871011
2017-10-10 08:26:02 Steffen Röcker bug task added llvm-toolchain-3.8 (Ubuntu)
2017-10-10 08:48:16 Steffen Röcker bug watch added https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=678033
2017-10-12 04:25:41 Adam Conrad glibc (Ubuntu): assignee Adam Conrad (adconrad)
2017-10-12 05:37:53 Graham Inggs bug added subscriber Graham Inggs
2017-10-13 09:15:34 Launchpad Janitor llvm-toolchain-3.8 (Ubuntu): status New → Confirmed
2017-10-14 10:04:14 Launchpad Janitor glibc (Ubuntu): status Confirmed → Fix Released
2017-10-15 01:02:12 Adam Conrad bug task added gcc-7 (Ubuntu)
2017-10-15 01:02:47 Adam Conrad gcc-7 (Ubuntu): assignee Adam Conrad (adconrad)
2017-10-15 01:02:50 Adam Conrad llvm-toolchain-3.8 (Ubuntu): status Confirmed → Invalid
2017-10-15 07:47:53 Adam Conrad gcc-7 (Ubuntu): status New → Fix Committed
2017-10-15 16:09:39 Launchpad Janitor gcc-7 (Ubuntu): status Fix Committed → Fix Released
2021-04-26 05:45:53 Mathew Hodson bug task deleted llvm-toolchain-3.8 (Ubuntu)