I reported that glibc 2.26-0ubuntu1 (adopted in Artful-proposed on Sep 5) causes compilation errors when building with NVIDIA's CUDA 8.0 and 9.0RC.
Under glibc 2.24, they work.
The errors most likely come from glibc 2.26's new feature, i.e. "128-bit floating point as defined by ISO/IEC/IEEE 60559:2011 (IEEE 754-2008) and ISO/IEC TS 18661-3:2015". (Evidence is quoted at the bottom.)
I proposed a patch to
/usr/include/x86_64-linux-gnu/bits/floatn.h
which lets NVCC (CUDA's compiler) avoid __float128, a type that NVCC supports in neither CUDA 8.0 nor 9.0.
So can the patch be merged into the glibc 2.26 package to be adopted in Artful or later?
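For reference, the failure is easy to trigger: under glibc 2.26 it is enough to compile any CUDA source that includes a standard libc header, because those headers pull in bits/floatn.h. A minimal sketch (the file name is made up; I assume <cstdlib> reaches bits/floatn.h here, as <stdlib.h> does on glibc 2.26):
--------------
// repro.cu -- hypothetical minimal test case, not taken from the build below
#include <cstdlib>   // on glibc 2.26 this indirectly includes bits/floatn.h

int main() { return 0; }
--------------
Compiling it with "nvcc -c repro.cu" against glibc 2.26 should produce the floatn.h errors quoted in the evidence at the bottom, while the same file builds against glibc 2.24.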
This is another attempt at the proposal from a bug report I posted earlier (it was rejected):
https://bugs.launchpad.net/ubuntu/+source/glibc/+bug/1716816
(patch)
-------
*** floatn.h-dist	2017-09-04 16:34:21.000000000 +0900
--- floatn.h	2017-09-14 21:46:15.334033614 +0900
***************
*** 28,34 ****
     support, for x86_64 and x86.  */
  #if (defined __x86_64__ \
       ? __GNUC_PREREQ (4, 3) \
!      : (defined __GNU__ ? __GNUC_PREREQ (4, 5) : __GNUC_PREREQ (4, 4)))
  # define __HAVE_FLOAT128 1
  #else
  # define __HAVE_FLOAT128 0
--- 28,35 ----
     support, for x86_64 and x86.  */
  #if (defined __x86_64__ \
       ? __GNUC_PREREQ (4, 3) \
!      : (defined __GNU__ ? __GNUC_PREREQ (4, 5) : __GNUC_PREREQ (4, 4))) \
!     && !defined(__CUDACC__)
  # define __HAVE_FLOAT128 1
  #else
  # define __HAVE_FLOAT128 0
-------
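The extra condition works because nvcc predefines the macro __CUDACC__ while compiling CUDA sources, and ordinary host compilers do not, so only CUDA compilations lose the __float128 declarations. A small sketch of that intent (a hypothetical file of mine, not part of the patch):
--------------
// cudacc_check.cu -- illustrates the guard only
#include <cstdio>

int main()
{
#ifdef __CUDACC__
    // nvcc defines __CUDACC__, so with the patched floatn.h the libc
    // headers keep __HAVE_FLOAT128 at 0 and never declare __float128.
    std::puts("nvcc: __float128 declarations suppressed");
#else
    // gcc/clang on x86_64 do not define __CUDACC__, so they still see
    // __HAVE_FLOAT128 == 1 exactly as before the patch.
    std::puts("host compiler: __float128 declarations available");
#endif
    return 0;
}
--------------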
(evidence)
----------
1. Here is part of the output while compiling TensorFlow with CUDA on Ubuntu 17.10 beta with the proposed components (a standalone reproduction of the failing constructs is sketched after item 3 below):
--------------
INFO: From Compiling external/nccl_archive/src/broadcast.cu.cc:
/usr/include/x86_64-linux-gnu/bits/floatn.h(61): error: invalid argument to attribute "__mode__"
typedef _Complex float __cfloat128 __attribute__ ((__mode__ (__TC__)));
                                                             ^
/usr/include/x86_64-linux-gnu/bits/floatn.h(73): error: identifier "__float128" is undefined
--------------
2. Forums at Intel and NVIDIA where the problems around glibc 2.26 are reported:
NVIDIA: https://devtalk.nvidia.com/default/topic/1023776/cuda-programming-and-performance/-request-add-nvcc-compatibility-with-glibc-2-26/
3. This bug has already been discussed for Arch Linux, and the same patch was proposed there:
https://www.reddit.com/r/archlinux/comments/6zrmn1/torch_on_arch/
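For completeness, the two constructs behind those diagnostics can be reproduced outside glibc. A standalone sketch (my own two-line file mimicking the failing declarations, not glibc source); gcc on x86_64 accepts it, while NVCC 8.0/9.0RC should reject it with the same two errors:
--------------
// float128_constructs.cu -- standalone sketch of the rejected constructs
// __TC__ selects the complex mode whose components are 128-bit floats,
// i.e. the complex counterpart of __float128; NVCC's front end rejects it
typedef _Complex float my_cfloat128 __attribute__ ((__mode__ (__TC__)));
// __float128 is a GCC extension type that NVCC 8.0/9.0 does not provide
typedef __float128 my_float128;
--------------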