2016-08-17 08:20:00 |
Andreas Nicolai |
bug |
|
|
added bug |
2016-08-17 08:23:18 |
Andreas Nicolai |
description |
I noticed that a numerical solver I develop runs much slower on 16.04.1 than on 14.04. See, for example, this output:
The counters (top part of each result section) show that the solver performs the same work on both variants. The timings (lower part, beginning with FrameworkTimeWriteOutputs) are execution times in seconds. The overall time is in the last row (WallClockTime). The first column shows results on Ubuntu 14.04, the second column on 16.04.1.
<pre>
../../data/tests/CCMTest/Kirchhoff.d6p
Reference New
IntegratorErrorTestFails 1026 == 1026
IntegratorFunctionEvals 32474 == 32474
IntegratorLESSetup 3114 == 3114
IntegratorLESSolve 32473 == 32473
IntegratorSteps 25809 == 25809
LESJacEvals 463 == 463
LESRHSEvals 3241 == 3241
LESSetups 3114 == 3114
--
FrameworkTimeWriteOutputs 0.00 ~~ 0.00
IntegratorTimeFunctionEvals 4.96 <> 9.46
IntegratorTimeLESSetup 0.38 ~~ 0.58
IntegratorTimeLESSolve 0.36 ~~ 0.35
LESTimeJacEvals 0.08 ~~ 0.08
LESTimeRHSEvals 0.27 ~~ 0.46
WallClockTime 6.13 <> 10.79
MoistField.d6o
RHField.d6o
../../data/tests/EN15026/Kirchhoff.d6p
Reference New
IntegratorErrorTestFails 2 == 2
IntegratorFunctionEvals 17685 == 17685
IntegratorLESSetup 903 == 903
IntegratorLESSolve 17684 == 17684
IntegratorSteps 17635 == 17635
LESJacEvals 295 == 295
LESRHSEvals 2065 == 2065
LESSetups 903 == 903
--
FrameworkTimeWriteOutputs 0.03 ~~ 0.03
IntegratorTimeFunctionEvals 31.04 <> 58.89
IntegratorTimeLESSetup 2.47 ~~ 3.76
IntegratorTimeLESSolve 3.05 ~~ 2.98
LESTimeJacEvals 0.28 ~~ 0.28
LESTimeRHSEvals 2.02 ~~ 3.30
WallClockTime 40.39 <> 69.39
</pre>
Particularly affected is the physics part of the code (IntegratorTimeFunctionEvals), which performs by far the most memory accesses and makes heavy use of the pow(), sqrt(), and exp() functions.
The test code was compiled with GCC 4.8.4 on Ubuntu 14.04 and was run unmodified on 16.04 (after an upgrade, and on a second machine after a fresh install).
When the code is compiled with the new GCC 5.4 on Ubuntu 16.04, the execution times are approximately the same as with GCC 4.8.4 on Ubuntu 16.04. Therefore I do not think this is a GCC bug.
I have prepared a test suite archive for download and test execution:
http://bauklimatik-dresden.de/downloads/tmp/test_suite.tar.7z
Run the test suite on 14.04 and on 16.04 and observe the numbers in the "New" column; they will differ significantly for most test cases.
Can you confirm my observation? And if so, does anyone know how to avoid this performance drop? |
I noticed that a numerical solver I develop runs much slower on 16.04.1 than on 14.04. See, for example, this output:
The counters (top part of each result section) show that the solver performs the same work on both variants. The timings (lower part, beginning with FrameworkTimeWriteOutputs) are execution times in seconds. The overall time is in the last row (WallClockTime). The first column shows results on Ubuntu 14.04, the second column on 16.04.1.
<pre>
../../data/tests/CCMTest/Kirchhoff.d6p
Reference New
IntegratorErrorTestFails 1026 == 1026
IntegratorFunctionEvals 32474 == 32474
IntegratorLESSetup 3114 == 3114
IntegratorLESSolve 32473 == 32473
IntegratorSteps 25809 == 25809
LESJacEvals 463 == 463
LESRHSEvals 3241 == 3241
LESSetups 3114 == 3114
--
FrameworkTimeWriteOutputs 0.00 ~~ 0.00
IntegratorTimeFunctionEvals 4.96 <> 9.46
IntegratorTimeLESSetup 0.38 ~~ 0.58
IntegratorTimeLESSolve 0.36 ~~ 0.35
LESTimeJacEvals 0.08 ~~ 0.08
LESTimeRHSEvals 0.27 ~~ 0.46
WallClockTime 6.13 <> 10.79
MoistField.d6o
RHField.d6o
../../data/tests/EN15026/Kirchhoff.d6p
Reference New
IntegratorErrorTestFails 2 == 2
IntegratorFunctionEvals 17685 == 17685
IntegratorLESSetup 903 == 903
IntegratorLESSolve 17684 == 17684
IntegratorSteps 17635 == 17635
LESJacEvals 295 == 295
LESRHSEvals 2065 == 2065
LESSetups 903 == 903
--
FrameworkTimeWriteOutputs 0.03 ~~ 0.03
IntegratorTimeFunctionEvals 31.04 <> 58.89
IntegratorTimeLESSetup 2.47 ~~ 3.76
IntegratorTimeLESSolve 3.05 ~~ 2.98
LESTimeJacEvals 0.28 ~~ 0.28
LESTimeRHSEvals 2.02 ~~ 3.30
WallClockTime 40.39 <> 69.39
</pre>
Particularly affected is the physics part of the code (IntegratorTimeFunctionEvals), which performs by far the most memory accesses and makes heavy use of the pow(), sqrt(), and exp() functions.
The test code was compiled with GCC 4.8.4 on Ubuntu 14.04 and was run unmodified on 16.04 (after an upgrade, and on a second machine after a fresh install).
When the code is compiled with the new GCC 5.4 on Ubuntu 16.04, the execution times are approximately the same as with GCC 4.8.4 on Ubuntu 16.04. Therefore I do not think this is a GCC bug.
I have prepared a test suite archive for download and test execution:
http://bauklimatik-dresden.de/downloads/tmp/test_suite.tar.7z
Run the test suite on 14.04 and on 16.04 and observe the numbers in the "New" column; they will differ significantly for most test cases.
Can you confirm my observation? And if so, does anyone know how to avoid this performance drop? |
|
2016-08-17 08:23:55 |
Andreas Nicolai |
description |
I noticed that a numerical solver I develop runs much slower on 16.04.1 than on 14.04. See, for example, this output:
The counters (top part of each result section) show that the solver performs the same work on both variants. The timings (lower part, beginning with FrameworkTimeWriteOutputs) are execution times in seconds. The overall time is in the last row (WallClockTime). The first column shows results on Ubuntu 14.04, the second column on 16.04.1.
<pre>
../../data/tests/CCMTest/Kirchhoff.d6p
Reference New
IntegratorErrorTestFails 1026 == 1026
IntegratorFunctionEvals 32474 == 32474
IntegratorLESSetup 3114 == 3114
IntegratorLESSolve 32473 == 32473
IntegratorSteps 25809 == 25809
LESJacEvals 463 == 463
LESRHSEvals 3241 == 3241
LESSetups 3114 == 3114
--
FrameworkTimeWriteOutputs 0.00 ~~ 0.00
IntegratorTimeFunctionEvals 4.96 <> 9.46
IntegratorTimeLESSetup 0.38 ~~ 0.58
IntegratorTimeLESSolve 0.36 ~~ 0.35
LESTimeJacEvals 0.08 ~~ 0.08
LESTimeRHSEvals 0.27 ~~ 0.46
WallClockTime 6.13 <> 10.79
MoistField.d6o
RHField.d6o
../../data/tests/EN15026/Kirchhoff.d6p
Reference New
IntegratorErrorTestFails 2 == 2
IntegratorFunctionEvals 17685 == 17685
IntegratorLESSetup 903 == 903
IntegratorLESSolve 17684 == 17684
IntegratorSteps 17635 == 17635
LESJacEvals 295 == 295
LESRHSEvals 2065 == 2065
LESSetups 903 == 903
--
FrameworkTimeWriteOutputs 0.03 ~~ 0.03
IntegratorTimeFunctionEvals 31.04 <> 58.89
IntegratorTimeLESSetup 2.47 ~~ 3.76
IntegratorTimeLESSolve 3.05 ~~ 2.98
LESTimeJacEvals 0.28 ~~ 0.28
LESTimeRHSEvals 2.02 ~~ 3.30
WallClockTime 40.39 <> 69.39
</pre>
Particularly affected is the physics part of the code (IntegratorTimeFunctionEvals), which performs by far the most memory accesses and makes heavy use of the pow(), sqrt(), and exp() functions.
The test code was compiled with GCC 4.8.4 on Ubuntu 14.04 and was run unmodified on 16.04 (after an upgrade, and on a second machine after a fresh install).
When the code is compiled with the new GCC 5.4 on Ubuntu 16.04, the execution times are approximately the same as with GCC 4.8.4 on Ubuntu 16.04. Therefore I do not think this is a GCC bug.
I have prepared a test suite archive for download and test execution:
http://bauklimatik-dresden.de/downloads/tmp/test_suite.tar.7z
Run the test suite on 14.04 and on 16.04 and observe the numbers in the "New" column; they will differ significantly for most test cases.
Can you confirm my observation? And if so, does anyone know how to avoid this performance drop? |
I noticed that a numerical solver I develop runs much slower on 16.04.1 than on 14.04. See, for example, this output:
The counters (top part of each result section) show that the solver performs the same work on both variants. The timings (lower part, beginning with FrameworkTimeWriteOutputs) are execution times in seconds. The overall time is in the last row (WallClockTime). The first column shows results on Ubuntu 14.04, the second column on 16.04.1.
../../data/tests/CCMTest/Kirchhoff.d6p
Reference New
IntegratorErrorTestFails 1026 == 1026
IntegratorFunctionEvals 32474 == 32474
IntegratorLESSetup 3114 == 3114
IntegratorLESSolve 32473 == 32473
IntegratorSteps 25809 == 25809
LESJacEvals 463 == 463
LESRHSEvals 3241 == 3241
LESSetups 3114 == 3114
--
FrameworkTimeWriteOutputs 0.00 ~~ 0.00
IntegratorTimeFunctionEvals 4.96 <> 9.46
IntegratorTimeLESSetup 0.38 ~~ 0.58
IntegratorTimeLESSolve 0.36 ~~ 0.35
LESTimeJacEvals 0.08 ~~ 0.08
LESTimeRHSEvals 0.27 ~~ 0.46
WallClockTime 6.13 <> 10.79
MoistField.d6o
RHField.d6o
../../data/tests/EN15026/Kirchhoff.d6p
Reference New
IntegratorErrorTestFails 2 == 2
IntegratorFunctionEvals 17685 == 17685
IntegratorLESSetup 903 == 903
IntegratorLESSolve 17684 == 17684
IntegratorSteps 17635 == 17635
LESJacEvals 295 == 295
LESRHSEvals 2065 == 2065
LESSetups 903 == 903
--
FrameworkTimeWriteOutputs 0.03 ~~ 0.03
IntegratorTimeFunctionEvals 31.04 <> 58.89
IntegratorTimeLESSetup 2.47 ~~ 3.76
IntegratorTimeLESSolve 3.05 ~~ 2.98
LESTimeJacEvals 0.28 ~~ 0.28
LESTimeRHSEvals 2.02 ~~ 3.30
WallClockTime 40.39 <> 69.39
Particularly affected is the physics part of the code (IntegratorTimeFunctionEvals), which performs by far the most memory accesses and makes heavy use of the pow(), sqrt(), and exp() functions.
The test code was compiled with GCC 4.8.4 on Ubuntu 14.04 and was run unmodified on 16.04 (after an upgrade, and on a second machine after a fresh install).
When the code is compiled with the new GCC 5.4 on Ubuntu 16.04, the execution times are approximately the same as with GCC 4.8.4 on Ubuntu 16.04. Therefore I do not think this is a GCC bug.
I have prepared a test suite archive for download and test execution:
http://bauklimatik-dresden.de/downloads/tmp/test_suite.tar.7z
Run the test suite on 14.04 and on 16.04 and observe the numbers in the "New" column; they will differ significantly for most test cases.
Can you confirm my observation? And if so, does anyone know how to avoid this performance drop? |
|
2016-08-17 08:51:21 |
Ubuntu Foundations Team Bug Bot |
tags |
|
bot-comment |
|
2016-08-17 15:21:29 |
Andreas Nicolai |
description |
I noticed that a numerical solver I develop runs much slower on 16.04.1 than on 14.04. See, for example, this output:
The counters (top part of each result section) show that the solver performs the same work on both variants. The timings (lower part, beginning with FrameworkTimeWriteOutputs) are execution times in seconds. The overall time is in the last row (WallClockTime). The first column shows results on Ubuntu 14.04, the second column on 16.04.1.
../../data/tests/CCMTest/Kirchhoff.d6p
Reference New
IntegratorErrorTestFails 1026 == 1026
IntegratorFunctionEvals 32474 == 32474
IntegratorLESSetup 3114 == 3114
IntegratorLESSolve 32473 == 32473
IntegratorSteps 25809 == 25809
LESJacEvals 463 == 463
LESRHSEvals 3241 == 3241
LESSetups 3114 == 3114
--
FrameworkTimeWriteOutputs 0.00 ~~ 0.00
IntegratorTimeFunctionEvals 4.96 <> 9.46
IntegratorTimeLESSetup 0.38 ~~ 0.58
IntegratorTimeLESSolve 0.36 ~~ 0.35
LESTimeJacEvals 0.08 ~~ 0.08
LESTimeRHSEvals 0.27 ~~ 0.46
WallClockTime 6.13 <> 10.79
MoistField.d6o
RHField.d6o
../../data/tests/EN15026/Kirchhoff.d6p
Reference New
IntegratorErrorTestFails 2 == 2
IntegratorFunctionEvals 17685 == 17685
IntegratorLESSetup 903 == 903
IntegratorLESSolve 17684 == 17684
IntegratorSteps 17635 == 17635
LESJacEvals 295 == 295
LESRHSEvals 2065 == 2065
LESSetups 903 == 903
--
FrameworkTimeWriteOutputs 0.03 ~~ 0.03
IntegratorTimeFunctionEvals 31.04 <> 58.89
IntegratorTimeLESSetup 2.47 ~~ 3.76
IntegratorTimeLESSolve 3.05 ~~ 2.98
LESTimeJacEvals 0.28 ~~ 0.28
LESTimeRHSEvals 2.02 ~~ 3.30
WallClockTime 40.39 <> 69.39
Particularly affected is the physics part of the code (IntegratorTimeFunctionEvals), which performs by far the most memory accesses and makes heavy use of the pow(), sqrt(), and exp() functions.
The test code was compiled with GCC 4.8.4 on Ubuntu 14.04 and was run unmodified on 16.04 (after an upgrade, and on a second machine after a fresh install).
When the code is compiled with the new GCC 5.4 on Ubuntu 16.04, the execution times are approximately the same as with GCC 4.8.4 on Ubuntu 16.04. Therefore I do not think this is a GCC bug.
I have prepared a test suite archive for download and test execution:
http://bauklimatik-dresden.de/downloads/tmp/test_suite.tar.7z
Run the test suite on 14.04 and on 16.04 and observe the numbers in the "New" column; they will differ significantly for most test cases.
Can you confirm my observation? And if so, does anyone know how to avoid this performance drop? |
I noticed that a numerical solver I develop runs much slower on 16.04.1 than on 14.04. See, for example, this output:
The counters (top part of each result section) show that the solver performs the same work on both variants. The timings (lower part, beginning with FrameworkTimeWriteOutputs) are execution times in seconds. The overall time is in the last row (WallClockTime). The first column shows results on Ubuntu 14.04, the second column on 16.04.1.
../../data/tests/CCMTest/Kirchhoff.d6p
Reference New
IntegratorErrorTestFails 1026 == 1026
IntegratorFunctionEvals 32474 == 32474
IntegratorLESSetup 3114 == 3114
IntegratorLESSolve 32473 == 32473
IntegratorSteps 25809 == 25809
LESJacEvals 463 == 463
LESRHSEvals 3241 == 3241
LESSetups 3114 == 3114
--
FrameworkTimeWriteOutputs 0.00 ~~ 0.00
IntegratorTimeFunctionEvals 4.96 <> 9.46
IntegratorTimeLESSetup 0.38 ~~ 0.58
IntegratorTimeLESSolve 0.36 ~~ 0.35
LESTimeJacEvals 0.08 ~~ 0.08
LESTimeRHSEvals 0.27 ~~ 0.46
WallClockTime 6.13 <> 10.79
MoistField.d6o
RHField.d6o
../../data/tests/EN15026/Kirchhoff.d6p
Reference New
IntegratorErrorTestFails 2 == 2
IntegratorFunctionEvals 17685 == 17685
IntegratorLESSetup 903 == 903
IntegratorLESSolve 17684 == 17684
IntegratorSteps 17635 == 17635
LESJacEvals 295 == 295
LESRHSEvals 2065 == 2065
LESSetups 903 == 903
--
FrameworkTimeWriteOutputs 0.03 ~~ 0.03
IntegratorTimeFunctionEvals 31.04 <> 58.89
IntegratorTimeLESSetup 2.47 ~~ 3.76
IntegratorTimeLESSolve 3.05 ~~ 2.98
LESTimeJacEvals 0.28 ~~ 0.28
LESTimeRHSEvals 2.02 ~~ 3.30
WallClockTime 40.39 <> 69.39
Particularly affected is the physics part of the code (IntegratorTimeFunctionEvals), which performs by far the most memory accesses and makes heavy use of the pow(), sqrt(), and exp() functions.
The test code was compiled with GCC 4.8.4 on Ubuntu 14.04 and was run unmodified on 16.04 (after an upgrade, and on a second machine after a fresh install).
When the code is compiled with the new GCC 5.4 on Ubuntu 16.04, the execution times are approximately the same as with GCC 4.8.4 on Ubuntu 16.04. Therefore I do not think this is a GCC bug.
I have prepared a test suite archive for download and test execution:
http://bauklimatik-dresden.de/downloads/tmp/test_suite.tar.7z
Run the test suite on 14.04 and on 16.04 and observe the numbers in the "New" column; they will differ significantly for most test cases.
Can you confirm my observation? And if so, does anyone know how to avoid this performance drop? |
|
2016-08-17 15:26:59 |
Andreas Nicolai |
affects |
ubuntu |
gcc |
|
2016-08-17 15:28:22 |
Andreas Nicolai |
bug task added |
|
glibc (Ubuntu) |
|
2016-08-18 10:26:53 |
Launchpad Janitor |
glibc (Ubuntu): status |
New |
Confirmed |
|
2016-08-28 07:18:25 |
Andreas Nicolai |
description |
I noticed that a numerical solver I develop runs much slower on 16.04.1 than on 14.04. See, for example, this output:
The counters (top part of each result section) show that the solver performs the same work on both variants. The timings (lower part, beginning with FrameworkTimeWriteOutputs) are execution times in seconds. The overall time is in the last row (WallClockTime). The first column shows results on Ubuntu 14.04, the second column on 16.04.1.
../../data/tests/CCMTest/Kirchhoff.d6p
Reference New
IntegratorErrorTestFails 1026 == 1026
IntegratorFunctionEvals 32474 == 32474
IntegratorLESSetup 3114 == 3114
IntegratorLESSolve 32473 == 32473
IntegratorSteps 25809 == 25809
LESJacEvals 463 == 463
LESRHSEvals 3241 == 3241
LESSetups 3114 == 3114
--
FrameworkTimeWriteOutputs 0.00 ~~ 0.00
IntegratorTimeFunctionEvals 4.96 <> 9.46
IntegratorTimeLESSetup 0.38 ~~ 0.58
IntegratorTimeLESSolve 0.36 ~~ 0.35
LESTimeJacEvals 0.08 ~~ 0.08
LESTimeRHSEvals 0.27 ~~ 0.46
WallClockTime 6.13 <> 10.79
MoistField.d6o
RHField.d6o
../../data/tests/EN15026/Kirchhoff.d6p
Reference New
IntegratorErrorTestFails 2 == 2
IntegratorFunctionEvals 17685 == 17685
IntegratorLESSetup 903 == 903
IntegratorLESSolve 17684 == 17684
IntegratorSteps 17635 == 17635
LESJacEvals 295 == 295
LESRHSEvals 2065 == 2065
LESSetups 903 == 903
--
FrameworkTimeWriteOutputs 0.03 ~~ 0.03
IntegratorTimeFunctionEvals 31.04 <> 58.89
IntegratorTimeLESSetup 2.47 ~~ 3.76
IntegratorTimeLESSolve 3.05 ~~ 2.98
LESTimeJacEvals 0.28 ~~ 0.28
LESTimeRHSEvals 2.02 ~~ 3.30
WallClockTime 40.39 <> 69.39
Particularly affected is the physics part of the code (IntegratorTimeFunctionEvals), which performs by far the most memory accesses and makes heavy use of the pow(), sqrt(), and exp() functions.
The test code was compiled with GCC 4.8.4 on Ubuntu 14.04 and was run unmodified on 16.04 (after an upgrade, and on a second machine after a fresh install).
When the code is compiled with the new GCC 5.4 on Ubuntu 16.04, the execution times are approximately the same as with GCC 4.8.4 on Ubuntu 16.04. Therefore I do not think this is a GCC bug.
I have prepared a test suite archive for download and test execution:
http://bauklimatik-dresden.de/downloads/tmp/test_suite.tar.7z
Run the test suite on 14.04 and on 16.04 and observe the numbers in the "New" column; they will differ significantly for most test cases.
Can you confirm my observation? And if so, does anyone know how to avoid this performance drop? |
I noticed that a numerical solver I develop runs much slower on 16.04.1 than on 14.04.
UPDATE:
Test results (on Ubuntu 16.04.1 LTS):
case 1 - static linking = 6.56 s
case 1 - dynamic linking = 8.29 s
case 2 - static linking = 45.8 s
case 2 - dynamic linking = 49.4 s
I compiled the solver with GCC 4.8.4 on Ubuntu 14.04. When I run the solver unmodified on 16.04.1 LTS, it runs much slower. This was tested on an Ubuntu system upgraded from 14.04.4 and on a second machine after a fresh install.
When the code is compiled with the new GCC 5.4 on Ubuntu 16.04, the execution times are approximately the same as with GCC 4.8.4 on Ubuntu 16.04. Hence, the slowdown does not seem to be caused by code generation, but rather by the runtime library.
See the attached performance test. |
|
2016-08-28 07:20:02 |
Andreas Nicolai |
description |
I noticed that a numerical solver I develop runs much slower on 16.04.1 than on 14.04.
UPDATE:
Test results (on Ubuntu 16.04.1 LTS):
case 1 - static linking = 6.56 s
case 1 - dynamic linking = 8.29 s
case 2 - static linking = 45.8 s
case 2 - dynamic linking = 49.4 s
I compiled the solver with GCC 4.8.4 on Ubuntu 14.04. When I run the solver unmodified on 16.04.1 LTS, it runs much slower. This was tested on an Ubuntu system upgraded from 14.04.4 and on a second machine after a fresh install.
When the code is compiled with the new GCC 5.4 on Ubuntu 16.04, the execution times are approximately the same as with GCC 4.8.4 on Ubuntu 16.04. Hence, the slowdown does not seem to be caused by code generation, but rather by the runtime library.
See the attached performance test. |
I noticed that a numerical solver I develop runs much slower on 16.04.1 than on 14.04.
UPDATE:
Test results (on Ubuntu 16.04.1 LTS):
case 1 - static linking = 6.56 s
case 1 - dynamic linking = 8.29 s
case 2 - static linking = 45.8 s
case 2 - dynamic linking = 49.4 s
case 3 - static linking = 7.02 s
case 3 - dynamic linking = 11.2 s
I compiled the solver with GCC 4.8.4 on Ubuntu 14.04. When I run the solver unmodified on 16.04.1 LTS, it runs much slower. This was tested on an Ubuntu system upgraded from 14.04.4 and on a second machine after a fresh install.
When the code is compiled with the new GCC 5.4 on Ubuntu 16.04, the execution times are approximately the same as with GCC 4.8.4 on Ubuntu 16.04. Hence, the slowdown does not seem to be caused by code generation, but rather by the runtime library.
See the attached performance test. |
|
2016-08-28 07:21:07 |
Andreas Nicolai |
attachment added |
|
Performance test that illustrates the runtime difference (static vs. dynamic linking) https://bugs.launchpad.net/gcc/+bug/1613996/+attachment/4729180/+files/performance_test.tar.7z |
|