I noticed that a numerical solver I develop runs much slower on Ubuntu 16.04.1 than on 14.04. See, for example, this output:
The counters (top part of each result section) show that the solver does the same work on both variants. The timings (lower part, beginning with FrameworkTimeWriteOutputs) are execution times in seconds. The overall time is in the last row (WallClockTime). The first column shows the results on Ubuntu 14.04, the second column the times on 16.04.1.
<pre>
../../data/tests/CCMTest/Kirchhoff.d6p      MoistField.d6o
                              Reference        New
IntegratorErrorTestFails           1026 ==     1026
IntegratorFunctionEvals           32474 ==    32474
IntegratorLESSetup                 3114 ==     3114
IntegratorLESSolve                32473 ==    32473
IntegratorSteps                   25809 ==    25809
LESJacEvals                         463 ==      463
LESRHSEvals                        3241 ==     3241
LESSetups                          3114 ==     3114
--
FrameworkTimeWriteOutputs          0.00 ~~     0.00
IntegratorTimeFunctionEvals        4.96 <>     9.46
IntegratorTimeLESSetup             0.38 ~~     0.58
IntegratorTimeLESSolve             0.36 ~~     0.35
LESTimeJacEvals                    0.08 ~~     0.08
LESTimeRHSEvals                    0.27 ~~     0.46
WallClockTime                      6.13 <>    10.79

../../data/tests/EN15026/Kirchhoff.d6p      RHField.d6o
                              Reference        New
IntegratorErrorTestFails              2 ==        2
IntegratorFunctionEvals           17685 ==    17685
IntegratorLESSetup                  903 ==      903
IntegratorLESSolve                17684 ==    17684
IntegratorSteps                   17635 ==    17635
LESJacEvals                         295 ==      295
LESRHSEvals                        2065 ==     2065
LESSetups                           903 ==      903
--
FrameworkTimeWriteOutputs          0.03 ~~     0.03
IntegratorTimeFunctionEvals       31.04 <>    58.89
IntegratorTimeLESSetup             2.47 ~~     3.76
IntegratorTimeLESSolve             3.05 ~~     2.98
LESTimeJacEvals                    0.28 ~~     0.28
LESTimeRHSEvals                    2.02 ~~     3.30
WallClockTime                     40.39 <>    69.39
</pre>
Particularly affected is the physics part of the code (IntegratorTimeFunctionEvals), which does by far the most memory access and uses the pow(), sqrt() and exp() functions.
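If those math functions are the cause, the slowdown should be reproducible outside the solver. Here is a minimal microbenchmark sketch (the loop count, argument range and exponent are arbitrary choices for illustration, not values from the solver) that times pow(), sqrt() and exp() in tight loops. Compile it once on 14.04 (e.g. g++ -std=c++11 -O2 mathbench.cpp -o mathbench) and run the identical binary on both systems, just like the solver binary:
<pre>
// Times pow(), sqrt() and exp() in tight loops on the same binary.
#include <chrono>
#include <cmath>
#include <cstdio>

template <typename Func>
void timeLoop(const char * name, Func f) {
    const int N = 20000000;           // iteration count, arbitrary
    double sum = 0.0;                 // checksum so the calls are not optimized away
    auto start = std::chrono::steady_clock::now();
    for (int i = 1; i <= N; ++i)
        sum += f(1.0 + 1e-8 * i);     // keep arguments in a realistic range
    auto stop = std::chrono::steady_clock::now();
    std::printf("%-5s %7.3f s  (checksum %.6f)\n", name,
                std::chrono::duration<double>(stop - start).count(), sum);
}

int main() {
    timeLoop("pow",  [](double x) { return std::pow(x, 1.3); });
    timeLoop("sqrt", [](double x) { return std::sqrt(x); });
    timeLoop("exp",  [](double x) { return std::exp(-x); });
    return 0;
}
</pre>
If the pow() and exp() timings roughly double on 16.04 the way IntegratorTimeFunctionEvals does, the math library rather than the solver itself is the likely cause.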
The test code was compiled with GCC 4.8.4 on Ubuntu 14.04 and was run unmodified on 16.04 (after an upgrade, and on a second machine after a fresh install).
When the code is compiled with the newer GCC 5.4 on Ubuntu 16.04, the execution times are approximately the same as with GCC 4.8.4 on Ubuntu 16.04. Therefore I do not think it is a GCC bug.
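Since the binary links libm dynamically and libm is part of glibc, the same executable exercises different math library code on the two releases (14.04 ships glibc 2.19, 16.04 ships glibc 2.23). A small sketch to confirm which glibc a process actually runs against:
<pre>
// Prints the glibc version the process is running against; libm is part
// of glibc, so a different version here means different pow/exp/sqrt code.
#include <cstdio>
#include <gnu/libc-version.h>

int main() {
    std::printf("glibc version: %s\n", gnu_get_libc_version());
    return 0;
}
</pre>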
I have prepared a test suite archive for download and test execution:
http://bauklimatik-dresden.de/downloads/tmp/test_suite.tar.7z
Run the test suite on 14.04 and on 16.04 and observe the numbers in the "New" column; they will differ significantly for most test cases.
Can you confirm my observation? And if so, does anyone know how to avoid this performance drop?