Comment 13 for bug 574910

Alex Howells (howells) wrote : Re: High load averages on Lucid EC2 while idling

For reference, the systems used for testing ('thunder', 'lightning', 'aurora') are all HP ProLiant BL460c G5 blades.

agh@thunder:~$ cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 23
model name : Intel(R) Xeon(R) CPU X5450 @ 3.00GHz
stepping : 6
cpu MHz : 3000.366
cache size : 6144 KB
physical id : 0
siblings : 4
core id : 0
cpu cores : 4
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 10
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall lm constant_tsc arch_perfmon pebs bts rep_good pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm dca sse4_1 lahf_lm tpr_shadow vnmi flexpriority
bogomips : 6000.73
clflush size : 64
cache_alignment : 64
address sizes : 38 bits physical, 48 bits virtual
power management:

Rest of the output snipped for brevity. I've spun up some additional hosts with Ubuntu 10.04 which are HP ProLiant BL495c G5, and I am unable to reproduce the issue on the 3-4 of those readily available to me. The memory usage upon 'first boot' does seem abnormally high, but that is the case under both the Karmic and Lucid kernels (see the free output below).
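For clarity, 'unable to reproduce' means the load average stays low while these machines idle. The check is nothing more sophisticated than letting a box sit for a few minutes after boot with no workload and then running something like:

agh@ferret:~$ uptime
agh@ferret:~$ cat /proc/loadavg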

Linux ferret 2.6.32-21-server #32-Ubuntu SMP Fri Apr 16 09:17:34 UTC 2010 x86_64 GNU/Linux

agh@ferret:~$ free -m
             total       used       free     shared    buffers     cached
Mem:         64560       1275      63285          0          0         36
-/+ buffers/cache:       1237      63322
Swap:         7629          0       7629

Linux ferret 2.6.31-22-server #60-Ubuntu SMP Thu May 27 03:42:09 UTC 2010 x86_64 GNU/Linux

agh@ferret:~$ free -m
             total       used       free     shared    buffers     cached
Mem:         64561       1313      63247          0          0         37
-/+ buffers/cache:       1275      63285
Swap:         7629          0       7629
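Reading those: the '-/+ buffers/cache' row is usage with buffers and page cache excluded, so on both kernels roughly 1.2-1.3 GB is genuinely consumed straight after boot, with the page cache near empty.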

Output from /proc/cpuinfo on that beefier box is as follows, again snipped for brevity:

processor : 0
vendor_id : AuthenticAMD
cpu family : 16
model : 4
model name : Quad-Core AMD Opteron(tm) Processor 2384
stepping : 2
cpu MHz : 2699.504
cache size : 512 KB
physical id : 0
siblings : 4
core id : 0
cpu cores : 4
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 5
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall mmxext fxsr_opt pdpe1gb rdtscp lm 3dnowext 3dnow constant_tsc rep_good nonstop_tsc extd_apicid pni monitor cx16 popcnt lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt
bogomips : 5398.98
TLB size : 1024 4K pages
clflush size : 64
cache_alignment : 64
address sizes : 48 bits physical, 48 bits virtual
power management: ts ttp tm stc 100mhzsteps hwpstate

Also possibly of interest: the AMD boxes see somewhat fewer wakeups per second, although the total is still dominated by the scheduler tick and may not be of relevance --

Wakeups-from-idle per second : 95.6 interval: 10.0s
no ACPI power usage estimate available

Top causes for wakeups:
  52.9% (401.1) [kernel scheduler] Load balancing tick
  26.4% (200.1) [kernel core] add_timer (smi_timeout)
  13.2% (100.0) kipmi0
   1.6% ( 12.5) [kernel core] hrtimer_start (tick_sched_timer)
   1.3% ( 10.0) [kernel core] ipmi_timeout (ipmi_timeout)
   1.0% ( 7.4) [sata_svw] <interrupt>
   0.9% ( 7.0) [ipmi_si] <interrupt>
   0.5% ( 4.0) [Rescheduling interrupts] <kernel IPI>
   0.5% ( 4.0) [kernel core] usb_hcd_poll_rh_status (rh_timer_func)
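For anyone wanting to gather the same numbers: the above is powertop output captured on an otherwise idle box. Something along the lines of the following should do it (exact flags from memory, so treat them as approximate; -d dumps one report and exits, -t sets the sample window in seconds, matching the 10.0s interval above):

agh@ferret:~$ sudo powertop -d -t 10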