RPM

Comment 9 for bug 635834

Zdenek (zdenek-redhat-bugs) wrote:

(In reply to comment #7)
> I did not mean to imply you have done --rebuilddb benchmarks. My statement is
> true no matter what: neither -qa _NOR_ --rebuilddb are proper measurements
> of rpmdb "performance" because the access is sequential. So any claim of
> "inefficient" rpmdb I/O will only apply narrowly and incompletely.

Well, I still don't get what you mean by this.

I've reported a problem that I'm experiencing as a regular daily user.

I see very slow behavior, and I'm reporting it as a problem.

Anyone can easily check it just by repeating the steps in my first post.

If you think this bugzilla's title is not correct, propose a better one.

> Are you claiming I/O stalls or not? You have not disclosed any measurement
> that directly shows stalls, only pointed out the possibility afaict. strace
> timestamps would be convincing. I have not seen stalls, or behavior
> indicative of I/O waits, while benchmarking RPM.

Statistics, cached (warm page cache):

perf stat -- rpm -qa >/dev/null

 Performance counter stats for 'rpm -qa':

    1119.839601 task-clock-msecs # 0.994 CPUs
            137 context-switches # 0.000 M/sec
              0 CPU-migrations # 0.000 M/sec
           3667 page-faults # 0.003 M/sec
     2453079208 cycles # 2190.563 M/sec
     4382415288 instructions # 1.786 IPC
       27935906 cache-references # 24.946 M/sec
         408832 cache-misses # 0.365 M/sec

    1.126486505 seconds time elapsed

Statistics, uncached (cold page cache):

 Performance counter stats for 'rpm -qa':

    2073.234535 task-clock-msecs # 0.179 CPUs
           2443 context-switches # 0.001 M/sec
              1 CPU-migrations # 0.000 M/sec
           3666 page-faults # 0.002 M/sec
     2830241244 cycles # 1365.133 M/sec
     4833644897 instructions # 1.708 IPC
       37085859 cache-references # 17.888 M/sec
         543550 cache-misses # 0.262 M/sec

   11.552013713 seconds time elapsed

Statistics, cached, without digest/signature checking:

 Performance counter stats for 'rpm -qa --nodigest --nosignature':

     355.230563 task-clock-msecs # 0.990 CPUs
             41 context-switches # 0.000 M/sec
              1 CPU-migrations # 0.000 M/sec
           3547 page-faults # 0.010 M/sec
      778136260 cycles # 2190.510 M/sec
      845519636 instructions # 1.087 IPC
       20456887 cache-references # 57.588 M/sec
         353068 cache-misses # 0.994 M/sec

    0.358798949 seconds time elapsed

Statistics, uncached, without digest/signature checking:

 Performance counter stats for 'rpm -qa --nodigest --nosignature':

    1160.226932 task-clock-msecs # 0.099 CPUs
           2322 context-switches # 0.002 M/sec
              3 CPU-migrations # 0.000 M/sec
           3546 page-faults # 0.003 M/sec
     1111501957 cycles # 958.004 M/sec
     1267144821 instructions # 1.140 IPC
       28076070 cache-references # 24.199 M/sec
         478383 cache-misses # 0.412 M/sec

   11.668546646 seconds time elapsed
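
Note what the elapsed times show: uncached, rpm -qa takes ~11.6 seconds of wall time while using only 0.1-0.2 CPUs, i.e. the process spends almost all of that time waiting on I/O. For anyone repeating the comparison, here is a minimal sketch (assuming a root shell; writing 3 to /proc/sys/vm/drop_caches is the standard way to force a cold page cache, though a reboot works too):

# warm-cache run: repeat until the timing stabilizes
perf stat -- rpm -qa >/dev/null

# cold-cache run: flush dirty pages, then drop page cache, dentries and inodes
sync
echo 3 > /proc/sys/vm/drop_caches
perf stat -- rpm -qa >/dev/null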

---
Short sample from strace -tt, cached; the two write() calls below are 0.003302 s apart:

15:12:49.660754 write(1, "dejavu-lgc-sans-mono-fonts-2.30-"..., 46) = 46
15:12:49.660860 pread(3, "\0\0\0\0\1\0\0\0\3142\0\0\0\0\0\0\3152\0\0\1\0\346\17\0\7\0\0\0J\0\1"..., 4096, 53264384) = 4096
...
15:12:49.663573 pread(3, "\0\0\0\0\1\0\0\0\3452\0\0\3442\0\0\0\0\0\0\1\0\252\f\0\7\0\0\0\0\0\0"..., 4096, 53366784) = 4096
15:12:49.663865 rt_sigprocmask(SIG_BLOCK, ~[RTMIN RT_1], [], 8) = 0
15:12:49.663953 rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
15:12:49.664056 write(1, "gedit-2.28.0-1.fc12.x86_64\n", 27) = 27

---
Short sample from strace -tt, uncached; the two write() calls below are 0.013265 s apart:

15:13:27.990154 write(1, "dejavu-lgc-sans-mono-fonts-2.30-"..., 46) = 46
...
15:13:28.003419 write(1, "gedit-2.28.0-1.fc12.x86_64\n", 27) = 27
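
These samples come from something like the following (a sketch; the trace filename is arbitrary). The -tt flag prefixes every syscall with a microsecond timestamp, so the gaps between consecutive package writes can be read off directly:

strace -tt -o /tmp/rpm-qa.trace rpm -qa >/dev/null
grep 'write(1,' /tmp/rpm-qa.trace | head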

> > Yeah - sure, python is a much bigger CPU eater in this case - but rpm is
> > not negligible either...
> LVM is not exactly svelte either. Reasoning from yum->python->lvm performance
> to claimed "inefficient database disk reading" for RPM based on the largeness
> of the code base is no measurement I understand.

I'm not exactly sure what you mean by LVM here....