Unpacking linux-headers unbelievably slow in Lubuntu Precise (Beta 1)

Bug #947664 reported by Cefn
This bug affects 32 people
Affects        Status        Importance  Assigned to  Milestone
dpkg           New           Undecided   Unassigned
dpkg (Ubuntu)  Fix Released  High        Unassigned

Bug Description

The line below is from an apt-get dist-upgrade and seems to take 30 minutes or more on a 2.1 GHz CPU (one core of a quad-core, via VirtualBox). I've been watching it for ages, long enough to log in here and file this bug, and the line is still 'hung'.

Unpacking linux-headers-3.2.0-18 (from .../linux-headers-3.2.0-18_3.2.0-18.28_all.deb)

Can anyone explain this? Is it a bug in the configuration of Lubuntu, apt-get, or something else? I read that there was a similar bug to do with dpkg using paranoid file synchronization, but a delay of this length is frankly a bit crazy.
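
For reference, the slow step can be timed in isolation once the .deb is already in the apt cache; a sketch, using the package named above:

time sudo dpkg --unpack /var/cache/apt/archives/linux-headers-3.2.0-18_3.2.0-18.28_all.deb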

summary: - Apt-get unbelievably slow when unpacking in Lubuntu Precise
+ Apt-get unbelievably slow when unpacking in Lubuntu Precise (Beta 1)
Revision history for this message
RedSingularity (redsingularity) wrote : Re: Apt-get unbelievably slow when unpacking in Lubuntu Precise (Beta 1)

Moving package, though I doubt it is the fault of the apt system.
---
Ubuntu Bug Squad volunteer triager
http://wiki.ubuntu.com/BugSquad

affects: update-manager (Ubuntu) → aptitude (Ubuntu)
Revision history for this message
Daniel Hartwig (wigs) wrote :

RedSingularity (redsingularity) wrote on 2012-03-11:
> Moving package, though I doubt it is the fault of the apt system.

Indeed. The unpacking line indicates that control has passed to dpkg at this point.

affects: aptitude (Ubuntu) → dpkg (Ubuntu)
summary: - Apt-get unbelievably slow when unpacking in Lubuntu Precise (Beta 1)
+ Unpacking linux-headers unbelievably slow in Lubuntu Precise (Beta 1)
Revision history for this message
Raphaël Hertzog (hertzog) wrote :

Well, you should definitely give more information about your configuration.

What filesystem are you using?

Is it significantly faster if you use dpkg's --force-unsafe-io?

What's the process state in the "ps" output?
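
For anyone wanting to gather those answers, a sketch (the package name is just the example from this report):

df -T /                                # filesystem type of the root partition
ps -C dpkg -o pid,stat,wchan:25,cmd    # STAT 'D' means uninterruptible disk sleep
sudo apt-get -o Dpkg::Options::="--force-unsafe-io" install --reinstall linux-headers-3.2.0-18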

Daniel Hartwig (wigs)
Changed in dpkg (Ubuntu):
status: New → Incomplete
Revision history for this message
Cefn (6-launchpad-net-cefn-com) wrote :

The build is a stock Lubuntu Precise installed on its own, taking up the whole disk with no special partitioning or formatting options. It's an 8 GB expandable virtual disk provided by Oracle VirtualBox.

It's hard to tell for sure which process is taking so long. It could in principle be some kind of build step, but if that's the case, something should be shown at the console; otherwise reporters have to assume it's the 'unpacking' step, as indicated.

The process did eventually complete. I'll see if I can recreate it the next time I do an update and carry out the diagnostic steps you describe.

Revision history for this message
Brian J. Murrell (brian-interlinx) wrote :

The process just seems to be very metadata-heavy. For example, at one point in the transaction:

open("/usr/src/linux-headers-3.2.0-20/arch/arm/mach-davinci/include/mach/cpuidle.h.dpkg-new", O_WRONLY|O_LARGEFILE) = 7
fsync(7) = 0
close(7) = 0
rename("/usr/src/linux-headers-3.2.0-20/arch/arm/mach-davinci/include/mach/cpuidle.h.dpkg-new", "/usr/src/linux-headers-3.2.0-20/arch/arm/mach-davinci/include/mach/cpuidle.h") = 0
open("/usr/src/linux-headers-3.2.0-20/arch/arm/mach-davinci/include/mach/spi.h.dpkg-new", O_WRONLY|O_LARGEFILE) = 7
fsync(7) = 0
close(7) = 0
rename("/usr/src/linux-headers-3.2.0-20/arch/arm/mach-davinci/include/mach/spi.h.dpkg-new", "/usr/src/linux-headers-3.2.0-20/arch/arm/mach-davinci/include/mach/spi.h") = 0
open("/usr/src/linux-headers-3.2.0-20/arch/arm/mach-davinci/include/mach/sram.h.dpkg-new", O_WRONLY|O_LARGEFILE) = 7
fsync(7) = 0
close(7) = 0
rename("/usr/src/linux-headers-3.2.0-20/arch/arm/mach-davinci/include/mach/sram.h.dpkg-new", "/usr/src/linux-headers-3.2.0-20/arch/arm/mach-davinci/include/mach/sram.h") = 0
open("/usr/src/linux-headers-3.2.0-20/arch/arm/mach-davinci/include/mach/timex.h.dpkg-new", O_WRONLY|O_LARGEFILE) = 7
fsync(7) = 0
close(7) = 0
rename("/usr/src/linux-headers-3.2.0-20/arch/arm/mach-davinci/include/mach/timex.h.dpkg-new", "/usr/src/linux-headers-3.2.0-20/arch/arm/mach-davinci/include/mach/timex.h") = 0
open("/usr/src/linux-headers-3.2.0-20/arch/arm/mach-davinci/include/mach/cputype.h.dpkg-new", O_WRONLY|O_LARGEFILE) = 7
fsync(7) = 0
close(7) = 0
rename("/usr/src/linux-headers-3.2.0-20/arch/arm/mach-davinci/include/mach/cputype.h.dpkg-new", "/usr/src/linux-headers-3.2.0-20/arch/arm/mach-davinci/include/mach/cputype.h") = 0
open("/usr/src/linux-headers-3.2.0-20/arch/arm/mach-davinci/include/mach/cdce949.h.dpkg-new", O_WRONLY|O_LARGEFILE) = 7
fsync(7) = 0
close(7) = 0
rename("/usr/src/linux-headers-3.2.0-20/arch/arm/mach-davinci/include/mach/cdce949.h.dpkg-new", "/usr/src/linux-headers-3.2.0-20/arch/arm/mach-davinci/include/mach/cdce949.h") = 0
open("/usr/src/linux-headers-3.2.0-20/arch/arm/mach-davinci/include/mach/gpio-davinci.h.dpkg-new", O_WRONLY|O_LARGEFILE) = 7
fsync(7) = 0
close(7) = 0
rename("/usr/src/linux-headers-3.2.0-20/arch/arm/mach-davinci/include/mach/gpio-davinci.h.dpkg-new", "/usr/src/linux-headers-3.2.0-20/arch/arm/mach-davinci/include/mach/gpio-davinci.h") = 0
open("/usr/src/linux-headers-3.2.0-20/arch/arm/mach-davinci/include/mach/usb.h.dpkg-new", O_WRONLY|O_LARGEFILE) = 7
fsync(7) = 0
close(7) = 0
rename("/usr/src/linux-headers-3.2.0-20/arch/arm/mach-davinci/include/mach/usb.h.dpkg-new", "/usr/src/linux-headers-3.2.0-20/arch/arm/mach-davinci/include/mach/usb.h") = 0
open("/usr/src/linux-he...

Revision history for this message
Launchpad Janitor (janitor) wrote :

[Expired for dpkg (Ubuntu) because there has been no activity for 60 days.]

Changed in dpkg (Ubuntu):
status: Incomplete → Expired
Revision history for this message
Brian J. Murrell (brian-interlinx) wrote : Re: [Bug 947664] Re: Unpacking linux-headers unbelievably slow in Lubuntu Precise (Beta 1)

On 12-05-30 12:18 AM, Launchpad Bug Tracker wrote:
> [Expired for dpkg (Ubuntu) because there has been no activity for 60
> days.]

This is BS. Just because Ubuntu ignores a bug for 60 days does not make
it eligible for expiry. The bug still exists and will exist until
somebody does something about it. It doesn't just magically go away
after 60 days.

Changed in dpkg (Ubuntu):
status: Expired → Incomplete
Revision history for this message
Brian J. Murrell (brian-interlinx) wrote :

The filesystem being used here is, no surprise, ext3. I really don't think this is a problem specific to any particular filesystem.

I'm currently testing dpkg's --force-unsafe-io. I expect it will speed things up, yes. But what if it does? Will Ubuntu actually use it in its package managers, given that it's unsafe across power outages? If not, there's really no point in even testing it, right?

Revision history for this message
Brian J. Murrell (brian-interlinx) wrote :

Hrm. Something's awry here. The baseline control package install only took 27 seconds. In fact it only takes 10 seconds on subsequent installations (i.e. after a dpkg -P). This really did take much, much longer previously.

However, the same operation over NFS on GigE does take 25 minutes.

Over the same NFS with --force-unsafe-io it takes 26 minutes, and that's probably because, analysing with strace, I don't see dpkg doing anything different per file being installed:

read(9, "./usr/src/linux-headers-3.2.0-24"..., 512) = 512
lstat64("/usr/src/linux-headers-3.2.0-24/arch/alpha/include/asm/gpio.h", 0xbfd1e310) = -1 ENOENT (No such file or directory)
rename("/usr/src/linux-headers-3.2.0-24/arch/alpha/include/asm/gpio.h.dpkg-tmp", "/usr/src/linux-headers-3.2.0-24/arch/alpha/include/asm/gpio.h") = -1 ENOENT (No such file or directory)
rmdir("/usr/src/linux-headers-3.2.0-24/arch/alpha/include/asm/gpio.h.dpkg-tmp") = -1 ENOENT (No such file or directory)
rmdir("/usr/src/linux-headers-3.2.0-24/arch/alpha/include/asm/gpio.h.dpkg-new") = -1 ENOENT (No such file or directory)
open("/usr/src/linux-headers-3.2.0-24/arch/alpha/include/asm/gpio.h.dpkg-new", O_RDWR|O_CREAT|O_EXCL|O_LARGEFILE, 0) = 10
read(9, "/*\n * Generic GPIO API implement"..., 1196) = 1196
write(10, "/*\n * Generic GPIO API implement"..., 1196) = 1196
read(9, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 340) = 340
sync_file_range(0xa, 0, 0, 0) = 0
fchown32(10, 0, 0) = 0
fchmod(10, 0644) = 0
close(10) = 0
utimes("/usr/src/linux-headers-3.2.0-24/arch/alpha/include/asm/gpio.h.dpkg-new", {{1338377227, 0}, {1325721344, 0}}) = 0

So it seems there is still a sync being done after each file is written, even with --force-unsafe-io.

Revision history for this message
Raphaël Hertzog (hertzog) wrote : Re: [Bug 947664] Re: Unpacking linux-headers unbelievably slow in Lubuntu Precise (Beta 1)

On Wed, 30 May 2012, Brian J. Murrell wrote:
> Over the same NFS with --force-unsafe-io it takes 26 minutes and that's
> probably because in analysing with strace, I don't see dpkg doing
> anything different per file being installed:

That's because the fsync() calls are done in a batch at the end of the
unpacking. So the per-file operations at the start are the same...
but when you use --force-unsafe-io you don't have the series of fsync()
calls at the end.

> read(9, "/*\n * Generic GPIO API implement"..., 1196) = 1196
> write(10, "/*\n * Generic GPIO API implement"..., 1196) = 1196
> read(9, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 340) = 340
> sync_file_range(0xa, 0, 0, 0) = 0

> So it seems there is still a sync being done after each file is written,
> even with --force-unsafe-io.

sync_file_range() is something different: it just tells the filesystem to
start the writeback; it doesn't wait for it to complete (unlike fsync()).

So it should be irrelevant to the time lost, unless Linux's NFS driver
handles this Linux-specific function poorly.

Cheers,
--
Raphaël Hertzog ◈ Debian Developer

Get the Debian Administrator's Handbook:
http://debian-handbook.info/get/

Revision history for this message
Brian J. Murrell (brian-interlinx) wrote :

On 12-05-30 01:11 PM, Raphaël Hertzog wrote:
>
> sync_file_range() is something different, it just tells the filesystem to
> start the writeback, it doesn't wait for it to have happened (contrary to
> fsync()).
>
> So it should be irrelevant for the time lost, unless Linux's NFS driver
> has a poor handling of this Linux-specific function.

Well, I tried NOOPing it with an LD_PRELOAD:

/* noop_sfr.c: build with "gcc -shared -fPIC -o noop_sfr.so noop_sfr.c" */
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/types.h>

/* Override glibc's sync_file_range() with a no-op that logs each call. */
int sync_file_range(int fd, off64_t offset, off64_t nbytes,
                    unsigned int flags)
{
        printf("sync_file_range()\n");
        return 0;
}

but that doesn't seem to have helped performance any.
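
For anyone repeating this test, a minimal way to build and preload such a shim (the file name is illustrative; sudo strips LD_PRELOAD from the environment, hence env):

gcc -shared -fPIC -o noop_sfr.so noop_sfr.c
sudo env LD_PRELOAD=$PWD/noop_sfr.so dpkg -i linux-headers-<version>.deb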

Revision history for this message
David Vonka (vonkad) wrote :

Have been waiting for 20 minutes now. Guest: stock 12.04 LTS server; host: 12.04 LTS Lubuntu desktop. 8 GB disk, 2 GB out of 6 GB memory. The whole dist-upgrade process is rather slow, but these headers really kill me.

Revision history for this message
Launchpad Janitor (janitor) wrote :

[Expired for dpkg (Ubuntu) because there has been no activity for 60 days.]

Changed in dpkg (Ubuntu):
status: Incomplete → Expired
Revision history for this message
Benedikt (benedikt-klotz) wrote :

I can confirm this bug in a quantal vbox. In a Precise vbox the unpack process takes under one minute. Under quantal it takes longer than 5 min. Linux Headers is the only package that has this problem.

Changed in dpkg (Ubuntu):
status: Expired → Confirmed
Changed in dpkg (Ubuntu):
importance: Undecided → High
Revision history for this message
Raphaël Hertzog (hertzog) wrote :

Hello,

On Sun, 16 Sep 2012, Benedikt wrote:
> I can confirm this bug in a quantal vbox. In a Precise vbox the unpack
> process takes under one minute. Under quantal it takes longer than 5
> min. Linux Headers is the only package that has this problem.

Are you using btrfs in quantal but ext4 in precise?

That might explain the difference.

Cheers,
--
Raphaël Hertzog ◈ Debian Developer

Do you like what I do? Support my free software work on Debian and Ubuntu:
http://raphaelhertzog.com/support-my-work/

Revision history for this message
Christian Reis (kiko) wrote :

On our Precise amd64 nfs-root systems, installing linux-headers is definitely what takes ages -- the rest of the package upgrades are always zippy. For some reason, I see 3 different dpkg-deb instances kicked off:

root 21184 0.0 0.0 16868 772 pts/1 S+ 22:05 0:00 dpkg-deb --fsys-tarfile /var/cache/apt/archives/linux-headers-3.2.0-31_3.2.0-31.50_all.deb
root 21185 0.0 0.0 16868 352 pts/1 S+ 22:05 0:00 dpkg-deb --fsys-tarfile /var/cache/apt/archives/linux-headers-3.2.0-31_3.2.0-31.50_all.deb
root 21186 0.0 0.0 16872 352 pts/1 S+ 22:05 0:00 dpkg-deb --fsys-tarfile /var/cache/apt/archives/linux-headers-3.2.0-31_3.2.0-31.50_all.deb

Stracing 21186:
write(1, "eturn pte;\n}\n\nstatic inline int "..., 4096) = 4096
write(1, "_id));\n}\n\nstatic inline int chp_"..., 4096) = 4096
write(1, "x0100-0x00fc];\t/* 0x00fc */\n\tpsw"..., 4096) = 4096
write(1, "01a0-0x0180];\t/* 0x0180 */\n\tpsw_"..., 4096) = 4096
write(1, "/*\n * Pull in the generic implem"..., 4096) = 4096
(printing about 2-3 lines per second)

Stracing 21185:
write(5, "\306\312\340@\242\334Kg`\273\326f\271\377\330\276%/\224y\301\273\215\327>{\357\244tK\20\340"..., 32768) = 32768
read(3, "0\6R\1\3514\206e\355-Q\255\340+V\340 \374\365\360\226\245\36a\365\7\25266T\2313"..., 32768) = 32768
write(5, "0\6R\1\3514\206e\355-Q\255\340+V\340 \374\365\360\226\245\36a\365\7\25266T\2313"..., 32768) = 32768
(Taking about 5 seconds between lines)

Stracing 21184:
wait4(21186, ^C <unfinished ...>

Is this purely an NFS issue?

Revision history for this message
yoyoq (yoyoq) wrote :

I see the same behaviour, Xubuntu 12.04:
ext4 hard drive, no NFS involved.

Revision history for this message
Richard James (rjames13) wrote :

I have this problem in Xubuntu 12.10, ext4 on an SSD (no "discard" option in fstab).

I had a previous Xubuntu 12.04 install on this machine (different mechanical hard drive, ext4) and had no problem.

Revision history for this message
Ted (ted276) wrote :

I'm experiencing this problem as well. I suggest all of those with the problem post some information to help narrow the issue down. In my case I'm using a newly installed SSD and have a feeling the slow linux-headers unpacking is just a side effect of slow write speeds.

Hard drive: SSD (Corsair CSSD-V60GB2 w/ firmware update)
Size: 60GB
Filesystem: ext4 with dm_crypt full-disk encryption

Ubuntu 12.04 LTS
Kernel: 3.2.0-36-generic

Approximate time to unpack: 30 min

# hdparm -tT /dev/sda2
/dev/sda2:
 Timing cached reads: 6526 MB in 2.00 seconds = 3270.42 MB/sec
 Timing buffered disk reads: 398 MB in 3.00 seconds = 132.57 MB/sec

# dd if=/dev/zero of=tempfile bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 16.0206 s, 67.0 MB/s

During the delay:
iHR linux-headers-3.2.0-37-generic 3.2.0-37.58

Immediately after the delay:
iU linux-headers-3.2.0-37-generic 3.2.0-37.58

linux-headers unpacks 8152 small files, so my hunch is that there is nothing special about the package, nor a problem with dpkg; linux-headers simply exposes disk speeds that are already slow but go unnoticed with other packages.
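
(That file count is easy to check against the cached package; the path below follows the naming convention seen above:)

dpkg-deb -c /var/cache/apt/archives/linux-headers-3.2.0-37-generic_3.2.0-37.58_amd64.deb | wc -l   # counts archive entries, files plus directories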

Revision history for this message
Peter Pelmatro (gruselgurke) wrote :

I have the same issue. One package that took about 50 minutes to extract was the webmin .deb file; using the Ubuntu Archive Manager it extracted within 4 seconds.
I'm using 12.10 on a 32 GB SSD from A-DATA with ext4. Going by previous reports, that might have something to do with SSDs?
Using the unsafe-io option sped up the unpacking of the webmin package: it took only about 20 seconds after that.
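
(A quick way to separate raw extraction cost from dpkg's sync-heavy unpack; the webmin file name here is illustrative:)

time dpkg-deb -x webmin_1.620_all.deb /tmp/webmin-x    # plain extraction, no per-file fsync
time sudo dpkg --unpack webmin_1.620_all.deb           # dpkg's unpack path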

Revision history for this message
Ydacha (sdm1rull) wrote :

On a Crucial v4 128 GB SSD, Ubuntu 13.04, ext4 with TRIM, extracting linux-headers-3.8.0-25-generic takes over 40 min.

Revision history for this message
A. Denton (aquina) wrote :

I can confirm this bug.

The following operations took about 15 minutes.

sudo apt-get install linux-generic linux-headers-generic linux-headers-generic-lts-quantal linux-image-generic linux-image-generic-lts-quantal linux-tools linux-tools-lts-quantal
[...]
Unpacking linux-headers-3.2.0-53 (from .../linux-headers-3.2.0-53_3.2.0-53.81_all.deb) ...
Unpacking linux-headers-3.2.0-53-generic (from .../linux-headers-3.2.0-53-generic_3.2.0-53.81_amd64.deb) ...

Unpacking linux-headers-3.5.0-40 (from .../linux-headers-3.5.0-40_3.5.0-40.62~precise1_all.deb) ...
Unpacking linux-headers-3.5.0-40-generic (from .../linux-headers-3.5.0-40-generic_3.5.0-40.62~precise1_amd64.deb) ...
[...]

Meanwhile:

:~$ sudo ps aux | grep 'dpkg'
root 1243 1.0 1.2 119988 96836 pts/6 Ds+ 11:56 0:04 /usr/bin/dpkg --status-fd 71 --unpack --auto-deconfigure /var/cache/apt/archives/linux-image-3.2.0-53-generic_3.2.0-53.81_amd64.deb /var/cache/apt/archives/linux-image-3.5.0-40-generic_3.5.0-40.62~precise1_amd64.deb /var/cache/apt/archives/linux-generic_3.2.0.53.63_amd64.deb /var/cache/apt/archives/linux-image-generic_3.2.0.53.63_amd64.deb /var/cache/apt/archives/linux-headers-3.2.0-53_3.2.0-53.81_all.deb /var/cache/apt/archives/linux-headers-3.2.0-53-generic_3.2.0-53.81_amd64.deb /var/cache/apt/archives/linux-headers-generic_3.2.0.53.63_amd64.deb /var/cache/apt/archives/linux-headers-3.5.0-40_3.5.0-40.62~precise1_all.deb /var/cache/apt/archives/linux-headers-3.5.0-40-generic_3.5.0-40.62~precise1_amd64.deb /var/cache/apt/archives/linux-headers-generic-lts-quantal_3.5.0.40.46_amd64.deb /var/cache/apt/archives/linux-image-generic-lts-quantal_3.5.0.40.46_amd64.deb /var/cache/apt/archives/linux-tools-3.2.0-53_3.2.0-53.81_amd64.deb /var/cache/apt/archives/linux-tools_3.2.0.53.63_amd64.deb /var/cache/apt/archives/linux-tools-3.5.0-40_3.5.0-40.62~precise1_amd64.deb /var/cache/apt/archives/linux-tools-lts-quantal_3.5.0.40.46_amd64.deb

Revision history for this message
Fake Name (lemuix) wrote :

I can also confirm this issue.

I'm running vanilla Ubuntu Server 12.04 LTS in a Hyper-V VM.

durr@monsrv:~$ uname -a
Linux monsrv 3.2.0-51-generic #77-Ubuntu SMP Wed Jul 24 20:18:19 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
durr@monsrv:~$ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=12.04
DISTRIB_CODENAME=precise
DISTRIB_DESCRIPTION="Ubuntu 12.04.3 LTS"
durr@monsrv:~$ ps -AF | grep dpk
root 12934 12912 0 12295 30440 0 04:04 pts/1 00:00:04 /usr/bin/dpkg --status-fd 57 --unpack --auto-deconfigure /var/cache/apt/archives/linux-headers-3.2.0-57_3.2.0-57.87_all.deb /var/cache/apt/archives/python-lazr.restfulclient_0.12.0-1ubuntu1.1_all.deb /var/cache/apt/archives/wpasupplicant_0.7.3-6ubuntu2.2_amd64.deb
durr 13213 13078 0 2347 944 0 04:18 pts/2 00:00:00 grep --color=auto dpk

It's been running for at least 15 minutes so far with no end in sight.

It's worth noting that I have no idea what the hold-up is. `iotop` isn't showing any meaningful disk activity, and the CPU isn't loaded at all.
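
(One way to see what a D-state dpkg is actually blocked on, as a sketch; reading /proc/<pid>/stack needs root:)

ps -C dpkg -o pid,stat,wchan:30,cmd
sudo cat /proc/"$(pgrep -x dpkg)"/stack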

Revision history for this message
Fake Name (lemuix) wrote :

Forgot to mention: I'm using a plain old ext4 partition.

Hyper-V instrumentation is showing a pretty static ~25-35 KB/sec read *and* write from the vhd disk.

Revision history for this message
yoyoq (yoyoq) wrote :

I see this problem.
I have an SSD and ext4, a combination I have seen mentioned a few times in other places.

Revision history for this message
yoyoq (yoyoq) wrote :

So I just decided to grit my teeth and wait for the update to finish.
It slowly does make progress.
This is the ps output:

root 757 0.5 0.2 109944 86956 pts/4 Ds+ 14:34 0:06 /usr/bin/dpkg --status-fd 59 --unpack --auto-deconfigure /var/cache/apt/archives/libdevmapper1.02.1_2%3a1.02.48-4ubuntu7.4_amd64.deb /var/cache/apt/archives/initramfs-tools_0.99ubuntu13.5_all.deb /var/cache/apt/archives/initramfs-tools-bin_0.99ubuntu13.5_amd64.deb /var/cache/apt/archives/dmsetup_2%3a1.02.48-4ubuntu7.4_amd64.deb /var/cache/apt/archives/udev_175-0ubuntu9.5_amd64.deb /var/cache/apt/archives/file_5.09-2ubuntu0.3_amd64.deb /var/cache/apt/archives/libmagic1_5.09-2ubuntu0.3_amd64.deb /var/cache/apt/archives/sudo_1.8.3p1-1ubuntu3.6_amd64.deb /var/cache/apt/archives/xkb-data_2.5-1ubuntu1.5_all.deb /var/cache/apt/archives/bind9-host_1%3a9.8.1.dfsg.P1-4ubuntu0.8_amd64.deb /var/cache/apt/archives/dnsutils_1%3a9.8.1.dfsg.P1-4ubuntu0.8_amd64.deb /var/cache/apt/archives/libisc83_1%3a9.8.1.dfsg.P1-4ubuntu0.8_amd64.deb /var/cache/apt/archives/libdns81_1%3a9.8.1.dfsg.P1-4ubuntu0.8_amd64.deb /var/cache/apt/archives/libisccc80_1%3a9.8.1.dfsg.P1-4ubuntu0.8_amd64.deb /var/cache/apt/archives/libisccfg82_1%3a9.8.1.dfsg.P1-4ubuntu0.8_amd64.deb /var/cache/apt/archives/liblwres80_1%3a9.8.1.dfsg.P1-4ubuntu0.8_amd64.deb /var/cache/apt/archives/libbind9-80_1%3a9.8.1.dfsg.P1-4ubuntu0.8_amd64.deb /var/cache/apt/archives/openssh-server_1%3a5.9p1-5ubuntu1.2_amd64.deb /var/cache/apt/archives/openssh-client_1%3a5.9p1-5ubuntu1.2_amd64.deb /var/cache/apt/archives/python-apt-common_0.8.3ubuntu7.2_all.deb /var/cache/apt/archives/python-apt_0.8.3ubuntu7.2_amd64.deb /var/cache/apt/archives/gir1.2-gtk-3.0_3.4.2-0ubuntu0.7_amd64.deb /var/cache/apt/archives/update-manager-core_1%3a0.156.14.12_amd64.deb /var/cache/apt/archives/update-manager_1%3a0.156.14.12_all.deb /var/cache/apt/archives/cups-filters_1.0.18-0ubuntu0.2_amd64.deb /var/cache/apt/archives/firefox_28.0+build2-0ubuntu0.12.04.1_amd64.deb /var/cache/apt/archives/firefox-globalmenu_28.0+build2-0ubuntu0.12.04.1_amd64.deb /var/cache/apt/archives/firefox-locale-en_28.0+build2-0ubuntu0.12.04.1_amd64.deb /var/cache/apt/archives/gir1.2-gudev-1.0_175-0ubuntu9.5_amd64.deb /var/cache/apt/archives/printer-driver-postscript-hp_3.12.2-1ubuntu3.4_all.deb /var/cache/apt/archives/printer-driver-hpijs_3.12.2-1ubuntu3.4_amd64.deb /var/cache/apt/archives/libsane-hpaio_3.12.2-1ubuntu3.4_amd64.deb /var/cache/apt/archives/hplip_3.12.2-1ubuntu3.4_amd64.deb /var/cache/apt/archives/libhpmud0_3.12.2-1ubuntu3.4_amd64.deb /var/cache/apt/archives/hplip-data_3.12.2-1ubuntu3.4_all.deb /var/cache/apt/archives/printer-driver-hpcups_3.12.2-1ubuntu3.4_amd64.deb /var/cache/apt/archives/imagemagick-common_8%3a6.6.9.7-5ubuntu3.3_all.deb /var/cache/apt/archives/libmagickcore4_8%3a6.6.9.7-5ubuntu3.3_amd64.deb /var/cache/apt/archives/libmagickwand4_8%3a6.6.9.7-5ubuntu3.3_amd64.deb /var/cache/apt/archives/imagemagick_8%3a6.6.9.7-5ubuntu3.3_amd64.deb /var/cache/apt/archives/jockey-gtk_0.9.7-0ubuntu7.14_all.deb /var/cache/apt/archives/jockey-common_0.9.7-0ubuntu7.14_all.deb /var/cache/apt/archives/libcdt4_2.26.3-10ubuntu1.1_amd64.deb /var/cache/apt/archives/libdevmapper...


Revision history for this message
Ted (ted276) wrote :

Anyone experiencing this issue may wish to try these and report back on what, if any, differences they make (concrete commands are sketched after the man page excerpt below):

- Use dpkg's force-unsafe-io. Put "force-unsafe-io" in /etc/dpkg/dpkg.cfg.d/unsafe_io

- If using ext4, mount with the "nodelalloc" option. Either add it in /etc/fstab or remount with 'mount / -o remount,nodelalloc,<your other usual options>'

- Use libeatmydata (https://www.flamingspork.com/projects/libeatmydata/), which is an LD_PRELOAD library that will disable fsync() and related functions.

FYI, from the man page for dpkg:

              unsafe-io: Do not perform safe I/O operations when unpacking. Currently this implies not performing file system syncs before file renames, which is known to cause substantial performance degradation on some file systems, unfortunately the ones that require the safe I/O on the first place due to their unreliable behaviour causing zero-length files on abrupt system crashes.

              Note: For ext4, the main offender, consider using instead the mount option nodelalloc, which will fix both the performance degradation and the data safety issues, the latter by making the file system not produce zero-length files on abrupt system crashes with any software not doing syncs before atomic renames.

              Warning: Using this option might improve performance at the cost of losing data, use with care.
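
Concretely, the three suggestions above could be applied like this (a sketch; the eatmydata package name may vary by release, and nodelalloc should also go in /etc/fstab to persist across reboots):

echo force-unsafe-io | sudo tee /etc/dpkg/dpkg.cfg.d/unsafe_io   # 1. dpkg unsafe I/O
sudo mount / -o remount,nodelalloc                               # 2. ext4 without delayed allocation
sudo apt-get install eatmydata && sudo eatmydata apt-get install --reinstall linux-headers-generic   # 3. no-fsync wrapper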

Revision history for this message
Dan42 (launchpad-net-dan42) wrote :

For the record, I used to have this problem, and neither "force-unsafe-io" nor "nodelalloc" nor the "discard" option in fstab solved it. Because... my problem was that I was using a very old SSD model that did not even have TRIM. Every write required an erase cycle, and obviously that killed performance. Problem fixed with a new SSD.

Revision history for this message
Bazzilic (bazzilic) wrote :

I am confirming this bug. Ubuntu Server 14.04.1 x64 running in VirtualBox 4.3.26 (host OS is Windows). VM specs are 1024 MB RAM, 1 CPU (out of 8 cores on a Xeon CPU), hardware acceleration enabled.

Unpacking of the linux-headers packages takes 20-30 minutes on average.

ali nagi (alin43958)
Changed in dpkg (Ubuntu):
status: Confirmed → Fix Released
Revision history for this message
Sig Bee (sloppysphincter) wrote :

" ali nagi (alin43958) on 2016-05-06
Changed in dpkg (Ubuntu):
status: Confirmed → Fix Released "

What was the fix?

I continue to have this same problem (various Linux versions on Xubuntu 18.04.2 hosts).

Revision history for this message
S. Christian Collins (s-chriscollins) wrote :

I am running into this bug in all of my Ubuntu (and variants) VirtualBox VMs as well. I didn't notice this issue with vanilla 18.04 installs, but only since upgrading the VMs to the hardware enablement stack, so perhaps this bug only happens with the newer kernels?

Revision history for this message
S. Christian Collins (s-chriscollins) wrote :

Okay, so I have narrowed this down a bit. The issue is the kernel version on the HOST system, not the GUEST. I tested this by installing the package "linux-headers-5.0.0-23" on a Xubuntu 18.04 guest in VirtualBox using two different kernels on the host PC. I used the command "time sudo apt install linux-headers-5.0.0-23" to report the total time for the process to complete. Here is how long the package installation took when running on each host OS kernel, not counting package download time (it was already in the apt cache):

Kernel 4.17.19-generic (and earlier): 20 seconds

Kernel 4.18-rc1-generic (and later): 8 minutes, 43 seconds

Using kernel 4.18 or later is roughly 26 times slower (523 s vs. 20 s)! I have not been able to find any possible reasons for this. I have already ruled out differences in I/O schedulers. I can also confirm this issue on both of my Linux systems, the details of which are listed below. Furthermore, unpacking kernel headers on the HOST system only takes a few seconds, and I haven't noticed any other regressions in hard disk performance.

** System #1 **
OS: KDE Neon 5.16 64-bit (Plasma Desktop 5.16.5, KDE Frameworks 5.61.0, Qt 5.12.3)
Motherboard: ASRock X58 Extreme3 (Intel X58 chipset)
CPU: Intel Core i7-990x Bloomfield (3.46 GHz hexa-core, Socket 1366)
RAM: 12GB DDR3
Video: EVGA NVIDIA GeForce GTX 980 Ti SC (GAMING ACX 2.0+?) w/ 6GB RAM (PCI Express)
NVIDIA video driver: 430.40

** System #2 **
OS: KDE Neon 5.16 64-bit (Plasma Desktop 5.16.5, KDE Frameworks 5.60.0, Qt 5.12.3)
PC: HP Pavilion m6-1035dx
CPU/GPU: AMD A10-4600M APU with Trinity [Radeon HD 7660G] Graphics (using xorg radeon driver)
RAM: 8GB DDR3 800 MHz

Revision history for this message
Roger (antarex882) wrote :

I have noticed this issue too, S. Christian Collins.
I have an Ubuntu host running Ubuntu Budgie 18.04 with three different VMs: one Ubuntu 16.04, one Ubuntu Budgie 18.04, and one Ubuntu Budgie 19.04 (the last two are running Linux kernel 5.0).
I had no problems with kernel updates until three weeks ago, when I had to reinstall the host machine.
After I reinstalled the host with Ubuntu Budgie 18.04.3 (with Linux kernel 5.0.0.x), I noticed that the kernel updates became very, very, very slow. I even got warnings about broken packages!
The original installation ran Ubuntu Budgie from release (18.04.1, kernel 4.15).

Also, I noticed that the same Ubuntu Budgie 18.04.3 guest VirtualBox installation took only 2 minutes to update from Linux kernel 5.0.0.27 to 5.0.0.29 on a Windows 10 host.

One other comment is that I am using VirtualBox 6.0.10.

Revision history for this message
S. Christian Collins (s-chriscollins) wrote :

Good news! I was able to work around the issue by enabling host I/O cache in the virtual machine settings. Go to "Storage", select "Controller: SATA", and enable "Use Host I/O Cache".

I'm not sure what changed in the Linux kernel to make this necessary, but I wonder if kernels earlier than 4.18 were using the file cache even when the option was disabled in VirtualBox. Someone with better knowledge of the Linux kernel and/or VirtualBox would need to answer that question.

Hopefully this will solve the problem for others as well.
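
(For scripted or headless setups, the same switch can presumably be flipped with VBoxManage while the VM is powered off; the VM and controller names here are illustrative:)

VBoxManage storagectl "My Ubuntu VM" --name "SATA" --hostiocache on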

Revision history for this message
Roger (antarex882) wrote :

I think you are on to something here.
I will try this solution on the next kernel update.

I also found some information on why this setting is not turned on by default:
https://www.electricmonk.nl/log/2016/03/14/terrible-virtualbox-disk-performance/

I have also created an issue at the VirtualBox forum (Kernel updates in VM with Ubuntu 18.04 host (kernel 5.0.0.x) takes very very long time):
https://forums.virtualbox.org/viewtopic.php?f=7&t=94822&sid=86f403dbb1ae5474d61565f11d33486c

/My regards from Sweden
Roger

Revision history for this message
Blake McBride (blake-arahant) wrote :

I tried it by comparing the same VM with and without that setting. Here is what I found:

With "Use Host I/O Cache" disabled, booting took 40 seconds.

With "Use Host I/O Cache" enabled, booting took 12 seconds.

With "Use Host I/O Cache" disabled, doing a mini update took a few minutes.

With "Use Host I/O Cache" enabled, doing a mini update occurred nearly instantaneously.

I'm convinced.

Revision history for this message
Anton Pivovarov (korindou) wrote :

I've also encountered this problem recently.
I'm installing headers 4.15.0-112.113. Progress is stuck at the unpacking stage at 4%. I've tried clearing the cache, but this didn't seem to help. dpkg is in disk sleep (D state) in htop.

Revision history for this message
funicorn (funicorn) wrote :

I encountered the same problem in Ubuntu 20.04 and 20.10. This fundamental dpkg deficiency has been there for 8 years! How ridiculous for a system like this!

Revision history for this message
Craig (enigma9o7) wrote :

Thanks much @~s-chriscollins! For me that setting in VirtualBox was it.

When I started reading this thread and saw similar issues going back to 2012 I wasn't very optimistic... but 7 years later you provide a solution! I don't understand if it's a problem in VirtualBox or a problem with dpkg, but this is great. I was using HWE in an Ubuntu 18 based VM, and every friggen time I use it there seems to be a kernel update; waiting an hour each time was ridiculous. Thank you so much for posting this. Back to a couple of minutes for updates, like it should be.

Revision history for this message
xChris (cs-.) wrote :

"fix released" , really ? F OFF devs ! seriously it is the effin 2023 and still this bug exist . FOAD

Revision history for this message
Brian J. Murrell (brian-interlinx) wrote :

@xChris Wow. Gear down there, big rig.

Yes, it sucks if you are still experiencing this bug, but just how entitled are you to think you can react as you have? How much did you pay for your Ubuntu software? Do you have a paid support contract with Canonical? If so, surely there is a better channel for showing your displeasure than this bug tracker.
