apt-get hangs with "Waiting for headers" for exactly two minutes on dl.google.com apt line
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| APT | Unknown | Unknown | | |
| apt (Ubuntu) | | Low | daniele | |
Bug Description
Binary package hint: apt
I installed Google Chrome on Karmic and later on Lucid. Google Chrome adds the following APT line so that it can update itself:
deb http://
When running apt-get update with this line installed, it hangs for exactly two minutes saying "Waiting for headers". I've run a Wireshark dump of the connection, and the transfer seems to stop right after Release.gpg is received. After some kind of timeout is reached, the connection is closed, and everything else downloads and finishes without problems.
Attached is the Wireshark log of all this. It seems to be some kind of bug in how APT's HTTP code handles the Google server. Both wget and curl fetch the file without problems. Why does APT even need its own HTTP client implementation?
| Pedro Côrte-Real (pedrocr) wrote : | #1 |
| Pedro Côrte-Real (pedrocr) wrote : | #2 |
| Torsten Spindler (tspindler) wrote : | #3 |
I can reproduce the problem on Lucid: a two-minute hang after adding the line to sources.
| Changed in apt (Ubuntu): | |
| status: | New → Confirmed |
| Adrian Bridgett (adrian-bridgett) wrote : | #4 |
FYI, I also see this on Debian unstable using apt-cacher-ng. It hangs for a bit, then finishes.
apt-cacher-ng: Version: 0.4.4-1~bpo50+1
apt: Version 0.7.25.3
...
Get: 68 http://
99% [Waiting for headers]
99% [Waiting for headers]
99% [Waiting for headers]
Ign http://
Ign http://
Hit http://
Ign http://
Ign http://
Hit http://
Hit http://
Fetched 300kB in 2min 1s (2,460B/s)
| gozdal (gozdal) wrote : | #5 |
It seems to be a problem with HTTP/1.1 pipelining on Google's side. APT sends 6 requests in a pipeline to the server and receives only one response:
GET /linux/
Host: dl.google.com
Connection: keep-alive
If-Modified-Since: Sat, 20 Mar 2010 00:00:00 GMT
User-Agent: Ubuntu APT-HTTP/1.3 (0.7.23.1ubuntu2)
GET /linux/
Host: dl.google.com
Connection: keep-alive
User-Agent: Ubuntu APT-HTTP/1.3 (0.7.23.1ubuntu2)
GET /linux/
Host: dl.google.com
Connection: keep-alive
User-Agent: Ubuntu APT-HTTP/1.3 (0.7.23.1ubuntu2)
GET /linux/
Host: dl.google.com
Connection: keep-alive
If-Modified-Since: Sat, 20 Mar 2010 00:00:00 GMT
User-Agent: Ubuntu APT-HTTP/1.3 (0.7.23.1ubuntu2)
GET /linux/
Host: dl.google.com
Connection: keep-alive
User-Agent: Ubuntu APT-HTTP/1.3 (0.7.23.1ubuntu2)
GET /linux/
Host: dl.google.com
Connection: keep-alive
User-Agent: Ubuntu APT-HTTP/1.3 (0.7.23.1ubuntu2)
HTTP/1.1 200 OK
Last-Modified: Sat, 20 Mar 2010 00:00:00 GMT
Accept-Ranges: bytes
Content-Length: 189
Content-Type: application/
ETag: 12798
Vary: *
Date: Sun, 21 Mar 2010 00:05:52 GMT
Server: downloads
X-XSS-Protection: 0
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.6 (GNU/Linux)
iD8DBQBLpAB7oEC
MTcZmYqhpJ6jreL
=u/IL
-----END PGP SIGNATURE-----
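The pipelined request stream in the capture above can be reconstructed in a few lines. This is only a sketch of the wire format: the repository paths below are hypothetical stand-ins for the truncated ones in the trace, and only the request framing matters.

```python
# Build the kind of pipelined HTTP/1.1 request stream APT sends: all GETs go
# out back-to-back on one keep-alive connection before any response is read.
def pipelined_requests(host, paths):
    reqs = []
    for path in paths:
        reqs.append(
            f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"Connection: keep-alive\r\n"
            f"User-Agent: Ubuntu APT-HTTP/1.3\r\n"
            f"\r\n"
        )
    return "".join(reqs).encode("ascii")

# Hypothetical paths; the real ones are truncated in the trace above.
stream = pipelined_requests("dl.google.com",
                            ["/linux/deb/dists/stable/Release.gpg",
                             "/linux/deb/dists/stable/Release"])
print(stream.count(b"GET "))  # two requests leave before any response arrives
```

A compliant server must answer every pipelined request, in order; the capture shows dl.google.com answering only the first one, which is exactly what leaves APT waiting.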
As a workaround, adding
Acquire:
to a file in /etc/apt.conf.d fixes the problem for me.
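For reference, the APT option that disables pipelining is the documented `Acquire::http::Pipeline-Depth`, and drop-in files live under /etc/apt/apt.conf.d. A minimal snippet (the file name is arbitrary) would look like:

```
// /etc/apt/apt.conf.d/99-no-pipelining -- disable HTTP/1.1 pipelining
Acquire::http::Pipeline-Depth "0";
```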
| Stuart Colville (muffinresearch) wrote : | #6 |
gozdal's workaround to disable http/1.1 pipelining does the job, thanks!
Note: The apt conf dir is /etc/apt/
| Nuno Lucas (ntlucas) wrote : | #7 |
I can also confirm that the workaround works for me.
On the other hand, updates seem slower than before, so while it's a nice workaround, the real fix should be to make apt fall back to this automatically if the pipelined method times out.
| spinkham (steve-pinkham) wrote : | #8 |
The Google Chromium bug, which is linked above, claims the real fix on their end should be out in the next few days.
| Darkmike (mikefaille) wrote : | #9 |
This workaround works for me too.
| sobi3ch (sobi3ch) wrote : | #10 |
You guys mean /etc/apt/
| Nuno Lucas (ntlucas) wrote : | #11 |
sobi3ch: Any file will do. It only matters if the same setting is changed in a file read later (with a higher number prefix).
| Kamus (kamus) wrote : | #12 |
According to upstream comments this issue is solved; could somebody please check whether this behaviour still occurs under the latest release of Ubuntu Maverick? Thanks.
| Changed in apt (Ubuntu): | |
| importance: | Undecided → Low |
| status: | Confirmed → Incomplete |
| Wayne Scott (wsc9tt) wrote : | #13 |
I was seeing this problem, and it was certainly fixed months ago.
The problem was an incorrectly configured web server at Google.
However, I am still running Lucid, though I don't see why Maverick should behave any differently.
| Lucifer (lucifer666inferno) wrote : | #14 |
In Maverick I have the same problem with the Google repository.
| David Clayton (dcstar) wrote : | #15 |
This problem affects my 10.10 VMs, as all package activities seem to stall waiting for non-existent _AU translation files.
Once the timeout expires, downloading and updating work, but I still have to wait a couple of minutes before things get going.
The pipeline workaround cures this problem for me.
| vasilisc (vasilisc) wrote : | #16 |
/etc/apt/apt.conf
Acquire:
| Changed in apt (Ubuntu): | |
| assignee: | nobody → daniele (dacelli) |
| Jo Vermeulen (jozilla) wrote : | #17 |
Still occurs on 12.04 for me.
| Kyle M Weller (kylew) wrote : | #18 |
I can confirm this on 12.04.
| Cris G (selfmedicate0440) wrote : | #19 |
Still occurring as of 2014-08-25 on 14.04.


For some reason there is no apt-dbg package, so I compiled a new apt package with unstripped binaries. gdb gives me this backtrace of the http method while the connection is hung:
#0  0x00007f20dfbe5c53 in select () from /lib/libc.so.6
#1  0x0000000000404b14 in HttpMethod::Go (this=0x7fff544f4ab0, ToFile=<value optimized out>, Srv=0x2080aa0) at http.cc:789
#2  0x00000000004060e0 in ServerState::RunHeaders (this=0x2080aa0) at http.cc:396
#3  0x0000000000407ecc in HttpMethod::Loop (this=0x7fff544f4ab0) at http.cc:1151
#4  0x0000000000409fcb in main () at http_main.cc:19
I haven't spent much time looking at the code, but one thing that came to mind is that the file was only 189 bytes, so maybe by the time it reaches the select() the whole file has already been transmitted, it has somehow missed that, and it is now waiting for extra data that will never come.
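That hypothesis matches a simple model of the hang: the client has already read the only response the server will send, so the next select() blocks until its timeout expires. A minimal sketch, with a socketpair standing in for the real connection and 0.1 s in place of APT's two-minute timeout:

```python
import select
import socket

# The "server" answers exactly one of the pipelined requests, then goes quiet
# (mirroring dl.google.com in the capture above).
client, server = socket.socketpair()
server.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 3\r\n\r\ngpg")

first = client.recv(4096)   # the single response arrives immediately
# Waiting for the second response: no data ever comes, so select() just
# sits there until its timeout (120 s in APT's http method, 0.1 s here).
ready, _, _ = select.select([client], [], [], 0.1)
print(len(first), ready)
```

Once the timeout fires, APT's http method closes the connection and re-fetches the remaining files, which is why the update eventually completes despite the stall.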