From the linked threads it can only be concluded that an ever-growing number of webservers and proxies prefer to violate a MUST requirement of the HTTP/1.1 spec (RFC 2616, section 8.1.2.2 "Pipelining", second sentence: "A server MUST send its responses to those requests in the same order that the requests were received."), nothing else.
The link to the Debian BTS also includes the agreement of the squid proxy maintainer that squid should support pipelining clients, hence the still-open bug report against squid (which is not the same as squid doing pipelining itself! squid doesn't need to pipeline its own requests, it just needs to ensure that its responses are in the correct order). And while I haven't tested it, the internet believes that newer squid versions support pipelining clients (and even pipelining for squid's own requests!).
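To make that ordering requirement concrete, here is a minimal sketch of a pipelining exchange: a toy loopback server and client (illustrative only, not real apt or squid code). The client writes both requests before reading anything, and the server answers strictly in the order the requests were received, as the spec demands:

```python
import socket
import threading

def run_toy_server(sock):
    """Accept one connection, read two pipelined requests,
    and answer them strictly in the order received (the RFC 2616 MUST)."""
    conn, _ = sock.accept()
    data = b""
    # Each toy request ends with a blank line (plain GETs, no bodies).
    while data.count(b"\r\n\r\n") < 2:
        data += conn.recv(4096)
    for req in data.split(b"\r\n\r\n")[:2]:
        path = req.split(b" ")[1]          # b"GET /a HTTP/1.1" -> b"/a"
        body = b"hello from " + path
        conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: "
                     + str(len(body)).encode() + b"\r\n\r\n" + body)
    conn.close()

# Loopback server on an ephemeral port.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
t = threading.Thread(target=run_toy_server, args=(server,))
t.start()

# The client pipelines: both requests go out before any response is read.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"GET /a HTTP/1.1\r\nHost: x\r\n\r\n"
               b"GET /b HTTP/1.1\r\nHost: x\r\n\r\n")
reply = b""
while True:
    chunk = client.recv(4096)
    if not chunk:
        break
    reply += chunk
client.close()
t.join()

# Responses must arrive in request order: /a first, then /b.
ordered = reply.index(b"hello from /a") < reply.index(b"hello from /b")
print(ordered)
```

A server (or proxy) that reorders those two responses is exactly the kind of MUST violation the linked threads complain about.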
As already said in the UDS session (and after it), it's rather meaningless to perform a test on a single machine with a low-latency network connection. Obviously, pipelining only benefits the client if the connection is flaky or high-latency (like e.g. my phone). For the network in between and for the server, it may mean fewer packets carrying requests, so the server can deal with the requests faster and is done serving sooner, ready to handle another client. (My preferred real-world example: consider a shopping list. Do you shop and pay for each item individually, or do you pack everything into your cart and pay for it all in one go? Congrats, you pipelined your shopping!)
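The latency argument is easy to put in numbers. A rough back-of-the-envelope sketch (illustrative figures only, ignoring bandwidth and server processing time):

```python
def fetch_time_ms(n_requests, rtt_ms, pipelined):
    """Rough lower bound on wall-clock time to fetch n small responses
    over one connection, ignoring bandwidth and server think time."""
    if pipelined:
        # All requests go out back-to-back; responses stream back in
        # order, so roughly one round trip is paid for the whole batch.
        return rtt_ms
    # Without pipelining each request waits for the previous response.
    return n_requests * rtt_ms

# Illustrative numbers: 100 small files over a 300 ms mobile link.
print(fetch_time_ms(100, 300, pipelined=False))  # -> 30000 ms
print(fetch_time_ms(100, 300, pipelined=True))   # -> 300 ms
```

On a sub-millisecond loopback connection both variants finish in a blink, which is why a single-machine benchmark shows nothing.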
For the record: the Opera web browser has had it enabled since forever, and Chromium enabled it a bit more than two weeks ago in their dev branch (http://codereview.chromium.org/10170044). Both might try harder to work around buggy servers, though (and before someone asks: yes, if the server closes the connection, apt falls back to non-pipelining, as the spec recommends). Other browsers have it disabled for "the web is a buggy hell" reasons or out of concern about head-of-line blocking (handling request A is slow, e.g. because it needs to be generated dynamically, so response B, e.g. a static file, could already have been sent & done if we didn't need to finish A first…), which doesn't apply to repositories though, so it isn't an argument here. The "solution" in most clients is just to open a few more connections instead of working with one efficiently (real-world analogy: oh, a traffic jam! Let's split our car into two cars to get two cars through the jam in the same amount of time…). Google's SPDY and HTTP/2.0 (will) try to fix all these "the web feels sooo slow!" complaints with multiplexing and (drum-roll) pipelining…
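The fallback the spec recommends can be sketched in a few lines. This is a hedged illustration of the idea, not real apt code: `send_pipelined` and `send_single` are hypothetical transport callbacks, and the "flaky" server below drops the connection after answering only two of three requests:

```python
def fetch_all(urls, send_pipelined, send_single):
    """Sketch of the RFC-recommended fallback: try pipelining first,
    and if the server closes the connection prematurely, retry the
    unanswered requests one at a time (non-pipelined)."""
    answered = send_pipelined(urls)      # may stop short on early close
    remaining = urls[len(answered):]
    # Fall back to one request per round trip for whatever is left.
    return answered + [send_single(u) for u in remaining]

# Toy transports: the "server" drops the connection after two responses.
flaky = lambda urls: [u + ":ok" for u in urls[:2]]
single = lambda u: u + ":ok"
results = fetch_all(["/a", "/b", "/c"], flaky, single)
print(results)  # all three end up served despite the early close
```

The point is that a buggy server costs the client some round trips, but never correctness.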
So we are back at square one: the web is a buggy mess. Let's just hope that Google will once again force the web (after they have fixed their own repository to work with their own browser [reductio ad absurdum]) to be more standards-conformant. Until then I will disable it by default, as I don't have the energy to defend it like previous contributors did (which is the only real conclusion to be taken from the previously mentioned threads), and will just enable it on all my machines.
(And now, hands up: who imagined such an outcome after reading the previous four paragraphs? I just needed a reference to point people to when they complain about the new default…)