libhttp-async-perl 0.29-1 source package in Ubuntu

Changelog

libhttp-async-perl (0.29-1) unstable; urgency=medium

  * Team upload

  [ Salvatore Bonaccorso ]
  * Update Vcs-Browser URL to cgit web frontend

  [ Axel Beckert ]
  * Remove Edmund von der Burg's homepage from debian/copyright. It has
    been unreachable for at least a week and his (likely working) e-mail
    address should suffice as contact information. Issue reported by DUCK.
  * Add debian/upstream/metadata
  * Import upstream version 0.29
    + Add build-dependency on libtest-fatal-perl.
  * Declare compliance with Debian Policy 3.9.6 (no other changes needed).
  * Mark package as autopkgtestable.

 -- Axel Beckert <email address hidden>  Sat, 30 May 2015 21:34:52 +0200

Upload details

Uploaded by: Debian Perl Group
Uploaded to: Sid
Original maintainer: Debian Perl Group
Architectures: all
Section: perl
Urgency: Medium

Builds

Wily: [FULLYBUILT] amd64

Downloads

File Size SHA-256 Checksum
libhttp-async-perl_0.29-1.dsc 2.3 KiB c4b9b743cdaeade79029b8bf7bc151fd66b65d3f616788dc693cf95640e1e648
libhttp-async-perl_0.29.orig.tar.gz 22.5 KiB 3cdbaf164173ef5493266e392089d77649d161778e640d3b6e0504e1de7bae57
libhttp-async-perl_0.29-1.debian.tar.xz 3.0 KiB 08b2422b45de2375cab1d2f2c1ba3ce0cd974ddc0f8dd8d4d45d2d54125ec46c

No changes file available.

Binary packages built by this source

libhttp-async-perl: module for parallel non-blocking processing of multiple HTTP requests

 Although using the conventional LWP::UserAgent is fast and easy, it has some
 drawbacks: execution blocks until the request has completed, and only one
 request can be processed at a time. HTTP::Async attempts to address these
 limitations.
 .
 It gives you an 'Async' object that you can add requests to and then retrieve
 the responses from as they finish. The actual sending and receiving of the
 requests is abstracted away. A request is transmitted as soon as you add it;
 if too many requests are already in progress, it is queued. There is no
 concept of starting or stopping - the object runs continuously.
 .
 Whilst it is waiting to receive data, it returns control to the calling code,
 meaning that you can carry out other processing whilst fetching data from the
 network. All without forking or threading - it is actually done using select
 lists.