libcache-mmap-perl 0.11-3build1 source package in Ubuntu

Changelog

libcache-mmap-perl (0.11-3build1) utopic; urgency=medium

  * Rebuild for Perl 5.20.0.
 -- Colin Watson <email address hidden>   Wed, 20 Aug 2014 11:43:21 +0100

Upload details

Uploaded by:
Colin Watson
Uploaded to:
Utopic
Original maintainer:
Debian Perl Group
Architectures:
any
Section:
perl
Urgency:
Medium

Downloads

File Size SHA-256 Checksum
libcache-mmap-perl_0.11.orig.tar.gz 21.0 KiB 2c9db069fff990c0765e600ca968114895f7e05aa661a18e59e94b3cd841abaa
libcache-mmap-perl_0.11-3build1.debian.tar.gz 3.9 KiB 2a420ac164f6eac5a9e5f3d55af560b7b484accb13a5cdfb728f889c13aab95e
libcache-mmap-perl_0.11-3build1.dsc 2.1 KiB dc1fa8f06dadd72d29fb9772b065db5719beb04f5251996c37230e8b2865e8d0

Binary packages built by this source

libcache-mmap-perl: module to provide a shared data cache using memory mapped files

 Cache::Mmap implements a shared data cache using memory-mapped files. If
 routines are provided to interact with the underlying data, access to the
 cache is completely transparent, and the module handles all the details of
 refreshing cache contents and updating the underlying data where necessary.
 .
 Cache entries are assigned to "buckets" within the cache file, depending on
 the key. Within each bucket, entries are stored approximately in order of
 last access, so that frequently accessed entries move to the head of the
 bucket, decreasing access time. Concurrent access to the same bucket is
 prevented by file locking of the relevant section of the cache file.
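
 A minimal usage sketch of the module's API follows. The cache file path,
 option values, and stored data are illustrative, and the constructor options
 shown are only a subset; see the Cache::Mmap documentation for the full list.

  use strict;
  use warnings;
  use Cache::Mmap;

  # Open (or create) a memory-mapped cache file. Option values here are
  # examples only; buckets/bucketsize control how the file is divided.
  my $cache = Cache::Mmap->new('/tmp/example.cmm', {
      buckets    => 13,      # number of buckets in the cache file
      bucketsize => 4096,    # size of each bucket, in bytes
  });

  # Store a value under a key, then read it back. read() returns a
  # "found" flag and the cached value.
  $cache->write('user:42', { name => 'Alice', visits => 3 });
  my ($found, $entry) = $cache->read('user:42');
  print "$entry->{name}\n" if $found;

  # Remove the entry when it is no longer needed.
  $cache->delete('user:42');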

libcache-mmap-perl-dbgsym: debug symbols for package libcache-mmap-perl

 (Same description as libcache-mmap-perl above.)