On 8/24/2010 4:02 PM, Alexander Belchenko wrote:
> I've coded a simple cached FTP transport on top of standard FTP transport in bzrlib, the plugin is available at lp:~bialix/+junk/kftp
> and you need to use kftp://host/path URL instead of ftp://host/path.
> It works well and indeed reduces the time for pull and even push. I know it's ad-hoc, but it's better than nothing at all. I think it should be safe to cache packs/indices because they're supposed to be write-once data.
>
I think you can arguably cache any downloaded content for the lifetime
of the lock in the outer scope. As long as you take care of invalidating
cached data that you just uploaded. (So if you have something cached by
'get' or 'readv()' then it should be invalidated/updated by a 'put' call.)
Note that this will still be significantly less efficient than sftp,
because you aren't actually requesting a subset. You are just avoiding
requesting the same content twice. (So if you request 10 bytes 10 times
in a 1MB file, you'll still download 1MB, but that is certainly better
than 10MB.)
If you do the cache invalidation, I think we would be fine bringing that
into bzr core.
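The caching-plus-invalidation rule described above could be sketched as a transport decorator along these lines. This is a minimal illustration of the idea, not bzrlib's actual transport API: the class and method names here are hypothetical, and a real implementation would also need to scope the cache to the lifetime of the outer lock.

```python
# Sketch of a caching transport wrapper: whole-file downloads are
# cached by relative path, and any 'put' to that path drops the
# cached copy so a stale version can never be served afterwards.
# Names are illustrative, not bzrlib's API.
from io import BytesIO

class CachingTransport:
    """Wrap a backing transport, caching get() and invalidating on put()."""

    def __init__(self, backing):
        self._backing = backing
        self._cache = {}  # relpath -> bytes

    def get(self, relpath):
        # First access downloads the whole file; later accesses are local.
        if relpath not in self._cache:
            self._cache[relpath] = self._backing.get(relpath).read()
        return BytesIO(self._cache[relpath])

    def readv(self, relpath, offsets):
        # Note: this still fetches the entire file once (the FTP
        # limitation discussed above); it only avoids re-fetching it.
        data = self.get(relpath).read()
        for start, length in offsets:
            yield start, data[start:start + length]

    def put(self, relpath, fileobj):
        # Invalidate before writing so readers never see stale data.
        self._cache.pop(relpath, None)
        self._backing.put(relpath, fileobj)
```

With this shape, ten 10-byte readv() requests against a 1MB file cost one 1MB download instead of ten, matching the estimate above.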
John
=:->