-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
affects /products/soyuz
done
I am informed that soyuz, when handling incoming uploads, assumes that
all of the components of an upload (the .changes and the files listed
in it) will appear in a single FTP session. This is a violation of
the implied semantics of FTP, and can cause practical problems.
For example, if you use dupload but your upload fails (e.g. due to
network problems) after successfully completing some files, then
dupload will record success for those files but soyuz will delete
them. If you then rerun dupload, it will upload only the files which
were not successfully transferred the first time, but the
already-uploaded files will have been deleted by soyuz in the meantime.
As another example, you might reasonably upload the different parts of
an upload from different systems to save bandwidth on small links.
(Often the .orig.tar.gz is very large.)
As a third example, you might be behind an application relay (web
proxy) which starts a new FTP connection for each transfer. That's
obviously not ideal, and it's slow and wasteful, but it's not
demonstrably wrong.
The correct solution is for soyuz to retain uploaded files across
sessions rather than giving each new upload connection a new, blank
directory. Clashes between files of the same name can be resolved in
favour of the most recent, provided each target distribution or
namespace has a separate upload directory. Races can be avoided by
careful programming.
Ian.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.0.6 (GNU/Linux)
Comment: Processed by Mailcrypt 3.5.6 <http://mailcrypt.sourceforge.net/>
iD8DBQFD43hN8jyP9GfyNQARAqM9AJ4jxO7Jqs/Z+XMX2Khl0YBr3MmvPwCeKSiR
DI+R1DR6Cp52eJfLewbWnNI=
=CRKM
-----END PGP SIGNATURE-----