sbackup fails over ssh and sftp as root under Linux Mint

Bug #1415012 reported by Risto Vanhanen
This bug affects 1 person
Affects: sbackup | Status: New | Importance: Undecided | Assigned to: Unassigned | Milestone: none

Bug Description

Hello,

I am having trouble setting up sbackup in Linux Mint 17.1 MATE 64-bit to do scheduled backups as root over ssh or sftp. A local backup was created just fine. I have done sudo ssh <username>@<remote> so the host is known. Also, according to the log files, sbackup is able to create, write, read and remove the test file successfully. However, it has trouble with full and incremental backups. The incremental backup was tested against an earlier backup created under Ubuntu 10.04 with an older sbackup version.

First, setting "ssh" as the "Type of service" for the remote site gives "Location is not mountable" and the setting refuses to stick with ssh, while setting "sftp" gives a tick next to the address. Therefore I used sftp.

There seems to be a problem in the function "openfile_for_write" of "sbackup/fs_backend/_gio_utils.py". I added logging lines to see what is going on, so the function became

<code>
    @classmethod
    def openfile_for_write(cls, path):
        _gfileobj = gio.File(path)
        #FIXME: etag should be set to None though it doesn't work then!
        _etag = ''
        try:
            _etag = _gfileobj.query_info(gio.FILE_ATTRIBUTE_ETAG_VALUE)
        except gio.Error, error:
            _logger = log.LogFactory.getLogger()
            _logger.warning("Could not retrieve `etag` for `%s`: `%s`. Setting it to an empty string." % (_gfileobj.get_parse_name(), error))
            _etag = '' # Should be None
        _ostr = '' # this cannot be written to, so the next write will raise an exception
        try:
            # use "flags = gio.FILE_CREATE_REPLACE_DESTINATION", perhaps
            #_ostr = _gfileobj.replace(etag = _etag, make_backup = False)
            _ostr = _gfileobj.replace(etag = _etag, make_backup = False, flags = gio.FILE_CREATE_NONE)
        except gio.Error, error:
            _logger = log.LogFactory.getLogger()
            _msg = get_gio_errmsg(error, "Could not replace file `%s`: " % _gfileobj.get_parse_name())
            _logger.warning(_msg)
        return _ostr
</code>

In short, the first file to be written by "commit" in "sbackup/core/snapshot.py" ("commitbasefile" for incremental backups and "commitFormatfile" for full backups) fails to be opened for writing. Both use "writetofile" in "sbackup/fs_backend/_gio_utils.py", which calls "openfile_for_write" in the same .py file. In "openfile_for_write", _gfileobj.replace fails. Looking at the error messages, this seems to be a problem with gio, since in the first log gio reports both "No such file or directory" (for the etag query) and "Target file exists" (for the replace call) for the same file.

The first log shows this for "flags = gio.FILE_CREATE_NONE" (default), the second log shows this for "flags = gio.FILE_CREATE_REPLACE_DESTINATION" and the third log shows this for incremental backups.

The sbackup version is:
$ sbackup --version
sbackup 0.11.6
2015-01-27 12:56:00,738 - WARNING: Unable to remove lock file: File not found.

Best Regards,
Risto

Revision history for this message
Risto Vanhanen (risto-vanhanen-k) wrote :

Workaround:

(1) Use sshfs manually to mount your remote backup location as a local directory.
(2) Add an exception for that local directory, so the backup will not recursively include itself.
(3) Run a local backup to that directory.

I had trouble with step (3) as root, but it worked as a regular user. In this case:

(4) Set up cron to do the scheduled backups.
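The steps above can be sketched roughly as follows. The host, remote path and mount point are placeholders, and the sbackup invocation in the cron line is an assumption based on the `sbackup` command shown earlier:

```shell
# (1) mount the remote backup location locally via sshfs
mkdir -p ~/backup-mount
sshfs <username>@<remote>:/path/to/backups ~/backup-mount

# (2) in the sbackup configuration, add ~/backup-mount to the
#     excluded directories so the backup does not recurse into itself

# (3) point sbackup at ~/backup-mount as a plain local destination

# (4) schedule the backup with cron (crontab -e), e.g. nightly at 02:00:
# 0 2 * * * sbackup
```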
