Comment 1 for bug 1425705

Max Linke (max-linke) wrote :

I've started to look into this more. This seems to be a problem of the new threaded approach to scanning the library.

I guess the problem lies in recursiveScanDirectoryTask

recursiveScanDirectoryTask.cpp : 82

    if (prevHash != newHash) {
        // Rescan that mofo! If importing fails then the scan was cancelled so
        // we return immediately.
        if (!filesToImport.isEmpty()) {
            m_pScanner->queueTask(new ImportFilesTask(m_pScanner, m_scannerGlobal,
                                                      filesToImport, possibleCovers,
                                                      m_pToken));
        }

        // Insert or update the hash in the database.
        emit(directoryHashed(dirPath, !prevHashExists, newHash));
    } else {
        emit(directoryUnchanged(dirPath));
    }

To me this looks like the code assumes that once an ImportFilesTask is queued, it will run no matter what happens later. That assumption breaks as soon as we allow the scan to be cancelled.

This causes us to save directories in the database as scanned and imported when in truth they are not.

I haven't done any close checking yet, but what I assume happens is that the complete recursiveScan finishes long before the ImportFilesTasks do.

Surprisingly, I see this in its most severe form because I have an SSD. With an HDD only a few tracks are left unscanned.

I'm not sure what a good, easy fix would be. To me it looks like a design flaw, and I don't have a lot of experience with debugging task queues. My current idea would be to keep the recursiveScanTask alive until the ImportFilesTask it started has finished and notified it; only then would the recursiveScanTask be allowed to emit the directoryHashed signal. But that might complicate things way too much, and it sounds like there should be an easier solution.

@rryan any comments?