Let's put everything in context first:

A) The batch analysis of tracks is only performed when the user explicitly presses "Analyze" in the analyze view.
B) This analysis can be stopped if needed by pressing the same button that started it.
C) We are trying to move from a single-threaded analysis to a multithreaded analysis where the CPU resources are maximized (as in, used if available).
D) The "on demand" analysis is the one that happens when a track is loaded into a deck and that track has not been analyzed yet.

If a user does "A" during a live performance, we can only provide a degree of glitch-free response that depends on the OS's ability to prioritize threads. And if that is not enough, the user has option "B". About "C", this is about maximizing performance, so we are interested in using the resources if they are available. It is usual to offer the user the option to change the number of threads to use, in case the detected setting isn't working as well as it should. In this context, we can only think of "D" as a "live performance" case, and the truth is that this case is not changing, because that is one single analysis. I am not proposing to run each analysis type in a different thread, since that requires thread synchronization, and most probably the waveform analysis takes most of the time, so the total analysis time would not improve much. (We might try that at some point, but it's not the main idea in this bug.)

_ _ _

Now, how would I do it? If there isn't any value for our "multithreaded_analysis" setting, or its value is zero, calculate the number of threads and set it. We have two options to make this available to the user:

A) Have an editbox to enter the value, and always use that value, except when it is less than 1 or higher than max-detected-threads, in which case we change it to max-detected-threads and show the user that we changed it.
B) Have an editbox to enter the value and another, non-editable editbox that shows the actual number of threads we are going to use; always use the entered value, except when it is less than 1 or higher than max-detected-threads, in which case we show that we will use max-detected-threads.

_ _ _

The process itself, as I described above, consists of spawning the desired number of threads and having them "consume" an initially filled queue of tracks. All threads work independently, and the queue is guarded so that only one thread at a time can update it. In this context, thread priority was supposed to play a role, since this is by definition a "long-running" task and we still want to continue working on other parts of the application. As RJ Ryan said, we should not use idle priority, since that doesn't work too well (it seems to be geared more toward servers, where such tasks would only run when there is no user interaction). Still, we can give hints with LOW_PRIORITY, NORMAL_PRIORITY and HIGH_PRIORITY. Since we are not necessarily blocking other things, we should not run into problems. (Not sure about that cachereaderworker...)

About the number of threads: the number of threads only matters when they need to do something, since it will take longer for a lower-priority thread to get its share of CPU if other higher-priority threads are active. But then, if a thread doesn't have anything to do (as in it is sleeping, waiting for events, synchronizing, or otherwise not ready to run), we are not making anything go slower by having a low-priority thread run for the whole time-slice.
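To make the above more concrete, here is a minimal Qt/C++ sketch of the shape I have in mind. All of the names (AnalysisQueue, AnalysisWorker, TrackRef, clampThreadCount, startBatchAnalysis) are illustrative placeholders, not existing code: clamp the configured value to [1, max-detected-threads] using QThread::idealThreadCount(), fill a mutex-guarded queue once, then start that many low-priority QThreads that pop tracks until the queue is empty.

    #include <QList>
    #include <QMutex>
    #include <QMutexLocker>
    #include <QQueue>
    #include <QString>
    #include <QThread>

    struct TrackRef { QString location; };  // placeholder for a track handle

    class AnalysisQueue {
      public:
        void fill(const QList<TrackRef>& tracks) {
            QMutexLocker lock(&m_mutex);
            for (const TrackRef& track : tracks) {
                m_queue.enqueue(track);
            }
        }
        // Returns false when the queue is empty; the mutex ensures only one
        // thread at a time can update the queue.
        bool pop(TrackRef* pOut) {
            QMutexLocker lock(&m_mutex);
            if (m_queue.isEmpty()) {
                return false;
            }
            *pOut = m_queue.dequeue();
            return true;
        }
      private:
        QMutex m_mutex;
        QQueue<TrackRef> m_queue;
    };

    class AnalysisWorker : public QThread {
      public:
        explicit AnalysisWorker(AnalysisQueue* pQueue) : m_pQueue(pQueue) {}
      protected:
        void run() override {
            // Per-thread analyzer instances would be created here, once per
            // thread (waveform, BPM, key, gain, ...), then reused per track.
            TrackRef track;
            while (m_pQueue->pop(&track)) {
                analyzeTrack(track);
            }
        }
      private:
        void analyzeTrack(const TrackRef&) { /* run the analyzers */ }
        AnalysisQueue* m_pQueue;
    };

    int clampThreadCount(int configured) {
        const int maxDetected = QThread::idealThreadCount();  // "max-detected-threads"
        if (configured < 1 || configured > maxDetected) {
            return maxDetected;
        }
        return configured;
    }

    void startBatchAnalysis(AnalysisQueue* pQueue, int configuredThreads) {
        const int numThreads = clampThreadCount(configuredThreads);
        for (int i = 0; i < numThreads; ++i) {
            auto* pWorker = new AnalysisWorker(pQueue);
            // A hint, not a guarantee: LowPriority rather than IdlePriority.
            pWorker->start(QThread::LowPriority);
        }
    }

Since the workers only share the queue (and each holds its own analyzers), the mutex around pop() is the only synchronization point.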
The real problem appears when we need an immediate response. An OS, except for interrupts, will not pause a running thread before it has used up its time-slice. Since this is OS-dependent, our prioritized task might need to wait up to a whole time-slice to start (and that's assuming there is no other similarly prioritized task or other kind of lock). Lastly, when I mentioned reducing the number of threads by one if a track is playing, it is only because in that case we know playback is going to require some extra CPU. (And the UI graphics count too.) There is yet another option to play nicely with the user: reduce the number of threads to use whenever we detect an audio xrun, down to a minimum of one thread.

_ _ _

I started looking at how threads are best done with Qt, and the more I read, the more I'm convinced that I can only use "barebones" QThread (just like we do now). We need a per-thread instance of the analyzers (but not a per-task one), we have a constant number of threads, and we want them all to do the same thing, taking as input a synchronized queue of tracks to process. In other words, we are not in a scenario with a list of black-box-like tasks (each carrying everything it needs to accomplish its work) that are handed to a waiting thread which simply tells them they can run.
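And an equally hypothetical sketch of the two throttling rules above (one thread less while a deck is playing, one worker retired per detected xrun, never below one), plus the cooperative stop flag a barebones QThread worker would check between tracks. AnalysisWorker, requestStop() and onAudioXrun() are again made-up names, not existing API.

    #include <QAtomicInt>
    #include <QList>
    #include <QThread>

    class AnalysisWorker : public QThread {
      public:
        void requestStop() { m_stop.storeRelease(1); }
      protected:
        void run() override {
            // Same pop-and-analyze loop as in the sketch above, but checking
            // m_stop.loadAcquire() between tracks and exiting early if set.
        }
      private:
        QAtomicInt m_stop;
    };

    // Start with one thread less while a deck is playing, so playback (and
    // the UI graphics) keep some CPU headroom.
    int initialThreadCount(int clampedThreads, bool deckIsPlaying) {
        return (deckIsPlaying && clampedThreads > 1) ? clampedThreads - 1
                                                     : clampedThreads;
    }

    // Called whenever the sound device reports an xrun: retire one worker,
    // but never drop below a single analysis thread.
    void onAudioXrun(QList<AnalysisWorker*>* pWorkers) {
        if (pWorkers->size() > 1) {
            pWorkers->takeLast()->requestStop();
        }
    }

The stop is cooperative on purpose: a worker only exits between tracks, so we never abort an analysis halfway and never need to forcibly terminate a thread.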