increase BPM tap filter length

Bug #1882776 reported by ronso0
This bug affects 1 person
Affects: Mixxx
Status: Confirmed
Importance: Low
Assigned to: Unassigned
Milestone: none

Bug Description

Currently the tap filter sample list is limited to 5 entries.
If we increased that to, say, 16 or more, the detected BPM would get much more accurate, and hitting cur_pos then would create a somewhat usable beat grid pretty quickly.

Code is here:
https://github.com/mixxxdj/mixxx/blob/master/src/engine/controls/bpmcontrol.cpp#L33
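
A minimal sketch of the kind of change being asked for, assuming the tap filter length is a compile-time constant defined near the linked line (the identifier below is illustrative, not necessarily the name used in bpmcontrol.cpp):

// Number of tap intervals the BPM tap filter keeps (sketch; illustrative name).
// Raising this from 5 to e.g. 16 lets the averaging filter see more taps
// before it settles on a BPM value.
constexpr int kTapFilterLength = 16; // was 5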

Changed in mixxx:
status: New → Confirmed
tags: added: easy
ronso0 (ronso0)
Changed in mixxx:
importance: Undecided → Low
assignee: nobody → ronso0 (ronso0)
Revision history for this message
ronso0 (ronso0) wrote :

It seems a bit more complicated than I first thought.
The accuracy doesn't improve with a longer filter list, which may be due to the nature of the InterQuartileMean filter.

I think for long filter lists (8+ taps) we get better results if we take the time between the first and last tap and divide it by (number of taps) - 1, instead of dropping values considered invalid like the IQM does (assuming the user taps every beat).

So to improve tap results we'd need
* the IQM for ~4-8 taps = suitable if the momentary BPM matters, e.g. for hand-played music
* a regular mean filter for 8+ taps = close to perfect results for constant-tempo music
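
A minimal sketch of the span-based calculation proposed above, assuming the taps are collected as timestamps in seconds (the function name and signature are hypothetical, not part of the Mixxx code):

#include <vector>

// Span-based tap BPM: time between first and last tap divided by
// (number of taps - 1), assuming the user taps every beat.
// Returns 0.0 if there are not enough taps to form an interval.
double bpmFromTapSpan(const std::vector<double>& tapTimesSeconds) {
    if (tapTimesSeconds.size() < 2) {
        return 0.0;
    }
    const double spanSeconds = tapTimesSeconds.back() - tapTimesSeconds.front();
    if (spanSeconds <= 0.0) {
        return 0.0;
    }
    const double beats = static_cast<double>(tapTimesSeconds.size()) - 1.0;
    return 60.0 * beats / spanSeconds; // beats per minute
}

Only the first and last tap affect this result, which is why longer tapping would converge toward the true BPM for constant-tempo tracks.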

What do you think?

Be (be.ing)
Changed in mixxx:
milestone: 2.3.0 → none
Revision history for this message
ronso0 (ronso0) wrote :

Any comments on the proposed change?

Revision history for this message
Ferran Pujol (ferranpujol) wrote :

The interquartile mean effectively takes into account only half of the data points you give it. From this point of view, and assuming a data set with no outliers, the interquartile mean needs twice as many data points to get the same accuracy as the regular mean. But with enough data points, it is just as accurate.

The regular mean is very sensitive to outliers. In this case, if you miss a single tap, or even just have bad timing on one or two taps, you almost surely get the BPM wrong. This is the problem the interquartile mean solves.

If increasing its length alone doesn't improve the behaviour of the filter, I'd try to drop fewer data points, i.e. generalize the current interquartile mean to a truncated mean where we can choose a discard threshold other than the quartile.
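
A minimal sketch of that generalization, assuming the tap intervals are available as a plain vector (the helper name and signature are hypothetical, not Mixxx API). trim = 0.25 roughly reproduces the interquartile mean; smaller values discard fewer points:

#include <algorithm>
#include <vector>

// Truncated (trimmed) mean: sort the values and discard a fraction
// `trim` (0.0 <= trim < 0.5) from each end before averaging.
double truncatedMean(std::vector<double> values, double trim) {
    if (values.empty()) {
        return 0.0;
    }
    std::sort(values.begin(), values.end());
    const std::size_t drop = static_cast<std::size_t>(values.size() * trim);
    const std::size_t first = drop;
    const std::size_t last = values.size() - drop; // one past the end
    if (first >= last) {
        // Degenerate trim; fall back to the median element.
        return values[values.size() / 2];
    }
    double sum = 0.0;
    for (std::size_t i = first; i < last; ++i) {
        sum += values[i];
    }
    return sum / static_cast<double>(last - first);
}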

Revision history for this message
ronso0 (ronso0) wrote :

Hmm, 'regular filter' is probably not the correct term. Maybe my proposal wasn't clear enough, sorry.
I mean BPM = (number of taps - 1) / (total tap duration), which is just a simple division; I don't know if that can be called a filter.

Tapping along a 120 BPM track for ~30s gives ~60 beats.
We take the total time and the total number of taps.
Outliers don't matter because the individual intervals don't matter; only the first and last tap need to be somewhat on-beat. The longer I tap, the closer the result gets to 120 BPM.
I never managed to get close to the actual BPM with the IQM filter; it floats around 120 with the same (im)precision no matter how long I tap.
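
As an illustrative calculation: 61 taps spanning exactly 30 s give 60 * (61 - 1) / 30 = 120 BPM, and even if the first and last taps are each 50 ms off, the span changes by at most 0.1 s, so the estimate stays within about 0.4 BPM of 120.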

Revision history for this message
Ferran Pujol (ferranpujol) wrote :

Your proposed method is equivalent to the mean of the time intervals between taps.

However, your comment made me realize something. If you model the taps as independently, normally distributed time intervals (the intervals between adjacent taps), then whenever you tap one beat too late, all subsequent taps will also be too late.

I think the right model in this case is to think of each tap as a normally distributed error with respect to the ideal beat position. This means that each tap's accuracy is independent of the other taps. If an outlier happens, for example one tap is too late, the next tap will still be somewhat on time, so the interval between them is shorter and compensates for the longer previous interval. I think this models reality better.

From this point of view, the IQM might not be the right tool here. I'm not sure about the regular mean either, because it is over-sensitive to the accuracy of the first and last tap: it doesn't matter how well you tap the beats in between; if you are not accurate on either the first or the last, the BPM will be off.

Maybe we can draw some code from the constant bpm analyzer to solve this.
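
To make the two models concrete, here is a small, self-contained toy simulation of the second model (independent jitter around the ideal beat positions), comparing the span-based estimate (equivalent to the plain mean of intervals) with the interquartile mean of the intervals. Everything in it, names, jitter size, and seed, is illustrative and not part of Mixxx:

#include <algorithm>
#include <iostream>
#include <random>
#include <vector>

int main() {
    const double trueBpm = 120.0;
    const double beatSeconds = 60.0 / trueBpm;
    const int numTaps = 61;             // ~30 s of tapping
    const double jitterSeconds = 0.03;  // 30 ms standard deviation per tap
    std::mt19937 rng(42);
    std::normal_distribution<double> jitter(0.0, jitterSeconds);

    // Taps = ideal beat position + independent error per tap (second model).
    std::vector<double> taps(numTaps);
    for (int i = 0; i < numTaps; ++i) {
        taps[i] = i * beatSeconds + jitter(rng);
    }

    // Span-based estimate (== mean of the intervals between adjacent taps).
    const double spanBpm =
            60.0 * (numTaps - 1) / (taps.back() - taps.front());

    // Interquartile mean of the intervals between adjacent taps.
    std::vector<double> intervals;
    for (int i = 1; i < numTaps; ++i) {
        intervals.push_back(taps[i] - taps[i - 1]);
    }
    std::sort(intervals.begin(), intervals.end());
    const std::size_t drop = intervals.size() / 4;
    double sum = 0.0;
    for (std::size_t i = drop; i < intervals.size() - drop; ++i) {
        sum += intervals[i];
    }
    const double meanInterval =
            sum / static_cast<double>(intervals.size() - 2 * drop);
    const double iqmBpm = 60.0 / meanInterval;

    std::cout << "span-based: " << spanBpm << " BPM, "
              << "IQM of intervals: " << iqmBpm << " BPM\n";
    return 0;
}

Under this model the span-based estimate typically ends up closer to the true tempo as the number of taps grows, which matches the observation above that longer tapping did not help the IQM.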

Revision history for this message
Ferran Pujol (ferranpujol) wrote :

This picture illustrates the two models I described above.

The top figure represents the model where the time intervals are normally distributed. You can see that when an outlier happens, the next beats are off, because intervals are independently distributed in this model, thus the error is not compensated.

In the second figure, we see that when an outlier happens, the next interval is shorter, because here what is independently distributed is not the interval, but the deviation of each beat from the ideal.

It is clear that in the second model (the right one) there's no point in dropping outliers, because they are compensated.

ronso0 (ronso0)
Changed in mixxx:
assignee: ronso0 (ronso0) → nobody
Revision history for this message
Swiftb0y (swiftb0y) wrote :

Mixxx now uses GitHub for bug tracking. This bug has been migrated to:
https://github.com/mixxxdj/mixxx/issues/10010

lock status: Metadata changes locked and limited to project staff