Comment 18 for bug 854082

RJ Skerry-Ryan (rryan) wrote : Re: Lowband-EQ is creaking

Sorry shanxS, I didn't see your v2 patch. Ramping at the sample granularity rather than buffer granularity seems promising.

I was worried about the performance implications of putting the ramping in SampleUtil. Applying a ramp to each individual sample introduces a loop-carried dependence (a value that changes with each iteration of the loop), and it makes the loop harder to vectorize since every sample is multiplied by a different value.

If the compiler is good enough, it can optimize away the loop dependence as long as it can determine through static analysis that 'a' is a simple function of the number of loop iterations. I'm less optimistic about the compiler being able to vectorize the loop, but I will look at the generated assembly to check.
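
A minimal sketch of the shape that is easiest on the optimizer (illustrative only -- these are not the actual SampleUtil names from the patch): writing the per-sample gain as an affine function of the loop index, rather than accumulating "gain += delta" across iterations, removes the loop-carried dependence, so each iteration is independent of the last.

    static void rampCopyWithGain(float* pDest, const float* pSrc,
                                 float oldGain, float newGain, int numSamples) {
        const float delta = (newGain - oldGain) / numSamples;
        for (int i = 0; i < numSamples; ++i) {
            // The gain depends only on i, not on the previous iteration's value.
            pDest[i] = pSrc[i] * (oldGain + delta * i);
        }
    }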

I added ramping of the other 2 channels to your patch and measured the time difference. Here is what I got:

Debug [Main]: Stat("EngineFilterBlock SampleUtil::copy3WithGain","count=13556,sum=1.13961e+07ns,average=840.665ns,min=507ns,max=47897ns,variance=381644ns^2,stddev=617.773ns")
Debug [Main]: Stat("EngineFilterBlock SampleUtil::rampCopy3WithGain","count=13556,sum=1.68933e+07ns,average=1246.19ns,min=664ns,max=33024ns,variance=377100ns^2,stddev=614.085ns")

If the measurement isn't being thrown off by external factors (e.g. caching), the difference is only about 400ns on average.
Based on this, I'm not too worried about the performance.

Another issue I have is that the v2 patch is not based on stream time; it divides the total distance to change by the buffer size, so the entire ramp completes within a single callback. This works well for larger buffer sizes since the change is spread over more samples, but for small buffer sizes a large enough parameter change could produce a jump just as bad as not ramping at all (i.e. the gain difference between two consecutive samples could exceed the threshold that generates bad frequencies / pops). I wonder if there is an easy way to characterize the maximum per-sample gain change that would not produce distortion? Some rough numbers on the buffer-size sensitivity are below.
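
A back-of-the-envelope illustration (my numbers, not from the patch) of the v2 scheme, where the per-sample step is the total gain change divided by the buffer size:

    #include <cstdio>

    int main() {
        // Worst case: a full-scale parameter jump ramped within one callback.
        const float deltaGain = 1.0f;
        const int bufferSizes[] = {64, 256, 1024, 2048};
        for (int bufferSize : bufferSizes) {
            float stepPerSample = deltaGain / bufferSize;    // v2: ramp spans one buffer
            float rampMs = bufferSize / 44100.0f * 1000.0f;  // ramp length at 44.1 kHz
            printf("buffer=%4d  step/sample=%.5f  ramp=%.2f ms\n",
                   bufferSize, stepPerSample, rampMs);
        }
        return 0;
    }

At 2048 samples the step is ~0.0005 per sample, but at 64 samples it is ~0.016, roughly 30x steeper. A time-based ramp with a fixed per-sample step, continued across callbacks if necessary, would behave the same at any buffer size.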

I have some minor nits about the patch :)
* Try to avoid the C-style convention of declaring all variables at the top of a function and using them later. Instead, declare each variable as close to its first use as possible.
* I see a new include for engine/engineasync.h. I assume that's not relevant to this bug?
* The vars in the constructor should be initialized to 1 (or better yet, to filterpotLow->get()). As written, the first callback ramps from 0 to 1. A minimal sketch follows this list.
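
Roughly what I mean by the last nit (class and member names here are hypothetical stand-ins, not the actual EngineFilterBlock code): seed the ramp state from the control's current value so the first callback has nothing to ramp.

    class ControlStub {
      public:
        explicit ControlStub(float value) : m_value(value) {}
        float get() const { return m_value; }
      private:
        float m_value;
    };

    class FilterBlockStub {
      public:
        // was: m_oldLowGain(0.0f), which makes the first buffer ramp 0 -> 1
        explicit FilterBlockStub(const ControlStub* pFilterpotLow)
                : m_oldLowGain(pFilterpotLow->get()) {}
      private:
        float m_oldLowGain;
    };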

Attached patch has these changes plus the setup I used for timing the change.