Comment 256 for bug 252094

Bryce Harrington (bryce) wrote : Re: [i965] Poor graphics performance and rendering errors on Intel GM965 system, Ubuntu 8.10

Good god, this bug has gotten far too long in the tooth. 213 comments!

I'll get into some good news and bad news regarding performance, but first some administrivia rantiness.

Okay, here's the deal. This bug report is turning into a mess of different performance-related bugs, and frankly isn't useful for getting any of them solved -- everyone's problems are all intertwined. Some people are seeing issues with -intel, but others are clearly having problems on the 3D side with mesa. Sadly, some specific (perhaps solvable?) issues are getting lost in the noise and as such aren't going to get investigated, at least not here.

The problem is that the title describes a generic symptom, and everyone with a performance-related issue has piled on. This is a good lesson in why it's important to file bugs with precise, specific titles. The poor original reporter is probably lost under a pile of replies! :-)

Honestly, I think the best thing to do here would be to close this bug report and have people file separate ones. But from past experience I know we'd just end up with another generically titled "performance is bad" bug report with a gazillion comments and no clear path to a solution.

So... I may as well leave this one open and include some explanation of how to file a proper performance bug report. Those who take the time to find and read this comment will learn how to file a NEW bug about their issue, with sufficient detail and precision to let us investigate it easily and report it upstream (or fix it in Ubuntu, if there's a way for us to do that).
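(As a quick preview, and assuming you're on a stock Ubuntu install with apport available: the easiest way to start a fresh, properly targeted report is something like

  ubuntu-bug xserver-xorg-video-intel

which gathers the relevant logs and hardware info for you automatically; then give the bug a title naming the specific operation that's slow.)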

Okay, with that out of the way, let me get into the performance stuff. First, let's dispense with the bad news: it is extremely unlikely that these performance problems are going to get fixed in Intrepid.

There are several reasons for this. First, the issues are structural. A lot of pieces of X are in a multi-year process of being rearchitected, both to bring in a number of new features and to lay a foundation for much better performance. Unfortunately these changes can be pretty sizable, and inevitably bring some regression bugs - a risk not worth taking in a stable release.

Second, a lot of the time performance fixes require a pretty hefty amount of study and testing. Some of you may recall we had some pretty severe performance regressions in Hardy during development. It took a focused effort to dig in and figure out how to flip various internal features on and off to tune things up. Often a change that fixed one set of cards would cause problems on another set, so it took a lot of testing (including a lot of welcome involvement from the community) to strike the right balance. The time investment needed for this tuning work is hard to overstate.

Third, and sort of getting back to my original points, even if the above two problems didn't exist, the issues here are not reported with sufficient specificity and detail for us to begin investigating. But this is something you all can help with, and I'll explain how in a follow-up.

Enough bad news, now for the good news. Looking forward, things are definitely getting better. In playing around with Jaunty, Timo and I have both noticed better performance with -intel, and it sounds like others are seeing it too. I'm chalking this up to upstream's structural work in mesa, xserver, libdrm, and all the other X components.

Several people have reported "performance regressions" in Jaunty relating to a drop in glxgears fps. Don't panic - remember how you've always read that "glxgears is not a benchmark"? This is one of those times when you need to not treat it as one. Judge performance using a game, or even just your own senses, instead. glxgears really just measures how fast trivially simple frames can be blitted to the screen, whereas in real 3D apps the bottleneck is in rendering the scene. Monitors have a maximum rate at which they can refresh anyway, and besides, your eye can only perceive up to a certain rate (which I guess is why movies are shot at 24 frames per second). So as a rough rule of thumb: unless you're getting less than 100fps in glxgears, don't worry about what it says - measure with Tremulous or some other 3D game that gives a real workload.
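(If you want to see for yourself how little the glxgears number means, try comparing a synced run against an unsynced one. The vblank_mode environment variable is honored by the Mesa DRI drivers, so on -intel something like this should work:

  $ glxgears                # typically pinned near your monitor's refresh rate
  $ vblank_mode=0 glxgears  # often a wildly higher - and equally meaningless - number

Neither figure tells you much about how a real game will run.)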

Another bit of good news is that the Intel upstream developers currently have architectural-level performance improvements as their top priority. A lot of this is still experimental, and our testing so far has not found it stable enough for regular usage yet. But it's in the pipeline and promises some serious performance benefits. Before you ask: no, it's structural stuff, so it's not really amenable to backporting; we'll get it out to you all as soon as we feel it is stable enough. (If you're up for building stuff from upstream's git branches, please do so! They'd appreciate the feedback, and whatever upstream testing you do will help us all out later on.)
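(For the 2D driver, the rough recipe is the usual autotools dance - the exact URL and branch may differ, so check upstream's own instructions first:

  $ git clone git://anongit.freedesktop.org/xorg/driver/xf86-video-intel
  $ cd xf86-video-intel
  $ ./autogen.sh --prefix=/usr
  $ make
  $ sudo make install   # overwrites the packaged driver; keep the .deb handy to roll back

and similarly for mesa and libdrm.)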

But if you'd rather draw the line at testing Jaunty, I'll explain how to do that in more detail in my next reply, since this one is already far, far too long. (But thanks for reading, if you got this far!)