Poor performance with WebKit on yakkety with Intel modesetting enabled

Bug #1615871 reported by Jeremy Bicha on 2016-08-23
This bug affects 10 people
Affects               Status        Importance  Assigned to  Milestone
WebKit                Fix Released  Medium
xf86-video-intel      Unknown       High
webkit2gtk (Ubuntu)                 High        Unassigned
webkitgtk (Ubuntu)                  Undecided   Unassigned
xorg-server (Ubuntu)                Undecided   Unassigned

Bug Description

https://tjaalton.wordpress.com/2016/07/23/intel-graphics-gen4-and-newer-now-defaults-to-modesetting-driver-on-x/

There are reports that WebKitGTK performance is much worse with Intel's modesetting driver (which was enabled a month ago for yakkety).

Upstream bug attached.

With Epiphany/WebKitGTK+, scrolling on many pages (such as any search results page on duckduckgo.com) is extremely slow and choppy. Launching epiphany with LIBGL_DRI3_DISABLE=1 causes these pages to work perfectly.

This is a critical bug for us since DuckDuckGo is our default search engine.

mesa-dri-drivers-10.3-1.20140927.fc21.x86_64
kernel-3.17.0-300.fc21.x86_64

Does Mesa commit f7a355556ef5fe23056299a77414f9ad8b5e5a1d help?

(It works around DRI3 sync slowdown, but there are also other slowdown bugs for DRI3 here in FDO bugtracker, just search for "DRI3".)

Does Epiphany/WebKitGTK+ use GL for something else besides WebGL, or do these problematic pages use WebGL? And how is Mesa involved in this (DRI3 is mainly an X server side thing)?

(In reply to Eero Tamminen from comment #1)
> Does Mesa commit f7a355556ef5fe23056299a77414f9ad8b5e5a1d help?

I'll try to test this as soon as I can. Sorry for the delay.

> (It works around DRI3 sync slowdown, but there are also other slowdown bugs
> for DRI3 here in FDO bugtracker, just search for "DRI3".)
>
> Does Epiphany/WebKitGTK+ uses GL for something else besides WebGL

Yes, most notably for accelerated compositing:

"alex: I mean, accelerated compositing is activated by default and probably is creating some graphic layers that are composited using GL in that webpage"

> , or do
> these problematic pages use WebGL?

We actually have WebGL disabled by default.

> Or how Mesa is involved in this (DRI3 is
> mainly X server side thing)?

It might not be a mesa bug; I just filed against mesa because bug #81623 was filed against mesa.

(In reply to Michael Catanzaro from comment #2)
> (In reply to Eero Tamminen from comment #1)
> > Does Mesa commit f7a355556ef5fe23056299a77414f9ad8b5e5a1d help?
>
> I'll try to test this as soon as I can. Sorry for the delay.

Ah drat. This won't happen anytime soon, sorry. :/

(In reply to Michael Catanzaro from comment #3)
> > I'll try to test this as soon as I can. Sorry for the delay.
>
> Ah drat. This won't happen anytime soon, sorry. :/

Ok. What are your X server and X Intel driver versions?

xorg-x11-server-Xorg-1.16.1-1.fc21
xorg-x11-drv-intel-2.99.916-2.fc21

I've also just now tested mesa-dri-drivers-10.3.2-1.20141028.fc21, which surely includes the commit you asked me to test. Unfortunately the issue is still present.

We're also getting reports that YouTube videos are extremely choppy unless LIBGL_DRI3_DISABLE is used.

For the sanity of anyone trying to debug this: I'm pushing out a new version of Epiphany for Fedora that sets this environment variable at the beginning of main, since we really have no other choice at this point, so don't use Fedora Epiphany for testing this bug.

Created attachment 109481
xorg log after backing off to Oct 1 commit

Looks to me like it is something in the xf86-video-intel code.

I re-compiled the driver to commit de7185bbf48ca2f617466b98328d0fdae4df1b44 from October 1st, and the issue for me is gone.

I think something happened between that one and 9a5ca59d2b7b209e6f56dd3f94d4ae6f06e1ecdc on the 15th of October -- that is when I started having this issue on my SNB Intel chip.

(In reply to Matt Hessel from comment #8)
> looks to me like it is something with the xf86-video-intel code.
>
> I re-compiled the driver to commit de7185bbf48ca2f617466b98328d0fdae4df1b44
> from October 1st, and the issue for me is gone.
>
> I think something happened between that one and
> 9a5ca59d2b7b209e6f56dd3f94d4ae6f06e1ecdc on the 15th of October -- that is
> when I started having this issue on my SNB Intel chip.

Hm, I'm not sure, since I first noticed the issue in Fedora 21 on September 13. That's not to say that the issue was introduced around that time: that was just the date I first tested Fedora 21.

I think there are multiple issues regarding DRI3, not limited to the sandboxing stuff in Chrome (which has been an issue for me longer than this). But I started having screen update issues with Guake in a terminal in October. When typing, it would update the cursor and the letters much like running WordPerfect on my old PC in 1989 (type four digits, and it shows them to you after the third or fourth).

On top of that, I wasn't able to get chrome or chromium to run in Gnome3 at all at that point.

Backing the driver out to this version eliminates these issues for me. The other way I could get it working was to disable DRI3 in xorg.conf.

The terminal acts normal again, and Chrome will run without (too much) stupidity.
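For reference, DRI3 can be disabled per device in the X configuration; a minimal sketch (the file path and Identifier are illustrative, and `Option "DRI" "2"` is the xf86-video-intel option that falls back to DRI2):

```
# /etc/X11/xorg.conf.d/20-intel.conf (illustrative path)
Section "Device"
    Identifier "Intel Graphics"
    Driver     "intel"
    # Force the DRI2 code path instead of DRI3
    Option     "DRI" "2"
EndSection
```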

(In reply to Matt Hessel from comment #10)
> I think there are multiple issues regarding DRI3, not limited to the
> sandboxing stuff in Chrome (which has been an issue for me longer than
> this.) But I started having screen update issues with Guake in a terminal
> in October. When typing it would update the cursor and the letters similar
> to running Wordperfect on my old PC in 1989. (type 4 digits, and it shows
> them to you after the third or fourth)

Does your X server use latest X intel driver and latest of the related extensions (e.g. present)? Is your compositor also using latest versions?

Does this still happen with current versions of the graphics stack, in particular libxcb 1.11.1 or newer?

(In reply to Michel Dänzer from comment #12)
> Does this still happen with current versions of the graphics stack, in
> particular libxcb 1.11.1 or newer?

An Arch user complained to me about this bug just last month, and the LIBGL_DRI3_DISABLE=1 trick "fixed" the issue for him. I see libxcb 1.11.1 has been in Arch since September, so it seems extremely likely that he was using it at the time.

The issue is 100% reproducible on any DuckDuckGo search results page (e.g. [1]) when DRI3 is enabled on a wide variety of hardware. However, my distro of choice, Fedora, has disabled DRI3, and therefore I am personally no longer able to easily reproduce.

[1] https://duckduckgo.com/?q=freedesktop&t=epiphany&ia=about

Created attachment 121508
Envdump of epiphany on an up-to-date ArchLinux (2016-02-04), not showing the problem

Hello Michael,

Sorry for the long delay. I just installed gnome and epiphany on my machine and failed to reproduce any slow scrolling, even with the linked page. I confirmed that I am using DRI3 using env_dump (will attach the full report). How can I get in touch with the user who reported this issue?

Also, as I was trying to reproduce the choppiness of youtube videos, I only managed to get black videos on both Youtube and Vimeo. When displaying the "stats for nerds", it says the resolution is 0x0. The sound was playing perfectly and the preview in the progress bar also worked as expected. Is this a known bug?

(In reply to Martin Peres from comment #14)
> How can I get in touch with the user who reported this issue?

I've responded with contact info via email.

> Also, as I was trying to reproduce the choppiness of youtube videos, I only
> managed to get black videos on both Youtube and Vimeo. When displaying the
> "stats for nerds", it says the resolution is 0x0. The sound was playing
> perfectly and the preview in the progress bar also worked as expected. Is
> this a known bug?

No, that is not a known bug. YouTube works fine for me (on Fedora 23 with DRI2). Bug reports on bugzilla.webkit.org would be welcome (prefix the title with [GStreamer] and select component: Media Elements).

I have never seen Vimeo work before though; it seems to require an encumbered codec.

(In reply to Michael Catanzaro from comment #15)
> (In reply to Martin Peres from comment #14)
> > How can I get in touch with the user who reported this issue?
>
> I've responded with contact info via email.

Thanks, but why not continue here? If there is a privacy issue, I guess I can just keep people up to date.

>
> > Also, as I was trying to reproduce the choppiness of youtube videos, I only
> > managed to get black videos on both Youtube and Vimeo. When displaying the
> > "stats for nerds", it says the resolution is 0x0. The sound was playing
> > perfectly and the preview in the progress bar also worked as expected. Is
> > this a known bug?
>
> No, that is not a known bug. YouTube works fine for me (on Fedora 23 with
> DRI2). Bug reports on bugzilla.webkit.org would be welcome (prefix the title
> with [GStreamer] and select component: Media Elements).

Thanks! I will try again tomorrow morning and have a closer look at the logs, there may be something wrong with my setup. If I cannot find anything, I will report the bug.

>
> I have never seen Vimeo work before though; it seems to require an
> encumbered codec.

Ok, I should have tried dailymotion then.

Created attachment 121528
output of running epiphany with LIBGL_DEBUG=verbose

I believe I am that arch linux user. Here's the logs from running:

LIBGL_DEBUG=verbose epiphany

Created attachment 121529
glxinfo from bastianilso

..and here's my glxinfo.

Created attachment 121530
bastianilso's Xorg.0.log from February 4th

..and finally my Xorg.0.log (found in ~/.local/share/xorg/).

Before testing, I made a full system update.

Thanks Bastian! I have easy access to a Broadwell GT2, let's hope I can reproduce the issue.

In the meantime, could you install xf86-video-intel and try to reproduce the issue with it? Right now, you are using the modesetting driver.

(In reply to Martin Peres from comment #20)
> Thanks Bastian! I have easy access to a Broadwell GT2, let's hope I can
> reproduce the issue.
>
> In the mean time, could you try installing xf86-video-intel and try to
> reproduce the issue with it? Right now, you are using the modesetting driver.

Well, look no further, this is your issue. I tried using the modesetting driver and got absolute garbage out. Stale rendering, extreme slowness. Why didn't I think of this before...

Anyway, will try to see if I can fix this issue. Seems like I will have a happy time in glamor and mesa's code tomorrow.

Thanks, because you are the first one who finally answered me and provided me with real information.

(In reply to Martin Peres from comment #21)
> I tried using the modesetting driver and got absolute garbage out. Stale
> rendering, extreme slowness. Why didn't I think of this before...
>
> Anyway, will try to see if I can fix this issue. Seems like I will have a
> happy time in glamor and mesa's code tomorrow.

Note that I couldn't reproduce this problem with xf86-video-ati using glamor. I guess the problem could be in the Mesa driver, but the modesetting driver seems more likely.

(In reply to Michel Dänzer from comment #22)
> (In reply to Martin Peres from comment #21)
> > I tried using the modesetting driver and got absolute garbage out. Stale
> > rendering, extreme slowness. Why didn't I think of this before...
> >
> > Anyway, will try to see if I can fix this issue. Seems like I will have a
> > happy time in glamor and mesa's code tomorrow.
>
> Note that I couldn't reproduce this problem with xf86-video-ati using
> glamor. I guess the problem could be in the Mesa driver, but the modesetting
> driver seems more likely.

Well, the problem is likely limited to Intel. We will likely need a patch like this one[1] until we get explicit fencing.

[1] http://cgit.freedesktop.org/xorg/driver/xf86-video-intel/commit/?id=fc984e8953d61901b255422c8f56eb79a2dd2a28

(In reply to Michel Dänzer from comment #22)
> (In reply to Martin Peres from comment #21)
> > I tried using the modesetting driver and got absolute garbage out. Stale
> > rendering, extreme slowness. Why didn't I think of this before...
> >
> > Anyway, will try to see if I can fix this issue. Seems like I will have a
> > happy time in glamor and mesa's code tomorrow.
>
> Note that I couldn't reproduce this problem with xf86-video-ati using
> glamor. I guess the problem could be in the Mesa driver, but the modesetting
> driver seems more likely.

So, since the NVIDIA GTX 750 is not supported by Nouveau and the modesetting driver is used instead, I decided to try it out, only to find that it has the exact same issue as Intel.

So this does not entirely rule out a bug in Mesa, but a bug in the modesetting driver is more likely. Time to check out its code; at least the issue is super easy to reproduce :)

(In reply to Martin Peres from comment #20)
> Thanks Bastian! I have easy access to a Broadwell GT2, let's hope I can
> reproduce the issue.
>
> In the mean time, could you try installing xf86-video-intel and try to
> reproduce the issue with it? Right now, you are using the modesetting driver.

I do have xf86-video-intel installed. Perhaps I need to do some manual intervention to disable using the modesetting driver?

I've seen this bug on several Linux distros (including Fedora and, I believe, openSUSE and Ubuntu) and I fear it's quite unlikely I was using the modesetting driver in those cases. But looking at the time I reported this bug compared to the time of the xf86-video-intel commit in comment #23, perhaps the bug used to affect xf86-video-intel but no longer does?

(In reply to Bastian Ilso from comment #25)
> (In reply to Martin Peres from comment #20)
> > Thanks Bastian! I have easy access to a Broadwell GT2, let's hope I can
> > reproduce the issue.
> >
> > In the mean time, could you try installing xf86-video-intel and try to
> > reproduce the issue with it? Right now, you are using the modesetting driver.
>
> I do have xf86-video-intel installed. Perhaps I need to do some manual
> intervention to disable using the modesetting driver?

I would suggest reviewing your configuration in /etc/X11/xorg.conf.d/ and /usr/share/X11/xorg.conf.d/.
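As a sketch of what that configuration can look like (file name and Identifier are illustrative), a Device section in /etc/X11/xorg.conf.d/ makes the server pick xf86-video-intel over the modesetting driver:

```
# /etc/X11/xorg.conf.d/20-intel.conf (illustrative path)
Section "Device"
    Identifier "Intel Graphics"
    Driver     "intel"
EndSection
```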

(In reply to Michael Catanzaro from comment #26)
> I've seen this bug on several Linux distros (including Fedora and, I
> believe, openSUSE and Ubuntu) and I fear it's quite unlikely I was using the
> modesetting driver in those cases. But looking at the time I reported this
> bug compared to the time of the xf86-video-intel commit in comment #23,
> perhaps the bug used to affect xf86-video-intel but no longer does?

For sure, Intel was affected. I will try to revert the patch and see if I can reproduce the same behaviour as the modesetting driver. If so, that will make it clear what I need to do for the modesetting driver.

I started reading the code of glamor and the modesetting driver, let's hope I can pull this off. Otherwise, I will contact its original developers for help.

I think I bumped into this today. Running gentoo and this package set:

x11-drivers/xf86-video-intel-2.99.917_p20160218 USE="dri3 uxa" (so GLAMOR, not SNA)
media-libs/mesa-11.0.6 USE=dri3
www-client/google-chrome-48.0.2564.116_p1
x11-base/xorg-server-1.18.1 USE=glamor
x11-wm/i3-4.11

Hardware: DELL E7440 Haswell, with HDMI monitor directly connected, DP monitor connected through dock port.

Strange thing compared to previous comments is that Chrome slows down to a crawl *only* when moved to the third, DP monitor. Everything works OK when chrome is sent to laptop eDP panel, or HDMI monitor.

Launching with

$ LIBGL_DRI3_DISABLE=1 google-chrome-stable

successfully works around the problem, so seems to indicate this bug is the right place to be. Let me know if there's another, possibly more accurate bug available.

What's the possible next step here?

(In reply to Leho Kraav (:macmaN :lkraav) from comment #29)
> I think I bumped into this today. Running gentoo and this package set:
>
> x11-drivers/xf86-video-intel-2.99.917_p20160218 USE="dri3 uxa" (so GLAMOR,
> not SNA)
> media-libs/mesa-11.0.6 USE=dri3
> www-client/google-chrome-48.0.2564.116_p1
> x11-base/xorg-server-1.18.1 USE=glamor
> x11-wm/i3-4.11
>
> Hardware: DELL E7440 Haswell, with HDMI monitor directly connected, DP
> monitor connected through dock port.
>
> Strange thing compared to previous comments is that Chrome slows down to a
> crawl *only* when moved to the third, DP monitor. Everything works OK when
> chrome is sent to laptop eDP panel, or HDMI monitor.
>
> Launching with
>
> $ LIBGL_DRI3_DISABLE=1 google-chrome-stable
>
> successfully works around the problem, so seems to indicate this bug is the
> right place to be. Let me know if there's another, possibly more accurate
> bug available.
>
> What's the possible next step here?

Please send me your Xorg logs. I want to make sure you really are using the intel driver and not the modesetting driver.

Note that Chrome does not use WebKitGTK+ and has a completely different graphics architecture; might be worth reporting a second bug.

Hi guys,

I just wanted to add that this happened to me in the last few days after an Intel driver update on Debian testing (stretch) running the current GNOME 3.20 desktop.

If I log out and switch to a Wayland session, the bug completely disappears and life is back to normal fast scrolly land.

This is on a ThinkPad T420 with Intel graphics (3000 series, I believe).

(In reply to fakefur from comment #32)
> hi guys
>
> i just wanted to add that this happened to me in the last few days after an
> intel driver update on debian testing (stretch) running the current gnome
> 3.20 desktop

Yeah, it's due to https://tjaalton.wordpress.com/2016/07/23/intel-graphics-gen4-and-newer-now-defaults-to-modesetting-driver-on-x/

(In reply to Michael Catanzaro from comment #31)
> Note that Chrome does not use WebKitGTK+ and has a completely different
> graphics architecture; might be worth reporting a second bug.

Oh, and to complicate matters further, we're switching to a new graphics architecture in WebKitGTK+ 2.14.0, which is available now for testing in WebKitGTK+ 2.13.4. It might cause this bug to occur all of the time (because accelerated compositing mode is now always used), or never (who knows!), but it should definitely no longer be limited to a small subset of web pages.

(In reply to Michael Catanzaro from comment #34)
> Oh and to complicate matters further, we're switching to a new graphics
> architecture in WebKitGTK+ 2.14.0, which is available now for testing in
> WebKitGTK+ 2.13.4. It might cause this bug to occur all of the time (because
> accelerated compositing mode is now used always), or never (who knows!)

It now occurs all of the time.

I've posted a possible fix to xorg-devel; this is due to the offscreen rendering getting throttled because the driver is not able to link it to a CRTC.

I'm not sure if the fix I posted is the full answer.

00:46 < airlied> keithp: dri3/present question
00:46 -!- ofourdan [~ofourdan@107-2.ar.fundp.ac.be] has quit [Ping timeout: 260 seconds]
00:46 < airlied> keithp: epiphany/webkit appears to be drawing offscreen
00:46 < keithp> a fine plan
00:46 < airlied> and currently -modeseting returned no crtc for that
00:46 -!- ofourdan [~ofourdan@107-2.ar.fundp.ac.be] has joined #xorg-devel
00:46 < airlied> so we ended up throttling to the 1s fake crtc
00:47 < keithp> oh, 'offscreen' and not to a pixmap?
00:47 < airlied> yup offscreen not a pixmap
00:47 < keithp> wtf?
00:47 < airlied> I can say bong :)
00:47 < keithp> well, sucks to do something stupid?
00:47 < airlied> yes appears to be a window at +2000 or something
00:47 < keithp> and so what would they like us to do?
00:48 < airlied> "x = 2881,
00:48 < airlied> y = 0, width = 2910, height = 1783"
00:48 < airlied> well -amdgpu didn't hit the problem I think by accident, as it alwasys picked the primary crtc no matter what
00:48 < airlied> if nothing else fit
00:48 < keithp> are they doing they're own compositing or something?
00:49 < airlied> it's one of those multi-process rendering things
00:49 < airlied> web browser and per-process web renderer
00:49 < keithp> sure, which is obviously a fine plan
00:49 < keithp> how are they capturing those pixels then?
00:49 < airlied> not sure, how they get them into the final image
00:49 < airlied> must be compositing them somehow I suppose
00:50 < keithp> I assume it's a deeply nested child window and they're doing their own compositing
00:50 < keithp> So, they can't use a pixmap because GL sucks, I assume
00:50 < airlied> yeah most likely a GL suckage
00:50 < airlied> as they are defintely swapbuffersing
00:52 < airlied> but yeah I'd just think we need to standardise the response to this behaviour :)
00:52 < keithp> I already did
00:52 < airlied> rather than luck of the driver maintainer draw
00:52 < keithp> anyone using DRI3 will get one frame per second
00:53 < airlied> so that's likely a big change from DRI2 behaviour
00:53 < keithp> DRI2 behaviour used to be unthrottled entirely
00:53 < keithp> you'd get infinite FPS
00:54 < keithp> which kinda sucked when the screen saver fired and your CPU/GPU utilization went to 100%
00:54 < airlied> I suppose GLX specifies nothing useful either

(In reply to Dave Airlie from comment #37)
> 00:50 < keithp> So, they can't use a pixmap because GL sucks, I assume
> 00:50 < airlied> yeah most likely a GL suckage

[Citation needed]

Why exactly can't they use pixmaps for this?

> 00:53 < keithp> DRI2 behaviour used to be unthrottled entirely
> 00:53 < keithp> you'd get infinite FPS
> 00:54 < keithp> which kinda sucked when the screen saver fired and your
> CPU/GPU utilization went to 100%

Actually, the DRI2 behaviour depends on the driver. The -ati/amdgpu drivers probably synchronize to the same CRTC as with DRI3, and extrapolate the refresh timings using timers while the CRTC is off.

01:24 < Kayden> seems like they could have also used the swap control extensions to stop vsync'ing themselves... :/

(In reply to Dave Airlie from comment #37)
> 00:46 < airlied> keithp: dri3/present question
> 00:46 -!- ofourdan [~ofourdan@107-2.ar.fundp.ac.be] has quit [Ping timeout:
> 260 seconds]
> 00:46 < airlied> keithp: epiphany/webkit appears to be drawing offscreen
> 00:46 < keithp> a fine plan
> 00:46 < airlied> and currently -modeseting returned no crtc for that
> 00:46 -!- ofourdan [~ofourdan@107-2.ar.fundp.ac.be] has joined #xorg-devel
> 00:46 < airlied> so we ended up throttling to the 1s fake crtc
> 00:47 < keithp> oh, 'offscreen' and not to a pixmap?
> 00:47 < airlied> yup offscreen not a pixmap
> 00:47 < keithp> wtf?
> 00:47 < airlied> I can say bong :)
> 00:47 < keithp> well, sucks to do something stupid?
> 00:47 < airlied> yes appears to be a window at +2000 or something
> 00:47 < keithp> and so what would they like us to do?
> 00:48 < airlied> "x = 2881,
> 00:48 < airlied> y = 0, width = 2910, height = 1783"

Yes, this is because we use the screen width + 1 to position our window offscreen:

WidthOfScreen(screen) + 1, 0, 1, 1,

and it turns out that simply using -1, -1 as coordinates fixes the problem.

This is becoming the default in several Linux distributions and it makes WebKitGTK+ unusable in accelerated compositing mode, which is now always enabled since we switched to the threaded compositor. The problem seems to be an optimization of the Intel driver for windows that are offscreen, and our redirected window is always positioned at ScreenWidth + 1, 0.

Created attachment 285205
Patch

(In reply to Carlos Garcia Campos from comment #40)
> Yes, this is because we use the screen width + 1 to position our window
> offscreen:
>
> WidthOfScreen(screen) + 1, 0, 1, 1,

FWIW, WidthOfScreen is bad anyway because that's just how wide the screen happened to be when the X11 display connection was established. The screen can be widened after that via the RandR extension, in which case the window would become visible.

> and it turns out that simply using -1, -1 as coordinates fixes the problem.

Weird, not sure why that avoids the problem; maybe there's an off-by-one bug somewhere which causes the window to be incorrectly considered on-screen. I wouldn't recommend relying on that. Also, again I'm not sure that negative coordinates can never be visible.

So, why does this code use a window instead of a pixmap?

Assuming a window is really the only possibility, (why) can't it set the swap interval to 0 via one of the GLX extensions for this, to prevent SwapBuffers operations from getting throttled?

I was tempted to r+ but it looks like this is an undesirable side effect, so I'd prefer not to rely on that. From the freedesktop bug:

"Weird, not sure why that avoids the problem; maybe there's an off-by-one bug somewhere which causes the window to be incorrectly considered on-screen. I wouldn't recommend relying on that. Also, again I'm not sure that negative coordinates can never be visible."

Also

"Assuming a window is really the only possibility, (why) can't it set the swap interval to 0 via one of the GLX extensions for this, to prevent SwapBuffers operations from getting throttled?"

Can't we do this instead?

(In reply to comment #2)
> I was tempted to r+ but it looks like this is an undesirable side effect, so
> I'd prefer not to rely on that. From the freedesktop bug:
>
> "Weird, not sure why that avoids the problem; maybe there's an off-by-one
> bug somewhere which causes the window to be incorrectly considered
> on-screen. I wouldn't recommend relying on that. Also, again I'm not sure
> that negative coordinates can never be visible."

Yes, but he also said that using WidthOfScreen is bad in any case, so if the patch fixes that and works as a workaround, we could land it anyway and then continue working on a better solution, with the workaround keeping WebKitGTK+ usable in the meantime.

> Also
>
> "Assuming a window is really the only possibility, (why) can't it set the
> swap interval to 0 via one of the GLX extensions for this, to prevent
> SwapBuffers operations from getting throttled?"
>
> Can't we do this instead?

I guess, I'm not a graphics expert, and I don't even know if we really need to use a window or not. I'll take a look at the swap interval thing in any case.

(In reply to Michel Dänzer from comment #41)
> (In reply to Carlos Garcia Campos from comment #40)
> > Yes, this is because we use the screen width + 1 to position our window
> > offscreen:
> >
> > WidthOfScreen(screen) + 1, 0, 1, 1,
>
> FWIW, WidthOfScreen is bad anyway because that's just how wide the screen
> happened to be when the X11 display connection was established. The screen
> can be widened after that via the RandR extension, in which case the window
> would become visible.

So, I guess using -1, -1 is still a good idea even if it fixes the problem by coincidence.

> > and it turns out that simply using -1, -1 as coordinates fixes the problem.
>
> Weird, not sure why that avoids the problem; maybe there's an off-by-one bug
> somewhere which causes the window to be incorrectly considered on-screen. I
> wouldn't recommend relying on that. Also, again I'm not sure that negative
> coordinates can never be visible.

Not sure it's an off-by-one; using -100, -100 also works. Yes, that's more a workaround to make WebKitGTK+ usable again while we find the right solution.

>
> So, why does this code use a window instead of a pixmap?

I don't really know; I'm not a graphics expert.

> Assuming a window is really the only possibility, (why) can't it set the
> swap interval to 0 via one of the GLX extensions for this, to prevent
> SwapBuffers operations from getting throttled?

I tried using glXSwapIntervalSGI(0); after glXMakeCurrent but didn't help.

(In reply to comment #3)
> Yes, but he also said that using WidthOfScreen is bad in any case

True. r=me on that basis, but I recommend giving the upstream bug a couple days to settle before committing, now that the Xorg folks are actively looking into the issue.

We also should have an answer to their question as to why we use an offscreen window instead of a pixmap, which is clearly their recommendation.

CCing folks who might possibly have an answer to the question in comment #4.

(In reply to comment #2)
> I was tempted to r+ but it looks like this is an undesirable side effect, so
> I'd prefer not to rely on that.

My main concern is that it might be a bug that the negative offset works to avoid throttling.

(In reply to comment #6)
> (In reply to comment #2)
> > I was tempted to r+ but it looks like this is an undesirable side effect, so
> > I'd prefer not to rely on that.
>
> My main concern is that it might be a bug that the negative offset works to
> avoid throttling.

Right, that's why I don't consider it a fix. If that's actually a bug and it is eventually fixed, the performance will be bad again, but I'm pretty sure we would not be the only ones affected then, because most of the code that creates an offscreen window uses negative coords, either -1, -1 or -100, -100.

Question:

Why are such things not being discussed in

Component: xorg
Product: Driver/modesetting

?

Also:

Why was xf86-video-modesetting merged into xorg-server?

Now that Debian and Ubuntu are using xf86-video-modesetting instead of xf86-video-intel, it's no longer possible to upgrade the DDX driver without upgrading xorg-server, since xf86-video-modesetting no longer is a separate package.

Oibaf has already said that he will not include updates for xorg-server in his PPA:

https://www.phoronix.com/forums/forum/linux-graphics-x-org-drivers/opengl-vulkan-mesa-gallium3d/24959-updated-and-optimized-ubuntu-free-graphics-drivers/page168

Regards

(In reply to nw9165-3201 from comment #43)
> Question:
>
> Why are such things not being discussed in
>
> Component: xorg
> Product: Driver/modesetting
>
> ?

Seems like a better place indeed. I just copied the product/component from bug #81623. Originally it was an Intel driver issue anyway; it wasn't until earlier this year that Martin realized the Intel driver had been fixed and only the modesetting driver was still affected.

Jeremy Bicha (jbicha) on 2016-08-23
description: updated
summary: - Poor performance with WebKit2 on yakkety with Intel modesetting enabled
+ Poor performance with WebKit on yakkety with Intel modesetting enabled
Timo Aaltonen (tjaalton) on 2016-08-23
affects: xserver-xorg-video-intel (Ubuntu) → xorg-server (Ubuntu)
Changed in webkit2gtk (Ubuntu):
importance: Undecided → High

There will be a workaround for this issue in the upcoming WebKitGTK+ 2.12.4 release (which should occur within the next few days).

Changed in webkit-open-source:
importance: Unknown → Medium
status: Unknown → Fix Released

WebKitGTK+ 2.12.4 was released earlier today.

Jeremy Bicha (jbicha) on 2016-08-25
Changed in webkit2gtk (Ubuntu):
status: New → Fix Committed
Jeremy Bicha (jbicha) on 2016-09-29
Changed in webkit2gtk (Ubuntu):
status: Fix Committed → Fix Released
tags: added: performance
Launchpad Janitor (janitor) wrote :

Status changed to 'Confirmed' because the bug affects multiple users.

Changed in webkitgtk (Ubuntu):
status: New → Confirmed
Changed in xorg-server (Ubuntu):
status: New → Confirmed

(In reply to Carlos Garcia Campos from comment #42)
> So, I guess using -1, -1 is still a good idea even if it fixes the problem
> by coincidence.

Note that Carlos "fixed" this issue in WebKit last summer by taking this approach, so it should no longer be possible to reproduce this issue in WebKit. It feels like a workaround, though, so I don't know whether this issue should be closed.

bagl0312 (bagl0312) wrote :

I am cross-posting here my comment about Chrome being slow on Ubuntu 16.04.2 when multiple Chrome windows are opened on different workspaces:

https://bugs.launchpad.net/ubuntu/+source/xorg/+bug/1628866/comments/7

Bug #1628866 is marked as a duplicate of this one; however, I am not sure that the problem is WebKitGTK.

Is there a way to test the new WebKit release on Ubuntu 16.04?

   Thanks

Jeremy Bicha (jbicha) wrote :

bagl0312, Chrome doesn't use WebKit; it uses Blink (which is admittedly a WebKit fork).

You already have the latest webkit2gtk since those are being pushed out as security updates to Ubuntu 16.04 LTS. The webkit2gtk fix was a workaround according to the webkitgtk developers. That's why the xorg part of this bug is still open.

bagl0312 (bagl0312) wrote :

@jbicha thanks for your explanation.

So does that mean that bug #1628866 is not a duplicate of this one? Or that I am affected by another bug? I have seen a few reports of the same problem with Chrome (on Ubuntu 16.10 and 16.04).

I checked that Chromium is also affected.

Changed in xserver-xorg-video-intel:
importance: Unknown → High
status: Unknown → Confirmed

(In reply to Carlos Garcia Campos from comment #42)
> I tried using glXSwapIntervalSGI(0); after glXMakeCurrent but didn't help.

glXSwapIntervalSGI doesn't support 0, try glXSwapIntervalMESA instead.

-- GitLab Migration Automatic Message --

This bug has been migrated to freedesktop.org's GitLab instance and has been closed from further activity.

You can subscribe and participate further through the new bug through this link to our GitLab instance: https://gitlab.freedesktop.org/xorg/xserver/issues/59.

Changed in xserver-xorg-video-intel:
status: Confirmed → Unknown

I experience bad performance with accelerated compositing mode using WebKitGTK+ 2.22.5 on Fedora 29 with Intel Iris 540 (Skylake mobile) graphics. Turning it off makes pretty much everything faster, but particularly Facebook, which is nearly unusable with accelerated compositing on.

Also, when playing videos or scrolling through a site like Facebook, the laptop becomes hot and the fan goes full blast, which does not happen with accelerated compositing off.

Would this bug be the issue I'm experiencing, or should I file a new bug about this?
