gpu-manager causing long startup delays

Bug #1307069 reported by ImorePH@gmail.com on 2014-04-13
This bug affects 4 people
Affects / Importance / Assigned to:
ubuntu-drivers-common (Ubuntu): High, Alberto Milone
ubuntu-drivers-common (Ubuntu Trusty): High, Alberto Milone
ubuntu-drivers-common (Ubuntu Wily): High, Alberto Milone

Bug Description

I installed Ubuntu 14.04 beta2 and updated it to the latest packages. My computer takes 40 seconds to start up, from displaying GRUB until displaying LightDM, on my SSD. Ubuntu 12.04 took only 13 seconds to start up on the same SSD.

Reported as a bug from question #246899:
https://answers.launchpad.net/ubuntu/+question/246899

Computer specification:
GPU: ATI Mobility Radeon HD 5650, using Mesa 10.2 from Oibaf's PPA (startup time is the same as with the default Gallium Mesa 10.1)
Processor: Intel Core i5-460M (Arrandale)
RAM: 8 GB DDR3 1333 MHz
Vendor: Acer Aspire 4745G

Running dmesg | less shows many rows of:
intel ips 0000:00:1f.6: ME failed to update for more than 1s, likely hung

ProblemType: Bug
DistroRelease: Ubuntu 14.04
Package: grub2 2.02~beta2-9
ProcVersionSignature: Ubuntu 3.13.0-24.46-generic 3.13.9
Uname: Linux 3.13.0-24-generic i686
NonfreeKernelModules: wl
ApportVersion: 2.14.1-0ubuntu2
Architecture: i386
CurrentDesktop: Unity
Date: Sun Apr 13 11:25:46 2014
InstallationDate: Installed on 2014-04-07 (5 days ago)
InstallationMedia: Ubuntu 14.04 LTS "Trusty Tahr" - Beta i386 (20140326)
SourcePackage: grub2
UpgradeStatus: No upgrade log present (probably fresh install)

ImorePH@gmail.com (imoreph) wrote :

running command:

dmesg | less

output:
[ 115.654087] intel ips 0000:00:1f.6: ME failed to update for more than 1s, likely hung
[ 115.857987] intel ips 0000:00:1f.6: ME failed to update for more than 1s, likely hung
[ 116.057970] intel ips 0000:00:1f.6: ME failed to update for more than 1s, likely hung
[ 116.257979] intel ips 0000:00:1f.6: ME failed to update for more than 1s, likely hung
(the same message repeats, roughly every 200-400 ms, through [ 123.861608])


Phillip Susi (psusi) wrote :

Please install the bootchart package and attach the resulting chart in /var/log/bootchart after rebooting twice.

affects: grub2 (Ubuntu) → ubuntu
Changed in ubuntu:
status: New → Incomplete
ImorePH@gmail.com (imoreph) wrote :

1st

ImorePH@gmail.com (imoreph) wrote :

2nd

ImorePH@gmail.com (imoreph) wrote :

The last one.

All 4 bootchart files have now been uploaded.

Phillip Susi (psusi) wrote :

You appear to have boinc installed and that is slowing things down somewhat. Try removing it.

ImorePH@gmail.com (imoreph) wrote :

I have removed boinc-client and everything related to it, but it doesn't seem to have made any difference: boot still takes 40 seconds. I have had this problem since the first boot after installation.

Is this problem related to my using an ATI GPU?

Another problem that annoys me (and my friends too) is that I cannot use my Huawei modem since 14.04, bug #1306848. Very many Linux users rely on Huawei modems.

I hope it will be fixed before the 17th.

Phillip Susi (psusi) wrote :

It looks like gpu-manager may be causing the delay.

Phillip Susi (psusi) wrote :

The messages from the kernel seem to indicate that there is a hardware problem with the temperature monitor. This message comes from the driver that tries to share the power limits between the CPU and the Intel integrated GPU, to maximize performance while avoiding overheating. I wonder if that may be indirectly causing gpu-manager to take so long.

affects: ubuntu → ubuntu-drivers-common (Ubuntu)
Changed in ubuntu-drivers-common (Ubuntu):
status: Incomplete → New
Martin Pitt (pitti) on 2014-04-16
summary: - Ubuntu 14.04 takes long start up.
+ gpu-manager causing long startup delays
Launchpad Janitor (janitor) wrote :

Status changed to 'Confirmed' because the bug affects multiple users.

Changed in ubuntu-drivers-common (Ubuntu):
status: New → Confirmed
Bryan Quigley (bryanquigley) wrote :

This looks like it might be caused by the 2 calls to dmesg.

Changed in ubuntu-drivers-common (Ubuntu):
assignee: nobody → Alberto Milone (albertomilone)
importance: Undecided → Medium
status: Confirmed → Triaged
Changed in ubuntu-drivers-common (Ubuntu):
status: Triaged → In Progress
importance: Medium → High
Bryan Quigley (bryanquigley) wrote :

Awesome: the simple change from dmesg | grep to grepping syslog saves me about 1 second in gpu-manager, and possibly 2 seconds in total boot time (from 7 seconds to 5). If the savings are proportional on a rotational HDD, this will be a major speed improvement there.

I thought about using journalctl instead, but that didn't work out; it seems to take longer.

Find seems to be the other big time culprit:
- "find /lib/modules/$(uname -r) -name '%s*.ko' -print",
+ "find /lib/modules/$(uname -r)/updates/dkms -name '%s*.ko' -print",

Do we always know that the modules will be under updates/dkms or could they be elsewhere? With that, it's far under 500ms for me (with strace on).
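The narrowing Bryan proposes can be sketched as follows (a minimal Python stand-in for the C code's find call, not the project's actual code; the function name and the parameterized root path are mine, for illustration and testability):

```python
import glob
import os


def find_built_modules(pattern, kernel_release, root="/lib/modules"):
    """Search only the dkms updates directory for matching .ko files,
    instead of walking the entire module tree for the running kernel."""
    dkms_dir = os.path.join(root, kernel_release, "updates", "dkms")
    return glob.glob(os.path.join(dkms_dir, "%s*.ko" % pattern))
```

The speedup comes purely from the smaller directory: the full tree under /lib/modules/$(uname -r) holds thousands of files, while updates/dkms holds only locally built modules such as nvidia or fglrx.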

Launchpad Janitor (janitor) wrote :

This bug was fixed in the package ubuntu-drivers-common - 1:0.4.11

---------------
ubuntu-drivers-common (1:0.4.11) wily; urgency=medium

  * gpu-manager.c:
    - Rely on /var/log/syslog to get information about unloaded modules.
      This should minimise the current slowdown on boot (LP: #1307069).
    - Switch from intel to modesetting as the default driver on hybrid
      intel/nvidia systems because of a regression in the intel driver
      (LP: #1507676).

 -- Alberto Milone <email address hidden> Mon, 19 Oct 2015 17:35:12 +0200

Changed in ubuntu-drivers-common (Ubuntu):
status: In Progress → Fix Released
Bryan Quigley (bryanquigley) wrote :

Unfortunately on rotational HDD this may have made things worse...

On 20-10-15 15:01:27, Bryan Quigley wrote:
> Find seems to be the other big time culprit:
> - "find /lib/modules/$(uname -r) -name '%s*.ko' -print",
> + "find /lib/modules/$(uname -r)/updates/dkms -name '%s*.ko' -print",
>
> Do we always know that the modules will be under updates/dkms or could
> they be elsewhere? With that, it's far under 500ms for me (with strace
> on).

Yes, I think so, at least for fglrx and for nvidia.

Alberto Milone (albertomilone) wrote :

I'm reopening the bug report, as more can definitely be done to solve the problem.

Changed in ubuntu-drivers-common (Ubuntu):
status: Fix Released → Triaged
Bryan Quigley (bryanquigley) wrote :

I tried moving some of the syslog greps to journalctl again, with much better results: http://pastebin.ubuntu.com/12980242/

Should I just propose merges to the wily branch, or is there an upstream for this project?

Alberto Milone (albertomilone) wrote :

On 27-10-15 15:19:39, Bryan Quigley wrote:
> I tried to moving some of the syslogs to journalctl again.. much better
> results. - http://pastebin.ubuntu.com/12980242/
>
> Should I just propose merges to the wily branch or is there an upstream
> for this project?
>

Please use our git branch:
https://github.com/tseliot/ubuntu-drivers-common

I will take care of the backport once the change is in.

Thanks.

Alberto Milone (albertomilone) wrote :

Also, please keep in mind that there is no journalctl in 14.04. It's probably worth adding a check for it in the code.

Martin Pitt (pitti) wrote :

- snprintf(command, sizeof(command), "grep -q \"%s: module\" /var/log/syslog",
+ snprintf(command, sizeof(command), "journalctl -p 4 -k -o cat --no-pager | grep -q \"%s: module\"",

Sorry to be blunt here, but this makes me weep, and this kind of code is absolutely wrong. What on earth is this trying to do? A C program calling system() on boot to grep the syslog/journal can't possibly be a correct solution. If you merely want to check if a module is present/not present, check if /sys/module/<name> exists/does not exist.

Please let's find a proper solution here.
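The sysfs check Martin suggests is a one-line test (a Python sketch, not the project's C code; the base path is parameterized only so it can be exercised against a fake tree):

```python
import os


def module_is_loaded(name, sys_module="/sys/module"):
    """A kernel module that is currently loaded has a directory named
    after it under /sys/module; no log parsing is needed."""
    return os.path.isdir(os.path.join(sys_module, name))
```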

Martin Pitt (pitti) wrote :

Sorry, I apologize for the tone, that wasn't appropriate.

So, let's try this again: What is this grepping of syslogs trying to do? Can we replace this with checking for presence/absence of /sys/module/<name>?

Martin Pitt (pitti) wrote :

Another thing: grepping /var/log/syslog does not tell you much because that can span multiple boots. If you ask journalctl anything in this context, you should call it with "-b" so that it only shows the current boot, not all boots (if you enabled persistent journal). But still, this is very expensive and racy.

Alberto Milone (albertomilone) wrote :

On 28-10-15 06:31:17, Martin Pitt wrote:
> So, let's try this again: What is this grepping of syslogs trying to do?
> Can we replace this with checking for presence/absence of
> /sys/module/<name>?
>

The module can be loaded on boot and then it can (automatically) be
unloaded (at some point) if the GPU is disabled in a hybrid system. When
that happens, we have no way of telling a system where the discrete GPU
was disabled from one where the GPU is not there (e.g. disabled from the
BIOS, dead, or simply not there).

Both fglrx and nvidia print a line with the BusID of the device when
they are loaded. Gpu-manager collects the BusID from the log and adds it
to the list of available devices.

You can see an example of their output in set_unloaded_module_in_dmesg()
in tests/gpu-manager.py.

Please keep in mind that, at least for nvidia, gpu-manager needs to run
on log out too, not only on boot.

What do you recommend that we use?

Martin Pitt (pitti) wrote :

> The module can be loaded on boot and then it can (automatically) be unloaded (at some point) if the GPU is disabled in a hybrid system.

Then I suggest to install an udev rule like

  ACTION=="remove", SUBSYSTEM=="module", DEVPATH=="*/nvidia", RUN+="/bin/touch /run/nvidia_unloaded"

(this assumes the module is called "nvidia.ko" -- adjust the DEVPATH match as appropriate) and check whether /run/nvidia_unloaded exists.

Alberto Milone (albertomilone) wrote :

On 28-10-15 13:56:42, Martin Pitt wrote:
> > The module can be loaded on boot and then it can (automatically) be
> unloaded (at some point) if the GPU is disabled in a hybrid system.
>
> Then I suggest to install an udev rule like
>
> ACTION=="remove", SUBSYSTEM=="module", DEVPATH=="*/nvidia",
> RUN+="/bin/touch /run/nvidia_unloaded"
>
> (this assumes the module is called "nvidia.ko" -- adjust the DEVPATH
> match as appropriate) and check whether /run/nvidia_unloaded exists.
>

That is only part of what I need though, is there a udev rule that can
print out the PCI BusID of the device when the module is loaded?

Martin Pitt (pitti) wrote :

The PCI device will exist whether or not the module was loaded; so you can just iterate over /sys/bus/pci/devices/* (* expands to the PCI bus IDs) and check the attributes in each directory; e. g. you probably want to pick out the ones with class == 0x030000 (graphics card), and perhaps vendor == 10DE (nvidia).
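The iteration Martin describes can be sketched like this (a Python illustration of reading the sysfs class and vendor attribute files; the function name is mine, and the devices path is parameterized only for testing):

```python
import os

GRAPHICS_CLASS = 0x030000  # PCI class code for a VGA-compatible controller
NVIDIA_VENDOR = 0x10DE


def find_graphics_cards(devices="/sys/bus/pci/devices", vendor=None):
    """Yield PCI bus IDs of graphics cards, optionally filtered by vendor.
    Each entry under the devices directory is a bus ID; its 'class' and
    'vendor' files contain hex strings such as '0x030000'."""
    for bus_id in sorted(os.listdir(devices)):
        path = os.path.join(devices, bus_id)
        with open(os.path.join(path, "class")) as f:
            if int(f.read(), 16) != GRAPHICS_CLASS:
                continue
        if vendor is not None:
            with open(os.path.join(path, "vendor")) as f:
                if int(f.read(), 16) != vendor:
                    continue
        yield bus_id
```

As Martin notes below, this only sees devices that are currently visible on the bus, which is exactly where it falls short for a hybrid GPU that ACPI has powered off.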

If you want to do that via an udev rule: I'm not entirely sure which kinds of events you get when the nvidia driver gets loaded. You can unload it, run

  udevadm monitor -e --udev

then load it, and see what kind of events you get. For sure you'll see an "add" event for SUBSYSTEM=="module", DEVPATH=="*/nvidia", but either on boot or when loading the module you should also see an "add" or "change" event for the graphics card itself.

This is an initial sketch of a rule which selects a PCI card whose driver is nvidia:

  ACTION=="add|change", SUBSYSTEM=="pci", DRIVER=="nvidia*", RUN+= "touch /run/nvidia-loaded-for-pci-id-$env{PCI_ID}"

(using "nvidia*" here in case the modules might be called something like nvidia_123). You can use udev properties like $env{PCI_ID} and also attributes from sysfs like the above, with e. g. $attr{vendor}. See man udev(7) for other macros you can use in RUN clauses.

Then you don't need the "remove" rule any more -- if /sys/modules/nvidia does not exist but your stamp in /run does exist, you know that it was once loaded but then removed.
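Combining the /run stamp with the /sys/module check distinguishes the three cases Martin outlines (a sketch only; the stamp filename pattern follows his example rule, the state labels and parameterized paths are mine):

```python
import glob
import os


def nvidia_state(sys_module="/sys/module", run_dir="/run"):
    """Classify the nvidia module as:
    'loaded'       - /sys/module/nvidia exists now;
    'was-loaded'   - no sysfs entry, but a udev-created stamp in /run shows
                     it was loaded earlier this boot (e.g. the hybrid GPU
                     was then powered off and the module unloaded);
    'never-loaded' - neither exists."""
    if os.path.isdir(os.path.join(sys_module, "nvidia")):
        return "loaded"
    if glob.glob(os.path.join(run_dir, "nvidia-loaded-for-pci-id-*")):
        return "was-loaded"
    return "never-loaded"
```

Since /run is a tmpfs, the stamp disappears on reboot, so "was-loaded" can never leak across boots the way a syslog grep could.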

Alberto Milone (albertomilone) wrote :

On 29-10-15 07:52:37, Martin Pitt wrote:
> The PCI device will exist whether or not the module was loaded; so you
> can just iterate over /sys/bus/pci/devices/* (* expands to the PCI bus
> IDs) and check the attributes in each directory; e. g. you probably want
> to pick out the ones with class == 0x030000 (graphics card), and perhaps
> vendor == 10DE (nvidia).
>

Yes, I use libpciaccess for that. Of course it doesn't work when the GPU
is disabled through an ACPI call (i.e. power saving mode in hybrid
graphics).

It usually works like this on boot:

The GPU is already enabled, the module is loaded, then the ACPI call
disables the GPU (i.e. the PCI device disappears), and the module is
unloaded.

> If you want to do that via an udev rule: I'm not entirely sure which
> kinds of events you get when the nvidia driver gets loaded. You can
> unload it, run
>
> udevadm monitor -e --udev
>
> then load it, and see what kind of events you get. For sure you'll see
> an "add" event for SUBSYSTEM=="module", DEVPATH=="*/nvidia", but either
> on boot or when loading the module you should also see an "add" or
> "change" event for the graphics card itself.
>
> This is an initial sketch of a rule which selects a PCI card whose
> driver is nvidia:
>
> ACTION=="add|change", SUBSYSTEM=="pci", DRIVER=="nvidia*", RUN+=
> "touch /run/nvidia-loaded-for-pci-id-$env{PCI_ID}"
>
> (using "nvidia*" here in case the modules might be called something like
> nvidia_123). You can use udev properties like $env{PCI_ID} and also
> attributes from sysfs like the above, with e. g. $attr{vendor}. See man
> udev(7) for other macros you can use in RUN clauses.
>
> Then you don't need the "remove" rule any more -- if /sys/modules/nvidia
> does not exist but your stamp in /run does exist, you know that it was
> once loaded but then removed.

Yes, that would work. I already have udev rules in place to start other
applications when an NVIDIA GPU is made available, so I can simply
create that file in /run and get rid of the whole parsing code.

This should be a huge help. Thanks a lot!

Changed in ubuntu-drivers-common (Ubuntu):
status: Triaged → In Progress
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package ubuntu-drivers-common - 1:0.4.14

---------------
ubuntu-drivers-common (1:0.4.14) xenial; urgency=medium

  * gpu-manager.{c|py}:
    - Rely on udev for card detection. This helps detecting if a module
      was unloaded or if a card was disabled on a hybrid system.
      In addition to being more accurate (LP: #1485236), this should also
      be much faster on boot (LP: #1307069).
    - Restrict the search for built modules to the dkms directory. This
      improves performance.
  * debian/rules, setup.py, 71-u-d-c-gpu-detection.rules,
    u-d-c-print-pci-ids:
    - Provide a udev rule to detect cards and modules.

 -- Alberto Milone <email address hidden> Wed, 13 Jan 2016 10:35:18 +0100

Changed in ubuntu-drivers-common (Ubuntu):
status: In Progress → Fix Released
William T. Lowther (624wtl) wrote :

Is the fix available for Ubuntu 15.10? My bootup takes about 1 minute from GRUB to when my desktop appears. Gpu-manager.service consumes about 18 seconds (from systemd-analyze). Based on the output of systemd-analyze plot, it looks like gpu-manager is on the critical path, although it isn't listed in the output of critical-chain.
If more info on my bootup would be helpful, please let me know what you would like to see.
TIA,
Bill

Bryan Quigley (bryanquigley) wrote :

@William
Nope, the fix never made it back to Wily (and I'm not sure we're going to push for it to go there). It was a relatively big change. It will be in 16.04 though.. Sorry!

William T. Lowther (624wtl) wrote :

No prob. I will be going 16.04 LTS anyway.
Bill

Robie Basak (racb) wrote :

Related is bug 1565436.

Changed in ubuntu-drivers-common (Ubuntu Trusty):
status: New → Triaged
Changed in ubuntu-drivers-common (Ubuntu Wily):
status: New → Triaged
Changed in ubuntu-drivers-common (Ubuntu Trusty):
assignee: nobody → Alberto Milone (albertomilone)
Changed in ubuntu-drivers-common (Ubuntu Wily):
assignee: nobody → Alberto Milone (albertomilone)
Changed in ubuntu-drivers-common (Ubuntu Trusty):
importance: Undecided → High
Changed in ubuntu-drivers-common (Ubuntu Wily):
importance: Undecided → High

This still seems to be present in 16.04. I'm experiencing a very slow boot because of gpu-manager. From systemd-analyze:

gpu-manager.service (50.583s)

Any thoughts?

William T. Lowther (624wtl) wrote :

I installed 16.04 and am now getting:
$ systemd-analyze
Startup finished in 7.329s (firmware) + 9.417s (loader) + 4.224s (kernel) + 15.292s (userspace) = 36.264s
$ systemd-analyze blame
          7.911s dev-sda6.device
          6.276s ufw.service
          5.863s systemd-journald.service
          5.716s systemd-tmpfiles-setup-dev.service
          4.949s systemd-sysctl.service
          2.793s systemd-fsck@dev-disk-by\x2duuid-7C3B\x2d12DA.service
          2.005s NetworkManager.service
          1.551s ModemManager.service
          1.537s gpu-manager.service
          1.531s accounts-daemon.service
          1.236s thermald.service
          1.005s grub-common.service
           928ms lightdm.service
           919ms systemd-modules-load.service
           894ms upower.service
           698ms apparmor.service
           643ms irqbalance.service
           548ms networking.service
           542ms console-setup.service
           480ms systemd-logind.service
           470ms dev-hugepages.mount
           466ms sys-kernel-debug.mount
           443ms plymouth-quit-wait.service
           404ms binfmt-support.service
           392ms speech-dispatcher.service
           387ms polkitd.service
           387ms ondemand.service
           360ms dev-mqueue.mount
           357ms alsa-restore.service
           344ms pppd-dns.service
           324ms rsyslog.service
           323ms apport.service
           312ms colord.service
           291ms udisks2.service
           275ms avahi-daemon.service
           251ms systemd-timesyncd.service
           232ms systemd-update-utmp.service
           204ms plymouth-read-write.service
           200ms systemd-udev-trigger.service
           195ms dev-sda7.swap
           189ms kmod-static-nodes.service
           148ms systemd-udevd.service
           144ms systemd-hostnamed.service
           142ms boot-efi.mount
           117ms systemd-localed.service
            94ms systemd-tmpfiles-setup.service
            80ms plymouth-start.service
            78ms user@1000.service
            73ms systemd-remount-fs.service
            63ms systemd-journal-flush.service
            37ms dns-clean.service
            30ms systemd-user-sessions.service
            20ms proc-sys-fs-binfmt_misc.mount
            17ms snapd.socket
            11ms systemd-update-utmp-runlevel.service
            10ms resolvconf.service
            10ms ureadahead-stop.service
             7ms rtkit-daemon.service
             6ms systemd-random-seed.service
             6ms sys-fs-fuse-connections.mount
             3ms rc-local.service
Bill

Alberto Milone (albertomilone) wrote :

@Jeremy @Bill - what is your system configuration (CPU/GPU)? Can you attach the output of lspci -vvv and your /var/log/gpu-manager.log please?

William T. Lowther (624wtl) wrote :
Download full text (10.6 KiB)

CPU - no gaming.

~$ uname -a
Linux Hal 4.4.0-22-generic #40-Ubuntu SMP Thu May 12 22:03:46 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

:~$ lspci -vvv
00:00.0 Host bridge: Intel Corporation Atom Processor Z36xxx/Z37xxx Series SoC Transaction Register (rev 0e)
 Subsystem: Acer Incorporated [ALI] Atom Processor Z36xxx/Z37xxx Series SoC Transaction Register
 Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
 Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
 Latency: 0
 Kernel driver in use: iosf_mbi_pci

00:02.0 VGA compatible controller: Intel Corporation Atom Processor Z36xxx/Z37xxx Series Graphics & Display (rev 0e) (prog-if 00 [VGA controller])
 Subsystem: Acer Incorporated [ALI] Atom Processor Z36xxx/Z37xxx Series Graphics & Display
 Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
 Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
 Latency: 0
 Interrupt: pin A routed to IRQ 90
 Region 0: Memory at b0000000 (32-bit, non-prefetchable) [size=4M]
 Region 2: Memory at a0000000 (32-bit, prefetchable) [size=256M]
 Region 4: I/O ports at f080 [size=8]
 Expansion ROM at <unassigned> [disabled]
 Capabilities: <access denied>
 Kernel driver in use: i915
 Kernel modules: i915

00:13.0 SATA controller: Intel Corporation Atom Processor E3800 Series SATA AHCI Controller (rev 0e) (prog-if 01 [AHCI 1.0])
 Subsystem: Acer Incorporated [ALI] Atom Processor E3800 Series SATA AHCI Controller
 Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
 Status: Cap+ 66MHz+ UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
 Latency: 0
 Interrupt: pin A routed to IRQ 88
 Region 0: I/O ports at f070 [size=8]
 Region 1: I/O ports at f060 [size=4]
 Region 2: I/O ports at f050 [size=8]
 Region 3: I/O ports at f040 [size=4]
 Region 4: I/O ports at f020 [size=32]
 Region 5: Memory at b0716000 (32-bit, non-prefetchable) [size=2K]
 Capabilities: <access denied>
 Kernel driver in use: ahci
 Kernel modules: ahci

00:14.0 USB controller: Intel Corporation Atom Processor Z36xxx/Z37xxx, Celeron N2000 Series USB xHCI (rev 0e) (prog-if 30 [XHCI])
 Subsystem: Acer Incorporated [ALI] Atom Processor Z36xxx/Z37xxx, Celeron N2000 Series USB xHCI
 Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
 Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
 Latency: 0
 Interrupt: pin A routed to IRQ 87
 Region 0: Memory at b0700000 (64-bit, non-prefetchable) [size=64K]
 Capabilities: <access denied>
 Kernel driver in use: xhci_hcd

00:1a.0 Encryption controller: Intel Corporation Atom Processor Z36xxx/Z37xxx Series Trusted Execution Engine (rev 0e)
 Subsystem: Acer Incorporated [ALI] Atom Processor Z36xxx/Z37xxx Series Trusted Execution Engine
 Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx+
 Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort...

Alberto Milone (albertomilone) wrote :

I did a little profiling, and it seems that the call gpu-manager makes to drm to get the connected outputs is quite expensive; I think there are better ways to check the connector status.

While certainly relevant to boot time, this is a separate problem (the original one being caused by an actual regression), and I have filed a separate bug report to keep track of it (LP: #1586933).
