On Thu, 2010-04-22 at 03:39 +0000, bcrowell wrote:
> Oops, #43 reads "succeeded" on the second line where it should read
> "failed." The corrected version is this:
>
> Okay, here is the result of ls -l /dev/nvidia* from a session that
> failed:
>
> crw-rw-rw- 1 root root 195, 255 2010-04-21 20:32 /dev/nvidiactl
>
> Here it is from a session that was successful:
>
> crw-rw-rw- 1 root root 195,   0 2010-04-21 20:34 /dev/nvidia0
> crw-rw-rw- 1 root root 195, 255 2010-04-21 20:34 /dev/nvidiactl

So there is a missing device node. This could be because the nvidia (nvidia-current) kernel module failed to load successfully, or because of some problem with the X driver.

The documentation (/usr/share/doc/nvidia-current/README.txt.gz) says:

    Q. How and when are the NVIDIA device files created?

    A. Depending on the target system's configuration, the NVIDIA device
       files used to be created in one of three different ways:

         o at installation time, using mknod

         o at module load time, via devfs (Linux device file system)

         o at module load time, via hotplug/udev

       With current NVIDIA driver releases, device files are created or
       modified by the X driver when the X server is started.

       By default, the NVIDIA driver will attempt to create device files
       with the following attributes:

           UID:  0     - 'root'
           GID:  0     - 'root'
           Mode: 0666  - 'rw-rw-rw-'

       Existing device files are changed if their attributes don't match
       these defaults. If you want the NVIDIA driver to create the device
       files with different attributes, you can specify them with the
       "NVreg_DeviceFileUID" (user), "NVreg_DeviceFileGID" (group) and
       "NVreg_DeviceFileMode" NVIDIA Linux kernel module parameters.

       For example, the NVIDIA driver can be instructed to create device
       files with UID=0 (root), GID=44 (video) and Mode=0660 by passing the
       following module parameters to the NVIDIA Linux kernel module:

           NVreg_DeviceFileUID=0
           NVreg_DeviceFileGID=44
           NVreg_DeviceFileMode=0660

(There is a rough sketch of how those parameters could be set from a modprobe.d file after the test steps below.)

I don't see, in your logs or mine, any reference to creating the device nodes in the udev logs, so I'm not clear what is responsible for creating them at this point (a couple of commands for checking this are also given after the steps below).

As a test, could you reboot into Recovery Mode, which is single-user, won't start X and, in theory, will not load the nvidia driver. At the recovery menu choose to start a shell, then:

1. Check whether the module is currently loaded:

   lsmod | grep nvidia

2. Check for device nodes:

   ls -l /dev/nvidia*

3. If the nvidia module is loaded, unload it:

   modprobe -vr --first-time -o nvidia nvidia-current

3a. If you see:

    FATAL: Module nvidia is in use.

    then something is already using the module and we will have to think
    some more about how to proceed. Move on to step 5.

3b. Otherwise, continue onwards...

4. Load the nvidia module manually with verbose reporting enabled:

   modprobe -v --first-time -o nvidia nvidia-current

5. Try manually starting X with verbose logging:

   startx -- -verbose 5 -logverbose 5

6. Check whether the device node has been created:

   ls -l /dev/nvidia*

7. If the device node /dev/nvidia0 is not present, or the X server fails to start, please copy the output of these steps to this bug report.

8. As the aim here is to manually reproduce the failure, you'll need to repeat these steps if the problem doesn't occur at first. Also, please report whether each test is done from a cold-boot (power cord removed from the wall outlet), cold-standby (PSU providing +5V stand-by), or warm-reboot (power not cycled; system just restarted), since this *might* be a hardware-related issue exposed by the newer xorg version.
8a. If running in single-user recovery mode doesn't provoke the problem, we will have to revise these steps to prevent gdm from starting during a normal multi-user boot so that the same tests can be run manually there (a rough sketch of how to do that follows).
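
For what it's worth, here is a minimal sketch of how gdm could be kept out of the way on a normal boot. This assumes gdm is the display manager and is managed as an upstart job, as on Lucid; the exact job name is an assumption, so adjust it if your system differs:

   # Switch to a text console with Ctrl+Alt+F1, log in, then stop the
   # display manager before running the test steps above.
   sudo service gdm stop

   # When finished testing, start it again:
   sudo service gdm start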
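
To narrow down what (if anything) in udev is responsible for the nodes, these commands should be safe to run at any point; the rule directories are the standard locations on this release:

   # Look for any udev rule that mentions nvidia
   grep -ri nvidia /etc/udev/rules.d /lib/udev/rules.d 2>/dev/null

   # In a second console, watch kernel and udev events while the module is
   # loaded in step 4, to see whether any nvidia device events are emitted
   udevadm monitor --kernel --udev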
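
If a stop-gap is needed while debugging (this is not a fix), the missing node could in principle be created by hand with mknod, using the major/minor numbers visible in the working session quoted above; treat this purely as a sketch for testing:

   # /dev/nvidia0 is a character device, major 195, minor 0, mode 0666,
   # matching the attributes shown in the successful session
   sudo mknod -m 0666 /dev/nvidia0 c 195 0
   sudo chown root:root /dev/nvidia0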
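
And, purely for reference, here is one way the NVreg_* parameters quoted from the README could be set persistently. The file name is arbitrary, and depending on how the module ends up being loaded the options line may need to name nvidia, nvidia-current, or both:

   # /etc/modprobe.d/nvidia-devicefile.conf (file name is just an example)
   # Ask the driver to create /dev/nvidia* as root:video with mode 0660
   options nvidia NVreg_DeviceFileUID=0 NVreg_DeviceFileGID=44 NVreg_DeviceFileMode=0660
   options nvidia-current NVreg_DeviceFileUID=0 NVreg_DeviceFileGID=44 NVreg_DeviceFileMode=0660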