Nvidia

From Gentoo-en

This article aims to be a comprehensive guide to the official nVidia graphics card drivers, going into more depth than the official Gentoo guide.

Different drivers

Before continuing with the article, some background on the available drivers is in order.

Legacy vs. Current

The drivers for older "legacy" graphics cards used to be held in a separate package called nvidia-legacy-drivers. This is no longer the case and all drivers are now in the nvidia-drivers package. See "Installing the drivers" further down for selecting the correct driver.

Nouveau vs. nv vs. nVidia Drivers

There are several different groups developing drivers that support nVidia graphics cards on Linux. The X.Org team has an open source driver called 'nv', installable as part of xorg-x11, which offers only basic support and does not support recent cards. There is also a reverse-engineered open source driver with experimental 3D support called nouveau; see its article if you wish to use it. The driver that is the focus of this article is the proprietary driver released by nVidia itself, called 'nvidia'; only a small amount of open source glue code is provided to interface the kernel with the binary driver. These projects have nothing to do with each other, and keeping them straight is important, especially when you start configuring X.Org near the end of this article. You only need one of them, not both, and this article will help you install the one released by nVidia.

Preparing Your System

The official nVidia drivers are provided in two parts, the kernel module and the X.Org driver, both of which are held in the x11-drivers/nvidia-drivers package. Therefore, you need to make sure your kernel is set up to support module loading and to provide access to Memory Type Range Registers (MTRR).

Uninstalling the Open Source Nouveau Driver

If switching to the NVidia proprietary drivers, it is best to clear the open source version off the system. Adjust the following kernel options:

Linux Kernel Configuration:
Device Drivers  --->
    [*] Staging Drivers  --->
        < > Nouveau (nVidia) cards
    Graphics Support  --->
        < > Direct Rendering Manager (XFree86 4.1.0 and higher DRI support)  --->

If nouveaufb was in use for a console framebuffer, switch to the VESA framebuffer or a standard text console:

Linux Kernel Configuration:
Device Drivers  --->
        Graphics Support --->
               Console display driver support  --->
                  [*] VGA text console

OR

Device Drivers  --->
        Graphics Support --->
           <*> Support for frame buffer devices  --->
                  [*] Enable firmware EDID
                  [*] Enable Video Mode Handling Helpers
                  [*] VESA VGA graphics support

If the VESA framebuffer is compiled in, remember to configure it before use (for example with a vga= kernel parameter); otherwise it will fall back to the VGA text console, and the screen may freeze.

Selecting the Right Kernel

Kernel module packages use the /usr/src/linux symlink to determine which kernel they should build against. If this link is already correct, move on to Required Kernel Settings.

Usually kernel modules should be built against the currently running kernel, so find out what that is by running:
uname -a

Gentoo provides a handy tool for changing lots of settings on the system called eselect. One of eselect's modules is for changing the /usr/src/linux symlink.

Note: You can find out more about eselect and the settings it can change by running:
eselect help
List all available kernel source directories with:
eselect kernel list
Find out which one the symlink is currently pointing to with:
eselect kernel show
Now point the /usr/src/linux symlink at the desired kernel source directory with eselect kernel set <n>, where <n> is the number next to the kernel you wish to use. For example, if the symlink should point to item number 5 in the list, run:
eselect kernel set 5
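As a sanity check, the symlink-vs-running-kernel comparison can be sketched in plain shell. The helper below is hypothetical (it is not part of eselect), and the sample path and version are made up:

```shell
# Hypothetical helper: does the /usr/src/linux target match the running
# kernel version? Arguments are explicit so the check is easy to test.
kernel_symlink_ok() {  # kernel_symlink_ok SYMLINK_TARGET RUNNING_VERSION
  case "$1" in
    *"$2"*) echo yes ;;
    *)      echo no ;;
  esac
}

# On a live system you would call:
#   kernel_symlink_ok "$(readlink -f /usr/src/linux)" "$(uname -r)"
kernel_symlink_ok "/usr/src/linux-2.6.24-gentoo-r8" "2.6.24-gentoo-r8"   # yes
```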

Required Kernel Settings

Note: If you built your kernel with genkernel, you should be able to skip this section.

Make sure you have the following options enabled:

Linux Kernel Configuration: Kernel Configuration
General setup --->
 [*] System V IPC
Loadable Module Support --->
 [*] Enable Loadable Module Support
Processor Type and Features --->
 [*] MTRR (Memory Type Range Register) Support

AGP support is optional, dependent on your type of graphics card:

Linux Kernel Configuration: Kernel Configuration
Device Drivers --->
 Character devices --->
 (in kernel 2.6.24 it is:
 Graphics support --->)
  [*] /dev/agpgart (AGP Support)

Make sure you have the following options disabled. These options conflict with nVidia's driver:

Linux Kernel Configuration: Kernel Configuration
Device Drivers --->
 Graphics Support --->
   (in kernel 2.6.24 it is:
   Support for frame buffer devices --->)
  < >   nVidia Framebuffer Support
  < >   nVidia Riva support

If you need help configuring, building, and installing your new kernel, read the official Gentoo kernel guide.

Selecting the nVidia Driver Version

Determining Your Card ID and Model

Use lspci to find out what card you have. Note the identifier of the target card you wish to enable support for. (Adding -v or -vv will increase verbosity.)

Code: lspci
00:00.0 Host bridge: Intel Corporation 82845G/GL[Brookdale-G]/GE/PE DRAM  Controller/Host-Hub Interface (rev 01)
00:01.0 PCI bridge: Intel Corporation 82845G/GL[Brookdale-G]/GE/PE Host-to-AGP Bridge (rev 01)
00:1d.0 USB Controller: Intel Corporation 82801DB/DBL/DBM (ICH4/ICH4-L/ICH4-M) USB UHCI Controller #1 (rev 01)
00:1d.1 USB Controller: Intel Corporation 82801DB/DBL/DBM (ICH4/ICH4-L/ICH4-M) USB UHCI Controller #2 (rev 01)
00:1d.2 USB Controller: Intel Corporation 82801DB/DBL/DBM (ICH4/ICH4-L/ICH4-M) USB UHCI Controller #3 (rev 01)
00:1d.7 USB Controller: Intel Corporation 82801DB/DBM (ICH4/ICH4-M) USB2 EHCI Controller (rev 01)
00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev 81)
00:1f.0 ISA bridge: Intel Corporation 82801DB/DBL (ICH4/ICH4-L) LPC Interface Bridge (rev 01)
00:1f.1 IDE interface: Intel Corporation 82801DB (ICH4) IDE Controller (rev 01)
00:1f.5 Multimedia audio controller: Intel Corporation 82801DB/DBL/DBM (ICH4/ICH4-L/ICH4-M) AC'97 Audio Controller (rev 01)
01:00.0 VGA compatible controller: nVidia Corporation NV17GL [Quadro4 200/400 NVS] (rev a3)
05:08.0 Ethernet controller: Intel Corporation 82801DB PRO/100 VM (LOM) Ethernet Controller (rev 81)

Using the -n and -s options (combined as -ns) followed by the identifier shows the numeric vendor:device ID as it should appear on the List of Supported Devices.

Code: lspci -ns 01:00.0
01:00.0 0300: 10de:017a (rev a3)
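If you want to script this, the vendor:device pair can be pulled out of such a line with plain shell parameter expansion. A minimal sketch, with the sample line above hard-coded so it runs anywhere; on a live system you would use the real lspci output instead:

```shell
# Extract "vendor:device" from an `lspci -n`-style line. The sample line
# is hard-coded; on a live system use: line="$(lspci -ns 01:00.0)"
line="01:00.0 0300: 10de:017a (rev a3)"
id="${line#*: }"    # strip the leading "01:00.0 0300: "
id="${id%% *}"      # drop the trailing " (rev a3)"
echo "$id"          # 10de:017a
```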

Installing the drivers

To install the driver, you need to install x11-drivers/nvidia-drivers.

The nvidia-drivers package supports the full range of available nVidia cards. Multiple versions are available for installation, depending on the card(s) you have.

  • Newer cards such as the GeForce 8, 7 and 6 series should use the newer drivers from the 200.x series.
  • Older cards such as GeForce FX 5 series and related Quadro FX cards require the 173.x drivers. For these cards, you should mask >=x11-drivers/nvidia-drivers-180.00 in your /etc/portage/package.mask file.
  • Older cards such as the GeForce 3 or GeForce 4 and related Quadro4 and some Quadro2 series require the 96.x drivers. For these cards, you should mask >=x11-drivers/nvidia-drivers-97.00 in your /etc/portage/package.mask file. This will prevent newer versions of the driver which are incompatible with your card from being installed.
  • The oldest cards (such as TNT, TNT2, GeForce, GeForce 2, Quadro and Quadro2) require the older 71.x drivers (such as nvidia-drivers-71.86.01). For these cards, you should mask >=x11-drivers/nvidia-drivers-87.00 in /etc/portage/package.mask.
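The mapping above can be summarised in a small shell helper. This is purely illustrative: the function name and family labels are made up, but the version cut-offs are the ones listed above.

```shell
# Illustrative only: map a card family to the package.mask entry listed
# above. The family names are made-up labels, not official identifiers.
mask_for_family() {
  case "$1" in
    geforce-fx)        echo ">=x11-drivers/nvidia-drivers-180.00" ;;
    geforce3|geforce4) echo ">=x11-drivers/nvidia-drivers-97.00" ;;
    tnt|tnt2|geforce2) echo ">=x11-drivers/nvidia-drivers-87.00" ;;
    *)                 echo "" ;;  # current cards need no mask
  esac
}

# e.g. append the right entry for a GeForce FX card:
#   mask_for_family geforce-fx >> /etc/portage/package.mask
mask_for_family geforce-fx   # >=x11-drivers/nvidia-drivers-180.00
```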

See also the Nvidia Legacy GPU page.

Installing the Latest "Unstable" Release

If you want to run the latest "testing" release, i.e. the latest release from nVidia, you will need to unmask and install the newest x11-drivers/nvidia-drivers package. Unmask the nvidia-drivers package by adding it to your /etc/portage/package.keywords file:

echo "x11-drivers/nvidia-drivers" >> /etc/portage/package.keywords

Beta Drivers

On some occasions, the very latest nVidia drivers will be masked by package.mask. This usually happens when those drivers are considered "beta" by nVidia, i.e. even nVidia considers them unstable. To use these drivers, you'll need to add them to both your /etc/portage/package.unmask and /etc/portage/package.keywords files:

echo "x11-drivers/nvidia-drivers" >> /etc/portage/package.keywords
echo "x11-drivers/nvidia-drivers" >> /etc/portage/package.unmask

Configuring X.Org/X11

Switching to the New Driver

The X.Org configuration file needs to be updated so that X will use the new driver. To do this, edit your /etc/X11/xorg.conf file, find the 'Device' section where the graphics card is configured, and replace the Driver entry with 'nvidia'. The following shows an example "before and after".

File: xorg.conf: Change Driver to 'nvidia'

OLD:

 Section "Device"
  Identifier "GeForce2 Pro/GTS"
  Driver     "nv"
 EndSection

NEW:

 Section "Device"
  Identifier "GeForce2 Pro/GTS"
  Driver     "nvidia"
 EndSection

Activating GLX

To enable 3D acceleration, the GLX module needs to be activated and both the DRI and GLcore modules must be deactivated.

In the xorg.conf file:

File: xorg.conf: Settings required for nvidia 3D acceleration
Section "Module"
 ...
Load "glx"
# Load "dri"
# Load "GLcore"
 ... 
EndSection
Note: To prevent DRI from loading, commenting out the Load line may not be sufficient; you may need to use the following instead.
Disable "dri"
Note: On the ~amd64 arch, it may be necessary to add explicit library paths to /etc/X11/xorg.conf for module loading. Otherwise, X will fail with a "Module does not exist" error.
File: xorg.conf: Fix up missing DRI/DRI2 error on ~amd64
  Section "Files"
        ModulePath "/usr/lib64/xorg/modules"
        ModulePath "/usr/lib64/xorg/modules/extensions"
        ModulePath "/usr/lib64/xorg/opengl/xorg-x11"
        ModulePath "/usr/lib64/xorg/opengl/xorg-x11/extensions"
  EndSection

Now, the module is enabled in configuration, but in order for it to work X must be running in either 16 or 24-bit color mode.

To do this set the DefaultDepth setting in the 'Screen' section of the xorg.conf as shown in the example below. Please note that there must be 16-bit and/or 24-bit modes in the "Display" subsection of the "Screen" section.

File: xorg.conf: Setting the Default Color Depth
 Section "Screen"
  ...
  DefaultDepth 24
  SubSection "Display"
   ...
  ...
 EndSection

There are a number of different OpenGL libraries available, and it's possible to install more than one. To manage this situation, Gentoo uses the eselect tool (as already shown earlier).

Note: To find out more about the eselect tool and its options, run:
eselect help
To tell Gentoo to use the nvidia opengl implementation, run:
eselect opengl set nvidia

Wrap It Up

Adding Users to the Video Group

To protect the system from malicious activity, Linux restricts which users can access a given piece of hardware. In the case of the video card (which needs to be accessed for 3D acceleration), users must be members of the 'video' group.

For each user you wish to allow to use 3D acceleration, run the following command, where <username> is the name of the user you wish to add to the group:
gpasswd -a <username> video
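Group membership can be checked before and after with `id -nG <username>`. The sketch below factors the check into a testable helper; the group list shown is a made-up example.

```shell
# Sketch: check a group list (as printed by `id -nG <username>`) for
# membership of the video group. The sample list is made up.
has_group() {  # has_group "GROUP LIST" GROUP
  for g in $1; do
    [ "$g" = "$2" ] && return 0
  done
  return 1
}

groups_list="users wheel audio video"   # live system: "$(id -nG <username>)"
if has_group "$groups_list" video; then
  echo "already in video group"
else
  echo "not yet: run gpasswd -a <username> video"
fi
```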

(Re)starting X

Now X must be restarted with the new configuration. You'll want to read all of this section before carrying out any commands.

If you are logged in to X, first log out (usually by choosing "Log Out" or "Quit" from the menu of your desktop environment).

If you're at the console, restart X with:
startx

If you're at a graphical login screen, you still need to restart X. This can be achieved by pressing Ctrl + Alt + Backspace, which kills the currently running X server. X is then restarted by the display manager (xdm, gdm or kdm).

Testing Your Configuration

To ensure that 3D acceleration is working, from a console running inside of X and as a normal user (not as root), run:
glxinfo | grep direct

You should see that direct rendering is enabled, as shown in the example output below.

If you get the message "bash: glxinfo: command not found", install the mesa-progs package with:
emerge -a x11-apps/mesa-progs
Code: glxinfo output example
direct rendering: Yes
Note: Another tool for testing GLX is 'glxgears', however this doesn't show whether direct rendering is actually enabled and can't be used as a benchmark.
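The check above is easy to script. A minimal sketch; the sample output line is hard-coded here so it runs anywhere, and on a live system you would capture the real glxinfo output instead:

```shell
# Decide from glxinfo output whether direct rendering is enabled.
# Sample line hard-coded; live system: out="$(glxinfo | grep direct)"
out="direct rendering: Yes"
case "$out" in
  *"direct rendering: Yes"*) result="3D acceleration OK" ;;
  *)                         result="direct rendering NOT enabled" ;;
esac
echo "$result"
```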

Extra Configuration Options

Remove the nvidia splash screen

Normally when X starts with the nvidia drivers installed, a splash screen is shown. This can be removed by setting the NoLogo option to "true" as shown in the example below.

File: xorg.conf: Disable nvidia splash screen
 Section "Device"
  Identifier "GeForce2 Pro/GTS"
  Driver     "nvidia"
  VideoRam   65536
  Option     "NoLogo" "true"
 EndSection

Activating NV30 Emulation for lower architectures

Unless you know what this does, you should not use it; it will not increase performance. It is possible (even if no longer advertised on nVidia's site) to emulate the NV30 architecture on older cards. For example, you can run FX pixel shaders on an NVIDIA GeForce2 Go. This is achieved by adding the "NVEmulate" option to the section of /etc/X11/xorg.conf that concerns the nvidia device:

Code: Enabling emulation of NV30 NVIDIA architecture
 Section "Device"
  ...
  Driver "nvidia"
  Option "NVEmulate" "30"
  ...
 EndSection

It may also be possible to use the value "40" instead of "30", which would suggest the NV40 architecture can be emulated as well, but this is unconfirmed. Keywords for search engines (since this information is difficult to find): __GL_NV30EMULATE, GL_NV30_EMULATE.


Activating Coolbits; Overclocking Controls for nVIDIA Settings

Warning: Proceed with caution! You can damage your GPU through over-clocking.

There are many fine pages about where to begin with overclocking; read several and fully understand what you are doing before altering the settings for your card.

Beginning with driver version 1.0-7664, Coolbits (support for GPU clock manipulation) is included.

To activate Coolbits, open xorg.conf in a text editor and add the following line in Section "Device" :

File: xorg.conf: Enabling Coolbits
Option   "Coolbits" "1"

Restart your X-server and nvidia-settings.

There will be a new item, Clock Frequencies, in the left-hand list of categories in nvidia-settings. Click the "Enable Overclocking" checkbox, then read and accept the license agreement.

You can now set the frequencies yourself or use the auto detect feature to find "optimal" values. The overclock settings will not survive restarting X.

To fix this, add this line to your ~/.xinitrc:

File: .xinitrc
nvidia-settings --assign "[gpu:0]/GPUOverclockingState=1" \
                --assign "[gpu:0]/GPU2DClockFreqs=<gpu clock>,<mem clock>" \
                --assign "[gpu:0]/GPU3DClockFreqs=<gpu clock>,<mem clock>" &

The first set of tags (GPU2DClockFreqs) is for 2D and the second set (GPU3DClockFreqs) is for 3D. Substitute <gpu clock> and <mem clock> with your desired GPU and memory clock frequencies, respectively. If you have a second graphics card in your system, add another line and change [gpu:0] to [gpu:1].

Manual Fan Control for nVIDIA Settings

Some combinations of nvidia cards and driver versions report that the fan speed is "variable", but never actually change the fan speed regardless of temperature. If you experience an unreasonably hot GPU and nvidia-settings reports your fan speed as "Variable" but it never leaves its assigned value, try the steps below.


It's probably a good idea to read about the CoolBits option before we begin. Take a look at the nvidia-settings manual (man nvidia-settings), and the nvidia-drivers manual, available at /usr/share/doc/nvidia-drivers-<VERSION>/html/xconfigoptions.html or http://us.download.nvidia.com/XFree86/Linux-x86/195.36.24/README/xconfigoptions.html (adjust the version in the URL as appropriate - be careful about looking at out-of-date documentation about the CoolBits option!)

Warning: The CoolBits setting is also used for overclocking - make sure that you have checked this guide against the docs linked above (update this article if things have changed)
File: /etc/X11/xorg.conf
Section "Device"
     ...
     Option "Coolbits" "4"
     ...
EndSection

If your card is described in multiple "Device" sections, put the above in each of them.

Warning: Be careful setting fan speed manually - it is possible to break your card by letting it get too hot!

Inside X, run nvidia-settings. You should now find "GPU Fan Settings" controls in the "Thermal Settings" section. My suggestion is to crank this up to 100.

You may also modify your fan speed from the command line:

Enable GPU fan control:

nvidia-settings -a [gpu:0]/GPUFanControlState=1

Find out the fan's resource id using:

nvidia-settings -q fans

Then set the speed using:

nvidia-settings -a [fan:0]/GPUCurrentFanSpeed=<n>

Where <n> is percentage of full speed.
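If you script this, it may be worth clamping the requested value first. This is a defensive sketch of my own (nvidia-settings does not require it); the helper name is made up:

```shell
# Defensive sketch (not required by nvidia-settings): clamp a requested
# fan speed to the 0-100 percent range before passing it on.
clamp_speed() {
  n=$1
  [ "$n" -lt 0 ] && n=0
  [ "$n" -gt 100 ] && n=100
  echo "$n"
}

# e.g.  nvidia-settings -a "[fan:0]/GPUCurrentFanSpeed=$(clamp_speed 150)"
clamp_speed 150   # 100
```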

These settings will not be permanent - to have them take effect every time that X is launched, add the below to your ~/.xinitrc

File: ~/.xinitrc
nvidia-settings \
	-a "[gpu:0]/GPUFanControlState=1" \
	-a "[fan:0]/GPUCurrentFanSpeed=100" &

KDE 4 users will need to add a symlink to ~/.xinitrc in the Autostart directory since ~/.xinitrc isn't sourced by KDM:

cd ~/.kde4/Autostart
ln -s ~/.xinitrc xinitrc
chmod +x ~/.xinitrc

If ~/.xinitrc is not being autostarted, then make sure your ~/.xinitrc has a shebang (#!/bin/sh) at the top.

Troubleshooting

Black/Blank screen when starting X

Symptoms: A black/blank screen when X starts, followed by the monitor going in standby mode after a moment. Ctrl + Alt + Backspace doesn't kill X and get you back to the console.

Problem: This issue is caused by bad refresh rate values given to X in your xorg.conf file.

Check the X.Org log file, usually /var/log/Xorg.0.log, to find which values are actually being used and where they're being obtained from.

In order to fix this, you will need to find the correct HorizSync and VertRefresh values for your monitor. Sources for possible values include:

  • Values that work with the vesa driver
  • Values given in the technical specs of your monitor (you should be able to find these in the manual, usually available on the manufacturer's website).
  • Values from other Linux installs (check the xorg.conf).
  • You can also try searching online for others' configurations for your monitor (search for your monitor's model and "xorg.conf" or "modeline").
  • The "nvidia-xconfig" utility may generate correct values.
  • You may need to tweak the Devices section to indicate which monitor is connected:
Code: Establishing the connected monitor
 Section "Device"
  ...
  Option "ConnectedMonitor" "CRT-1"  
  ...
 EndSection

Unable to validate video modes

If you have an older monitor with bad or no DDC/EDID information, nvidia-auto-select may fail to validate your perfectly good modelines that have worked for years, leaving you stranded with 1024x768, or worse. To fix this, add Metamodes to your monitor section like this:

Code: xorg.conf
Section "Monitor"
     ...
     Option "Metamodes" "1600x1200"
EndSection

This apparently tricks the nvidia driver into actually trying to find a good mode for your monitor, rather than just giving you a bad default.

Error: libnvidia-tls.so.1: cannot handle TLS data

After re-emerging the nvidia drivers several times, the glx module may fail to load for no apparent reason, with the error "libnvidia-tls.so.1: cannot handle TLS data". This issue is caused by two files being swapped: /usr/lib/opengl/nvidia/tls/libnvidia-tls.so.1.0.8762 and /usr/lib/opengl/nvidia/no-tls/libnvidia-tls.so.1.0.8762. The fix is quite simple: swap them back. Before trying this, check whether the libnvidia-tls.so.1.0.8762 file in the no-tls folder is smaller than the one in the tls folder. If it is, the files are already in the correct folders, so do not swap them. If it is not, swap them with these commands:

cd /usr/lib/opengl/nvidia
mv tls/libnvidia-tls.so.1.0.8762 tls/libnvidia-tls.so.1.0.8762.bak
mv no-tls/libnvidia-tls.so.1.0.8762 tls/libnvidia-tls.so.1.0.8762
mv tls/libnvidia-tls.so.1.0.8762.bak no-tls/libnvidia-tls.so.1.0.8762

Restart X and the glx module should load fine this time. If not, update your glibc!

UDev Users: Fix Device Creation Problem

Udev doesn't like nVidia... or maybe nVidia doesn't like Udev. Either way, you have to run 'NVmakedevices.sh' to build the character devices that allow your computer to access your card. Here's the rub. You'll probably have to run NVmakedevices.sh every time you boot up your computer. Which isn't difficult. Just do the following:

echo 'NVmakedevices.sh' >> /etc/conf.d/local.start

Ok, problem solved. Your local.start script will run NVmakedevices.sh during boot, before the computer switches to your default runlevel so you're safe to have your computer boot into GDM or whatever graphical login manager you choose.

dmesg or building the module returns unknown symbol errors

dmesg output gives something like this:

Code: dmesg
nvidia: module license 'NVIDIA' taints kernel.
nvidia: Unknown symbol remap_page_range
nvidia: Unknown symbol pci_find_class
nvidia: Unknown symbol remap_page_range
nvidia: Unknown symbol pci_find_class
nvidia: Unknown symbol remap_page_range

Possible solutions:

  • Disable ccache
    • Most likely the fault is ccache (see the relevant forum thread and bug report). Disable it when rebuilding this module by issuing the command:
      FEATURES="-ccache" emerge -a nvidia-drivers
  • Another cause could be the version of your nvidia driver or of the kernel. Simply try another driver/kernel.
  • If the problem persists, you might try this:

Edit your kernel menuconfig and check:

Linux Kernel Configuration: SiS Kernel Support
Device drivers  --->
    Character devices  --->
        [*] SiS chipset support
        [*] SiS video cards

This might help because pci_find_class is in the drivers/video/sis/sis.h file.

When you attempt to load the kernel module, you receive an "insmod: error inserting ... Invalid module format" error

This type of error can be diagnosed by running:
dmesg

The output from this command will indicate the source of the problem.

Common reasons for this error include:

  • Using the wrong kernel preemption option.
  • Your kernel module was compiled with a different gcc version than your kernel. In this case simply re-emerge the kernel and nvidia kernel module.

Preemption

In this case you'll probably see a message like: should be "<arch> <kernel> preempt <gcc version>"

If the kernel is not compiled to be preemptible, trying to insert the nvidia.ko module will fail. To make the kernel preemptible, use your favorite text editor to edit your kernel's configuration file (.config by default) and make sure it contains CONFIG_PREEMPT=y, with all other CONFIG_PREEMPT-type options commented out.
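The .config check can be done with a one-line grep. In the sketch below the config text is inlined so it runs anywhere; on a live system you would read /usr/src/linux/.config instead:

```shell
# Check kernel config text for CONFIG_PREEMPT=y (exact whole-line match,
# so CONFIG_PREEMPT_VOLUNTARY etc. do not count).
preempt_enabled() {  # reads kernel config text on stdin
  grep -qx 'CONFIG_PREEMPT=y'
}

# Live system:  preempt_enabled < /usr/src/linux/.config
printf 'CONFIG_PREEMPT_VOLUNTARY=n\nCONFIG_PREEMPT=y\n' | preempt_enabled \
  && echo "preemptible kernel" || echo "not preemptible"
```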

X freezes when running glxinfo or OpenGL apps

It may happen that X.org freezes every time you launch an OpenGL app; even glxinfo triggers it. This is caused by the nvidia driver oopsing in the background, which you can watch by tailing /var/log/messages over ssh. One cause is the nvidia drivers not getting along with a PaX-enabled kernel (hardened-sources) or an enabled NX/XD bit (NoeXecute/eXecuteDisable) in your BIOS. If you have no special reason to keep this enabled, disable it and nvidia will work, at the cost of some security. Otherwise there are several patches floating around the nvnews.net forums.

Whole system freezes on Logout or Switching to Console

A workaround for this problem is to disable the framebuffer console, either by compiling a kernel without vesafb-tng or (if you use standard vesafb) by not using a "vga=" kernel parameter in your bootloader.


50 Hz refresh rate

If the refresh rate is shown as 50 Hz and you know it shouldn't be, disable DynamicTwinView in /etc/X11/xorg.conf, in the Device or Screen section. This option is also described in /usr/share/doc/nvidia-drivers-*/README.bz2.

Code: xorg.conf
Section "Device"   (or Section "Screen")
    ...
    Option "DynamicTwinView" "False"
    ...
EndSection

nvidia_drv.so: undefined symbol with xorg-server-1.5

x11-drivers/nvidia-drivers-71.86.09 does not support x11-base/xorg-server-1.5.

The following error is produced:

File: /var/log/Xorg.0.log
(II) Loading /usr/lib/xorg/modules/drivers//nvidia_drv.so
dlopen: /usr/lib/xorg/modules/drivers//nvidia_drv.so: undefined symbol: Allocate$
(EE) Failed to load /usr/lib/xorg/modules/drivers//nvidia_drv.so
(II) UnloadModule: "nvidia"

The solution is to downgrade to x11-base/xorg-server-1.3.

Blank Screen When Switching From X to Console

When you're using the nVidia binary driver, it may at times conflict with the default kernel (tty) console, causing it to show blank (e.g. when using "chvt 1"). The console still works; it's just blank and not viewable. Blind typing will work.

If you really want a console, a workaround is to configure the kernel for a tty serial console. This requires a null-modem serial (DB9) cable. The default is to connect it from COM1 (/dev/ttyS0) to COM1 on the other computer.

File: /usr/src/linux/.config
CONFIG_SERIAL_8250_CONSOLE=y

Then, configure boot kernel parameters. For example:

File: /boot/grub/menu.lst
title Gentoo Console on ttyS0
#:0 <-- type: 0 => linux, 1 => windows, 2 => other
root (hd0,1)
kernel /boot/vmlinuz root=/dev/sda2 no_console_suspend console=ttyS0,115200n8 console=tty0 loglevel=7 print-fatal-signals=1 resume2=swap:/dev/sda1 video=uvesafb:off

Emerge & Configure kermit on the remote computer:

# emerge kermit

As user, not root:

File: ~/.kermrc
set modem type none
set line /dev/ttyS0
set speed 115200
set carrier-watch off
log session ~/kermit/session.log

Start kermit as user and type "connect" and reboot your other computer with its nvidia driver.

This will get working dmesg output to the remote computer. To get a login tty terminal for logging in, add:

File: /etc/inittab
s0:12345:respawn:/sbin/agetty -L 115200 ttyS0

This will make init reload its configuration file (hopefully without rebooting):

# init q

Bingo! A console TTY terminal to go! The nice thing about this is that you can plug and unplug the serial cable at any time, leaving the exported terminal active. If you enjoy this, check out KGDB. ;-)

(If you really want, you can also export the init startup info printed on console, but it's a one way deal. You won't see it on both monitors if you export it to the remote computer. I find it unnecessary screen clutter.)

See Also