Bumblebee


From the Bumblebee FAQ:

"Bumblebee is an effort to make NVIDIA Optimus work on GNU/Linux systems, allowing two graphics cards with different power consumption profiles to be used together while sharing a single framebuffer."

Bumblebee: Optimus for Linux

Optimus Technology is a hybrid graphics implementation that does not rely on a hardware multiplexer. The discrete GPU renders on demand and passes the result to the integrated GPU, which handles the display. When the laptop runs on battery, the discrete GPU is powered off to extend battery life. The technology can also be used on desktop machines with an Intel integrated GPU and an NVIDIA discrete GPU.

Bumblebee implements this functionality in software and consists of two parts:

  • Render programs on the discrete GPU and display the output on the screen through the integrated GPU. This is achieved with VirtualGL or primus (see the following sections), which connect to an X server dedicated to the discrete GPU.
  • Disable the discrete GPU when it is idle (see #Power management).

Bumblebee tries to mimic the behavior of Optimus Technology: the discrete GPU is used for rendering when needed and turned off when it is not. The current release only supports rendering on demand; automatically starting demanding applications on the discrete card is still in development.

Installation

Before installing Bumblebee, make sure Optimus is enabled in your BIOS (sometimes listed as switchable or shared graphics; not every BIOS offers the option), and install the Intel driver for the secondary graphics card.

A complete setup requires these packages:

  • bumblebee (or bumblebee-git) - the main package, providing the daemon and client programs.
  • (optional) bbswitch (or dkms-bbswitch) - recommended; used to power off the NVIDIA card to save energy.
  • (optional) If you want more than power saving, i.e. to render applications on the discrete NVIDIA card, you also need:
    • a driver for the NVIDIA card: the open-source nouveau driver or the proprietary NVIDIA driver, see the sections below.
    • a render/display bridge. Two packages currently provide one: primus (or primus-git) and virtualgl. Only one of them is required, but installing both side by side is fine.
Note: If you want to run 32-bit applications on a 64-bit system, you must install the appropriate lib32-* libraries for them. In addition, install lib32-virtualgl or lib32-primus (or lib32-primus-git), matching the render bridge you chose. If you use primus, run your applications with primusrun instead of optirun.

Installing Bumblebee with Intel / NVIDIA

Install intel-dri (now part of mesa), xf86-video-intel, bumblebee and nvidia. Even if intel-dri and xf86-video-intel are already installed, install them together in the same transaction to avoid a dependency conflict between intel-dri and nvidia.

# pacman -S intel-dri xf86-video-intel bumblebee nvidia

If you want to run 32-bit applications (for example games under wine) on a 64-bit system, install lib32-nvidia-utils. (If you also want to use primusrun, additionally install lib32-intel-dri, now part of lib32-mesa.)

# pacman -S lib32-nvidia-utils
Warning: Do not install lib32-nvidia-libgl! Bumblebee finds the correct 32-bit NVIDIA libraries without it.

Installing Bumblebee with Intel / nouveau

Install the following packages:

Starting Bumblebee

Before use, make sure to add the relevant user to the bumblebee group:

# gpasswd -a $USER bumblebee

where $USER is the login name of the user to be added. Then log out and log back in for the group change to take effect.
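As a quick sanity check after logging back in, you can verify the membership from a shell. This is only a sketch: the sample group list below stands in for the output of `id -nG` on a real system.

```shell
# On a real system, replace the sample string with:
#   groups_list=$(id -nG "$USER")
groups_list="wheel video bumblebee"      # sample output of `id -nG`
if printf '%s\n' $groups_list | grep -qx bumblebee; then
    echo "user is in the bumblebee group"
else
    echo "not a member yet -- log out and back in"
fi
```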

Tip: To start Bumblebee automatically at boot, enable the bumblebeed systemd service:
# systemctl enable bumblebeed.service

After a reboot, Optimus rendering on the NVIDIA card is available through the optirun command!

If you only want to disable the discrete card, install bbswitch and stop here: by default, the bumblebeed daemon powers off the discrete card via bbswitch when it starts. See the #Power management section.

Usage

The optirun program shipped with Bumblebee is the preferred way to run applications on the Optimus NVIDIA card.

Test whether Bumblebee works with your Optimus system:

$ optirun glxgears -info

If the terminal output mentions your NVIDIA card, congratulations: Bumblebee and Optimus are working.
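If you want to check the active renderer from a script, you can parse the "OpenGL renderer string" line that glxinfo prints. A minimal sketch, using a sample line in place of the real `optirun glxinfo | grep "OpenGL renderer"` output:

```shell
# On a real system:
#   renderer_line=$(optirun glxinfo | grep "OpenGL renderer")
renderer_line='OpenGL renderer string: GeForce GT 540M/PCIe/SSE2'   # sample
renderer=${renderer_line#*: }                 # strip the label
case $renderer in
    *GeForce*|*NVIDIA*|*Quadro*) echo "rendering on the NVIDIA card" ;;
    *)                           echo "rendering on the integrated card" ;;
esac
```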

General usage:

$ optirun [options] application [application-parameters]

Some examples:

Start Firefox accelerated by Optimus:

$ optirun firefox

Start a Windows application:

$ optirun wine "windows application.exe"

Open NVIDIA settings:

$ optirun -b none nvidia-settings -c :8

For a list of the options for optirun, run:

$ optirun --help

Thanks to its better performance, primus is about to become the default render bridge for Bumblebee and will eventually be run through optirun. For now it needs a separate command (which does not accept optirun's options):

$ primusrun glxgears
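Until primus becomes the default, a small wrapper can pick whichever bridge is installed. A sketch: the `have_primus` flag is hard-coded here, where a real script would test `command -v primusrun` instead.

```shell
# Pick the render bridge. On a real system, set the flag with:
#   command -v primusrun >/dev/null 2>&1 && have_primus=yes
have_primus=yes                      # hard-coded for illustration
if [ "$have_primus" = yes ]; then
    bridge=primusrun
else
    bridge=optirun
fi
echo "$bridge glxgears"              # a real wrapper would exec this command
```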

Configuration

You can configure the behaviour of Bumblebee to fit your needs. Fine tuning such as speed optimization, power management and other tasks can be done through /etc/bumblebee/bumblebee.conf.

Speed optimization

Bumblebee renders frames for your Optimus NVIDIA card on an invisible X server set up with VirtualGL and transports them back to your visible X server. Frames are compressed before they are transported, which saves bandwidth and is the main knob for tuning Bumblebee for speed.

To use a different compression method for a single application:

$ optirun -c <compress-method> application

The compression method affects CPU and GPU usage: compressed methods (such as jpeg) load the CPU the most but the GPU the least, while uncompressed methods load the GPU the most and the CPU the least.

Compressed methods: jpeg, rgb, yuv

Uncompressed methods: proxy, xv

To use a standard compression method for all applications, set VGLTransport to the desired compress-method in /etc/bumblebee/bumblebee.conf:

/etc/bumblebee/bumblebee.conf
[...]
[optirun]
VGLTransport=proxy
[...]
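To find the fastest transport for your hardware, you can benchmark each method with a test application such as glxspheres and compare the reported frame rates. The loop below only prints the commands to run, since the actual results depend on your system:

```shell
# Print one benchmark command per transport method; run each by hand
# and compare the FPS figures that glxspheres reports.
for method in jpeg rgb yuv proxy xv; do
    echo "optirun -c $method glxspheres"
done
```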

You can also play with the way VirtualGL reads back the pixels from your graphics card. Setting the VGL_READBACK environment variable to pbo should increase performance. Compare the two:

# PBO should be faster.
VGL_READBACK=pbo optirun glxspheres
# The default value is sync.
VGL_READBACK=sync optirun glxspheres
Note: CPU frequency scaling can affect rendering performance.

Power management

The goal of power management is to power off the NVIDIA card when Bumblebee no longer uses it. If bbswitch is installed, it will be detected automatically when the Bumblebee daemon starts. No additional configuration is necessary.

Default NVIDIA power state

The default behavior of bbswitch is to leave the card power state unchanged. bumblebeed does disable the card when started, so the following is only necessary if you use bbswitch without bumblebeed.

Set the load_state and unload_state module options according to your needs (see the bbswitch documentation).

/etc/modprobe.d/bbswitch.conf
options bbswitch load_state=0 unload_state=1
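You can query the current power state of the discrete card through /proc/acpi/bbswitch. The snippet below parses a sample line standing in for the file's contents on a real system:

```shell
# On a real system: state_line=$(cat /proc/acpi/bbswitch)
state_line="0000:01:00.0 OFF"        # sample bbswitch output
state=${state_line##* }              # keep the last word (ON/OFF)
echo "discrete card is $state"
```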

Enable the NVIDIA card on shutdown

The NVIDIA card may not initialize correctly during boot if it was powered off when the system was last shut down. One option is to set TurnCardOffAtExit=false in /etc/bumblebee/bumblebee.conf, but this powers the card on whenever you stop the Bumblebee daemon, even when you stop it manually. To ensure the NVIDIA card is always powered on at shutdown, add the following systemd service (assuming you use bbswitch):

/etc/systemd/system/nvidia-enable.service
[Unit]
Description=Enable NVIDIA card
DefaultDependencies=no

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'echo ON > /proc/acpi/bbswitch'

[Install]
WantedBy=shutdown.target

Then enable the service at startup by running systemctl enable nvidia-enable.service as root.

Multiple monitors

Outputs wired to the Intel chip

You can set up multiple monitors with xorg.conf. Configure them to use the Intel card; Bumblebee can still use the NVIDIA card. The example configuration below is for two 1080p screens, one of them connected via the HDMI output.

/etc/X11/xorg.conf
Section "Screen"
    Identifier     "Screen0"
    Device         "intelgpu0"
    Monitor        "Monitor0"
    DefaultDepth    24
    Option         "TwinView" "0"
    SubSection "Display"
        Depth          24
        Modes          "1920x1080_60.00"
    EndSubSection
EndSection

Section "Screen"
    Identifier     "Screen1"
    Device         "intelgpu1"
    Monitor        "Monitor1"
    DefaultDepth   24
    Option         "TwinView" "0"
    SubSection "Display"
        Depth          24
        Modes          "1920x1080_60.00"
    EndSubSection
EndSection

Section "Monitor"
    Identifier     "Monitor0"
    Option         "Enable" "true"
EndSection

Section "Monitor"
    Identifier     "Monitor1"
    Option         "Enable" "true"
EndSection

Section "Device"
    Identifier     "intelgpu0"
    Driver         "intel"
    Option         "XvMC" "true"
    Option         "UseEvents" "true"
    Option         "AccelMethod" "UXA"
    BusID          "PCI:0:2:0"
EndSection

Section "Device"
    Identifier     "intelgpu1"
    Driver         "intel"
    Option         "XvMC" "true"
    Option         "UseEvents" "true"
    Option         "AccelMethod" "UXA"
    BusID          "PCI:0:2:0"
EndSection

Section "Device"
    Identifier "nvidiagpu1"
    Driver "nvidia"
    BusID "PCI:0:1:0"
EndSection

You may need to adjust the BusID:

$ lspci | grep VGA
00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)

The BusID here is 0:2:0.
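Note that lspci prints the slot in hexadecimal bus:device.function form, while xorg.conf expects a decimal PCI:bus:device:function BusID. A sketch of the conversion, using a sample lspci line for a discrete card in place of real output:

```shell
# Sample line standing in for `lspci | grep VGA` output on a real system.
line="01:00.0 VGA compatible controller: NVIDIA Corporation GF119M"
slot=${line%% *}                     # -> 01:00.0
bus=${slot%%:*}                      # hex bus number
rest=${slot#*:}
dev=${rest%%.*}                      # hex device number
fn=${rest#*.}                        # function number
printf 'BusID "PCI:%d:%d:%d"\n' "0x$bus" "0x$dev" "0x$fn"
```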

Output wired to the NVIDIA chip

On some notebooks, the digital video output (HDMI or DisplayPort) is hardwired to the NVIDIA chip. If you want to use all the displays on such a system simultaneously, you have to run two X servers. The first uses the Intel driver for the notebook's panel and a display connected via VGA. The second is started through optirun on the NVIDIA card and drives the digital display.

There are currently several sets of instructions on the web describing how to make such a setup work. One can be found on the Bumblebee wiki page. Another approach is described below.

xf86-video-intel-virtual-crtc and hybrid-screenclone

This method uses a patched Intel driver extended with a VIRTUAL display, and the program hybrid-screenclone, which copies the content of the virtual display over to a second X server running on the NVIDIA card via optirun. Credit goes to Triple-head monitors on a Thinkpad T520, which has a detailed explanation of how this is done on an Ubuntu system.

For simplicity, DP is used below to refer to the digital output (DisplayPort). The instructions should be the same if the notebook has an HDMI port instead.

  • Set the system to use the NVIDIA card exclusively, test the DP/monitor combination and generate xorg.nvidia.conf. This step is not required, but recommended if your system BIOS has an option to switch the graphics into NVIDIA-only mode. To do this, first uninstall the bumblebee package and install just the NVIDIA driver. Then reboot, enter the BIOS and switch the graphics to NVIDIA-only. When back in Arch, connect your monitor to DP and use startx to test if it works in principle. Use Xorg -configure to generate an xorg.conf file for your NVIDIA card. This will come in handy further down below.
  • Reinstall bumblebee and bbswitch, reboot and set the system graphics back to Hybrid in the BIOS.
  • Install xf86-video-intel-virtual-crtc, and replace your xf86-video-intel package with it.
  • Install screenclone-git
  • Change these bumblebee.conf settings:
/etc/bumblebee/bumblebee.conf
KeepUnusedXServer=true
Driver=nvidia
Note: Leave the PMMethod set to "bumblebee". This is contrary to the instructions linked in the article above, but on Arch this option needs to be left alone so that the bbswitch module is loaded automatically.
  • Copy the xorg.conf generated in Step 1 to /etc/X11 (e.g. /etc/X11/xorg.nvidia.conf). In the [driver-nvidia] section of bumblebee.conf, change XorgConfFile to point to it.
  • Test if your /etc/X11/xorg.nvidia.conf is working with startx -- -config /etc/X11/xorg.nvidia.conf
  • In order for your DP Monitor to show up with the correct resolution in your VIRTUAL Display you might have to edit the Monitor section in your /etc/xorg.nvidia.conf. Since this is extra work, you could try to continue with your auto-generated file. Come back to this step in the instructions if you find that the resolution of the VIRTUAL Display as shown by xrandr is not correct.
    • First you have to generate a Modeline. You can use the tool amlc, which will generate a Modeline from a few basic parameters.
Example: 24" 1920x1080 Monitor
start the tool with amlc -c
Monitor Identifier: Samsung 2494
Aspect Ratio: 2
physical size[cm]: 60
Ideal refresh rate, in Hz: 60
min HSync, kHz: 40
max HSync, kHz: 90
min VSync, Hz: 50
max VSync, Hz: 70
max pixel Clock, MHz: 400

This is the Monitor section which amlc generated for this input:

Section "Monitor"
    Identifier     "Samsung 2494"
    ModelName      "Generated by Another Modeline Calculator"
    HorizSync      40-90
    VertRefresh    50-70
    DisplaySize    532 299  # Aspect ratio 1.778:1
    # Custom modes
    Modeline "1920x1080" 174.83 1920 2056 2248 2536 1080 1081 1084 1149             # 174.83 MHz,  68.94 kHz,  60.00 Hz
EndSection  # Samsung 2494

Change your xorg.nvidia.conf to include this Monitor section. You can also trim down your file so that it only contains ServerLayout, Monitor, Device and Screen sections. For reference, here is mine:

/etc/X11/xorg.nvidia.conf
Section "ServerLayout"
        Identifier     "X.org Nvidia DP"
        Screen      0  "Screen0" 0 0
        InputDevice    "Mouse0" "CorePointer"
        InputDevice    "Keyboard0" "CoreKeyboard"
EndSection

Section "Monitor"
    Identifier     "Samsung 2494"
    ModelName      "Generated by Another Modeline Calculator"
    HorizSync      40-90
    VertRefresh    50-70
    DisplaySize    532 299  # Aspect ratio 1.778:1
    # Custom modes
    Modeline "1920x1080" 174.83 1920 2056 2248 2536 1080 1081 1084 1149             # 174.83 MHz,  68.94 kHz,  60.00 Hz
EndSection  # Samsung 2494

Section "Device"
        Identifier  "DiscreteNvidia"
        Driver      "nvidia"
        BusID       "PCI:1:0:0"
EndSection

Section "Screen"
        Identifier "Screen0"
        Device     "DiscreteNvidia"
        Monitor    "Samsung 2494"
        SubSection "Display"
                Viewport   0 0
                Depth     24
        EndSubSection
EndSection
  • Plug in both external monitors and run startx. Look at your /var/log/Xorg.0.log. Check that your VGA monitor is detected there with the correct modes. You should also see a VIRTUAL output with modes showing up.
  • Run xrandr and three displays should be listed there, along with the supported modes.
  • If the listed Modelines for your VIRTUAL display do not include your monitor's native resolution, make a note of the exact output name, for example VIRTUAL1. Then have a look at the Xorg.0.log file again. You should see a message: "Output VIRTUAL1 has no monitor section" there. We will change this by putting a file with the needed Monitor section into /etc/X11/xorg.conf.d. Exit and restart X afterwards.
/etc/X11/xorg.conf.d/20-monitor_samsung.conf
Section "Monitor"
    Identifier     "VIRTUAL1"
    ModelName      "Generated by Another Modeline Calculator"
    HorizSync      40-90
    VertRefresh    50-70
    DisplaySize    532 299  # Aspect ratio 1.778:1
    # Custom modes
    Modeline "1920x1080" 174.83 1920 2056 2248 2536 1080 1081 1084 1149             # 174.83 MHz,  68.94 kHz,  60.00 Hz
EndSection  # Samsung 2494
  • Turn the NVIDIA card on by running: sudo tee /proc/acpi/bbswitch <<< ON
  • Start another X server for the DisplayPort monitor: sudo optirun true
  • Check the log of the second X server in /var/log/Xorg.8.log
  • Run xrandr to set up the VIRTUAL display to be the right size and placement, eg.: xrandr --output VGA1 --auto --rotate normal --pos 0x0 --output VIRTUAL1 --mode 1920x1080 --right-of VGA1 --output LVDS1 --auto --rotate normal --right-of VIRTUAL1
  • Take note of the position of the VIRTUAL display in the list of outputs shown by xrandr. The counting starts from zero, i.e. if it is the third display shown, you would pass -x 2 to screenclone. (Note: this might not always be correct. If you see your internal laptop display cloned on the monitor, try -x 2 anyway.)
  • Clone the contents of the VIRTUAL display onto the X server created by bumblebee, which is connected to the DisplayPort monitor via the NVIDIA chip:
screenclone -d :8 -x 2

That's it, all three displays should be up and running now.
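The -x index can also be derived from xrandr output instead of counting by hand. A sketch, using a sample listing in place of real xrandr output; the output names are assumptions for illustration:

```shell
# Sample standing in for `xrandr | grep " connected"` on a real system.
outputs="VGA1 connected
VIRTUAL1 connected
LVDS1 connected"
# Line number of the VIRTUAL output, counting from 1...
n=$(printf '%s\n' "$outputs" | grep -n '^VIRTUAL' | cut -d: -f1)
# ...minus one, since screenclone counts outputs from 0.
echo "screenclone -d :8 -x $((n - 1))"
```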

Switch between discrete and integrated like Windows

In Windows, Optimus works with a whitelist, maintained by NVIDIA, of applications that require the discrete card, and you can add applications to this whitelist as needed. When you launch an application, the driver automatically decides which card to use.

To mimic this behavior on Linux, you can use libgl-switcheroo-git. After installing it, add the lines below to your .xprofile:

~/.xprofile
mkdir -p /tmp/libgl-switcheroo-$USER/fs
gtkglswitch &
libgl-switcheroo /tmp/libgl-switcheroo-$USER/fs &

To enable this, add the line below to the shell you intend to launch applications from (adding it to the .xprofile file also works):

export LD_LIBRARY_PATH=/tmp/libgl-switcheroo-$USER/fs/\$LIB${LD_LIBRARY_PATH+:}$LD_LIBRARY_PATH

Once this is done, every application you launch from this shell will pop up a GTK+ window asking which card to run it with (you can also add an application to the whitelist in the configuration). The configuration is located in $XDG_CONFIG_HOME/libgl-switcheroo.conf, usually ~/.config/libgl-switcheroo.conf.

Using CUDA without Bumblebee

This is not well documented, but you do not need Bumblebee to use CUDA. It may even work on machines where optirun fails. For how to get CUDA working on a Lenovo IdeaPad Y580 (with a GeForce 660M), see: https://wiki.archlinux.org/index.php/Lenovo_IdeaPad_Y580#NVIDIA_Card. Those steps will most likely work on other machines as well (apart from a small acpi-related change, which may not even be needed).

Troubleshooting

Note: Please report bugs to the Bumblebee-Project as described on its wiki.

[VGL] ERROR: Could not open display :8

There is a known problem with some wine applications that fork and kill the parent process without keeping track of it (for example the free-to-play online game "Runes of Magic").

This is a known problem with VirtualGL. As of bumblebee 3.1, so long as you have it installed, you can use Primus as your render bridge:

$ optirun -b primus wine windows program.exe

If this does not work, an alternative workaround for this problem is:

$ optirun bash
$ optirun wine "windows program.exe"

If using the NVIDIA drivers, a fix for this problem is to edit /etc/bumblebee/xorg.conf.nvidia and change the ConnectedMonitor option to CRT-0.

[ERROR]Cannot access secondary GPU

No devices detected

Sometimes, running optirun returns:

[ERROR]Cannot access secondary GPU - error: [XORG] (EE) No devices detected.
[ERROR]Aborting because fallback start is disabled.

In this case, move the file /etc/X11/xorg.conf.d/20-intel.conf somewhere else and restart the bumblebeed daemon; it should then work. If you do need to change some features of the Intel module, a workaround is to move /etc/X11/xorg.conf.d/20-intel.conf to /etc/X11/xorg.conf.

It may also be necessary to comment out the driver line in /etc/X11/xorg.conf.d/10-monitor.conf.

If you are using the nouveau driver, try switching to the NVIDIA driver.

You might need to define the NVIDIA card somewhere (e.g. in a file under /etc/X11/xorg.conf.d), remembering to set the correct BusID as reported by lspci:

Section "Device"
    Identifier "nvidiagpu1"
    Driver "nvidia"
    BusID "PCI:0:1:0"
EndSection

NVIDIA(0): Failed to assign any connected display devices to X screen 0

If the console output is:

[ERROR]Cannot access secondary GPU - error: [XORG] (EE) NVIDIA(0): Failed to assign any connected display devices to X screen 0
[ERROR]Aborting because fallback start is disabled.


You can change this line in /etc/bumblebee/xorg.conf.nvidia:

Option "ConnectedMonitor" "DFP"

to:

Option "ConnectedMonitor" "CRT"

Could not load GPU driver

If the console output is:

[ERROR]Cannot access secondary GPU - error: Could not load GPU driver

and if you try to load the nvidia module you get:

modprobe nvidia
modprobe: ERROR: could not insert 'nvidia': Exec format error

You should try manually compiling the nvidia packages against your current kernel:

$ yaourt -Sb nvidia

This should do the trick.

Failed to initialize the NVIDIA GPU at PCI:1:0:0 (GPU fallen off the bus / RmInitAdapter failed!)

Add rcutree.rcu_idle_gp_delay=1 to the kernel parameters. Original topic can be found here.

ERROR: ld.so: object 'libdlfaker.so' from LD_PRELOAD cannot be preloaded: ignored

You probably want to start a 32-bit application with bumblebee on a 64-bit system. See the "Note" box in Installation.

Fatal IO error 11 (Resource temporarily unavailable) on X server

Change KeepUnusedXServer in /etc/bumblebee/bumblebee.conf from false to true. This happens when your program forks into the background and Bumblebee does not know anything about it.
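The change can be scripted with sed. A sketch that flips the setting in a throwaway copy of the file, so the real config stays untouched; on your system you would point sed at /etc/bumblebee/bumblebee.conf (as root) instead:

```shell
# Flip KeepUnusedXServer=false to true in a throwaway copy of the config.
tmp=$(mktemp)
printf 'KeepUnusedXServer=false\n' > "$tmp"   # stand-in for bumblebee.conf
sed -i 's/^KeepUnusedXServer=false/KeepUnusedXServer=true/' "$tmp"
cat "$tmp"
rm -f "$tmp"
```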

Video tearing

Video tearing is a common problem with Bumblebee. To fix it, enable vsync. It should be enabled by default on the Intel card, but verify that in the Xorg logs. To check whether vsync is enabled on the NVIDIA card, run:

$ optirun nvidia-settings -c :8

The X Server XVideo Settings -> Sync to VBlank and OpenGL Settings -> Sync to VBlank settings should both be enabled. The Intel card generally tears less, so use it for video playback, especially when decoding video with VA-API (for example mplayer-vaapi with the -vsync option).

See Intel for instructions on fixing tearing on the Intel card.

If it is still not fixed, try disabling compositing in your desktop environment. Also try disabling triple buffering.
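The vsync state can also be checked from a script by querying nvidia-settings and parsing the result. A sketch with a sample query line; the attribute name (SyncToVBlank) and the exact output format are assumptions based on typical nvidia-settings -q output, so verify them on your system:

```shell
# On a real system, something like:
#   sample=$(optirun nvidia-settings -c :8 -q SyncToVBlank)
sample="Attribute 'SyncToVBlank' (laptop:8.0): 1."   # sample query output
case $sample in
    *": 1."*) echo "Sync to VBlank is enabled" ;;
    *)        echo "Sync to VBlank is disabled" ;;
esac
```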

Bumblebee cannot connect to socket

You might get something like:

$ optirun glxspheres
[ 1648.179533] [ERROR]You've no permission to communicate with the Bumblebee daemon. Try adding yourself to the 'bumblebee' group
[ 1648.179628] [ERROR]Could not connect to bumblebee daemon - is it running?

If you are already in the bumblebee group ($ groups | grep bumblebee), you may try removing the socket /var/run/bumblebeed.socket.

Important links

Join us in #bumblebee at freenode.net.