Archive for the 'Bits & Bytes' Category

Debian on Dell Latitude D830

January 12th, 2010

The Dell Latitude D830 is a quite capable business notebook. While the notebook itself is not considered high-end, it still allows for good configurations. A definite downside is the battery, which lasted only one year and had to be replaced with a new battery pack (not covered by warranty, of course).

All the following descriptions refer to the current Debian Squeeze (testing) using Linux kernel 2.6.32 (trunk-5) from the unstable repository.

As Dell allows different setups of the same hardware model, here are the specs:

Hardware Status
CPU: Intel Core 2 Duo T7500 (2.2GHz)
2MB/4MB L2 Cache, 800MHz FSB
All cores recognized. Hardware Virtualisation (VT extensions) working (must be enabled in the BIOS).
Memory: 2GB (2x1GB) 667MHz DDR2 SDRAM Works
Storage: Intel SATA IDE Controller Works
Using ata_piix module. AHCI untested.
Graphics: Nvidia Quadro NVS 135M Works
Using binary nvidia driver (nvidia-kernel-source from testing)
LAN: Integrated Broadcom Gigabit Ethernet Works
Uses in kernel tg3 driver.
WLAN: Broadcom BCM4328 802.11a/b/g/n Wireless Works
Uses broadcom-sta module, also working with ndiswrapper
Audio: Intel HD Audio Controller Works
Works using ALSA
Keyboard: Hotkeys Partially Working
Most Fn keys work, except those that do not produce key codes (Volume, Display Brightness); Sleep works.

For a different hardware configuration see this post.

Hardware List (as shown by lspci):

00:00.0 Host bridge: Intel Corporation Mobile PM965/GM965/GL960 Memory Controller Hub (rev 0c)
00:01.0 PCI bridge: Intel Corporation Mobile PM965/GM965/GL960 PCI Express Root Port (rev 0c)
00:1a.0 USB Controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #4 (rev 02)
00:1a.1 USB Controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #5 (rev 02)
00:1a.7 USB Controller: Intel Corporation 82801H (ICH8 Family) USB2 EHCI Controller #2 (rev 02)
00:1b.0 Audio device: Intel Corporation 82801H (ICH8 Family) HD Audio Controller (rev 02)
00:1c.0 PCI bridge: Intel Corporation 82801H (ICH8 Family) PCI Express Port 1 (rev 02)
00:1c.1 PCI bridge: Intel Corporation 82801H (ICH8 Family) PCI Express Port 2 (rev 02)
00:1c.3 PCI bridge: Intel Corporation 82801H (ICH8 Family) PCI Express Port 4 (rev 02)
00:1c.5 PCI bridge: Intel Corporation 82801H (ICH8 Family) PCI Express Port 6 (rev 02)
00:1d.0 USB Controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #1 (rev 02)
00:1d.1 USB Controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #2 (rev 02)
00:1d.2 USB Controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #3 (rev 02)
00:1d.7 USB Controller: Intel Corporation 82801H (ICH8 Family) USB2 EHCI Controller #1 (rev 02)
00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev f2)
00:1f.0 ISA bridge: Intel Corporation 82801HEM (ICH8M) LPC Interface Controller (rev 02)
00:1f.1 IDE interface: Intel Corporation 82801HBM/HEM (ICH8M/ICH8M-E) IDE Controller (rev 02)
00:1f.2 IDE interface: Intel Corporation 82801HBM/HEM (ICH8M/ICH8M-E) SATA IDE Controller (rev 02)
00:1f.3 SMBus: Intel Corporation 82801H (ICH8 Family) SMBus Controller (rev 02)
01:00.0 VGA compatible controller: nVidia Corporation Quadro NVS 135M (rev a1)
03:01.0 CardBus bridge: O2 Micro, Inc. Cardbus bridge (rev 21)
03:01.4 FireWire (IEEE 1394): O2 Micro, Inc. Firewire (IEEE 1394) (rev 02)
09:00.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5755M Gigabit Ethernet PCI Express (rev 02)
0c:00.0 Network controller: Broadcom Corporation BCM4328 802.11a/b/g/n (rev 03)

As most things work out of the box, I will only cover the setup of the graphics controller and the wireless device here.

Setup Nvidia Quadro NVS135M

First install the required packages, including module-assistant for building the kernel module (as root):

sh# aptitude install module-assistant nvidia-kernel-common nvidia-kernel-source nvidia-glx

Then build and load the required kernel module with module-assistant (this assumes you have already booted into the target kernel):

sh# m-a a-i nvidia
sh# modprobe nvidia

Be sure to configure your X server accordingly (here is a shortened xorg.conf):

Section "ServerLayout"
    Identifier     "Default Layout"
    Screen      0  "Screen0" 0 0
EndSection

Section "ServerFlags"
    Option         "Xinerama" "0"
EndSection

Section "Monitor"
    Identifier     "Generic Monitor"
    HorizSync       28.0 - 84.0
    VertRefresh     43.0 - 60.0
    Option         "DPMS"
EndSection

Section "Monitor"
    Identifier     "Monitor0"
    VendorName     "Unknown"
    ModelName      "Seiko"
    HorizSync       30.0 - 75.0
    VertRefresh     60.0
EndSection

Section "Device"
    Identifier     "Videocard0"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BoardName      "Quadro NVS 135M"
    Option         "AllowGLXWithComposite" "true"
EndSection

Section "Screen"
    Identifier     "Screen0"
    Device         "Videocard0"
    Monitor        "Monitor0"
    DefaultDepth    24
    Option         "TwinView" "0"
    Option         "metamodes" "1680x1050 +0+0; 1280x800 +0+0; 1024x768 +0+0; 800x600 +0+0; 640x480 +0+0"
    Option         "AddARGBGLXVisuals" "True"
    SubSection     "Display"
        Depth       24
    EndSubSection
EndSection

Setup Broadcom BCM4328 driver

First install the required packages, including module-assistant for building the kernel module (as root):

sh# aptitude install module-assistant broadcom-sta-common broadcom-sta-source

Then build and load the required kernel module (this assumes you have already booted into the target kernel):

sh# m-a a-i broadcom-sta
sh# modprobe wl

This installs and enables the broadcom-sta driver, which ships as the “wl.ko” kernel module. Be sure to unload all drivers using the device (e.g. ndiswrapper) before loading the wl.ko module.
With the network-manager package, a good mobile network configuration utility is available for both the CLI and the desktop (system tray).
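To keep conflicting drivers from claiming the device at boot, they can be blacklisted via modprobe configuration. A minimal sketch (the filename is only a suggestion; b43 and ssb are the in-kernel Broadcom drivers that typically conflict with wl):

```
# /etc/modprobe.d/broadcom-sta-blacklist.conf (suggested filename)
blacklist b43
blacklist ssb
blacklist ndiswrapper
```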


Perl cache modules performance

January 4th, 2010

There are a lot of cache modules available on the great CPAN already. The newest kid on the block is CHI, a Moose based intelligent and flexible caching solution with a very sane API and good design, separating the driver backends from the caching logic as much as possible.

The results show that Cache::FastMmap is by far the most efficient implementation. Cache::FastMmap (just like memory-based caches) is limited to the local host by design, whereas the Memcached-based caches allow distributed caching that can be accessed from various hosts. A distributed cache is a more flexible solution for scaling your application and its caching requirements. CHI supports subcaches, and the L1 subcache implementation is exactly what can be used to combine a fast local cache with a slower but persistent (across application restarts) cache.

It seems that CHI still has room for improvement in terms of efficiency, which is probably caused by using Moose and not doing any XS optimizations yet. Still, the CHI::L1 combination of memory and Memcached is quite efficient when dealing with a high number of cache reads compared to writes (see the 1:100 ratio in the test below).

Of course this benchmark does not mimic any real-world scenario (and most notably not yours), but it should give some overview of the overhead the caching layer itself imposes. Keys used for storing are always 36-character UUID strings. The values used for caching are separated into small, medium, and large datasets. Small values are random binary UUIDs (16 bytes), the medium dataset uses values 10 times longer (160 bytes), and the large dataset 100 times the UUID length (1600 bytes). The tests showing how set/get ratios relate always use the medium dataset (160-byte values); the ratios used are 1:1 (you should probably not do any caching in that situation anyway), 1:10, and 1:100.
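For reference, value payloads of exactly those three sizes can be generated with a short shell sketch (filenames are illustrative; /dev/urandom stands in for the random UUID data):

```shell
#!/bin/sh
# Create sample cache values in the benchmark's three size classes:
# small = 16 bytes (one raw UUID), medium = 160 bytes, large = 1600 bytes.
for size in 16 160 1600; do
    head -c "$size" /dev/urandom > "value-${size}.bin"
    echo "value-${size}.bin: $(wc -c < "value-${size}.bin") bytes"
done
```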

The script used to do the benchmarks is attached and is intended to be run with prove -v (it’s actually a test using Test::More and not cleaned up). If you find any obvious mistakes in how the benchmarks are generated, I would be interested to know.

The script used to generate these results is available here (it is being tuned to allow some graphing of the results).

The abbreviations used below are:

  • CHI:Mc:lIP … CHI::Driver::Memcached::libmemcached over IP
  • CHI:Mc:l … CHI::Driver::Memcached::libmemcached over Socket
  • CHI:L1 … CHI::Driver::Memcached::libmemcached (IP) with CHI::Driver::Memory L1 cache
  • CHI:FMmap … CHI::Driver::FastMmap
  • CHI:Mem … CHI::Driver::Memory (with max_size set)
  • C:Mc:lIP … Cache::Memcached::libmemcached over IP
  • C:Mc:l … Cache::Memcached::libmemcached over Socket
  • C:FMmap … Cache::FastMmap

And here are the results, generated on a Dell D830 dual-core laptop using perl-5.10.1 from Debian testing:

Benchmarking caches with ratio 1:10 and small values

              Rate CHI:Mc:lIP CHI:Mc:l CHI:L1 CHI:FMmap CHI:Mem C:Mc:lIP C:Mc:l C:FMmap
CHI:Mc:lIP  5107/s         --      -2%   -23%      -26%    -43%     -43%   -49%    -56%
CHI:Mc:l    5219/s         2%       --   -22%      -25%    -41%     -42%   -48%    -55%
CHI:L1      6669/s        31%      28%     --       -4%    -25%     -26%   -34%    -42%
CHI:FMmap   6920/s        36%      33%     4%        --    -22%     -23%   -31%    -40%
CHI:Mem     8885/s        74%      70%    33%       28%      --      -1%   -12%    -23%
C:Mc:lIP    8986/s        76%      72%    35%       30%      1%       --   -11%    -22%
C:Mc:l     10087/s        98%      93%    51%       46%     14%      12%     --    -12%
C:FMmap    11498/s       125%     120%    72%       66%     29%      28%    14%      --

Benchmarking caches with ratio 1:10 and medium values

              Rate CHI:Mc:lIP CHI:Mc:l CHI:L1 CHI:FMmap C:Mc:lIP CHI:Mem C:Mc:l C:FMmap
CHI:Mc:lIP  4628/s         --     -10%   -30%      -30%     -46%    -47%   -55%    -59%
CHI:Mc:l    5140/s        11%       --   -23%      -23%     -40%    -41%   -50%    -54%
CHI:L1      6639/s        43%      29%     --       -0%     -23%    -23%   -35%    -41%
CHI:FMmap   6643/s        44%      29%     0%        --     -23%    -23%   -35%    -41%
C:Mc:lIP    8615/s        86%      68%    30%       30%       --     -1%   -15%    -23%
CHI:Mem     8661/s        87%      69%    30%       30%       1%      --   -15%    -23%
C:Mc:l     10188/s       120%      98%    53%       53%      18%     18%     --     -9%
C:FMmap    11201/s       142%     118%    69%       69%      30%     29%    10%      --

Benchmarking caches with ratio 1:10 and large values

              Rate CHI:Mc:lIP CHI:Mc:l CHI:FMmap CHI:L1 CHI:Mem C:Mc:lIP C:Mc:l C:FMmap
CHI:Mc:lIP  4139/s         --      -5%      -28%   -28%    -45%     -47%   -56%    -60%
CHI:Mc:l    4380/s         6%       --      -24%   -24%    -42%     -44%   -54%    -57%
CHI:FMmap   5731/s        38%      31%        --    -1%    -24%     -26%   -40%    -44%
CHI:L1      5777/s        40%      32%        1%     --    -23%     -26%   -39%    -44%
CHI:Mem     7501/s        81%      71%       31%    30%      --      -4%   -21%    -27%
C:Mc:lIP    7779/s        88%      78%       36%    35%      4%       --   -18%    -24%
C:Mc:l      9484/s       129%     117%       65%    64%     26%      22%     --     -7%
C:FMmap    10230/s       147%     134%       78%    77%     36%      32%     8%      --

Benchmarking caches with ratio 1:1 and medium values

              Rate CHI:L1 CHI:Mem CHI:Mc:lIP CHI:Mc:l CHI:FMmap C:FMmap C:Mc:lIP C:Mc:l
CHI:L1      2192/s     --    -42%       -42%     -47%      -54%    -59%     -73%   -79%
CHI:Mem     3787/s    73%      --        -0%      -9%      -21%    -29%     -53%   -64%
CHI:Mc:lIP  3806/s    74%      0%         --      -8%      -20%    -29%     -53%   -64%
CHI:Mc:l    4155/s    90%     10%         9%       --      -13%    -23%     -48%   -60%
CHI:FMmap   4781/s   118%     26%        26%      15%        --    -11%     -41%   -54%
C:FMmap     5368/s   145%     42%        41%      29%       12%      --     -33%   -49%
C:Mc:lIP    8043/s   267%    112%       111%      94%       68%     50%       --   -23%
C:Mc:l     10441/s   376%    176%       174%     151%      118%     94%      30%     --

Benchmarking caches with ratio 1:10 and medium values

              Rate CHI:Mc:lIP CHI:Mc:l CHI:L1 CHI:FMmap CHI:Mem C:Mc:lIP C:Mc:l C:FMmap
CHI:Mc:lIP  4630/s         --      -7%   -28%      -30%    -45%     -48%   -55%    -59%
CHI:Mc:l    4953/s         7%       --   -23%      -25%    -42%     -44%   -52%    -56%
CHI:L1      6408/s        38%      29%     --       -3%    -25%     -27%   -37%    -43%
CHI:FMmap   6604/s        43%      33%     3%        --    -22%     -25%   -35%    -42%
CHI:Mem     8493/s        83%      71%    33%       29%      --      -4%   -17%    -25%
C:Mc:lIP    8834/s        91%      78%    38%       34%      4%       --   -14%    -22%
C:Mc:l     10218/s       121%     106%    59%       55%     20%      16%     --    -10%
C:FMmap    11298/s       144%     128%    76%       71%     33%      28%    11%      --

Benchmarking caches with ratio 1:100 and medium values

              Rate CHI:Mc:lIP CHI:Mc:l CHI:FMmap CHI:L1 C:Mc:lIP CHI:Mem C:Mc:l C:FMmap
CHI:Mc:lIP  4626/s         --     -10%      -34%   -44%     -47%    -53%   -56%    -64%
CHI:Mc:l    5141/s        11%       --      -27%   -38%     -42%    -48%   -51%    -60%
CHI:FMmap   7004/s        51%      36%        --   -15%     -20%    -30%   -33%    -45%
CHI:L1      8279/s        79%      61%       18%     --      -6%    -17%   -21%    -36%
C:Mc:lIP    8799/s        90%      71%       26%     6%       --    -12%   -16%    -32%
CHI:Mem     9943/s       115%      93%       42%    20%      13%      --    -6%    -23%
C:Mc:l     10525/s       128%     105%       50%    27%      20%      6%     --    -18%
C:FMmap    12849/s       178%     150%       83%    55%      46%     29%    22%      --

These results were produced with Perl 5.10.1; here is the perl -V output for reference:

Summary of my perl5 (revision 5 version 10 subversion 1) configuration:
    osname=linux, osvers=, archname=i486-linux-gnu-thread-multi
    uname='linux murphy #1 smp tue nov 10 09:21:59 cet 2009 i686 gnulinux '
    config_args='-Dusethreads -Duselargefiles -Dccflags=-DDEBIAN -Dcccdlflags=-fPIC -Darchname=i486-linux-gnu -Dprefix=/usr -Dprivlib=/usr/share/perl/5.10 -Darchlib=/usr/lib/perl/5.10 -Dvendorprefix=/usr -Dvendorlib=/usr/share/perl5 -Dvendorarch=/usr/lib/perl5 -Dsiteprefix=/usr/local -Dsitelib=/usr/local/share/perl/5.10.1 -Dsitearch=/usr/local/lib/perl/5.10.1 -Dman1dir=/usr/share/man/man1 -Dman3dir=/usr/share/man/man3 -Dsiteman1dir=/usr/local/man/man1 -Dsiteman3dir=/usr/local/man/man3 -Dman1ext=1 -Dman3ext=3perl -Dpager=/usr/bin/sensible-pager -Uafs -Ud_csh -Ud_ualarm -Uusesfio -Uusenm -DDEBUGGING=-g -Doptimize=-O2 -Duseshrplib -Dd_dosuid -des'
    hint=recommended, useposix=true, d_sigaction=define
    useithreads=define, usemultiplicity=define
    useperlio=define, d_sfio=undef, uselargefiles=define, usesocks=undef
    use64bitint=undef, use64bitall=undef, uselongdouble=undef
    usemymalloc=n, bincompat5005=undef
    cc='cc', ccflags ='-D_REENTRANT -D_GNU_SOURCE -DDEBIAN -fno-strict-aliasing -pipe -fstack-protector -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64',
    optimize='-O2 -g',
    cppflags='-D_REENTRANT -D_GNU_SOURCE -DDEBIAN -fno-strict-aliasing -pipe -fstack-protector -I/usr/local/include'
    ccversion='', gccversion='4.3.4', gccosandvers=''
    intsize=4, longsize=4, ptrsize=4, doublesize=8, byteorder=1234
    d_longlong=define, longlongsize=8, d_longdbl=define, longdblsize=12
    ivtype='long', ivsize=4, nvtype='double', nvsize=8, Off_t='off_t', lseeksize=8
    alignbytes=4, prototype=define
  Linker and Libraries:
    ld='cc', ldflags =' -fstack-protector -L/usr/local/lib'
    libpth=/usr/local/lib /lib /usr/lib /usr/lib64
    libs=-lgdbm -lgdbm_compat -ldb -ldl -lm -lpthread -lc -lcrypt
    perllibs=-ldl -lm -lpthread -lc -lcrypt
    libc=/lib/, so=so, useshrplib=true,
  Dynamic Linking:
    dlsrc=dl_dlopen.xs, dlext=so, d_dlsymun=undef, ccdlflags='-Wl,-E'
    cccdlflags='-fPIC', lddlflags='-shared -O2 -g -L/usr/local/lib -fstack-protector'

Characteristics of this binary (from libperl): 
  Built under linux
  Compiled at Nov 21 2009 22:39:09

Add benchmarking tests to MooseX::Log::Log4perl to verify overhead

May 20th, 2009

After a while I had the chance to get back to MooseX::Log::Log4perl, which is a role (based on Moose) that can easily be reused in classes requiring logging functionality.

While it is really simple to use, I still often found myself using the default logger approach directly, by creating a class variable and using that. So instead of:

use Moose;
with 'MooseX::Log::Log4perl';

sub whatever {
    my $self = shift;
    $self->log->debug("Here I am") if $self->log->is_debug;
}

the direct logger was mostly used in the classes:

use Log::Log4perl;
use vars qw($log);
$log = Log::Log4perl->get_logger(__PACKAGE__);

sub whatever {
    my $self = shift;
    $log->debug("Here I am") if $log->is_debug;
}

One reason was that during that time I was optimizing for speed and found a hotspot to be the additional method call for the “log” method. As Perl has some overhead in calling functions, this still holds true to some extent, so that’s why I added a benchmarking test to the test suite of MooseX::Log::Log4perl.

So if you have the chance, I’d like to see whether it still stays within the performance limits in your test environment (overhead lower than 5% compared to using Log::Log4perl directly). To run the test, simply get the sources and run it:

cpan> look MooseX::Log::Log4perl
shell# TEST_MAINT=1 prove -l -v t/99_bench.t
t/99_bench.t .. 
ok 1 - Bench instance for MooseX::Log::Log4perl isa BenchMooseXLogLog4perl
ok 2 - Bench instance for Log::Log4perl isa BenchLogLog4perl
                     Rate MooseX-L4p log MooseX-L4p logger Log4perl method Log4perl direct
MooseX-L4p log    21235/s             --               -0%             -4%             -6%
MooseX-L4p logger 21273/s             0%                --             -4%             -6%
Log4perl method   22102/s             4%                4%              --             -2%
Log4perl direct   22535/s             6%                6%              2%              --

If all tests pass, you stayed within the limits (around 95% of the performance of using Log4perl directly). I’d like to see your results, so please comment and include your comparison table.

Browsers benchmarked on Linux

May 13th, 2009

The system itself is a Dell D830 laptop with an Nvidia Quadro NVS 135M chip and an Intel Core 2 Duo T7500 CPU with 2GB RAM. For the actual benchmarking, the new Futuremark Peacekeeper browser benchmark has been put to use.

The compared browsers are:

  • Midori 0.1.4 (lightweight GTK browser using WebKit)
  • Iceweasel 3.0.9 (the Debian-rebranded Firefox 3.0.9)
  • Firefox 3.5b4 (latest beta of Firefox 3.5)

SunSpider results are:

RESULTS (means and 95% confidence intervals) for SunSpider 0.9
Firefox 3.5b4:               2405.6ms +/- 4.0%
Midori 0.1.4:                3917.2ms +/- 2.6%
Iceweasel 3.0.9:             4465.6ms +/- 1.6%

Here are the results for the Futuremark Peacekeeper browser benchmark:

The results are quite unexpected, as working with the browsers shows a different perceived quality. Although Midori scores highest in the Peacekeeper benchmark, the rendering of some sites is sometimes flaky and seems to take longer than with both Firefox versions. On the other hand, the SunSpider benchmark shows Midori far behind Firefox, which is also strange, as SunSpider comes from the WebKit project and should therefore be quite fast in a browser using WebKit.

Setting up your own simple debian repository

April 27th, 2009

In the process of getting a Debian repository ready to allow easy installation with Debian’s apt, a small series of posts is going to be created here. So here is the initial post, describing a very simple Debian repository layout.

All that’s required to set up your own Debian repository is the dpkg-dev package and a web or FTP server for serving the files. For a local deployment (e.g. within a company), a file-system-based approach through an NFS-mounted directory is also possible.

Make sure the dpkg-dev package is installed:

sh# aptitude install dpkg-dev

Copy all package files into a directory named binary on the server, so the layout will look something like this:

+-- debian
    +-- binary
    |   +-- myweb-2.0-1_i386.deb
    |   +-- myweb-utils-2.0-1_i386.deb
    +-- source

Here the packages in the binary directory can now be used to create the repository index, which can then be served as a Debian repository. To scan the packages and create the indices, use:

$ cd webserver-root
$ dpkg-scanpackages binary /dev/null | gzip -9c > binary/Packages.gz
$ dpkg-scansources source /dev/null | gzip -9c > source/Sources.gz

Now the repository can be used by adding the following to /etc/apt/sources.list:

deb http://your.server/debian binary/
deb-src http://your.server/debian source/

(replace your.server/debian with the actual location where the repository is served)

See for more details.

Dual-Screen setup with Xorg, RandR1.2 and ATI 9.2 (v8.528) Linux x86_64 drivers

February 22nd, 2009

After seeing that ATI has again released drivers for Linux with a new major version number (and not the scary .0 behind it), I thought it was time to battle the ATI dragons again. As Debian usually lags behind on the proprietary ATI stuff (who wonders…), I’m again using the binary packages provided by ATI.

The system is Debian testing, which is now codenamed “Squeeze”, as Lenny finally stabilized.
As the build requires libstdc++5, I had to reinstall this package:

sh# aptitude install libstdc++5

Downloaded the latest installer and ran the usual (the exact installer filename depends on the version downloaded):

sh# sh ./ati-driver-installer-*.run --buildpkg Debian/testing

which results in the following packages being built:

  • fglrx-amdcccle_8.582-1_amd64.deb
  • fglrx-driver_8.582-1_amd64.deb
  • fglrx-driver-dev_8.582-1_amd64.deb
  • fglrx-kernel-src_8.582-1_amd64.deb

Install those, and if RandR 1.2 is compiled into Xorg (which it usually is), your dual-screen setup will probably switch back to a cloned view. In case you never had a working dual-screen setup (called “big desktop” by ATI), run the following (and be sure to back up your /etc/X11/xorg.conf in case you have anything special in there); otherwise skip this section:

sh# dpkg-reconfigure -phigh xserver-xorg
sh# aticonfig --initial --desktop-setup=horizontal --overlay-on=1

Then be sure to have added a “Virtual” desktop size to your “Display” subsection like so (for two 1680×1050 screens, that’s 3360×1050):

Section "Screen"
        Identifier "aticonfig-Screen[0]-0"
        Device     "aticonfig-Device[0]-0"
        Monitor    "aticonfig-Monitor[0]-0"
        DefaultDepth     24
        SubSection "Display"
                Viewport   0 0
                Depth     24
                Virtual   3360 1050
        EndSubSection
EndSection
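The Virtual size is simply the sum of the panel widths by the taller of the two heights; a tiny arithmetic sketch (panel resolutions as assumed above):

```shell
#!/bin/sh
# Compute the RandR "Virtual" desktop size for two side-by-side screens.
W1=1680; H1=1050   # left screen
W2=1680; H2=1050   # right screen
VIRT_W=$((W1 + W2))            # widths add up horizontally
VIRT_H=$H1                     # height is the taller of the two panels
[ "$H2" -gt "$VIRT_H" ] && VIRT_H=$H2
echo "Virtual   $VIRT_W $VIRT_H"   # prints: Virtual   3360 1050
```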

After restarting X11 (Ctrl-Alt-Backspace), log in, open a terminal, and run:

sh# xrandr --output DFP2 --right-of DFP1

This will set up display DFP2 to be right of DFP1. Please check the correct display names to use with:

xrandr -q


Of course, you can also use the graphical tools arandr or grandr to accomplish this.

