<http://bugs.opensolaris.org/view_bug.do?bug_id=6685465>
This bug is caused by Xephyr not correctly handling the RGB byte order of
the host server that Xephyr is displaying on. The previous code just
assumed that the order was RGB and did not take into account that
X servers may use a different order (such as BGR).
The fix is to add a function to calculate the byte order and bits
to shift based on the visual mask and the visual bits_per_rgb (which
is usually 8, but could be server dependent). Since the shifts won't
change once the display connection has been made, I can cache these
values so that Xephyr doesn't have to keep recalculating them every time
it tries to translate the Xephyr colormap entries for Xephyr clients to
the actual server colormap entries (i.e. calling the function
hostx_set_cmap_entry() repeatedly for every colormap entry).
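Roughly, the shift calculation could look like this (the helper name is
illustrative, not necessarily what ends up in hostx.c):

    /* Hypothetical sketch: given one component mask of the host server's
     * visual (red, green or blue), work out how far a bits_per_rgb-wide
     * value has to be shifted left to land inside that mask.  Assumes a
     * non-zero, contiguous mask; a negative result means the value has to
     * be shifted right instead (narrow masks, e.g. 5-6-5 visuals). */
    static int
    color_shift_for_mask(unsigned long mask, int bits_per_rgb)
    {
        int shift = 0;

        while (mask && !(mask & 1)) {   /* position of the lowest set bit */
            mask >>= 1;
            shift++;
        }
        while (mask & 1) {              /* plus the width of the mask */
            mask >>= 1;
            shift++;
        }
        return shift - bits_per_rgb;
    }

The three shifts (and masks) would be computed once right after the display
connection is set up and cached, so hostx_set_cmap_entry() can place each
component using the cached values instead of hard-coding an RGB layout.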
RandR 1.1 has a physical size for each mode. It used to be that the DIX would
remember these modes and pass them back up to the DDX when changing the screen
configuration. The DDX uses RR_GET_MODE_MM to query the driver for the physical
dimensions of the screen, allowing it to preserve the DPI.
With RandR 1.2, the physical dimensions are stored as part of the output, rather
than per mode. The DIX only uses the sizes passed in from the DDX to select the
mode pool for the "default" output, and forgets the physical sizes. Then, when
reconfiguring the screen, it makes up a new RRScreenSizeRec using the dimensions
from the output, screwing up the DPI.
This change works around this problem by ignoring the DIX and querying the real
size from the driver.
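In code, the workaround boils down to something like this (a sketch only;
the field and function names are assumptions, not a copy of the RandR 1.1
DDX code):

    #include "xf86.h"

    /* Illustrative helper: before applying a new configuration, ignore the
     * millimeter size the DIX made up and ask the driver for the mode's
     * real physical dimensions through the RR_GET_MODE_MM driver func. */
    static void
    get_real_size_mm(ScrnInfoPtr pScrn, DisplayModePtr mode,
                     int *mmWidth, int *mmHeight)
    {
        if (pScrn->DriverFunc) {
            xorgRRModeMM RRModeMM;

            RRModeMM.mode = mode;
            RRModeMM.virtX = pScrn->virtualX;
            RRModeMM.virtY = pScrn->virtualY;
            RRModeMM.mmWidth = *mmWidth;    /* DIX values as the fallback */
            RRModeMM.mmHeight = *mmHeight;

            (*pScrn->DriverFunc) (pScrn, RR_GET_MODE_MM, &RRModeMM);

            *mmWidth = RRModeMM.mmWidth;    /* trust the driver's answer */
            *mmHeight = RRModeMM.mmHeight;
        }
    }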
This reverts commit 76576c87b0, which was itself an incorrect revert of
previous ABI bumps. Those
responsible for the accidental ABI bumps in both directions
have all been sacked.
This allows xf86-input-mouse to build again, for example.
Previously, the code was using PKG_CHECK_EXISTS before PKG_CHECK_MODULES
(to cater to OpenBSD systems that include openssl by default but without
a .pc file). But this meant that systems that didn't have openssl installed
at all would not get any error message at configure time.
Now, if the SHA1_Init function is found in -lcrypto without any additional
flags, then that's used. Otherwise, pkg-config is used to find the right
flags to link against libcrypto. And if that fails, a nice error message
is now generated.
Using id = 0 only worked pre-MPX, since XInput didn't allow XOpenDevice for
the core devices (0 and 1). Now we can legally register for events, so we may
overwrite our device-independent classes with the ones selected for the VCP.
So, increase the EMASKSIZE to MAX_DEVICES + 1 and use MAX_DEVICES as the ID
when we don't have a device.
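Simplified, the layout looks like this (not the literal Xi data structures;
the array name and the MAX_DEVICES value are only for illustration):

    #define MAX_DEVICES 40                  /* illustrative value */
    #define EMASKSIZE   (MAX_DEVICES + 1)   /* one spare slot at the end */

    /* One event mask per possible device id, plus a spare slot for the
     * "no device" case.  Slot 0 can no longer play that role, because
     * with MPX a client may legitimately open device 0 and register for
     * its events. */
    static unsigned long event_mask[EMASKSIZE];

    static unsigned long *
    mask_for_device(int deviceid)
    {
        if (deviceid < 0)                   /* no device: use the spare slot */
            return &event_mask[MAX_DEVICES];
        return &event_mask[deviceid];
    }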
Some reasons to embed fonts by default:
1. The X server doesn't pick a good default font path, so it's easiest just
to build in the core fonts and make new X hackers happier. Developers
and distro guys are wise enough to just set --disable-builtin-fonts
when they want.
2. Seems that this is by far the most popular FAQ
(http://www.x.org/wiki/FAQErrorMessages).
3. No one gave a good argument to not do this:
http://lists.freedesktop.org/archives/xorg/2008-May/035479.html
Spiritual revert of 1fa4de80fc. Intel's C
compiler claims to be gcc-compatible; if they're not defining the same
macros as gcc then that's their bug, not ours. Even if we were to do
this aliasing we should do it once and for all in servermd.h.
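For illustration, ICC defines __GNUC__ (and __GNUC_MINOR__) when it is being
gcc-compatible, so a single check along these lines already covers it with no
per-compiler aliasing (the macro name is made up for the example):

    #if defined(__GNUC__)                   /* gcc, and icc in gcc mode */
    # define UNUSED_ATTR __attribute__((unused))
    #else
    # define UNUSED_ATTR
    #endif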
Use only %di to name the PCI register to read/write, rather than %edi.
DOS is only expecting the base PCI config space anyway, and the BIOS
might be using the high bits of %edi.
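A minimal sketch of the idea (the helper name is made up, not the actual
int10 code):

    #include <stdint.h>

    /* Take the PCI config register offset from %di only.  The base config
     * space is just 256 bytes, so 16 bits is plenty, and masking keeps
     * whatever the BIOS has stashed in the upper half of %edi from leaking
     * into the register offset. */
    static uint16_t
    pci_cfg_reg_from_edi(uint32_t edi)
    {
        return (uint16_t) (edi & 0xffff);   /* i.e. %di */
    }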
Yes, this is a 486+ instruction and thus not strictly legal in vm86
mode, but enough BIOSes use it (looking at you, VIA) that we might as
well implement it.