The defaults from InitVelocityData() or hypothetical driver-side changes
are now respected, not overridden.
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
If the device doesn't have any BARs then it's just a stub for some
lame operating systems that need one PCI device per output for
multihead. No point in warning about it.
It's all a bit wonky since both sis(4) and xgi(4) claim to support the
Volari Z7 and V5/8 (0x0020 and 0x0040), so let's side with xgi(4), why
not. Note that the V3 (not V3XT) identifies itself as a Trident chip.
Print a warning if xorg.conf has InputDevice sections that aren't
referenced in the ServerLayout in use. This check is only performed if
AllowEmptyInput is enabled.
The reason behind this is that the server used to auto-add the first
mouse/keyboard sections if none were referenced. Now, with HAL and AEI
enabled by default, setups that relied on this auto-adding break and are left
without input devices. The least we can do is warn them.
Us shipping a GUI configuration utility (especially as part of the
server!) was pretty pointless. There was pretty much nothing it could
configure which wasn't already runtime adjustable: if you could get a
server up with functioning input and output, there wasn't much xorgcfg
could do for you.
Au revoir.
- Use a single common function to compute reducedness (see the sketch below).
- Call it from both the old-school and new-school mode validation paths.
- Define monitor reduced-blanking support in accord with EDID 1.4.
- Attempt to filter RB DMT modes away from the "standard" EDID pool if
the monitor doesn't claim RB support.
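A plausible shape for that common predicate, keyed on CVT-RB's fixed
blanking signature (the constants follow the CVT-RB timing formula; the
name and exact test are a sketch, not necessarily the committed code):

    /* A mode is "reduced" if its timings match the CVT-RB blanking
     * signature: 160 pixels of total horizontal blank (48 front porch,
     * 32 sync, 80 back porch) and a 3-line vertical front porch. */
    static Bool
    ModeIsReduced(const DisplayModeRec *mode)
    {
        return (mode->HTotal - mode->HDisplay) == 160 &&
               (mode->HSyncStart - mode->HDisplay) == 48 &&
               (mode->HSyncEnd - mode->HSyncStart) == 32 &&
               (mode->VSyncStart - mode->VDisplay) == 3;
    }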
On some panels you end up with all of:
- No range descriptor
- No description of physical connectivity
- Native panel size mode in standard timings list
In principle you're supposed to use the timings for that mode from the DMT
spec, but in practice the DMT spec has timings for both 1920x1200 normal
and 1920x1200RB, and the standard timing field gives you no way to
distinguish. And, of course, the non-RB timings don't fit in a single
DVI link.
A couple of #if defined(Lynx) && defined(sun) conditionals had decayed
into just #if defined(sun), resulting in wrong settings for Solaris
builds, so they're now just deleted.
OsInitColors always just returned TRUE, so remove the calls to it and
the insane special-case logic. Remove the unused kcolor.c implementation, and
merge oscolor.h into oscolor.c since it was the only user. Remove
open-coded strncasecmp in oscolor.c.
Since we no longer need to call OsInitColors after reading the config
file, just call PostConfigInit() from one place, and move PM handling to
one place so we can install the signal handlers earlier.
If devices are prepended to the list, their wake-up order on resume is not the
same as the original initialisation order. Hot-plugged devices, originally
inited last, are re-enabled before the xorg.conf devices and in some cases may
steal the device files. Result: we have different devices before and after
suspend/resume.
Red Hat Bug 439386 <https://bugzilla.redhat.com/show_bug.cgi?id=439386>
- Allow returning multiple drivers to try for a given PCI id (for instance,
try "geode" then "amd" for AMD Geode hardware)
- On Solaris, use VIS_GETIDENTIFIER ioctl as well as PCI id to choose drivers
- Use wsfb instead of fbdev as a fallback on non-Linux SPARC platforms
Remove AEI check from configImpliedLayout as the setting isn't actually parsed
at this point anyway (written by Sasha Hlusiak).
Resurrect checkInput() and check for devices there if AEI is false (this also
creates the default devices if required).
Enable AllowEmptyInput by default if hotplugging is enabled.
If no Screen is specified in the ServerLayout section, either take the first
one from the config file or autogenerate a default screen.
X.Org Bug 16301 <http://bugs.freedesktop.org/show_bug.cgi?id=16301>
RandR 1.1 has a physical size for each mode. It used to be that the DIX would
remember these modes and pass them back up to the DDX when changing the screen
configuration. The DDX uses RR_GET_MODE_MM to query the driver for the physical
dimensions of the screen, allowing it to preserve the DPI.
With RandR 1.2, the physical dimensions are stored as part of the output, rather
than per mode. The DIX only uses the sizes passed in from the DDX to select the
mode pool for the "default" output, and forgets the physical sizes. Then, when
reconfiguring the screen, it makes up a new RRScreenSizeRec using the dimensions
from the output, screwing up the DPI.
This change works around this problem by ignoring the DIX and querying the real
size from the driver.
This reverts commit 76576c87b0, which was itself an incorrect revert of
previous ABI bumps. Those responsible for the accidental ABI bumps in
both directions have all been sacked.
This allows xf86-input-mouse to build again, for example.
Spiritual revert of 1fa4de80fc. Intel's C
compiler claims to be gcc-compatible; if they're not defining the same
macros as gcc then that's their bug, not ours. Even if we were to do
this aliasing we should do it once and for all in servermd.h.
Use only %di to name the PCI register to read/write, rather than %edi.
DOS is only expecting the base PCI config space anyway, and the BIOS
might be using the high bits of %edi.
Yes, this is a 486+ instruction and thus not strictly legal in vm86
mode, but enough BIOSes use it (looking at you VIA) that we might as
well implement it.
In the single-output-enabled case we never enter the loop, so test
never gets set and we fail to match a good mode.
This was causing my 2560x1600 to end up at 2048x1536.
The problem happens if the monitor/card combo doesn't provide EDID info
and the XFree86-VidModeExtension extension is used.
Signed-off-by: Peter Hutterer <peter@cs.unisa.edu.au>
Recording damage from other operations (e.g. creating a client damage record)
may confuse the migration code resulting in corruption.
Option "EXAOptimizeMigration" appears safe now, so enable it by default. Also
remove it from the manpage, as it should only be necessary on request in the
course of bug report diagnostics anymore.
GNU/kFreeBSD defines __FreeBSD_kernel__, but not __FreeBSD__.
Unify preprocessor conditionals between variable declaration and use.
Debian bug #482550.
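A minimal sketch of the unification (the helper macro and variable are
hypothetical; the point is that the declaration and every use test the
same condition):

    /* GNU/kFreeBSD defines __FreeBSD_kernel__ but not __FreeBSD__, so
     * every conditional must accept either macro, and the declaration
     * and its uses must agree. */
    #if defined(__FreeBSD__) || defined(__FreeBSD_kernel__)
    #define HAVE_FREEBSD_KERNEL 1      /* hypothetical helper macro */
    #endif

    #ifdef HAVE_FREEBSD_KERNEL
    static int kernelSpecificFd = -1;  /* hypothetical variable */
    #endif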
During GetPointerEvents (and others), we need to access the last coordinates
posted for this device from the driver (not as posted to the client!). Lastx/y
is ok if we only have two axes, but with more complex devices we also need to
transition between all other axes.
ABI break, recompile your input drivers.
This copies over the files generated from mesa/src/mesa/glapi. There's
a corresponding mesa commit that makes it easy to generate the glapi files
straight into the xserver tree when the XML definitions change.
The only few files that are copied from mesa but aren't generated are
glapi.[ch] and glthread.[ch]. Everything in there is technically DRI
driver API and the whole setup is still a bit fragile, but it's not a new
problem.
The --with-mesa-source configure option is still around since other
parts of the server (XGL and DMX - grep for MESA_SOURCE) need that,
but for common case of building with GLX and AIGLX support, that
option is no longer needed.
Conflicts:
Xext/xprint.c (removed in master)
config/hal.c
dix/main.c
hw/kdrive/ati/ati_cursor.c (removed in master)
hw/kdrive/i810/i810_cursor.c (removed in master)
hw/xprint/ddxInit.c (removed in master)
xkb/ddxLoad.c
If the monitor isn't reduced-blanking (either through EDID logic, or
config file setting), then remove RB modes from the default pool. Any
RB modes from the driver and config file pools will stick around though;
you asked for them, you got them.
Seeing as this code seems to be specific to OpenBSD I don't think
__x86_64__ should have been added there at all. It appears to have
been added wherever __amd64__ existed before, which is wrong. I
think that part of the commit should be reverted, and also that all
four of the checks should be __OpenBSD__ && __amd64__ instead of two
in one order and two flipped.
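For illustration, all four checks would then read (a sketch; the
guarded body stands in for the real code):

    #if defined(__OpenBSD__) && defined(__amd64__)
        /* OpenBSD/amd64-specific privileged I/O handling here */
    #endif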
The first guess used to be "is the preferred mode for one output the
preferred mode on all outputs". Instead, do "find the largest mode that's
preferred for at least one output and available on all outputs".
Old logic was just the first one that happened to have an associated
CRTC. The new logic tries to find one that's definitely connected, has
probed modes, and has the largest candidate mode.
This removes some code and simplifies some conditionals. We don't need
to test for pDev->isMaster inside xf86CursorSetCursor() because only MDs
enter there. In the last chunk, ScreenPriv fields were being assigned
without need, so that code was wrapped inside the conditional to avoid
it. I also tried to make the indentation more sane in some parts that I
touched.
Signed-off-by: Tiago Vignatti <vignatti@c3sl.ufpr.br>
Minor modification: part of the original patch led to cursors not being
updated properly when controlled through XTest.
Signed-off-by: Peter Hutterer <peter@cs.unisa.edu.au>
The only function that can set SWCursor before xf86DeviceCursorInitialize()
is xf86InitCursor(), when the VCP is created.
Signed-off-by: Tiago Vignatti <vignatti@c3sl.ufpr.br>
Signed-off-by: Peter Hutterer <peter@cs.unisa.edu.au>
A missing parameter caused event processing to go nuts when checking valuators.
X.Org Bug 15936 <http://bugs.freedesktop.org/show_bug.cgi?id=15936>
Signed-off-by: Peter Hutterer <peter@cs.unisa.edu.au>
Since glyphs are stored in pixmaps now, they can make their way into VRAM,
which invalidates a bunch of fast-path assumptions in the XAA code. Thus
you end up doing color-expands or WriteBitmap from la-la land and your
aliased glyphs go all funny.
Since XAA isn't ever growing the ability to do sane glyph accel, just force
glyph pixmaps into host memory by catching them at CreatePixmap time.
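A hedged sketch of the CreatePixmap-time catch (the wrapper and the
offscreen fallback are assumptions; CREATE_PIXMAP_USAGE_GLYPH_PICTURE is
the usage hint glyph storage passes):

    static PixmapPtr
    XAACreatePixmap(ScreenPtr pScreen, int w, int h, int depth,
                    unsigned usage_hint)
    {
        /* Glyph pictures must never land in offscreen VRAM: XAA's
         * software glyph paths assume they can read them directly. */
        if (usage_hint == CREATE_PIXMAP_USAGE_GLYPH_PICTURE)
            return fbCreatePixmap(pScreen, w, h, depth, usage_hint);

        return XAACreatePixmapOffscreen(pScreen, w, h, depth,
                                        usage_hint);  /* hypothetical */
    }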
We need a manual call to SetCursor when we switch from SW to HW rendering and
the other way round. This way we display the new cursor after removing the old
one.
In addition, we only update the internal state for the VCP's sprite. This way,
when we switch back to HW rendering the state is up-to-date and wasn't
overwritten with the other sprite's state.
The second part is a hack. It would be better to keep a state for each sprite,
but then again we don't have hardware that can render multiple cursors so we
might as well do with the hack.
Switches back to HW cursors when sprites other than the VCP are removed.
The current state requires the cursor to change shape once before it updates
to SW / HW rendering (whatever is appropriate), e.g. by moving into a
different window. Until this is done, the cursor is invisible.
This patch only creates a Files section if required, so if no entries are
added, an empty Files section will not be created.
Signed-off-by: Peter Hutterer <peter@cs.unisa.edu.au>
LeaveVT/EnterVT cycles will free/realloc shadow frame buffers. Because of
this, the presence/absence of that data is insufficient to know whether
the screen function wrappers are necessary. Instead, the 'transform_in_use'
flag should be used.
This patch also adds 'xf86RotateFreeShadow' for drivers to use at LeaveVT
time to free the rotation data; it will be reallocated on EnterVT.
In DeleteInputDeviceRequest, leave the conf_idev (which is shared with
xf86ConfigLayout.input) alone for devices that were specified in the
ServerLayout section of the config file. This way, in the next server
generation we are left with what was the original config and can thus re-init
the devices.
This is an addon to 6d22a9615a, an attempt to
fix Bug 14418.
X.Org Bug 15645 <https://bugs.freedesktop.org/show_bug.cgi?id=15645>
X.Org Bug 14418 <https://bugs.freedesktop.org/show_bug.cgi?id=14418>
The previous check works on the master branch, but doesn't work with MPX.
We actually copy the SD's information into the MD's public.devicePrivate,
so we need to explicitly check whether a device is an MD before freeing
the module.
Use __libmansuffix__ instead of __oslibmansuffix__ which isn't getting
replaced, and rewrap some text to get __xservername__ replaced in the
description of Option "Accel" (cpp doesn't like the preceding quote).
This extension provided bug-compatibility with pre-X11R6, but has been
stubbed out in our server since 2006 to return BadRequest when you actually
asked for it.
The DDX (xfree86 anyway) maintains its own device list in addition to the one
in the DIX. CloseDevice will only remove it from the DIX, not the DDX. If the
server then restarts (last client disconnects), the DDX devices are still
there, will be re-initialised, then the hal devices come in and are added too.
This repeats until we run out of device ids.
This also requires us to strdup() the default pointer/keyboard in
checkCoreInputDevices.
X.Org Bug 14418 <http://bugs.freedesktop.org/show_bug.cgi?id=14418>
Some pointer devices send key events [1]; blindly getting the paired
device crashes the server. So let's check if the device is a pointer
before we try to get the paired device.
[1] The MS Wireless Optical Desktop 2000's multimedia keys are sent through
the pointer device, not through the keyboard device.
The jstk code for joysticks is not used by any module, was never
actually compiled, and uses an API that is deprecated these days.
No reason to keep it.
Get rid of glcontextmodes.[ch] from build, rename __GlcontextModes to
__GLXcontext. Drop all #includes of glcontextmodes.h and glcore.h.
Drop the DRI context modes extension.
Add protocol code to DRI2 module and load DRI2 extension by default.
Since there's no way to safely know how many blocks xf86DoEDID_DDC2 would
return, add a new xf86DoEEDID entrypoint to do that, and implement the
one in terms of the other.
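Assuming the entrypoints keep the obvious shapes, the old call
presumably reduces to a wrapper like:

    /* Sketch (signatures assumed): the old single-block probe becomes
     * a thin wrapper; complete = FALSE fetches only the base block. */
    xf86MonPtr
    xf86DoEDID_DDC2(int scrnIndex, I2CBusPtr pBus)
    {
        return xf86DoEEDID(scrnIndex, pBus, FALSE);
    }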
The latter doesn't give you the option's value, it just tells you if
it's present in the configuration. So using Option "EXANoComposite" "false"
disabled composite acceleration.
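Presumably the pair in question behaves like xf86IsOptionSet() versus
xf86ReturnOptValBool(); the difference, with an assumed option token and
private struct:

    /* Presence check: true whenever the option appears at all, so
     * Option "EXANoComposite" "false" still disables composite accel. */
    if (xf86IsOptionSet(pExa->options, EXAOPT_NO_COMPOSITE))
        pExa->noComposite = TRUE;                  /* the bug */

    /* Value check: honors the boolean the user actually wrote. */
    pExa->noComposite = xf86ReturnOptValBool(pExa->options,
                                             EXAOPT_NO_COMPOSITE, FALSE);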
This patch (and not setting HARDWARE_CURSOR_BIT_ORDER_MSBFIRST on big endian
platforms) fixes it for me with the radeon driver and doesn't break intel.
Correct patch this time :)
Should have done this in the first place. Since we're already checking
for the absence of the get_crtc callback, we'll short-circuit the later
call and disable the output, so the ugly "continue" block is unnecessary.
By adding a new output callback, ->get_crtc, xf86SetDesiredModes is able to
avoid turning off outputs & CRTCs if the current output<->CRTC mappings are the
same as the desired configuration. This helps avoid flickering displays at
startup time, which speeds things up a little and looks better.
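A sketch of the kind of test this enables (the helper is hypothetical;
get_crtc is the new callback):

    /* Sketch: reprogram an output only if its current CRTC, as
     * reported by get_crtc, differs from the desired mapping. */
    static Bool
    OutputNeedsReprogramming(xf86OutputPtr output)
    {
        xf86CrtcPtr cur = output->funcs->get_crtc
                        ? output->funcs->get_crtc(output)
                        : NULL;

        /* No way to query? Play safe and reprogram from scratch. */
        return cur == NULL || cur != output->crtc;
    }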
Unless we check for vtSema before calling into the CRTC and output callbacks,
we may end up trying to access video memory that no longer exists, leading to a
crash. So if we don't have vtSema, return FALSE to the caller, indicating that
we didn't do anything.
Fixes #14444.
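The guard itself is tiny; a minimal sketch, assuming the usual
ScrnInfoPtr is in scope:

    /* Without the VT, the video memory the callbacks poke may no
     * longer be mapped. */
    if (!pScrn->vtSema)
        return FALSE;    /* tell the caller we changed nothing */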
Actually more like in the mainline case, where the ideal mode happens to
be the very first aspect match on the first monitor. But let's not
split hairs.
The address written to 0xcf8 contains the PCI slot address to send the
config cycle to. However, we would ignore that and always send the
cycle to the device whose BIOS we were running. This breaks some
integrated graphics platforms that have explicit knowledge about the
system's host bridge, for example.
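For reference, the slot address is encoded in the CONFIG_ADDRESS dword
written to 0xcf8, following standard PCI configuration mechanism #1, so
the emulator can decode the intended target instead of assuming it:

    #include <stdint.h>

    /* PCI config mechanism #1: bit 31 = enable, 23:16 = bus,
     * 15:11 = device, 10:8 = function, 7:2 = dword register. */
    static void
    DecodeConfigAddress(uint32_t addr, unsigned *bus, unsigned *dev,
                        unsigned *func, unsigned *reg)
    {
        *bus  = (addr >> 16) & 0xff;
        *dev  = (addr >> 11) & 0x1f;
        *func = (addr >>  8) & 0x07;
        *reg  = addr & 0xfc;
    }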
While the ScreenRec's notion of size in millimeters would get updates,
the RANDR 1.1 notion wouldn't, so your screen would appear to be square
and probably at some ludicrous DPI.
xserver and libpciaccess both need to open /dev/xf86, which can only
be opened once. I implemented pci_system_init_dev_mem() like Ian
suggested. This requires some minor changes to the BSD-specific
os-support code. Since pci_system_init_dev_mem() is a no-op on
FreeBSD this should be no problem.
i.e., don't check for the end of the list by ->name == NULL, since that
won't work now. Fix the consumers of xf86DefaultModes to use the new
explicit size as well.
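i.e., consumers switch from sentinel-walking to counted iteration;
roughly (the consumer function is hypothetical):

    const DisplayModeRec *mode;
    int i;

    /* Before: relied on a NULL-name sentinel terminating the table. */
    for (mode = xf86DefaultModes; mode->name != NULL; mode++)
        ConsiderMode(mode);               /* hypothetical consumer */

    /* After: walk exactly xf86NumDefaultModes entries. */
    for (i = 0; i < xf86NumDefaultModes; i++)
        ConsiderMode(&xf86DefaultModes[i]);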
In order to report accurate values to users of the RandR property interface,
it's sometimes necessary to ask the driver to update the value (for example
when backlight brightness changes without the server's knowledge, due to hotkey
events or direct sysfs banging).
This patch wires up the core server code with a new xf86CrtcFuncs callback,
get_property, to allow for this.
The new code is available under the RANDR_13_INTERFACE define, which in turn
depends on the RANDR_12_INTERFACE code.
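A hedged sketch of a driver-side implementation (the hook's exact
shape, the atom, and the hardware reader are assumptions):

    /* Sketch: refresh the stored property from hardware before RandR
     * reports it, e.g. after hotkey-driven backlight changes. */
    static Bool
    drv_output_get_property(xf86OutputPtr output, Atom property)
    {
        if (property == backlight_atom) {             /* hypothetical */
            INT32 level = drv_read_backlight(output); /* hypothetical */
            RRChangeOutputProperty(output->randr_output, property,
                                   XA_INTEGER, 32, PropModeReplace,
                                   1, &level, FALSE, FALSE);
        }
        return TRUE;
    }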
Old heuristic was to find the first monitor that expressed a preference,
then attempt to get all other monitors to agree. This doesn't work
particularly well when the two sets of modes don't precisely intersect:
you get overlapping-but-not-identical output geometry and things go wrong.
New heuristic is:
- Exact user preference, if given
- Exact output preference, if the same for all outputs
- Best (largest) mode of the modes common to all outputs:
  - with the same aspect ratio as all outputs (may be NULL)
  - with 4:3 aspect ratio
- Then the old heuristic to try to get something lit
Note that it is simply not doable to have a reliable initial output guess if
you insist on trying to clone all outputs together. It's far too easy to
end up with displays that simply don't have modes in common. We need to
switch to right-of placement someday, once we're not limited to CRTC size
limits and we have working multi-GPU in RANDR.
If you don't do this, then Modes "800x600" in the Display subsection will
be dutifully ignored and the driver will start at whatever resolution it
feels like.
CVT is different enough from GTF that it should not be used on monitors
that aren't expecting it. This brings us closer to what the spec says
the correct behaviour is.
Before this it was meaningless to try to mark DisplayModeRec tables
const, since the mode name would be emitted as a pointer to an
anonymous string constant, and therefore would have to be fixed up by
ld.so and so couldn't live in .rodata. With this change the standard
mode lists can live in .rodata, and modes duplicated from them will
have their names filled in on the fly.
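The mechanics, roughly: a string-literal name in the initializer is an
address that ld.so must relocate, so the table gets dragged out of
.rodata; a NULL name filled in at duplication time avoids that. A
sketch of the fill-on-duplicate step (names assumed):

    /* Sketch: copy a const table mode and synthesize its name. */
    static DisplayModePtr
    DuplicateDefaultMode(const DisplayModeRec *src)
    {
        DisplayModePtr dst = xnfalloc(sizeof(DisplayModeRec));

        *dst = *src;                     /* name is NULL in .rodata */
        dst->name = xnfalloc(32);
        snprintf(dst->name, 32, "%dx%d", src->HDisplay, src->VDisplay);
        return dst;
    }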
The FindPCIVideoInfo() function isn't needed anymore.
xf86scanpci() is being called only once, so we don't need permanent
(static) variables there.
restorePciState() is not used for now (until we find why multiple
cards aren't working).
Formerly the code claimed it could only handle up to 256 visuals, which
was true. Also true, but not explicitly stated, was that it could only
handle visuals with VID < 256. If you have enough screens, and subsystems
that add lots of visuals, you can easily run off the end. (Made worse
because we allocate visual IDs from the same pool as XIDs.) If your app
then chooses a visual with VID > 256, the Xinerama code would throw BadMatch
on CreateColormap and your app wouldn't start.
With this change, PanoramiXVisualTable is gone. Other subsystems that
were using it as a translation table between each screen's visuals now
use a PanoramiXTranslateVisual() helper.
This reverts commits 3abce3ea2b and 6cbaf15e61.
The memory returned to xf86LoadModule was allocated in doLoadModule, which
calls the respective module's PreInit. As it turns out, input and output
drivers store a pointer to the module elsewhere, so freeing it in
xf86LoadModule is a bad idea.
For further reference: hw/xfree86/common/xf86Helper.c
Input drivers: xf86InputDriverList[blah]->module = module;
Output drivers: xf86DriverList[blah]->module = module;
Unloading the module would not look pretty then.
Rather than letting the DDX allocate the events, allocate them once in
the DIX and just pass them around when needed.
DDX should call GetEventList() to obtain this list and then pass it into
Get{Pointer|Keyboard}Events.
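A sketch of the DDX side under the new scheme (signatures assumed from
this era of the input API):

    /* Sketch: the DIX owns the event storage; the DDX fetches the
     * list and reuses it for every batch of events it posts. */
    static void
    PostMotion(DeviceIntPtr pDev, int *valuators)
    {
        EventListPtr events;
        int nevents, i;

        (void) GetEventList(&events);     /* DIX-owned storage */
        nevents = GetPointerEvents(events, pDev, MotionNotify, 0,
                                   POINTER_RELATIVE, 0, 2, valuators);
        for (i = 0; i < nevents; i++)
            mieqEnqueue(pDev, (events + i)->event);
    }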
LoadModule() returns the only reference to a fresh piece of memory (a
ModuleDescPtr). Sadly, xf86LoadModules dropped the return value on the
floor, leaking memory for each module it loaded.
Signed-off-by: Peter Hutterer <peter@cs.unisa.edu.au>
All the failure paths were very diligent in freeing the "fullpath"
temporary string, but the success case was not. All the content gets
strdup()ed anyway, so the original string isn't needed afterwards.
Signed-off-by: Peter Hutterer <peter@cs.unisa.edu.au>
xf86LogInit allocates a piece of memory, stores it in lf. LogInit() will then
effectively strdup it, but lf is never freed again.
Signed-off-by: Peter Hutterer <peter@cs.unisa.edu.au>
We need to start breaking the XKB API to enforce sanity, so drag whichever
headers we need to do so into the server tree, as the client API is set in
stone, being part of Xlib.
After trying to switch from X to VT (or just quit) the video-amd driver
attempts to issue INT 10/0 to go to mode 3 (VGA). The emulator, running
the BIOS code, would then spit out:
c000:0282: A2 ILLEGAL EXTENDED X86 OPCODE!
The opcode was 0F A2, or CPUID; it was not implemented in the emulator.
This simple patch, against 1.3.0.0, handles the CPUID instruction in one of
two ways:
1) if run on __i386__ or __x86_64__, it calls the CPUID instruction
directly.
2) if run elsewhere, it returns a canned 486DX4 set of values for
function 1.
This fix allows the video-amd driver to switch back to console mode,
with the GSW BIOS.
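A sketch of those two paths (register plumbing reduced to pointers; the
canned values are illustrative, not necessarily the exact ones the
patch returns):

    #include <stdint.h>

    static void
    EmulateCPUID(uint32_t *ax, uint32_t *bx, uint32_t *cx, uint32_t *dx)
    {
    #if defined(__i386__) || defined(__x86_64__)
        /* Run the real instruction with the guest's function number. */
        __asm__ __volatile__("cpuid"
                             : "=a"(*ax), "=b"(*bx), "=c"(*cx), "=d"(*dx)
                             : "0"(*ax));
    #else
        if (*ax == 1) {
            *ax = 0x00000480;   /* family 4, model 8: a 486DX4-ish CPU */
            *bx = *cx = 0;
            *dx = 0x00000001;   /* just the FPU feature bit */
        }
    #endif
    }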
Thanks to Symbio Technologies for funding my work, and ThinCan for
providing hardware :)
Signed-off-by: Bart Trojanowski <bart@jukie.net>
Acked-by: Eric Anholt <eric@anholt.net>
'Loading foo' is verbosity 3, whereas 'already built-in' is verbosity 0.
This means that gdm's log would just be full of bare 'module already
built-in' messages.
xf86CrtcRotate() is called by RandR 1.2 drivers via xf86CrtcSetMode() or
xf86SetDesiredModes() during ScreenInit(), at which point pScrn->pScreen
is not yet set. If a user specifies a rotation in their config file,
pScrn->pScreen is dereferenced and boom.
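The fix is presumably a guard along these lines (a sketch; AddScreen
fills in screenInfo.screens[] before calling the driver's ScreenInit,
so indexing it is safe at that point):

    /* Don't trust pScrn->pScreen this early; fall back to the global
     * screen table when the back-pointer hasn't been set yet. */
    ScreenPtr pScreen = pScrn->pScreen != NULL
                      ? pScrn->pScreen
                      : screenInfo.screens[pScrn->scrnIndex];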