CVT is different enough from GTF that it should not be used on monitors
that aren't expecting it. This brings us closer to what the spec says
the correct behaviour is.
Before this it was meaningless to try to mark DisplayModeRec tables
const, since the mode name would be emitted as a pointer to an
anonymous string constant, and therefore would have to be fixed up by
ld.so and so couldn't live in .rodata. With this change the standard
mode lists can live in .rodata, and modes duplicated from them will
have their names filled in on the fly.
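A minimal sketch of the idea, not the actual patch: the DisplayModeRec field names are real, but the duplication helper below is invented for illustration (the server's own duplication code does the equivalent).

    /* Sketch only.  With the name pointer left NULL, the entry needs no
     * relocation and can live in .rodata; a private copy gets a generated
     * "WxH" name when it is duplicated.  Needs xf86str.h / os.h. */
    static const DisplayModeRec std1024x768 = {
        .name     = NULL,              /* no anonymous string constant */
        .status   = MODE_OK,
        .Clock    = 65000,
        .HDisplay = 1024, .HSyncStart = 1048, .HSyncEnd = 1184, .HTotal = 1344,
        .VDisplay = 768,  .VSyncStart = 771,  .VSyncEnd = 777,  .VTotal = 806,
    };

    /* Hypothetical helper showing where the name gets filled in on the fly. */
    static DisplayModePtr
    dupStandardMode(const DisplayModeRec *src)
    {
        DisplayModePtr copy = xnfalloc(sizeof(DisplayModeRec));
        *copy = *src;
        copy->name = xnfalloc(32);
        snprintf(copy->name, 32, "%dx%d", copy->HDisplay, copy->VDisplay);
        return copy;
    }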
The FindPCIVideoInfo() function isn't needed anymore.
xf86scanpci() is being called only once so we don't need permanent
(static) variables there.
restorePciState() is not used for now (until we find why multiple
cards aren't working).
Formerly the code claimed it could only handle up to 256 visuals, which
was true. Also true, but not explicitly stated, was that it could only
handle visuals with VID < 256. If you have enough screens, and subsystems
that add lots of visuals, you can easily run off the end. (Made worse
because we allocate visual IDs from the same pool as XIDs.) If your app
then chooses a visual with VID > 256, the Xinerama code would throw BadMatch
on CreateColormap and your app wouldn't start.
With this change, PanoramiXVisualTable is gone. Other subsystems that
were using it as a translation table between each screen's visuals now
use a PanoramiXTranslateVisual() helper.
This reverts commits 3abce3ea2b and
6cbaf15e61.
The memory returned to xf86LoadModule was allocated in doLoadModule, which
calls the respective module's PreInit. As it turns out, input and output
drivers store a pointer to the module elsewhere, so freeing it in
xf86LoadModule is a bad idea.
For further reference: hw/xfree86/common/xf86Helper.c
Input drivers: xf86InputDriverList[blah]->module = module;
Output drivers: xf86DriverList[blah]->module = module;
Unloading the module would not look pretty then.
Rather than letting the DDX allocate the events, allocate them once in the DIX
and just pass the list around when needed.
DDX should call GetEventList() to obtain this list and then pass it into
Get{Pointer|Keyboard}Events.
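A hedged sketch of the DDX side, using the getevents.h signatures of roughly this period (GetEventList, GetPointerEvents, mieqEnqueue); the helper name and valuator setup are invented for illustration only.

    /* needs input.h, getevents.h and mi/mieq.h from the server tree */
    static void
    postAbsoluteMotion(DeviceIntPtr dev, int x, int y)
    {
        EventListPtr events;
        int i, n;
        int valuators[2] = { x, y };

        GetEventList(&events);          /* DIX-owned list: do not free it */
        n = GetPointerEvents(events, dev, MotionNotify, 0,
                             POINTER_ABSOLUTE, 0, 2, valuators);
        for (i = 0; i < n; i++)
            mieqEnqueue(dev, (events + i)->event);
    }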
LoadModule() returns the only reference to a fresh piece of memory (a
ModuleDescPtr). Sadly, xf86LoadModules dropped the return value on the floor,
leaking memory for each module it loaded.
Signed-off-by: Peter Hutterer <peter@cs.unisa.edu.au>
All the failure paths were very diligent about freeing the "fullpath" temporary
string, but the success path was not. All the content only gets strdup()ed
anyway, so fullpath is not live memory anymore and can be freed.
Signed-off-by: Peter Hutterer <peter@cs.unisa.edu.au>
xf86LogInit allocates a piece of memory and stores it in lf. LogInit() will then
effectively strdup() it, but lf itself is never freed.
Signed-off-by: Peter Hutterer <peter@cs.unisa.edu.au>
We need to start breaking the XKB API to enforce sanity, so drag whichever
headers we need to do so into the server tree, as the client API is set in
stone, being part of Xlib.
After trying to switch from X to VT (or just quit) the video-amd driver
attempts to issue INT 10/0 to go to mode 3 (VGA). The emulator, running
the BIOS code, would then spit out:
c000:0282: A2 ILLEGAL EXTENDED X86 OPCODE!
The opcode was 0F A2, or CPUID; it was not implemented in the emulator.
This simple patch, against 1.3.0.0, handles the CPUID instruction in one of
two ways:
1) if run on __i386__ or __x86_64__, it calls the CPUID instruction
directly.
2) if run elsewhere, it returns a canned 486dx4 set of values for
function 1.
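Roughly how such a handler can look; this is a sketch of the approach described above, not the literal patch, and the function name is made up.

    static void
    emu_cpuid(unsigned op, unsigned *ax, unsigned *bx,
              unsigned *cx, unsigned *dx)
    {
    #if defined(__i386__) || defined(__x86_64__)
        /* run the real instruction on x86 hosts */
        __asm__ __volatile__("cpuid"
                             : "=a" (*ax), "=b" (*bx), "=c" (*cx), "=d" (*dx)
                             : "a" (op));
    #else
        /* elsewhere, fake a 486DX4-class CPU for function 1 */
        if (op == 1) {
            *ax = 0x00000480;       /* family 4, model 8 */
            *bx = *cx = 0;
            *dx = 0x00000001;       /* FPU present */
        } else {
            *ax = *bx = *cx = *dx = 0;
        }
    #endif
    }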
This fix allows the video-amd driver to switch back to console mode,
with the GSW BIOS.
Thanks to Symbio Technologies for funding my work, and ThinCan for
providing hardware :)
Signed-off-by: Bart Trojanowski <bart@jukie.net>
Acked-by: Eric Anholt <eric@anholt.net>
'Loading foo' is verbosity 3, whereas 'already built-in' is verbosity 0.
This means that gdm's log would just be full of bare 'module already
built-in' messages.
xf86CrtcRotate() is called by randr 1.2 drivers via xf86CrtcSetMode() or xf86SetDesiredModes()
during ScreenInit() at which point pScrn->pScreen is not set. If a user specifies a rotation
in their config file, pScrn->pScreen is dereferenced and boom.
First mode is _always_ preferred in 1.4; the bit that used to mean this
now means that the preferred mode is also the native pixel format. The
old "is GTF" bit now means "is continuous-frequency" instead.
Section 3.6.4, Table 3.14: Feature Support, Notes 4 and 5.
Nothing actually decoded yet, but at least we print what they are.
New in EDID 1.4:
- Color Management Data (0xF9), Section 3.10.3.7
- CVT 3 Byte Code Descriptor (0xF8), Section 3.10.3.8
- Established Timings III Descriptor (0xF7), section 3.10.3.9
- Manufacturer-specified data tag (0x00 - 0x0F), section 3.10.3.12
It's out of date and not included in the build. Instead, xf86DefModeSet.c is
built from vesamodes and extramodes using modeline2c.awk and *that's* what gets
built.
From bugzilla bug 13467¹:
Currently the xserver fails to build without this (now deleted) file, as the
Makefile tries to distribute it. The patch simply removes the reference to
modeline2c.pl.
1] http://bugs.freedesktop.org/show_bug.cgi?id=13467
Signed-off-by: James Cloos <cloos@jhcloos.com>
Don't run VT switches, terminations, or anything, on the core keyboard: only
run actions which affect the keyboard state. If we get an action such as VT
switch, just swallow the event.
From bugzilla bug 13467¹:
The modeline2c script is the only part of the Xorg server that requires Perl.
[This] is a simpler replacement that works with any normal AWK.
1] http://bugs.freedesktop.org/show_bug.cgi?id=13467
Bug was posted by Joerg Sonnenberger <joerg@NetBSD.org>.
We free the ValuatorClassRec quite regularly. If a SIGIO is handled while
we're swapping device classes, we can bring the server down when we try to
access lastx/lasty of the master device.
These hints allow an acceleration architecture to optimize allocation of certain
types of pixmaps, such as pixmaps that will serve as backing pixmaps for
redirected windows.
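For illustration, a hedged sketch of how a driver-side CreatePixmap wrapper might look at such a hint; the constant follows the naming used in the server headers of this period, but treat the exact spelling and the fbCreatePixmap fallthrough as assumptions.

    static PixmapPtr
    drvCreatePixmap(ScreenPtr pScreen, int w, int h, int depth,
                    unsigned usage_hint)
    {
        if (usage_hint == CREATE_PIXMAP_USAGE_BACKING_PIXMAP) {
            /* backing pixmap for a redirected window: a driver might place
             * it in video memory and align it for render/scanout use */
        }
        return fbCreatePixmap(pScreen, w, h, depth, usage_hint);
    }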
If the originating mode didn't have a name, we would end up with the name of
the original mode being set up correctly, but the name of the copy still
being NULL.
In a multihead setup, if only the first screen can be
initialized, but the second screen is mentioned first in the
ServerLayout section, the xf86InitOrigins() function will crash
because the screen referred to in the e.g. "RightOf" part is
non-existent.
The transformation between fbdev and xfree86 mode timings needs to be
invertible, otherwise Xen and other framebuffers that don't have real
pixel clocks won't initialize.
This makes the root visual a GLX capable visual again and adds a GLX visual
for the COMPOSITE ARGB visual cleanly (as opposed to the hack we had before).
This changes the module initialization order so that the GLX module initializes
after COMPOSITE. The reason for this change is to be able to initialize a
GLX visual config for the COMPOSITE ARGB visual.
Call ProcessOtherEvents first, then for all keyboard devices let them be
wrapped by XKB. This way all XI events will go through XKB.
Note that the VCK is still not wrapped, so core events will bypass XKB.
(cherry picked from commit d627061b48)
Don't build XF86Misc or XF86Vidmode in hw/xfree86/dixmod when it's been
explicitly disabled in configure, or we don't have the proto modules
installed.
Instead of removing the preference bit marking the hardware declared mode
preference, leave it in place and just move the user preferred mode to the
front of the list while marking it with the USERPREF bit which will cause it
to be selected by the initial mode selection code.
Hide the getline() call by checking for glibc; if glibc isn't available, use
fgetln() instead. Even though this section is now #ifdef'ed for Linux only,
this should help make it more portable if non-Linux folks end up wanting it.
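The shape of that #ifdef, assuming a FILE * called f; a sketch rather than the actual hunk.

    #ifdef __GLIBC__
        char *line = NULL;
        size_t len = 0;
        while (getline(&line, &len, f) != -1) {
            /* line is NUL-terminated; buffer is reused/grown across calls */
        }
        free(line);
    #else
        char *line;
        size_t len;
        while ((line = fgetln(f, &len)) != NULL) {
            /* len bytes, NOT NUL-terminated, only valid until the next call */
        }
    #endif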
It contains static paths, fails to build on non-glibc, and apparently just
exists to support distributions managing binary drivers and open-source drivers
together. Also restores previous code for fallback to vesa if nothing is
detected.
Right now we default to "all" which gives us a situation much like before,
but when the "typical" option is implemented, we can change the default and
reduce the number of visuals the GLX module bloats the X server with.
Instead of the fragile setup where we filter the modes common between the
DDX generated GLX visuals and the DRI driver generated fbconfigs, we now
just take the fbconfigs returned by the DRI driver to be our supported set.
A lot of EDID writers apparently end up stuffing centimeters (like the
maximum image size field) into the detailed timings, instead of millimeters.
Some of them only get it wrong in one direction. Also, add a quirk to let
us mark the largest 75Hz mode as preferred, which will often be used for
EDID 1.0 CRTs.
If none is present, a default one will be created. This will be attached
to either the first device section in the xorg.conf (allowing you to
specify something like using EXA without having a screen section) or a
default screen section if none is present in the file.
This will allow the screen to not explicitly have a device section. If
this is the case and there is a device section in the xorg.conf, the first
one will be used. If there is no device section at all, a default one will
be created that loads the automatically determined module.
This is what we're currently shipping in Debian. Enables the ability for
drivers to ship a text file listing the PCI IDs they support, and have the
server read them on startup when no driver is specified. This works, but
isn't the final solution.
DGAStealXXXEvent was modified to take a device argument.
The evdev driver only sends one valuator when only one axis changed. We need
to check for DGA either way (xf86PostMotionEventP), otherwise we lose purely
horizontal/vertical movements.
Note that DGA does not do XI events.
Center the frame around the first pointer found and then update all pointers
on the same screen to move to the edges (if necessary).
Note: xf86WarpCursor needs to be modified; it uses the deprecated
miPointerWarpCursor and will kill the server when called with
inputInfo.pointer.
Removes "LookupKeyboardDevice" and "LookupPointerDevice" in favor of
inputInfo.keyboard and inputInfo.pointer, respectively; all use cases
are non-XI compliant anyway.
Matches linuxPci.c changes made in 8279444a54
Fixes compiler errors:
"ix86Pci.c", line 194: too many struct/union initializers
"ix86Pci.c", line 204: too many struct/union initializers
"ix86Pci.c", line 214: too many struct/union initializers
When processing events from the EQ, _always_ call the processInputProc of the
matching device. For XI devices, this proc is wrapped in three layers.
Core event handling is wrapped by XI event handling, which is wrapped by XKB.
A core event now passes through XKB -> XI -> DIX.
This gets rid of a sync'd grab problem: with the previous code, core events
did disappear during a sync'd device grab on account of mieqProcessInputEvents
calling the processInputProc of the VCP/VCK instead of the actual device. This
lead to the event being processed as normal instead of being enqueued for
later replaying.
Even though they're defined to zero by the spec, we've seen an EDID block
where the (empty) ASCII strings were stuffed in a byte early, leading to the
descriptor being considered a detailed timing instead.
This was an attempt to avoid scratch gc creation and validation for paintwin
because that was expensive. This is not the case in current servers, and the
danger of failure to implement it correctly (as seen in all previous
implementations) is high enough to justify removing it. No performance
difference detected with x11perf -create -move -resize -circulate on Xvfb.
Leave the screen hooks for PaintWindow* in for now to avoid ABI change.
Call ProcessOtherEvents first, then for all keyboard devices let them be
wrapped by XKB. This way all XI events will go through XKB.
Note that the VCK is still not wrapped, so core events will bypass XKB.
Add keyc->postdown, which represents the key state as of the last mieqEnqueue
call, and use it when we need to know the posted state, instead of the
processed state (keyc->down). Add small functions to getevents.c to query and
modify key state in postdown and use them throughout, eliminating previously
broken uses.
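Those helpers amount to bit-array accessors over postdown; the names below are hypothetical (the real ones live in getevents.c), but the mechanics mirror what is already done for keyc->down.

    /* hypothetical accessors -- actual names in getevents.c may differ */
    static inline Bool
    key_posted_down(DeviceIntPtr dev, int key_code)
    {
        return (dev->key->postdown[key_code >> 3] >> (key_code & 7)) & 1;
    }

    static inline void
    set_key_posted(DeviceIntPtr dev, int key_code, Bool down)
    {
        if (down)
            dev->key->postdown[key_code >> 3] |= (1 << (key_code & 7));
        else
            dev->key->postdown[key_code >> 3] &= ~(1 << (key_code & 7));
    }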
In commit 41bb9fce47, the event delivery loop
for Xinput enabled keyboards was changed and accidentally used the wrong
index variable, causing random events to be delivered when returning from VT
switch.
In addition, in commit aeba855b07,
SIGIO was blocked during delivery of these events, but not for the entire
period the xf86Events array was being used. Block SIGIO for the whole loop
to avoid other event delivery from trashing the key release events.
(cherry picked from commit aa7ed1f5f3)
Previously, the server version reported by xdpyinfo and Xorg -version would
bear some vague resemblance to an X.Org katamari version, but in the presence
of modularization (and client-server relationships with different katamari
versions on each side) those numbers don't really make sense. Instead, just
report the package version.
When branching a stable branch, master's version should be immediately updated
to the endpoint of the stable branch plus a snapshot of 1 (for example,
1.4.0.1 after server-1.4-branch). The stable branch should then be changed to
RC0 at that time (1.3.99.0, for example).
This scheme was partially attempted for server 1.3, but lacked the appropriate
master updates, which is why it is being revisited now. While here, we can also
remove a lot of versioning complexity since everything is based on the package
version.
One of these I introduced by listing dix and mi in the same library list to
simplify other servers. The other had been hacked around using libosandcommon,
which is now gone.
This cleans up server Makefile.ams a little bit, but also means that people
messing with configure.ac need to be careful with whether they put libraries
in the _LIBS or _SYS_LIBS targets. Hopefully the comment in configure.ac will
clarify the issues.
The author of the int10 code looked at the VBIOS POSTing code
in DOSEMU to get some initial ideas on how to POST a VBIOS.
To give credit to the DOSEMU Team for this inspiration, a comment
was added to the code, but its wording could suggest that code from the
GPLed DOSEMU was directly incorporated into this code.
This patch should clarify the situation.
Note that pciaccess doesn't yet have Net/OpenBSD support, but the relevant
code should go there instead of disconnected code in the X Server.
While here, remove the now-disabled INCLUDE_XF86_NO_DOMAIN from the headers,
and un-disable xf86StdAccResFromOS for those OSes without domain support which
will need it.
over to the new system.
Need to update documentation and address some remaining vestiges of the
old system, such as the CursorRec structure, the fb "offman" structure, and
FontRec privates.
This was a bunch of poorly defined resource ranges per OS/platform combination
which were supposed to represent what regions could potentially have resources
allocated into them.
Composite's automatic redirection is a more general mechanism than the
ad-hoc BS machinery, so it's much prettier to implement the one in terms
of the other. Composite now wraps ChangeWindowAttributes and activates
automatic redirection for windows with backing store requested. The old
backing store infrastructure is completely gutted: ABI-visible structures
retain the function pointers, but they never get called, and all the
open-coded conditionals throughout the DIX layer to implement BS are gone.
Note that this is still not a strictly complete implementation of backing
store, since Composite will throw the bits away on unmap and therefore
WhenMapped and Always hints are equivalent.
xf86RandR12ScreenSetSize must protect calls to EnableDisableFBAccess with
suitable vtSema checks to avoid invoking driver code while the X server is
inactive.
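The shape of such a check (a sketch under the old scrnIndex-based API, not the exact diff):

    if (pScrn->vtSema)
        xf86EnableDisableFBAccess(pScreen->myNum, FALSE);

    /* ... resize the root pixmap / framebuffer ... */

    if (pScrn->vtSema)
        xf86EnableDisableFBAccess(pScreen->myNum, TRUE);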
The multi-crtc cursor code in hw/xfree86/modes holds a reference to the
current cursor. This reference must be correctly ref counted so the cursor
is not freed out from underneath this code.
This is where they should have been in the first place. All the rest of
the code in the server defines such things in the source files, not the
headers.
A bunch of CFLAGS had gone missing, so the build failed with errors like:
../../../../../hw/xfree86/os-support/linux/lnx_ev56.c:7:19: error: input.h: No such file or directory
../../../../../hw/xfree86/os-support/linux/lnx_ev56.c:8:24: error: scrnintstr.h: No such file or directory
As a result, we can remove the quirks that existed to flip the bits back around
for us. This is not confirmed in all cases due to lack of bugs containing EDID
blocks associated with the quirks, but is likely true.
The outport is most likely unnecessary on any currently used hardware; the
byte copy is, as far as I know, necessary on IA64 and friends, so leave it.
Add a new API entry point which lets a driver select the old behaviour if
such a need is ever found.
This gives me ~20% speed up on startup on 945 hardware.
If NoAutoAddDevices is given as a server flag, then no devices will be added
from HAL events at all. If NoAutoEnableDevices is given, then the devices will
be added (and the DevicePresenceNotify sent), but not enabled, thus leaving
policy up to the client.
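For example, in xorg.conf (option spellings taken from the description above; treat the exact names as assumptions):

    Section "ServerFlags"
        Option "NoAutoAddDevices"    "true"
        Option "NoAutoEnableDevices" "true"
    EndSection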