This was causing random crashes, because the memory contents were random.
Use xcalloc so the allocation is zeroed, which also guards against the
same mistake in future changes.
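A minimal sketch of the difference, with stand-in definitions for the
server's xalloc/xcalloc wrappers and a hypothetical struct:

    #include <stdlib.h>

    /* Stand-ins for the server's allocation wrappers. */
    #define xalloc(size)       malloc(size)
    #define xcalloc(num, size) calloc(num, size)

    struct visualConfig { int drawableType; int maxPbufferWidth; };

    int main(void)
    {
        /* xalloc returns uninitialized memory: any field not set
         * explicitly afterwards holds random contents. */
        struct visualConfig *bad = xalloc(sizeof(*bad));

        /* xcalloc zero-fills, so fields added later (or forgotten)
         * default to 0 instead of garbage. */
        struct visualConfig *good = xcalloc(1, sizeof(*good));

        free(bad);
        free(good);
        return 0;
    }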
Set the drawableType to include GLX_PIXMAP_BIT and GLX_PBUFFER_BIT.
The new libGL supports these.
Set the max Pbuffer width/height based on the results of a test program.
We may want to revisit this someday, depending on what users need, by
creating a CGLContextObj, making it current, and calling glGetIntegerv
to gather the information at runtime.
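If we do that, the runtime query would look roughly like this (a sketch
assuming the CGL API; using GL_MAX_TEXTURE_SIZE as a stand-in for the
Pbuffer limit is an assumption):

    #include <OpenGL/OpenGL.h>
    #include <OpenGL/gl.h>

    /* Create a throwaway context, make it current, query a limit. */
    static GLint queryMaxPbufferDim(void)
    {
        CGLPixelFormatAttribute attribs[] = { (CGLPixelFormatAttribute)0 };
        CGLPixelFormatObj pix;
        CGLContextObj ctx;
        GLint npix, maxDim = 0;

        if (CGLChoosePixelFormat(attribs, &pix, &npix) != kCGLNoError || !pix)
            return 0;
        if (CGLCreateContext(pix, NULL, &ctx) == kCGLNoError) {
            CGLSetCurrentContext(ctx);
            glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxDim);
            CGLSetCurrentContext(NULL);
            CGLDestroyContext(ctx);
        }
        CGLDestroyPixelFormat(pix);
        return maxDim;
    }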
(cherry picked from commit c7e3383309)
The Extensions section was added in X11R6.8.0 and documented in the release notes:
http://www.x.org/archive/X11R6.8.0/doc/RELNOTES2.html#3
but never made it into the man page.
Also fix a bonus typo.
Signed-off-by: Alan Coopersmith <alan.coopersmith@sun.com>
It had a copy-and-paste mistake that I didn't notice. :/
It was using the CreatePixmapReq.
Also add a missing B16 after the length field of the DestroyPixmapReq struct.
Now the AppleDRIDestroyPixmap request seems to work.
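For reference, a sketch of the corrected request struct (layout assumed
from the CreatePixmap counterpart; B16/B32 are the X protocol byte-swap
annotations from Xmd.h, normally empty macros):

    #include <X11/Xmd.h>    /* CARD8, CARD16, CARD32, B16, B32 */

    typedef struct _AppleDRIDestroyPixmap {
        CARD8  reqType;
        CARD8  driReqType;
        CARD16 length B16;      /* the B16 here was missing */
        CARD32 drawable B32;
    } xAppleDRIDestroyPixmapReq;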
(cherry picked from commit 295fe25bd8)
Rather than compiling a new keymap every time InitKeyboardDeviceStruct
is called, cache the previous keymap and reuse it if the rules have not
changed.
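The caching boils down to this pattern (a sketch with hypothetical names;
the real code compares the full set of rules components):

    #include <stdlib.h>
    #include <string.h>

    struct CompiledKeymap;                                    /* opaque */
    struct CompiledKeymap *CompileKeymap(const char *rules);  /* expensive */

    static char *cachedRules;
    static struct CompiledKeymap *cachedMap;

    struct CompiledKeymap *GetKeymap(const char *rules)
    {
        /* Cache hit: the rules are unchanged, reuse the keymap. */
        if (cachedMap && cachedRules && strcmp(cachedRules, rules) == 0)
            return cachedMap;

        /* Cache miss: remember the rules and recompile. */
        free(cachedRules);
        cachedRules = strdup(rules);
        cachedMap = CompileKeymap(rules);
        return cachedMap;
    }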
Signed-off-by: Dan Nicholson <dbn.lists@gmail.com>
Acked-by: Daniel Stone <daniel@fooishbar.org>
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
This option was only used to provide a list of input devices that
XF86-Misc could use; now that XF86-Misc is gone, it was parsed and
logged, then completely ignored.
(Depends on the previous patch that introduces OBSOLETE_TOKEN in the
parser to make obsolete keywords like InputDevices & RgbPath non-fatal
errors.)
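The OBSOLETE_TOKEN idea, sketched with hypothetical names (the real
parser's tables and error paths differ):

    #include <stdio.h>
    #include <strings.h>

    enum { NORMAL_TOKEN, OBSOLETE_TOKEN };

    static const struct { const char *name; int flags; } keywords[] = {
        { "FontPath",     NORMAL_TOKEN   },
        { "RgbPath",      OBSOLETE_TOKEN },  /* warned about, ignored */
        { "InputDevices", OBSOLETE_TOKEN },
        { NULL, 0 }
    };

    /* Returns 1 if the keyword was recognized (even if ignored). */
    static int HandleKeyword(const char *kw)
    {
        int i;
        for (i = 0; keywords[i].name; i++)
            if (strcasecmp(kw, keywords[i].name) == 0) {
                if (keywords[i].flags == OBSOLETE_TOKEN) {
                    fprintf(stderr, "Ignoring obsolete keyword \"%s\".\n", kw);
                    return 1;   /* non-fatal: accepted but ignored */
                }
                return 1;       /* normal keyword: parse as usual */
            }
        return 0;               /* unknown keyword: still an error */
    }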
Signed-off-by: Alan Coopersmith <alan.coopersmith@sun.com>
Acked-by: Adam Jackson <ajax@redhat.com>
Xorg shouldn't refuse to run just because the user has an xorg.conf that
had the previously-used RgbPath keyword in it.
Signed-off-by: Alan Coopersmith <alan.coopersmith@sun.com>
Acked-by: Peter Hutterer <peter.hutterer@who-t.net>
Having an update of the proto headers result in the server advertising
new extension versions is broken. The same change should be applied to
every extension.
This fixes the build against slightly-older xineramaproto.
When the crtc transformation changes, the entire crtc must be repainted.
This was being done by clearing the shadow and then painting the rectangle
containing the screen image; the clear being required as the screen image
may not fill the crtc. When changing the transform rapidly, this leads to
flashing. Eliminate the clear by painting the entire crtc instead of just
the screen rectangle.
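Conceptually (a hypothetical sketch, not the actual xf86Rotate code):

    /* Repaint the whole crtc instead of clearing the shadow and
     * painting only the screen rectangle; the repaint then covers
     * areas the screen image doesn't fill, with no visible flash.
     * crtc_width/crtc_height/RepaintCrtcBox are hypothetical. */
    BoxRec box = {
        .x1 = 0, .y1 = 0,
        .x2 = crtc_width(crtc),
        .y2 = crtc_height(crtc),
    };
    RepaintCrtcBox(crtc, &box);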
Signed-off-by: Keith Packard <keithp@keithp.com>
Just return a zeroed-out reply in that case. This is unambiguous, and
distinguishes "you didn't name a CRTC" from "you named a CRTC that can't
do panning".
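Sketched (reply setup abbreviated; the real code also handles byte
swapping for swapped clients):

    /* Answer RRGetPanning for a crtc without panning support with an
     * all-zero reply: every panning/tracking/border field reads 0. */
    xRRGetPanningReply rep;

    memset(&rep, 0, sizeof(rep));
    rep.type = X_Reply;
    rep.sequenceNumber = client->sequence;
    rep.length = (sizeof(rep) - 32) >> 2;   /* words after the header */
    WriteToClient(client, sizeof(rep), (char *) &rep);
    return Success;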
This reverts commit 97c1cbc702.
- Sorry for the thinko; pending damage is often not fragmented.
- Should the dst region become fragmented, you actually want to copy more in order to defragment it.
This involved wrapping some GCOps to get the proper behavior
when using X11 raster ops mixed with OpenGL (see driWrap.c).
This extends the AppleDRI protocol with create and destroy pixmap
functions.
The dri.c code has been extended quite a bit to enable this, and
to initialize the wrapping of CreateGC for GCOps.
This has been tested with tests/glxpixmap and proven to work with
the new libGL. Existing applications seem to work fine too. Redraws
all appear to be correct.
There may be some bugs lurking that I haven't found yet. I plan
to drive them out by extending the libGL test suite.
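The wrapping in driWrap.c follows the server's usual unwrap/call/rewrap
pattern; a sketch with one op (illustrative names; a real implementation
stores the saved ops in a per-GC private rather than a static):

    static GCOps driGCOps;      /* table of wrappers like the one below */
    static GCOps *savedOps;     /* ops that were in place before wrapping */

    static RegionPtr
    driCopyArea(DrawablePtr pSrc, DrawablePtr pDst, GCPtr pGC,
                int srcx, int srcy, int w, int h, int dstx, int dsty)
    {
        RegionPtr ret;

        /* ...synchronize any pending OpenGL rendering here... */

        pGC->ops = savedOps;                    /* unwrap */
        ret = pGC->ops->CopyArea(pSrc, pDst, pGC,
                                 srcx, srcy, w, h, dstx, dsty);
        pGC->ops = &driGCOps;                   /* rewrap */
        return ret;
    }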
(cherry picked from commit 630518766b)
a9d7d659.. (PCI: Remove pciBusAddrToHostAddr and associated nonsense)
removes pciBusAddrToHostAddr(), but not its prototype, resulting in:
./.libs/libxorg.a(sdksyms.o):(.data.rel+0xe64): undefined reference to
`pciBusAddrToHostAddr'
Signed-off-by: Chris Ball <cjb@laptop.org>
base_color and label_color need to reference the color in the destination, not
in the source.
X.Org Bug 20081 <http://bugs.freedesktop.org/show_bug.cgi?id=20081>
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
Signed-off-by: Daniel Stone <daniel@fooishbar.org>
This was all a glorified no-op. We rely on pciaccess to create device
maps anyway, so we should have no reason to care about what the host
address is.
Acked-by: Ian Romanick <ian.d.romanick at intel.com>
Signed-off-by: Adam Jackson <ajax@redhat.com>
- The src optimisation is more aggressive and possibly harmful in light of the new initial state of pixmaps.
- There is now actually a performance improvement by almost always keeping the number of rects low.
Rather, modify the two callers to call separately for the two different
events. Unexport SetMaskForEvent too.
And while we're at it, get rid of the MotionFilter macro, because it's one
half confusing and one half pointless.
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
They end up being the same anyway on startup, so let's not have a dynamic mask
assignment mechanism and instead just hardcode them already.
Also unexport SelectForWindow and remove the valid_masks parameter. We can
check that before calling, since there's only one caller anyway.
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
This is the RandR 1.1 version of GetScreenResources and needs to re-query the
DDX to see if the mode pool changed.
Fixes Launchpad bug #325115.
Signed-off-by: Adam Jackson <ajax@redhat.com>
(cherry picked from commit 660c2a7d4c)
All you get for standard timing descriptors is horizontal size in
multiples of 8 pixels (which means you can't say 1366) and height in
terms of aspect ratio (which means you can't say 768). You'd like to
just fuzzy-match this by walking the DMT list for sufficiently close
modes, but you can't because DMT is useless and only defines a 1360x768
mode, because it's _also_ specified in terms of character cells despite
providing pixel exact timings. Neither can you use CVT or GTF to
generate the timings, because they _also_ believe that modes have to be
a multiple of 8 pixels.
You'd also hope you could find a timing definition for this in CEA, but
you can't because CEA only defines transmission formats that actually
exist. So there's 480p, 720p, and 1080p, but no 768p. And why would
there be, after all, the encoded signal is never 768p so obviously no
one would ever make a display in that format.
So instead, make a CVT mode since that's likely to be handled well by
just about everything, smash the horizontal active down by 2, and shift
the sync pulse by 1. Underscanning the hard way.
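In code the trick is roughly (xf86CVTMode is the server's CVT generator;
the exact placement in the EDID path is elided):

    /* CVT can't express 1366, so generate 1368x768 and then narrow
     * the active area by 2, shifting the sync pulse by 1 so it stays
     * centered. */
    DisplayModePtr mode = xf86CVTMode(1368, 768, 60.0f, FALSE, FALSE);

    mode->HDisplay    = 1366;
    mode->HSyncStart -= 1;
    mode->HSyncEnd   -= 1;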
Pass the suicide.
Otherwise drivers have to refuse interlace twice: once in the output
config, and once in ->valid_mode() to catch output and config modes.
If you can't do interlaced modes, asking nicely for it in the config
isn't going to suddenly make it work.
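The check amounts to something like this (allow_interlace standing in
for whatever the output configuration says):

    /* Reject interlaced modes once, centrally, instead of making
     * every driver refuse them twice. */
    if ((mode->Flags & V_INTERLACE) && !allow_interlace)
        return MODE_NO_INTERLACE;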
Signed-off-by: Benjamin Close <Benjamin.Close@clearchain.com>
Acked-by: Peter Hutterer <peter.hutterer@who-t.net>
Acked-by: Daniel Stone <daniel@fooishbar.org>
The algorithm is split into a 2D-specific part and a general part.
This potentially allows accelerating more than just screen motion.
A state machine is introduced to make the code more explicit and readable.
It also improves handling of 'phase 1' mickeys when axial correction
kicks in (a corner case).
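A hypothetical sketch of the state-machine shape (the real code's states
and transitions differ):

    enum MotionState {
        STATE_IDLE,       /* no recent motion */
        STATE_PHASE1,     /* first mickeys after motion starts */
        STATE_TRACKING,   /* steady motion, acceleration applies */
    };

    static enum MotionState
    NextState(enum MotionState s, int dx, int dy)
    {
        if (dx == 0 && dy == 0)
            return STATE_IDLE;
        if (s == STATE_IDLE)
            return STATE_PHASE1;  /* 'phase 1' mickeys handled specially */
        return STATE_TRACKING;
    }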
Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>