ms_crtc_msc_to_kernel_msc attempts to work around kernel
inconsistencies in reporting msc values by comparing the expected
value with the reported value. If the kernel fails to
actually provide its current values, then just skip the workaround
steps, as there's really nothing better we can do.
Signed-off-by: Keith Packard <keithp@keithp.com>
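A minimal sketch of the resulting flow (the vblank_offset field, the
ms_get_kernel_ust_msc() helper, and the 16384 threshold are assumptions
for illustration, not a definitive implementation):

    uint32_t
    ms_crtc_msc_to_kernel_msc(xf86CrtcPtr crtc, uint64_t expect)
    {
        drmmode_crtc_private_ptr drmmode_crtc = crtc->driver_private;
        uint64_t msc, ust;

        /* If the kernel can't report its current values, skip the
         * workaround: there is nothing to compare against. */
        if (ms_get_kernel_ust_msc(crtc, &msc, &ust)) {
            int64_t diff = (int64_t)(expect - msc);

            /* Reported msc is wildly off from the expected value;
             * fold the difference into a per-CRTC correction. */
            if (diff < -16384 || diff > 16384)
                drmmode_crtc->vblank_offset += diff;
        }
        return (uint32_t)(expect - drmmode_crtc->vblank_offset);
    }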
This is derived from the intel driver DRI2 code, with swapchain and
pageflipping dropped, functions renamed, and vblank event management
shared code moved to a vblank.c for reuse by Present.
This allows AIGLX to load, which means that you get appropriate
visuals exposed in GL, along with many extensions under
direct-rendering that require presence in GLX (which aren't supported
in glxdriswrast.c).
v2: Drop unused header includes in pageflip.c, wrap in #ifdef GLAMOR.
Drop triple-buffering, which was totally broken in practice (I'll
try to fix this later). Fix up some style nits. Document the
general flow of pageflipping and why, rename the DRI2 frame event
type enums to reflect what they're for, and handle them in a
single switch statement so you can understand the state machine
more easily.
v3: Drop pageflipping entirely -- it's unstable on my Intel laptop
(not that the normal 2D driver is stable with pageflipping for
me), and I won't get it fixed before the merge window. It now
passes all of the OML_sync_control tests from Jamey and Theo
(except for occasional warns in timing -fullscreen -divisor 2).
v4: Fix doxygen at the top of vblank.c
Signed-off-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Adam Jackson <ajax@redhat.com>
This renames dumb_get_bo_from_handle(), since it wasn't using a handle
(GEM terminology) but a dmabuf fd.
Signed-off-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
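A hedged sketch of the helper after the rename (the new name
dumb_get_bo_from_fd() and the struct fields are assumed here;
drmPrimeFDToHandle() is the real libdrm import call):

    #include <stdlib.h>
    #include <stdint.h>
    #include <xf86drm.h>

    struct dumb_bo {
        uint32_t handle;
        uint32_t pitch;
        uint32_t size;
    };

    struct dumb_bo *
    dumb_get_bo_from_fd(int fd, int prime_fd, int pitch, int size)
    {
        struct dumb_bo *bo = calloc(1, sizeof(*bo));

        if (!bo)
            return NULL;

        /* The argument is a dmabuf fd, not a GEM handle: import it
         * to obtain the handle the old name implied we already had. */
        if (drmPrimeFDToHandle(fd, prime_fd, &bo->handle)) {
            free(bo);
            return NULL;
        }
        bo->pitch = pitch;
        bo->size = size;
        return bo;
    }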
By default modesetting now tries to enable X acceleration using
glamor, but falls back to normal shadowfb if GL fails to initialize.
Signed-off-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Keith Packard <keithp@keithp.com>
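Roughly, the fallback described above amounts to this sketch (the
ms_try_enable_glamor() wrapper is hypothetical; glamor_egl_init() and
xf86LoadSubModule() are the real entry points):

    static Bool
    ms_try_enable_glamor(ScrnInfoPtr pScrn, int fd)
    {
    #ifdef GLAMOR
        if (!xf86LoadSubModule(pScrn, "glamor"))
            return FALSE;

        if (!glamor_egl_init(pScrn, fd)) {
            xf86DrvMsg(pScrn->scrnIndex, X_ERROR,
                       "glamor initialization failed, "
                       "falling back to shadowfb\n");
            return FALSE;
        }
        return TRUE;
    #else
        return FALSE;
    #endif
    }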
As I was editing code, the top-level .dir-locals.el was making my new
stuff conflict with the existing style. Make it consistently use the
xorg style, instead.
Signed-off-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Keith Packard <keithp@keithp.com>
v2: Fix libdrm version check, and use XORG_VERSION_* instead of a
static 1.0.0 version for the driver module.
Signed-off-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Keith Packard <keithp@keithp.com>
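The module version record then looks something like this sketch, using
the xorgVersion.h macros (the surrounding fields are the standard xorg
module boilerplate):

    static XF86ModuleVersionInfo Versions = {
        "modesetting",
        MODULEVENDORSTRING,
        MODINFOSTRING1,
        MODINFOSTRING2,
        XORG_VERSION_CURRENT,
        XORG_VERSION_MAJOR,     /* was a hardcoded 1 */
        XORG_VERSION_MINOR,     /* was a hardcoded 0 */
        XORG_VERSION_PATCH,     /* was a hardcoded 0 */
        ABI_CLASS_VIDEODRV,
        ABI_VIDEODRV_VERSION,
        MOD_CLASS_VIDEODRV,
        {0, 0, 0, 0}
    };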
The server will always have it.
v2: Clean up some weird formatting from the unifdeffing.
Signed-off-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Keith Packard <keithp@keithp.com>
On something like cirrus, start X, then attempt to start a second
X while the first is running: if fbdev is installed, the second
server fails hard.
Signed-off-by: Dave Airlie <airlied@redhat.com>
Newer Linux kernels support DSI outputs. To be able to identify them
properly, add DSI to the list of output names.
Reviewed-by: Aaron Plattner <aplattner@nvidia.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
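A sketch of the table with the new entry appended (the list follows the
kernel's DRM_MODE_CONNECTOR_* ordering; treat the exact names here as
illustrative):

    char *output_names[] = {
        "None", "VGA", "DVI", "DVI", "DVI",
        "Composite", "S-video", "LVDS", "CTV", "DIN",
        "DisplayPort", "HDMI", "HDMI", "TV", "eDP",
        "Virtual",
        "DSI",          /* DRM_MODE_CONNECTOR_DSI */
    };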
This array isn't used anywhere outside this file, so it can be made
static. While at it, make the array const as well.
Signed-off-by: Thierry Reding <treding@nvidia.com>
Reviewed-by: Aaron Plattner <aplattner@nvidia.com>
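After the cleanup the declaration reads (sketch):

    static const char *output_names[] = {
        /* ... same entries as above ... */
    };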
Add const to any immutable string pointers.
Rename 'range' to 'prop_range' to avoid a redefinition warning.
Eliminate some unused return values.
Signed-off-by: Keith Packard <keithp@keithp.com>
When SDL called this it was totally broken; actually hook it
up to the underlying drmmode function.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=64808
Thanks to Peter Wu <lekensteyn@gmail.com> for harassing me.
Signed-off-by: Dave Airlie <airlied@redhat.com>
If a device is not primary, the PCI device match fails because the
xf86-video-modesetting driver looks specifically for a PCI class match of
0x30000 with a mask of 0xffffff. This fails to match, for example, a
non-primary Intel VGA device, because it is reported as having a class of
0x38000.
Fix that by ignoring the low 16 bits of the class in the pci_id_match table.
Signed-off-by: Aaron Plattner <aplattner@nvidia.com>
Reviewed on IRC by Adam Jackson <ajax@redhat.com>
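A sketch of the corrected match entry (pciaccess's struct pci_id_match
field order is vendor, device, subvendor, subdevice, class, class mask,
match data; the table name is illustrative):

    static const struct pci_id_match ms_device_match[] = {
        {
            PCI_MATCH_ANY, PCI_MATCH_ANY,   /* vendor, device */
            PCI_MATCH_ANY, PCI_MATCH_ANY,   /* subvendor, subdevice */
            0x00030000,                     /* display controller */
            0x00ff0000,                     /* mask: was 0x00ffffff */
            0
        },
        { 0, 0, 0, 0, 0, 0, 0 },
    };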
A fixed-mode output device like a panel will often only inform of its
preferred mode through its EDID. However, the driver will adjust user
specified modes for display through use of a panel-fitter allowing
greater flexibility in upscaling. This is often used by games to set a
low resolution for performance and use the panel fitter to fill the
screen.
v2: Use the presence of the 'scaling mode' connector property as an
indication that a panel fitter is attached to that pipe.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=55564
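A sketch of the v2 check (the helper name is hypothetical; the libdrm
property calls are real):

    static Bool
    connector_has_panel_fitter(int fd, drmModeConnectorPtr koutput)
    {
        int i;

        for (i = 0; i < koutput->count_props; i++) {
            drmModePropertyPtr prop =
                drmModeGetProperty(fd, koutput->props[i]);
            Bool found;

            if (!prop)
                continue;
            found = strcmp(prop->name, "scaling mode") == 0;
            drmModeFreeProperty(prop);
            if (found)
                return TRUE;
        }
        return FALSE;
    }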
So the kernel removes the device, the driver processes the first
udev event and gets no outputs back from the kernel; check for
that and don't fall over.
This fixes a couple of crashes seen when hotplugging USB devices.
Signed-off-by: Dave Airlie <airlied@redhat.com>
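The guard amounts to something like this sketch (variable names
illustrative):

    drmModeResPtr mode_res = drmModeGetResources(ms->fd);
    if (!mode_res)
        return;     /* device already gone; don't fall over */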
Recent versions of the X server no longer provide this function, which
has been obsolete for over 2 years now.
Signed-off-by: Thierry Reding <thierry.reding@avionic-design.de>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
This allows the driver to operate as an output slave.
It adds scanout pixmap support, along with the capability
checks to make sure it's available.
Signed-off-by: Dave Airlie <airlied@redhat.com>
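A sketch of the capability check (ms->fd as the driver's DRM fd is an
assumed name; drmGetCap(), DRM_CAP_PRIME, and RR_Capability_SinkOutput
are the real interfaces):

    uint64_t value;

    pScrn->capabilities = 0;
    #ifdef DRM_CAP_PRIME
    if (drmGetCap(ms->fd, DRM_CAP_PRIME, &value) == 0 &&
        (value & DRM_PRIME_CAP_IMPORT))
        pScrn->capabilities |= RR_Capability_SinkOutput;
    #endif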