This allows the server to guess an appropriate initial virtual size and
resolution. The heuristic is to select the largest driver-reported mode
that matches the monitor's physical aspect ratio. We revalidate this
estimate after mode validation, since we may have filtered away all
modes that would fill that size.
Also, the EDID preferred timing is now marked as M_T_PREFERRED.
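For illustration, a minimal sketch of that heuristic with invented names
(guess_initial_size, struct mode) rather than the server's real data
structures: walk the driver's mode list and keep the largest mode whose
aspect ratio matches the monitor's physical aspect ratio.

    /* Sketch only: pick the largest driver-reported mode whose aspect ratio
     * matches the monitor's physical aspect ratio (from EDID, in mm). */
    #include <math.h>

    struct mode {
        int width, height;
        struct mode *next;
    };

    static struct mode *
    guess_initial_size(struct mode *modes, int mm_width, int mm_height)
    {
        double target = (double) mm_width / mm_height;
        struct mode *best = NULL;

        for (struct mode *m = modes; m; m = m->next) {
            double ratio = (double) m->width / m->height;

            /* allow a little slack for rounded-off panel sizes */
            if (fabs(ratio - target) > 0.05)
                continue;
            if (!best || m->width * m->height > best->width * best->height)
                best = m;
        }
        return best;    /* re-checked after mode validation, as noted above */
    }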
Always add a mouse driver instance configured to send core events, unless
a core pointer already exists using either the mouse or void drivers. This
handles the laptop case where the config file only specifies, say,
synaptics, which causes the touchpad to work but not the pointing stick.
We don't double-instantiate the mouse driver to avoid the mouse moving twice
as fast, and we skip this logic when the user asked for a void core pointer
since that probably means they want to run with no pointer at all.
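A hedged sketch of that rule in C, with made-up names (struct input_dev,
need_implicit_core_mouse); it only restates the decision described above,
not the actual config-parsing code:

    #include <stdbool.h>
    #include <string.h>

    struct input_dev {
        const char *driver;          /* e.g. "mouse", "synaptics", "void" */
        bool send_core_events;
        struct input_dev *next;
    };

    /* Add an implicit core-events mouse only if no core pointer already
     * uses the mouse or void driver. */
    static bool
    need_implicit_core_mouse(const struct input_dev *devs)
    {
        for (const struct input_dev *d = devs; d; d = d->next) {
            if (!d->send_core_events)
                continue;
            if (strcmp(d->driver, "mouse") == 0)
                return false;        /* avoid doubling pointer motion */
            if (strcmp(d->driver, "void") == 0)
                return false;        /* user asked for no pointer */
        }
        return true;
    }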
Base EDID only lets you specify the maximum dotclock in tens of MHz, which
is too fuzzy for some monitors. 1600x1200@60 is just over 160MHz, but if
the monitor really can't handle any mode at 170MHz, then 160 is more
correct. Fix up the EDID block in this case, before the driver can see it,
so we don't spuriously reject modes.
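One plausible shape of that fix-up, sketched with hypothetical structures
(parsed_edid, detailed_timing): if a detailed timing needs more than the
coarse stated limit, raise the limit to cover it, since the field was only
ever accurate to 10MHz anyway.

    struct detailed_timing { int clock; /* kHz */ };

    struct parsed_edid {
        int max_clock;               /* kHz, parsed from the 10 MHz field */
        struct detailed_timing dt[4];
        int num_dt;
    };

    static void
    fixup_max_clock(struct parsed_edid *edid)
    {
        for (int i = 0; i < edid->num_dt; i++) {
            if (edid->dt[i].clock > edid->max_clock)
                edid->max_clock = edid->dt[i].clock;
        }
    }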
The X gamma is used to set the output ramp of the card. Setting a 2.2 output
gamma going into a 2.2 monitor gives an effective gamma of 4.84, which is
very much not what you want.
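The 4.84 comes from gamma exponents composing multiplicatively: a 2.2 ramp
into a 2.2 monitor is x^(2.2*2.2). A tiny standalone illustration (not
server code):

    #include <stdio.h>
    #include <math.h>

    int
    main(void)
    {
        double card = 2.2, monitor = 2.2, x = 0.5;   /* mid-grey input */

        double on_screen = pow(pow(x, card), monitor);

        printf("effective gamma: %.2f\n", card * monitor);   /* 4.84 */
        printf("0.5 displays as %.4f, vs %.4f on the monitor alone\n",
               on_screen, pow(x, monitor));
        return 0;
    }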
broken for any 32-bit X server running on a 64-bit kernel) so #ifdef
them out for now. The PCI rework tree will make all this crap go away,
so I think we can tolerate the extra #ifdef for the next release.
instead of `/bin/sh /etc/init.d/xprint get_xpserverlist`
- allows the initscript to choose its own shell via its #! line
- allows disabling of XPSERVERLIST by making the script non-executable
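The same idea expressed as a C sketch (the real consumers are shell
fragments in the session scripts, so this is illustrative only): run the
script itself so the kernel honors its #! line, and treat a missing
execute bit as "disabled".

    #include <stdio.h>
    #include <unistd.h>

    static void
    print_xpserverlist(void)
    {
        const char *script = "/etc/init.d/xprint";
        char buf[256];

        if (access(script, X_OK) != 0)
            return;                      /* non-executable: feature disabled */

        FILE *p = popen("/etc/init.d/xprint get_xpserverlist", "r");
        if (!p)
            return;
        while (fgets(buf, sizeof(buf), p))
            fputs(buf, stdout);          /* the Xprint server list */
        pclose(p);
    }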
* Allow files to be installed by using dist_*_DATA instead of EXTRA_DIST.
Also, use dist_*_SCRIPTS to install scripts.
* Fix minor typos in man pages.
There were two sets of bugs in the vertex program (ARB and NV)
protocol. First, several of the ARB functions were missing the
'doubles_in_order="true"' annotation. Second, after the ARB decided
that glVertexAttrib*ARB functions must not alias fixed-function state
for GLSL, Nvidia re-assigned GLX protocol opcodes for
glVertexAttrib*NV (circa September 2004). For some reason gl_API.xml
was never updated to reflect this, and the updated version of the
GL_NV_vertex_program spec never made it into the registry.
This is just a server-side regeneration from gl_API.xml version 1.68.
GLX protocol isn't supported for GLX_SGI_swap_control or
GLX_SGI_video_sync. Remove them from the list of available extensions
until they are supported.
Re-generate from gl_API.xml 1.65. This provides the missing bits for
GL_EXT_texture_filter_anisotropic and GL_EXT_blend_equation_separate.
Enable those extensions.
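Client-side usage of the two newly enabled extensions, for reference;
assumes the context advertises them and that GL_GLEXT_PROTOTYPES is
acceptable (otherwise resolve the entry points through glXGetProcAddress):

    #define GL_GLEXT_PROTOTYPES
    #include <GL/gl.h>
    #include <GL/glext.h>

    static void
    use_new_extensions(void)
    {
        /* GL_EXT_texture_filter_anisotropic on the bound texture */
        GLfloat max_aniso = 1.0f;
        glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &max_aniso);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, max_aniso);

        /* GL_EXT_blend_equation_separate: different equations for RGB and alpha */
        glBlendEquationSeparateEXT(GL_FUNC_ADD, GL_FUNC_REVERSE_SUBTRACT);
    }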
Implement glGetProgramStringARB and glGetProgramStringNV. With these
functions implemented, GL_ARB_{vertex,fragment}_program,
GL_NV_{vertex,fragment}_program, and related extensions can be enabled.
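A client-side sketch that exercises the new readback path; assumes
GL_GLEXT_PROTOTYPES (otherwise use glXGetProcAddress):

    #define GL_GLEXT_PROTOTYPES
    #include <GL/gl.h>
    #include <GL/glext.h>
    #include <stdlib.h>

    /* Read back the source of the currently bound ARB vertex program. */
    static char *
    read_back_vertex_program(void)
    {
        GLint len = 0;
        glGetProgramivARB(GL_VERTEX_PROGRAM_ARB, GL_PROGRAM_LENGTH_ARB, &len);

        char *src = malloc(len + 1);
        if (!src)
            return NULL;
        glGetProgramStringARB(GL_VERTEX_PROGRAM_ARB, GL_PROGRAM_STRING_ARB, src);
        src[len] = '\0';
        return src;                      /* caller frees */
    }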
Fill in __glXDisp_GetCompressedTexImageARB and
__glXDispSwap_GetCompressedTexImageARB to finish support for
GL_ARB_texture_compression. With this extension (and the related
compression extensions), the server-side GLX supports all of the
protocol for GL 1.4. w00t!
The bad news is that this has received only minimal testing, and Mesa
does not contain any good tests for GL_ARB_texture_compression.
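For anyone wanting to poke at it, a minimal client-side round trip through
the new path (assumes GL_GLEXT_PROTOTYPES and an already-uploaded
compressed texture):

    #define GL_GLEXT_PROTOTYPES
    #include <GL/gl.h>
    #include <GL/glext.h>
    #include <stdlib.h>

    /* Read back one compressed mipmap level of the bound 2D texture. */
    static void *
    read_back_compressed_level(GLint level, GLint *size_out)
    {
        GLint size = 0;
        glGetTexLevelParameteriv(GL_TEXTURE_2D, level,
                                 GL_TEXTURE_COMPRESSED_IMAGE_SIZE_ARB, &size);

        void *img = malloc(size);
        if (img)
            glGetCompressedTexImageARB(GL_TEXTURE_2D, level, img);

        *size_out = size;
        return img;                      /* caller frees */
    }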
gl_API.xml 1.63 corrects some problems with GLX protocol for
GL_EXT_paletted_texture and GL_SGI_color_table. Regenerate from that
file, and enable those extensions and GL_EXT_shared_texture_palette.
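Typical client-side use of the now-enabled paletted texture path, for
reference (assumes GL_GLEXT_PROTOTYPES; normally you would fetch
glColorTableEXT via glXGetProcAddress):

    #define GL_GLEXT_PROTOTYPES
    #include <GL/gl.h>
    #include <GL/glext.h>

    /* Upload a 256-entry RGBA palette and an 8-bit index texture. */
    static void
    upload_paletted_texture(const unsigned char palette[256 * 4],
                            const unsigned char *indices, int w, int h)
    {
        glColorTableEXT(GL_TEXTURE_2D, GL_RGBA8, 256,
                        GL_RGBA, GL_UNSIGNED_BYTE, palette);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_COLOR_INDEX8_EXT, w, h, 0,
                     GL_COLOR_INDEX, GL_UNSIGNED_BYTE, indices);
    }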