If the user sets Option "Enable" "TRUE" for a monitor, the X
server will connect the connector to a crtc but tell the user it
is disconnected.
However, the user in this case is mutter: when it gets its view
of the output configuration it sees the output as disconnected
and never sets it up again, which seems like the right thing to do.
If we let the user enable a monitor, let's just set it as always
connected.
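A rough sketch of the idea, using the xf86 output-option helpers; the
option token name and the exact place in the modes code are assumptions
here, not the literal patch:

    /* Sketch only: if the config explicitly enables this output, report
     * it as connected instead of trusting the probe result, so clients
     * like mutter keep it configured. */
    if (xf86ReturnOptValBool(output->options, OPTION_ENABLE, FALSE))
        output->status = XF86OutputStatusConnected;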
Reviewed-by: Olivier Fourdan <ofourdan@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
systemd-logind since version 234 (released 2017-07-12) supports being
restarted without losing state [1]. From the systemd NEWS file [2]:
* systemd-logind may now be restarted without losing state. It stores
the file descriptors for devices it manages in the system manager
using the FDSTORE= mechanism. Please note that further changes in
other components may be required to make use of this (for example
Xorg has code to listen for stops of systemd-logind and terminate
itself when logind is stopped or restarted, in order to avoid using
stale file descriptors for graphical devices, which is now
counterproductive and must be reverted in order for restarts of
systemd-logind to be safe. See
https://cgit.freedesktop.org/xorg/xserver/commit/?id=dc48bd653c7e101.)
This reverts commit dc48bd653c.
Closes: #531
[1] https://github.com/systemd/systemd/pull/5600
[2] 9f09a95a7e
Normally, the X server infers the initial screen size based on any
connected outputs. However, if no outputs are connected, the X server
picks a default screen size of 1024 x 768. This option overrides the
default screen size to use when no outputs are connected. In contrast
to the "Virtual" Display SubSection entry, which applies unconditionally,
"NoOutputInitialSize" is only used if no outputs are detected when the
X server starts.
Parse this option in the new exported helper function
xf86AssignNoOutputInitialSize(), so that other XFree86 loadable drivers
can use it, even if they don't use xf86InitialConfiguration().
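For illustration, the selection logic described above amounts to
something like the following (hypothetical stand-in names, not the
actual helper):

    #include <stdbool.h>

    #define DEFAULT_NO_OUTPUT_WIDTH  1024
    #define DEFAULT_NO_OUTPUT_HEIGHT 768

    /* Sketch: the override only matters when nothing is connected. */
    static void
    pick_initial_size(bool any_output_connected, bool option_present,
                      int option_width, int option_height,
                      int detected_width, int detected_height,
                      int *width, int *height)
    {
        if (any_output_connected) {
            /* Normal case: size is inferred from the connected outputs. */
            *width = detected_width;
            *height = detected_height;
        } else if (option_present) {
            /* No outputs detected: honour "NoOutputInitialSize". */
            *width = option_width;
            *height = option_height;
        } else {
            /* No outputs and no override: historical 1024x768 default. */
            *width = DEFAULT_NO_OUTPUT_WIDTH;
            *height = DEFAULT_NO_OUTPUT_HEIGHT;
        }
    }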
Signed-off-by: Andy Ritger <aritger@nvidia.com>
Reviewed-by: Keith Packard <keithp@keithp.com>
Since the Solaris kernel tracks IOPL per thread, and a newly created
thread doesn't inherit a raised IOPL, we need to turn it on
in the input thread for input drivers like vmmouse that need register
access to work correctly.
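A minimal sketch of the consequence, assuming a raise_iopl() wrapper
around whatever the platform uses (on Solaris, the server's
xf86EnableIO()) and a hypothetical input loop; the point is that the
raise has to happen inside the new thread:

    #include <pthread.h>

    extern void raise_iopl(void);  /* hypothetical platform wrapper */
    extern void input_loop(void);  /* hypothetical input processing loop */

    static void *
    input_thread(void *arg)
    {
        (void) arg;
        /* Raising IOPL in the main thread is not enough: the kernel
         * tracks it per thread, so raise it again here before the input
         * drivers touch any I/O ports. */
        raise_iopl();
        input_loop();
        return NULL;
    }

    int
    start_input_thread(void)
    {
        pthread_t tid;
        return pthread_create(&tid, NULL, input_thread, NULL);
    }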
Signed-off-by: Alan Coopersmith <alan.coopersmith@oracle.com>
Keeping track of kernel state in user space doesn't buy us anything
and introduces bugs: we were keeping global state, but the Solaris
kernel tracks IOPL per thread.
Signed-off-by: Alan Coopersmith <alan.coopersmith@oracle.com>
The VGA arbiter controls the PCI bus' routing of legacy VGA resources,
specifically the video memory aperture at 0xa0000-0xb0000 (640k should
be enough for anybody, etc.) and a handful of I/O ports. Since 128k is
far too small for a real framebuffer these days, every driver instead
maps a linear version of VRAM through the PCI BAR. And no DRI2 driver
ever needs I/O port access, because all the operations they might be
used for (legacy VGA CRTC setup, mostly) happen on the kernel side.
In other words, this just works, and we can stop breaking it.
Signed-off-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Dave Airlie <airlied@redhat.com>
All of the null checks here are redundant; you can't get to those paths
unless RANDR's already been initialized. Delete them, and remove the
pointer too.
Signed-off-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
If the driver calls xf86HandleColormaps, CMapChangeGamma updates the HW
gamma LUT of all CRTCs via xf86RandR12LoadPalette. However,
xf86RandR12ChangeGamma was then clobbering the gamma LUT of the RandR
1.2 compatibility output's CRTC with the gamma curves computed from the
screen's global gamma values.
Fix this by bailing if xf86RandR12LoadPalette is installed.
Fixes: 02ff0a5d7e "xf86RandR12: Fix XF86VidModeSetGamma triggering a
BadImplementation error"
Noticed when porting this logic to xf86-video-nouveau: valgrind
complained about a conditional jump based on uninitialized data.
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Reviewed-by: Pekka Paalanen <pekka.paalanen@collabora.com>
Gitlab very kindly exposes the details of the git commit message (among
much else) in the environment. Unfortunately, piglit tries to handle the
environment in non-UTF8-safe ways, which means if the top-of-tree commit
mentions non-ASCII characters (say, in the author's name) then all the
tests fail and so does the pipeline.
Fortunately none of those variables are things our piglit invocation
needs. Since I've failed to rebuild the docker image as yet, just clear
the likely variables from the environment before running piglit.
This-makes-me: ☹
Believe it or not, somehow we've never done this in legacy mode! We
currently simply change the DPMS property on the CRTC's output's
respective DRM connector, but this means that we're just setting the
CRTC as inactive, not disabled. From the perspective of the kernel, this
means that any shared resources used by the CRTC are still in use.
This can cause problems for drivers that are not yet fully atomic,
despite using the atomic helpers internally. For instance: if CRTC-1 and
CRTC-2 are still enabled and use shared resources within the kernel (an
MST topology, for example), and then userspace tries to enable CRTC-3
on the same topology, this might suddenly fail if CRTC-3 needs the shared
resources CRTC-1 and CRTC-2 are using. While I don't know of any
situations in the mainline kernel that actually trigger this, future
plans for reworking the atomic check of MST drivers are absolutely
going to make this into a real issue (it already is one in my WIP
branches for the kernel).
So: actually do the right thing here and disable CRTCs when they're not
going to be used anymore, even in legacy mode.
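For contrast, here is what the two operations look like with plain
libdrm (an illustration, not the server's own code path): the DPMS
property leaves the CRTC configured, while drmModeSetCrtc() with no
mode actually releases it:

    #include <stdint.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    /* DPMS off: the CRTC is merely inactive; the kernel still considers
     * its shared resources in use. */
    static int
    dpms_off(int fd, uint32_t connector_id, uint32_t dpms_prop_id)
    {
        return drmModeConnectorSetProperty(fd, connector_id, dpms_prop_id,
                                           DRM_MODE_DPMS_OFF);
    }

    /* Fully disabled: no framebuffer, no connectors, no mode. */
    static int
    crtc_disable(int fd, uint32_t crtc_id)
    {
        return drmModeSetCrtc(fd, crtc_id, 0 /* fb */, 0, 0,
                              NULL /* connectors */, 0, NULL /* mode */);
    }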
Signed-off-by: Lyude Paul <lyude@redhat.com>
Some Broadcom set-top-box boards have PCI busses, but the GPU is still
probed through DT. We would dereference a null busid here in that
case.
Signed-off-by: Eric Anholt <eric@anholt.net>
Lifted from vfb. xfree86 had almost the same thing, but unparameterized;
port it to the vfb style.
Signed-off-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Alan Coopersmith <alan.coopersmith@oracle.com>
Could cause privilege elevation and/or arbitrary file overwrites when
the X server is running with elevated privileges (i.e. when Xorg is
installed with the setuid bit set and started by a non-root user).
CVE-2018-14665
Issue reported by Narendra Shinde and Red Hat.
Signed-off-by: Matthieu Herrb <matthieu@herrb.eu>
Reviewed-by: Alan Coopersmith <alan.coopersmith@oracle.com>
Reviewed-by: Peter Hutterer <peter.hutterer@who-t.net>
Reviewed-by: Adam Jackson <ajax@redhat.com>
At the point where xf86BusProbe runs we haven't yet taken our own VT,
which means we can't perform drm "master" operations on the device. This
is tragic, because we need master to fish the bus id string out of the
kernel, which we can only do after drmSetInterfaceVersion, which for
some reason stores that string on the device, not the file handle, and
thus needs master access.
Fortunately we know the format of the busid string, and it happens to be
almost the same as the ID_PATH variable from udev. Use that instead
and stop calling drmSetInterfaceVersion.
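Roughly, the conversion described above looks like this (a sketch, not
the exact hotplug code; udev's ID_PATH for a PCI GPU is of the form
"pci-0000:01:00.0", while the drm bus id wants "pci:0000:01:00.0"):

    #include <stdio.h>
    #include <string.h>

    static int
    id_path_to_busid(const char *id_path, char *busid, size_t len)
    {
        /* Only the separator after the "pci" prefix differs. */
        if (strncmp(id_path, "pci-", 4) != 0)
            return -1;              /* not a PCI device path */
        snprintf(busid, len, "pci:%s", id_path + 4);
        return 0;
    }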
Reviewed-by: Peter Hutterer <peter.hutterer@who-t.net>
Signed-off-by: Adam Jackson <ajax@redhat.com>
If the X server is terminated while its VT is not active, it should
not change the current VT.
v2: Query current state in xf86CloseConsole using VT_GETSTATE instead of
keeping track in xf86VTEnter/xf86VTLeave/etc.
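A minimal sketch of the v2 approach, using the Linux console ioctls
directly (the surrounding variable names are hypothetical): ask the
kernel which VT is active at shutdown, and only switch away if it is
still ours:

    #include <sys/ioctl.h>
    #include <linux/vt.h>

    static void
    maybe_restore_vt(int console_fd, int our_vt, int previous_vt)
    {
        struct vt_stat vts;

        if (ioctl(console_fd, VT_GETSTATE, &vts) < 0)
            return;

        /* The user already switched away: leave the current VT alone. */
        if (vts.v_active != our_vt)
            return;

        ioctl(console_fd, VT_ACTIVATE, previous_vt);
        ioctl(console_fd, VT_WAITACTIVE, previous_vt);
    }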
This hasn't done anything besides return TRUE in a long long time.
Reviewed-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Adam Jackson <ajax@redhat.com>
These are so close to identical that most DDXes implement one in terms
of the other. All the relevant cases can be distinguished by the error
code, so merge the functions together to make things simpler.
Reviewed-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Adam Jackson <ajax@redhat.com>
No supported driver supports 1bpp anymore, nor has any in a very long time.
This option only worked with vgahw anyway.
Signed-off-by: Adam Jackson <ajax@redhat.com>
I don't think this is useful information to have in the log, and it
takes a bunch of autotools and meson logic to produce it.
Signed-off-by: Eric Anholt <eric@anholt.net>
60ec8ead broke the autotools build:
sdksyms.o:(.data+0x58): undefined reference to `InitConnectionLimits'
sdksyms.o:(.data+0x2ec8): undefined reference to `xf86ServerName'
collect2: error: ld returned 1 exit status
Makefile:811: recipe for target 'Xorg' failed
Likewise 3a4d7c79 for InitConnectionLimits.
Signed-off-by: Adam Jackson <ajax@redhat.com>
If it's really this important, we should just do it and not complain. We
never do it, so it must not matter.
Signed-off-by: Adam Jackson <ajax@redhat.com>
I'm sure printing the addresses of function pointers in modules you'd
loaded might have made sense back when we rolled our own dlopen, but we
got better.
Signed-off-by: Adam Jackson <ajax@redhat.com>
The old code would not in fact validate the option value, though it
might complain about it in the log. It also didn't let you set some
legal values that the -maxclients command line option would accept.
Signed-off-by: Adam Jackson <ajax@redhat.com>
DGAShutdown() walks every screen and attempts to reset the mode. That's
maybe a reasonable thing to do, although the explicit loop is certainly
a bad smell.
In ddxGiveUp it's called after we've torn down the vga arbiter (and in
fact most of the rest of screen state), which is... very, very bad. The
other place it's called is from the Control-Alt-BackSpace handler, where
we don't even attempt to do vga arb setup, and where in any case we're
going to escape the main loop eventually anyway.
Move all that cleanup work inside DGACloseScreen. This means it happens
earlier in server teardown than previously, but not in a way you're ever
going to be upset about.
Signed-off-by: Adam Jackson <ajax@redhat.com>