For 1bpp pixmaps, a software fb gets better performance than
a GL surface. The main reason is that an FBO doesn't support
a 1bpp texture as an internal format, so we have to translate
the 1bpp bitmap to an 8-bit alpha format each time, which is
very inefficient. The previous implementation is also not
supported by the latest OpenGL 4.0, as GL_BITMAP was
deprecated.
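The per-use translation described above amounts to expanding each bit of the bitmap into a full alpha byte. A minimal sketch of that expansion (the function name is illustrative, not glamor's actual code, and LSB-first bit order is assumed, as in X LSBFirst bitmaps):

```c
#include <stdint.h>
#include <stddef.h>

/* Expand one 1bpp bitmap row (LSB-first bit order) into an 8-bit
 * alpha row: each set bit becomes 0xff, each clear bit 0x00. */
static void
expand_1bpp_to_a8(const uint8_t *bits, uint8_t *alpha, size_t width)
{
    for (size_t x = 0; x < width; x++)
        alpha[x] = ((bits[x >> 3] >> (x & 7)) & 1) ? 0xff : 0x00;
}
```

Doing this on every upload is the cost the commit is avoiding by falling back to the software fb.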
Signed-off-by: Zhigang Gong <zhigang.gong@linux.intel.com>
Added a new shader, aswizlle_prog, to wire the alpha to 1 when
the image color depth is 24 (xrgb). Then we don't need to fall
back to software composite for an xrgb source/mask in the render
phase, and we no longer wire the alpha bit to 1 there. This gives
about a 2x performance gain on the cairo performance trace's
firefox-planet case.
Signed-off-by: Zhigang Gong <zhigang.gong@linux.intel.com>
Use a PBO if possible when we load a texture into a temporary tex.
The previous direct texture load function was not correct and is
removed in this commit.
Signed-off-by: Zhigang Gong <zhigang.gong@linux.intel.com>
Added comments to glamor_pixmap_create. To be refined in the future.
We need to identify whether a pixmap is a CPU memory pixmap or a
GPU pixmap. The current implementation is not correct. There are
three cases:
1. A too-large pixmap: we direct it to a CPU memory pixmap.
2. A pixmap with w == 0 || h == 0; this case has two possibilities:
2.1 It will become a screen pixmap later, so it should be the
GPU type.
2.2 It's a scratch pixmap or created from shared memory, so it
should belong to CPU memory.
XXX, needs to be refined later.
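The case analysis above can be sketched as a pure helper. The names, the explicit max_size parameter, and the will_be_screen flag are hypothetical; glamor's real code consults the GL texture-size limits and pixmap usage hints rather than taking them as arguments:

```c
#include <stdbool.h>

enum pixmap_type { PIXMAP_CPU_MEMORY, PIXMAP_GPU_TEXTURE };

/* Hypothetical classification following the cases above: a 0x0
 * pixmap is ambiguous (future screen pixmap vs. scratch/SHM), so
 * the caller must say which it will become; oversized pixmaps go
 * to CPU memory; everything else gets a GPU texture. */
static enum pixmap_type
classify_pixmap(int w, int h, int max_size, bool will_be_screen)
{
    if (w == 0 || h == 0)                         /* case 2 */
        return will_be_screen ? PIXMAP_GPU_TEXTURE
                              : PIXMAP_CPU_MEMORY;
    if (w > max_size || h > max_size)             /* case 1 */
        return PIXMAP_CPU_MEMORY;
    return PIXMAP_GPU_TEXTURE;
}
```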
For those pixmaps which have a valid fbo and are opened in
GLAMOR_ACCESS_RO mode, we don't need to upload the texture back
when calling glamor_finish_access(). This gives about a 10%
performance gain.
Change the row length of 1-bit color depth pixmaps to the actual
stride. The previous implementation used the width as the stride,
which is bad: it wastes 8x the space and also brings in some
non-unified code paths. With this commit, we can merge the 1-bit
and other color depths into almost one code path. We will also use
a pixel buffer object as much as possible for performance reasons:
by default, some Mesa hardware drivers fall back to software
rasterization when glReadPixels is used on a non-buffer-object
framebuffer. This change gives about a 4x performance improvement
when we use y-inverted glamor or the driver supports hardware
y-flipped blitting.
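For scale: a packed 1bpp row needs ceil(width/8) bytes, while the old width-as-stride scheme used a full width bytes per row, hence the 8x waste. A hedged sketch of the stride computation (the 4-byte row padding here is an illustrative convention, not necessarily the one glamor uses):

```c
#include <stdint.h>

/* Packed 1bpp stride: round the bit count up to whole bytes,
 * then pad the row to a 4-byte boundary (illustrative padding). */
static uint32_t
stride_1bpp(uint32_t width)
{
    return (((width + 7) / 8) + 3) & ~3u;
}
```

For a 32-pixel-wide row this is 4 bytes versus the 32 bytes the old scheme consumed.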
If a pixmap's size exceeds the limits of the Mesa library, the
rendering will fail, so we switch to a software fb in that case.
Add one new element to the pixmap private structure to indicate
whether we are a software fb type or an OpenGL type.
This commit fixes two bugs hit when a client resets the connection.
The first is that we should reopen the graphics device when the
previous node was closed during screen closing. The second is that
we should call glamor_close_screen (not the ddx version) prior to
calling eglTerminate(), because eglTerminate releases the shared
library while glamor_close_screen may still need to call OpenGL
APIs, which would segfault. Also renamed the ddx functions to avoid
naming conflicts with the glamor functions.
GC is defined both in X11/Xlib.h and in include/gcstruct.h, an
xorg header file. Just use a macro to avoid the conflict for now.
Needs revisiting later to find a correct way to fix this problem.
The coordinate system on EGL differs from that of an FBO object.
To support EGL surfaces well, we add this new feature: when
calling glamor_init from an EGL ddx driver, it should pass the
new flag GLAMOR_INVERTED_Y_AXIS.
Move the original glamor_fini into glamor_close_screen, and wrap
CloseScreen with glamor_close_screen, so that we can do some work
before calling the underlying CloseScreen().
The root cause is that glamor_fini would otherwise be called after
->CloseScreen(), which may trigger a segmentation fault in
glamor_unrealize_glyph_caches() when it calls into FreePicture().
The current glamor implementation depends on the glx library in
the mesa package, which conflicts with the version in xorg, so we
have to --disable-glx when building Xephyr. But that leads to a
linking error here. We comment out the call to
ephyrHijackGLXExtension() for now. Needs revisiting later.
We should include dix-config.h in all the glamor files; otherwise
the XID type may be inconsistent between files on a 64-bit machine.
The root cause is that the macro "#define _XSERVER64 1" must be
visible in every file that refers to the data type "XID", which is
originally defined in X.h. If _XSERVER64 is defined as 1, XID is
defined as CARD32, a 32-bit integer; if _XSERVER64 is not defined,
XID is "unsigned long". On a 32-bit machine, "unsigned long" is
identical to CARD32, but on a 64-bit machine they differ.
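The mismatch is easy to see with stand-in typedefs; uint32_t plays the role of CARD32 here, and on an LP64 machine (typical 64-bit Linux) the two sizes differ by 4 bytes:

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative stand-ins for the two possible XID definitions:
 * with _XSERVER64, XID is CARD32 (a 32-bit integer); without it,
 * XID is unsigned long. */
typedef uint32_t      xid_server64;   /* XID when _XSERVER64 is set */
typedef unsigned long xid_default;    /* XID otherwise */

/* Nonzero on LP64 machines, where unsigned long is 8 bytes:
 * two files disagreeing on _XSERVER64 see XIDs of different size. */
static size_t
xid_size_mismatch(void)
{
    return sizeof(xid_default) - sizeof(xid_server64);
}
```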
Sometimes we want to try a couple of different methods of
acceleration. If one of them says "no" and the other says "yes",
don't spam the log about the "no."
They're stored just like a8, but the values are set to either 0.0 or
1.0. Because they're a8 with only two legal values, we can't use them
as destinations, but nobody's rendering to a1 dests anyway (we hope).
It's not an offset from pixmap coords to composited pixmap coords,
it's an offset from screen-relative window drawable coords to
composited pixmap coords.