Discussion:
remote glx does not work in current
K Venken
2020-06-05 08:59:05 UTC
I have several slackware stations with different versions being 14.1,
14.2, and 'current' (dated 20190917 to be precise).

When I run glxgears locally on any of them everything is running normally.

When I remotely log in to any of the 14.1 or 14.2 machines from any other
machine (current as well), glxgears runs normally. I tried this with
ssh -X -Y and with explicitly setting the display.

When I remotely log in to the current version I get the following error:

'Error: couldn't get an RGB, Double-buffered visual'

I verified with xeyes that the X-display is correctly set.

This is

'any version' to current = broken
'any version' to 14.1, 14.2 = OK

Could it be that X or GLX is broken in current in this case? This is
using OpenGL.

kind regards,

Karel
K. Venken
2020-06-05 14:31:49 UTC
Post by K Venken
I have several slackware stations with different versions being 14.1,
14.2, and 'current' (dated 20190917 to be precise).
When I run glxgears locally on any of them everything is running normally.
When I remotely log in to any of the 14.1 or 14.2 from any other
(current as well) glxgears runs normally. I tried this with ssh -X -Y
and with explicitly setting the display.
'Error: couldn't get an RGB, Double-buffered visual'
I verified with xeyes that the X-display is correctly set.
This is
'any version' to current = broken
'any version' to 14.1, 14.2 = OK
Could it be that X or glX is broken in current this case - this is using
opengl?
The problem got fixed after I reinstalled the nvidia drivers on the
(application) server.

OK, now I am confused. This is how I thought X was working (sorry for
the lousy pictures)

+-----------------+          +-------------------------+
|    local PC     |          |  remote application PC  |
+-----------------+          +-------------------------+
|                 |          |                         |
|                 |          |       Application       |
|                 |          |            |            |
|    X-server     |<-------->|        X-client         |
|                 |          |            |            |
| Display driver  |          |         NVidia          |
+--------||-------+          +-------------------------+
    +----||----+
    |  screen  |
    +----------+
    | keyboard |
    +----------+

Which had made me believe that installing the NVidia drivers on the
application server was not needed. Apparently it is. I am missing
something about this. But anyway, the problem is fixed: glxgears now
also works remotely on 'current'. That is, glxgears runs on the
application server and is displayed on the local PC.
Henrik Carlqvist
2020-06-06 16:18:40 UTC
Post by K. Venken
The problem got fixed after I reinstalled the nvidia drivers on the
(application) server.
OK, now I am confused. This is how I thought X was working (sorry for
the lousy pictures)
+-----------------+          +-------------------------+
|    local PC     |          |  remote application PC  |
+-----------------+          +-------------------------+
|                 |          |                         |
|                 |          |       Application       |
|                 |          |            |            |
|    X-server     |<-------->|        X-client         |
|                 |          |            |            |
| Display driver  |          |         NVidia          |
+--------||-------+          +-------------------------+
    +----||----+
    |  screen  |
    +----------+
    | keyboard |
    +----------+
Which would have made me believe that installing the NVidia drivers on
the application server are not needed. Apparently not. I am missing
something about this.
What is needed on the application server are the dynamic libraries that
your application links against. If you check with "ldd /usr/bin/glxgears"
you will see that the glxgears application links to libGL and libX11,
among other libraries.
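A related check worth doing on the application server is seeing what libGL.so.1 actually resolves to, since the NVidia installer typically replaces it with a symlink to its own library. A minimal sketch, using a throwaway directory so the commands run anywhere; on a real Slackware box you would point readlink at /usr/lib64/libGL.so.1 instead (the version number below is illustrative):

```shell
# Simulate a driver-installed libGL symlink in a temporary directory.
tmp=$(mktemp -d)
touch "$tmp/libGL.so.1.7.0"
ln -s "$tmp/libGL.so.1.7.0" "$tmp/libGL.so.1"
# "readlink -f" follows the symlink chain to the real library file.
target=$(readlink -f "$tmp/libGL.so.1")
echo "$target"
rm -r "$tmp"
```

If the target is an NVidia-named library on a machine whose drivers were only half installed or half removed, that mismatch alone can explain the missing GLX visuals.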

Those libraries will look at your DISPLAY variable, which gets set to
something like localhost:10 when you log in by ssh, instead of localhost:0
when you run X at the console on your local machine.
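The DISPLAY value can be picked apart with plain shell parameter expansion; a small sketch, assuming the usual host:display.screen form (the value below is illustrative; in a live session you would use "$DISPLAY" itself):

```shell
# Under "ssh -X" the remote side typically sees something like
# localhost:10.0; a console session sees :0 or localhost:0.
display_value="localhost:10.0"
host="${display_value%%:*}"   # part before the colon -> localhost
rest="${display_value#*:}"    # part after the colon  -> 10.0
number="${rest%%.*}"          # display number        -> 10
echo "$host $number"
```

A display number of 10 or higher is the usual sign that X11 forwarding over ssh is in effect rather than a local display.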

LibGL is one of the libraries that the evil binary nVidia driver
replaces. The libGL library will give you hardware-accelerated OpenGL
using DRI mechanisms on a local display, but as you now log in on a remote
machine, the libGL library will fall back to software rendering. You will
probably be able to verify the performance difference by looking at the
FPS output of glxgears.
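glxinfo makes the rendering path explicit. On a live session you would run "glxinfo | grep -iE 'direct rendering|renderer string'"; the sketch below greps simulated output instead, since the real strings depend on the driver and display in use (llvmpipe is Mesa's software rasterizer, the kind of renderer you would expect to see after a remote login falls back to software rendering):

```shell
# Simulated glxinfo output for a session using Mesa software rendering.
sample='direct rendering: Yes
OpenGL renderer string: llvmpipe (LLVM 9.0.1, 256 bits)'
# Pull out the line that names the renderer actually doing the drawing.
line=$(printf '%s\n' "$sample" | grep -i 'renderer string')
echo "$line"
```

On a local display with the binary driver in place, the same grep would instead show an NVidia renderer string.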

regards Henrik
K. Venken
2020-06-06 17:46:38 UTC
Post by Henrik Carlqvist
Post by K. Venken
The problem got fixed after I reinstalled the nvidia drivers on the
(application) server.
OK, now I am confused. This is how I thought X was working (sorry for
the lousy pictures)
+-----------------+          +-------------------------+
|    local PC     |          |  remote application PC  |
+-----------------+          +-------------------------+
|                 |          |                         |
|                 |          |       Application       |
|                 |          |            |            |
|    X-server     |<-------->|        X-client         |
|                 |          |            |            |
| Display driver  |          |         NVidia          |
+--------||-------+          +-------------------------+
    +----||----+
    |  screen  |
    +----------+
    | keyboard |
    +----------+
Which would have made me believe that installing the NVidia drivers on
the application server are not needed. Apparently not. I am missing
something about this.
Needed on the application server are dynamic libraries that your
application links to. If you check with "ldd /usr/bin/glxgears" you will
see that the glxgears application links to libGL and libX11 among other
libraries.
This is the output of that:

***@matthua:~$ ldd `which glxgears`
linux-vdso.so.1 (0x00007ffc138b8000)
libGLEW.so.2.1 => /usr/lib64/libGLEW.so.2.1 (0x00001487e6cbc000)
libGLU.so.1 => /usr/lib64/libGLU.so.1 (0x00001487e6c4e000)
libGL.so.1 => /usr/lib64/libGL.so.1 (0x00001487e6bb5000)
libm.so.6 => /lib64/libm.so.6 (0x00001487e6a68000)
libX11.so.6 => /usr/lib64/libX11.so.6 (0x00001487e692a000)
libXext.so.6 => /usr/lib64/libXext.so.6 (0x00001487e6916000)
libc.so.6 => /lib64/libc.so.6 (0x00001487e672f000)
libstdc++.so.6 => /usr/lib64/libstdc++.so.6 (0x00001487e654f000)
libgcc_s.so.1 => /usr/lib64/libgcc_s.so.1 (0x00001487e6535000)
libGLX.so.0 => /usr/lib64/libGLX.so.0 (0x00001487e6502000)
libGLdispatch.so.0 => /usr/lib64/libGLdispatch.so.0 (0x00001487e6446000)
libdl.so.2 => /lib64/libdl.so.2 (0x00001487e6441000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00001487e641d000)
/lib64/ld-linux-x86-64.so.2 (0x00001487e6f97000)
libxcb.so.1 => /usr/lib64/libxcb.so.1 (0x00001487e63f4000)
libXau.so.6 => /usr/lib64/libXau.so.6 (0x00001487e63ef000)
libXdmcp.so.6 => /usr/lib64/libXdmcp.so.6 (0x00001487e63e7000)

All are found...
Post by Henrik Carlqvist
Those libraries will look at your DISPLAY variable which gets set to
something like localhost:10 when you login by ssh instead of localhost:0
when you run X at the console on your local machine.
LibGL is one of the libraries that the evil binary nVidia driver
replaces. The libGL library will give you hardware accelerated OpenGL
using DRI mechanisms on a local display, but as you now login on a remote
machine the libGL library will fall back to software rendering.
Thanks for the clarification, Henrik.

I was expecting that when you log in remotely, it would use software
rendering, as it would have if I hadn't installed the NVidia drivers.
Unfortunately, I didn't note the exact library versions before and after
I installed the NVidia drivers, so I can't check anymore...

And then, I had to install the NVidia drivers to get the remote
connection working. It seems there is some interaction with the NVidia
drivers anyway, even when you log in remotely. I can understand that
something (a driver) has to 'interpret' the OpenGL calls, but to me it
seems that whether software or NVidia does it should not matter, because
only after translation is it carried by the X protocol.
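For testing who does the 'interpreting', Mesa's libGL (not NVidia's replacement) exposes environment variables that pick the path explicitly. The glxgears invocations in the comments are the real-world usage; the runnable line below only demonstrates that a per-command variable reaches the child process, which is the same mechanism by which ssh hands DISPLAY to the remote shell:

```shell
# With Mesa's libGL these force a particular rendering path
# (they have no effect on NVidia's binary libGL):
#   LIBGL_ALWAYS_SOFTWARE=1 glxgears   # force software rasterization
#   LIBGL_ALWAYS_INDIRECT=1 glxgears   # send GLX protocol to the X server
# Runnable demonstration that the per-command variable reaches the child:
out=$(LIBGL_ALWAYS_SOFTWARE=1 sh -c 'echo "forced: ${LIBGL_ALWAYS_SOFTWARE}"')
echo "$out"
```

Comparing glxgears FPS with and without LIBGL_ALWAYS_SOFTWARE=1 on a local display would make the software/hardware difference visible directly.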
Post by Henrik Carlqvist
You will
probably be able to verify the performance difference by looking at the
FPS output of glxgears.
Every detail matters, even replacing 100 Mbps with 1 Gbps, and glxgears
tells you about it. That is why I am using it. But at some point my eyes
don't care anymore ;-)

Having an NVidia card with the proper driver is probably going to make
things (a little) faster even remotely.
Post by Henrik Carlqvist
regards Henrik
Rich
2020-06-06 18:40:10 UTC
Post by K. Venken
Post by K Venken
I have several slackware stations with different versions being 14.1,
14.2, and 'current' (dated 20190917 to be precise).
When I run glxgears locally on any of them everything is running normally.
When I remotely log in to any of the 14.1 or 14.2 from any other
(current as well) glxgears runs normally. I tried this with ssh -X -Y
and with explicitly setting the display.
'Error: couldn't get an RGB, Double-buffered visual'
I verified with xeyes that the X-display is correctly set.
This is
'any version' to current = broken
'any version' to 14.1, 14.2 = OK
Could it be that X or glX is broken in current this case - this is using
opengl?
The problem got fixed after I reinstalled the nvidia drivers on the
(application) server.
OK, now I am confused. This is how I thought X was working (sorry for
the lousy pictures)
+-----------------+          +-------------------------+
|    local PC     |          |  remote application PC  |
+-----------------+          +-------------------------+
|                 |          |                         |
|                 |          |       Application       |
|                 |          |            |            |
|    X-server     |<-------->|        X-client         |
|                 |          |            |            |
| Display driver  |          |         NVidia          |
+--------||-------+          +-------------------------+
    +----||----+
    |  screen  |
    +----------+
    | keyboard |
    +----------+
Almost - you just have the NVidia driver on the wrong side:

+---------------------+       +-------------------------+
|      local PC       |       |  remote application PC  |
+---------------------+       +-------------------------+
|                     |       |                         |
|                     |       |       Application       |
|                     |       |            |            |
|       X-server<---->|<----->|<->X-client libraries    |
|                     |       |                         |
|   Display driver    |       |                         |
|(NVidia/Nouveau/etc.)|       |                         |
+---------||----------+       +-------------------------+
     +----||----+
     |  screen  |
     +----------+
     | keyboard |
     +----------+

And 'keyboard' is not really connected to the display
(NVidia/Nouveau/etc.) driver, but I did not feel like moving that around
just now.
K. Venken
2020-06-06 21:05:59 UTC
Post by Rich
Post by K. Venken
Post by K Venken
I have several slackware stations with different versions being 14.1,
14.2, and 'current' (dated 20190917 to be precise).
When I run glxgears locally on any of them everything is running normally.
When I remotely log in to any of the 14.1 or 14.2 from any other
(current as well) glxgears runs normally. I tried this with ssh -X -Y
and with explicitly setting the display.
'Error: couldn't get an RGB, Double-buffered visual'
I verified with xeyes that the X-display is correctly set.
This is
'any version' to current = broken
'any version' to 14.1, 14.2 = OK
Could it be that X or glX is broken in current this case - this is using
opengl?
The problem got fixed after I reinstalled the nvidia drivers on the
(application) server.
OK, now I am confused. This is how I thought X was working (sorry for
the lousy pictures)
+-----------------+          +-------------------------+
|    local PC     |          |  remote application PC  |
+-----------------+          +-------------------------+
|                 |          |                         |
|                 |          |       Application       |
|                 |          |            |            |
|    X-server     |<-------->|        X-client         |
|                 |          |            |            |
| Display driver  |          |         NVidia          |
+--------||-------+          +-------------------------+
    +----||----+
    |  screen  |
    +----------+
    | keyboard |
    +----------+
Unfortunately, I have to disagree. I haven't mentioned it, but (some of)
the client PCs can be a Windows - PuTTY - Xming combination as well, on
which NVidia does not exist at all. So, sorry for disagreeing, but I had
to add the NVidia driver to the application server, not the local PC. It
sounds weird, I agree.
Post by Rich
+---------------------+       +-------------------------+
|      local PC       |       |  remote application PC  |
+---------------------+       +-------------------------+
|                     |       |                         |
|                     |       |       Application       |
|                     |       |            |            |
|       X-server<---->|<----->|<->X-client libraries    |
|                     |       |                         |
|   Display driver    |       |                         |
|(NVidia/Nouveau/etc.)|       |                         |
+---------||----------+       +-------------------------+
     +----||----+
     |  screen  |
     +----------+
     | keyboard |
     +----------+
And 'keyboard' is not really connected to the display
(NVidia/Nouveau/etc.) driver, but I did not feel like moving that around
just now.
That is correct, the lousy drawing wasn't very accurate; the ASCII art
already went beyond its capabilities. But it was meant to indicate that
the X-server also sends the keystrokes and mouse events to the
application. This is very striking on slow connections with xeyes, where
you see the lag between the mouse and the eyes. I use both (xeyes and
glxgears) to test the setup for usability.

kind regards and thanks a lot for commenting.
