
Run a container (Ubuntu 20.04) with GPU acceleration: the container uses OpenGL renderer llvmpipe (LLVM 15.0.7, 256 bits) instead of NVIDIA GeForce RTX 3060/PCIe/SSE2 #303

Open
Maipengfei opened this issue Dec 2, 2024 · 5 comments

Comments

@Maipengfei

I use
sudo docker run -it -p 6901:6901 -e NVIDIA_VISIBLE_DEVICES=all -e NVIDIA_DRIVER_CAPABILITIES=all --gpus="all" --name nvidia-test -e VNC_PW=123456 kasm-lrae nvidia-smi
to create a container.
I hope the container can use my NVIDIA GPU to render Gazebo, which requires OpenGL. But when I check inside the container with glxinfo -B, it shows:
OpenGL renderer string: llvmpipe (LLVM 15.0.7, 256 bits)
It seems the container is not using the GPU to render OpenGL. The expected output (according to the KasmVNC GPU Acceleration docs) would list the NVIDIA GPU as the OpenGL renderer.
nvidia-smi gives the expected output both on the host and in the container, which suggests the NVIDIA driver is installed correctly.
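A quick way to tell the two cases apart is to grep the renderer string from glxinfo. This is a minimal sketch, assuming glxinfo (from mesa-utils) is installed in the container; the helper function name check_renderer is my own, not from the thread.

```shell
# check_renderer: reads `glxinfo -B` output on stdin and reports whether
# software (llvmpipe) or hardware rendering is in use.
check_renderer() {
  if grep -q 'OpenGL renderer string: .*llvmpipe' -; then
    echo "software rendering (llvmpipe)"
  else
    echo "hardware rendering"
  fi
}

# Inside the container you would run:
#   glxinfo -B | check_renderer
```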

Other information:
Host system: Ubuntu 20.04
The image kasm-lrae is based on kasmweb/ubuntu-focal-desktop:1.16.0-rolling-daily
The content of daemon.json is as follows:
{
  "registry-mirrors": [
    "https://reg-mirror.qiniu.com/"
  ],
  "runtimes": {
    "nvidia": {
      "path": "/usr/bin/nvidia-container-runtime",
      "runtimeArgs": []
    }
  },
  "default-runtime": "nvidia"
}
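A malformed /etc/docker/daemon.json can prevent the Docker daemon from restarting at all, so it is worth validating the file before reloading. A minimal sketch, assuming python3 is available on the host (the helper name validate_daemon_json is mine):

```shell
# Validate a daemon.json file before restarting Docker; a JSON syntax
# error here can keep the daemon from starting.
validate_daemon_json() {
  if python3 -m json.tool "$1" > /dev/null 2>&1; then
    echo "valid JSON"
  else
    echo "invalid JSON"
  fi
}

# Typical use on the host:
#   validate_daemon_json /etc/docker/daemon.json && sudo systemctl restart docker
```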

@codernew007

Same here! Looking forward to a solution.

@hochbit

hochbit commented Jan 15, 2025

I do not have the DRI interface available, so I use this script to get a shell with render offloading enabled.

#!/usr/bin/bash

env __NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia __VK_LAYER_NV_optimus=NVIDIA_only VK_DRIVER_FILES=/usr/share/vulkan/icd.d/nvidia_icd.json LIBVA_DRIVER_NAME=nvidia bash

You can create the json file like this:

cat /usr/share/vulkan/icd.d/virtio_icd.x86_64.json | jq '.ICD.library_path="/usr/lib/x86_64-linux-gnu/libGLX_nvidia.so.0"' > /usr/share/vulkan/icd.d/nvidia_icd.json

The VK_ variables and the ICD file are only required if you need Vulkan.

Works for me for OpenSCAD, Blender, and Minetest, except that with the last one I have some other nasty mouse issues (#310).

@ehfd

ehfd commented Feb 16, 2025

@Maipengfei @codernew007

I suggest combining this with VirtualGL to make things work.
Check https://github.com/selkies-project/docker-nvidia-egl-desktop/blob/91ba69533d707cf21933cf30097491315d7805cf/Dockerfile#L296 and https://github.com/selkies-project/docker-nvidia-egl-desktop/blob/91ba69533d707cf21933cf30097491315d7805cf/entrypoint.sh#L98.

You may directly use my container implementation, which has an option to use KasmVNC as a flag, if you wish.
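For a concrete idea of what the VirtualGL route looks like, the session command can be wrapped with vglrun. This is a hedged sketch, not the linked implementation: the with_vgl helper name is mine, and "-d egl" (asking VirtualGL to render on an EGL device rather than a second X server, as in the linked Dockerfile) assumes VirtualGL 3.x; adjust the device if you have several GPUs.

```shell
# with_vgl: run a command under VirtualGL if vglrun is installed,
# otherwise fall back to plain (unaccelerated) execution.
with_vgl() {
  if command -v vglrun > /dev/null 2>&1; then
    vglrun -d egl "$@"
  else
    "$@"   # VirtualGL not installed; run without GPU acceleration
  fi
}

# Example inside the container: with_vgl gazebo
```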

@ehfd

ehfd commented Feb 16, 2025

(Quoting @hochbit's offloading workaround above.)

@hochbit Does this work with actual X11 windows, or only with offscreen workloads?

@ehfd

ehfd commented Feb 16, 2025

Relevant: TigerVNC/tigervnc#1773
