I already have a setup of Ollama and OpenWebUI in my Ubuntu WSL instance. Because of limitations of my corporate notebook, enabling CUDA support in Docker is not possible, so I need Ollama to run directly in WSL to use my eGPU.
Is there a way to customize the setup in the folder https://github.com/coleam00/ai-agents-masterclass/tree/main/local-ai-packaged so I can reuse the existing Ollama and OpenWebUI installations?
Yes, it's actually quite easy - just delete the corresponding section from the Docker Compose file (in my case the Ollama section). You only have to make sure that the ports specified in the compose file match your local installation of OpenWebUI.
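For illustration, here is a minimal sketch of what that change could look like. The service names, image tags, and ports below are assumptions, not the exact contents of the repo's docker-compose.yml; the idea is simply to drop the bundled Ollama/OpenWebUI services and let the remaining containers reach the instances already running in WSL via host.docker.internal:

```yaml
# Hypothetical docker-compose.yml excerpt - service names, images, and ports are illustrative only.
services:
  # The bundled Ollama and OpenWebUI services are removed (or commented out),
  # because Ollama already runs directly in WSL with eGPU access:
  #
  # ollama:
  #   image: ollama/ollama
  #   ports:
  #     - "11434:11434"
  #
  # open-webui:
  #   image: ghcr.io/open-webui/open-webui:main
  #   ports:
  #     - "3000:8080"

  n8n:
    image: n8nio/n8n
    ports:
      - "5678:5678"
    extra_hosts:
      # Makes the WSL host reachable from inside the container, so anything
      # configured with http://host.docker.internal:11434 reaches the local Ollama.
      - "host.docker.internal:host-gateway"
```

Inside the services you keep, you would then point the Ollama base URL at http://host.docker.internal:11434 instead of the removed ollama container, and make sure the ports left in the compose file don't collide with the OpenWebUI instance already running in WSL.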