Deployment of local MultiLoRA model using TGI #2564
Comments
Any update on this?
Still facing the issue.
I think you should add the issue to the https://github.com/huggingface/text-generation-inference repo.
Sure, I will add the issue.
Yup, I'm running into an issue too when using a custom model based on https://huggingface.co/Qwen/Qwen2.5-14B-Instruct-GPTQ-Int8.
So right now the only option is to upload the adapters to a Hugging Face repo and use the respective model-id to deploy the model... right?
Yupp |
Hi Team,
Was trying to deploy a multi-lora adapter model with Starcoder2-3B as base.
Referring to the below blog:
https://huggingface.co/blog/multi-lora-serving
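The blog's flow is essentially a single docker run that passes the base model plus a comma-separated list of adapters via --lora-adapters. A minimal sketch of that launch, assuming Hub-hosted adapters (the adapter IDs and image tag here are placeholders, not our actual values):

```bash
# Base model for our case; adapter IDs below are placeholders for illustration
model=bigcode/starcoder2-3b
volume=$PWD/data   # shared volume so downloaded weights are cached between runs

docker run --gpus all --shm-size 1g -p 8080:80 \
    -v $volume:/data \
    ghcr.io/huggingface/text-generation-inference:2.1.1 \
    --model-id $model \
    --lora-adapters=my-org/adapter-a,my-org/adapter-b
```

A specific adapter is then selected per request by passing its ID as adapter_id in the generation parameters, as described in the blog.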
Please correct me if my understanding is wrong, but it seems the Starcoder2 model is not supported for multi-LoRA deployment using TGI. We are getting the error below while deploying.
Also, can you suggest how we can deploy a local model and adapters saved in a local directory using TGI?
Every time I run the docker command below, it downloads the files from HF.
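For reference, a sketch of the fully local launch we are aiming for, assuming the base model and adapter weights are already saved under ./models and that the TGI release in use accepts the name=/path form for --lora-adapters (the directory layout and adapter name are hypothetical; check the launcher docs for your version):

```bash
# Hypothetical local layout: ./models/starcoder2-3b and ./models/adapters/my-adapter
volume=$PWD/models

docker run --gpus all --shm-size 1g -p 8080:80 \
    -v $volume:/data \
    -e HF_HUB_OFFLINE=1 \
    ghcr.io/huggingface/text-generation-inference:2.1.1 \
    --model-id /data/starcoder2-3b \
    --lora-adapters=my-adapter=/data/adapters/my-adapter   # name=/path form is an assumption; newer TGI releases only
```

The intent is that --model-id points at the mounted path inside the container, and HF_HUB_OFFLINE=1 keeps the hub client from trying to download files that are already on disk.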
Please let me know if any additional information is required.
Thanks,
Ashwin.