CogVLM 17B number of params. #373
Unanswered
aldoz-mila asked this question in Q&A
Replies: 1 comment
My question is also related to this thread: #356.
Hi, I just want to make sure I have the count right. The Vicuna 1.5 base here is 7B, and the vision encoder EVA02-CLIP-E is 5B. Is the 17B the total number of trainable params? I assume it is the 7B of the base Vicuna, plus the extra attention and FFN layers added for the deep fusion inside the LLM, plus the MLP projector between the vision encoder and the LLM. Does the (frozen) vision encoder count towards the total number of params of CogVLM? Thanks!
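A back-of-the-envelope tally of the arithmetic behind this question might look like the sketch below. Only the 7B (Vicuna 1.5) and 5B (EVA02-CLIP-E) figures come from the question itself; the sizes for the visual-expert layers and the MLP projector are assumptions chosen purely so the totals close, not official numbers.

```python
# Hedged parameter-count tally for CogVLM-17B, in billions of params.
# 7.0 and 5.0 are taken from the question above; the other two entries
# are ASSUMED for illustration only.
components_b = {
    "vicuna_1.5_base": 7.0,       # LLM backbone (from the question)
    "eva02_clip_e": 5.0,          # frozen vision encoder (from the question)
    "visual_expert_layers": 4.9,  # assumed: extra attention/FFN copies in the LLM
    "mlp_projector": 0.1,         # assumed: vision-to-LLM adapter
}

total_b = sum(components_b.values())
without_encoder_b = total_b - components_b["eva02_clip_e"]

print(f"total incl. frozen encoder: {total_b:.1f}B")
print(f"total excl. frozen encoder: {without_encoder_b:.1f}B")
```

The 5B gap between the two printed totals is exactly the ambiguity the question raises: whether the headline 17B counts the frozen vision encoder or only the language-side parameters.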