Hi,
Thank you for providing such detailed instructions on how to use this model; they have been extremely useful to me. I am relatively new to transformers. I cloned the model from GitHub and ran it successfully on my own dataset. However, when I try to load the model from HuggingFace as follows, the code fails with an error:
config = AutoConfig.from_pretrained("HUBioDataLab/SELFormer", num_labels=num_labels)
model = AutoModelForMaskedLM.from_pretrained("HUBioDataLab/SELFormer", config=config)
tokenizer = AutoTokenizer.from_pretrained("HUBioDataLab/SELFormer", do_lower_case=False)
The error says ValueError: Expected input batch_size (2048) to match target batch_size (16), and 2048 is 16 * 128, where 128 is the max_length. I have seen similar errors reported before, and they are usually associated with how the loss function is computed. Could you please help me figure out what I am doing wrong here?
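To illustrate what I think is happening, here is a minimal sketch of the shape mismatch (the batch size and max_length come from my error message; vocab_size is just a placeholder I made up):

import torch
import torch.nn as nn

# Shapes taken from the error message; vocab_size is a placeholder.
batch_size, max_length, vocab_size = 16, 128, 800

logits = torch.randn(batch_size, max_length, vocab_size)  # per-token MLM logits
labels = torch.randint(0, 2, (batch_size,))               # one label per sequence

# The MLM loss flattens logits to (16 * 128, vocab_size) = (2048, vocab_size),
# while the labels flatten to (16,), producing the batch_size mismatch.
loss = nn.CrossEntropyLoss()(logits.view(-1, vocab_size), labels.view(-1))  # raises ValueError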
Thanks,
Souvik
Thanks for your interest in our model, and sorry for the late reply. Could you give me a bit more detail about your setup so I can reproduce the error? Specifically:
What does the input dataset look like (shape, num_labels, tokenization steps, max_length, batch size used during training/inference)?
Could you provide a minimal, reproducible code snippet that triggers the error?
Are you using a custom loss function, or are you relying on the default one in AutoModelForMaskedLM?
That would help me understand your problem further.
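In the meantime, one common cause of this exact mismatch is running a sequence-level classification task through the MLM head: its default loss flattens the per-token logits (16 * 128 = 2048 rows) and compares them against per-sequence labels (16 rows). If that matches your setup, loading a classification head instead may be what you need; a sketch, assuming a classification task (num_labels is a placeholder for your number of classes):

from transformers import AutoConfig, AutoModelForSequenceClassification, AutoTokenizer

num_labels = 2  # placeholder; set this to the number of classes in your dataset

config = AutoConfig.from_pretrained("HUBioDataLab/SELFormer", num_labels=num_labels)
model = AutoModelForSequenceClassification.from_pretrained("HUBioDataLab/SELFormer", config=config)
tokenizer = AutoTokenizer.from_pretrained("HUBioDataLab/SELFormer", do_lower_case=False)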