How much time does it take to train 50 epochs with a single 3090 GPU on the H36M dataset? #1
I trained on a different dataset with a different GPU; maybe this is useful information for you. CPU: 13600KF
I tried to print the length of the data, and the output is as follows. Is that right? print(len(dataset_train)): 182327
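As a rough sanity check on whether an epoch is "stuck" or just slow, you can estimate iterations per epoch and total training time from the dataset length printed above. This is a minimal sketch; the batch size and seconds-per-iteration below are hypothetical placeholders, not values from the AuxFormer repo, so substitute your own.

```python
def estimate_training_time(num_samples, batch_size, sec_per_iter, epochs):
    """Estimate iterations per epoch and total training hours."""
    iters_per_epoch = -(-num_samples // batch_size)  # ceiling division
    total_hours = iters_per_epoch * sec_per_iter * epochs / 3600.0
    return iters_per_epoch, total_hours

# 182327 samples (as printed above); batch size 256 and 2.0 s/iter are
# hypothetical values for illustration only.
iters, hours = estimate_training_time(182327, 256, 2.0, 50)
print(iters)             # 713 iterations per epoch
print(round(hours, 1))   # ~19.8 hours for 50 epochs
```

With these placeholder numbers, 50 epochs would take roughly a day, which is in the same ballpark as the times reported in this thread.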
I do not use the Human3.6M dataset. The time cost seems normal, because there is much more training data and GPU performance differs.
Oh! Which dataset did you use?
I am using a dataset I made myself, recorded with an Azure Kinect, for special application scenes.
Oh, I was wondering how much time you spent training your AuxFormer model on H3.6M.
Yes, that's right.
I found that it takes a lot of time to train one epoch; I thought it was stuck.

I found that it takes a day or two to finish training, and I wondered if that was normal. Thanks!