Has anyone faced the issue of the training process slowing down? For example, training a single DQN-CTS worker on Montezuma's Revenge runs at about 220 iter/sec after 100,000 steps but only 35 iter/sec after 400,000 steps. Any thoughts? Thank you.
Hi @ionelhosu, I think that when it's running at 220 iter/sec, training hasn't actually started yet; the worker is just filling the replay buffer until it reaches some minimum size. That explains the slowdown, but it is still surprising just how slow the training updates are. I'd like to do some TensorFlow profiling to find the bottleneck here, but if you find anything interesting on your own, please let me know.
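For what it's worth, here is a minimal sketch (not this repo's actual code; `REPLAY_START_SIZE`, `ToyEnv`, and `ToyAgent` are made-up placeholders) of the pattern that produces this kind of iter/sec profile: before the replay buffer reaches its minimum size, each iteration is just an environment step plus a buffer append; after that, every iteration also pays for a minibatch gradient update.

```python
import random
from collections import deque

REPLAY_START_SIZE = 50_000   # hypothetical warm-up threshold, not the repo's value
BATCH_SIZE = 32

class ToyEnv:
    """Stand-in environment so the sketch runs on its own."""
    def step(self, action):
        return random.random(), 0.0, False  # next_state, reward, done

class ToyAgent:
    """Stand-in agent; `update` is where the expensive forward/backward pass would go."""
    def act(self, state):
        return 0
    def update(self, batch):
        pass  # in the real worker this would be the TensorFlow train op

replay_buffer = deque(maxlen=1_000_000)

def run(num_steps):
    env, agent, state = ToyEnv(), ToyAgent(), 0.0
    for _ in range(num_steps):
        action = agent.act(state)
        next_state, reward, done = env.step(action)
        replay_buffer.append((state, action, reward, next_state, done))
        state = next_state
        if len(replay_buffer) < REPLAY_START_SIZE:
            continue  # warm-up phase: env step + append only, hence the high iter/sec early on
        agent.update(random.sample(replay_buffer, BATCH_SIZE))  # training phase: much slower per iteration
```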