
Training slowing down dramatically #21

Open
ionelhosu opened this issue Oct 30, 2017 · 1 comment

Comments

@ionelhosu

Has anyone run into the training process slowing down? For example, training one DQN-CTS worker on Montezuma's Revenge runs at about 220 iter/sec after 100,000 steps, but only about 35 iter/sec after 400,000 steps. Any thoughts? Thank you.

@steveKapturowski (Owner)

Hi @ionelhosu, I think when it's running at 220 iter/sec the training hasn't actually started yet; it's just filling the replay buffer until it reaches some minimum size. That explains the slowdown, but it's still surprising just how slow the training updates are. I'd like to do some TensorFlow profiling to spot the bottleneck here, but if you find anything interesting on your own, please let me know.
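
A minimal sketch of the kind of TensorFlow timeline profiling I mean, in case it's useful (this assumes the TF 1.x session API; the toy matmul graph is just a stand-in for the actual training op in this repo):

```python
import tensorflow as tf
from tensorflow.python.client import timeline

# Stand-in graph; replace `y` with the real training op.
x = tf.random_normal([512, 512])
y = tf.matmul(x, x)

with tf.Session() as sess:
    run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
    run_metadata = tf.RunMetadata()

    # Run one step with full tracing enabled.
    sess.run(y, options=run_options, run_metadata=run_metadata)

    # Write a Chrome trace; open timeline.json at chrome://tracing
    # to see which ops dominate the step time.
    trace = timeline.Timeline(run_metadata.step_stats)
    with open('timeline.json', 'w') as f:
        f.write(trace.generate_chrome_trace_format())
```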
