Hi, thanks for your great work!
I have recently been reading your paper, and I noticed that some configs differ between the SoCo paper and the SoCo code.
For example, Table 1 of the SoCo paper states that lr_base=1.0, wd=1e-5, and the LARS optimizer are used, while the SoCo code uses lr_base=0.03, wd=1e-4, and SGD. If I want to use the LARS optimizer, do I need to decrease the weight_decay? Thanks in advance!
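For context, here is my understanding of a single LARS update as a minimal sketch (illustrative only, based on the LARS paper; this is not the SoCo implementation, and the hyperparameter values are just the Table 1 settings). It shows that weight decay enters both the effective gradient and the layer-wise trust ratio, which is why I suspect the wd value may need to change along with the optimizer:

```python
# Minimal sketch of one LARS update (You et al., 2017).
# Illustrative only -- not the SoCo implementation; `eta` (the trust
# coefficient) and the hyperparameter defaults are assumptions taken
# from the paper's Table 1 settings for discussion purposes.
import torch

@torch.no_grad()
def lars_step(params, base_lr=1.0, weight_decay=1e-5, eta=0.001):
    for p in params:
        if p.grad is None:
            continue
        g = p.grad + weight_decay * p  # L2 term folded into the gradient
        w_norm, g_norm = torch.norm(p), torch.norm(g)
        # Layer-wise trust ratio scales the step to the weight norm,
        # so weight_decay also changes the denominator of the ratio.
        trust = (eta * w_norm / g_norm).item() if w_norm > 0 and g_norm > 0 else 1.0
        p.add_(g, alpha=-base_lr * trust)
```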
In Table 1, BYOL pretrained for 300 epochs achieves 40.4 AP^bbox. May I also ask whether the BYOL result in Table 1 was reproduced by you, and could you please give me some hints on the settings (e.g. batch size, lr, wd) for reproducing the BYOL results on COCO? (I'm currently working on a project and really struggling to reproduce BYOL.)
Looking forward to your answers, and to your many excellent works in the future!
I'm not one of the authors, but I also noticed this. However, if you look at the details of the experiments (e.g. in tools/SoCo_FPN_100ep.sh), you can see that the default argparse values are overridden to be consistent with the paper: the base learning rate is set to 1.0, the optimizer is LARS, and so on. I've also double-checked the logs from my own run, and they agree with the paper.
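To make the mechanism concrete, here is a minimal sketch of that override pattern. The flag names and defaults below are illustrative assumptions, not copied from the SoCo repo; only the effective values (lr_base=1.0, wd=1e-5, LARS) come from the paper:

```python
# Illustrative sketch only: flag names and defaults are assumptions,
# not the SoCo repo's exact CLI. It shows how a launch script's
# arguments override argparse defaults, so the effective config matches
# the paper rather than the defaults visible in the code.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--base-lr', type=float, default=0.03)       # default seen in the code
parser.add_argument('--weight-decay', type=float, default=1e-4)  # default seen in the code
parser.add_argument('--optimizer', default='sgd', choices=['sgd', 'lars'])

# A launch script like tools/SoCo_FPN_100ep.sh effectively supplies:
args = parser.parse_args(['--base-lr', '1.0', '--weight-decay', '1e-5', '--optimizer', 'lars'])
print(args.base_lr, args.weight_decay, args.optimizer)  # -> 1.0 1e-05 lars
```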
Thanks @linusericsson for your great comments.
Indeed, there are lots of parameters; the logs should help you reproduce the results reported in the paper.
The BYOL results in our paper are taken from PixPro.
We also struggled to reproduce BYOL.
To reproduce the results of some classic works, including BYOL, please refer to the PyContrast repo.
Thanks for your interest.