
Backend PyTorch: Add L1 regularizers #1905

Merged Mar 19, 2025 (6 commits)

Conversation

@vl-dud (Contributor) commented Dec 3, 2024

Continuation of PR #1884.

@lululxvi (Owner) commented Dec 8, 2024

We are unifying the regularization for TensorFlow and Paddle, see #1894. Do you think we can also unify the PyTorch regularization in the same way?

@vl-dud (Contributor, Author) commented Dec 8, 2024

Unfortunately, this can be a bit difficult in PyTorch. The implementation options seem unnecessarily complicated to me.
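
For context: TensorFlow and Paddle attach regularizers at the layer level, whereas `torch.optim` only exposes L2-style decay through the `weight_decay` argument; an L1 term has to be folded into the loss by hand in every training step. A minimal sketch of that pattern (names and values are illustrative, not this PR's actual code):

```python
import torch
import torch.nn.functional as F

# Hypothetical illustration (not this PR's code). torch.optim covers L2
# regularization natively via `weight_decay`, but there is no
# optimizer-level hook for L1, so the penalty is added to the loss manually.
model = torch.nn.Linear(2, 1)
optimizer = torch.optim.Adam(model.parameters())  # weight_decay=... would give L2

x, y = torch.randn(8, 2), torch.randn(8, 1)
l1_weight = 1e-4  # hypothetical regularization strength

optimizer.zero_grad()
loss = F.mse_loss(model(x), y)
# Manual L1 penalty over all trainable parameters.
loss = loss + l1_weight * sum(p.abs().sum() for p in model.parameters())
loss.backward()
optimizer.step()
```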

@vl-dud (Contributor, Author) commented Dec 19, 2024

I didn't notice earlier that you implemented the NysNewtonCG optimizer. Should I add L1 regularization in train_step_nncg?

@lululxvi (Owner) commented

> I didn't notice earlier that you implemented the NysNewtonCG optimizer. Should I add L1 regularization in train_step_nncg?

Not this PR.

@lululxvi (Owner) commented

Sorry, it has been a while since this PR was opened. Could you remind me of its purpose?

@vl-dud (Contributor, Author) commented Jan 20, 2025

> Sorry, it has been a while since this PR was opened. Could you remind me of its purpose?

This PR is a PyTorch implementation of L1 regularization.
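
In outline (a hedged sketch, not the merged diff; the helper name and the spec format are assumptions, not DeepXDE's actual API), the penalty is a sum over the network's trainable parameters, including the combined L1+L2 variant that the PR's original title mentioned:

```python
import torch

def regularization_penalty(params, reg):
    """Hypothetical helper sketching the idea behind this PR. `reg` is a
    spec such as ("l1", 1e-4) or ("l1l2", 1e-4, 1e-4); the names and
    format here are illustrative only."""
    params = list(params)
    kind, *weights = reg
    l1 = sum(p.abs().sum() for p in params)
    l2 = sum(p.pow(2).sum() for p in params)
    if kind == "l1":
        return weights[0] * l1
    if kind == "l2":
        return weights[0] * l2
    if kind == "l1l2":  # combined L1 + L2 (elastic-net style)
        return weights[0] * l1 + weights[1] * l2
    raise ValueError(f"Unknown regularization: {kind}")

# Usage: the penalty is added to the data/PDE loss before backward().
net = torch.nn.Linear(3, 1)
penalty = regularization_penalty(net.parameters(), ("l1", 1e-4))
```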

@lululxvi merged commit bd43c6c into lululxvi:master on Mar 19, 2025. 13 checks passed.

@lululxvi changed the title from "Backend PyTorch: Add L1 and L1+L2 regularizers" to "Backend PyTorch: Add L1 regularizers" on Mar 19, 2025.