
Share tensors between comp replay and comms replay #194

Closed

Conversation

shengfukevin
Contributor

Summary:
Share tensors between comp replay and comms replay

When running a full replay in et_replay, compute replay and comms replay manage tensor allocation separately, so some tensors are allocated twice; this causes the full replay of Llama4 70B to run out of memory.

This diff fixes that by allocating tensors in comp replay and passing them to comms replay.

Reviewed By: sanrise

Differential Revision: D67353163
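A minimal sketch of the idea, with illustrative names rather than the actual et_replay API: comp replay allocates each tensor once and records it in a shared registry, and comms replay looks tensors up in that registry instead of allocating its own copies.

```python
import torch

# Hypothetical sketch (names are illustrative, not the real et_replay API):
# comp replay owns tensor allocation and exposes a registry keyed by a
# trace tensor id; comms replay reuses those tensors instead of allocating
# duplicates.

class SharedTensorRegistry:
    """Maps a replay-trace tensor id to the tensor allocated by comp replay."""

    def __init__(self):
        self._tensors = {}  # tensor id -> torch.Tensor

    def register(self, tensor_id, tensor):
        self._tensors[tensor_id] = tensor

    def get_or_allocate(self, tensor_id, shape, dtype, device):
        # Reuse the tensor comp replay already allocated; only allocate a new
        # buffer if this id was never produced by comp replay.
        if tensor_id not in self._tensors:
            self._tensors[tensor_id] = torch.empty(shape, dtype=dtype, device=device)
        return self._tensors[tensor_id]


# Comp replay allocates and registers its tensors once...
registry = SharedTensorRegistry()
grad_bucket = torch.randn(1024, 1024, device="cpu")
registry.register(tensor_id=42, tensor=grad_bucket)

# ...and comms replay receives the same registry, so replaying a collective
# on tensor 42 reuses grad_bucket rather than a second buffer.
comm_input = registry.get_or_allocate(42, (1024, 1024), torch.float32, "cpu")
assert comm_input.data_ptr() == grad_bucket.data_ptr()
```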

@facebook-github-bot added the CLA Signed label (this label is managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed) on Jan 3, 2025
@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D67353163

@facebook-github-bot
Contributor

This pull request has been merged in c5f8d06.

Labels: CLA Signed, fb-exported, Merged