
Add resnet and llama to benchmark models #1386

Open · wants to merge 5 commits into main from vcanic/bringup_benchmark_llama
Conversation

vcanicTT (Contributor) commented Mar 6, 2025

We want to track the end-to-end performance of different models, so we have added ResNet HF and Llama Prefill to the benchmark models. This PR also includes a few changes to existing models so that all models are written in the same manner and upload successfully.

codecov-commenter commented Mar 6, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 43.40%. Comparing base (5236bc7) to head (c505238).
Report is 3 commits behind head on main.

Additional details and impacted files
```
@@           Coverage Diff           @@
##             main    #1386   +/-   ##
=======================================
  Coverage   43.40%   43.40%
=======================================
  Files          48       48
  Lines        7860     7860
=======================================
  Hits         3412     3412
  Misses       4448     4448
```


vcanicTT (Contributor, Author) commented Mar 6, 2025

@dgolubovicTT @pilkicTT @vmilosevic Can you take a look?

github-actions bot commented Mar 6, 2025

| Tests | Passed ✅ | Skipped ⚠️ | Failed ❌ |
| --- | --- | --- | --- |
| TT-Forge-FE: 634 ran | 492 passed | 142 skipped | 0 failed |

No test annotations available

1 similar comment

github-actions bot commented Mar 6, 2025

| Tests | Passed ✅ | Skipped ⚠️ | Failed ❌ |
| --- | --- | --- | --- |
| TT-Forge-FE: 693 ran | 556 passed | 137 skipped | 0 failed |

No test annotations available

1 similar comment

pilkicTT (Contributor) left a comment

@vcanicTT I see that you have another PR open (for resnet)... should that one be closed?

```diff
@@ -8,4 +8,10 @@

 # MNIST Linear
-python forge/test/benchmark/benchmark.py -m mnist_linear -bs 1 -lp 32 -o forge-benchmark-e2e-mnist.json
+PYTHONPATH=./forge python forge/test/benchmark/benchmark.py -m mnist_linear -bs 1 -lp 32 -o forge-benchmark-e2e-mnist.json
```
Contributor:
Why is the PYTHONPATH set? We should avoid this...

vcanicTT (Contributor, Author) commented Mar 7, 2025

I had problems with Python packages and imports, and this was the simplest way to solve them. When I run the tests via the benchmark script, it doesn't see the tests module. Do you have any suggestions on how to resolve this in a better way?
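One way to avoid exporting PYTHONPATH on every invocation is to have the entry point fix `sys.path` itself. This is a sketch, not the PR's fix: the helper name and the `levels_up` default are assumptions; only the `forge/test/benchmark/benchmark.py` layout comes from the diff in this PR.

```python
# Sketch: let the benchmark entry point prepend the package root to
# sys.path instead of requiring PYTHONPATH=./forge externally. The
# directory layout (forge/test/benchmark/benchmark.py) is taken from
# the diff in this PR; the helper itself is hypothetical.
import os
import sys


def add_package_root_to_path(script_file: str, levels_up: int = 3) -> str:
    """Prepend the directory `levels_up` levels above `script_file` to sys.path."""
    root = os.path.abspath(script_file)
    for _ in range(levels_up):
        root = os.path.dirname(root)
    if root not in sys.path:
        sys.path.insert(0, root)
    return root


if __name__ == "__main__":
    # From forge/test/benchmark/benchmark.py, three levels up is ./forge,
    # which is exactly what PYTHONPATH=./forge was adding.
    print(add_package_root_to_path(__file__))
```

An editable install (`pip install -e .`) would also make the module importable without any path tweaking, if the project ships packaging metadata; treat both as options rather than the PR's chosen approach.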

```diff
@@ -82,7 +82,9 @@ jobs:
       shell: bash
       run: |
         source env/activate
-        python forge/test/benchmark/benchmark.py -m mnist_linear -bs 1 -lp 32 -o ${{ steps.strings.outputs.perf_report_path }}
+        PYTHONPATH=./forge python forge/test/benchmark/benchmark.py -m mnist_linear -bs 1 -lp 32 -o ${{ steps.strings.outputs.perf_report_path }}
```
Contributor:

It would be good to run all the models with a higher batch size.

vcanicTT (Contributor, Author):

The tests can be run with batch sizes larger than one. For now, we keep it at one to make sure the models work correctly; we can increase it later, once we start tracking results and are confident they are recorded accurately.
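When the batch size is eventually raised, a sweep could be scripted around the existing CLI. Only the `-m`, `-bs`, `-lp`, and `-o` flags appear in the commands in this PR; the model names, batch sizes, and output-file naming below are illustrative assumptions.

```python
# Hypothetical batch-size sweep built on the benchmark CLI used in this PR.
# Only the flags (-m, -bs, -lp, -o) come from the PR; the models and batch
# sizes here are examples.
def benchmark_commands(models, batch_sizes, loop=32):
    """Build one benchmark command line per (model, batch size) pair."""
    return [
        f"python forge/test/benchmark/benchmark.py -m {m} -bs {bs} "
        f"-lp {loop} -o forge-benchmark-e2e-{m}-bs{bs}.json"
        for m in models
        for bs in batch_sizes
    ]


if __name__ == "__main__":
    for cmd in benchmark_commands(["mnist_linear"], [1, 8, 32]):
        print(cmd)
```

Writing each run's report to a distinct `-bsN` file keeps results from different batch sizes separable when they are uploaded.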

vcanicTT (Contributor, Author) commented Mar 7, 2025

> @vcanicTT I see that you have another PR open (for resnet)... should that one be closed?

Yes, I sent a PR only for ResNet. Then, I rebased on the ResNet branch and continued working on LLaMA. If we do everything in this PR, I will close the first one.

vcanicTT force-pushed the vcanic/bringup_benchmark_llama branch from 24b220f to c505238 on March 7, 2025 at 12:52
```diff
@@ -44,7 +44,10 @@ jobs:
       run: |
         echo "work-dir=$(pwd)" >> "$GITHUB_OUTPUT"
         echo "build-output-dir=$(pwd)/build" >> "$GITHUB_OUTPUT"
+        echo "perf_report_path=forge-benchmark-e2e-mnist_$JOB_ID.json" >> "$GITHUB_OUTPUT"
```
vcanicTT (Contributor, Author):
@vmilosevic Can you look at this part of the file and tell me whether you're OK with this change? A model expects a .json file as output, so I made a separate path for each of the three models, plus one path for the folder from which we upload the files, i.e. the benchmark reports.
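The "one report path per model, all inside a single upload folder" idea can be sketched as below. The helper, the model list, and the folder name are assumptions for illustration; only the `forge-benchmark-e2e-mnist_$JOB_ID.json` naming pattern comes from the workflow diff above.

```python
# Sketch of per-model report paths keyed by job id, following the
# forge-benchmark-e2e-mnist_$JOB_ID.json pattern from the workflow diff.
# The helper, model names, and report_dir are illustrative, not PR code.
import os


def perf_report_paths(job_id, models, report_dir="benchmark_reports"):
    """Map each model to its own .json report inside one upload folder."""
    return {
        m: os.path.join(report_dir, f"forge-benchmark-e2e-{m}_{job_id}.json")
        for m in models
    }
```

Keeping every report under one folder means the upload step only needs that single directory path, while each model still gets a distinct .json file.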

github-actions bot commented Mar 7, 2025

| Tests | Passed ✅ | Skipped ⚠️ | Failed ❌ |
| --- | --- | --- | --- |
| TT-Forge-FE: 634 ran | 493 passed | 141 skipped | 0 failed |

No test annotations available

github-actions bot commented Mar 7, 2025

| Tests | Passed ✅ | Skipped ⚠️ | Failed ❌ |
| --- | --- | --- | --- |
| TT-Forge-FE: 693 ran | 557 passed | 136 skipped | 0 failed |

No test annotations available

