Add isolate mode
Summary: When benchmarking across multiple operators, we can optionally isolate each operator run in a child process.

Reviewed By: FindHao

Differential Revision: D65154665

fbshipit-source-id: 9c9a21a76897084b061374cb3f7d8524a4aaac9b
xuzhao9 authored and facebook-github-bot committed Nov 1, 2024
1 parent a66ce04 commit cc094df
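
The child-process isolation described in the Summary is not visible in this diff, which only flips use_cuda_graphs on two operators. As a rough sketch, per-operator isolation could look like the following; run_operator_benchmark and the surrounding driver are illustrative assumptions, not the actual TritonBench API:

import multiprocessing as mp
from typing import Dict, List, Optional


def run_operator_benchmark(op_name: str) -> Dict[str, float]:
    # Illustrative placeholder for the real per-operator benchmark body;
    # TritonBench's actual entry point is not part of this diff.
    return {"latency_ms": 0.0}


def _worker(op_name: str, queue) -> None:
    # Runs one operator inside the child process and ships its metrics back.
    queue.put((op_name, run_operator_benchmark(op_name)))


def run_isolated(op_names: List[str]) -> Dict[str, Optional[Dict[str, float]]]:
    # Run each operator benchmark in its own child process so that one
    # operator's CUDA state, memory fragmentation, or crash cannot leak
    # into the next run.
    ctx = mp.get_context("spawn")  # spawn: children do not inherit a CUDA context
    results: Dict[str, Optional[Dict[str, float]]] = {}
    for name in op_names:
        queue = ctx.Queue()
        proc = ctx.Process(target=_worker, args=(name, queue))
        proc.start()
        proc.join()
        results[name] = queue.get()[1] if proc.exitcode == 0 else None
    return results


if __name__ == "__main__":
    print(run_isolated(["fp8_gemm", "fp8_gemm_blockwise"]))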
Showing 2 changed files with 2 additions and 0 deletions.
torchbenchmark/operators/fp8_gemm/fp8_gemm.py (1 addition, 0 deletions)
@@ -47,6 +47,7 @@ def __init__(
         self, tb_args: argparse.Namespace, extra_args: Optional[List[str]] = None
     ):
         super().__init__(tb_args, extra_args)
+        self.use_cuda_graphs = True
         self.extra_args = parse_args(extra_args)

     def get_input_iter(self):
torchbenchmark/operators/fp8_gemm_blockwise/operator.py (1 addition, 0 deletions)
@@ -121,6 +121,7 @@ def __init__(
         self, tb_args: argparse.Namespace, extra_args: Optional[List[str]] = None
     ):
         super().__init__(tb_args, extra_args)
+        self.use_cuda_graphs = True
         addmm_args = parse_args(self.extra_args)
         if addmm_args.m and addmm_args.n and addmm_args.k:
             self.shapes = [(addmm_args.m, addmm_args.n, addmm_args.k)]
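
Both hunks only set use_cuda_graphs = True; presumably the harness checks this flag and measures the operator through CUDA graph capture and replay. Below is a minimal sketch of that pattern using PyTorch's public CUDA graph API (torch.cuda.CUDAGraph, torch.cuda.graph); the function is an assumption about how such a flag could be honored, not TritonBench's actual measurement loop:

import torch


def time_with_cuda_graph(fn, warmup_iters: int = 3, replay_iters: int = 100) -> float:
    # Capture fn once into a CUDA graph, then time replays of the captured
    # kernels. Requires a CUDA GPU and a capture-safe fn (static inputs,
    # no host/device synchronization inside the call).
    side_stream = torch.cuda.Stream()
    side_stream.wait_stream(torch.cuda.current_stream())
    with torch.cuda.stream(side_stream):
        for _ in range(warmup_iters):  # warm up lazy initialization outside capture
            fn()
    torch.cuda.current_stream().wait_stream(side_stream)

    graph = torch.cuda.CUDAGraph()
    with torch.cuda.graph(graph):  # record fn's kernel launches into the graph
        fn()

    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(replay_iters):
        graph.replay()  # re-launch the recorded kernels with minimal CPU overhead
    end.record()
    torch.cuda.synchronize()
    return start.elapsed_time(end) / replay_iters  # average latency in milliseconds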
