specialize symfloats for wrapped_gradient in get_fake_value (#139935)
Summary:
Fixes `PYTORCH_TEST_WITH_DYNAMO=1 python test/test_torch.py TestTorchDeviceTypeCPU.test_gradient_type_promotion_cpu` when `specialize_float=False`.

Reviewers might wonder why we need this whitelist: can't we rely on python_arg_parser.h to do the specialization generically? Alas, this path never actually crosses the FFI boundary into C++, so we have to do the specialization in Python-land.

X-link: pytorch/pytorch#139935
Approved by: https://github.com/ezyang
ghstack dependencies: #139569, #139457, #139568, #139572, #139846, #139454, #139896

Reviewed By: ZainRizvi

Differential Revision: D65661211

Pulled By: bobrenjc93

fbshipit-source-id: a75d733e6191e8f884108dab3ef94f92d396e105
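To make the reasoning in the summary concrete, here is a minimal sketch of the kind of Python-side specialization the commit describes. The helper name `specialize_if_symfloat` and the usage are hypothetical illustrations, not code from the patch; the real change lives in Dynamo's `get_fake_value` path. It relies only on the documented behavior that calling `float()` on a `torch.SymFloat` guards and specializes to the concrete value, which is what python_arg_parser.h would otherwise do at the C++ boundary.

```python
import torch

# Hypothetical sketch (not the actual patch): torch.gradient's wrapped
# path never crosses the python_arg_parser.h FFI boundary, so a SymFloat
# spacing argument must be specialized on the Python side before the
# fake-value computation runs.
def specialize_if_symfloat(arg):
    if isinstance(arg, torch.SymFloat):
        # float() on a SymFloat pins it to its concrete hint and installs
        # a guard, mirroring what the C++ arg parser would otherwise do.
        return float(arg)
    return arg

# Illustrative usage: spacing values that might be SymFloats under
# specialize_float=False get pinned to plain Python floats before dispatch.
spacing = tuple(specialize_if_symfloat(s) for s in (0.5, 2.0))
```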