Supporting any Torch model (not just TorchScript) #131
Comments
Here is a relevant PyTorch forum thread. At some point torch.export may become an option. Edit: maybe it is already possible; this looks promising: https://pytorch.org/docs/main/torch.compiler_aot_inductor.html
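For reference, here is a rough C++ sketch of the loading side of that AOTInductor flow. It assumes the model has already been exported and AOT-compiled to a shared library (`model.so`) from Python, per the linked docs, and that the `torch::inductor::AOTIModelContainerRunnerCpu` runner is available; the header path and class name have moved between PyTorch releases, so treat the specifics as assumptions rather than a verified recipe.

```cpp
// Rough sketch: run an AOTInductor-compiled model from C++ with no Python interpreter.
// Assumes "model.so" was produced ahead of time by the Python-side export/AOT-compile step
// described in the linked documentation. Header path and runner class name vary by release.
#include <torch/torch.h>
#include <torch/csrc/inductor/aoti_runner/model_container_runner_cpu.h>

#include <iostream>
#include <vector>

int main() {
    c10::InferenceMode mode;  // inference only, no autograd bookkeeping

    // Load the shared library produced by the AOT compile step.
    torch::inductor::AOTIModelContainerRunnerCpu runner("model.so");

    // Example input: 10 particles with 3 coordinates each (purely illustrative).
    std::vector<torch::Tensor> inputs{torch::randn({10, 3})};
    std::vector<torch::Tensor> outputs = runner.run(inputs);

    std::cout << outputs[0] << std::endl;
    return 0;
}
```

The appeal, if this pans out, is that nothing in the hot path needs a Python runtime, just like the current TorchScript route.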
Any solution that requires a Python runtime to be present will be very limiting. Think of Folding@home, for example. It sounds like this is all still in flux, and there are important things torch.compile can't do yet. Hopefully once everything settles down, there will be a clear migration path. I really hope they continue to support JIT compilation, though. Having to rely on ahead-of-time compiled libraries would also be very limiting, and likely infeasible for important use cases.
Steve's AOT thingy seems to be the only PyTorch-endorsed way, but I have zero faith it is actually usable as of today -.-
We recently encountered the same need to deploy torch.compile'd models in MD simulations and developed a solution that offloads some computations to Python. We wrote a short note focusing on this approach and included some very basic benchmarks. We're happy with the solution we arrived at, but in that note we did not address some of the less-than-ideal experiences we had with compiled models. Our main concerns (with the compiled models, not the deployment solution) are:
All in all, we agree that the torch.compile toolchain still needs further refinement. In parallel, existing MLFF models may also require non-trivial changes to accommodate torch.compile. Nevertheless, I'm optimistic that future improvements will bring enough acceleration to justify updating existing models and, perhaps more importantly, to influence new models under development. The code and the note can be found at https://github.com/bytedance/OpenMM-Python-Force
It would be great to be able to use a torch.compile'd model in OpenMM-Torch.
AFAIK there is no way to cross the Python-C++ barrier with a torch.compile'd model. Nothing like torch::jit::Module for TorchScript.
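For comparison, this is roughly how the TorchScript path crosses that barrier today: a module scripted and saved from Python is loaded in C++ as a torch::jit::Module and executed with no Python interpreter involved, which is essentially what the plugin relies on now (the file name and tensor shapes below are illustrative).

```cpp
// Minimal sketch of the existing TorchScript route: deserialize a module saved from Python
// (torch.jit.script(model).save("model.pt")) and run it entirely from C++.
#include <torch/script.h>

#include <iostream>
#include <vector>

int main() {
    // Load the serialized TorchScript module.
    torch::jit::script::Module module = torch::jit::load("model.pt");

    // Example input: positions for 10 particles (illustrative only).
    std::vector<torch::jit::IValue> inputs;
    inputs.push_back(torch::randn({10, 3}));

    // A TorchForce-style model returns a scalar energy tensor.
    torch::Tensor energy = module.forward(inputs).toTensor();
    std::cout << energy << std::endl;
    return 0;
}
```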
I can think of two solutions:
1. Pass the already-compiled model, as a generic Python object, from Python to the C++ side. Doing this requires sending a generic Python class/function to C++ through SWIG, which I have not been successful in doing. It is really easy with pybind, but I cannot manage to mix pybind and SWIG for the life of me.
2. Have the plugin call torch.compile itself. There is no way AFAIK to call torch.compile from C++, so we would have to invoke it via pybind at, perhaps, the TorchForce constructor. Then a py::object would be stored instead of a torch::jit::Module (a rough sketch of this follows below).
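Here is a rough pybind11 sketch of what option 2 could look like. The CompiledTorchForce class and its interface are purely illustrative, not the actual plugin code; the assumption is that an embedded Python interpreter stays alive for the whole simulation and that the model arrives on the C++ side as a generic py::object.

```cpp
// Illustrative sketch of option 2: call torch.compile from C++ via an embedded Python
// interpreter and hold the result as a py::object instead of a torch::jit::Module.
// Names (CompiledTorchForce, etc.) are hypothetical, not actual OpenMM-Torch code.
#include <pybind11/embed.h>

namespace py = pybind11;

class CompiledTorchForce {
public:
    // 'model' is any Python torch.nn.Module handed across the language boundary.
    explicit CompiledTorchForce(py::object model) {
        py::module_ torch = py::module_::import("torch");
        compiled_ = torch.attr("compile")(model);  // invoke torch.compile from C++
    }

    // Evaluate the compiled model; inputs and outputs stay as Python objects (tensors).
    py::object operator()(py::object positions) const {
        return compiled_(positions);
    }

private:
    py::object compiled_;  // only valid while the embedded interpreter is alive
};

int main() {
    py::scoped_interpreter guard{};  // the Python runtime must outlive every force evaluation

    py::module_ torch = py::module_::import("torch");
    py::module_ nn = py::module_::import("torch.nn");

    // Tiny stand-in model, just to exercise the wrapper.
    py::object model = nn.attr("Linear")(3, 1);

    CompiledTorchForce force(model);
    py::object out = force(torch.attr("randn")(10, 3));
    py::print(out);
    return 0;
}
```

Note that this keeps a Python runtime in the loop at every force evaluation, which runs into exactly the limitation raised above for deployments like Folding@home, and it also raises the serialization question mentioned below.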
I think option 2 is the simplest with the current state of the codebase. Allowing something that is not easily serializable as the model (i.e., not TorchScript) would, however, make serializing TorchForce an issue.
I would like to hear your thoughts on this!