TurboHopp: Accelerated Molecule Scaffold Hopping with Consistency Models

Official implementation of the NeurIPS 2024 poster "TurboHopp: Accelerated Molecule Scaffold Hopping with Consistency Models".

Citation

If you use this code in your research, please cite our paper:

@article{turbohopp2024,
  title={TurboHopp: Accelerated Molecule Scaffold Hopping with Consistency Models},
  author={Yoo, Kiwoong and Oertell, Owen and Lee, Junhyun and Lee, Sanghoon and Kang, Jaewoo},
  journal={arXiv preprint arXiv:2410.20660},
  year={2024}
}

Repository Structure

├── configs/                 # Configuration files for training and evaluation
├── consistency/             # Consistency model implementation
├── diffusion_hopping/       # Core modules adapted from DiffHopp
├── utils/                   # Helper functions
├── train_consistency.py     # Train consistency model
├── train_rlcm.py            # Train with RL for optimization
└── evaluate_consistency.py  # Evaluation script

Environment Setup

This code was developed and tested with:

  • CUDA 11.8
  • Python 3.9
  • PyTorch 2.0.1

To create and activate the environment:

conda env create -f environment.yml
conda activate turbohopp
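
To confirm the environment matches the tested versions listed above, a quick check can be run from Python (a minimal sketch; the exact build string may vary by platform):

import torch

# Tested configuration: Python 3.9, PyTorch 2.0.1, CUDA 11.8
print(torch.__version__)          # expected: 2.0.1 (a build suffix such as +cu118 may appear)
print(torch.version.cuda)         # expected: 11.8
print(torch.cuda.is_available())  # True on a machine with a working GPU setup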

Usage

Training Consistency Model

python train_consistency.py --config configs/train_config.yaml
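
Training is driven by the YAML config file. To see which hyperparameters a run will use before launching it, the file can be inspected directly (a minimal sketch, assuming PyYAML is available in the environment and configs/train_config.yaml uses no custom tags):

import yaml

# Load the training config and print its top-level sections and values
with open("configs/train_config.yaml") as f:
    cfg = yaml.safe_load(f)

for key, value in cfg.items():
    print(key, value)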

Training with RL

Given a pretrained consistency model checkpoint:

python train_rlcm.py --config configs/config_rlcm_docking.yaml
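
Before starting RL training, it can be useful to verify that the pretrained checkpoint loads cleanly (a minimal sketch, assuming a PyTorch Lightning-style .ckpt file; the path below is a placeholder):

import torch

# Placeholder path; point this at your pretrained consistency model checkpoint
ckpt = torch.load("path/to/pretrained_consistency.ckpt", map_location="cpu")

# Lightning-style checkpoints keep weights under "state_dict"; fall back to the raw dict otherwise
state_dict = ckpt.get("state_dict", ckpt)
print(f"{len(state_dict)} tensors loaded")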

Evaluation

Evaluate a trained model:

python evaluate_consistency.py \
    --checkpoint_path path/to/model.ckpt \
    --cuda_device 0 \
    --molecules_per_pocket 10 \
    --find_best \
    --batch_size 512 \
    --mode train \
    --dataset pdbbind_filtered
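
To compare several checkpoints under identical settings, the evaluation script can be looped over from Python (a sketch; the checkpoint paths are placeholders, and only the flags shown above are used):

import subprocess

# Placeholder checkpoint paths; replace with your own trained models
checkpoints = ["path/to/model_a.ckpt", "path/to/model_b.ckpt"]

for ckpt in checkpoints:
    # Run the documented evaluation command once per checkpoint
    subprocess.run(
        [
            "python", "evaluate_consistency.py",
            "--checkpoint_path", ckpt,
            "--cuda_device", "0",
            "--molecules_per_pocket", "10",
            "--find_best",
            "--batch_size", "512",
            "--mode", "train",
            "--dataset", "pdbbind_filtered",
        ],
        check=True,
    )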

Acknowledgments

This codebase builds upon DiffHopp. We thank the authors for making their code available.

License

This project is licensed under the MIT License - see the LICENSE file for details.