Pinned Repositories
- daizedong.github.io: Daize Dong's personal page. Forked from academicpages/academicpages.github.io. (JavaScript · 2 stars)
- pjlab-sys4nlp/llama-moe: ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024).
- OpenSparseLLMs/LLaMA-MoE-v2: 🚀 LLaMA-MoE v2: Exploring Sparsity of LLaMA from Perspective of Mixture-of-Experts with Post-Training.
- A4Bio/GraphsGPT: The official implementation of the ICML'24 paper "A Graph is Worth K Words: Euclideanizing Graph using Pure Transformer".
- CASE-Lab-UMD/Unified-MoE-Compression: The official implementation of the paper "Demystifying the Compression of Mixture-of-Experts Through a Unified Framework".
- ChatGPT-ArXiv-Paper-Assistant: A personalized arXiv paper assistant bot based on ChatGPT, Gemini, or DeepSeek. Powerful, free, and easy to use. (Python · 1 star)