ModelCloud.ai (@ModelCloud)

Our mission is to give everyone, including bots, unlimited and free access to LLM/AI models.

Pinned

  1. GPTQModel (Public)

    Production-ready LLM model compression/quantization toolkit with accelerated inference support for both CPU and GPU via HF, vLLM, and SGLang.

    Python · 263 stars · 40 forks

  2. Device-SMI (Public)

    Self-contained Python library with zero dependencies that gives you unified device properties for GPU, CPU, and NPU. No more calling separate tools such as nvidia-smi or /proc/cpuinfo and parsing the output yourself.

    Python · 10 stars · 1 fork

Repositories

Showing 3 of 3 repositories
  • GPTQModel (Public)

    Production-ready LLM model compression/quantization toolkit with accelerated inference support for both CPU and GPU via HF, vLLM, and SGLang (see the quantization sketch after this list).

    Python · 263 stars · Apache-2.0 license · 40 forks · 7 open issues · 6 open pull requests · Updated Feb 8, 2025
  • Tokenicer (Public)
    Python · 0 stars · Apache-2.0 license · 1 fork · 0 open issues · 1 open pull request · Updated Feb 8, 2025
  • Device-SMI (Public)

    Self-contained Python library with zero dependencies that gives you unified device properties for GPU, CPU, and NPU. No more calling separate tools such as nvidia-smi or /proc/cpuinfo and parsing the output yourself (see the device-query sketch after this list).

    Python · 10 stars · Apache-2.0 license · 1 fork · 1 open issue · 2 open pull requests · Updated Jan 10, 2025
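
For GPTQModel, a minimal quantization sketch follows, assuming an API along the lines of GPTQModel.load / QuantizeConfig / quantize / save; the exact class names, arguments, and the example model id and output path are assumptions and may differ from the current release.

    # Hedged sketch of a GPTQModel 4-bit quantization pass.
    # Assumes GPTQModel.load / QuantizeConfig / quantize / save exist roughly
    # as named; check the repository README for the exact current API.
    from gptqmodel import GPTQModel, QuantizeConfig

    model_id = "meta-llama/Llama-3.2-1B-Instruct"   # example Hugging Face model id
    quant_path = "Llama-3.2-1B-Instruct-gptq-4bit"  # example output directory

    # Tiny placeholder calibration set; real runs use ~1k representative text samples.
    calibration_dataset = ["GPTQModel compresses LLM weights to low-bit integers."] * 256

    quant_config = QuantizeConfig(bits=4, group_size=128)

    model = GPTQModel.load(model_id, quant_config)  # load the full-precision checkpoint
    model.quantize(calibration_dataset)             # run the GPTQ calibration pass
    model.save(quant_path)                          # write the quantized checkpoint

The saved checkpoint can then be served through the supported inference backends (HF, vLLM, SGLang) mentioned in the description above.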
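For Device-SMI, a small sketch of querying unified device properties follows; the Device class and the attribute names used (model, vendor, memory_total) are assumptions based on the description above, not a confirmed API.

    # Hedged sketch: query unified device properties without shelling out to
    # nvidia-smi or parsing /proc/cpuinfo. Attribute names are assumptions;
    # consult the Device-SMI README for the real interface.
    from device_smi import Device

    for target in ("cpu", "cuda:0"):          # device identifiers are illustrative
        try:
            dev = Device(target)
        except Exception as exc:              # device may not exist on this host
            print(f"{target}: unavailable ({exc})")
            continue
        # Assumed properties: model/vendor strings and total memory in bytes.
        print(target, dev.model, dev.vendor, dev.memory_total)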
