
LLM Compressor


llmcompressor is an easy-to-use library for optimizing models for deployment with vLLM, including:

  • Comprehensive set of quantization algorithms for weight-only and activation quantization
  • Seamless integration with Hugging Face models and repositories
  • safetensors-based file format compatible with vLLM
  • Large model support via accelerate

✨ Read the announcement blog here! ✨

LLM Compressor Flow


💬 Join us on the vLLM Community Slack and share your questions, thoughts, or ideas in:

  • #sig-quantization
  • #llm-compressor

🚀 What's New!

Big updates have landed in LLM Compressor! To get a more in-depth look, check out the LLM Compressor overview.

Some of the exciting new features include:

  • Updated Offloading and Model Loading Support: Loading transformers models that are offloaded to disk and/or offloaded across distributed process ranks is now supported. Disk offloading lets users load and compress very large models that would not otherwise fit in CPU memory. Offloading is no longer handled through accelerate but through model-loading utilities added to compressed-tensors. For a full summary of the updated loading and offloading functionality, for both single-process and distributed flows, see the Big Models and Distributed Support guide.
  • Distributed GPTQ Support: GPTQ now supports Distributed Data Parallel (DDP) functionality to significantly improve calibration runtime. An example using DDP with GPTQ can be found here.
  • Updated FP4 Microscale Support: GPTQ now supports FP4 quantization schemes, including both MXFP4 and NVFP4. MXFP4 support has also been improved with updated weight-scale generation. Models with weight-only quantization in the MXFP4 format can now run in vLLM as of vLLM v0.14.0; MXFP4 models with activation quantization are not yet supported in vLLM for compressed-tensors models.
  • New Model-Free PTQ Pathway: A new model-free PTQ pathway, model_free_ptq, has been added to LLM Compressor. It lets you quantize a model without requiring a Hugging Face model definition and is especially useful in cases where oneshot may fail. This pathway currently supports data-free schemes only (i.e., FP8 quantization) and was used to quantize the Mistral Large 3 model. Additional examples have been added illustrating how LLM Compressor can be used for Kimi K2.
  • Extended KV Cache and Attention Quantization Support: LLM Compressor now supports attention quantization. KV cache quantization, which previously supported only per-tensor scales, has been extended to support any quantization scheme, including a new per-head scheme. Support for these checkpoints is ongoing in vLLM, and scripts to get started have been added to the experimental folder.
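The per-head scheme mentioned above can be illustrated with a small numerical sketch. This is hand-rolled plain Python, not LLM Compressor's implementation; the head values and the FP8 e4m3 maximum of 448.0 are illustrative assumptions. Per-tensor quantization picks one max-abs scale for the whole cache tensor, while per-head keeps one scale per attention head, so heads without outliers get a finer quantization grid.

```python
# Toy comparison of per-tensor vs. per-head scale selection for FP8-style
# KV cache quantization (sketch only, not the library's code).
FP8_MAX = 448.0  # max representable magnitude in FP8 e4m3


def maxabs(xs):
    return max(abs(x) for x in xs)


# A fake "key" tensor for 2 attention heads, flattened per head.
heads = [
    [0.1, -0.2, 0.15, 0.05],  # head 0: small values
    [8.0, -6.5, 7.2, -5.9],   # head 1: outlier head
]

# Per-tensor: a single scale shared by every head, dominated by the outlier head.
per_tensor_scale = maxabs([v for h in heads for v in h]) / FP8_MAX

# Per-head: one scale per head, so head 0 keeps a much finer grid.
per_head_scales = [maxabs(h) / FP8_MAX for h in heads]

print(per_tensor_scale)
print(per_head_scales)
```

Head 0's per-head scale is 40x smaller than the shared per-tensor scale, which is exactly the quantization-error win the per-head scheme targets.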

Supported Formats

  • Activation Quantization: W8A8 (int8 and fp8)
  • Mixed Precision: W4A16, W8A16, NVFP4 (W4A4 and W4A16 support)
  • 2:4 Semi-structured and Unstructured Sparsity
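As a rough illustration of what the int8 weight side of a W8A8 scheme does, here is a minimal symmetric round-to-nearest quantizer in plain Python. This is a sketch of the general technique under simplified assumptions (one scale for the whole row); real W8A8 flows use calibrated per-channel or per-tensor scales and fused dequantization in the inference kernel.

```python
# Symmetric round-to-nearest int8 quantization of a small weight row.

def quantize_int8(weights):
    # One scale mapping the largest magnitude onto the int8 grid [-127, 127].
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale


def dequantize(q, scale):
    return [v * scale for v in q]


w = [0.5, -1.27, 0.03, 1.0]
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
print(q)      # int8 codes
print(w_hat)  # reconstruction, close to w
```

Only the int8 codes and one scale per row need to be stored, which is where the 4x memory saving over fp32 (2x over fp16) comes from.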

Supported Algorithms

  • Simple PTQ
  • GPTQ
  • AWQ
  • SmoothQuant
  • SparseGPT
  • AutoRound

When to Use Which Optimization

Please refer to compression_schemes.md for detailed information about available optimization schemes and their use cases.

Installation

pip install llmcompressor

Get Started

End-to-End Examples

Applying quantization with llmcompressor:

User Guides

Deep dives into advanced usage of llmcompressor:

Quick Tour

Let's quantize Qwen3-30B-A3B with FP8 weights and activations using the Round-to-Nearest algorithm.

Note that the model can be swapped for a local or remote HF-compatible checkpoint and the recipe may be changed to target different quantization algorithms or formats.

Apply Quantization

Quantization is applied by selecting an algorithm and calling the oneshot API.

from compressed_tensors.offload import dispatch_model from transformers import AutoModelForCausalLM, AutoTokenizer from llmcompressor import oneshot from llmcompressor.modifiers.quantization import QuantizationModifier MODEL_ID = "Qwen/Qwen3-30B-A3B" # Load model. model = AutoModelForCausalLM.from_pretrained(MODEL_ID, dtype="auto") tokenizer = AutoTokenizer.from_pretrained(MODEL_ID) # Configure the quantization algorithm and scheme. # In this case, we: # * quantize the weights to FP8 using RTN with block_size 128 # * quantize the activations dynamically to FP8 during inference recipe = QuantizationModifier( targets="Linear", scheme="FP8_BLOCK", ignore=["lm_head", "re:.*mlp.gate$"], ) # Apply quantization. oneshot(model=model, recipe=recipe) # Confirm generations of the quantized model look sane. print("========== SAMPLE GENERATION ==============") dispatch_model(model) input_ids = tokenizer("Hello my name is", return_tensors="pt").input_ids.to( model.device ) output = model.generate(input_ids, max_new_tokens=20) print(tokenizer.decode(output[0])) print("==========================================") # Save to disk in compressed-tensors format. SAVE_DIR = MODEL_ID.split("/")[1] + "-FP8-BLOCK" model.save_pretrained(SAVE_DIR) tokenizer.save_pretrained(SAVE_DIR)

Inference with vLLM

The checkpoints created by llmcompressor can be loaded and run in vllm:

Install:

pip install vllm

Run:

from vllm import LLM model = LLM("Qwen/Qwen3-30B-A3B-FP8-BLOCK") output = model.generate("My name is")

Questions / Contribution

  • If you have any questions or requests open an issue and we will add an example or documentation.
  • We appreciate contributions to the code, examples, integrations, and documentation as well as bug reports and feature requests! Learn how here.

Citation

If you find LLM Compressor useful in your research or projects, please consider citing it:

@software{llmcompressor2024, title={{LLM Compressor}}, author={Red Hat AI and vLLM Project}, year={2024}, month={8}, url={https://github.com/vllm-project/llm-compressor}, }

About

No description, topics, or website provided.
2.41 MiB
0 forks0 stars1 branches0 TagREADMEApache-2.0 license
Language
Python99.5%
Shell0.3%
Makefile0.1%
Dockerfile0%
Others0.1%