[docs] Pipeline-level quantization #11604

Merged

# Quantization

Quantization focuses on representing data with fewer bits while also trying to preserve the precision of the original data. This often means converting a data type to represent the same information with fewer bits. For example, if your model weights are stored as 32-bit floating points and they're quantized to 16-bit floating points, this halves the model size, which makes it easier to store and reduces memory usage. Lower precision can also speed up inference because it takes less time to perform calculations with fewer bits.

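For example, a standalone PyTorch sketch (not part of the Diffusers API; the tensor shape is arbitrary) makes the storage difference concrete:

```py
import torch

# The same 1024x1024 weight matrix stored in fp32 vs. fp16.
weights_fp32 = torch.randn(1024, 1024, dtype=torch.float32)
weights_fp16 = weights_fp32.to(torch.float16)

bytes_fp32 = weights_fp32.numel() * weights_fp32.element_size()  # 4 bytes per element
bytes_fp16 = weights_fp16.numel() * weights_fp16.element_size()  # 2 bytes per element
print(f"fp32: {bytes_fp32 / 2**20:.1f} MiB, fp16: {bytes_fp16 / 2**20:.1f} MiB")  # fp16 uses half the memory
```
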
Diffusers supports multiple quantization backends to make large diffusion models like [Flux](../api/pipelines/flux) more accessible. This guide shows how to use the [`~quantizers.PipelineQuantizationConfig`] class to quantize a pipeline during its initialization from a pretrained or non-quantized checkpoint.

## Pipeline-level quantization

Diffusers can directly load checkpoints that already contain quantized models ([example](https://huggingface.co/hf-internal-testing/flux.1-dev-nf4-pkg)), but you can also apply quantization on-the-fly when initializing a pipeline from a pretrained, non-quantized checkpoint.

There are two ways you can use [`~quantizers.PipelineQuantizationConfig`] depending on the level of control you want over the quantization specifications of each model in the pipeline.

- for basic and simple use cases, you only need to define the `quant_backend`, `quant_kwargs`, and `components_to_quantize` arguments
- for more granular quantization control, provide a `quant_mapping` with the quantization specifications for the individual model components

### Simple quantization

Initialize [`~quantizers.PipelineQuantizationConfig`] with the following parameters.

- `quant_backend` specifies which quantization backend to use. Currently supported backends include `bitsandbytes_4bit`, `bitsandbytes_8bit`, `gguf`, `quanto`, and `torchao`.
- `quant_kwargs` contains the specific quantization arguments to use.
- `components_to_quantize` specifies which components of the pipeline to quantize. Typically, you should quantize the most compute-intensive components like the transformer. The text encoder is another component to consider quantizing if a pipeline has more than one, such as [`FluxPipeline`]. The example below quantizes the T5 text encoder in [`FluxPipeline`] while keeping the CLIP model intact.

```py
import torch
from diffusers import DiffusionPipeline
from diffusers.quantizers import PipelineQuantizationConfig

pipeline_quant_config = PipelineQuantizationConfig(
    quant_backend="bitsandbytes_4bit",
    quant_kwargs={"load_in_4bit": True, "bnb_4bit_quant_type": "nf4", "bnb_4bit_compute_dtype": torch.bfloat16},
    components_to_quantize=["transformer", "text_encoder_2"],
)
```

Pass the `pipeline_quant_config` to [`~DiffusionPipeline.from_pretrained`] to quantize the pipeline.

```py
pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    quantization_config=pipeline_quant_config,
    torch_dtype=torch.bfloat16,
).to("cuda")

image = pipe("photo of a cute dog").images[0]
```

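To get a rough sense of the savings, you can sum the parameter storage of the quantized components with plain PyTorch. This is only a sketch, and the `component_size_gb` helper is just for illustration: bitsandbytes stores 4-bit weights in packed tensors, so the numbers reflect the compressed storage rather than the original precision.

```py
def component_size_gb(module):
    # numel() * element_size() counts the bytes of the stored (packed) tensors,
    # ignoring buffers, so treat this as a rough estimate only.
    return sum(p.numel() * p.element_size() for p in module.parameters()) / 1024**3

print(f"transformer:    {component_size_gb(pipe.transformer):.2f} GB")
print(f"text_encoder_2: {component_size_gb(pipe.text_encoder_2):.2f} GB")
```
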
### quant_mapping

The `quant_mapping` argument provides more flexible options for how to quantize each individual component in a pipeline, like combining different quantization backends.

Initialize [`~quantizers.PipelineQuantizationConfig`] and pass a `quant_mapping` to it. The `quant_mapping` allows you to specify the quantization options for each component in the pipeline, such as the transformer and text encoder.

The example below uses two quantization backends, [`~quantizers.QuantoConfig`] and [`transformers.BitsAndBytesConfig`], for the transformer and text encoder.

```py
import torch
from diffusers import DiffusionPipeline
from diffusers.quantizers.quantization_config import QuantoConfig
from diffusers.quantizers import PipelineQuantizationConfig
from transformers import BitsAndBytesConfig as TransformersBitsAndBytesConfig

pipeline_quant_config = PipelineQuantizationConfig(
    quant_mapping={
        "transformer": QuantoConfig(weights_dtype="int8"),
        "text_encoder_2": TransformersBitsAndBytesConfig(
            load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16
        ),
    }
)
```

There is a separate bitsandbytes backend in [Transformers](https://huggingface.co/docs/transformers/main_classes/quantization#transformers.BitsAndBytesConfig). You need to import and use [`transformers.BitsAndBytesConfig`] for components that come from Transformers. For example, `text_encoder_2` in [`FluxPipeline`] is a [`~transformers.T5EncoderModel`] from Transformers, so you need to use [`transformers.BitsAndBytesConfig`] instead of [`diffusers.BitsAndBytesConfig`].

> [!TIP]
> Use the [simple quantization](#simple-quantization) method above if you don't want to manage these distinct imports or aren't sure where each pipeline component comes from.

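If you aren't sure which library a component comes from, one option (a small sketch, not an official recipe) is to read the pipeline's `model_index.json`, which lists the source library for every component without downloading any weights:

```py
from diffusers import DiffusionPipeline

# Only the pipeline config (model_index.json) is fetched here, not the weights.
config = DiffusionPipeline.load_config("black-forest-labs/FLUX.1-dev")
for name, value in config.items():
    if isinstance(value, (list, tuple)) and len(value) == 2:
        library, class_name = value
        print(f"{name}: {library}.{class_name}")
# e.g. transformer: diffusers.FluxTransformer2DModel
#      text_encoder_2: transformers.T5EncoderModel
```

The example below uses [`diffusers.BitsAndBytesConfig`] for the transformer and [`transformers.BitsAndBytesConfig`] for `text_encoder_2`.
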
```py
import torch
from diffusers import DiffusionPipeline
from diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig
from diffusers.quantizers import PipelineQuantizationConfig
from transformers import BitsAndBytesConfig as TransformersBitsAndBytesConfig

pipeline_quant_config = PipelineQuantizationConfig(
    quant_mapping={
        "transformer": DiffusersBitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16),
        "text_encoder_2": TransformersBitsAndBytesConfig(
            load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16
        ),
    }
)
```

Pass the `pipeline_quant_config` to [`~DiffusionPipeline.from_pretrained`] to quantize the pipeline.

```py
pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    quantization_config=pipeline_quant_config,
    torch_dtype=torch.bfloat16,
).to("cuda")

image = pipe("photo of a cute dog").images[0]
```

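Pipeline-level quantization also composes with other Diffusers memory optimizations. As a sketch (reusing the `pipeline_quant_config` from above, and assuming Accelerate is installed), you can swap `.to("cuda")` for model CPU offloading so that each model only occupies the GPU while it runs:

```py
pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    quantization_config=pipeline_quant_config,
    torch_dtype=torch.bfloat16,
)
# Moves each model to the GPU only for its forward pass, lowering peak VRAM usage.
pipe.enable_model_cpu_offload()

image = pipe("photo of a cute dog").images[0]
```
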
## Resources

Check out the resources below to learn more about quantization.

- If you are new to quantization, we recommend checking out the following beginner-friendly courses in collaboration with DeepLearning.AI.
  - [Quantization Fundamentals with Hugging Face](https://www.deeplearning.ai/short-courses/quantization-fundamentals-with-hugging-face/)
  - [Quantization in Depth](https://www.deeplearning.ai/short-courses/quantization-in-depth/)
- Refer to the [Contribute new quantization method guide](https://huggingface.co/docs/transformers/main/en/quantization/contribute) if you're interested in adding a new quantization method.
- The Transformers quantization [Overview](https://huggingface.co/docs/transformers/quantization/overview#when-to-use-what) provides an overview of the pros and cons of different quantization backends.
- Read the [Exploring Quantization Backends in Diffusers](https://huggingface.co/blog/diffusers-quantization) blog post for a brief introduction to each quantization backend, how to choose a backend, and combining quantization with other memory optimizations.

Review comment:

> Easy to miss when adding a new backend. Additionally, with the granular option, it is possible to use a quantization backend for `transformers` that may not exist in `diffusers`, and vice-versa. How do we make that info clear? 👀