
PyTorch’s TorchTune: Revolutionizing LLM Fine-Tuning


Introduction

The ever-growing field of large language models (LLMs) unlocks incredible potential for various applications. However, fine-tuning these powerful models for specific tasks can be a complex and resource-intensive endeavor. TorchTune, a new PyTorch library, tackles this challenge head-on by offering an intuitive and extensible solution. PyTorch has released the alpha of torchtune, a PyTorch-native library for easily fine-tuning large language models. In keeping with PyTorch’s design principles, it provides composable and modular building blocks along with easy-to-extend training recipes for fine-tuning LLMs with techniques such as LoRA and QLoRA on a range of consumer-grade and professional GPUs.


Why Use TorchTune?

In the past year, there has been a surge in interest in open large language models (LLMs). Fine-tuning these cutting-edge models for specific applications has become a crucial technique. However, this adaptation process can be complex, requiring extensive customization across various stages, including data and model selection, quantization, evaluation, and inference. Additionally, the sheer size of these models presents a significant challenge when fine-tuning them on resource-constrained consumer-grade GPUs.

Current solutions often hinder customization and optimization by obfuscating critical components behind layers of abstraction. This lack of transparency makes it difficult to understand how different elements interact and which ones need modification to achieve the desired functionality. TorchTune addresses this challenge by giving developers fine-grained control and visibility over the entire fine-tuning process, enabling them to tailor LLMs to their specific requirements and constraints.

TorchTune Workflows

TorchTune supports the following fine-tuning workflows (sketched end to end below):

  • Downloading and preparing datasets and model checkpoints.
  • Customizing training with composable building blocks that support different model architectures, parameter-efficient fine-tuning (PEFT) techniques, and more.
  • Logging progress and metrics to gain insight into the training process.
  • Quantizing the model post-tuning.
  • Evaluating the fine-tuned model on popular benchmarks.
  • Running local inference for testing fine-tuned models.
  • Checkpoint compatibility with popular production inference systems.
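
Taken together, these workflows chain into a single CLI pipeline. The sketch below is illustrative rather than definitive: it assumes the recipe and config names from the alpha release (quantize, eleuther_eval, generate), which may change in later versions.

# Download weights and tokenizer (requires Hugging Face access)
tune download meta-llama/Llama-2-7b-hf --output-dir /tmp/Llama-2-7b-hf --hf-token <HF_TOKEN>

# Fine-tune with LoRA on a single GPU
tune run lora_finetune_single_device --config llama2/7B_lora_single_device

# Quantize the fine-tuned model via torchao
tune run quantize --config quantization

# Evaluate on popular benchmarks via EleutherAI's LM Evaluation Harness
tune run eleuther_eval --config eleuther_evaluation

# Run local inference to spot-check the fine-tuned model
tune run generate --config generation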

TorchTune supports the following models:

Model      Sizes
Llama2     7B, 13B
Mistral    7B
Gemma      2B

Moreover, the team plans to add new models in the coming weeks, including 70B versions and mixture-of-experts (MoE) models.

Fine-Tuning Recipes

TorchTune provides recipes for both distributed training (full fine-tuning and LoRA on one to eight GPUs) and single-device/low-memory training (full fine-tuning, plus LoRA and QLoRA on a single GPU).

Memory efficiency is important to us. All of our recipes are tested on a variety of setups including commodity GPUs with 24GB of VRAM as well as beefier options found in data centers.

Single-GPU recipes expose a number of memory optimizations that aren’t available in the distributed versions. These include support for low-precision optimizers from bitsandbytes and fusing the optimizer step with the backward pass to reduce the memory footprint from gradients (see the example config). For memory-constrained setups, we recommend using the single-device configs as a starting point. For example, our default QLoRA config has a peak memory usage of ~9.3GB, while LoRA on a single device with batch_size=2 peaks at ~17.1GB. Both figures assume dtype=bf16 and AdamW as the optimizer.
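
As a concrete illustration of the numbers above, a minimal single-device QLoRA launch with bf16 and a batch size of 2 might look like the following; the config name matches the alpha release, and the overrides are illustrative:

tune run lora_finetune_single_device \
--config llama2/7B_qlora_single_device \
batch_size=2 \
dtype=bf16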

The minimum memory requirements for each recipe are documented in the associated configs.

What is TorchTune’s Design?

  • Extensible by Design: Acknowledging the rapid evolution of fine-tuning techniques and diverse user needs, TorchTune prioritizes easy extensibility. Its recipes leverage modular components and readily modifiable training loops. Minimal abstraction ensures user control over the fine-tuning process. Each recipe is self-contained (less than 600 lines of code!) and requires no external trainers or frameworks, further promoting transparency and customization (see the recipe-copying example after this list).
  • Democratizing Fine-Tuning: TorchTune fosters inclusivity by catering to users of varying expertise levels. Its intuitive configuration files are readily modifiable, allowing users to customize settings without extensive coding knowledge. Additionally, memory-efficient recipes enable fine-tuning on readily available consumer-grade GPUs (e.g., 24GB), eliminating the need for expensive data center hardware.
  • Open Source Ecosystem Integration: Recognizing the vibrant open-source LLM ecosystem, PyTorch’s TorchTune prioritizes interoperability with a wide range of tools and resources. This flexibility empowers users with greater control over the fine-tuning process and deployment of their models.
  • Future-Proof Design: Anticipating the increasing complexity of multilingual, multimodal, and multi-task LLMs, PyTorch’s TorchTune prioritizes flexible design. This ensures the library can adapt to future advancements while maintaining pace with the research community’s rapid innovation. To power the full spectrum of future use cases, seamless collaboration between various LLM libraries and tools is crucial. With this vision in mind, TorchTune is built from the ground up for seamless integration with the evolving LLM landscape.
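
One concrete payoff of the extensibility point above: because every recipe is a single self-contained file, you can copy it locally with the tune CLI and edit the training loop directly. A minimal sketch, assuming the alpha CLI’s behavior of accepting a local recipe path:

# Copy a built-in recipe to a local file for modification
tune cp lora_finetune_single_device my_custom_recipe.py

# Run the modified recipe against an existing config
tune run my_custom_recipe.py --config llama2/7B_lora_single_device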

Integration with the LLM Ecosystem

TorchTune adheres to the PyTorch philosophy of promoting ease of use by offering native integrations with several prominent LLM tools:

  • Hugging Face Hub: Leverages the vast repository of open-source models and datasets available on Hugging Face Hub for fine-tuning. Streamlined integration through the tune download CLI command facilitates immediate initiation of fine-tuning tasks.
  • PyTorch FSDP: Enables distributed training by harnessing the capabilities of PyTorch FSDP. This caters to the growing trend of utilizing multi-GPU setups, commonly featuring consumer-grade cards like NVIDIA’s 3090/4090 series. TorchTune offers distributed training recipes powered by FSDP to capitalize on such hardware configurations.
  • Weights & Biases: Integrates with the Weights & Biases AI platform for comprehensive logging of training metrics and model checkpoints. This centralizes configuration details, performance metrics, and model versions for convenient monitoring and analysis of fine-tuning runs (an override example follows this list).
  • EleutherAI’s LM Evaluation Harness: Recognizing the critical role of model evaluation, TorchTune includes a streamlined evaluation recipe powered by EleutherAI’s LM Evaluation Harness. This grants users straightforward access to a comprehensive suite of established LLM benchmarks. To further enhance the evaluation experience, we intend to collaborate closely with EleutherAI in the coming months to establish an even deeper and more native integration.
  • ExecuTorch: Enables efficient inference of fine-tuned models on a wide range of mobile and edge devices by facilitating seamless export to ExecuTorch.
  • torchao: Provides a simple post-training recipe powered by torchao’s quantization APIs, enabling efficient conversion of fine-tuned models into lower precision formats (e.g., 4-bit or 8-bit) for reduced memory footprint and faster inference.
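
As an example of how these integrations compose with the config system, swapping the metric logger for Weights & Biases is a command-line override away. The component path below reflects the alpha release and may differ in later versions, and the project name is hypothetical:

tune run lora_finetune_single_device \
--config llama2/7B_lora_single_device \
metric_logger._component_=torchtune.utils.metric_logging.WandBLogger \
metric_logger.project=my-finetune-project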

Getting Started

To get started with fine-tuning your first LLM with TorchTune, see our tutorial on fine-tuning Llama2 7B. Our end-to-end workflow tutorial will show you how to evaluate, quantize and run inference with this model. The rest of this section will provide a quick overview of these steps with Llama2.

Step 1: Downloading a model

Follow the instructions on the official meta-llama repository to ensure you have access to the Llama2 model weights. Once you have confirmed access, you can run the following command to download the weights to your local machine. This will also download the tokenizer model and a responsible use guide.

tune download meta-llama/Llama-2-7b-hf \
--output-dir /tmp/Llama-2-7b-hf \
--hf-token <HF_TOKEN>

Set your environment variable HF_TOKEN or pass in --hf-token to the command in order to validate your access. You can find your token at https://huggingface.co/settings/tokens.
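
For example, to set the token for your current shell session (substitute your own token):

export HF_TOKEN=<HF_TOKEN>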

Step 2: Running Fine-Tuning Recipes

Llama2 7B + LoRA on a single GPU

tune run lora_finetune_single_device --config llama2/7B_lora_single_device

For distributed training, the tune CLI integrates with torchrun.

Llama2 7B full fine-tuning on two GPUs

tune run --nproc_per_node 2 full_finetune_distributed --config llama2/7B_full

Make sure to place any torchrun commands before the recipe specification. Any CLI args after the recipe will override the config and will not impact distributed training.
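
For example, in the command below --nproc_per_node sits before the recipe name and is consumed by torchrun, while batch_size=16 comes after the config and is treated as an override (the batch size value is illustrative):

tune run --nproc_per_node 2 full_finetune_distributed \
--config llama2/7B_full \
batch_size=16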

Step 3: Modify Configs

There are two ways in which you can modify configs:

Config Overrides

You can easily override config properties from the command line:

tune run lora_finetune_single_device \
--config llama2/7B_lora_single_device \
batch_size=8 \
enable_activation_checkpointing=True \
max_steps_per_epoch=128

Update a Local Copy

You can also copy the config to your local directory and modify the contents directly:

tune cp llama2/7B_full ./my_custom_config.yaml
Copied to ./my_custom_config.yaml

Then, you can run your custom recipe by directing the tune run command to your local files:

tune run full_finetune_distributed --config ./my_custom_config.yaml
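
If you want to sanity-check your edits before launching a run, the CLI also includes a validate subcommand in the alpha release:

tune validate --config ./my_custom_config.yaml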

Check out tune --help for all possible CLI commands and options. For more information on using and updating configs, take a look at our config deep-dive.

Conclusion

TorchTune empowers developers to harness the power of large language models (LLMs) through a user-friendly and extensible PyTorch library. Its focus on composable building blocks, memory-efficient recipes, and seamless integration with the LLM ecosystem simplifies the fine-tuning process for a wide range of users. Whether you’re a seasoned researcher or just starting out, TorchTune provides the tools and flexibility to tailor LLMs to your specific needs and constraints.