Vocabulary Compression for LLM Pretraining
A simple approach that compresses the vocabulary layer of an LLM during pre-training, reducing memory requirements and increasing throughput.
Abstract
We present a method to compress the final linear layer of language models, reducing memory usage by up to 3.4x without significant performance loss. By grouping tokens based on Byte Pair Encoding (BPE) merges, we prevent materialization of the memory-intensive logits tensor. Evaluations on the TinyStories dataset show that our method performs on par with GPT-Neo and GPT-2 while improving throughput by up to 3x, making it suitable for low-compute environments.
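To make the idea concrete, below is a minimal sketch of one way such a grouped output head could avoid materializing the full (batch, seq, vocab) logits tensor: factor the prediction into a group logit and a within-group logit, computing the latter only for the target's group. This is an illustrative interpretation, not the paper's exact implementation; the class name `GroupedVocabHead` and the `token_to_group` / `token_to_slot` mappings (assumed here to come from grouping BPE tokens by merge order) are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GroupedVocabHead(nn.Module):
    """Two-level output head: predict a token group, then the token's slot
    within that group. Only the target group's logits are formed, so the
    full (N, vocab_size) logits tensor is never materialized.
    Sketch only; group assignments are assumed to follow BPE merge order."""

    def __init__(self, d_model, num_groups, group_size,
                 token_to_group, token_to_slot):
        super().__init__()
        self.group_proj = nn.Linear(d_model, num_groups)   # hidden -> group logits
        self.intra_proj = nn.Linear(d_model, group_size)   # hidden -> within-group logits
        self.group_emb = nn.Embedding(num_groups, d_model)  # condition on the group id
        self.register_buffer("token_to_group", token_to_group)  # (V,) long tensor
        self.register_buffer("token_to_slot", token_to_slot)    # (V,) long tensor

    def loss(self, hidden, targets):
        # hidden: (N, d_model) flattened hidden states, targets: (N,) token ids
        tgt_group = self.token_to_group[targets]            # (N,)
        tgt_slot = self.token_to_slot[targets]              # (N,)

        group_logits = self.group_proj(hidden)              # (N, num_groups)
        group_loss = F.cross_entropy(group_logits, tgt_group)

        # Within-group logits are computed only for the target group,
        # so no (N, V) tensor is ever built.
        h_cond = hidden + self.group_emb(tgt_group)          # (N, d_model)
        slot_logits = self.intra_proj(h_cond)                # (N, group_size)
        slot_loss = F.cross_entropy(slot_logits, tgt_slot)

        return group_loss + slot_loss
```

At inference time, the same factorization can be used by first scoring the groups and then scoring tokens only within the top-ranked group(s), which keeps peak memory proportional to the group size rather than the full vocabulary.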
Related Publications
LLM Vocabulary Compression for Low-Compute Environments. In Workshop on Machine Learning and Compression (MLC @ NeurIPS), NeurIPS 2024.