Understanding Low-Rank Adaptation and Quantized Fine-tuning for Large Language Models
Start with a large language model pre-trained on vast amounts of data
Quantize the model weights to lower precision (e.g., 4-bit) while maintaining performance
Train small rank-decomposition matrices instead of the full weight matrices (see the sketch below)
The resulting model is adapted to specific tasks while training only a small fraction of the parameters
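A minimal sketch of the low-rank adaptation idea, not the QLoRA or PEFT implementation: the frozen base weight W stays fixed, and only two small matrices A (rank x in) and B (out x rank) are trained, so the effective weight becomes W + B·A. The class name LoRALinear and the rank and alpha parameters are illustrative choices, not a library API.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """A frozen linear layer with a trainable low-rank update (illustrative sketch)."""

    def __init__(self, in_features: int, out_features: int, rank: int = 4, alpha: float = 8.0):
        super().__init__()
        # Frozen base weight, standing in for a pre-trained (possibly quantized) layer.
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad = False

        # Trainable low-rank decomposition: B @ A has shape (out_features, in_features).
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Base output plus the scaled low-rank correction.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling


if __name__ == "__main__":
    layer = LoRALinear(in_features=1024, out_features=1024, rank=8)
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    frozen = sum(p.numel() for p in layer.parameters() if not p.requires_grad)
    print(f"trainable: {trainable}, frozen: {frozen}")  # ~16K trainable vs ~1M frozen
```

For a 1024x1024 layer this trains roughly 16K parameters against about 1M frozen ones, which is the "minimal parameters" effect the steps above describe.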
[Diagram: the original model's large parameter space, shown with an illustrative count of 100 parameters]