Low-rank adaptation, frozen base + BA update
Difficulty: Medium · Category: Architecture

Implement LoRA — parameter-efficient fine-tuning for large models.
• self.linear: frozen nn.Linear (weight & bias requires_grad=False)
• self.lora_A: nn.Parameter(rank, in_features) — random init
• self.lora_B: nn.Parameter(out_features, rank) — zero init
• Scaling: alpha / rank
Implement the function below. Use only basic PyTorch operations.
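The spec above can be sketched as a single module. This is a minimal illustration, not the graded reference solution: the class name LoRALinear and the default rank and alpha values are assumptions, and only the attributes named in the bullets are used.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    # Hypothetical class name; rank/alpha defaults are illustrative choices.
    def __init__(self, in_features, out_features, rank=4, alpha=8.0):
        super().__init__()
        # Frozen base layer: neither weight nor bias receives gradients.
        self.linear = nn.Linear(in_features, out_features)
        self.linear.weight.requires_grad_(False)
        self.linear.bias.requires_grad_(False)
        # A is random-initialized, B is zero-initialized, so the update
        # BA is zero at the start and training begins from the frozen
        # layer's pretrained behavior.
        self.lora_A = nn.Parameter(torch.randn(rank, in_features))
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x):
        # Frozen path plus scaled low-rank update: Wx + (alpha/rank) * BAx.
        return self.linear(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)
```

Because lora_B starts at zero, the module's output at initialization matches the frozen linear layer exactly; only lora_A and lora_B appear among the trainable parameters.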
For interactive practice with auto-grading, run TorchCode locally: pip install torch-judge, then use check("lora").