Fine-Tuning

Adapting a pre-trained AI model to a specific task or dataset by continuing training on new data, often improving task-specific performance.

Definition

A transfer-learning technique in which a generic, large-scale pre-trained model (e.g., BERT, ResNet) is further trained on domain-specific labeled data at a reduced learning rate. Fine-tuning accelerates development, requires less task-specific data, and leverages the broad feature representations learned during pre-training. Governance must track base-model provenance and license compliance, and document the fine-tuning dataset and hyperparameter choices for reproducibility.
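A minimal sketch of this idea in PyTorch, assuming an ImageNet-pre-trained ResNet-18 and a hypothetical 5-class target task: the pre-trained backbone is updated at a reduced learning rate while a newly added classification head trains at a higher one. The training data here is random placeholder data, not a real dataset.

```python
# Sketch: fine-tune a pre-trained ResNet-18 on a hypothetical 5-class task,
# using a reduced learning rate for the pre-trained backbone.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision import models

NUM_CLASSES = 5  # hypothetical task-specific label count

# Load ImageNet-pre-trained weights and swap in a new classification head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# Reduced learning rate for pre-trained layers, larger for the new head.
optimizer = torch.optim.AdamW([
    {"params": [p for n, p in model.named_parameters() if not n.startswith("fc.")],
     "lr": 1e-5},
    {"params": model.fc.parameters(), "lr": 1e-3},
])
loss_fn = nn.CrossEntropyLoss()

# Placeholder data standing in for the domain-specific labeled dataset.
dummy = TensorDataset(torch.randn(16, 3, 224, 224),
                      torch.randint(0, NUM_CLASSES, (16,)))
train_loader = DataLoader(dummy, batch_size=4)

model.train()
for images, labels in train_loader:
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```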

Real-World Example

A legal-tech company fine-tunes a BERT model on 50,000 labeled legal-contract clauses. With roughly a tenth of the data that training from scratch would require, they achieve 90% accuracy on clause classification, enabling automated contract review that meets in-house QA standards.
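A sketch of how a clause classifier along these lines might be fine-tuned with the Hugging Face Transformers Trainer API; the clause texts, label set, and hyperparameters below are illustrative placeholders, not the company's actual data or configuration.

```python
# Illustrative sketch: fine-tuning BERT for contract-clause classification
# with Hugging Face Transformers. Texts, labels, and hyperparameters are
# made-up placeholders.
import torch
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)

LABELS = ["indemnification", "termination", "confidentiality"]  # hypothetical label set

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS))

# Tiny placeholder dataset; in practice this would be the labeled clauses.
texts = ["Either party may terminate this agreement with 30 days notice.",
         "The receiving party shall keep all information confidential."]
labels = [1, 2]

class ClauseDataset(torch.utils.data.Dataset):
    def __init__(self, texts, labels):
        self.enc = tokenizer(texts, truncation=True, padding=True,
                             return_tensors="pt")
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, idx):
        item = {k: v[idx] for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

args = TrainingArguments(output_dir="clause-bert",
                         num_train_epochs=3,
                         learning_rate=2e-5,  # reduced LR typical for fine-tuning
                         per_device_train_batch_size=2)
Trainer(model=model, args=args,
        train_dataset=ClauseDataset(texts, labels)).train()
```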