Ground Truth
The accurate, real-world data or labels used to train AI models and to benchmark their performance during evaluation.
Definition
The authoritative reference—often human-annotated—against which model predictions are compared. Ensuring high-quality ground truth requires rigorous data-labeling protocols, inter-annotator agreement checks, and periodic revalidation as definitions or contexts evolve. Ground truth underpins fairness and accuracy evaluations and must be stored with provenance metadata for auditability.
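One common inter-annotator agreement check is Cohen's kappa, which measures how often two annotators agree beyond what chance alone would produce. A minimal sketch, assuming two hypothetical annotators labeling the same items (all names and labels are illustrative):

```python
# Sketch: inter-annotator agreement via Cohen's kappa for two annotators.
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Agreement between two annotators, corrected for chance agreement."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of items where both annotators agree.
    p_observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement, from each annotator's label distribution.
    counts_a = Counter(labels_a)
    counts_b = Counter(labels_b)
    p_expected = sum(
        (counts_a[label] / n) * (counts_b[label] / n)
        for label in counts_a.keys() | counts_b.keys()
    )
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical labels from two annotators on six images.
annotator_1 = ["cat", "dog", "cat", "cat", "dog", "bird"]
annotator_2 = ["cat", "dog", "dog", "cat", "dog", "bird"]
print(round(cohens_kappa(annotator_1, annotator_2), 3))  # → 0.739
```

A kappa near 1 indicates the labels are reliable enough to serve as ground truth; low kappa signals ambiguous labeling guidelines that need revision before the data is trusted.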
Real-World Example
In autonomous-driving research, thousands of frames are manually labeled with precise bounding boxes around pedestrians and vehicles. This ground truth is used to train perception models and to benchmark detection accuracy under diverse conditions, ensuring model performance aligns with real-world safety requirements.
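Benchmarking against bounding-box ground truth typically uses intersection-over-union (IoU): a prediction counts as correct when its overlap with the annotated box exceeds a threshold. A minimal sketch with illustrative coordinates (the boxes and threshold here are assumptions, not from any specific benchmark):

```python
# Sketch: IoU between a predicted box and a ground-truth box.
# Boxes are (x_min, y_min, x_max, y_max) in pixel coordinates.

def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned bounding boxes."""
    # Overlap rectangle: max of the mins, min of the maxes.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

predicted = (50, 50, 150, 150)     # hypothetical model output
ground_truth = (60, 60, 160, 160)  # hypothetical human annotation
print(round(iou(predicted, ground_truth), 3))  # → 0.681
```

A common evaluation rule treats IoU >= 0.5 as a correct detection; aggregating these matches over the labeled frames yields the accuracy metrics used to compare perception models.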