Precision and recall are two essential metrics for evaluating the performance of classification models, particularly when the class distribution is imbalanced. Precision measures the accuracy of positive predictions, defined as the ratio of true positives to the sum of true positives and false positives. In contrast, recall (also known as sensitivity) quantifies the ability of a model to identify all relevant instances, calculated as the ratio of true positives to the sum of true positives and false negatives. Understanding the trade-off between precision and recall is crucial, as optimizing for one often comes at the expense of the other: lowering a model's decision threshold typically raises recall while admitting more false positives, and raising it does the reverse. In many applications, such as medical diagnosis and fraud detection, striking the right balance between these metrics is vital for effective decision-making.
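The definitions above can be sketched in a few lines of Python. This is a minimal illustration, not a reference implementation; the function name and the sample labels are invented for the example, and binary labels with 1 as the positive class are assumed.

```python
def precision_recall(y_true, y_pred):
    """Compute precision and recall for binary labels (1 = positive class)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives
    # Guard against division by zero when a model predicts no positives
    # (precision undefined) or the data contains no positives (recall undefined).
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Hypothetical labels: 3 true positives, 2 false positives, 1 false negative.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 1, 1, 0, 0, 1, 0, 1]
p, r = precision_recall(y_true, y_pred)
print(p, r)  # 0.6 0.75
```

In practice one would use a tested library routine (for example, scikit-learn's `precision_score` and `recall_score`), but writing the counts out makes the trade-off explicit: converting the false negative into a prediction of 1 would raise recall, while any accompanying false positives would pull precision down.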