Predictive Parity

Predictive Parity is a group fairness metric that focuses on ensuring that for individuals who receive a positive prediction from the AI system, the likelihood of that prediction being correct is the same across different demographic groups. In other words, it aims to equalize the positive predictive value (PPV) across different protected groups. This metric is particularly relevant in applications where a positive prediction leads to some form of intervention or opportunity, and we want to ensure that these interventions are equally reliable across different populations.

The core idea behind predictive parity is that if an AI system predicts a positive outcome for someone, we want the probability of that prediction being accurate to be consistent, regardless of the individual's group membership. This addresses the concern that a positive prediction might be more reliable for certain groups than others, potentially leading to unfair or inefficient resource allocation.

Mathematically, let \(Y\) be the ground-truth outcome (1 for positive, 0 for negative), \(\hat{Y}\) the predicted outcome (1 for positive, 0 for negative), and \(A\) the sensitive attribute encoding demographic group membership. Predictive parity holds if the positive predictive value \(P(Y=1 \mid \hat{Y}=1, A=a)\) is the same for every value \(a\) of \(A\).
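As a minimal sketch of this definition, the per-group PPV can be estimated directly from labeled predictions; the function and variable names below are illustrative, not from any particular library:

```python
import numpy as np

def ppv_by_group(y_true, y_pred, groups):
    """Estimate P(Y=1 | Y_hat=1, A=a) for each group label a.

    y_true, y_pred: 0/1 arrays; groups: array of group labels.
    """
    ppv = {}
    for a in np.unique(groups):
        # Select individuals in group a who received a positive prediction
        predicted_positive = (groups == a) & (y_pred == 1)
        # PPV is the fraction of those predicted positives that are truly positive
        if predicted_positive.any():
            ppv[a] = float(y_true[predicted_positive].mean())
        else:
            ppv[a] = float("nan")  # no positive predictions for this group
    return ppv
```

Comparing the returned values across groups is then a direct check of the definition: predictive parity asks that they all coincide.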

Consider a system that predicts individuals who are likely to benefit from a particular educational program (\(\hat{Y}=1\)). Predictive parity would mean that among all the individuals predicted to benefit, the proportion who actually do benefit (\(Y=1\)) should be the same across different racial or socioeconomic groups (\(A\)). If the PPV is lower for a particular group, it suggests that the positive predictions are less reliable for that group, potentially leading to wasted resources or ineffective interventions.
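In practice, the comparison described above is usually made with a tolerance, since exact equality of PPVs is rarely attainable on finite samples. A simple sketch of such a check (the 0.05 tolerance is an assumption for illustration, not a standard threshold):

```python
def predictive_parity_gap(ppv):
    """Largest PPV difference between any two groups (NaN entries ignored)."""
    vals = [v for v in ppv.values() if v == v]  # v == v filters out NaN
    return max(vals) - min(vals)

def satisfies_predictive_parity(ppv, tol=0.05):
    """True when all groups' PPVs agree within the tolerance `tol`."""
    return predictive_parity_gap(ppv) <= tol
```

For instance, PPVs of 0.81 and 0.79 would pass this check, while 0.90 versus 0.60 would flag the kind of reliability gap described in the educational-program example.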

Unlike statistical parity, which equalizes the rate of positive predictions, predictive parity targets the quality, or reliability, of those positive predictions: it conditions on the predicted outcome (\(\hat{Y}=1\)) rather than on the ground-truth outcome (as equal opportunity does) or on group membership alone.

However, predictive parity does not address potential disparities in negative predictive value (NPV), true negative rate (specificity), or false negative rate. A model can satisfy predictive parity while still having different rates of correctly identifying negative cases or incorrectly classifying positive cases as negative across different groups. Therefore, like other fairness metrics, predictive parity provides only one lens through which to evaluate the fairness of an AI system.
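A small synthetic illustration of this point (the data are invented for demonstration): the two groups below have identical PPV, yet their NPVs differ, so predictive parity holds while negative predictions are less reliable for one group.

```python
def ppv_npv(pairs):
    """Compute (PPV, NPV) from a list of (y_true, y_pred) pairs for one group."""
    pred_pos = [y for y, yhat in pairs if yhat == 1]
    pred_neg = [y for y, yhat in pairs if yhat == 0]
    # PPV: fraction of predicted positives that are truly positive
    ppv = sum(pred_pos) / len(pred_pos)
    # NPV: fraction of predicted negatives that are truly negative
    npv = sum(1 - y for y in pred_neg) / len(pred_neg)
    return ppv, npv

group_a = [(1, 1), (0, 1), (0, 0), (0, 0)]  # PPV = 0.5, NPV = 1.0
group_b = [(1, 1), (0, 1), (0, 0), (1, 0)]  # PPV = 0.5, NPV = 0.5
```

Here group_b's negative predictions are wrong half the time even though its positive predictions are exactly as reliable as group_a's.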

The IBM AI Fairness 360 toolkit includes predictive parity as a metric to assess the fairness of models, particularly in scenarios where the focus is on the reliability of positive predictions. Understanding and considering predictive parity, alongside other fairness metrics, is crucial for developing AI systems that are not only accurate but also fair in the impact and reliability of their positive predictions across diverse populations.

[Image] "Predictive Parity asks: when our AI offers a positive prediction, is that prediction equally reliable, equally true, for everyone?" - AI Alchemy Hub
Tags:
  • Predictive Parity
  • Group Fairness
  • Positive Predictive Value
  • PPV
  • Conditional Fairness
  • AI Fairness
Last Updated: May 06, 2025 21:10:32