
Improving fairness in personalized AI models

Amy Sprague
April 23, 2024

New "fair collaborative learning" framework aims to improve both personalization and fairness in AI models simultaneously.

Artificial intelligence and machine learning are revolutionizing how we make decisions, with models being developed to provide personalized predictions and recommendations for individuals. However, a major challenge is ensuring these personalized models treat everyone fairly.

Feng (Ryan) Lin

Enter the research of ISE PhD student Feng (Ryan) Lin. Lin’s paper, "Fair Collaborative Learning (FairCL): A Method to Improve Fairness amid Personalization," was selected as a finalist in the 2023 INFORMS QSR Best Paper Competition for developing a new framework, fair collaborative learning (FairCL), that can improve both the accuracy and fairness of personalized models.

"For model personalization where each individual has their own data and needs their own model, there are two traditional approaches," Lin explained. "Individualized learning trains models separately on each person's data, but when data is limited, these individual models can perform poorly. The opposite extreme is a one-size-fits-all solution that pools everyone's data into a single model, ignoring individual differences."

Collaborative Learning Explained

Collaborative learning (CL) takes a middle-ground approach between individualized learning, which builds completely separate models for each person, and a one-size-fits-all pooled model. It first identifies a set of "canonical" patterns or models in the data, such as the progression stages of Alzheimer’s disease (normal aging, mild cognitive impairment, Alzheimer's disease) or groupings by socio-economic status or level of education. Each individual's personalized model is then characterized as a combination of these canonical models.
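As a concrete illustration of that idea, the sketch below builds each person's model as a weighted mix of a few shared canonical models. It is a minimal, hypothetical example with random placeholder numbers, not the implementation from Lin's paper; the linear-model parameters and the convex-combination (membership) weights are assumptions made only for illustration.

```python
# Minimal sketch of the collaborative-learning idea (illustrative, not the paper's code):
# each person's model is a weighted combination of a small set of shared "canonical" models.
import numpy as np

rng = np.random.default_rng(0)

n_individuals, n_features, n_canonical = 5, 4, 3

# Canonical model parameters (e.g., one per progression pattern); random placeholders here.
canonical_models = rng.normal(size=(n_canonical, n_features))

# Membership weights: how strongly each individual resembles each canonical pattern.
# Each row sums to 1, so every personalized model is a convex combination of the canonical ones.
memberships = rng.dirichlet(alpha=np.ones(n_canonical), size=n_individuals)

# Each individual's personalized model = membership-weighted mix of canonical models.
personalized_models = memberships @ canonical_models  # shape: (n_individuals, n_features)

print(personalized_models.shape)  # (5, 4)
```

In this picture, individuals with limited data still get a usable personalized model, because most of the information comes from the shared canonical models rather than from their own small dataset.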

"Collaborative learning can improve fairness compared to the extremes of individual or pooled models," Lin said. "However, it doesn't actively consider fairness constraints, which our FairCL framework aims to address."

FairCL incorporates various mathematical definitions of fairness used in the AI ethics literature, such as ensuring that similar individuals receive similar treatment. Among many possibilities, Lin found "bounded individual loss" to be a practical way to define fairness amid the complex formulation of personalized model optimization.
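One simple way to read "bounded individual loss" is as a constraint that no individual's prediction error may exceed a shared bound. The sketch below expresses that reading; the squared-error loss, the bound epsilon, and the function names are illustrative assumptions, not the exact formulation used in FairCL.

```python
# Illustrative reading of "bounded individual loss" as a per-person constraint
# (the loss function, epsilon, and names here are assumptions, not FairCL's exact formulation).
import numpy as np

def individual_loss(theta_i, X_i, y_i):
    """Mean squared error of individual i's personalized linear model (assumed loss)."""
    return np.mean((X_i @ theta_i - y_i) ** 2)

def fairness_violations(thetas, data, epsilon=1.0):
    """Return how much each individual's loss exceeds the shared bound epsilon.

    The fairness requirement, in this reading, is individual_loss_i <= epsilon for every i,
    so no single person's model is allowed to perform much worse than everyone else's.
    """
    return [max(0.0, individual_loss(theta_i, X_i, y_i) - epsilon)
            for theta_i, (X_i, y_i) in zip(thetas, data)]
```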

To solve this optimization problem, Lin developed a novel "self-adaptive" algorithm. "It ensures consistency between fairness constraints and the feasibility of the personalized models while reducing computational costs," he explained.
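The article does not spell out the algorithm itself, but one hypothetical flavor of "self-adaptive" behavior is sketched below: if the current fairness bound cannot be met by any feasible set of personalized models, it is relaxed slightly; otherwise it is gradually tightened. This is purely an illustration of keeping the fairness constraint consistent with model feasibility, not Lin's actual method.

```python
# Hypothetical illustration (not the paper's algorithm) of self-adapting a fairness bound.
def adapt_bound(epsilon, individual_losses, slack=1.05, shrink=0.99):
    """Adjust the shared loss bound so the fairness constraint stays feasible.

    If some individual's loss exceeds epsilon, relax the bound just past the worst loss;
    otherwise tighten it slightly to push the next round of training toward fairer models.
    """
    worst = max(individual_losses)
    if worst > epsilon:
        return worst * slack   # constraint currently infeasible: relax the bound
    return epsilon * shrink    # all individuals within the bound: tighten it a little
```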

Lin's faculty adviser and co-author, Professor Shuai Huang, highlighted the novelty of the work: "Ryan's FairCL framework bridges an important gap between collaborative learning for personalization and fairness in machine learning models. His self-adaptive algorithm is a key innovation."

The potential applications span any scenario requiring personalized prediction and fair treatment, such as healthcare, transportation, education, and financial services. As AI pervades decision-making across domains, Lin's research offers a promising approach to developing personalized models that are not only accurate but also fair.