Dr. Vishnu Boddeti has been awarded a $530K NSF grant
https://www.nsf.gov/awardsearch/showAward?AWD_ID=2147116
Abstract:
Artificial intelligence-based computer systems are increasingly reliant on effective information representation to support decision making in domains ranging from image recognition to identity verification through face recognition. However, systems that rely on traditional statistics and prediction from historical or human-curated data naturally inherit any biased or discriminatory tendencies present in that data. The overarching goal of the award is to mitigate this problem by using information representations that maintain their utility while eliminating information that could lead to discrimination against subgroups in a population. Specifically, this project will study the trade-offs between the utility and fairness of data representations and identify solutions that close the gap to the best achievable trade-off. New representations and corresponding algorithms will then be developed, guided by this trade-off analysis. The investigators will derive performance limits from the developed theory and provide evidence of efficacy, with the goals of building fair machine learning systems and earning societal trust. The application domain for this research is face recognition systems. The undergraduate and graduate students who participate in the project will be trained to conduct cutting-edge research on integrating fairness into artificial intelligence-based systems.
The research agenda of this project is centered on answering two questions about learning fair representations: (i) What are the fundamental trade-offs between the utility and fairness of data representations? (ii) How can practical fair representation learning algorithms be devised that mitigate bias in machine learning systems and provably achieve the theoretical utility-fairness trade-offs? To answer the first question, the project will theoretically elucidate and empirically quantify the trade-offs between utility and fairness under different fairness definitions, such as demographic parity, equalized odds, and equality of opportunity. To answer the second question, the project will develop representation learning algorithms that (a) are analytically tractable and provably fair, (b) mitigate worst-case bias rather than average bias over instances or demographic groups, (c) remain fair when demographic information is only partially known or fully unknown, and (d) mitigate demographic bias arising from imbalances in both samples and features, through optimal data sampling and projection.
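For readers unfamiliar with the fairness definitions named above, the following minimal sketch (not from the project; all function and variable names are illustrative) computes the three corresponding group-fairness gaps for a binary classifier and a binary protected attribute, using only NumPy:

    import numpy as np

    def demographic_parity_gap(y_pred, group):
        # |P(Yhat=1 | A=0) - P(Yhat=1 | A=1)|: positive-prediction
        # rates should match across groups.
        rates = [y_pred[group == g].mean() for g in (0, 1)]
        return abs(rates[0] - rates[1])

    def equal_opportunity_gap(y_true, y_pred, group):
        # |TPR_0 - TPR_1|: true-positive rates should match across groups.
        tprs = [y_pred[(group == g) & (y_true == 1)].mean() for g in (0, 1)]
        return abs(tprs[0] - tprs[1])

    def equalized_odds_gap(y_true, y_pred, group):
        # max over y in {0,1} of |P(Yhat=1 | A=0, Y=y) - P(Yhat=1 | A=1, Y=y)|:
        # both true- and false-positive rates should match across groups.
        gaps = []
        for y in (0, 1):
            rates = [y_pred[(group == g) & (y_true == y)].mean() for g in (0, 1)]
            gaps.append(abs(rates[0] - rates[1]))
        return max(gaps)

    # Toy example on synthetic data (illustrative only).
    rng = np.random.default_rng(0)
    n = 10_000
    group = rng.integers(0, 2, n)    # binary protected attribute A
    y_true = rng.integers(0, 2, n)   # binary label Y
    # A deliberately biased classifier: higher positive rate for group 1.
    y_pred = (rng.random(n) < 0.5 + 0.1 * group).astype(int)

    print("demographic parity gap:", demographic_parity_gap(y_pred, group))
    print("equal opportunity gap: ", equal_opportunity_gap(y_true, y_pred, group))
    print("equalized odds gap:    ", equalized_odds_gap(y_true, y_pred, group))

A representation that is perfectly fair under a given definition drives the corresponding gap to zero; the project's trade-off analysis asks how small such gaps can be made without sacrificing predictive utility.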
(Date Posted: 2022-02-18)