This workshop on bias and fairness of biometric systems complements the main ICPR 2022 conference and its tracks 1, 2, 3, and 4. Although Track 4 of the main conference covers biometrics and human-machine interaction, the topic of bias and fairness in biometric systems warrants a dedicated session of its own.
With recent advances in deep learning achieving hallmark accuracy rates in various computer vision applications, biometrics has become a widely adopted technology for identity recognition, surveillance, border control, and mobile user authentication. Over the last few years, however, the fairness of these automated biometric recognition and attribute classification methods across demographic groups has been questioned by articles in the popular press as well as by academic and industry research. In particular, facial analysis technology has been reported to be biased against darker-skinned people, such as African Americans, and against women. These findings have led to bans on government use of facial recognition technology in some jurisdictions. Beyond facial analysis, bias has also been reported for other biometric modalities, such as ocular and fingerprint recognition, and for other AI systems operating on biometric images, such as face morphing attack detection algorithms.

Despite existing work in this field, the state of the art is still in its initial stages. There is a pressing need to examine the bias of existing biometric modalities and to develop advanced methods for mitigating bias in existing biometric systems. This workshop provides a forum for addressing recent advances and challenges in the field. The expected outcomes are to increase awareness of demographic effects, to highlight recent advances, and to provide a common ground for discussion among academia, industry, and government.