Angelina Wang FPO
Location:
The members of her committee are as follows:
Examiners:
Readers:
A copy of her thesis is available upon request.
Everyone is invited to attend.
Abstract follows below:
With the widespread proliferation of machine learning, there arises both the opportunity for societal benefit and the risk of harm. Approaching responsible machine learning is challenging because technical approaches may prioritize a mathematical definition of fairness that, through too many layers of abstraction, correlates poorly with real-world constructs of fairness. Conversely, social approaches that engage with prescriptive theories may produce findings that are too abstract to translate effectively into practice. In my research, I bridge these approaches and use social implications to guide technical work. I will discuss three research directions that show how, despite the technically convenient approach of considering equality acontextually, a stronger engagement with societal context allows us to operationalize a more equitable formulation. First, I will introduce a dataset tool that we developed to analyze complex, socially grounded forms of visual bias. Then, I will provide empirical evidence for how we should incorporate societal context when bringing intersectionality into machine learning. Finally, I will discuss how, in the excitement of using LLMs for tasks like human participant replacement, we have neglected to consider the importance of human positionality. Overall, I will explore how we can expand a narrow focus on equality in responsible machine learning to encompass a broader understanding of equity that substantively engages with societal context.