Workshop on Gender Bias in Natural Language Processing

The 6th Workshop on Gender Bias in Natural Language Processing at ACL 2025.

Keynotes

    Anne Lauscher

    University of Hamburg

    Title: TBD

    Abstract

    TBD


    Maarten Sap

    Carnegie Mellon University (CMU) & Allen Institute for AI (AI2)

Title: Responsible AI for Diverse Users and Cultures

    Abstract

AI systems and language technologies are increasingly developed for and deployed to users of diverse genders and cultures. Yet they still lack contextual and cultural awareness, and are unilaterally pushed onto many users who do not necessarily want them. In this talk, I will discuss several ongoing projects towards responsible AI development for diverse users and cultures. I will first present the CobraFrames formalism, a method for enhancing models’ reasoning about offensive speech by grounding it in social contexts such as speaker and listener identities. Then, I will discuss MC-Signs, a novel benchmark for measuring the cultural awareness of multimodal AI systems with respect to culturally offensive gestures. Finally, I will present a study on AI acceptability showing that lay people’s opinions about when and where AI should be used vary depending on their gender, AI literacy, and more. I will conclude with some future directions towards responsible and prosocial AI.