Workshop on Gender Bias in Natural Language Processing


The 6th Workshop on Gender Bias in Natural Language Processing at ACL 2025.

Accepted Papers

Presentation Mode | Title | Authors
Oral Presentation | Introducing MARB — A Dataset for Studying the Social Dimensions of Reporting Bias in Language Models | Tom Södahl Bladsjö, Ricardo Muñoz Sánchez
Oral Presentation | GENDEROUS: Machine Translation and Cross-Linguistic Evaluation of a Gender-Ambiguous Dataset | Janiça Hackenbuchner, Joke Daems, Eleni Gkovedarou
Poster I | Towards Massive Multilingual Holistic Bias | Xiaoqing Tan, Prangthip Hansanti, Arina Turkatenko, Joe Chuang, Carleigh Wood, Bokai YU, Christophe Ropers, Marta R. Costa-jussà
Poster I | Detecting Bias and Intersectional Bias in Italian Word Embeddings and Language Models | Alexandre Puttick, Mascha Kurpicz-Briki
Poster I | Sports and Women’s Sports: Gender Bias in Text Generation with Olympic Data | Laura Biester
Poster I | Power(ful) Associations: Rethinking “Stereotype” for NLP | Hannah Devinney
Poster I | Exploring Gender Bias in Large Language Models: An In-depth Dive into the German Language | Kristin Gnadt, David Thulke, Simone Kopeinik, Ralf Schlüter
Poster I | Adapting Psycholinguistic Research for LLMs: Gender-inclusive Language in a Coreference Context | Marion Bartl, Thomas Brendan Murphy, Susan Leavy
Poster I | Gender Bias in Nepali-English Machine Translation: A Comparison of LLMs and Existing MT Systems | Supriya Khadka, Bijayan Bhattarai
Poster I | Mind the Gap: Gender-based Differences in Occupational Embeddings | Olga Kononykhina, Anna-Carolina Haensch, Frauke Kreuter
Poster I | Bias Beyond English: Evaluating Social Bias and Debiasing Methods in a Low-Resource Setting | Ej Zhou, Weiming Lu
Poster I | Assessing the Reliability of LLMs Annotations in the Context of Demographic Bias and Model Explanation | Hadi Mohammadi, Tina Shahedi, Pablo Mosteiro, Massimo Poesio, Ayoub Bagheri, Anastasia Giachanou
Poster II | Leveraging Large Language Models to Measure Gender Representation Bias in Gendered Language Corpora | Erik Derner, Sara Sansalvador de la Fuente, Yoan Gutierrez, Paloma Moreda Pozo, Nuria M Oliver
Poster II | Strengths and Limitations of Word-Based Task Explainability in Vision Language Models: a Case Study on Biological Sex Biases in the Medical Domain | Lorenzo Bertolini, Valentin Comte, Victoria Ruiz-Serra, Lia Orfei, Mario Ceresa
Poster II | GG-BBQ: German Gender Bias Benchmark for Question Answering | Shalaka Satheesh, Katrin Klug, Katharina Beckh, Héctor Allende-Cid, Sebastian Houben, Teena Hassan
Poster II | Characterizing non-binary French: A first step towards debiasing gender inference | Marie Flesch, Heather Burnett
Poster II | Can Explicit Gender Information Improve Zero-Shot Machine Translation? | Van-Hien Tran, Huy Hien Vu, Hideki Tanaka, Masao Utiyama
Poster II | Colombian Waitresses y Jueces canadienses: Gender and Country Biases in Occupation Recommendations from LLMs | Elisa Forcada Rodríguez, Jon Ander Campos, Olatz Perez-de-Vinaspre, Dietrich Klakow, Vagrant Gautam
Poster II | Bias Attribution in Filipino Language Models: Extending a Bias Interpretability Metric for Application on Agglutinative Languages | Lance Calvin Lim Gamboa, Yue Feng, Mark G. Lee
Poster II | Surface Fairness, Deep Bias: A Comparative Study of Bias in Language Models | Aleksandra Sorokovikova, Pavel Chizhov, Iuliia Eremenko, Ivan P. Yamshchikov
Poster II | Measuring Gender Bias in the Farsi Language | Hamidreza Saffari, Mohammadamin Shafiei, Donya Rooein, Debora Nozza
Poster II | A Diachronic Analysis of Human and Model Predictions on Audience Gender in How-to Guides | Nicola Fanton, Sidharth Ranjan, Titus von der Malsburg, Michael Roth
Poster II | One Size Fits None: Rethinking Fairness in Medical AI | Roland Roller, Michael Hahn, Ajay Madhavan Ravichandran, Zeineb Sassi, Bilgin Osmanodja, Florian Oetke, Aljoscha Burchardt, Klaus Netter, Anne Herrmann, Klemens Budde, Peter Dabrock, Tobias Strapatsas, Sebastian Möller
Poster III | From Measurement to Mitigation: Exploring the Transferability of Debiasing Approaches to Gender Bias in Maltese Language Models | Melanie Galea, Claudia Borg
Poster III | Some Myths About Bias: A Queer Studies Reading Of Gender Bias In NLP | Filipa Calado
Poster III | GenWriter: Reducing Gender Cues in Biographies through Text Rewriting | Shweta Soundararajan, Sarah Jane Delany
Poster III | Examining the Cultural Encoding of Gender Bias in LLMs for Low-Resourced African Languages | Abigail Oppong, Hellina Hailu Nigatu, Chinasa T. Okolo
Poster III | Ableism, Ageism, Gender, and Nationality Bias in Norwegian and Multilingual Language Models | Martin Sjåvik, Samia Touileb
Poster III | Gender Bias and the Role of Context in Human Perception and Machine Translation | Janiça Hackenbuchner, Arda Tezcan, Joke Daems
Poster III (Findings) | GeNRe: a French Gender-Neutral Rewriting System Using Collective Nouns | Enzo Doyen, Amalia Todirascu
Poster III (Findings) | taz2024full: Analysing German Newspapers for Gender Bias and Discrimination across Decades | Stefanie Urchs, Veronika Thurner, Matthias Aßenmacher, Christian Heumann, Stephanie Thiemichen
Poster III (Findings) | BanStereoSet: A Dataset to Measure Stereotypical Social Biases in LLMs for Bangla | Mahammed Kamruzzaman, Abdullah Al Monsur, Shrabon Kumar Das, Enamul Hassan, Gene Louis Kim
Lightning Talk | JBBQ: Japanese Bias Benchmark for Analyzing Social Biases in Large Language Models | Hitomi Yanaka, Namgi Han, Ryoma Kumon, Lu Jie, Masashi Takeshita, Ryo Sekizawa, Taisei Katô, Hiromi Arai
Lightning Talk | Intersectional Bias in Japanese Large Language Models from Contextualized Perspective | Hitomi Yanaka, Xinqi He, Lu Jie, Namgi Han, Ryoma Kumon, Yuma Matsuoka, Kazuhiko Watabe, Yuko Itatsu
Lightning Talk | Disentangling Biased Representations: A Causal Intervention Framework for Fairer NLP Models | Yangge Qian, Yilong Hu, Siqi Zhang, Xu Gu, Xiaolin Qin
Lightning Talk | Fine-Tuning vs Prompting Techniques for Gender-Fair Rewriting of Machine Translations | Paolo Mainardi, Federico Garcea, Alberto Barrón-Cedeño
Lightning Talk | ArGAN: Arabic Gender, Ability, and Nationality Dataset for Evaluating Biases in Large Language Models | Ranwa Aly, Yara Allam, Rana Gaber, Christine Basta
Lightning Talk | Assessing Gender Bias of Pretrained Bangla Language Models in STEM and SHAPE Fields | Noor Mairukh Khan Arnob, Saiyara Mahmud, Azmine Toushik Wasi
Lightning Talk | WoNBias: A Dataset for Classifying Bias & Prejudice Against Women in Bengali Text | Raisul Islam Aupi, Nishat Tafannum, Shahidur Rahman, Kh Mahmudul Hassan, Naimur Rahman
Lightning Talk | LLMs Exhibit Implicit Biases in Predicting Patients’ Gender from Clinical Conversations | Naveen Jafer Nizar, Swetasudha Panda, Daeja Oxendine, Qinlan Shen, Sumana Srivatsa, Krishnaram Kenthapadi
Lightning Talk | Wanted: Personalised Bias Warnings for Gender Bias in Language Models | Chiara Di Bonaventura, Michelle Nwachukwu, Maria Stoica
Lightning Talk (Findings) | Translate With Care: Addressing Gender Bias, Neutrality, and Reasoning in Large Language Model Translations | Pardis Sadat Zahraei, Ali Emami
Lightning Talk (Findings) | Biases Propagate in Encoder-based Vision-Language Models: A Systematic Analysis From Intrinsic Measures to Zero-shot Retrieval Outcomes | Kshitish Ghate, Tessa Charlesworth, Mona T. Diab, Aylin Caliskan