
Societal Biases in Language Generation: Progress and Challenges

Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, and Nanyun Peng, in Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics (ACL), 2021.

Download the full text


Abstract

Technology for language generation has advanced rapidly, spurred by advancements in pre-training large models on massive amounts of data and the need for intelligent agents to communicate in a natural manner. While techniques can effectively generate fluent text, they can also produce undesirable societal biases that can have a disproportionately negative impact on marginalized populations. Language generation presents unique challenges for biases in terms of direct user interaction and the structure of decoding techniques. To better understand these challenges, we present a survey on societal biases in language generation, focusing on how data and techniques contribute to biases and progress towards reducing biases. Motivated by a lack of studies on biases from decoding techniques, we also conduct experiments to quantify the effects of these techniques. By further discussing general trends and open challenges, we call to attention promising directions for research and the importance of fairness and inclusivity considerations for language generation applications.
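
As a toy illustration of the kind of decoding experiment described above, the sketch below generates continuations for demographic prompts (in the style of "The Woman Worked as a Babysitter," listed under Related Publications) with greedy, top-k, and nucleus decoding, then scores them with an off-the-shelf sentiment pipeline as a crude stand-in for a bias measure such as regard. The model, prompts, sample sizes, and scorer are illustrative assumptions, not the paper's exact experimental setup.

# Hedged sketch: compare decoding techniques on demographic prompts.
# Assumptions (not from the paper): GPT-2 as the generator, a generic
# sentiment pipeline as the bias scorer, and two illustrative prompts.
from transformers import pipeline, set_seed

set_seed(0)
generator = pipeline("text-generation", model="gpt2")
scorer = pipeline("sentiment-analysis")  # stand-in for a regard-style classifier

prompts = ["The woman worked as", "The man worked as"]
decoding_configs = {
    "greedy":  dict(do_sample=False, num_return_sequences=1),
    "top-k":   dict(do_sample=True, top_k=40, num_return_sequences=5),
    "nucleus": dict(do_sample=True, top_p=0.9, num_return_sequences=5),
}

for name, cfg in decoding_configs.items():
    for prompt in prompts:
        outs = generator(prompt, max_new_tokens=20, pad_token_id=50256, **cfg)
        scores = scorer([o["generated_text"] for o in outs])
        pos = sum(s["label"] == "POSITIVE" for s in scores) / len(scores)
        print(f"{name:8s} | {prompt!r}: {pos:.0%} positive continuations")

Gaps in such scores across demographic prompts, and how those gaps shift as k or p changes, are the kind of decoding-induced effects the survey's experiments measure.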


Bib Entry

@inproceedings{sheng2021societal,
  title = {Societal Biases in Language Generation: Progress and Challenges},
  author = {Sheng, Emily and Chang, Kai-Wei and Natarajan, Premkumar and Peng, Nanyun},
  booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics (ACL)},
  year = {2021}
}

Related Publications

  • Mitigating Bias for Question Answering Models by Tracking Bias Influence

    Mingyu Derek Ma, Jiun-Yu Kao, Arpit Gupta, Yu-Hsiang Lin, Wenbo Zhao, Tagyoung Chung, Wei Wang, Kai-Wei Chang, and Nanyun Peng, in Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), 2024.
    BibTeX
    @inproceedings{ma2024bias,
      title = {Mitigating Bias for Question Answering Models by Tracking Bias Influence},
      author = {Ma, Mingyu Derek and Kao, Jiun-Yu and Gupta, Arpit and Lin, Yu-Hsiang and Zhao, Wenbo and Chung, Tagyoung and Wang, Wei and Chang, Kai-Wei and Peng, Nanyun},
      booktitle = {Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)},
      year = {2024}
    }
    
  • Are Personalized Stochastic Parrots More Dangerous? Evaluating Persona Biases in Dialogue Systems

    Yixin Wan, Jieyu Zhao, Aman Chadha, Nanyun Peng, and Kai-Wei Chang, in Findings of the 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP-Findings), 2023.
    Full Text BibTeX
    @inproceedings{wan2023personalized,
      title = {Are Personalized Stochastic Parrots More Dangerous? Evaluating Persona Biases in Dialogue Systems},
      author = {Wan, Yixin and Zhao, Jieyu and Chadha, Aman and Peng, Nanyun and Chang, Kai-Wei},
      booktitle = {Findings of the 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP-Findings)},
      year = {2023}
    }
    
  • Are Fairy Tales Fair? Analyzing Gender Bias in Temporal Narrative Event Chains of Children’s Fairy Tales

    Paulina Toro Isaza, Guangxuan Xu, Toye Oloko, Yufang Hou, Nanyun Peng, and Dakuo Wang, in Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL), 2023.
    Full Text BibTeX
    @inproceedings{isaza2023fairytales,
      title = {Are Fairy Tales Fair? Analyzing Gender Bias in Temporal Narrative Event Chains of Children's Fairy Tales},
      author = {Isaza, Paulina Toro and Xu, Guangxuan and Oloko, Toye and Hou, Yufang and Peng, Nanyun and Wang, Dakuo},
      booktitle = {Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL)},
      year = {2023}
    }
    
  • Towards Robust NLG Evaluation with Syntactically-diverse Prompts

    Arshiya Aggarwal, Jiao Sun, and Nanyun Peng, in Findings of the Association for Computational Linguistics: EMNLP (EMNLP-Findings), 2022.
    Full Text BibTeX
    @inproceedings{aggarwal2022towards,
      title = {Towards Robust NLG Evaluation with Syntactically-diverse Prompts},
      author = {Aggarwal, Arshiya and Sun, Jiao and Peng, Nanyun},
      booktitle = {Findings of the Association for Computational Linguistics: EMNLP (EMNLP-Findings)},
      year = {2022}
    }
    
  • Socially Aware Bias Measurements for Hindi Language Representations

    Vijit Malik, Sunipa Dev, Akihiro Nishi, Nanyun Peng, and Kai-Wei Chang, in Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), short, 2022.
    Full Text BibTeX
    @inproceedings{malik2022socially,
      title = {Socially Aware Bias Measurements for Hindi Language Representations},
      author = {Malik, Vijit and Dev, Sunipa and Nishi, Akihiro and Peng, Nanyun and Chang, Kai-Wei},
      booktitle = {Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), short},
      year = {2022}
    }
    
  • Men Are Elected, Women Are Married: Events Gender Bias on Wikipedia

    Jiao Sun and Nanyun Peng, in Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics (ACL), 2021.
    Full Text Code Abstract BibTeX
    Human activities can be seen as sequences of events, which are crucial to understanding societies. Disproportional event distribution for different demographic groups can manifest and amplify social stereotypes, and potentially jeopardize the ability of members in some groups to pursue certain goals. In this paper, we present the first event-centric study of gender biases in a Wikipedia corpus. To facilitate the study, we curate a corpus of career and personal life descriptions with demographic information consisting of 7,854 fragments from 10,412 celebrities. Then we detect events with a state-of-the-art event detection model, calibrate the results using strategically generated templates, and extract events that have asymmetric associations with genders. Our study discovers that Wikipedia pages tend to intermingle personal life events with professional events for females but not for males, which calls for the awareness of the Wikipedia community to formalize guidelines and train the editors to mind the implicit biases that contributors carry. Our work also lays the foundation for future works on quantifying and discovering event biases at the corpus level.
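    (A toy numeric sketch of this asymmetric-association step appears after the publication list below.)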
    @inproceedings{sun2021men,
      title = {Men Are Elected, Women Are Married: Events Gender Bias on Wikipedia},
      author = {Sun, Jiao and Peng, Nanyun},
      booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics (ACL)},
      year = {2021}
    }
    
  • "Nice Try, Kiddo": Ad Hominems in Dialogue Systems

    Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, and Nanyun Peng, in the 2021 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), 2021.
    Full Text Video Code Abstract BibTeX
    Ad hominem attacks are those that attack some feature of a person’s character instead of the position the person is maintaining. As a form of toxic and abusive language, ad hominems contain harmful language that could further amplify the skew of power inequality for marginalized populations. Since dialogue systems are designed to respond directly to user input, it is important to study ad hominems in these system responses. In this work, we propose categories of ad hominems that allow us to analyze human and dialogue system responses to Twitter posts. We specifically compare responses to Twitter posts about marginalized communities (#BlackLivesMatter, #MeToo) and other topics (#Vegan, #WFH). Furthermore, we propose a constrained decoding technique that uses salient n-gram similarity to apply soft constraints to top-k sampling and can decrease the amount of ad hominems generated by dialogue systems. Our results indicate that 1) responses composed by both humans and DialoGPT contain more ad hominems for discussions around marginalized communities versus other topics, 2) different amounts of ad hominems in the training data can influence the likelihood of the model generating ad hominems, and 3) we can thus carefully choose training data and use constrained decoding techniques to decrease the amount of ad hominems generated by dialogue systems.
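    (A hedged sketch of this kind of soft-constrained top-k decoding appears after the publication list below.)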
    @inproceedings{sheng2021nice,
      title = {"Nice Try, Kiddo": Ad Hominems in Dialogue Systems},
      author = {Sheng, Emily and Chang, Kai-Wei and Natarajan, Premkumar and Peng, Nanyun},
      booktitle = {The 2021 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)},
      publisher = {Association for Computational Linguistics},
      pages = {750--767},
      year = {2021}
    }
    
  • Towards Controllable Biases in Language Generation

    Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, and Nanyun Peng, in Findings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP-Findings), long, 2020.
    Full Text Poster Code Abstract BibTeX
    We present a general approach towards controllable societal biases in natural language generation (NLG). Building upon the idea of adversarial triggers, we develop a method to induce societal biases in generated text when input prompts contain mentions of specific demographic groups. We then analyze two scenarios: 1) inducing negative biases for one demographic and positive biases for another demographic, and 2) equalizing biases between demographics. The former scenario enables us to detect the types of biases present in the model. Specifically, we show the effectiveness of our approach at facilitating bias analysis by finding topics that correspond to demographic inequalities in generated text and comparing the relative effectiveness of inducing biases for different demographics. The second scenario is useful for mitigating biases in downstream applications such as dialogue generation. In our experiments, the mitigation technique proves to be effective at equalizing the amount of biases across demographics while simultaneously generating less negatively biased text overall.
    @inproceedings{sheng2020towards,
      title = {Towards Controllable Biases in Language Generation},
      author = {Sheng, Emily and Chang, Kai-Wei and Natarajan, Premkumar and Peng, Nanyun},
      booktitle = {Findings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP-Findings), long},
      year = {2020}
    }
    
  • Man is to person as woman is to location: Measuring gender bias in named entity recognition

    Ninareh Mehrabi, Thamme Gowda, Fred Morstatter, Nanyun Peng, and Aram Galstyan, in the 31st ACM Conference on Hypertext and Social Media (HT’20), 2020.
    Full Text BibTeX
    @inproceedings{mehrabi2020man,
      title = {Man is to person as woman is to location: Measuring gender bias in named entity recognition},
      author = {Mehrabi, Ninareh and Gowda, Thamme and Morstatter, Fred and Peng, Nanyun and Galstyan, Aram},
      booktitle = {31st ACM Conference on Hypertext and Social Media (HT’20)},
      year = {2020}
    }
    
  • The Woman Worked as a Babysitter: On Biases in Language Generation

    Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, and Nanyun Peng, in the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP), short, 2019.
    Full Text BibTeX
    @inproceedings{sheng2019woman,
      title = {The Woman Worked as a Babysitter: On Biases in Language Generation},
      author = {Sheng, Emily and Chang, Kai-Wei and Natarajan, Premkumar and Peng, Nanyun},
      booktitle = {2019 Conference on Empirical Methods in Natural Language Processing (EMNLP), short},
      year = {2019}
    }
    
  • Debiasing Community Detection: The Importance of Lowly-Connected Nodes

    Ninareh Mehrabi, Fred Morstatter, Nanyun Peng, and Aram Galstyan, in the 2019 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM 2019), 2019.
    Full Text BibTeX
    @inproceedings{mehrabi2019debiasing,
      title = {Debiasing Community Detection: The Importance of Lowly-Connected Nodes},
      author = {Mehrabi, Ninareh and Morstatter, Fred and Peng, Nanyun and Galstyan, Aram},
      booktitle = {The 2019 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM 2019)},
      year = {2019}
    }
    
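
Two editorial sketches referenced in the entries above follow.

First, for "Men Are Elected, Women Are Married": a toy illustration of ranking events by how asymmetrically they associate with gender. The event names, counts, and add-alpha smoothing here are fabricated stand-ins; the paper derives its statistics from a detection model calibrated with generated templates.

import math

# Hypothetical counts: event -> (mentions on pages about women, about men).
counts = {"married": (200, 80), "elected": (40, 150), "graduated": (90, 95)}
f_total = sum(f for f, _ in counts.values())
m_total = sum(m for _, m in counts.values())

def odds_ratio(f, m, alpha=0.5):
    # Add-alpha smoothed odds of an event appearing for women vs. men.
    return ((f + alpha) / (f_total - f + alpha)) / ((m + alpha) / (m_total - m + alpha))

for event, (f, m) in sorted(counts.items(), key=lambda kv: -odds_ratio(*kv[1])):
    print(f"{event:10s} log odds ratio = {math.log(odds_ratio(f, m)):+.2f}")

Second, for "Nice Try, Kiddo": a sketch of soft-constrained top-k sampling, where candidate tokens whose trailing bigram matches a salient-phrase list are penalized rather than forbidden. The phrase list, bigram granularity, and penalty weight are illustrative assumptions, not the authors' exact n-gram-similarity formulation.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

# Toy stand-in for salient ad hominem n-grams mined from training data.
phrases = [" you idiot", " so dumb", " an idiot"]
salient_bigrams = {tuple(ids) for ids in map(tok.encode, phrases) if len(ids) == 2}

def constrained_top_k(prompt, steps=20, k=40, penalty=5.0):
    ids = tok.encode(prompt, return_tensors="pt")
    for _ in range(steps):
        with torch.no_grad():
            logits = model(ids).logits[0, -1]
        top = torch.topk(logits, k)
        cand_logits = top.values.clone()
        prev = ids[0, -1].item()
        for i, cand in enumerate(top.indices.tolist()):
            if (prev, cand) in salient_bigrams:  # soft constraint: penalize,
                cand_logits[i] -= penalty        # do not forbid outright
        nxt = top.indices[torch.multinomial(torch.softmax(cand_logits, -1), 1)]
        ids = torch.cat([ids, nxt.view(1, 1)], dim=1)
    return tok.decode(ids[0])

print(constrained_top_k("That is a terrible take,"))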