My Google Scholar

Download the bibfile
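
All of the entries below can be cited straight from the downloaded bibfile in a LaTeX project. A minimal sketch, assuming the file is saved locally as peng.bib (an illustrative filename, not necessarily the actual download name):

    \documentclass{article}
    \begin{document}
    % Cite an entry by its BibTeX key, e.g. the first preprint below:
    Persona biases in dialogue systems are analyzed by \cite{sheng2021revealing}.

    \bibliographystyle{plain}
    \bibliography{peng} % assumes the downloaded bibfile was saved as peng.bib
    \end{document}

Running latex (or pdflatex), then bibtex, then latex twice more resolves the citation and builds the reference list.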


Preprints

  • Revealing Persona Biases in Dialogue Systems

    Emily Sheng, Josh Arnold, Zhou Yu, Kai-Wei Chang, and Nanyun Peng, arXiv preprint arXiv:2104.08728, 2021.
    BibTeX Details
    @article{sheng2021revealing,
      title = {Revealing Persona Biases in Dialogue Systems},
      author = {Sheng, Emily and Arnold, Josh and Yu, Zhou and Chang, Kai-Wei and Peng, Nanyun},
      journal = {arXiv preprint arXiv:2104.08728},
      year = {2021}
    }
    
  • " Don’t quote me on that": Finding Mixtures of Sources in News Articles

    Alexander Spangher, Nanyun Peng, Jonathan May, and Emilio Ferrara, arXiv preprint arXiv:2104.09656, 2021.
    BibTeX Details
    @article{spangher2021don,
      title = {" Don't quote me on that": Finding Mixtures of Sources in News Articles},
      author = {Spangher, Alexander and Peng, Nanyun and May, Jonathan and Ferrara, Emilio},
      journal = {arXiv preprint arXiv:2104.09656},
      year = {2021}
    }
    

2024

  • MacGyver: Are Large Language Models Creative Problem Solvers?

    Yufei Tian, Abhilasha Ravichander, Lianhui Qin, Ronan Le Bras, Raja Marjieh, Nanyun Peng, Yejin Choi, Thomas L. Griffiths, and Faeze Brahman, in Proceedings of the 2024 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), 2024.
    Full Text BibTeX Details
    @inproceedings{tian2024macgyver,
      title = {MacGyver: Are Large Language Models Creative Problem Solvers?},
      author = {Tian, Yufei and Ravichander, Abhilasha and Qin, Lianhui and Le Bras, Ronan and Marjieh, Raja and Peng, Nanyun and Choi, Yejin and Griffiths, Thomas L. and Brahman, Faeze},
      booktitle = {Proceedings of the 2024 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)},
      year = {2024}
    }
    
  • AMRFact: Enhancing Summarization Factuality Evaluation with AMR-Driven Negative Samples Generation

    Haoyi Qiu, Kung-Hsiang Huang, Jingnong Qu, and Nanyun Peng, in Proceedings of the 2024 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), 2024.
    Full Text BibTeX Details
    @inproceedings{qiu2024amrfact,
      title = {AMRFact: Enhancing Summarization Factuality Evaluation with AMR-Driven Negative Samples Generation},
      author = {Qiu, Haoyi and Huang, Kung-Hsiang and Qu, Jingnong and Peng, Nanyun},
      booktitle = {Proceedings of the 2024 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)},
      year = {2024}
    }
    
  • Contextual Label Projection for Cross-Lingual Structured Prediction

    Tanmay Parekh, I.-Hung Hsu, Kuan-Hao Huang, Kai-Wei Chang, and Nanyun Peng, in Proceedings of the 2024 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), 2024.
    BibTeX Details
    @inproceedings{parekh2024clap,
      title = {Contextual Label Projection for Cross-Lingual Structured Prediction},
      author = {Parekh, Tanmay and Hsu, I-Hung and Huang, Kuan-Hao and Chang, Kai-Wei and Peng, Nanyun},
      booktitle = {Proceedings of the 2024 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)},
      year = {2024}
    }
    
  • Event Detection from Social Media for Epidemic Prediction

    Tanmay Parekh, Anh Mac, Jiarui Yu, Yuxuan Dong, Syed Shahriar, Bonnie Liu, Eric J. Yang, Kuan-Hao Huang, Wei Wang, Nanyun Peng, and Kai-Wei Chang, in Proceedings of the 2024 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), 2024.
    BibTeX Details
    @inproceedings{parekh2024pipp,
      title = {Event Detection from Social Media for Epidemic Prediction},
      author = {Parekh, Tanmay and Mac, Anh and Yu, Jiarui and Dong, Yuxuan and Shahriar, Syed and Liu, Bonnie and Yang, Eric J and Huang, Kuan-Hao and Wang, Wei and Peng, Nanyun and Chang, Kai-Wei},
      booktitle = {Proceedings of the 2024 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)},
      year = {2024}
    }
    
  • Mitigating Bias for Question Answering Models by Tracking Bias Influence

    Mingyu Derek Ma, Jiun-Yu Kao, Arpit Gupta, Yu-Hsiang Lin, Wenbo Zhao, Tagyoung Chung, Wei Wang, Kai-Wei Chang, and Nanyun Peng, in Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), 2024.
    BibTeX Details
    @inproceedings{ma2024bias,
      title = {Mitigating Bias for Question Answering Models by Tracking Bias Influence},
      author = {Ma, Mingyu Derek and Kao, Jiun-Yu and Gupta, Arpit and Lin, Yu-Hsiang and Zhao, Wenbo and Chung, Tagyoung and Wang, Wei and Chang, Kai-Wei and Peng, Nanyun},
      booktitle = {Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)},
      year = {2024}
    }
    
  • Human-in-the-Loop Synthetic Text Data Inspection with Provenance Tracking

    Hong Jin Kang*, Fabrice Y. Harel-Canada*, Muhammad Ali Gulzar, Nanyun Peng, and Miryung Kim, in Findings of the 2024 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-Findings), 2024.
    BibTeX Details
    @inproceedings{kang2024hitl,
      title = {Human-in-the-Loop Synthetic Text Data Inspection with Provenance Tracking},
      author = {Kang*, Hong Jin and Harel-Canada*, Fabrice Y and Gulzar, Muhammad Ali and Peng, Nanyun and Kim, Miryung},
      booktitle = {Findings of the 2024 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-Findings)},
      year = {2024}
    }
    
  • RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment

    Kevin Yang, Dan Klein, Asli Celikyilmaz, Nanyun Peng, and Yuandong Tian, in Proceedings of the Twelfth International Conference on Learning Representations (ICLR), 2024.
    Full Text BibTeX Details
    @inproceedings{yang2024rlcd,
      title = {RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment},
      author = {Yang, Kevin and Klein, Dan and Celikyilmaz, Asli and Peng, Nanyun and Tian, Yuandong},
      booktitle = {Proceedings of the Twelfth International Conference on Learning Representations (ICLR)},
      year = {2024}
    }
    
  • STAR: Boosting Low-Resource Information Extraction by Structure-to-Text Data Generation with Large Language Models

    Mingyu Derek Ma, Xiaoxuan Wang, Po-Nien Kung, P. Jeffrey Brantingham, Nanyun Peng, and Wei Wang, in Proceedings of the 38th AAAI Conference on Artificial Intelligence (AAAI), 2024.
    Full Text BibTeX Details
    @inproceedings{ma2024star,
      title = {STAR: Boosting Low-Resource Information Extraction by Structure-to-Text Data Generation with Large Language Models},
      author = {Ma, Mingyu Derek and Wang, Xiaoxuan and Kung, Po-Nien and Brantingham, P. Jeffrey and Peng, Nanyun and Wang, Wei},
      booktitle = {Proceedings of the 38th AAAI Conference on Artificial Intelligence (AAAI)},
      year = {2024}
    }
    
  • MIDDAG: Where Does Our News Go? Investigating Information Diffusion via Community-Level Information Pathways

    Mingyu Derek Ma, Alexander K. Taylor, Nuan Wen, Yanchen Lin, Po-Nien Kung, Wenna Qin, Shicheng Wen, Azure Zhou, Diyi Yang, Xuezhe Ma, Nanyun Peng, and Wei Wang, in Proceedings of the 38th AAAI Conference on Artificial Intelligence (AAAI), Demonstration Track, 2024.
    Full Text BibTeX Details
    @inproceedings{ma2024middag,
      title = {MIDDAG: Where Does Our News Go? Investigating Information Diffusion via Community-Level Information Pathways},
      author = {Ma, Mingyu Derek and Taylor, Alexander K. and Wen, Nuan and Lin, Yanchen and Kung, Po-Nien and Qin, Wenna and Wen, Shicheng and Zhou, Azure and Yang, Diyi and Ma, Xuezhe and Peng, Nanyun and Wang, Wei},
      booktitle = {Proceedings of the 38th AAAI Conference on Artificial Intelligence (AAAI), Demonstration Track},
      year = {2024}
    }
    

2023

  • Harnessing Black-Box Control to Boost Commonsense in LMs’ Generation

    Yufei Tian, Felix Zhang, and Nanyun Peng, in The 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023.
    Full Text BibTeX Details
    @inproceedings{tian2023harnessing,
      title = {Harnessing Black-Box Control to Boost Commonsense in LMs’ Generation},
      author = {Tian, Yufei and Zhang, Felix and Peng, Nanyun},
      booktitle = {The 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
      year = {2023}
    }
    
  • Active Instruction Tuning: Improving Cross-Task Generalization by Training on Prompt Sensitive Tasks

    Po-Nien Kung, Fan Yin, Di Wu, Kai-Wei Chang, and Nanyun Peng, in The 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023.
    Full Text BibTeX Details
    @inproceedings{kung2023active,
      title = {Active Instruction Tuning: Improving Cross-Task Generalization by Training on Prompt Sensitive Tasks},
      author = {Kung, Po-Nien and Yin, Fan and Wu, Di and Chang, Kai-Wei and Peng, Nanyun},
      booktitle = {The 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
      year = {2023}
    }
    
  • Gender Biases in Automatic Evaluation Metrics for Image Captioning

    Haoyi Qiu, Zi-Yi Dou, Tianlu Wang, Asli Celikyilmaz, and Nanyun Peng, in The 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023.
    Full Text BibTeX Details
    @inproceedings{qiu2023gender,
      title = {Gender Biases in Automatic Evaluation Metrics for Image Captioning},
      author = {Qiu, Haoyi and Dou, Zi-Yi and Wang, Tianlu and Celikyilmaz, Asli and Peng, Nanyun},
      booktitle = {The 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
      year = {2023}
    }
    
  • Localizing Active Objects from Egocentric Vision with Symbolic World Knowledge

    Te-Lin Wu*, Yu Zhou*, and Nanyun Peng, in The 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023.
    BibTeX Details
    @inproceedings{wu2023localizing,
      title = {Localizing Active Objects from Egocentric Vision with Symbolic World Knowledge},
      author = {Wu*, Te-Lin and Zhou*, Yu and Peng, Nanyun},
      booktitle = {The 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
      year = {2023}
    }
    
  • ACQUIRED: A Dataset for Answering Counterfactual Questions In Real-Life Videos

    Te-Lin Wu*, Zi-Yi Dou*, Qingyuan Hu*, Yu Hou, Nischal Reddy Chandra, Marjorie Freedman, Ralph Weischedel, and Nanyun Peng, in The 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023.
    Full Text BibTeX Details
    @inproceedings{wu2023acquired,
      title = {ACQUIRED: A Dataset for Answering Counterfactual Questions In Real-Life Videos},
      author = {Wu*, Te-Lin and Dou*, Zi-Yi and Hu*, Qingyuan and Hou, Yu and Chandra, Nischal Reddy and Freedman, Marjorie and Weischedel, Ralph and Peng, Nanyun},
      booktitle = {The 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
      year = {2023}
    }
    
  • Evaluating Large Language Models on Controlled Generation Tasks

    Jiao Sun, Yufei Tian, Wangchunshu Zhou, Nan Xu, Qian Hu, Rahul Gupta, John Frederick Wieting, Nanyun Peng, and Xuezhe Ma, in The 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023.
    Full Text BibTeX Details
    @inproceedings{sun2023eval,
      title = {Evaluating Large Language Models on Controlled Generation Tasks},
      author = {Sun, Jiao and Tian, Yufei and Zhou, Wangchunshu and Xu, Nan and Hu, Qian and Gupta, Rahul and Wieting, John Frederick and Peng, Nanyun and Ma, Xuezhe},
      booktitle = {The 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
      year = {2023}
    }
    
  • Identifying Informational Sources in News Articles

    Alexander Spangher, Nanyun Peng, Emilio Ferrara, and Jonathan May, in The 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023.
    Full Text BibTeX Details
    @inproceedings{spangher2023identifying,
      title = {Identifying Informational Sources in News Articles},
      author = {Spangher, Alexander and Peng, Nanyun and Ferrara, Emilio and May, Jonathan},
      booktitle = {The 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
      year = {2023}
    }
    
  • “Kelly is a Warm Person, Joseph is a Role Model”: Gender Biases in LLM-Generated Reference Letters

    Yixin Wan, George Pu, Jiao Sun, Aparna Garimella, Kai-Wei Chang, and Nanyun Peng, in Findings of The 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP-Findings), 2023.
    Full Text BibTeX Details
    @inproceedings{wan2023kelly,
      title = {“Kelly is a Warm Person, Joseph is a Role Model”: Gender Biases in LLM-Generated Reference Letters},
      author = {Wan, Yixin and Pu, George and Sun, Jiao and Garimella, Aparna and Chang, Kai-Wei and Peng, Nanyun},
      booktitle = {Findings of The 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP-Findings)},
      year = {2023}
    }
    
  • Are Personalized Stochastic Parrots More Dangerous? Evaluating Persona Biases in Dialogue Systems

    Yixin Wan, Jieyu Zhao, Aman Chadha, Nanyun Peng, and Kai-Wei Chang, in Findings of The 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP-Findings), 2023.
    Full Text BibTeX Details
    @inproceedings{wan2023personalized,
      title = {Are Personalized Stochastic Parrots More Dangerous? Evaluating Persona Biases in Dialogue Systems},
      author = {Wan, Yixin and Zhao, Jieyu and Chadha, Aman and Peng, Nanyun and Chang, Kai-Wei},
      booktitle = {Findings of The 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP-Findings)},
      year = {2023}
    }
    
  • DesCo: Learning Object Recognition with Rich Language Descriptions

    Liunian Harold Li*, Zi-Yi Dou*, Nanyun Peng, and Kai-Wei Chang, in The 2023 Conference on Neural Information Processing Systems (NeurIPS), 2023.
    Full Text BibTeX Details
    @inproceedings{li2023desco,
      title = {DesCo: Learning Object Recognition with Rich Language Descriptions},
      author = {Li*, Liunian Harold and Dou*, Zi-Yi and Peng, Nanyun and Chang, Kai-Wei},
      booktitle = {The 2023 Conference on Neural Information Processing Systems (NeurIPS)},
      year = {2023}
    }
    
  • Masked Path Modeling for Vision-and-Language Navigation

    Zi-Yi Dou, Feng Gao, and Nanyun Peng, in Findings of The 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP-Findings), 2023.
    Full Text BibTeX Details
    @inproceedings{dou2023mpm,
      title = {Masked Path Modeling for Vision-and-Language Navigation},
      author = {Dou, Zi-Yi and Gao, Feng and Peng, Nanyun},
      booktitle = {Findings of The 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP-Findings)},
      year = {2023}
    }
    
  • Parameter-Efficient Low-Resource Dialogue State Tracking by Prompt Tuning

    Mingyu Derek Ma, Jiun-Yu Kao, Shuyang Gao, Arpit Gupta, Di Jin, Tagyoung Chung, and Nanyun Peng, in Proceedings of INTERSPEECH 2023, 2023.
    Full Text BibTeX Details
    @inproceedings{ma2023parameter,
      title = {Parameter-Efficient Low-Resource Dialogue State Tracking by Prompt Tuning},
      author = {Ma, Mingyu Derek and Kao, Jiun-Yu and Gao, Shuyang and Gupta, Arpit and Jin, Di and Chung, Tagyoung and Peng, Nanyun},
      booktitle = {Proceedings of INTERSPEECH 2023},
      year = {2023}
    }
    
  • AMPERE: AMR-Aware Prefix for Generation-Based Event Argument Extraction Model

    I.-Hung Hsu*, Zhiyu Xie*, Kuan-Hao Huang, Premkumar Natarajan, and Nanyun Peng, in Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL), 2023.
    Full Text Poster Video Code BibTeX Details
    @inproceedings{hsu2023ampere,
      title = {AMPERE: AMR-Aware Prefix for Generation-Based Event Argument Extraction Model},
      author = {Hsu*, I-Hung and Xie*, Zhiyu and Huang, Kuan-Hao and Natarajan, Premkumar and Peng, Nanyun},
      booktitle = {Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL)},
      year = {2023}
    }
    
  • Learning Action Conditions from Instructional Manuals for Instruction Understanding

    Te-Lin Wu, Caiqi Zhang, Qingyuan Hu, Alex Spangher, and Nanyun Peng, in Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL), 2023.
    Full Text Abstract BibTeX Details
    The ability to infer pre- and postconditions of an action is vital for comprehending complex instructions, and is essential for applications such as autonomous instruction-guided agents and assistive AI that supports humans in performing physical tasks. In this work, we propose a task dubbed action condition inference, which extracts mentions of preconditions and postconditions of actions in instructional manuals. We propose a weakly supervised approach utilizing automatically constructed large-scale training instances from online instructions, and curate a densely human-annotated and validated dataset to study how well current NLP models do on the proposed task. We design two types of models that differ by whether contextualized and global information is leveraged, as well as various combinations of heuristics to construct the weak supervision. Our experiments show a >20% F1-score improvement from considering the entire instruction contexts and a >6% F1-score benefit from the proposed heuristics. However, the best-performing model is still well behind human performance.
    @inproceedings{wu2023action,
      title = {Learning Action Conditions from Instructional Manuals for Instruction Understanding},
      author = {Wu, Te-Lin and Zhang, Caiqi and Hu, Qingyuan and Spangher, Alex and Peng, Nanyun},
      booktitle = {Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL)},
      year = {2023}
    }
    
  • ACCENT: An Automatic Event Commonsense Evaluation Metric for Open-Domain Dialogue Systems

    Sarik Ghazarian*, Yijia Shao*, Rujun Han, Aram Galstyan, and Nanyun Peng, in Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL), 2023.
    Full Text BibTeX Details
    @inproceedings{ghazarian2023accent,
      title = {ACCENT: An Automatic Event Commonsense Evaluation Metric for Open-Domain Dialogue Systems},
      author = {Ghazarian*, Sarik and Shao*, Yijia and Han, Rujun and Galstyan, Aram and Peng, Nanyun},
      booktitle = {Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL)},
      year = {2023}
    }
    
  • GENEVA: Benchmarking Generalizability for Event Argument Extraction with Hundreds of Event Types and Argument Roles

    Tanmay Parekh, I.-Hung Hsu, Kuan-Hao Huang, Kai-Wei Chang, and Nanyun Peng, in Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL), 2023.
    Full Text Code BibTeX Details
    @inproceedings{parekh2023geneva,
      title = {GENEVA: Benchmarking Generalizability for Event Argument Extraction with Hundreds of Event Types and Argument Roles},
      author = {Parekh, Tanmay and Hsu, I-Hung and Huang, Kuan-Hao and Chang, Kai-Wei and Peng, Nanyun},
      booktitle = {Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL)},
      year = {2023}
    }
    
  • Unsupervised Melody-to-Lyric Generation

    Yufei Tian, Anjali Narayan-Chen, Shereen Oraby, Alessandra Cervone, Gunnar Sigurdsson, Chenyang Tao, Wenbo Zhao, Tagyoung Chung, Jing Huang, and Nanyun Peng, in Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL), 2023.
    Full Text Slides BibTeX Details
    @inproceedings{tian2023lyric,
      title = {Unsupervised Melody-to-Lyric Generation},
      author = {Tian, Yufei and Narayan-Chen, Anjali and Oraby, Shereen and Cervone, Alessandra and Sigurdsson, Gunnar and Tao, Chenyang and Zhao, Wenbo and Chung, Tagyoung and Huang, Jing and Peng, Nanyun},
      booktitle = {Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL)},
      year = {2023}
    }
    
  • Do Models Really Learn to Follow Instructions? An Empirical Study of Instruction Tuning

    Po-Nien Kung and Nanyun Peng, in Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL), short, 2023.
    Full Text BibTeX Details
    @inproceedings{kung2023models,
      title = {Do Models Really Learn to Follow Instructions? An Empirical Study of Instruction Tuning},
      author = {Kung, Po-Nien and Peng, Nanyun},
      booktitle = {Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL), short},
      year = {2023}
    }
    
  • DICE: Data-Efficient Clinical Event Extraction with Generative Models

    Mingyu Derek Ma, Alexander K. Taylor, Wei Wang, and Nanyun Peng, in Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL), 2023.
    Full Text Code BibTeX Details
    @inproceedings{ma2023dice,
      title = {DICE: Data-Efficient Clinical Event Extraction with Generative Models},
      author = {Ma, Mingyu Derek and Taylor, Alexander K. and Wang, Wei and Peng, Nanyun},
      booktitle = {Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL)},
      year = {2023}
    }
    
  • TAGPRIME: A Unified Framework for Relational Structure Extraction

    I.-Hung Hsu*, Kuan-Hao Huang*, Shuning Zhang, Wenxing Cheng, Premkumar Natarajan, Kai-Wei Chang, and Nanyun Peng, in Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL), 2023.
    Full Text Code BibTeX Details
    @inproceedings{hsu2023tagprime,
      title = {TAGPRIME: A Unified Framework for Relational Structure Extraction},
      author = {Hsu*, I-Hung and Huang*, Kuan-Hao and Zhang, Shuning and Cheng, Wenxing and Natarajan, Premkumar and Chang, Kai-Wei and Peng, Nanyun},
      booktitle = {Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL)},
      year = {2023}
    }
    
  • DOC: Improving Long Story Coherence With Detailed Outline Control

    Kevin Yang, Dan Klein, Nanyun Peng, and Yuandong Tian, in Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL), 2023.
    Full Text BibTeX Details
    @inproceedings{yang2023doc,
      title = {DOC: Improving Long Story Coherence With Detailed Outline Control},
      author = {Yang, Kevin and Klein, Dan and Peng, Nanyun and Tian, Yuandong},
      booktitle = {Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL)},
      year = {2023}
    }
    
  • Are Fairy Tales Fair? Analyzing Gender Bias in Temporal Narrative Event Chains of Children’s Fairy Tales

    Paulina Toro Isaza, Guangxuan Xu, Toye Oloko, Yufang Hou, Nanyun Peng, and Dakuo Wang, in Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL), 2023.
    Full Text BibTeX Details
    @inproceedings{isaza2023fairytales,
      title = {Are Fairy Tales Fair? Analyzing Gender Bias in Temporal Narrative Event Chains of Children's Fairy Tales},
      author = {Isaza, Paulina Toro and Xu, Guangxuan and Oloko, Toye and Hou, Yufang and Peng, Nanyun and Wang, Dakuo},
      booktitle = {Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL)},
      year = {2023}
    }
    
  • SIMMC-VR: A Task-oriented Multimodal Dialog Dataset with Situated and Immersive VR Streams

    Te-Lin Wu, Satwik Kottur, Andrea Madotto, Mahmoud Azab, Pedro Rodriguez, Nanyun Peng, Babak Damavandi, and Seungwhan Moon, in Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL), 2023.
    Full Text Abstract BibTeX Details
    Building an AI assistant that can seamlessly converse and instruct humans, in a user-centric situated scenario, requires several essential abilities: (1) spatial and temporal understanding of the situated and real-time user scenes, (2) capability of grounding the actively perceived visuals of users to conversation contexts, and (3) conversational reasoning over past utterances to perform just-in-time assistance. However, we currently lack a large-scale benchmark that captures user–assistant interactions with all of the aforementioned features. To this end, we propose SIMMC-VR, extending the SIMMC 2.0 dataset, which only concerns static visual scenes, to a video-grounded task-oriented dialog dataset that captures real-world AI-assisted user scenarios in VR. We propose a novel data collection paradigm that involves (1) generating object-centric multimodal dialog flows with egocentric visual streams and visually-grounded templates, and (2) manually paraphrasing the simulated dialogs for naturalness and diversity while preserving multimodal dependencies.  To measure meaningful progress in the field, we propose four tasks to address the new challenges in SIMMC-VR, which require complex spatial-temporal dialog reasoning in active egocentric scenes. We benchmark the proposed tasks with strong multimodal models, and highlight the key capabilities that current models lack for future research directions.
    @inproceedings{wu2023simmcvr,
      title = {SIMMC-VR: A Task-oriented Multimodal Dialog Dataset with Situated and Immersive VR Streams},
      author = {Wu, Te-Lin and Kottur, Satwik and Madotto, Andrea and Azab, Mahmoud and Rodriguez, Pedro and Peng, Nanyun and Damavandi, Babak and Moon, Seungwhan},
      booktitle = {Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL)},
      year = {2023}
    }
    
  • Code-Switching Text Synthesis in Unseen Language Pairs

    I.-Hung Hsu, Avik Ray, Shubham Garg, Nanyun Peng, and Jing Huang, in Findings of the Association for Computational Linguistics: ACL (ACL-findings), 2023.
    Full Text Slides Video BibTeX Details
    @inproceedings{hsu2023codeswitch,
      title = {Code-Switching Text Synthesis in Unseen Language Pairs},
      author = {Hsu, I-Hung and Ray, Avik and Garg, Shubham and Peng, Nanyun and Huang, Jing},
      booktitle = {Findings of the Association for Computational Linguistics: ACL (ACL-findings)},
      year = {2023}
    }
    
  • Tractable Control for Autoregressive Language Generation

    Honghua Zhang, Meihua Dang, Nanyun Peng, and Guy Van den Broeck, in Proceedings of the Fortieth International Conference on Machine Learning (ICML), 2023.
    Full Text BibTeX Details Oral Paper
    @inproceedings{zhang2023gelato,
      title = {Tractable Control for Autoregressive Language Generation},
      author = {Zhang, Honghua and Dang, Meihua and Peng, Nanyun and Van den Broeck, Guy},
      booktitle = {Proceedings of the Fortieth International Conference on Machine Learning (ICML)},
      year = {2023}
    }
    
  • Generalized Decoding for Pixel, Image and Language

    Xueyan Zou*, Zi-Yi Dou*, Jianwei Yang*, Zhe Gan, Linjie Li, Chunyuan Li, Xiyang Dai, Harkirat Behl, Jianfeng Wang, Lu Yuan, Nanyun Peng, Lijuan Wang, Yong Jae Lee, and Jianfeng Gao, in The Conference on Computer Vision and Pattern Recognition (CVPR-23), 2023.
    Full Text Code BibTeX Details
    @inproceedings{xdecoder,
      title = {Generalized Decoding for Pixel, Image and Language},
      author = {Zou*, Xueyan and Dou*, Zi-Yi and Yang*, Jianwei and Gan, Zhe and Li, Linjie and Li, Chunyuan and Dai, Xiyang and Behl, Harkirat and Wang, Jianfeng and Yuan, Lu and Peng, Nanyun and Wang, Lijuan and Lee, Yong Jae and Gao, Jianfeng},
      booktitle = {The Conference on Computer Vision and Pattern Recognition (CVPR-23)},
      year = {2023}
    }
    
  • Where Does Your News Come From? Predicting Information Pathways in Social Media

    Alexander Taylor, Nuan Wen, Po-Nien Kung, Jiaao Chen, Nanyun Peng, and Wei Wang, in Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR), 2023.
    Full Text BibTeX Details
    @inproceedings{taylor2023pathway,
      title = {Where Does Your News Come From? Predicting Information Pathways in Social Media},
      author = {Taylor, Alexander and Wen, Nuan and Kung, Po-Nien and Chen, Jiaao and Peng, Nanyun and Wang, Wei},
      booktitle = {Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR)},
      year = {2023}
    }
    

2022

  • Character-Centric Story Visualization via Visual Planning and Token Alignment

    Hong Chen, Rujun Han, Te-Lin Wu, Hideki Nakayama, and Nanyun Peng, in Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022.
    Full Text BibTeX Details
    @inproceedings{hong2022Character,
      title = {Character-Centric Story Visualization via Visual Planning and Token Alignment},
      author = {Chen, Hong and Han, Rujun and Wu, Te-Lin and Nakayama, Hideki and Peng, Nanyun},
      booktitle = {Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
      year = {2022}
    }
    
  • ExPUNations: Augmenting Puns with Keywords and Explanations

    Jiao Sun, Anjali Narayan-Chen, Shereen Oraby, Alessandra Cervone, Tagyoung Chung, Jing Huang, Yang Liu, and Nanyun Peng, in Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022.
    Full Text BibTeX Details
    @inproceedings{sun2022expun,
      title = {ExPUNations: Augmenting Puns with Keywords and Explanations},
      author = {Sun, Jiao and Narayan-Chen, Anjali and Oraby, Shereen and Cervone, Alessandra and Chung, Tagyoung and Huang, Jing and Liu, Yang and Peng, Nanyun},
      booktitle = {Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
      year = {2022}
    }
    
  • Context-Situated Pun Generation

    Jiao Sun, Anjali Narayan-Chen, Shereen Oraby, Shuyang Gao, Tagyoung Chung, Jing Huang, Yang Liu, and Nanyun Peng, in Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022.
    Full Text BibTeX Details
    @inproceedings{sun2022context,
      title = {Context-Situated Pun Generation},
      author = {Sun, Jiao and Narayan-Chen, Anjali and Oraby, Shereen and Gao, Shuyang and Chung, Tagyoung and Huang, Jing and Liu, Yang and Peng, Nanyun},
      booktitle = {Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
      year = {2022}
    }
    
  • Re3: Generating Longer Stories With Recursive Reprompting and Revision

    Kevin Yang, Yuandong Tian, Nanyun Peng, and Dan Klein, in Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022.
    Full Text BibTeX Details
    @inproceedings{yang2022re3,
      title = {Re3: Generating Longer Stories With Recursive Reprompting and Revision},
      author = {Yang, Kevin and Tian, Yuandong and Peng, Nanyun and Klein, Dan},
      booktitle = {Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
      year = {2022}
    }
    
  • A Unified Framework for Pun Generation with Humor Principles

    Yufei Tian, Divyanshu Arun Sheth, and Nanyun Peng, in Findings of the Association for Computational Linguistics: EMNLP (EMNLP-findings), 2022.
    Full Text BibTeX Details
    @inproceedings{tian2022unified,
      title = {A Unified Framework for Pun Generation with Humor Principles},
      author = {Tian, Yufei and Arun Sheth, Divyanshu and Peng, Nanyun},
      booktitle = {Findings of the Association for Computational Linguistics: EMNLP (EMNLP-findings)},
      year = {2022}
    }
    
  • Sequentially Controlled Text Generation

    Alexander Spangher, Yao Ming, Xinyu Hua, and Nanyun Peng, in Findings of the Association for Computational Linguistics: EMNLP (EMNLP-findings), 2022.
    Full Text BibTeX Details
    @inproceedings{spangher2022sequentially,
      title = {Sequentially Controlled Text Generation},
      author = {Spangher, Alexander and Ming, Yao and Hua, Xinyu and Peng, Nanyun},
      booktitle = {Findings of the Association for Computational Linguistics: EMNLP (EMNLP-findings)},
      year = {2022}
    }
    
  • Towards Robust NLG Evaluation with Syntactically-diverse Prompts

    Arshiya Aggarwal, Jiao Sun, and Nanyun Peng, in Findings of the Association for Computational Linguistics: EMNLP (EMNLP-findings), 2022.
    Full Text BibTeX Details
    @inproceedings{aggarwal2022towards,
      title = {Towards Robust NLG Evaluation with Syntactically-diverse Prompts},
      author = {Aggarwal, Arshiya and Sun, Jiao and Peng, Nanyun},
      booktitle = {Findings of the Association for Computational Linguistics: EMNLP (EMNLP-findings)},
      year = {2022}
    }
    
  • EnDex: Evaluation of Dialogue Engagingness at Scale

    Guangxuan Xu, Nischal Reddy Chandra, Ruibo Liu, Fabrice Harel-Canada, and Nanyun Peng, in Findings of the Association for Computational Linguistics: EMNLP (EMNLP-findings), 2022.
    Full Text BibTeX Details
    @inproceedings{xu2022endex,
      title = {EnDex: Evaluation of Dialogue Engagingness at Scale},
      author = {Xu, Guangxuan and Chandra, Nischal Reddy and Liu, Ruibo and Harel-Canada, Fabrice and Peng, Nanyun},
      booktitle = {Findings of the Association for Computational Linguistics: EMNLP (EMNLP-findings)},
      year = {2022}
    }
    
  • InsNet: An Efficient, Flexible, and Performant Insertion-based Text Generation Model

    Sidi Lu, Tao Meng, and Nanyun Peng, in Proceedings of the Thirty-Sixth Conference on Neural Information Processing Systems (NeurIPS), 2022.
    Full Text BibTeX Details
    @inproceedings{lu2022InsNet,
      title = {InsNet: An Efficient, Flexible, and Performant Insertion-based Text Generation Model},
      author = {Lu, Sidi and Meng, Tao and Peng, Nanyun},
      booktitle = {Proceedings of the Thirty-Sixth Conference on Neural Information Processing Systems (NeurIPS)},
      year = {2022}
    }
    
  • Controllable Text Generation with Neurally-Decomposed Oracle

    Tao Meng, Sidi Lu, Nanyun Peng, and Kai-Wei Chang, in Proceedings of the Thirty-Sixth Conference on Neural Information Processing Systems (NeurIPS), 2022.
    Full Text BibTeX Details Oral Paper
    @inproceedings{meng2022nado,
      title = {Controllable Text Generation with Neurally-Decomposed Oracle},
      author = {Meng, Tao and Lu, Sidi and Peng, Nanyun and Chang, Kai-Wei},
      booktitle = {Proceedings of the Thirty-Sixth Conference on Neural Information Processing Systems (NeurIPS)},
      year = {2022}
    }
    
  • Coarse-to-Fine Vision-Language Pre-training with Fusion in the Backbone

    Zi-Yi Dou, Aishwarya Kamath, Zhe Gan, Pengchuan Zhang, Jianfeng Wang, Linjie Li, Zicheng Liu, Ce Liu, Yann LeCun, Nanyun Peng, Jianfeng Gao, and Lijuan Wang, in Proceedings of the Thirty-Sixth Conference on Neural Information Processing Systems (NeurIPS), 2022.
    Full Text BibTeX Details
    @inproceedings{dou2022fiber,
      title = {Coarse-to-Fine Vision-Language Pre-training with Fusion in the Backbone},
      author = {Dou, Zi-Yi and Kamath, Aishwarya and Gan, Zhe and Zhang, Pengchuan and Wang, Jianfeng and Li, Linjie and Liu, Zicheng and Liu, Ce and LeCun, Yann and Peng, Nanyun and Gao, Jianfeng and Wang, Lijuan},
      booktitle = {Proceedings of the Thirty-Sixth Conference on Neural Information Processing Systems (NeurIPS)},
      year = {2022}
    }
    
  • Controllable Text Generation for Open-Domain Creativity and Fairness

    Nanyun Peng, in Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence (IJCAI-22), Early Career Track, 2022.
    Full Text BibTeX Details
    @inproceedings{peng2022controllable,
      title = {Controllable Text Generation for Open-Domain Creativity and Fairness},
      author = {Peng, Nanyun},
      booktitle = {Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence (IJCAI-22), Early Career Track},
      year = {2022}
    }
    
  • NewsEdits: A News Article Revision Dataset and a Novel Document-Level Reasoning Challenge

    Alexander Spangher, Xiang Ren, Jonathan May, and Nanyun Peng, in Proceedings of the 2022 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), 2022.
    Full Text Code BibTeX Details 🏆 Outstanding Paper Award
    @inproceedings{spangher2022news,
      title = {NewsEdits: A News Article Revision Dataset and a Novel Document-Level Reasoning Challenge},
      author = {Spangher, Alexander and Ren, Xiang and May, Jonathan and Peng, Nanyun},
      booktitle = {Proceedings of the 2022 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)},
      year = {2022}
    }
    
  • Zero-Shot Sonnet Generation with Discourse-Level Planning and Aesthetics Features

    Yufei Tian and Nanyun Peng, in Proceedings of the 2022 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), 2022.
    Full Text Code BibTeX Details
    @inproceedings{tian2022sonnet,
      title = {Zero-Shot Sonnet Generation with Discourse-Level Planning and Aesthetics Features},
      author = {Tian, Yufei and Peng, Nanyun},
      booktitle = {2022 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)},
      year = {2022}
    }
    
  • Multilingual Generative Language Models for Zero-Shot Cross-Lingual Event Argument Extraction

    Kuan-Hao Huang*, I.-Hung Hsu*, Premkumar Natarajan, Kai-Wei Chang, and Nanyun Peng, in Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL), 2022.
    Full Text Slides Poster Code Abstract BibTeX Details
    We present a study on leveraging multilingual pre-trained generative language models for zero-shot cross-lingual event argument extraction (EAE). By formulating EAE as a language generation task, our method effectively encodes event structures and captures the dependencies between arguments. We design language-agnostic templates to represent the event argument structures, which are compatible with any language, hence facilitating the cross-lingual transfer. Our proposed model finetunes multilingual pre-trained generative language models to generate sentences that fill in the language-agnostic template with arguments extracted from the input passage. The model is trained on source languages and is then directly applied to target languages for event argument extraction. Experiments demonstrate that the proposed model outperforms the current state-of-the-art models on zero-shot cross-lingual EAE. Comprehensive studies and error analyses are presented to better understand the advantages and the current limitations of using generative language models for zero-shot cross-lingual transfer EAE.
    @inproceedings{huang2022multilingual,
      title = {Multilingual Generative Language Models for Zero-Shot Cross-Lingual Event Argument Extraction},
      author = {Huang*, Kuan-Hao and Hsu*, I-Hung and Natarajan, Premkumar and Chang, Kai-Wei and Peng, Nanyun},
      booktitle = {Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL)},
      year = {2022}
    }
    
  • Go Back in Time: Generating Flashbacks in Stories with Event Temporal Prompts

    Rujun Han, Hong Chen, Yufei Tian, and Nanyun Peng, in Proceedings of the 2022 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), 2022.
    Full Text Code BibTeX Details
    @inproceedings{han2022go,
      title = {Go Back in Time: Generating Flashbacks in Stories with Event Temporal Prompts},
      author = {Han, Rujun and Chen, Hong and Tian, Yufei and Peng, Nanyun},
      booktitle = {2022 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)},
      year = {2022}
    }
    
  • FOAM: A Follower-aware Speaker Model for Vision-and-Language Navigation

    Zi-Yi Dou and Nanyun Peng, in Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), short, 2022.
    Full Text Code BibTeX Details
    @inproceedings{dou2022foam,
      title = {FOAM: A Follower-aware Speaker Model for Vision-and-Language Navigation},
      author = {Dou, Zi-Yi and Peng, Nanyun},
      booktitle = {Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), short},
      year = {2022}
    }
    
  • AmbiPun: Generating Humorous Puns with Ambiguous Context

    Anirudh Mittal, Yufei Tian, and Nanyun Peng, in Proceedings of the 2022 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), short, 2022.
    Full Text Code BibTeX Details
    @inproceedings{Mittal2022ambipun,
      title = {AmbiPun: Generating Humorous Puns with Ambiguous Context},
      author = {Mittal, Anirudh and Tian, Yufei and Peng, Nanyun},
      booktitle = {2022 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), short},
      year = {2022}
    }
    
  • Socially Aware Bias Measurements for Hindi Language Representations

    Vijit Malik, Sunipa Dev, Akihiro Nishi, Nanyun Peng, and Kai-Wei Chang, in Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), short, 2022.
    Full Text BibTeX Details
    @inproceedings{malik2022socially,
      title = {Socially Aware Bias Measurements for Hindi Language Representations},
      author = {Malik, Vijit and Dev, Sunipa and Nishi, Akihiro and Peng, Nanyun and Chang, Kai-Wei},
      booktitle = {Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), short},
      year = {2022}
    }
    
  • An Empirical Study of Training End-to-End Vision-and-Language Transformers

    Zi-Yi Dou, Yichong Xu, Zhe Gan, Jianfeng Wang, Shuohang Wang, Lijuan Wang, Chenguang Zhu, Pengchuan Zhang, Lu Yuan, Nanyun Peng, Zicheng Liu, and Michael Zeng, in The Conference on Computer Vision and Pattern Recognition (CVPR-22), 2022.
    Full Text Code Abstract BibTeX Details
    Vision-and-language (VL) pre-training has proven to be highly effective on various VL downstream tasks. While recent work has shown that fully transformer-based VL models can be more efficient than previous region-feature-based methods, their performance on downstream tasks often degrades significantly. In this paper, we present METER, a Multimodal End-to-end TransformER framework, through which we investigate how to design and pre-train a fully transformer-based VL model in an end-to-end manner. Specifically, we dissect the model designs along multiple dimensions: vision encoders (e.g., CLIP-ViT, Swin transformer), text encoders (e.g., RoBERTa, DeBERTa), multimodal fusion module (e.g., merged attention vs. co-attention), architectural design (e.g., encoder-only vs. encoder-decoder), and pre-training objectives (e.g., masked image modeling). We conduct comprehensive experiments and provide insights on how to train a performant VL transformer while maintaining fast inference speed. Notably, our best model achieves an accuracy of 77.64% on the VQAv2 test-std set using only 4M images for pre-training, surpassing the state-of-the-art region-feature-based model by 1.04%, and outperforming the previous best fully transformer-based model by 1.6%.
    @inproceedings{dou2022meter,
      title = {An Empirical Study of Training End-to-End Vision-and-Language Transformers},
      author = {Dou, Zi-Yi and Xu, Yichong and Gan, Zhe and Wang, Jianfeng and Wang, Shuohang and Wang, Lijuan and Zhu, Chenguang and Zhang, Pengchuan and Yuan, Lu and Peng, Nanyun and Liu, Zicheng and Zeng, Michael},
      booktitle = {The Conference on Computer Vision and Pattern Recognition (CVPR-22)},
      year = {2022}
    }
    
  • DEAM: Dialogue Coherence Evaluation using AMR-based Semantic Manipulations

    Sarik Ghazarian, Nuan Wen, Aram Galstyan, and Nanyun Peng, in Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL), 2022.
    Full Text Abstract BibTeX Details
    Automatic evaluation metrics are essential for the rapid development of open-domain dialogue systems as they facilitate hyper-parameter tuning and comparison between models. Although recently proposed trainable conversation-level metrics have shown encouraging results, the quality of the metrics is strongly dependent on the quality of training data. Prior works mainly resort to heuristic text-level manipulations (e.g. utterances shuffling) to bootstrap incoherent conversations (negative examples) from coherent dialogues (positive examples). Such approaches are insufficient to appropriately reflect the incoherence that occurs in interactions between advanced dialogue models and humans. To tackle this problem, we propose DEAM, a Dialogue coherence Evaluation metric that relies on Abstract Meaning Representation (AMR) to apply semantic-level Manipulations for incoherent (negative) data generation. AMRs naturally facilitate the injection of various types of incoherence sources, such as coreference inconsistency, irrelevancy, contradictions, and decreased engagement, at the semantic level, thus resulting in more natural incoherent samples. Our experiments show that DEAM achieves higher correlations with human judgments compared to baseline methods on several dialog datasets by significant margins. We also show that DEAM can distinguish between coherent and incoherent dialogues generated by baseline manipulations, whereas those baseline models cannot detect incoherent examples generated by DEAM. Our results demonstrate the potential of AMR-based semantic manipulations for natural negative example generation.
    @inproceedings{ghazarian2022deam,
      title = {DEAM: Dialogue Coherence Evaluation using AMR-based Semantic Manipulations},
      author = {Ghazarian, Sarik and Wen, Nuan and Galstyan, Aram and Peng, Nanyun},
      booktitle = {Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL)},
      year = {2022}
    }
    
  • DEGREE: A Data-Efficient Generative Event Extraction Model

    I.-Hung Hsu*, Kuan-Hao Huang*, Elizabeth Boschee, Scott Miller, Premkumar Natarajan, Kai-Wei Chang, and Nanyun Peng, in Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL), 2022.
    Full Text Slides Video Code Abstract BibTeX Details
    Event extraction requires high-quality expert human annotations, which are usually expensive. Therefore, learning a data-efficient event extraction model that can be trained with only a few labeled examples has become a crucial challenge. In this paper, we focus on low-resource end-to-end event extraction and propose DEGREE, a data-efficient model that formulates event extraction as a conditional generation problem. Given a passage and a manually designed prompt, DEGREE learns to summarize the events mentioned in the passage into a natural sentence that follows a predefined pattern. The final event predictions are then extracted from the generated sentence with a deterministic algorithm. DEGREE has three advantages that allow it to learn well with less training data. First, our designed prompts provide semantic guidance for DEGREE, helping it better capture the event arguments. Moreover, DEGREE is capable of using additional weakly-supervised information, such as the description of events encoded in the prompts. Finally, DEGREE learns triggers and arguments jointly in an end-to-end manner, which encourages the model to better utilize the shared knowledge and dependencies among them. Our experimental results demonstrate the strong performance of DEGREE for low-resource event extraction.
    @inproceedings{hsu2022degree,
      title = {DEGREE: A Data-Efficient Generative Event Extraction Model},
      author = {Hsu*, I-Hung and Huang*, Kuan-Hao and Boschee, Elizabeth and Miller, Scott and Natarajan, Premkumar and Chang, Kai-Wei and Peng, Nanyun},
      booktitle = {Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL)},
      year = {2022}
    }
    
  • Understanding Multimodal Procedural Knowledge by Sequencing Multimodal Instructional Manuals

    Te-Lin Wu, Alex Spangher, Pegah Alipoormolabashi, Marjorie Freedman, Ralph Weischedel, and Nanyun Peng, in Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL), 2022.
    Full Text Abstract BibTeX Details
    The ability to sequence unordered events is evidence of comprehension and reasoning about real world tasks/procedures, and is essential for applications such as task planning and multi-source instruction summarization. It often requires thorough understanding of temporal common sense and multimodal information, since these procedures are often conveyed by a combination of texts and images. While humans are capable of reasoning about and sequencing unordered procedural instructions, the extent to which the current machine learning methods possess such a capability is still an open question. In this work, we benchmark models’ capability of reasoning over and sequencing unordered multimodal instructions by curating datasets from online instructional manuals and collecting comprehensive human annotations. We find current state-of-the-art models not only perform significantly worse than humans but also seem incapable of efficiently utilizing multimodal information. To improve machines’ performance on multimodal event sequencing, we propose sequence-aware pretraining techniques exploiting the sequential alignment properties of both texts and images, resulting in >5% improvements on perfect match ratio.
    @inproceedings{wu2022procedural,
      title = {Understanding Multimodal Procedural Knowledge by Sequencing Multimodal Instructional Manuals},
      author = {Wu, Te-Lin and Spangher, Alex and Alipoormolabashi, Pegah and Freedman, Marjorie and Weischedel, Ralph and Peng, Nanyun},
      booktitle = {Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL)},
      year = {2022}
    }
    
  • Fantastic Questions and Where to Find Them: FairytaleQA–An Authentic Dataset for Narrative Comprehension

    Ying Xu, Dakuo Wang, Mo Yu, Daniel Ritchie, Bingsheng Yao, Tongshuang Wu, Zheng Zhang, Toby Jia-Jun Li, Nora Bradford, Branda Sun, Tran Hoang, Yisi Sang, Yufang Hou, Xiaojuan Ma, Diyi Yang, Nanyun Peng, Zhou Yu, and Mark Warschauer, in Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL), 2022.
    BibTeX Details
    @inproceedings{xu2022fairy,
      title = {Fantastic Questions and Where to Find Them: FairytaleQA--An Authentic Dataset for Narrative Comprehension},
      author = {Xu, Ying and Wang, Dakuo and Yu, Mo and Ritchie, Daniel and Yao, Bingsheng and Wu, Tongshuang and Zhang, Zheng and Li, Toby Jia-Jun and Bradford, Nora and Sun, Branda and Hoang, Tran and Sang, Yisi and Hou, Yufang and Ma, Xiaojuan and Yang, Diyi and Peng, Nanyun and Yu, Zhou and Warschauer, Mark},
      booktitle = {Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL)},
      year = {2022}
    }
    
  • Sibylvariant Transformations for Robust Text Classification

    Fabrice Y. Harel-Canada, Muhammad Ali Gulzar, Nanyun Peng, and Miryung Kim, in Findings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL-findings), 2022.
    BibTeX Details
    @inproceedings{harel-canada2022sibyl,
      title = {Sibylvariant Transformations for Robust Text Classification},
      author = {Harel-Canada, Fabrice Y and Gulzar, Muhammad Ali and Peng, Nanyun and Kim, Miryung},
      booktitle = {Findings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL-findings)},
      year = {2022}
    }
    
  • On the Safety of Conversational Models: Taxonomy, Dataset, and Benchmark

    Hao Sun, Guangxuan Xu, Jiawen Deng, Jiale Cheng, Chujie Zheng, Hao Zhou, Nanyun Peng, Xiaoyan Zhu, and Minlie Huang, in Findings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL-findings), 2022.
    Full Text Abstract BibTeX Details
    Dialogue safety problems severely limit the real-world deployment of neural conversational models and have attracted great research interest recently. However, dialogue safety problems remain under-defined and the corresponding datasets are scarce. We propose a taxonomy for dialogue safety specifically designed to capture unsafe behaviors in human-bot dialogue settings, with a focus on context-sensitive unsafety, which is under-explored in prior works. To spur research in this direction, we compile DiaSafety, a dataset with rich context-sensitive unsafe examples. Experiments show that existing safety guarding tools fail severely on our dataset. As a remedy, we train a dialogue safety classifier to provide a strong baseline for context-sensitive dialogue unsafety detection. With our classifier, we perform safety evaluations on popular conversational models and show that existing dialogue systems still exhibit concerning context-sensitive safety problems.
    @inproceedings{sun2022safe,
      title = {On the Safety of Conversational Models: Taxonomy, Dataset, and Benchmark},
      author = {Sun, Hao and Xu, Guangxuan and Deng, Jiawen and Cheng, Jiale and Zheng, Chujie and Zhou, Hao and Peng, Nanyun and Zhu, Xiaoyan and Huang, Minlie},
      booktitle = {Findings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL-findings)},
      year = {2022}
    }
    
  • Zero-shot Commonsense Question Answering with Cloze Translation and Consistency Optimization

    Zi-Yi Dou and Nanyun Peng, in The Thirty-Sixth AAAI Conference on Artificial Intelligence (AAAI), 2022.
    Full Text Code Abstract BibTeX Details
    Commonsense question answering (CQA) aims to test if models can answer questions regarding commonsense knowledge that everyone knows. Prior works that incorporate external knowledge bases have shown promising results, but knowledge bases are expensive to construct and are often limited to a fixed set of relations. In this paper, we instead focus on better utilizing the implicit knowledge stored in pre-trained language models. While researchers have found that the knowledge embedded in pre-trained language models can be extracted by having them fill in the blanks of carefully designed prompts for relation extraction and text classification, it remains unclear if we can adopt this paradigm in CQA where the inputs and outputs take much more flexible forms. To this end, we investigate four translation methods that can translate natural questions into cloze-style sentences to better solicit commonsense knowledge from language models, including a syntactic-based model, an unsupervised neural model, and two supervised neural models. In addition, to combine the different translation methods, we propose to encourage consistency among model predictions on different translated questions with unlabeled data. We demonstrate the effectiveness of our methods on three CQA datasets in zero-shot settings. We show that our methods are complementary to a knowledge base improved model, and combining them can lead to state-of-the-art zero-shot performance. Analyses also reveal distinct characteristics of the different cloze translation methods and provide insights on why combining them can lead to great improvements.
    @inproceedings{dou2022improving,
      title = {Zero-shot Commonsense Question Answering with Cloze Translation and Consistency Optimization},
      author = {Dou, Zi-Yi and Peng, Nanyun},
      booktitle = {The Thirty-Sixth AAAI Conference on Artificial Intelligence (AAAI)},
      year = {2022}
    }
    
    Details
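    A minimal sketch of the cloze-translation idea above: rewrite a question as a masked sentence and let a masked language model rank answer candidates via the Hugging Face fill-mask pipeline. The model choice, question, and candidates here are illustrative, not from the paper.

    from transformers import pipeline

    # Score answer candidates by how well an MLM fills the cloze slot.
    fill_mask = pipeline("fill-mask", model="bert-base-uncased")

    # A hand-written, syntax-style translation of a CQA question
    # ("Where would you expect to find a seat?") into cloze form.
    cloze = "You would expect to find a seat in a [MASK]."
    candidates = ["theater", "kitchen", "forest"]

    # Restrict predictions to the candidates and rank them by MLM score.
    for pred in fill_mask(cloze, targets=candidates):
        print(pred["token_str"], round(pred["score"], 4))

    The paper’s consistency optimization would additionally encourage agreement among several such cloze translations of the same question; this sketch shows a single translation only.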
  • Discourse-level Relation Extraction via Graph Pooling

    I.-Hung Hsu, Xiao Guo, Premkumar Natarajan, and Nanyun Peng, in The Thirty-Sixth AAAI Conference On Artificial Intelligence Workshop on Deep Learning on Graphs: Method and Applications (DLG-AAAI), 2022.
    BibTeX Details 🏆 Best Paper Award
    @inproceedings{hsu2021discourse,
      title = {Discourse-level Relation Extraction via Graph Pooling},
      author = {Hsu, I-Hung and Guo, Xiao and Natarajan, Premkumar and Peng, Nanyun},
      booktitle = {The Thirty-Sixth AAAI Conference On Artificial Intelligence Workshop on Deep Learning on Graphs: Method and Applications (DLG-AAAI)},
      year = {2022}
    }
    
    Details

2021

  • Document-level Entity-based Extraction as Template Generation

    Kung-Hsiang Huang, Sam Tang, and Nanyun Peng, in The 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021.
    Full Text Code Abstract BibTeX Details
    Document-level entity-based extraction (EE), aiming at extracting entity-centric information such as entity roles and entity relations, is key to automatic knowledge acquisition from text corpora for various domains. Most document-level EE systems build extractive models, which struggle to model long-term dependencies among entities at the document level. To address this issue, we propose a generative framework for two document-level EE tasks: role-filler entity extraction (REE) and relation extraction (RE). We first formulate them as a template generation problem, allowing models to efficiently capture cross-entity dependencies, exploit label semantics, and avoid the exponential computation complexity of identifying N-ary relations. A novel cross-attention guided copy mechanism, TopK Copy, is incorporated into a pre-trained sequence-to-sequence model to enhance the capabilities of identifying key information in the input document. Experiments on the MUC-4 and SciREX datasets show new state-of-the-art results on REE (+3.26%), binary RE (+4.8%), and 4-ary RE (+2.7%) in F1 score.
    @inproceedings{huang2021tempgen,
      title = {Document-level Entity-based Extraction as Template Generation},
      author = {Huang, Kung-Hsiang and Tang, Sam and Peng, Nanyun},
      booktitle = {The 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
      year = {2021}
    }
    
    Details
  • AESOP: Paraphrase Generation with Adaptive Syntactic Control

    Jiao Sun, Xuezhe Ma, and Nanyun Peng, in The 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021.
    Full Text Code Abstract BibTeX Details
    We propose to control paraphrase generation through carefully chosen target syntactic structures to generate more proper and higher quality paraphrases. Our model, AESOP, leverages a pretrained language model and adds deliberately chosen syntactical control via a retrieval-based selection module to generate fluent paraphrases. Experiments show that AESOP achieves state-of-the-art performances on semantic preservation and syntactic conformation on two benchmark datasets with ground-truth syntactic control from human-annotated exemplars. Moreover, with the retrieval-based target syntax selection module, AESOP generates paraphrases of even better quality than the current best model that uses human-annotated target syntactic parses, according to human evaluation. We further demonstrate the effectiveness of AESOP to improve classification models’ robustness to syntactic perturbation by data augmentation on two GLUE tasks.
    @inproceedings{sun2021aesop,
      title = {AESOP: Paraphrase Generation with Adaptive Syntactic Control},
      author = {Sun, Jiao and Ma, Xuezhe and Peng, Nanyun},
      booktitle = {The 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
      year = {2021}
    }
    
    Details
  • ESTER: A Machine Reading Comprehension Dataset for Event Semantic Relation Reasoning

    Rujun Han, I.-Hung Hsu, Jiao Sun, Julia Baylon, Qiang Ning, Dan Roth, and Nanyun Peng, in The 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021.
    Full Text Code Abstract BibTeX Details
    Understanding how events are semantically related to each other is the essence of reading comprehension. Recent event-centric reading comprehension datasets focus mostly on event arguments or temporal relations. While these tasks partially evaluate machines’ narrative understanding ability, human-like reading comprehension requires the capability to process event-based information beyond arguments and temporal reasoning. For example, to understand causality between events, we need to infer motivation or purpose; to establish event hierarchy, we need to understand the composition of events. To facilitate these tasks, we introduce ESTER, a comprehensive machine reading comprehension (MRC) dataset for Event Semantic Relation Reasoning. The dataset leverages natural language queries to reason about the five most common event semantic relations, provides more than 6K questions, and captures 10.1K event relation pairs. Experimental results show that the current SOTA systems achieve 22.1%, 63.3% and 83.5% for token-based exact-match (EM), F1 and event-based HIT@1 scores, which are all significantly below human performances (36.0%, 79.6%, 100% respectively), highlighting our dataset as a challenging benchmark.
    @inproceedings{han2021ester,
      title = {ESTER: A Machine Reading Comprehension Dataset for Event Semantic Relation Reasoning},
      author = {Han, Rujun and Hsu, I-Hung and Sun, Jiao and Baylon, Julia and Ning, Qiang and Roth, Dan and Peng, Nanyun},
      booktitle = {The 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
      year = {2021}
    }
    
    Details
  • ECONET: Effective Continual Pretraining of Language Models for Event Temporal Reasoning

    Rujun Han, Xiang Ren, and Nanyun Peng, in The 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021.
    Full Text Code Abstract BibTeX Details
    While pre-trained language models (PTLMs) have achieved noticeable success on many NLP tasks, they still struggle for tasks that require event temporal reasoning, which is essential for event-centric applications. We present a continual pre-training approach that equips PTLMs with targeted knowledge about event temporal relations. We design self-supervised learning objectives to recover masked-out event and temporal indicators and to discriminate sentences from their corrupted counterparts (where event or temporal indicators have been replaced). By further pre-training a PTLM with these objectives jointly, we reinforce its attention to event and temporal information, yielding enhanced capability on event temporal reasoning. This Effective CONtinual pre-training framework for Event Temporal reasoning (ECONET) improves the PTLMs’ fine-tuning performances across five relation extraction and question answering tasks and achieves new or on-par state-of-the-art performances in most of our downstream tasks.
    @inproceedings{han2021econet,
      title = {ECONET: Effective Continual Pretraining of Language Models for Event Temporal Reasoning},
      author = {Han, Rujun and Ren, Xiang and Peng, Nanyun},
      booktitle = {The 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
      year = {2021}
    }
    
    Details
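    The targeted masking objective above is easy to illustrate: temporal indicators (and, analogously, event triggers) are masked out to form continual pre-training examples. A toy sketch; the indicator list and sentence are stand-ins, not ECONET’s curated lexicon.

    import re

    # Toy subset; ECONET relies on a much larger curated indicator lexicon.
    TEMPORAL_INDICATORS = {"before", "after", "during", "until", "while"}

    def mask_temporal_indicators(sentence, mask_token="[MASK]"):
        """Replace temporal indicators with a mask token, keeping the targets."""
        tokens, targets = [], []
        for tok in sentence.split():
            if re.sub(r"\W", "", tok.lower()) in TEMPORAL_INDICATORS:
                tokens.append(mask_token)
                targets.append(tok)
            else:
                tokens.append(tok)
        return " ".join(tokens), targets

    masked, targets = mask_temporal_indicators(
        "The market rallied after the announcement and fell during the hearing."
    )
    print(masked)   # ... rallied [MASK] the announcement and fell [MASK] the hearing.
    print(targets)  # ['after', 'during']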
  • Improving Pre-trained Vision-and-Language Embeddings for Phrase Grounding

    Zi-Yi Dou and Nanyun Peng, in The 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP), short, 2021.
    Full Text Code Abstract BibTeX Details
    Phrase grounding aims to map textual phrases to their associated image regions, which can be a prerequisite for multimodal reasoning and can benefit tasks requiring identifying objects based on language. With pre-trained vision-and-language models achieving impressive performance across tasks, it remains unclear if we can directly utilize their learned embeddings for phrase grounding without fine-tuning. To this end, we propose a method to extract matched phrase-region pairs from pre-trained vision-and-language embeddings and propose four fine-tuning objectives to improve the model’s phrase grounding ability using image-caption data without any supervised grounding signals. Experiments on two representative datasets demonstrate the effectiveness of our objectives, outperforming baseline models in both weakly-supervised and supervised phrase grounding settings. In addition, we evaluate the aligned embeddings on several other downstream tasks and show that we can achieve better phrase grounding without sacrificing representation generality.
    @inproceedings{dou2021improving,
      title = {Improving Pre-trained Vision-and-Language Embeddings for Phrase Grounding},
      author = {Dou, Zi-Yi and Peng, Nanyun},
      booktitle = {The 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP), short},
      year = {2021}
    }
    
    Details
  • Improving Zero-Shot Cross-Lingual Transfer Learning via Robust Training

    Kuan-Hao Huang, Wasi Uddin Ahmad, Nanyun Peng, and Kai-Wei Chang, in The 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021.
    Full Text Code Abstract BibTeX Details
    Pre-trained multilingual language encoders, such as multilingual BERT and XLM-R, show great potential for zero-shot cross-lingual transfer. However, these multilingual encoders do not precisely align words and phrases across languages. In particular, learning alignments in the multilingual embedding space usually requires sentence-level or word-level parallel corpora, which are expensive to obtain for low-resource languages. An alternative is to make the multilingual encoders more robust; when fine-tuning the encoder on a downstream task, we train the encoder to tolerate noise in the contextual embedding spaces such that even if the representations of different languages are not aligned well, the model can still achieve good performance on zero-shot cross-lingual transfer. In this work, we propose a learning strategy for training robust models by drawing connections between adversarial examples and the failure cases of zero-shot cross-lingual transfer. We adopt two widely used robust training methods, adversarial training and randomized smoothing, to train the desired robust model. The experimental results demonstrate that robust training improves zero-shot cross-lingual transfer on text classification tasks. The improvement is more significant in the generalized cross-lingual transfer setting, where the pair of input sentences belongs to two different languages.
    @inproceedings{huang2021improving,
      title = {Improving Zero-Shot Cross-Lingual Transfer Learning via Robust Training},
      author = {Huang, Kuan-Hao and Ahmad, Wasi Uddin and Peng, Nanyun and Chang, Kai-Wei},
      booktitle = {The 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
      year = {2021}
    }
    
    Details
  • Broaden the Vision: Geo-Diverse Visual Commonsense Reasoning

    Da Yin, Liunian Harold Li, Ziniu Hu, Nanyun Peng, and Kai-Wei Chang, in The 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021.
    Full Text Video Code Abstract BibTeX Details
    Commonsense is defined as the knowledge on which everyone agrees. However, certain types of commonsense knowledge are correlated with culture and geographic locations and they are only shared locally. For example, the scenes of wedding ceremonies vary across regions due to different customs influenced by historical and religious factors. Such regional characteristics, however, are generally omitted in prior work. In this paper, we construct a Geo-Diverse Visual Commonsense Reasoning dataset (GD-VCR) to test vision-and-language models’ ability to understand cultural and geo-location-specific commonsense. In particular, we study two state-of-the-art Vision-and-Language models, VisualBERT and ViLBERT, trained on VCR, a standard benchmark with images primarily from Western regions. We then evaluate how well the trained models can generalize to answering the questions in GD-VCR. We find that the performance of both models for non-Western regions including East Asia, South Asia, and Africa is significantly lower than that for the Western region. We analyze the reasons behind the performance disparity and find that the performance gap is larger on QA pairs that: 1) are concerned with culture-related scenarios, e.g., weddings, religious activities, and festivals; 2) require high-level geo-diverse commonsense reasoning rather than low-order perception and recognition.
    @inproceedings{yin2021broaden,
      title = {Broaden the Vision: Geo-Diverse Visual Commonsense Reasoning},
      author = {Yin, Da and Li, Liunian Harold and Hu, Ziniu and Peng, Nanyun and Chang, Kai-Wei},
      booktitle = {The 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
      year = {2021}
    }
    
    Details
  • HypoGen: Hyperbole Generation with Commonsense and Counterfactual Knowledge

    Yufei Tian, Arvind Krishna Sridhar, and Nanyun Peng, in Findings of the Association for Computational Linguistics: EMNLP, 2021.
    Full Text Video Code Abstract BibTeX Details
    A hyperbole is an intentional and creative exaggeration not to be taken literally. Despite its ubiquity in daily life, the computational explorations of hyperboles are scarce. In this paper, we tackle the under-explored and challenging task: sentence-level hyperbole generation. We start with a representative syntactic pattern for intensification and systematically study the semantic (commonsense and counterfactual) relationships between each component in such hyperboles. We then leverage commonsense and counterfactual inference to generate hyperbole candidates based on our findings from the pattern, and train neural classifiers to rank and select high-quality hyperboles. Automatic and human evaluations show that our generation method is able to generate hyperboles creatively with a high success rate and intensity.
    @inproceedings{tian2021hypogen,
      title = {HypoGen: Hyperbole Generation with Commonsense and Counterfactual Knowledge},
      author = {Tian, Yufei and Sridhar, Arvind Krishna and Peng, Nanyun},
      booktitle = {Findings of the Association for Computational Linguistics: EMNLP},
      year = {2021}
    }
    
    Details
  • HyperExpan: Taxonomy Expansion with Hyperbolic Representation Learning

    Mingyu Derek Ma, Muhao Chen, Te-Lin Wu, and Nanyun Peng, in Findings of the Association for Computational Linguistics: EMNLP, 2021.
    Full Text Slides Video Code Abstract BibTeX Details
    Taxonomies are valuable resources for many applications, but the limited coverage due to the expensive manual curation process hinders their general applicability. Prior works attempt to automatically expand existing taxonomies to improve their coverage by learning concept embeddings in Euclidean space, while taxonomies, inherently hierarchical, more naturally align with the geometric properties of a hyperbolic space. In this paper, we present HyperExpan, a taxonomy expansion algorithm that seeks to preserve the structure of a taxonomy in a more expressive hyperbolic embedding space and learn to represent concepts and their relations with a Hyperbolic Graph Neural Network (HGNN). Specifically, HyperExpan leverages position embeddings to exploit the structure of the existing taxonomies, and characterizes the concept profile information to support the inference on unseen concepts during training. Experiments show that our proposed HyperExpan outperforms baseline models with representation learning in a Euclidean feature space and achieves state-of-the-art performance on the taxonomy expansion benchmarks.
    @inproceedings{ma2021hyperexpan,
      title = {HyperExpan: Taxonomy Expansion with Hyperbolic Representation Learning},
      author = {Ma, Mingyu Derek and Chen, Muhao and Wu, Te-Lin and Peng, Nanyun},
      booktitle = {Findings of the Association for Computational Linguistics: EMNLP},
      year = {2021}
    }
    
    Details
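    For intuition on the choice of geometry above: in the Poincaré ball, distances grow rapidly near the boundary, giving tree-like taxonomies room to branch. Below is the standard Poincaré distance (a textbook formula, not code from the paper).

    import math

    def poincare_distance(u, v):
        """Distance between two points strictly inside the unit (Poincare) ball."""
        diff_sq = sum((a - b) ** 2 for a, b in zip(u, v))
        norm_u_sq = sum(a * a for a in u)
        norm_v_sq = sum(b * b for b in v)
        x = 1.0 + 2.0 * diff_sq / ((1.0 - norm_u_sq) * (1.0 - norm_v_sq))
        return math.acosh(x)

    # Equal Euclidean separations diverge hyperbolically near the boundary,
    # which is what gives ever-branching hierarchies room to embed.
    print(poincare_distance((0.0, 0.0), (0.1, 0.0)))    # ~0.20
    print(poincare_distance((0.85, 0.0), (0.95, 0.0)))  # ~1.15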
  • Men Are Elected, Women Are Married: Events Gender Bias on Wikipedia

    Jiao Sun and Nanyun Peng, in Proceedings of the Conference of the 59th Annual Meeting of the Association for Computational Linguistics (ACL), 2021.
    Full Text Code Abstract BibTeX Details
    Human activities can be seen as sequences of events, which are crucial to understanding societies. Disproportional event distribution for different demographic groups can manifest and amplify social stereotypes, and potentially jeopardize the ability of members in some groups to pursue certain goals. In this paper, we present the first event-centric study of gender biases in a Wikipedia corpus. To facilitate the study, we curate a corpus of career and personal life descriptions with demographic information consisting of 7,854 fragments from 10,412 celebrities. Then we detect events with a state-of-the-art event detection model, calibrate the results using strategically generated templates, and extract events that have asymmetric associations with genders. Our study discovers that Wikipedia pages tend to intermingle personal life events with professional events for females but not for males, which calls for the awareness of the Wikipedia community to formalize guidelines and train the editors to mind the implicit biases that contributors carry. Our work also lays the foundation for future works on quantifying and discovering event biases at the corpus level.
    @inproceedings{sun2021men,
      title = {Men Are Elected, Women Are Married: Events Gender Bias on Wikipedia},
      author = {Sun, Jiao and Peng, Nanyun},
      booktitle = {Proceedings of the Conference of the 59th Annual Meeting of the Association for Computational Linguistics (ACL)},
      year = {2021}
    }
    
    Details
  • Societal Biases in Language Generation: Progress and Challenges

    Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, and Nanyun Peng, in Proceedings of the Conference of the 59th Annual Meeting of the Association for Computational Linguistics (ACL), 2021.
    Full Text Abstract BibTeX Details
    Technology for language generation has advanced rapidly, spurred by advancements in pre-training large models on massive amounts of data and the need for intelligent agents to communicate in a natural manner. While techniques can effectively generate fluent text, they can also produce undesirable societal biases that can have a disproportionately negative impact on marginalized populations. Language generation presents unique challenges for biases in terms of direct user interaction and the structure of decoding techniques. To better understand these challenges, we present a survey on societal biases in language generation, focusing on how data and techniques contribute to biases and progress towards reducing biases. Motivated by a lack of studies on biases from decoding techniques, we also conduct experiments to quantify the effects of these techniques. By further discussing general trends and open challenges, we call to attention promising directions for research and the importance of fairness and inclusivity considerations for language generation applications.
    @inproceedings{sheng2021societal,
      title = {Societal Biases in Language Generation: Progress and Challenges},
      author = {Sheng, Emily and Chang, Kai-Wei and Natarajan, Premkumar and Peng, Nanyun},
      booktitle = {Proceedings of the Conference of the 59th Annual Meeting of the Association for Computational Linguistics (ACL)},
      year = {2021}
    }
    
    Details
  • Metaphor Generation with Conceptual Mappings

    Kevin Stowe, Tuhin Chakrabarty, Nanyun Peng, Smaranda Muresan, and Iryna Gurevych, in Proceedings of the Conference of the 59th Annual Meeting of the Association for Computational Linguistics (ACL), 2021.
    Full Text Code Abstract BibTeX Details
    Generating metaphors is a difficult task as it requires understanding nuanced relationships between abstract concepts. In this paper, we aim to generate a metaphoric sentence given a literal expression by replacing relevant verbs. Guided by conceptual metaphor theory, we propose to control the generation process by encoding conceptual mappings between cognitive domains to generate meaningful metaphoric expressions. To achieve this, we develop two methods: 1) using FrameNet-based embeddings to learn mappings between domains and applying them at the lexical level (CM-Lex), and 2) deriving source/target pairs to train a controlled seq-to-seq generation model (CM-BART). We assess our methods through automatic and human evaluation for basic metaphoricity and conceptual metaphor presence. We show that the unsupervised CM-Lex model is competitive with recent deep learning metaphor generation systems, and CM-BART outperforms all other models both in automatic and human evaluations.
    @inproceedings{stowe2021metaphor,
      title = {Metaphor Generation with Conceptual Mappings},
      author = {Stowe, Kevin and Chakrabarty, Tuhin and Peng, Nanyun and Muresan, Smaranda and Gurevych, Iryna},
      booktitle = {Proceedings of the Conference of the 59th Annual Meeting of the Association for Computational Linguistics (ACL)},
      year = {2021}
    }
    
    Details
  • COM2SENSE: A Commonsense Reasoning Benchmark with Complementary Sentences

    Shikhar Singh, Nuan Wen, Yu Hou, Pegah Alipoormolabashi, Te-Lin Wu, Xuezhe Ma, and Nanyun Peng, in Proceedings of Findings of the Conference of the 59th Annual Meeting of the Association for Computational Linguistics (ACL-Findings), 2021.
    Full Text Code Abstract BibTeX Details
    Commonsense reasoning is intuitive for humans but has been a long-term challenge for artificial intelligence (AI). Recent advancements in pretrained language models have shown promising results on several commonsense benchmark datasets. However, the reliability and comprehensiveness of these benchmarks towards assessing models’ commonsense reasoning ability remain unclear. To this end, we introduce a new commonsense reasoning benchmark dataset comprising natural language true/false statements, with each sample paired with its complementary counterpart, resulting in 4k sentence pairs. We propose a pairwise accuracy metric to reliably measure an agent’s ability to perform commonsense reasoning over a given situation. The dataset is crowdsourced and enhanced with an adversarial model-in-the-loop setup to incentivize challenging samples. To facilitate a systematic analysis of commonsense capabilities, we design our dataset along the dimensions of knowledge domains, reasoning scenarios and numeracy. Experimental results demonstrate that our strongest baseline (UnifiedQA-3B), after fine-tuning, achieves ~71% standard accuracy and ~51% pairwise accuracy, well below human performance (~95% for both metrics).
    @inproceedings{sw2021com,
      title = {COM2SENSE: A Commonsense Reasoning Benchmark with Complementary Sentences},
      author = {Singh, Shikhar and Wen, Nuan and Hou, Yu and Alipoormolabashi, Pegah and Wu, Te-Lin and Ma, Xuezhe and Peng, Nanyun},
      booktitle = {Proceedings of Findings of the Conference of the 59th Annual Meeting of the Association for Computational Linguistics (ACL-Findings)},
      year = {2021}
    }
    
    Details
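    The pairwise accuracy metric above is simple to state precisely: a complementary pair is credited only when both statements in it are judged correctly. A minimal sketch with an illustrative data layout:

    def standard_accuracy(preds, golds):
        return sum(p == g for p, g in zip(preds, golds)) / len(golds)

    def pairwise_accuracy(preds, golds, pair_ids):
        """Credit a pair only when *both* complementary statements are correct."""
        by_pair = {}
        for p, g, pid in zip(preds, golds, pair_ids):
            by_pair.setdefault(pid, []).append(p == g)
        return sum(all(v) for v in by_pair.values()) / len(by_pair)

    # Toy example: two complementary pairs; the model gets one pair fully right.
    preds    = [True, False, True, True]
    golds    = [True, False, True, False]
    pair_ids = [0, 0, 1, 1]
    print(standard_accuracy(preds, golds))            # 0.75
    print(pairwise_accuracy(preds, golds, pair_ids))  # 0.5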
  • "Nice Try, Kiddo": Ad Hominems in Dialogue Systems

    Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, and Nanyun Peng, in The 2021 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), 2021.
    Full Text Video Code Abstract BibTeX Details
    Ad hominem attacks are those that attack some feature of a person’s character instead of the position the person is maintaining. As a form of toxic and abusive language, ad hominems contain harmful language that could further amplify the skew of power inequality for marginalized populations. Since dialogue systems are designed to respond directly to user input, it is important to study ad hominems in these system responses. In this work, we propose categories of ad hominems that allow us to analyze human and dialogue system responses to Twitter posts. We specifically compare responses to Twitter posts about marginalized communities (#BlackLivesMatter, #MeToo) and other topics (#Vegan, #WFH). Furthermore, we propose a constrained decoding technique that uses salient n-gram similarity to apply soft constraints to top-k sampling and can decrease the amount of ad hominems generated by dialogue systems. Our results indicate that 1) responses composed by both humans and DialoGPT contain more ad hominems for discussions around marginalized communities versus other topics, 2) different amounts of ad hominems in the training data can influence the likelihood of the model generating ad hominems, and 3) we can thus carefully choose training data and use constrained decoding techniques to decrease the amount of ad hominems generated by dialogue systems.
    @inproceedings{sheng2021nice,
      title = {"Nice Try, Kiddo": Ad Hominems in Dialogue Systems},
      author = {Sheng, Emily and Chang, Kai-Wei and Natarajan, Premkumar and Peng, Nanyun},
      booktitle = {The 2021 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)},
      publisher = {Association for Computational Linguistics},
      pages = {750--767},
      year = {2021}
    }
    
    Details
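    The constrained decoding idea above can be sketched as top-k sampling with a soft penalty: candidate tokens similar to salient (e.g., ad hominem) n-grams are down-weighted before renormalization. A schematic re-implementation under assumptions, not the authors’ code:

    import math, random

    def constrained_top_k_sample(logits, k, penalty_fn, alpha=5.0):
        """Top-k sampling with a soft penalty on discouraged candidates.

        logits: dict token -> score; penalty_fn: token -> [0, 1] similarity to
        salient n-grams; alpha scales the suppression (alpha=0 is plain top-k).
        """
        top = sorted(logits.items(), key=lambda kv: kv[1], reverse=True)[:k]
        weights = [math.exp(s - alpha * penalty_fn(tok)) for tok, s in top]
        r, acc = random.random() * sum(weights), 0.0
        for (tok, _), w in zip(top, weights):
            acc += w
            if r <= acc:
                return tok
        return top[-1][0]

    # Toy vocabulary scores; the penalty flags one discouraged token.
    logits = {"you're": 2.0, "that": 1.5, "idiot": 1.9, "the": 1.0}
    penalty = lambda tok: 1.0 if tok == "idiot" else 0.0
    print(constrained_top_k_sample(logits, k=3, penalty_fn=penalty))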
  • Plot-guided Adversarial Example Construction for Evaluating Open-domain Story Generation

    Sarik Ghazarian, Zixi Liu, Akash S. M, Ralph Weischedel, Aram Galstyan, and Nanyun Peng, in The 2021 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), 2021.
    Full Text Slides Code Abstract BibTeX Details
    With the recent advances of open-domain story generation models, the lack of reliable automatic evaluation metrics becomes an increasingly imperative issue that hinders the development of such models. A critical bottleneck of obtaining a trustworthy learnable evaluation metric is the lack of high-quality training data for learning classifiers to efficiently distinguish between plausible and implausible machine-generated stories. Previous works relied on heuristically manipulating plausible examples to mimic possible system drawbacks such as repetition, contradiction, or irrelevant content at the text level, which can be unnatural and can oversimplify the characteristics of implausible machine-generated stories. We propose to tackle these issues by generating a more comprehensive set of implausible stories using plots, which are structured representations of controllable factors used to generate stories. Since these plots are compact and structured, it is easier to manipulate them to generate text with targeted undesirable properties, while at the same time maintaining the naturalness of the generation. To improve the quality of incoherent stories, we further apply the adversarial filtering procedure to select a more nuanced set of implausible texts. We find that the evaluation metrics trained on our generated data result in more reliable automatic assessments that correlate remarkably better with human judgments than other baselines.
    @inproceedings{ghazarian2021plot,
      title = {Plot-guided Adversarial Example Construction for Evaluating Open-domain Story Generation},
      author = {Ghazarian, Sarik and Liu, Zixi and M, Akash S and Weischedel, Ralph and Galstyan, Aram and Peng, Nanyun},
      booktitle = {The 2021 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)},
      publisher = {Association for Computational Linguistics},
      pages = {4334--4344},
      year = {2021}
    }
    
    Details
  • MERMAID: Metaphor Generation with Symbolism and Discriminative Decoding

    Tuhin Chakrabarty, Xurui Zhang, Smaranda Muresan, and Nanyun Peng, in The 2021 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), 2021.
    Full Text Poster Code Abstract BibTeX Details
    Generating metaphors is a challenging task as it requires a proper understanding of abstract concepts, making connections between unrelated concepts, and deviating from the literal meaning. In this paper, we aim to generate a metaphoric sentence given a literal expression by replacing relevant verbs. Based on a theoretically-grounded connection between metaphors and symbols, we propose a method to automatically construct a parallel corpus by transforming a large number of metaphorical sentences from the Gutenberg Poetry corpus to their literal counterpart using recent advances in masked language modeling coupled with commonsense inference. For the generation task, we incorporate a metaphor discriminator to guide the decoding of a sequence to sequence model fine-tuned on our parallel data to generate high-quality metaphors. Human evaluation on an independent test set of literal statements shows that our best model generates metaphors better than three well-crafted baselines 66% of the time on average. A task-based evaluation shows that human-written poems enhanced with metaphors proposed by our model are preferred 68% of the time compared to poems without metaphors.
    @inproceedings{chakrabarty2021mermaid,
      title = {MERMAID: Metaphor Generation with Symbolism and Discriminative Decoding},
      author = {Chakrabarty, Tuhin and Zhang, Xurui and Muresan, Smaranda and Peng, Nanyun},
      booktitle = {The 2021 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)},
      talk_url = {https://underline.io/events/122/sessions/4240/lecture/19642-mermaid-metaphor-generation-with-symbolism-and-discriminative-decoding},
      year = {2021}
    }
    
    Details
  • DiSCoL: Toward Engaging Dialogue Systems through Conversational Line Guided Response Generation

    Sarik Ghazarian, Zixi Liu, Tuhin Chakrabarty, Xuezhe Ma, Aram Galstyan, and Nanyun Peng, in 2021 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), Demonstrations Track, 2021.
    Full Text Code Abstract BibTeX Details
    Having engaging and informative conversations with users is the utmost goal for open-domain conversational systems. Recent advances in transformer-based language models and their applications to dialogue systems have succeeded in generating fluent and human-like responses. However, they still lack control over the generation process towards producing contentful responses and achieving engaging conversations. To achieve this goal, we present DiSCoL (Dialogue Systems through Conversational Line guided response generation). DiSCoL is an open-domain dialogue system that leverages conversational lines (briefly, convlines) as controllable and informative content-planning elements to guide the generation model to produce engaging and informative responses. Two primary modules in DiSCoL’s pipeline are conditional generators trained for 1) predicting relevant and informative convlines for dialogue contexts and 2) generating high-quality responses conditioned on the predicted convlines. Users can also change the returned convlines to control the direction of the conversations towards topics that are more interesting for them. Through automatic and human evaluations, we demonstrate the efficiency of the convlines in producing engaging conversations.
    @inproceedings{ghazarian2021discol,
      title = {DiSCoL: Toward Engaging Dialogue Systems through Conversational Line Guided Response Generation},
      author = {Ghazarian, Sarik and Liu, Zixi and Chakrabarty, Tuhin and Ma, Xuezhe and Galstyan, Aram and Peng, Nanyun},
      booktitle = {2021 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), Demonstrations Track},
      pages = {26--34},
      publisher = {Association for Computational Linguistics},
      year = {2021}
    }
    
    Details
  • EventPlus: A Temporal Event Understanding Pipeline

    Mingyu Derek Ma, Jiao Sun, Mu Yang, Kung-Hsiang Huang, Nuan Wen, Shikhar Singh, Rujun Han, and Nanyun Peng, in 2021 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), Demonstrations Track, 2021.
    Full Text Slides Poster Video Code Abstract BibTeX Details
    We present EventPlus, a temporal event understanding pipeline that integrates various state-of-the-art event understanding components including event trigger and type detection, event argument detection, event duration and temporal relation extraction. Event information, especially event temporal knowledge, is a type of common sense knowledge that helps people understand how stories evolve and provides predictive hints for future events. EventPlus, as the first comprehensive temporal event understanding pipeline, provides a convenient tool for users to quickly obtain annotations about events and their temporal information for any user-provided document. Furthermore, we show EventPlus can be easily adapted to other domains (e.g., the biomedical domain). We make EventPlus publicly available to facilitate event-related information extraction and downstream applications.
    @inproceedings{ma2021eventplus,
      title = {EventPlus: A Temporal Event Understanding Pipeline},
      author = {Ma, Mingyu Derek and Sun, Jiao and Yang, Mu and Huang, Kung-Hsiang and Wen, Nuan and Singh, Shikhar and Han, Rujun and Peng, Nanyun},
      booktitle = {2021 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), Demonstrations Track},
      year = {2021}
    }
    
    Details
  • Identifying Distributional Perspective Differences from Colingual Groups

    Yufei Tian, Tuhin Chakrabarty, Fred Morstatter, and Nanyun Peng, in NAACL 2021 Workshop of Social NLP, 2021.
    Full Text Code Abstract BibTeX Details
    Perspective differences exist among different cultures or languages. A lack of mutual understanding among different groups about their perspectives on specific values or events may lead to uninformed decisions or biased opinions. Automatically understanding the group perspectives can provide essential background for many downstream applications of natural language processing techniques. In this paper, we study colingual groups and use language corpora as a proxy to identify their distributional perspectives. We present a novel computational approach to learn shared understandings, and benchmark our method by building culturally-aware models for the English, Chinese, and Japanese languages. On a held-out set of diverse topics including marriage, corruption, and democracy, our model achieves high correlation with human judgements regarding intra-group values and inter-group differences.
    @inproceedings{tian2021identifying,
      title = {Identifying Distributional Perspective Differences from Colingual Groups},
      author = {Tian, Yufei and Chakrabarty, Tuhin and Morstatter, Fred and Peng, Nanyun},
      booktitle = {NAACL 2021 Workshop of Social NLP},
      year = {2021}
    }
    
    Details
  • Document-level Event Extraction with Efficient End-to-end Learning of Cross-event Dependencies

    Kung-Hsiang Huang and Nanyun Peng, in The 3rd Workshop on Narrative Understanding (NAACL 2021), 2021.
    Full Text Abstract BibTeX Details
    Fully understanding narratives often requires identifying events in the context of whole documents and modeling the event relations. However, document-level event extraction is a challenging task as it requires the extraction of event and entity coreference, and capturing arguments that span across different sentences. Existing works on event extraction are usually confined to extracting events from single sentences, failing to capture the relationships between event mentions at the scale of a document, as well as event arguments that appear in a different sentence than the event trigger. In this paper, we propose an end-to-end model leveraging Deep Value Networks (DVN), a structured prediction algorithm, to efficiently capture cross-event dependencies for document-level event extraction. Experimental results show that our approach achieves comparable performance to CRF-based models on ACE05, while enjoying significantly higher computational efficiency.
    @inproceedings{huang2021document,
      title = {Document-level Event Extraction with Efficient End-to-end Learning of Cross-event Dependencies},
      author = {Huang, Kung-Hsiang and Peng, Nanyun},
      booktitle = {The 3rd Workshop on Narrative Understanding (NAACL 2021)},
      year = {2021}
    }
    
    Details
  • Discourse Tagging for Scientific Evidence Extraction

    Xiangci Li, Gully Burns, and Nanyun Peng, in The 16th Conference of the European Chapter of the Association for Computational Linguistics (EACL), 2021.
    Full Text Code Abstract BibTeX Details
    Evidence plays a crucial role in any biomedical research narrative, providing justification for some claims and refutation for others. We seek to build models of scientific argument using information extraction methods from full-text papers. We present the capability of automatically extracting text fragments from primary research papers that describe the evidence presented in that paper’s figures, which arguably provides the raw material of any scientific argument made within the paper. We apply richly contextualized deep representation learning, pre-trained on a biomedical domain corpus, to the analysis of scientific discourse structures and the extraction of "evidence fragments" (i.e., the text in the results section describing data presented in a specified subfigure) from a set of biomedical experimental research articles. We first demonstrate our state-of-the-art scientific discourse tagger on two scientific discourse tagging datasets and its transferability to new datasets. We then show the benefit of leveraging scientific discourse tags for downstream tasks such as claim-extraction and evidence fragment detection. Our work demonstrates the potential of using evidence fragments derived from figure spans for improving the quality of scientific claims by cataloging, indexing and reusing evidence fragments as independent documents.
    @inproceedings{li2021discourse,
      title = {Discourse Tagging for Scientific Evidence Extraction},
      author = {Li, Xiangci and Burns, Gully and Peng, Nanyun},
      booktitle = {The 16th Conference of the European Chapter of the Association for Computational Linguistics (EACL)},
      year = {2021}
    }
    
    Details
  • MELINDA: A Multimodal Dataset for Biomedical Experiment Method Classification

    Te-Lin Wu, Shikhar Singh, Sayan Paul, Gully Burns, and Nanyun Peng, in The Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI-21), 2021.
    Full Text Code Abstract BibTeX Details
    We introduce a new dataset, MELINDA, for Multimodal Biomedical Experiment Method Classification. The dataset is collected in a fully automated distant supervision manner, where the labels are obtained from an existing curated database, and the actual contents are extracted from papers associated with each of the records in the database. We benchmark various state-of-the-art NLP and computer vision models, including unimodal models which only take either caption texts or images as inputs, and multimodal models. Our extensive experimental results show that multimodal models, despite outperforming other benchmarked models, require certain improvements, especially a less-supervised way of grounding visual concepts with language and better transfer learning for low-resource tasks. We release our dataset and the benchmarks to facilitate future research in multimodal learning, especially to motivate targeted improvements for applications in scientific domains.
    @inproceedings{wu2021melinda,
      title = {MELINDA: A Multimodal Dataset for Biomedical Experiment Method Classification},
      author = {Wu, Te-Lin and Singh, Shikhar and Paul, Sayan and Burns, Gully and Peng, Nanyun},
      booktitle = {The Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI-21)},
      year = {2021}
    }
    
    Details
  • GATE: Graph Attention Transformer Encoder for Cross-lingual Relation and Event Extraction

    Wasi Ahmad, Nanyun Peng, and Kai-Wei Chang, in The Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI-21), 2021.
    Full Text Code Abstract BibTeX Details
    Prevalent approaches in cross-lingual relation and event extraction use graph convolutional networks (GCNs) with universal dependency parses to learn language-agnostic representations such that models trained on one language can be applied to other languages. However, GCNs fall short in modeling long-range dependencies and disconnected words in the dependency tree. To address this challenge, we propose to utilize the self-attention mechanism where we explicitly fuse structural information to learn the dependencies between words at different syntactic distances. We introduce GATE, a Graph Attention Transformer Encoder, and test its cross-lingual transferability on relation and event extraction tasks. We perform rigorous experiments on the widely used ACE05 dataset that includes three typologically different languages: English, Chinese, and Arabic. The evaluation results show that GATE outperforms three recently proposed methods by a large margin. Our detailed analysis reveals that due to the reliance on syntactic dependencies, GATE produces robust representations that facilitate transfer across languages.
    @inproceedings{ahmad2021gate,
      author = {Ahmad, Wasi and Peng, Nanyun and Chang, Kai-Wei},
      title = {GATE: Graph Attention Transformer Encoder for Cross-lingual Relation and Event Extraction},
      booktitle = {The Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI-21)},
      year = {2021}
    }
    
    Details
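    One way to picture the structural fusion described above: bias the self-attention logits by pairwise syntactic distance, so words far apart in the dependency tree attend to each other less. A single-head numpy sketch under assumptions, not the GATE architecture verbatim:

    import numpy as np

    def distance_biased_attention(Q, K, V, dist, beta=0.5):
        """Self-attention whose logits are penalized by syntactic distance.

        Q, K, V: (n, d) arrays; dist: (n, n) dependency-tree distances;
        beta controls how strongly distant words are down-weighted.
        """
        d = Q.shape[-1]
        logits = Q @ K.T / np.sqrt(d) - beta * dist
        weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        return weights @ V

    rng = np.random.default_rng(0)
    n, d = 4, 8
    Q, K, V = rng.normal(size=(3, n, d))
    # Toy pairwise dependency-tree distances for a 4-word sentence.
    dist = np.array([[0, 1, 2, 3],
                     [1, 0, 1, 2],
                     [2, 1, 0, 1],
                     [3, 2, 1, 0]], dtype=float)
    print(distance_biased_attention(Q, K, V, dist).shape)  # (4, 8)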
  • A Paragraph-level Multi-task Learning Model for Scientific Fact-Verification

    Xiangci Li, Gully Burns, and Nanyun Peng, in Scientific Document Understanding Workshop at the Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI-21), 2021.
    Full Text Code Abstract BibTeX Details
    Even for domain experts, it is a non-trivial task to verify a scientific claim by providing supporting or refuting evidence rationales. The situation worsens as misinformation proliferates on social media and news websites, manually or programmatically, at every moment. As a result, an automatic fact-verification tool becomes crucial for combating the spread of misinformation. In this work, we propose a novel, paragraph-level, multi-task learning model for the SciFact task by directly computing a sequence of contextualized sentence embeddings from a BERT model and jointly training the model on rationale selection and stance prediction.
    @inproceedings{li2021paragraph,
      title = {A Paragraph-level Multi-task Learning Model for Scientific Fact-Verification},
      author = {Li, Xiangci and Burns, Gully and Peng, Nanyun},
      booktitle = {Scientific Document Understanding Workshop at the Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI-21)},
      year = {2021}
    }
    
    Details

2020

  • Content Planning for Neural Story Generation with Aristotelian Rescoring

    Seraphina Goldfarb-Tarrant, Tuhin Chakrabarty, Ralph Weischedel, and Nanyun Peng, in the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020.
    Full Text Slides Code Abstract BibTeX Details
    Long-form narrative text generated from large language models manages a fluent impersonation of human writing, but only at the local sentence level, and lacks structure or global cohesion. We posit that many of the problems of story generation can be addressed via high-quality content planning, and present a system that focuses on how to learn good plot structures to guide story generation. We utilize a plot-generation language model along with an ensemble of rescoring models that each implement an aspect of good story-writing as detailed in Aristotle’s Poetics. We find that stories written with our more principled plot structure are both more relevant to a given prompt and of higher quality than baselines that do not content plan, or that plan in an unprincipled way.
    @inproceedings{goldfarb2020content,
      title = {Content Planning for Neural Story Generation with Aristotelian Rescoring},
      author = {Goldfarb-Tarrant, Seraphina and Chakrabarty, Tuhin and Weischedel, Ralph and Peng, Nanyun},
      booktitle = {the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
      pages = {4319--4338},
      slideslive_id = {38939240},
      year = {2020}
    }
    
    Details
  • Generating similes effortlessly like a Pro: A Style Transfer Approach for Simile Generation

    Tuhin Chakrabarty, Smaranda Muresan, and Nanyun Peng, in Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020.
    Full Text Slides Code Abstract BibTeX Details
    Literary tropes, from poetry to stories, are at the crux of human imagination and communication. Figurative language, such as a simile, goes beyond plain expressions to give readers new insights and inspirations. We tackle the problem of simile generation. Generating a simile requires proper understanding for effective mapping of properties between two concepts. To this end, we first propose a method to automatically construct a parallel corpus by transforming a large number of similes collected from Reddit to their literal counterpart using structured common sense knowledge. We then fine-tune a pretrained sequence-to-sequence model, BART (Lewis et al., 2019), on the literal-simile pairs to generate novel similes given a literal sentence. Experiments show that our approach generates 88% novel similes that do not share properties with the training data. Human evaluation on an independent set of literal statements shows that our model generates similes better than two literary experts 37% of the time, and better than three baseline systems including a recent metaphor generation model 71% of the time when compared pairwise. We also show how replacing literal sentences with similes from our best model in machine-generated stories improves evocativeness and leads to better acceptance by human judges.
    @inproceedings{chakrabarty-etal-2020-generating,
      title = {Generating similes effortlessly like a Pro: A Style Transfer Approach for Simile Generation},
      author = {Chakrabarty, Tuhin and Muresan, Smaranda and Peng, Nanyun},
      booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
      pages = {6455--6469},
      publisher = {Association for Computational Linguistics},
      slideslive_id = {38938962},
      year = {2020}
    }
    
    Details
  • Domain Knowledge Empowered Structured Neural Net for End-to-End Event Temporal Relation Extraction

    Rujun Han, Yichao Zhou, and Nanyun Peng, in the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020.
    Full Text Slides Code Abstract BibTeX Details
    Extracting event temporal relations is a critical task for information extraction and plays an important role in natural language understanding. Prior systems leverage deep learning and pre-trained language models to improve the performance of the task. However, these systems often suffer from two shortcomings: 1) when performing maximum a posteriori (MAP) inference based on neural models, previous systems only used structured knowledge that is assumed to be absolutely correct, i.e., hard constraints; 2) biased predictions on dominant temporal relations when training with a limited amount of data. To address these issues, we propose a framework that enhances deep neural networks with distributional constraints constructed from probabilistic domain knowledge. We solve the constrained inference problem via Lagrangian Relaxation and apply it to end-to-end event temporal relation extraction tasks. Experimental results show our framework is able to improve the baseline neural network models with strong statistical significance on two widely used datasets in news and clinical domains.
    @inproceedings{han2020knowledge,
      title = {Domain Knowledge Empowered Structured Neural Net for End-to-End Event Temporal Relation Extraction},
      author = {Han, Rujun and Zhou, Yichao and Peng, Nanyun},
      booktitle = {the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
      publisher = {Association for Computational Linguistics},
      pages = {5717--5729},
      slideslive_id = {38939236},
      year = {2020}
    }
    
    Details
  • TORQUE: A Reading Comprehension Dataset of Temporal Ordering Questions

    Qiang Ning, Hao Wu, Rujun Han, Nanyun Peng, Matt Gardner, and Dan Roth, in the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020.
    Full Text Code Abstract BibTeX Details
    A critical part of reading is being able to understand the temporal relationships between events described in a passage of text, even when those relationships are not explicitly stated. However, current machine reading comprehension benchmarks have practically no questions that test temporal phenomena, so systems trained on these benchmarks have no capacity to answer questions such as "what happened before/after [some event]?" We introduce TORQUE, a new English reading comprehension benchmark built on 3.2k news snippets with 21k human-generated questions querying temporal relationships. Results show that RoBERTa-large achieves an exact-match score of 51% on the test set of TORQUE, about 30% behind human performance.
    @inproceedings{ning2020torque,
      title = {TORQUE: A Reading Comprehension Dataset of Temporal Ordering Questions},
      author = {Ning, Qiang and Wu, Hao and Han, Rujun and Peng, Nanyun and Gardner, Matt and Roth, Dan},
      booktitle = {the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
      publisher = {Association for Computational Linguistics},
      pages = {1158--1172},
      slideslive_id = {38938807},
      year = {2020}
    }
    
    Details
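    For concreteness, the token-based exact-match and F1 scores reported above follow the usual reading-comprehension recipe: EM requires the predicted answer tokens to match the gold tokens exactly, while F1 credits partial token overlap. A minimal single-answer sketch; the official evaluation script may differ in details.

    from collections import Counter

    def exact_match(pred_tokens, gold_tokens):
        return float(pred_tokens == gold_tokens)

    def token_f1(pred_tokens, gold_tokens):
        """Token-level F1 between predicted and gold answer tokens."""
        common = Counter(pred_tokens) & Counter(gold_tokens)
        overlap = sum(common.values())
        if overlap == 0:
            return 0.0
        precision = overlap / len(pred_tokens)
        recall = overlap / len(gold_tokens)
        return 2 * precision * recall / (precision + recall)

    pred = "the announcement the rally".split()
    gold = "the announcement".split()
    print(exact_match(pred, gold))  # 0.0
    print(token_f1(pred, gold))     # ~0.67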
  • STORIUM: A Dataset and Evaluation Platform for Machine-in-the-Loop Story Generation

    Nader Akoury, Shufan Wang, Josh Whiting, Stephen Hood, Nanyun Peng, and Mohit Iyyer, in the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020.
    Full Text Code Abstract BibTeX Details
    Systems for story generation are asked to produce plausible and enjoyable stories given an input context. This task is underspecified, as a vast number of diverse stories can originate from a single input. The large output space makes it difficult to build and evaluate story generation models, as (1) existing datasets lack rich enough contexts to meaningfully guide models, and (2) existing evaluations (both crowdsourced and automatic) are unreliable for assessing long-form creative text. To address these issues, we introduce a dataset and evaluation platform built from STORIUM, an online collaborative storytelling community. Our author-generated dataset contains 6K lengthy stories (125M tokens) with fine-grained natural language annotations (e.g., character goals and attributes) interspersed throughout each narrative, forming a robust source for guiding models. We evaluate language models fine-tuned on our dataset by integrating them onto STORIUM, where real authors can query a model for suggested story continuations and then edit them. Automatic metrics computed over these edits correlate well with both user ratings of generated stories and qualitative feedback from semi-structured user interviews. We release both the STORIUM dataset and evaluation platform to spur more principled research into story generation.
    @inproceedings{akoury2020storium,
      title = {STORIUM: A Dataset and Evaluation Platform for Machine-in-the-Loop Story Generation},
      author = {Akoury, Nader and Wang, Shufan and Whiting, Josh and Hood, Stephen and Peng, Nanyun and Iyyer, Mohit},
      booktitle = {the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
      slideslive_id = {38939010},
      year = {2020}
    }
    
    Details
  • Towards Controllable Biases in Language Generation

    Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, and Nanyun Peng, in the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)-Findings, long, 2020.
    Full Text Poster Code Abstract BibTeX Details
    We present a general approach towards controllable societal biases in natural language generation (NLG). Building upon the idea of adversarial triggers, we develop a method to induce societal biases in generated text when input prompts contain mentions of specific demographic groups. We then analyze two scenarios: 1) inducing negative biases for one demographic and positive biases for another demographic, and 2) equalizing biases between demographics. The former scenario enables us to detect the types of biases present in the model. Specifically, we show the effectiveness of our approach at facilitating bias analysis by finding topics that correspond to demographic inequalities in generated text and comparing the relative effectiveness of inducing biases for different demographics. The second scenario is useful for mitigating biases in downstream applications such as dialogue generation. In our experiments, the mitigation technique proves to be effective at equalizing the amount of biases across demographics while simultaneously generating less negatively biased text overall.
    @inproceedings{sheng2020towards,
      title = {Towards Controllable Biases in Language Generation},
      author = {Sheng, Emily and Chang, Kai-Wei and Natarajan, Premkumar and Peng, Nanyun},
      booktitle = {the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)-Findings, long},
      year = {2020}
    }
    
    Details
  • Biomedical Event Extraction with Hierarchical Knowledge Graphs

    Kung-Hsiang Huang, Mu Yang, and Nanyun Peng, in the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)-Findings, short, 2020.
    Full Text Slides Code Abstract BibTeX Details
    Biomedical event extraction is critical in understanding biomolecular interactions described in scientific corpora. One of the main challenges is to identify nested structured events that are associated with non-indicative trigger words. We propose to incorporate domain knowledge from the Unified Medical Language System (UMLS) into a pre-trained language model via a hierarchical graph representation encoded by our proposed Graph Edge-conditioned Attention Networks (GEANet). To better recognize the trigger words, each sentence is first grounded to a sentence graph based on a jointly modeled hierarchical knowledge graph from UMLS. The grounded graphs are then propagated by GEANet, a novel graph neural network for enhanced capabilities in inferring complex events. On the BioNLP 2011 GENIA Event Extraction task, our approach achieved 1.41% F1 and 3.19% F1 improvements on all events and complex events, respectively. Ablation studies confirm the importance of GEANet and the hierarchical KG.
    @inproceedings{huang2020event,
      title = {Biomedical Event Extraction with Hierarchical Knowledge Graphs},
      author = {Huang, Kung-Hsiang and Yang, Mu and Peng, Nanyun},
      booktitle = {the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)-Findings, short},
      slideslive_id = {38940169},
      year = {2020}
    }
    
    Details
  • Connecting the Dots: A Knowledgeable Path Generator for Commonsense Question Answering

    Peifeng Wang, Nanyun Peng, Filip Ilievski, Pedro Szekely, and Xiang Ren, in the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)-Findings, 2020.
    Full Text Code Abstract BibTeX Details
    Commonsense question answering (QA) requires background knowledge which is not explicitly stated in a given context. Prior works use commonsense knowledge graphs (KGs) to obtain this knowledge for reasoning. However, relying entirely on these KGs may not suffice, considering their limited coverage and the contextual dependence of their knowledge. In this paper, we augment a general commonsense QA framework with a knowledgeable path generator. By extrapolating over existing paths in a KG with a state-of-the-art language model, our generator learns to connect a pair of entities in text with a dynamic, and potentially novel, multi-hop relational path. Such paths can provide structured evidence for solving commonsense questions without fine-tuning the path generator. Experiments on two datasets show the superiority of our method over previous works which fully rely on knowledge from KGs (with up to 6% improvement in accuracy), across various amounts of training data. Further evaluation suggests that the generated paths are typically interpretable, novel, and relevant to the task.
    @inproceedings{wang2020connecting,
      title = {Connecting the Dots: A Knowledgeable Path Generator for Commonsense Question Answering},
      author = {Wang, Peifeng and Peng, Nanyun and Ilievski, Filip and Szekely, Pedro and Ren, Xiang},
      booktitle = {the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)-Findings},
      pages = {4129--4140},
      year = {2020}
    }
    
    Details
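    The data-side idea, flattening symbolic KG paths into token sequences so that a pretrained LM can be fine-tuned to continue a (head, tail) query with a relational path, can be sketched as follows; the separator format and relation names are illustrative assumptions, not the paper's exact serialization.

    def serialize_path(path):
        """path alternates entities and relations, e.g.
        ["fork", "AtLocation", "kitchen", "UsedFor", "eating"]."""
        return " ".join(path)

    def make_training_example(path):
        head, tail = path[0], path[-1]
        prompt = f"{head} -> {tail} :"   # query: connect these two entities
        target = serialize_path(path)    # supervision: the full multi-hop path
        return prompt, target

    print(make_training_example(
        ["fork", "AtLocation", "kitchen", "UsedFor", "eating"]))
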
  • R3: Reverse, Retrieve, and Rank for Sarcasm Generation with Commonsense Knowledge

    Tuhin Chakrabarty, Debanjan Ghosh, Smaranda Muresan, and Nanyun Peng, in the 2020 Annual Conference of the Association for Computational Linguistics (ACL), 2020.
    Full Text Code BibTeX Details
    @inproceedings{chakrabarty2020r,
      title = {R3: Reverse, Retrieve, and Rank for Sarcasm Generation with Commonsense Knowledge},
      author = {Chakrabarty, Tuhin and Ghosh, Debanjan and Muresan, Smaranda and Peng, Nanyun},
      booktitle = {the 2020 Annual Conference of the Association for Computational Linguistics (ACL)},
      year = {2020}
    }
    
    Details
  • Predictive Engagement: An Efficient Metric For Automatic Evaluation of Open-Domain Dialogue Systems

    Sarik Ghazarian, Ralph Weischedel, Aram Galstyan, and Nanyun Peng, in The Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20), 2020.
    Full Text Code Abstract BibTeX Details
    User engagement is a critical metric for evaluating the quality of open-domain dialogue systems. Prior work has focused on conversation-level engagement by using heuristically constructed features such as the number of turns and the total time of the conversation. In this paper, we investigate the possibility and efficacy of estimating utterance-level engagement and define a novel metric, predictive engagement, for automatic evaluation of open-domain dialogue systems. Our experiments demonstrate that (1) human annotators have high agreement on assessing utterance-level engagement scores; (2) conversation-level engagement scores can be predicted from properly aggregated utterance-level engagement scores. Furthermore, we show that the utterance-level engagement scores can be learned from data. These scores can be incorporated into automatic evaluation metrics for open-domain dialogue systems to improve the correlation with human judgements. This suggests that predictive engagement can be used as real-time feedback for training better dialogue models.
    @inproceedings{ghazarian2020predictive,
      title = {Predictive Engagement: An Efficient Metric For Automatic Evaluation of Open-Domain Dialogue Systems},
      author = {Ghazarian, Sarik and Weischedel, Ralph and Galstyan, Aram and Peng, Nanyun},
      booktitle = {The Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20)},
      pages = {7789--7796},
      year = {2020}
    }
    
    Details
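    A minimal sketch of the aggregation step described above, with the learned utterance-level scorer stubbed out by a toy heuristic; both the heuristic and the mean-pooling choice are assumptions for illustration only.

    from statistics import mean

    def utterance_engagement(utterance: str) -> float:
        # stand-in for the learned utterance-level scorer in the paper
        return min(1.0, len(set(utterance.split())) / 10)

    def conversation_engagement(utterances: list) -> float:
        # conversation-level score as an aggregate of utterance-level scores
        return mean(utterance_engagement(u) for u in utterances)

    print(conversation_engagement(["Hi!", "Tell me about your favorite movie."]))
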
  • Enabling Low-Resource Transfer Learning across COVID-19 Corpora by Combining Event-Extraction and Co-Training

    Alexander Spangher, Nanyun Peng, Jonathan May, and Emilio Ferrara, in ACL 2020 Workshop on Natural Language Processing for COVID-19 (NLP-COVID), 2020.
    Full Text BibTeX Details
    @inproceedings{spangher2020enabling,
      title = {Enabling Low-Resource Transfer Learning across COVID-19 Corpora by Combining Event-Extraction and Co-Training},
      author = {Spangher, Alexander and Peng, Nanyun and May, Jonathan and Ferrara, Emilio},
      booktitle = {ACL 2020 Workshop on Natural Language Processing for COVID-19 (NLP-COVID)},
      year = {2020}
    }
    
    Details
  • Man is to person as woman is to location: Measuring gender bias in named entity recognition

    Ninareh Mehrabi, Thamme Gowda, Fred Morstatter, Nanyun Peng, and Aram Galstyan, in 31st ACM Conference on Hypertext and Social Media (HT’20), 2020.
    Full Text BibTeX Details
    @inproceedings{mehrabi2020man,
      title = {Man is to person as woman is to location: Measuring gender bias in named entity recognition},
      author = {Mehrabi, Ninareh and Gowda, Thamme and Morstatter, Fred and Peng, Nanyun and Galstyan, Aram},
      booktitle = {31st ACM Conference on Hypertext and Social Media (HT’20)},
      year = {2020}
    }
    
    Details

2019

  • Joint Event and Temporal Relation Extraction with Shared Representations and Structured Prediction

    Rujun Han, Qiang Ning, and Nanyun Peng, in 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2019.
    Full Text Poster Code BibTeX Details
    @inproceedings{han2019joint,
      title = {Joint Event and Temporal Relation Extraction with Shared Representations and Structured Prediction},
      author = {Han, Rujun and Ning, Qiang and Peng, Nanyun},
      booktitle = {2019 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
      year = {2019}
    }
    
    Details
  • The Woman Worked as a Babysitter: On Biases in Language Generation

    Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, and Nanyun Peng, in 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP), short, 2019.
    Full Text BibTeX Details
    @inproceedings{sheng2019woman,
      title = {The Woman Worked as a Babysitter: On Biases in Language Generation},
      author = {Sheng, Emily and Chang, Kai-Wei and Natarajan, Premkumar and Peng, Nanyun},
      booktitle = {2019 Conference on Empirical Methods in Natural Language Processing (EMNLP), short},
      year = {2019}
    }
    
    Details
  • Target Language-Aware Constrained Inference for Cross-lingual Dependency Parsing

    Tao Meng, Nanyun Peng, and Kai-Wei Chang, in 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2019.
    Full Text BibTeX Details
    @inproceedings{meng2019target,
      title = {Target Language-Aware Constrained Inference for Cross-lingual Dependency Parsing},
      author = {Meng, Tao and Peng, Nanyun and Chang, Kai-Wei},
      booktitle = {2019 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
      year = {2019}
    }
    
    Details
  • What Matters for Neural Cross-Lingual Named Entity Recognition: An Empirical Analysis

    Xiaolei Huang, Jonathan May, and Nanyun Peng, in 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP), short, 2019.
    Full Text BibTeX Details
    @inproceedings{huang2019matters,
      title = {What Matters for Neural Cross-Lingual Named Entity Recognition: An Empirical Analysis},
      author = {Huang, Xiaolei and May, Jonathan and Peng, Nanyun},
      booktitle = {2019 Conference on Empirical Methods in Natural Language Processing (EMNLP), short},
      year = {2019}
    }
    
    Details
  • Do Nuclear Submarines Have Nuclear Captains? A Challenge Dataset for Commonsense Reasoning over Adjectives and Objects

    James Mullenbach, Jonathan Gordon, Nanyun Peng, and Jonathan May, in Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), short, 2019.
    Full Text BibTeX Details
    @inproceedings{mullenbach2019nuclear,
      title = {Do Nuclear Submarines Have Nuclear Captains? A Challenge Dataset for Commonsense Reasoning over Adjectives and Objects},
      author = {Mullenbach, James and Gordon, Jonathan and Peng, Nanyun and May, Jonathan},
      booktitle = {Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), short},
      pages = {6054--6060},
      year = {2019}
    }
    
    Details
  • Deep Structured Neural Network for Event Temporal Relation Extraction

    Rujun Han, I.-Hung Hsu, Mu Yang, Aram Galstyan, Ralph Weischedel, and Nanyun Peng, in The 2019 SIGNLL Conference on Computational Natural Language Learning (CoNLL), 2019.
    Full Text Code BibTeX Details
    @inproceedings{han2019deep,
      title = {Deep Structured Neural Network for Event Temporal Relation Extraction},
      author = {Han, Rujun and Hsu, I-Hung and Yang, Mu and Galstyan, Aram and Weischedel, Ralph and Peng, Nanyun},
      booktitle = {The 2019 SIGNLL Conference on Computational Natural Language Learning (CoNLL)},
      year = {2019}
    }
    
    Details
  • Cross-lingual Dependency Parsing with Unlabeled Auxiliary Languages

    Wasi Uddin Ahmad, Zhisong Zhang, Xuezhe Ma, Kai-Wei Chang, and Nanyun Peng, in The 2019 SIGNLL Conference on Computational Natural Language Learning (CoNLL), 2019.
    Full Text BibTeX Details
    @inproceedings{ahmad2019cross,
      title = {Cross-lingual Dependency Parsing with Unlabeled Auxiliary Languages},
      author = {Ahmad, Wasi Uddin and Zhang, Zhisong and Ma, Xuezhe and Chang, Kai-Wei and Peng, Nanyun},
      booktitle = {The 2019 SIGNLL Conference on Computational Natural Language Learning (CoNLL)},
      year = {2019}
    }
    
    Details
  • Learning A Unified Named Entity Tagger From Multiple Partially Annotated Corpora For Efficient Adaptation

    Xiao Huang, Li Dong, Elizabeth Boschee, and Nanyun Peng, in The 2019 SIGNLL Conference on Computational Natural Language Learning (CoNLL), 2019.
    Full Text Code Abstract BibTeX Details
    Named entity recognition (NER) identifies typed entity mentions in raw text. While the task is well-established, there is no universally used tagset: often, datasets are annotated for use in downstream applications and accordingly only cover a small set of entity types relevant to a particular task. For instance, in the biomedical domain, one corpus might annotate genes, another chemicals, and another diseases, despite the texts in each corpus containing references to all three types of entities. In this paper, we propose a deep structured model to integrate these “partially annotated” datasets to jointly identify all entity types appearing in the training corpora. By leveraging multiple datasets, the model can learn robust input representations; by building a joint structured model, it avoids potential conflicts caused by combining several models’ predictions at test time. Experiments show that the proposed model significantly outperforms strong multi-task learning baselines when training on multiple, partially annotated datasets and testing on datasets that contain tags from more than one of the training corpora.
    @inproceedings{huang2019learning,
      title = {Learning A Unified Named Entity Tagger From Multiple Partially Annotated Corpora For Efficient Adaptation},
      author = {Huang, Xiao and Dong, Li and Boschee, Elizabeth and Peng, Nanyun},
      booktitle = {The 2019 SIGNLL Conference on Computational Natural Language Learning (CoNLL)},
      year = {2019}
    }
    
    Details
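    The core bookkeeping, i.e., remembering which entity types each corpus can actually supervise so that an unannotated type is not mistaken for a confident "O", can be sketched as below; the corpus and type names are illustrative.

    UNION_TYPES = ["GENE", "CHEMICAL", "DISEASE"]          # union tagset
    CORPUS_TYPES = {"corpusA": {"GENE"},                   # each corpus annotates
                    "corpusB": {"CHEMICAL", "DISEASE"}}    # only a subset

    def supervision_mask(corpus: str) -> dict:
        # True: this corpus's labels are trustworthy for the type;
        # False: a structured model should marginalize over the type
        known = CORPUS_TYPES[corpus]
        return {t: t in known for t in UNION_TYPES}

    print(supervision_mask("corpusA"))
    # {'GENE': True, 'CHEMICAL': False, 'DISEASE': False}
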
  • Pun Generation with Surprise

    He He, Nanyun Peng, and Percy Liang, in 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT 2019), 2019.
    Full Text BibTeX Details
    @inproceedings{he2019pun,
      title = {Pun Generation with Surprise},
      author = {He, He and Peng, Nanyun and Liang, Percy},
      booktitle = {2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT 2019)},
      volume = {1},
      year = {2019}
    }
    
    Details
  • On difficulties of cross-lingual transfer with order differences: A case study on dependency parsing

    Wasi Uddin Ahmad, Zhisong Zhang, Xuezhe Ma, Eduard Hovy, Kai-Wei Chang, and Nanyun Peng, in Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), 2019.
    Full Text BibTeX Details
    @inproceedings{ahmad2019difficulties,
      title = {On difficulties of cross-lingual transfer with order differences: A case study on dependency parsing},
      author = {Ahmad, Wasi Uddin and Zhang, Zhisong and Ma, Xuezhe and Hovy, Eduard and Chang, Kai-Wei and Peng, Nanyun},
      booktitle = {Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)},
      year = {2019}
    }
    
    Details
  • Plan-And-Write: Towards Better Automatic Storytelling

    Lili Yao, Nanyun Peng, Ralph Weischedel, Kevin Knight, Dongyan Zhao, and Rui Yan, in The Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19), 2019.
    Full Text BibTeX Details
    @inproceedings{yao2019plan,
      title = {Plan-And-Write: Towards Better Automatic Storytelling},
      author = {Yao, Lili and Peng, Nanyun and Weischedel, Ralph and Knight, Kevin and Zhao, Dongyan and Yan, Rui},
      booktitle = {The Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19)},
      year = {2019}
    }
    
    Details
  • Plan, Write, and Revise: an Interactive System for Open-Domain Story Generation

    Seraphina Goldfarb-Tarrant, Haining Feng, and Nanyun Peng, in 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT 2019), Demonstrations Track, 2019.
    Full Text Video Code Abstract BibTeX Details
    Story composition is a challenging problem for machines and even for humans. We present a neural narrative generation system that interacts with humans to generate stories. Our system has different levels of human interaction, which enables us to understand at what stage of story-writing human collaboration is most productive, both for improving story quality and for sustaining human engagement in the writing process. We compare different varieties of interaction in story-writing, story-planning, and diversity controls under time constraints, and show that increased types of human collaboration at both planning and writing stages result in a 10-50% improvement in story quality as compared to less interactive baselines. We also show an accompanying increase in user engagement and satisfaction with stories as compared to our own less interactive systems and to previous turn-taking approaches to interaction. Finally, we find that humans tasked with collaboratively improving a particular characteristic of a story are in fact able to do so, which has implications for future uses of human-in-the-loop systems.
    @inproceedings{goldfarb2019plan,
      title = {Plan, Write, and Revise: an Interactive System for Open-Domain Story Generation},
      author = {Goldfarb-Tarrant, Seraphina and Feng, Haining and Peng, Nanyun},
      booktitle = {2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT 2019), Demonstrations Track},
      volume = {4},
      pages = {89--97},
      year = {2019}
    }
    
    Details
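    The interaction loop the abstract describes, plan first, let the human edit the plan, then write conditioned on it, reduces to a simple skeleton; both generators are stubbed here, since the paper uses trained seq2seq models for each stage.

    def propose_storyline(title: str) -> list:
        return ["storm", "shelter", "stranger", "fire", "dawn"]      # stub planner

    def write_story(title: str, storyline: list) -> str:
        return " ".join(f"[sentence about {w}]" for w in storyline)  # stub writer

    title = "The Long Night"
    plan = propose_storyline(title)
    edited = input(f"Edit plan {plan} (comma-separated, blank keeps it): ").strip()
    if edited:
        plan = [w.strip() for w in edited.split(",")]   # human revises the plan
    print(write_story(title, plan))
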
  • Better Automatic Evaluation of Open-Domain Dialogue Systems with Contextualized Embeddings

    Sarik Ghazarian, Johnny Tian-Zheng Wei, Aram Galstyan, and Nanyun Peng, in 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT 2019), NeuralGen Workshop, 2019.
    Full Text BibTeX Details
    @inproceedings{ghazarian2019better,
      title = {Better Automatic Evaluation of Open-Domain Dialogue Systems with Contextualized Embeddings},
      author = {Ghazarian, Sarik and Wei, Johnny Tian-Zheng and Galstyan, Aram and Peng, Nanyun},
      booktitle = {2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT 2019), NeuralGen Workshop},
      year = {2019}
    }
    
    Details
  • Contextualized Word Embeddings Enhanced Event Temporal Relation Extraction for Story Understanding

    Rujun Han, Mengyue Liang, Bashar Alhafni, and Nanyun Peng, in 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT 2019), Workshop on Narrative Understanding, 2019.
    Full Text BibTeX Details
    @inproceedings{han2019contextualized,
      title = {Contextualized Word Embeddings Enhanced Event Temporal Relation Extraction for Story Understanding},
      author = {Han, Rujun and Liang, Mengyue and Alhafni, Bashar and Peng, Nanyun},
      booktitle = {2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT 2019), Workshop on Narrative Understanding},
      year = {2019}
    }
    
    Details
  • Building deep learning models for evidence classification from the open access biomedical literature

    Gully A. Burns, Xiangci Li, and Nanyun Peng, Database, 2019.
    Full Text BibTeX Details
    @article{burns2019building,
      title = {Building deep learning models for evidence classification from the open access biomedical literature},
      author = {Burns, Gully A and Li, Xiangci and Peng, Nanyun},
      journal = {Database},
      year = {2019},
      publisher = {Oxford University Press}
    }
    
    Details
  • Espresso: A Fast End-to-end Neural Speech Recognition Toolkit

    Yiming Wang, Tongfei Chen, Hainan Xu, Shuoyang Ding, Hang Lv, Yiwen Shao, Nanyun Peng, Lei Xie, Shinji Watanabe, and Sanjeev Khudanpur, in The 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), 2019.
    Full Text BibTeX Details
    @inproceedings{wang2019espresso,
      title = {Espresso: A Fast End-to-end Neural Speech Recognition Toolkit},
      author = {Wang, Yiming and Chen, Tongfei and Xu, Hainan and Ding, Shuoyang and Lv, Hang and Shao, Yiwen and Peng, Nanyun and Xie, Lei and Watanabe, Shinji and Khudanpur, Sanjeev},
      booktitle = {The 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)},
      year = {2019}
    }
    
    Details
  • Evaluating and Enhancing the Robustness of Retrieval-Based Dialogue Systems with Adversarial Examples

    Jia Li, Chongyang Tao, Nanyun Peng, Wei Wu, Dongyan Zhao, and Rui Yan, in CCF International Conference on Natural Language Processing and Chinese Computing, 2019.
    Full Text BibTeX Details
    @inproceedings{li2019evaluating,
      title = {Evaluating and Enhancing the Robustness of Retrieval-Based Dialogue Systems with Adversarial Examples},
      author = {Li, Jia and Tao, Chongyang and Peng, Nanyun and Wu, Wei and Zhao, Dongyan and Yan, Rui},
      booktitle = {CCF International Conference on Natural Language Processing and Chinese Computing},
      pages = {142--154},
      year = {2019},
      organization = {Springer}
    }
    
    Details
  • Debiasing Community Detection: The Importance of Lowly-Connected Nodes

    Ninareh Mehrabi, Fred Morstatter, Nanyun Peng, and Aram Galstyan, in The 2019 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM 2019), 2019.
    Full Text BibTeX Details
    @inproceedings{mehrabi2019debiasing,
      title = {Debiasing Community Detection: The Importance of Lowly-Connected Nodes},
      author = {Mehrabi, Ninareh and Morstatter, Fred and Peng, Nanyun and Galstyan, Aram},
      booktitle = {The 2019 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM 2019)},
      year = {2019}
    }
    
    Details

2018

  • Scalable Construction and Reasoning of Massive Knowledge Bases

    Xiang Ren, Nanyun Peng, and William Yang Wang, in Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Tutorial Abstracts, 2018.
    Full Text BibTeX Details
    @inproceedings{ren2018scalable,
      title = {Scalable Construction and Reasoning of Massive Knowledge Bases},
      author = {Ren, Xiang and Peng, Nanyun and Wang, William Yang},
      booktitle = {Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Tutorial Abstracts},
      pages = {10--16},
      year = {2018}
    }
    
    Details
  • Towards controllable story generation

    Nanyun Peng, Marjan Ghazvininejad, Jonathan May, and Kevin Knight, in the NAACL 2018 Workshop on Storytelling, 2018.
    Full Text BibTeX Details
    @inproceedings{peng2018towards,
      title = {Towards controllable story generation},
      author = {Peng, Nanyun and Ghazvininejad, Marjan and May, Jonathan and Knight, Kevin},
      booktitle = {the NAACL 2018 Workshop on Storytelling},
      year = {2018}
    }
    
    Details
  • Learning to Converse with Noisy Data: Generation with Calibration.

    Mingyue Shang, Zhenxin Fu, Nanyun Peng, Yansong Feng, Dongyan Zhao, and Rui Yan, in IJCAI, 2018.
    Full Text BibTeX Details
    @inproceedings{shang2018learning,
      title = {Learning to Converse with Noisy Data: Generation with Calibration.},
      author = {Shang, Mingyue and Fu, Zhenxin and Peng, Nanyun and Feng, Yansong and Zhao, Dongyan and Yan, Rui},
      booktitle = {IJCAI},
      pages = {4338--4344},
      year = {2018}
    }
    
    Details
  • Stack-pointer networks for dependency parsing

    Xuezhe Ma, Zecong Hu, Jingzhou Liu, Nanyun Peng, Graham Neubig, and Eduard Hovy, in The 56th Annual Meeting of the Association for Computational Linguistics (ACL 2018), 2018.
    Full Text BibTeX Details
    @inproceedings{ma2018stack,
      title = {Stack-pointer networks for dependency parsing},
      author = {Ma, Xuezhe and Hu, Zecong and Liu, Jingzhou and Peng, Nanyun and Neubig, Graham and Hovy, Eduard},
      booktitle = {The 56th Annual Meeting of the Association for Computational Linguistics (ACL 2018)},
      volume = {1},
      year = {2018}
    }
    
    Details
  • Style Transfer in Text: Exploration and Evaluation

    Zhenxin Fu, Xiaoye Tan, Nanyun Peng, Dongyan Zhao, and Rui Yan, in Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI), 2018.
    Full Text BibTeX Details
    @inproceedings{fu2018style,
      title = {Style Transfer in Text: Exploration and Evaluation},
      author = {Fu, Zhenxin and Tan, Xiaoye and Peng, Nanyun and Zhao, Dongyan and Yan, Rui},
      booktitle = {Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI)},
      year = {2018}
    }
    
    Details

2015

  • Named entity recognition for Chinese social media with jointly trained embeddings

    Nanyun Peng and Mark Dredze, in Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, 2015.
    Full Text BibTeX Details
    @inproceedings{peng2015named,
      title = {Named entity recognition for Chinese social media with jointly trained embeddings},
      author = {Peng, Nanyun and Dredze, Mark},
      booktitle = {Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing},
      pages = {548--554},
      year = {2015}
    }
    
    Details
  • An Empirical Study of Chinese Name Matching and Applications

    Nanyun Peng, Mo Yu, and Mark Dredze, in Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics (ACL), 2015.
    BibTeX Details
    @inproceedings{peng2015empirical,
      title = {An Empirical Study of Chinese Name Matching and Applications},
      author = {Peng, Nanyun and Yu, Mo and Dredze, Mark},
      booktitle = {Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics (ACL)},
      year = {2015}
    }
    
    Details
  • Modeling word forms using latent underlying morphs and phonology

    Ryan Cotterell, Nanyun Peng, and Jason Eisner, Transactions of the Association for Computational Linguistics, 2015.
    Full Text BibTeX Details
    @article{cotterell2015modeling,
      title = {Modeling word forms using latent underlying morphs and phonology},
      author = {Cotterell, Ryan and Peng, Nanyun and Eisner, Jason},
      journal = {Transactions of the Association for Computational Linguistics},
      volume = {3},
      number = {1},
      year = {2015}
    }
    
    Details
  • A concrete Chinese NLP pipeline

    Nanyun Peng, Francis Ferraro, Mo Yu, Nicholas Andrews, Jay DeYoung, Max Thomas, Matthew R. Gormley, Travis Wolfe, Craig Harman, Benjamin Van Durme, and others, in Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations, 2015.
    BibTeX Details
    @inproceedings{peng2015concrete,
      title = {A concrete Chinese NLP pipeline},
      author = {Peng, Nanyun and Ferraro, Francis and Yu, Mo and Andrews, Nicholas and DeYoung, Jay and Thomas, Max and Gormley, Matthew R and Wolfe, Travis and Harman, Craig and Van Durme, Benjamin and others},
      booktitle = {Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations},
      pages = {86--90},
      year = {2015}
    }
    
    Details
  • HLTCOE participation in TAC KBP 2015: Cold start and TEDL

    Tim Finin, Dawn Lawrie, Paul McNamee, James Mayfield, Doug Oard, Nanyun Peng, Ning Gao, Yiu-Chang Lin, Joshi MacKin, Tim Dowd, and others, in Eighth Text Analysis Conference, 2015.
    BibTeX Details
    @inproceedings{finin2015hltcoe,
      title = {HLTCOE participation in TAC KBP 2015: Cold start and TEDL},
      author = {Finin, Tim and Lawrie, Dawn and McNamee, Paul and Mayfield, James and Oard, Doug and Peng, Nanyun and Gao, Ning and Lin, Yiu-Chang and MacKin, Joshi and Dowd, Tim and others},
      booktitle = {Eighth Text Analysis Conference},
      year = {2015}
    }
    
    Details
  • Dual decomposition inference for graphical models over strings

    Nanyun Peng, Ryan Cotterell, and Jason Eisner, in Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, 2015.
    Full Text BibTeX Details
    @inproceedings{peng2015dual,
      title = {Dual decomposition inference for graphical models over strings},
      author = {Peng, Nanyun and Cotterell, Ryan and Eisner, Jason},
      booktitle = {Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing},
      pages = {917--927},
      year = {2015}
    }
    
    Details

2014

  • Stochastic Contextual Edit Distance and Probabilistic FSTs

    Ryan Cotterell, Nanyun Peng, and Jason Eisner, in Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, 2014.
    Full Text BibTeX Details
    @inproceedings{cotterell2014stochastic,
      title = {Stochastic Contextual Edit Distance and Probabilistic FSTs},
      author = {Cotterell, Ryan and Peng, Nanyun and Eisner, Jason},
      booktitle = {Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics},
      year = {2014}
    }
    
    Details
  • Learning polylingual topic models from code-switched social media documents

    Nanyun Peng, Yiming Wang, and Mark Dredze, in Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), 2014.
    Full Text BibTeX Details
    @inproceedings{peng2014learning,
      title = {Learning polylingual topic models from code-switched social media documents},
      author = {Peng, Nanyun and Wang, Yiming and Dredze, Mark},
      booktitle = {Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)},
      pages = {674--679},
      year = {2014}
    }
    
    Details

2012

  • Online Plagiarized Detection Through Exploiting Lexical, Syntax, and Semantic Information

    Wan-Yu Lin, Nanyun Peng, Chun-Chao Yen, and Shou-de Lin, in Proceedings of the ACL 2012 System Demonstrations, 2012.
    BibTeX Details
    @inproceedings{lin2012online,
      title = {Online Plagiarized Detection Through Exploiting Lexical, Syntax, and Semantic Information},
      author = {Lin, Wan-Yu and Peng, Nanyun and Yen, Chun-Chao and Lin, Shou-de},
      booktitle = {Proceedings of the ACL 2012 System Demonstrations},
      pages = {145--150},
      year = {2012}
    }
    
    Details
  • Exploiting latent information to predict diffusions of novel topics on social networks

    Tsung-Ting Kuo, San-Chuan Hung, Wei-Shih Lin, Nanyun Peng, Shou-De Lin, and Wei-Fen Lin, in Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), 2012.
    BibTeX Details
    @inproceedings{kuo2012exploiting,
      title = {Exploiting latent information to predict diffusions of novel topics on social networks},
      author = {Kuo, Tsung-Ting and Hung, San-Chuan and Lin, Wei-Shih and Peng, Nanyun and Lin, Shou-De and Lin, Wei-Fen},
      booktitle = {Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)},
      pages = {344--348},
      year = {2012}
    }
    
    Details
  • On convergence rate of concave-convex procedure

    Ian E. H. Yen, Nanyun Peng, Po-Wei Wang, and Shou-De Lin, in Proceedings of the NIPS 2012 Optimization Workshop, 2012.
    BibTeX Details
    @inproceedings{yen2012convergence,
      title = {On convergence rate of concave-convex procedure},
      author = {Yen, Ian EH and Peng, Nanyun and Wang, Po-Wei and Lin, Shou-De},
      booktitle = {Proceedings of the NIPS 2012 Optimization Workshop},
      pages = {31--35},
      year = {2012}
    }
    
    Details