Context-Situated Pun Generation

Jiao Sun, Anjali Narayan-Chen, Shereen Oraby, Shuyang Gao, Tagyoung Chung, Jing Huang, Yang Liu, and Nanyun Peng, in Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022.

Download the full text




Bib Entry

@inproceedings{sun2022context,
  title = {Context-Situated Pun Generation},
  author = {Sun, Jiao and Narayan-Chen, Anjali and Oraby, Shereen and Gao, Shuyang and Chung, Tagyoung and Huang, Jing and Liu, Yang and Peng, Nanyun},
  booktitle = {Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
  year = {2022}
}

Related Publications

  • Unsupervised Melody-to-Lyric Generation

    Yufei Tian, Anjali Narayan-Chen, Shereen Oraby, Alessandra Cervone, Gunnar Sigurdsson, Chenyang Tao, Wenbo Zhao, Tagyoung Chung, Jing Huang, and Nanyun Peng, in Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL), 2023.
    @inproceedings{tian2023lyric,
      title = {Unsupervised Melody-to-Lyric Generation},
      author = {Tian, Yufei and Narayan-Chen, Anjali and Oraby, Shereen and Cervone, Alessandra and Sigurdsson, Gunnar and Tao, Chenyang and Zhao, Wenbo and Chung, Tagyoung and Huang, Jing and Peng, Nanyun},
      booktitle = {Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL)},
      year = {2023}
    }
    
  • DOC: Improving Long Story Coherence With Detailed Outline Control

    Kevin Yang, Dan Klein, Nanyun Peng, and Yuandong Tian, in Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL), 2023.
    @inproceedings{yang2023doc,
      title = {DOC: Improving Long Story Coherence With Detailed Outline Control},
      author = {Yang, Kevin and Klein, Dan and Peng, Nanyun and Tian, Yuandong},
      booktitle = {Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL)},
      year = {2023}
    }
    
  • ExPUNations: Augmenting Puns with Keywords and Explanations

    Jiao Sun, Anjali Narayan-Chen, Shereen Oraby, Alessandra Cervone, Tagyoung Chung, Jing Huang, Yang Liu, and Nanyun Peng, in Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022.
    @inproceedings{sun2022expun,
      title = {ExPUNations: Augmenting Puns with Keywords and Explanations},
      author = {Sun, Jiao and Narayan-Chen, Anjali and Oraby, Shereen and Cervone, Alessandra and Chung, Tagyoung and Huang, Jing and Liu, Yang and Peng, Nanyun},
      booktitle = {Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
      year = {2022}
    }
    
  • A Unified Framework for Pun Generation with Humor Principles

    Yufei Tian, Divyanshu Arun Sheth, and Nanyun Peng, in Findings of the Association for Computational Linguistics: EMNLP (EMNLP-findings), 2022.
    @inproceedings{tian2022unified,
      title = {A Unified Framework for Pun Generation with Humor Principles},
      author = {Tian, Yufei and Arun Sheth, Divyanshu and Peng, Nanyun},
      booktitle = {Findings of the Association for Computational Linguistics: EMNLP (EMNLP-findings)},
      year = {2022}
    }
    
  • Controllable Text Generation for Open-Domain Creativity and Fairness

    Nanyun Peng, in Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence (IJCAI-22), Early Career Track, 2022.
    @inproceedings{peng2022controllable,
      title = {Controllable Text Generation for Open-Domain Creativity and Fairness},
      author = {Peng, Nanyun},
      booktitle = {Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence (IJCAI-22), Early Career Track},
      year = {2022}
    }
    
  • Zero-Shot Sonnet Generation with Discourse-Level Planning and Aesthetics Features

    Yufei Tian and Nanyun Peng, in 2022 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), 2022.
    @inproceedings{tian2022sonnet,
      title = {Zero-Shot Sonnet Generation with Discourse-Level Planning and Aesthetics Features},
      author = {Tian, Yufei and Peng, Nanyun},
      booktitle = {2022 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)},
      year = {2022}
    }
    
  • AmbiPun: Generating Humorous Puns with Ambiguous Context

    Anirudh Mittal, Yufei Tian, and Nanyun Peng, in 2022 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), short, 2022.
    @inproceedings{Mittal2022ambipun,
      title = {AmbiPun: Generating Humorous Puns with Ambiguous Context},
      author = {Mittal, Anirudh and Tian, Yufei and Peng, Nanyun},
      booktitle = {2022 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), short},
      year = {2022}
    }
    
  • HypoGen: Hyperbole Generation with Commonsense and Counterfactual Knowledge

    Yufei Tian, Arvind Krishna Sridhar, and Nanyun Peng, in Findings of the Association for Computational Linguistics: EMNLP, 2021.
    A hyperbole is an intentional and creative exaggeration not to be taken literally. Despite its ubiquity in daily life, computational explorations of hyperbole are scarce. In this paper, we tackle the under-explored and challenging task of sentence-level hyperbole generation. We start with a representative syntactic pattern for intensification and systematically study the semantic (commonsense and counterfactual) relationships between the components of such hyperboles. We then leverage commonsense and counterfactual inference to generate hyperbole candidates based on our findings from the pattern, and train neural classifiers to rank and select high-quality hyperboles. Automatic and human evaluations show that our generation method is able to generate hyperboles creatively, with a high success rate and intensity.
    @inproceedings{tian2021hypogen,
      title = {HypoGen: Hyperbole Generation with Commonsense and Counterfactual Knowledge},
      author = {Tian, Yufei and Sridhar, Arvind Krishna and Peng, Nanyun},
      booktitle = {Findings of the Association for Computational Linguistics: EMNLP},
      year = {2021}
    }
    
  • Metaphor Generation with Conceptual Mappings

    Kevin Stowe, Tuhin Chakrabarty, Nanyun Peng, Smaranda Muresan, and Iryna Gurevych, in Proceedings of the Conference of the 59th Annual Meeting of the Association for Computational Linguistics (ACL), 2021.
    Generating metaphors is a difficult task as it requires understanding nuanced relationships between abstract concepts. In this paper, we aim to generate a metaphoric sentence given a literal expression by replacing relevant verbs. Guided by conceptual metaphor theory, we propose to control the generation process by encoding conceptual mappings between cognitive domains to generate meaningful metaphoric expressions. To achieve this, we develop two methods: 1) using FrameNet-based embeddings to learn mappings between domains and applying them at the lexical level (CM-Lex), and 2) deriving source/target pairs to train a controlled seq-to-seq generation model (CM-BART). We assess our methods through automatic and human evaluation for basic metaphoricity and conceptual metaphor presence. We show that the unsupervised CM-Lex model is competitive with recent deep learning metaphor generation systems, and that CM-BART outperforms all other models in both automatic and human evaluations.
    @inproceedings{stowe2021metaphor,
      title = {Metaphor Generation with Conceptual Mappings},
      author = {Stowe, Kevin and Chakrabarty, Tuhin and Peng, Nanyun and Muresan, Smaranda and Gurevych, Iryna},
      booktitle = {Proceedings of the Conference of the 59th Annual Meeting of the Association for Computational Linguistics (ACL)},
      year = {2021}
    }
    
  • MERMAID: Metaphor Generation with Symbolism and Discriminative Decoding

    Tuhin Chakrabarty, Xurui Zhang, Smaranda Muresan, and Nanyun Peng, in The 2021 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), 2021.
    Generating metaphors is a challenging task as it requires a proper understanding of abstract concepts, making connections between unrelated concepts, and deviating from the literal meaning. In this paper, we aim to generate a metaphoric sentence given a literal expression by replacing relevant verbs. Based on a theoretically-grounded connection between metaphors and symbols, we propose a method to automatically construct a parallel corpus by transforming a large number of metaphorical sentences from the Gutenberg Poetry corpus (CITATION) to their literal counterpart using recent advances in masked language modeling coupled with commonsense inference. For the generation task, we incorporate a metaphor discriminator to guide the decoding of a sequence to sequence model fine-tuned on our parallel data to generate high-quality metaphors. Human evaluation on an independent test set of literal statements shows that our best model generates metaphors better than three well-crafted baselines 66% of the time on average. A task-based evaluation shows that human-written poems enhanced with metaphors proposed by our model are preferred 68% of the time compared to poems without metaphors.
    @inproceedings{chakrabarty2021mermaid,
      title = {MERMAID: Metaphor Generation with Symbolism and Discriminative Decoding},
      author = {Chakrabarty, Tuhin and Zhang, Xurui and Muresan, Smaranda and Peng, Nanyun},
      booktitle = {The 2021 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)},
      talk_url = {https://underline.io/events/122/sessions/4240/lecture/19642-mermaid-metaphor-generation-with-symbolism-and-discriminative-decoding},
      year = {2021}
    }
    
  • Generating similes effortlessly like a Pro: A Style Transfer Approach for Simile Generation

    Tuhin Chakrabarty, Smaranda Muresan, and Nanyun Peng, in Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020.
    Literary tropes, from poetry to stories, are at the crux of human imagination and communication. Figurative language, such as a simile, goes beyond plain expressions to give readers new insights and inspirations. We tackle the problem of simile generation. Generating a simile requires proper understanding to effectively map properties between two concepts. To this end, we first propose a method to automatically construct a parallel corpus by transforming a large number of similes collected from Reddit to their literal counterpart using structured common sense knowledge. We then fine-tune a pretrained sequence-to-sequence model, BART (Lewis et al., 2019), on the literal-simile pairs to generate novel similes given a literal sentence. Experiments show that our approach generates 88% novel similes that do not share properties with the training data. Human evaluation on an independent set of literal statements shows that our model generates similes better than two literary experts 37% of the time, and better than three baseline systems, including a recent metaphor generation model, 71% of the time when compared pairwise. We also show how replacing literal sentences with similes from our best model in machine-generated stories improves evocativeness and leads to better acceptance by human judges. (A minimal code sketch of this fine-tune-and-generate recipe appears after this list.)
    @inproceedings{chakrabarty-etal-2020-generating,
      title = {Generating similes effortlessly like a Pro: A Style Transfer Approach for Simile Generation},
      author = {Chakrabarty, Tuhin and Muresan, Smaranda and Peng, Nanyun},
      booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
      pages = {6455--6469},
      publisher = {Association for Computational Linguistics},
      slideslive_id = {38938962},
      year = {2020}
    }
    
  • R3: Reverse, Retrieve, and Rank for Sarcasm Generation with Commonsense Knowledge

    Tuhin Chakrabarty, Debanjan Ghosh, Smaranda Muresan, and Nanyun Peng, in the 2020 Annual Conference of the Association for Computational Linguistics (ACL), 2020.
    @inproceedings{chakrabarty2020r,
      title = {R3: Reverse, Retrieve, and Rank for Sarcasm Generation with Commonsense Knowledge},
      author = {Chakrabarty, Tuhin and Ghosh, Debanjan and Muresan, Smaranda and Peng, Nanyun},
      booktitle = {the 2020 Annual Conference of the Association for Computational Linguistics (ACL)},
      year = {2020}
    }
    
  • Pun Generation with Surprise

    He He, Nanyun Peng, and Percy Liang, in 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT 2019), 2019.
    @inproceedings{he2019pun,
      title = {Pun Generation with Surprise},
      author = {He, He and Peng, Nanyun and Liang, Percy},
      booktitle = {2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT 2019)},
      volume = {1},
      year = {2019}
    }
    
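The simile and MERMAID entries above share a common recipe: automatically construct a parallel corpus of (literal, figurative) pairs, fine-tune BART on it, then decode a figurative rewrite of a literal input. Below is a minimal sketch of that pattern, not the authors' released code; it assumes the Hugging Face transformers API, and the checkpoint name, toy training pair, and hyperparameters are illustrative placeholders.

    # Minimal sketch (not the authors' released code) of the recipe shared by
    # the simile and MERMAID papers above: fine-tune BART on parallel
    # (literal, figurative) pairs, then generate a figurative rewrite.
    # Checkpoint, data, and hyperparameters are illustrative placeholders.
    import torch
    from transformers import BartForConditionalGeneration, BartTokenizer

    tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
    model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")

    # Toy stand-in for the automatically constructed parallel corpus.
    pairs = [("The party was dull.", "The party was like a funeral.")]

    optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
    model.train()
    for literal, figurative in pairs:
        batch = tokenizer(literal, text_target=figurative, return_tensors="pt")
        loss = model(**batch).loss  # standard seq2seq cross-entropy on the pair
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

    # Generate a figurative rewrite of an unseen literal sentence.
    model.eval()
    inputs = tokenizer("The house was very old.", return_tensors="pt")
    out = model.generate(**inputs, num_beams=5, max_length=40)
    print(tokenizer.decode(out[0], skip_special_tokens=True))

MERMAID additionally rescores candidates with a metaphor discriminator during decoding; that step is omitted from this sketch.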