
Broaden the Vision: Geo-Diverse Visual Commonsense Reasoning

Da Yin, Liunian Harold Li, Ziniu Hu, Nanyun Peng, and Kai-Wei Chang, in The 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021.

Abstract


Bib Entry

@inproceedings{yin2021broaden,
  title = {Broaden the Vision: Geo-Diverse Visual Commonsense Reasoning},
  author = {Yin, Da and Li, Liunian Harold and Hu, Ziniu and Peng, Nanyun and Chang, Kai-Wei},
  booktitle = {The 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
  year = {2021}
}

Related Publications

  • COM2SENSE: A Commonsense Reasoning Benchmark with Complementary Sentences

    Shikhar Singh, Nuan Wen, Yu Hou, Pegah Alipoormolabashi, Te-lin Wu, Xuezhe Ma, and Nanyun Peng, in Findings of the 59th Annual Meeting of the Association for Computational Linguistics (ACL-Findings), 2021.
    Full Text BibTeX Details
    @inproceedings{sw2021com,
      title = {COM2SENSE: A Commonsense Reasoning Benchmark with Complementary Sentences},
      author = {Singh, Shikhar and Wen, Nuan and Hou, Yu and Alipoormolabashi, Pegah and Wu, Te-lin and Ma, Xuezhe and Peng, Nanyun},
      booktitle = {Findings of the 59th Annual Meeting of the Association for Computational Linguistics (ACL-Findings)},
      year = {2021}
    }
    
  • Identifying Distributional Perspective Differences from Colingual Groups

    Yufei Tian, Tuhin Chakrabarty, Fred Morstatter, and Nanyun Peng, in NAACL 2021 Workshop of Social NLP, 2021.
    Full Text Code Abstract BibTeX Details
    Perspective differences exist among different cultures or languages. A lack of mutual understanding among different groups about their perspectives on specific values or events may lead to uninformed decisions or biased opinions. Automatically understanding the group perspectives can provide essential background for many downstream applications of natural language processing techniques. In this paper, we study colingual groups and use language corpora as a proxy to identify their distributional perspectives. We present a novel computational approach to learn shared understandings, and benchmark our method by building culturally-aware models for the English, Chinese, and Japanese languages. On a held-out set of diverse topics including marriage, corruption, and democracy, our model achieves high correlation with human judgements regarding intra-group values and inter-group differences. (A minimal illustrative sketch of this corpus-based idea follows the BibTeX entry below.)
    @inproceedings{tian2021identifying,
      title = {Identifying Distributional Perspective Differences from Colingual Groups},
      author = {Tian, Yufei and Chakrabarty, Tuhin and Morstatter, Fred and Peng, Nanyun},
      booktitle = {NAACL 2021 Workshop of Social NLP},
      year = {2021}
    }
    
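As a rough, illustrative sketch of the corpus-based idea described in the entry above (not the authors' released method), the Python snippet below scores a few topic words against value-laden anchor words in per-group word embeddings and compares the resulting profiles across two groups. The group names, vocabulary, anchor words, and random toy vectors are placeholders; in practice the embeddings would be trained on each colingual group's corpus.

import numpy as np

# Toy stand-ins for word embeddings trained separately on each colingual
# group's corpus (in practice, e.g. word2vec vectors per language).
# All names and vectors here are illustrative placeholders.
rng = np.random.default_rng(0)
VOCAB = ["marriage", "corruption", "democracy", "good", "bad", "important", "harmful"]
embeddings = {
    "english": {w: rng.normal(size=50) for w in VOCAB},
    "chinese": {w: rng.normal(size=50) for w in VOCAB},
}

ANCHORS = {"positive": ["good", "important"], "negative": ["bad", "harmful"]}
TOPICS = ["marriage", "corruption", "democracy"]

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def perspective_score(emb, topic):
    # Association of a topic with positive vs. negative anchor words.
    pos = np.mean([cosine(emb[topic], emb[a]) for a in ANCHORS["positive"]])
    neg = np.mean([cosine(emb[topic], emb[a]) for a in ANCHORS["negative"]])
    return pos - neg

# Intra-group values: one perspective profile per group.
profiles = {
    group: {t: perspective_score(emb, t) for t in TOPICS}
    for group, emb in embeddings.items()
}

# Inter-group differences: compare the two profiles topic by topic.
for topic in TOPICS:
    diff = profiles["english"][topic] - profiles["chinese"][topic]
    print(f"{topic}: en={profiles['english'][topic]:+.3f} "
          f"zh={profiles['chinese'][topic]:+.3f} diff={diff:+.3f}")

Correlating these per-group topic scores with human judgements would then correspond to the intra-group and inter-group evaluation mentioned in the abstract.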
  • Connecting the Dots: A Knowledgeable Path Generator for Commonsense Question Answering

    Peifeng Wang, Nanyun Peng, Filip Ilievski, Pedro Szekely, and Xiang Ren, in Findings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP-Findings), 2020.
    Full Text Code Abstract BibTeX Details
    Commonsense question answering (QA) requires background knowledge which is not explicitly stated in a given context. Prior works use commonsense knowledge graphs (KGs) to obtain this knowledge for reasoning. However, relying entirely on these KGs may not suffice, considering their limited coverage and the contextual dependence of their knowledge. In this paper, we augment a general commonsense QA framework with a knowledgeable path generator. By extrapolating over existing paths in a KG with a state-of-the-art language model, our generator learns to connect a pair of entities in text with a dynamic, and potentially novel, multi-hop relational path. Such paths can provide structured evidence for solving commonsense questions without fine-tuning the path generator. Experiments on two datasets show the superiority of our method over previous works which fully rely on knowledge from KGs (with up to 6% improvement in accuracy), across various amounts of training data. Further evaluation suggests that the generated paths are typically interpretable, novel, and relevant to the task. (A minimal illustrative sketch of path generation with a pretrained language model follows the BibTeX entry below.)
    @inproceedings{wang2020connecting,
      title = {Connecting the Dots: A Knowledgeable Path Generator for Commonsense Question Answering},
      author = {Wang, Peifeng and Peng, Nanyun and Ilievski, Filip and Szekely, Pedro and Ren, Xiang},
      booktitle = {Findings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP-Findings)},
      pages = {4129--4140},
      year = {2020}
    }
    
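As a minimal sketch of the path-generation idea described in the entry above, the snippet below fine-tunes a pretrained language model on knowledge-graph paths serialized as text, then prompts it with an entity pair so it can generate a connecting multi-hop relational path. The choice of GPT-2, the "head ; tail : path" serialization, and the two training strings are assumptions made for illustration; this is not the released implementation.

# Illustrative sketch: fine-tune a small pretrained LM on KG paths serialized
# as "head ; tail : head rel1 node rel2 ... tail" strings, then prompt it with
# an entity pair to generate a connecting multi-hop relational path.
# The model choice, path format, and training data below are placeholders.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Hypothetical paths sampled from a commonsense KG such as ConceptNet.
train_paths = [
    "fire ; burn : fire causes heat causes burn",
    "student ; learning : student atlocation school usedfor learning",
]

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for text in train_paths:
    inputs = tokenizer(text, return_tensors="pt")
    loss = model(**inputs, labels=inputs["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Inference: condition on a (head, tail) entity pair and let the model fill in
# the relational path between them.
model.eval()
prompt = tokenizer("submarine ; captain :", return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**prompt, max_new_tokens=20, do_sample=True,
                                pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))

In the paper's framework, such generated paths serve as structured evidence for a downstream commonsense QA model; the snippet shows only the generation step, in isolation.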
  • Do Nuclear Submarines Have Nuclear Captains? A Challenge Dataset for Commonsense Reasoning over Adjectives and Objects

    James Mullenbach, Jonathan Gordon, Nanyun Peng, and Jonathan May, in Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), short, 2019.
    Full Text BibTeX Details
    @inproceedings{mullenbach2019nuclear,
      title = {Do Nuclear Submarines Have Nuclear Captains? A Challenge Dataset for Commonsense Reasoning over Adjectives and Objects},
      author = {Mullenbach, James and Gordon, Jonathan and Peng, Nanyun and May, Jonathan},
      booktitle = {Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), short},
      pages = {6054--6060},
      year = {2019}
    }
    