Peifeng Wang

I am a PhD candidate in the Computer Science Department at the University of Southern California (USC). I serve as a research assistant at USC's Information Sciences Institute (ISI), where I'm advised by Prof. Xiang Ren and Prof. Muhao Chen. At ISI, I've worked on machine common sense, explainable question answering, and controllable text generation.

Email  /  GitHub  /  Google Scholar  /  LinkedIn  /  CV


Research

I'm broadly interested in natural language processing, with a particular focus on reasoning and developing explainable artificial intelligence systems. My current research interests include:

  • Complex reasoning with large language models
  • Developing faithful explanations for AI models
  • Plan-ahead natural language generation
  • Integration of structured knowledge into language models

DOMINO: A Dual-System for Multi-step Visual Language Reasoning


Peifeng Wang, Olga Golovneva, Armen Aghajanyan, Xiang Ren, Muhao Chen, Asli Celikyilmaz, Maryam Fazel-Zarandi
preprint, 2023
arxiv / code

A dual-system framework that answers complex questions over charts step by step, pairing an LLM with a vision module.


SCOTT: Self-Consistent Chain-of-Thought Distillation


Peifeng Wang, Zhengyang Wang, Zheng Li, Yifan Gao, Bing Yin, Xiang Ren
Annual Meeting of the Association for Computational Linguistics, 2023
Outstanding Paper Award
arxiv / code / blog

A faithful knowledge distillation method that learns a small, self-consistent Chain-of-Thought model from a large teacher model.


PINTO: Faithful Language Reasoning Using Prompt-Generated Rationales


Peifeng Wang, Aaron Chan, Filip Ilievski, Muhao Chen, Xiang Ren
International Conference on Learning Representations, 2023
arxiv / code

An LM pipeline that rationalizes via prompt-based learning and learns to faithfully reason over the generated rationales.


Contextualized Scene Imagination for Generative Commonsense Reasoning


Peifeng Wang, Jonathan Zamora, Junfeng Liu, Filip Ilievski, Muhao Chen, Xiang Ren
International Conference on Learning Representations, 2022
arxiv / code

An Imagine-and-Verbalize framework that learns to imagine a scene knowledge graph (SKG) and leverages the SKG as a constraint when generating a plausible scene description.


Do Language Models Perform Generalizable Commonsense Inference?


Peifeng Wang, Filip Ilievski, Muhao Chen, Xiang Ren
Annual Meeting of the Association for Computational Linguistics - Findings, 2021
arxiv / code

An analysis of the ability of LMs to perform generalizable commonsense inference, in terms of knowledge capacity, transferability, and induction.


Learning to Deceive Knowledge Graph Augmented Models via Targeted Perturbation


Mrigank Raman, Siddhant Agarwal, Peifeng Wang, Aaron Chan, Hansen Wang, Sungchul Kim, Ryan Rossi, Handong Zhao, Nedim Lipka, Xiang Ren
International Conference on Learning Representations, 2021
arxiv / code

An analysis of the effects of strategically perturbed knowledge graphs (KGs) on the downstream performance of KG-augmented models.


Scalable Multi-Hop Relational Reasoning for Knowledge-Aware Question Answering


Yanlin Feng, Xinyue Chen, Bill Yuchen Lin, Peifeng Wang, Jun Yan, Xiang Ren
Conference on Empirical Methods in Natural Language Processing, 2020
arxiv / code

A novel knowledge-aware approach that equips pre-trained language models with a multi-hop graph relation network.


Connecting the Dots: A Knowledgeable Path Generator for Commonsense Question Answering


Peifeng Wang, Nanyun Peng, Filip Ilievski, Pedro Szekely, Xiang Ren
Conference on Empirical Methods in Natural Language Processing - Findings, 2020
arxiv / code

A general commonsense QA framework augmented with a knowledgeable path generator, which learns to connect a pair of concepts in text with a dynamic, and potentially novel, multi-hop relational path.


Logic Attention Based Neighborhood Aggregation for Inductive Knowledge Graph Embedding


Peifeng Wang, Jialong Han, Chenliang Li, Rong Pan
AAAI Conference on Artificial Intelligence, 2019
arxiv / code

An inductive knowledge graph embedding model, the Logic Attention Network, which aggregates entity neighbors using both rule- and network-based attention weights.


Incorporating GAN for Negative Sampling in Knowledge Representation Learning


Peifeng Wang, Shuangyin Li, Rong Pan
AAAI Conference on Artificial Intelligence, 2018
arxiv

A knowledge representation learning framework based on Generative Adversarial Networks (GANs) that obtains high-quality negative samples for more effective knowledge embedding.




Work Experience

  • Summer 2023, Research Scientist Intern, FAIR Labs@Meta, New York, USA
  • Summer 2022, Applied Scientist Intern, Search@Amazon, Palo Alto, USA
  • Summer 2021, Research Intern, Azure AI@Microsoft, Remote
  • Summer 2018, Research Intern, AI Lab@Tencent, Shenzhen, China




Design and source code from Jon Barron's website