Recent QA Models:

UnifiedQA: Crossing Format Boundaries With a Single QA System (2020) (see the usage sketch after this list)

ProQA: Resource-efficient method for pretraining a dense corpus index for open-domain QA and IR (2020)

TyDi QA: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages (2020)

Retrospective Reader for Machine Reading Comprehension (2020)

TANDA: Transfer and Adapt Pre-Trained Transformer Models for Answer Sentence Selection (AAAI 2020)
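
The UnifiedQA entry above originally linked to a demo; below is a minimal sketch of how one might query it through the Hugging Face transformers library instead. It assumes the allenai/unifiedqa-t5-small checkpoint on the Hugging Face Hub and UnifiedQA's convention of feeding a single lowercased "question \n context" string to a text-to-text model.

```python
# Minimal sketch, assuming the allenai/unifiedqa-t5-small checkpoint is available
# on the Hugging Face Hub (requires transformers, torch, and sentencepiece).
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_name = "allenai/unifiedqa-t5-small"  # assumed checkpoint name
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

def answer(question: str, context: str) -> str:
    # UnifiedQA reads one lowercased string: question, a "\n" separator, then context.
    text = f"{question} \n {context}".lower()
    input_ids = tokenizer(text, return_tensors="pt").input_ids
    output_ids = model.generate(input_ids, max_new_tokens=32)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

print(answer("What does TANDA transfer and adapt?",
             "TANDA transfers and adapts pre-trained transformer models "
             "for answer sentence selection."))
```

Because the model is text-to-text, the same interface covers the extractive, abstractive, multiple-choice, and yes/no formats that UnifiedQA was trained on; only the input string changes.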

Recent Language Models:

ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators, Kevin Clark, et al., ICLR, 2020.

TinyBERT: Distilling BERT for Natural Language Understanding, Xiaoqi Jiao, et al., Findings of EMNLP, 2020.

MiniLM: Deep Self-Attention Distillation for Task-Agnostic Compression of Pre-Trained Transformers, Wenhui Wang, et al., arXiv, 2020.

T5: Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer, Colin Raffel, et al., arXiv preprint, 2019.

ERNIE: Enhanced Language Representation with Informative Entities, Zhengyan Zhang, et al., ACL, 2019.

XLNet: Generalized Autoregressive Pretraining for Language Understanding, Zhilin Yang, et al., arXiv preprint, 2019.

ALBERT: A Lite BERT for Self-supervised Learning of Language Representations, Zhenzhong Lan, et al., arXiv preprint, 2019.

RoBERTa: A Robustly Optimized BERT Pretraining Approach, Yinhan Liu, et al., arXiv preprint, 2019.

DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter, Victor Sanh, et al., arXiv, 2019.

SpanBERT: Improving Pre-training by Representing and Predicting Spans, Mandar Joshi, et al., TACL, 2020.

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, Jacob Devlin, et al., NAACL, 2019.
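
Most of the encoders listed above (BERT, RoBERTa, ALBERT, DistilBERT, ELECTRA) are applied to extractive QA by adding a span-prediction head and fine-tuning on a dataset such as SQuAD. Below is a minimal sketch using the Hugging Face question-answering pipeline, assuming the distilbert-base-uncased-distilled-squad checkpoint (a SQuAD-fine-tuned DistilBERT); any other span-extraction checkpoint could be swapped in.

```python
# Minimal sketch, assuming the distilbert-base-uncased-distilled-squad checkpoint
# (a SQuAD-fine-tuned DistilBERT) is available on the Hugging Face Hub.
from transformers import pipeline

qa = pipeline("question-answering", model="distilbert-base-uncased-distilled-squad")

result = qa(
    question="What does ELECTRA pre-train its text encoder as?",
    context="ELECTRA pre-trains a text encoder as a discriminator that detects "
            "tokens replaced by a small generator network.",
)
# The pipeline returns a dict with the answer span and a confidence score.
print(result["answer"], round(result["score"], 3))
```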

Thanks to this repository: https://github.com/seriousran/awesome-qa