Nan Duan
Senior Principal Research Manager, Microsoft Research
Verified email at microsoft.com - Homepage
Title · Cited by · Year
CodeBERT: A pre-trained model for programming and natural languages
Z Feng, D Guo, D Tang, N Duan, X Feng, M Gong, L Shou, B Qin, T Liu, ...
arXiv preprint arXiv:2002.08155, 2020
Cited by 2447 · 2020
Unicoder-VL: A universal encoder for vision and language by cross-modal pre-training
G Li, N Duan, Y Fang, M Gong, D Jiang
Proceedings of the AAAI Conference on Artificial Intelligence 34 (07), 11336 …, 2020
Cited by 931 · 2020
GraphCodeBERT: Pre-training code representations with data flow
D Guo, S Ren, S Lu, Z Feng, D Tang, S Liu, L Zhou, N Duan, ...
arXiv preprint arXiv:2009.08366, 2020
Cited by 912 · 2020
CodeXGLUE: A machine learning benchmark dataset for code understanding and generation
S Lu, D Guo, S Ren, J Huang, A Svyatkovskiy, A Blanco, C Clement, ...
arXiv preprint arXiv:2102.04664, 2021
Cited by 730 · 2021
Visual ChatGPT: Talking, drawing and editing with visual foundation models
C Wu, S Yin, W Qi, X Wang, Z Tang, N Duan
arXiv preprint arXiv:2303.04671, 2023
Cited by 560 · 2023
K-Adapter: Infusing knowledge into pre-trained models with adapters
R Wang, D Tang, N Duan, Z Wei, X Huang, G Cao, D Jiang, M Zhou
arXiv preprint arXiv:2002.01808, 2020
Cited by 550 · 2020
UniXcoder: Unified cross-modal pre-training for code representation
D Guo, S Lu, N Duan, Y Wang, M Zhou, J Yin
arXiv preprint arXiv:2203.03850, 2022
Cited by 475 · 2022
ProphetNet: Predicting future n-gram for sequence-to-sequence pre-training
W Qi, Y Yan, Y Gong, D Liu, N Duan, J Chen, R Zhang, M Zhou
arXiv preprint arXiv:2001.04063, 2020
Cited by 461 · 2020
UniVL: A unified video and language pre-training model for multimodal understanding and generation
H Luo, L Ji, B Shi, H Huang, N Duan, T Li, J Li, T Bharti, M Zhou
arXiv preprint arXiv:2002.06353, 2020
Cited by 457 · 2020
CLIP4Clip: An empirical study of CLIP for end to end video clip retrieval and captioning
H Luo, L Ji, M Zhong, Y Chen, W Lei, N Duan, T Li
Neurocomputing 508, 293-304, 2022
Cited by 451 · 2022
Question generation for question answering
N Duan, D Tang, P Chen, M Zhou
Proceedings of the 2017 conference on empirical methods in natural language …, 2017
Cited by 339 · 2017
CLIP4Clip: An empirical study of CLIP for end to end video clip retrieval
H Luo, L Ji, M Zhong, Y Chen, W Lei, N Duan, T Li
arXiv preprint arXiv:2104.08860, 2021
Cited by 306 · 2021
XGLUE: A new benchmark dataset for cross-lingual pre-training, understanding and generation
Y Liang, N Duan, Y Gong, N Wu, F Guo, W Qi, M Gong, L Shou, D Jiang, ...
arXiv preprint arXiv:2004.01401, 2020
Cited by 305 · 2020
NÜWA: Visual synthesis pre-training for neural visual world creation
C Wu, J Liang, L Ji, F Yang, Y Fang, D Jiang, N Duan
European conference on computer vision, 720-736, 2022
Cited by 285 · 2022
Constraint-based question answering with knowledge graph
J Bao, N Duan, Z Yan, M Zhou, T Zhao
Proceedings of COLING 2016, the 26th international conference on …, 2016
Cited by 285 · 2016
ImageBERT: Cross-modal pre-training with large-scale weak-supervised image-text data
D Qi, L Su, J Song, E Cui, T Bharti, A Sacheti
arXiv preprint arXiv:2001.07966, 2020
Cited by 284 · 2020
Pretraining-based natural language generation for text summarization
H Zhang, J Xu, J Wang
arXiv preprint arXiv:1902.09243, 2019
Cited by 271 · 2019
AGIEval: A human-centric benchmark for evaluating foundation models
W Zhong, R Cui, Y Guo, Y Liang, S Lu, Y Wang, A Saied, W Chen, ...
arXiv preprint arXiv:2304.06364, 2023
Cited by 259 · 2023
GraphCodeBERT: Pre-training code representations with data flow
D Guo, S Ren, S Lu, Z Feng, D Tang, S Liu, D Drain, N Sundaresan, J Yin, D Jiang, M Zhou
9th International Conference on Learning Representations (ICLR), 2021
Cited by 251 · 2021
Articles 1–20