Canwen Xu
Title · Cited by · Year
🤗 Transformers: State-of-the-art natural language processing
T Wolf, L Debut, V Sanh, J Chaumond, C Delangue, A Moi, P Cistac, ...
EMNLP 2020 (Demo), 38-45, 2020
Cited by: 7528* · Year: 2020
Multitask prompted training enables zero-shot task generalization
V Sanh, A Webson, C Raffel, SH Bach, L Sutawika, Z Alyafeai, A Chaffin, ...
ICLR 2022, 2021
Cited by: 284 · Year: 2021
🤗 Datasets: A Community Library for Natural Language Processing
Q Lhoest, AV del Moral, Y Jernite, A Thakur, P von Platen, S Patil, ...
EMNLP 2021 (Demo), 2021
Cited by: 143* · Year: 2021
BERT-of-Theseus: Compressing BERT by Progressive Module Replacing
C Xu, W Zhou, T Ge, F Wei, M Zhou
EMNLP 2020, 7859–7869, 2020
Cited by: 128 · Year: 2020
BERT Loses Patience: Fast and Robust Inference with Early Exit
W Zhou, C Xu, T Ge, J McAuley, K Xu, F Wei
NeurIPS 2020, 2020
Cited by: 121 · Year: 2020
PromptSource: An Integrated Development Environment and Repository for Natural Language Prompts
SH Bach, V Sanh, ZX Yong, A Webson, C Raffel, NV Nayak, A Sharma, ...
ACL 2022 (Demo), 2022
Cited by: 49 · Year: 2022
BLOOM: A 176B-parameter open-access multilingual language model
TL Scao, A Fan, C Akiki, E Pavlick, S Ilić, D Hesslow, R Castagné, ...
arXiv preprint arXiv:2211.05100, 2022
Cited by: 33 · Year: 2022
Pre-train and Plug-in: Flexible Conditional Text Generation with Variational Auto-Encoders
Y Duan, C Xu, J Pei, J Han, C Li
ACL 2020, 253–262, 2020
Cited by: 28 · Year: 2020
BERT learns to teach: Knowledge distillation with meta learning
W Zhou, C Xu, J McAuley
ACL 2022, 7037-7049, 2022
Cited by: 26* · Year: 2022
DLocRL: A deep learning pipeline for fine-grained location recognition and linking in tweets
C Xu, J Li, X Luo, J Pei, C Li, D Ji
The Web Conference (WWW) 2019, 3391-3397, 2019
Cited by: 25 · Year: 2019
Beyond Preserved Accuracy: Evaluating Loyalty and Robustness of BERT Compression
C Xu, W Zhou, T Ge, K Xu, J McAuley, F Wei
EMNLP 2021, 2021
Cited by: 22 · Year: 2021
LaPraDoR: Unsupervised Pretrained Dense Retriever for Zero-Shot Text Retrieval
C Xu, D Guo, N Duan, J McAuley
ACL 2022 (Findings), 2022
Cited by: 16 · Year: 2022
MATINF: A jointly labeled large-scale dataset for classification, question answering and summarization
C Xu, J Pei, H Wu, Y Liu, C Li
ACL 2020, 3586–3596, 2020
Cited by: 10 · Year: 2020
Improving Sequence-to-Sequence Pre-training via Sequence Span Rewriting
W Zhou, T Ge, C Xu, K Xu, F Wei
EMNLP 2021, 2021
Cited by: 9 · Year: 2021
Blow the Dog Whistle: A Chinese Dataset for Cant Understanding with Common Sense and World Knowledge
C Xu, W Zhou, T Ge, K Xu, J McAuley, F Wei
NAACL-HLT 2021, 2021
Cited by: 6 · Year: 2021
A survey on dynamic neural networks for natural language processing
C Xu, J McAuley
arXiv preprint arXiv:2202.07101, 2022
Cited by: 4 · Year: 2022
A survey on model compression for natural language processing
C Xu, J McAuley
arXiv preprint arXiv:2202.07105, 2022
Cited by: 3 · Year: 2022
UnihanLM: Coarse-to-Fine Chinese-Japanese Language Model Pretraining with the Unihan Database
C Xu, T Ge, C Li, F Wei
AACL-IJCNLP 2020, 201-211, 2020
Cited by: 3 · Year: 2020
Obj-GloVe: Scene-based contextual object embedding
C Xu, Z Chen, C Li
arXiv preprint arXiv:1907.01478, 2019
Cited by: 3 · Year: 2019
Leashing the Inner Demons: Self-Detoxification for Language Models
C Xu, Z He, Z He, J McAuley
AAAI 2022, 2022
Cited by: 2 · Year: 2022
Articles 1–20