| Title | Authors | Venue | Citations | Year |
|---|---|---|---|---|
| Co-Scale Conv-Attentional Image Transformers | W Xu, Y Xu, T Chang, Z Tu | ICCV 2021 (Oral) | 409 | 2021 |
| Pose Recognition with Cascade Transformers | K Li, S Wang, X Zhang, Y Xu, W Xu, Z Tu | CVPR 2021 | 265 | 2021 |
| Guided Variational Autoencoder for Disentanglement Learning | Z Ding, Y Xu, W Xu, G Parmar, Y Yang, M Welling, Z Tu | CVPR 2020 | 134 | 2020 |
| Line Segment Detection Using Transformers without Edges | Y Xu, W Xu, D Cheung, Z Tu | CVPR 2021 (Oral) | 131 | 2021 |
| Attentional Constellation Nets for Few-Shot Learning | W Xu, Y Xu, H Wang, Z Tu | ICLR 2021 | 117 | 2020 |
| BLIVA: A Simple Multimodal LLM for Better Handling of Text-Rich Visual Questions | W Hu, Y Xu, Y Li, W Li, Z Chen, Z Tu | Proceedings of the AAAI Conference on Artificial Intelligence, 38(3), 2256-2264 | 111 | 2024 |
| Mixed-Phase TiO₂ Nanorods Assembled Microsphere: Crystal Phase Control and Photovoltaic Application | P Ruan, J Qian, Y Xu, H Xie, C Shao, X Zhou | CrystEngComm, 15(25), 5093-5099 | 30 | 2013 |
| Convolutions and Self-Attention: Re-interpreting Relative Positions in Pre-trained Language Models | TA Chang, Y Xu, W Xu, Z Tu | ACL 2021 | 15 | 2021 |
| On the Feasibility of Cross-Task Transfer with Model-Based Reinforcement Learning | Y Xu, N Hansen, Z Wang, YC Chan, H Su, Z Tu | ICLR 2023 | 14 | 2022 |
| Rethinking Exposure Bias in Language Modeling | Y Xu, K Zhang, H Dong, Y Sun, W Zhao, Z Tu | arXiv preprint arXiv:1910.11235 | 7 | 2019 |
| Neural Program Synthesis by Self-Learning | Y Xu, L Dai, U Singh, K Zhang, Z Tu | arXiv preprint arXiv:1910.05865 | 4 | 2019 |
| Exploring Visual Perception with Transformers and World Model Representation | Y Xu | University of California, San Diego | | 2023 |