Kris Cao
DeepMind
Verified email at deepmind.com - Homepage
Title · Cited by · Year
Gemini: a family of highly capable multimodal models
G Team, R Anil, S Borgeaud, Y Wu, JB Alayrac, J Yu, R Soricut, ...
arXiv preprint arXiv:2312.11805, 2023
348 · 2023
Emergent communication through negotiation
K Cao, A Lazaridou, M Lanctot, JZ Leibo, K Tuyls, S Clark
arXiv preprint arXiv:1804.03980, 2018
175 · 2018
Mind the gap: Assessing temporal generalization in neural language models
A Lazaridou, A Kuncoro, E Gribovskaya, D Agrawal, A Liska, T Terzi, ...
Advances in Neural Information Processing Systems 34, 29348-29363, 2021
116* · 2021
A joint model for word embedding and word morphology
K Cao, M Rei
arXiv preprint arXiv:1606.02601, 2016
113 · 2016
Latent variable dialogue models and their diversity
K Cao, S Clark
arXiv preprint arXiv:1702.05962, 2017
87 · 2017
Game Plan: What AI can do for Football, and What Football can do for AI
K Tuyls, S Omidshafiei, P Muller, Z Wang, J Connor, D Hennes, I Graham, ...
Journal of Artificial Intelligence Research 71, 41-88, 2021
80 · 2021
Control prefixes for parameter-efficient text generation
J Clive, K Cao, M Rei
arXiv preprint arXiv:2110.08329, 2021
59 · 2021
Factorising AMR generation through syntax
K Cao, S Clark
arXiv preprint arXiv:1804.07707, 2018
26 · 2018
Multiagent off-screen behavior prediction in football
S Omidshafiei, D Hennes, M Garnelo, Z Wang, A Recasens, E Tarassov, ...
Scientific reports 12 (1), 8638, 2022
14 · 2022
You should evaluate your language model on marginal likelihood over tokenisations
K Cao, L Rimell
arXiv preprint arXiv:2109.02550, 2021
11 · 2021
Towards coherent and consistent use of entities in narrative generation
P Papalampidi, K Cao, T Kocisky
International Conference on Machine Learning, 17278-17294, 2022
8 · 2022
Learning meaning representations for text generation with deep generative models
K Cao
8 · 2020
Modelling latent skills for multitask language generation
K Cao, D Yogatama
arXiv preprint arXiv:2002.09543, 2020
4 · 2020
Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context
M Reid, N Savinov, D Teplyashin, D Lepikhin, T Lillicrap, J Alayrac, ...
arXiv preprint arXiv:2403.05530, 2024
3 · 2024
What is the best recipe for character-level encoder-only modelling?
K Cao
arXiv preprint arXiv:2305.05461, 2023
2 · 2023
Unpacking Tokenization: Evaluating Text Compression and its Correlation with Model Performance
O Goldman, A Caciularu, M Eyal, K Cao, I Szpektor, R Tsarfaty
arXiv preprint arXiv:2403.06265, 2024
2024
Dynamic entity representations for sequence generation
KY Cao, T Kocisky, P Papalampidi
US Patent App. 17/960,775, 2023
2023
Factorising
K Cao, S Clark
Proceedings of the 2019 Conference of the North, 2019
2019
Proceedings of the Third Workshop on Representation Learning for NLP
I Augenstein, K Cao, H He, F Hill, S Gella, J Kiros, H Mei, D Misra
Proceedings of the Third Workshop on Representation Learning for NLP, 2018
2018
CPGS First Year Report
K Cao
2015
Articles 1–20