Marie-Anne Lachaux
Mistral AI
Verified email at mistral.ai
Title
Cited by
Year
Llama: Open and efficient foundation language models
H Touvron, T Lavril, G Izacard, X Martinet, MA Lachaux, T Lacroix, ...
arXiv preprint arXiv:2302.13971, 2023
Cited by 12819 · 2023
Llama 2: Open foundation and fine-tuned chat models
H Touvron, L Martin, K Stone, P Albert, A Almahairi, Y Babaei, ...
arXiv preprint arXiv:2307.09288, 2023
Cited by 12520 · 2023
Mixtral of experts
AQ Jiang, A Sablayrolles, A Roux, A Mensch, B Savary, C Bamford, ...
arXiv preprint arXiv:2401.04088, 2024
Cited by 1336 · 2024
Mistral 7B
AQ Jiang, A Sablayrolles, A Mensch, C Bamford, DS Chaplot, D Casas, ...
arXiv preprint arXiv:2310.06825, 2023
Cited by 1268 · 2023
CCNet: Extracting high quality monolingual datasets from web crawl data
G Wenzek, MA Lachaux, A Conneau, V Chaudhary, F Guzmán, A Joulin, ...
arXiv preprint arXiv:1911.00359, 2019
Cited by 668 · 2019
Poly-encoders: Transformer architectures and pre-training strategies for fast and accurate multi-sentence scoring
S Humeau
arXiv preprint arXiv:1905.01969, 2019
Cited by 607 · 2019
Unsupervised translation of programming languages
MA Lachaux, B Roziere, L Chanussot, G Lample
arXiv preprint arXiv:2006.03511, 2020
Cited by 446* · 2020
LLaMA: open and efficient foundation language models. arXiv
H Touvron, T Lavril, G Izacard, X Martinet, MA Lachaux, T Lacroix, ...
arXiv preprint arXiv:2302.13971, 2023
Cited by 310 · 2023
Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models
H Touvron, T Lavril, G Izacard, X Martinet, MA Lachaux
arXiv preprint arXiv:2302.13971, 2023
Cited by 196 · 2023
DOBF: A Deobfuscation Pre-Training Objective for Programming Languages
MA Lachaux, B Roziere, M Szafraniec, G Lample
Advances in Neural Information Processing Systems 34, 2021
Cited by 157* · 2021
Llama 2: open foundation and fine-tuned chat models. arXiv
H Touvron, L Martin, K Stone, P Albert, A Almahairi, Y Babaei, ...
arXiv preprint arXiv:2307.09288, 2023
Cited by 153 · 2023
Llama 2: Open foundation and fine-tuned chat models. arXiv 2023
H Touvron, L Martin, K Stone, P Albert, A Almahairi, Y Babaei, ...
arXiv preprint arXiv:2307.09288, 2023
Cited by 144
Hypertree proof search for neural theorem proving
G Lample, T Lacroix, MA Lachaux, A Rodriguez, A Hayat, T Lavril, ...
Advances in neural information processing systems 35, 26337-26349, 2022
Cited by 135 · 2022
Mistral 7B (2023)
AQ Jiang, A Sablayrolles, A Mensch, C Bamford, DS Chaplot, ...
arXiv preprint arXiv:2310.06825, 2023
Cited by 118 · 2023
Llama 2: open foundation and fine-tuned chat models. CoRR abs/2307.09288 (2023)
H Touvron, L Martin, K Stone, P Albert, A Almahairi, Y Babaei, ...
arXiv preprint arXiv:2307.09288, 2023
Cited by 62 · 2023
Llama 2: Open foundation and fine-tuned chat models, 2023b
H Touvron, L Martin, K Stone, P Albert, A Almahairi, Y Babaei, ...
URL https://arxiv.org/abs/2307.09288, 2023
Cited by 39 · 2023
LLaMA: Open and Efficient Foundation Language Models. ArXiv (Cornell University)
H Touvron, T Lavril, G Izacard, X Martinet, MA Lachaux, T Lacroix, ...
Cited by 20 · 2023
Target conditioning for one-to-many generation
MA Lachaux, A Joulin, G Lample
arXiv preprint arXiv:2009.09758, 2020
Cited by 17 · 2020
Llama 2: Open foundation and fine-tuned chat models. arXiv [Preprint](2023)
H Touvron, L Martin, K Stone, P Albert, A Almahairi, Y Babaei, ...
URL https://arxiv.org/abs/2307.09288, 2023
Cited by 14
Llama 2: Open Foundation and Fine-Tuned Chat Models (Jul 2023)
H Touvron, L Martin, K Stone, P Albert, A Almahairi, Y Babaei, ...
arXiv preprint arXiv:2307.09288, 2023
Cited by 8 · 2023
Articles 1–20