Nicholas Carlini
Google DeepMind
Verified email at google.com
Title · Cited by · Year
Towards evaluating the robustness of neural networks
N Carlini, D Wagner
2017 IEEE Symposium on Security and Privacy (SP), 39-57, 2017
Cited by 8747 · 2017
Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples
A Athalye, N Carlini, D Wagner
International Conference on Machine Learning (ICML), 2018
Cited by 3272 · 2018
Mixmatch: A holistic approach to semi-supervised learning
D Berthelot, N Carlini, I Goodfellow, N Papernot, A Oliver, CA Raffel
Advances in Neural Information Processing Systems, 5050-5060, 2019
Cited by 3028 · 2019
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence
K Sohn, D Berthelot, CL Li, Z Zhang, N Carlini, ED Cubuk, A Kurakin, ...
arXiv preprint arXiv:2001.07685, 2020
Cited by 2872 · 2020
Adversarial examples are not easily detected: Bypassing ten detection methods
N Carlini, D Wagner
Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security …, 2017
Cited by 1910 · 2017
Audio adversarial examples: Targeted attacks on speech-to-text
N Carlini, D Wagner
2018 IEEE Security and Privacy Workshops (SPW), 1-7, 2018
Cited by 1224 · 2018
The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks
N Carlini, C Liu, J Kos, Ú Erlingsson, D Song
Cited by 1125* · 2019
Extracting training data from large language models
N Carlini, F Tramer, E Wallace, M Jagielski, A Herbert-Voss, K Lee, ...
30th USENIX Security Symposium (USENIX Security 21), 2633-2650, 2021
Cited by 1080 · 2021
ReMixMatch: Semi-Supervised Learning with Distribution Alignment and Augmentation Anchoring
D Berthelot, N Carlini, ED Cubuk, A Kurakin, K Sohn, H Zhang, C Raffel
arXiv preprint arXiv:1911.09785, 2019
Cited by 1016 · 2019
On Evaluating Adversarial Robustness
N Carlini, A Athalye, N Papernot, W Brendel, J Rauber, D Tsipras, ...
arXiv preprint arXiv:1902.06705, 2019
Cited by 887 · 2019
On adaptive attacks to adversarial example defenses
F Tramer, N Carlini, W Brendel, A Madry
Advances in Neural Information Processing Systems 33, 1633-1645, 2020
Cited by 779 · 2020
Hidden Voice Commands
N Carlini, P Mishra, T Vaidya, Y Zhang, M Sherr, C Shields, D Wagner, ...
USENIX Security Symposium, 513-530, 2016
Cited by 742 · 2016
cleverhans v2.0.0: an adversarial machine learning library
N Papernot, N Carlini, I Goodfellow, R Feinman, F Faghri, A Matyasko, ...
arXiv preprint arXiv:1610.00768, 2016
Cited by 691* · 2016
Control-flow bending: On the effectiveness of control-flow integrity
N Carlini, A Barresi, M Payer, D Wagner, TR Gross
24th USENIX Security Symposium (USENIX Security 15), 161-176, 2015
Cited by 540 · 2015
Provably minimally-distorted adversarial examples
N Carlini, G Katz, C Barrett, DL Dill
arXiv preprint arXiv:1709.10207, 2017
Cited by 534* · 2017
ROP is Still Dangerous: Breaking Modern Defenses
N Carlini, D Wagner
23rd USENIX Security Symposium (USENIX Security 14), 385-399, 2014
Cited by 495 · 2014
Measuring Robustness to Natural Distribution Shifts in Image Classification
R Taori, A Dave, V Shankar, N Carlini, B Recht, L Schmidt
arXiv preprint arXiv:2007.00644, 2020
Cited by 456 · 2020
Imperceptible, robust, and targeted adversarial examples for automatic speech recognition
Y Qin, N Carlini, G Cottrell, I Goodfellow, C Raffel
International Conference on Machine Learning, 5231-5240, 2019
Cited by 430 · 2019
Adversarial example defense: Ensembles of weak defenses are not strong
W He, J Wei, X Chen, N Carlini, D Song
11th USENIX Workshop on Offensive Technologies (WOOT 17), 2017
Cited by 425 · 2017
Label-only membership inference attacks
CA Choquette-Choo, F Tramer, N Carlini, N Papernot
International Conference on Machine Learning, 1964-1974, 2021
Cited by 376 · 2021
Articles 1–20