Milad Nasr
Google DeepMind
Verified email at srxzr.com - Homepage
Title · Cited by · Year
Comprehensive Privacy Analysis of Deep Learning: Stand-alone and Federated Learning under Passive and Active White-box Inference Attacks
M Nasr, R Shokri, A Houmansadr
2019 IEEE Symposium on Security and Privacy, 2019
Cited by 1755* · 2019
Gemini: a family of highly capable multimodal models
G Team, R Anil, S Borgeaud, Y Wu, JB Alayrac, J Yu, R Soricut, ...
arXiv preprint arXiv:2312.11805, 2023
Cited by 1042 · 2023
Universal and transferable adversarial attacks on aligned language models
A Zou, Z Wang, N Carlini, M Nasr, JZ Kolter, M Fredrikson
arXiv preprint arXiv:2307.15043, 2023
Cited by 538 · 2023
Machine Learning with Membership Privacy using Adversarial Regularization
M Nasr, R Shokri, A Houmansadr
Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications …, 2018
Cited by 487 · 2018
Membership inference attacks from first principles
N Carlini, S Chien, M Nasr, S Song, A Terzis, F Tramer
2022 IEEE Symposium on Security and Privacy (SP), 1897-1914, 2022
Cited by 480 · 2022
Extracting training data from diffusion models
N Carlini, J Hayes, M Nasr, M Jagielski, V Sehwag, F Tramer, B Balle, ...
32nd USENIX Security Symposium (USENIX Security 23), 5253-5270, 2023
Cited by 379 · 2023
Adversary Instantiation: Lower Bounds for Differentially Private Machine Learning
M Nasr, S Song, A Thakurta, N Papernot, N Carlini
2021 IEEE Symposium on Security and Privacy, 2021
Cited by 200 · 2021
Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context
M Reid, N Savinov, D Teplyashin, D Lepikhin, T Lillicrap, J Alayrac, ...
arXiv preprint arXiv:2403.05530, 2024
Cited by 196 · 2024
DeepCorr: Strong Flow Correlation Attacks on Tor Using Deep Learning
M Nasr, A Bahramali, A Houmansadr
Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications …, 2018
Cited by 189 · 2018
Are aligned neural networks adversarially aligned?
N Carlini, M Nasr, CA Choquette-Choo, M Jagielski, I Gao, PWW Koh, ...
Advances in Neural Information Processing Systems 36, 2024
Cited by 159 · 2024
Scalable Extraction of Training Data from (Production) Language Models
M Nasr, N Carlini, J Hayase, M Jagielski, AF Cooper, D Ippolito, ...
arXiv preprint arXiv:2311.17035, 2023
Cited by 127 · 2023
Defeating DNN-Based Traffic Analysis Systems in Real-Time With Blind Adversarial Perturbations
M Nasr, A Bahramali, A Houmansadr
30th USENIX Security Symposium (USENIX Security 21), 2705-2722, 2021
Cited by 109 · 2021
Preventing Verbatim Memorization in Language Models Gives a False Sense of Privacy
D Ippolito, F Tramèr, M Nasr, C Zhang, M Jagielski, K Lee, ...
arXiv preprint arXiv:2210.17546, 2022
Cited by 92 · 2022
Compressive Traffic Analysis: A New Paradigm for Scalable Traffic Analysis
M Nasr, A Houmansadr, A Mazumdar
Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications …, 2017
Cited by 81 · 2017
Mitigating Membership Inference Attacks by Self-Distillation Through a Novel Ensemble Architecture
X Tang, S Mahloujifar, L Song, V Shejwalkar, M Nasr, A Houmansadr, ...
31st USENIX Security Symposium (USENIX Security 22), 1433-1450, 2022
Cited by 68 · 2022
Daemo: A Self-Governed Crowdsourcing Marketplace
SN Gaikwad, D Morina, R Nistala, M Agarwal, A Cossette, R Bhanu, ...
Adjunct Proceedings of the 28th Annual ACM Symposium on User Interface …, 2015
Cited by 65 · 2015
Robust adversarial attacks against DNN-based wireless communication systems
A Bahramali, M Nasr, A Houmansadr, D Goeckel, D Towsley
Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications …, 2021
Cited by 52 · 2021
The Waterfall of Liberty: Decoy Routing Circumvention that Resists Routing Attacks
M Nasr, H Zolfaghari, A Houmansadr
Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications …, 2017
Cited by 50 · 2017
Tight auditing of differentially private machine learning
M Nasr, J Hayes, T Steinke, B Balle, F Tramèr, M Jagielski, N Carlini, ...
32nd USENIX Security Symposium (USENIX Security 23), 1631-1648, 2023
Cited by 49 · 2023
Privacy auditing with one (1) training run
T Steinke, M Nasr, M Jagielski
Advances in Neural Information Processing Systems 36, 2024
Cited by 44 · 2024