Jonathan Peck
Lower bounds on the robustness to adversarial perturbations
J Peck, J Roels, B Goossens, Y Saeys
Advances in Neural Information Processing Systems, 804-813, 2017
Cited by 45
CharBot: A Simple and Effective Method for Evading DGA Classifiers
J Peck, C Nie, R Sivaguru, C Grumer, F Olumofin, B Yu, A Nascimento, ...
IEEE Access 7, 91759-91771, 2019
Cited by 13
Detecting Adversarial Examples with Inductive Venn-ABERS Predictors
J Peck, B Goossens, Y Saeys
European Symposium on Artificial Neural Networks, Computational Intelligence …, 2019
Cited by 2
Hardening DGA Classifiers Utilizing IVAP
C Grumer, J Peck, F Olumofin, A Nascimento, M De Cock
IEEE Big Data, 2019
Cited by 1
Distillation of Deep Reinforcement Learning Models using Fuzzy Inference Systems
A Gevaert, J Peck, Y Saeys
The 31st Benelux Conference on Artificial Intelligence (BNAIC 2019) and the …, 2019
Cited by 1
Calibrated Multi-Probabilistic Prediction as a Defense against Adversarial Attacks.
J Peck, B Goossens, Y Saeys
BNAIC/BENELEARN, 2019
Cited by 1
Regional Image Perturbation Reduces Norms of Adversarial Examples While Maintaining Model-to-model Transferability
U Ozbulak, J Peck, W De Neve, B Goossens, Y Saeys, A Van Messem
arXiv preprint arXiv:2007.03198, 2020
Detecting adversarial manipulation using inductive Venn-ABERS predictors
J Peck, B Goossens, Y Saeys
Neurocomputing, 2020
Inline Detection of DGA Domains Using Side Information
R Sivaguru, J Peck, F Olumofin, A Nascimento, M De Cock
arXiv preprint arXiv:2003.05703, 2020
Robustness of Classifiers to Adversarial Perturbations
J Peck
2017