Jonathan Peck

Post-doctoral researcher, Ghent University

I am a post-doctoral researcher at Ghent University, affiliated with the Department of Applied Mathematics, Computer Science and Statistics (TWIST) as well as the Data Mining and Modeling for Biomedicine group at the VIB Inflammation Research Center. I am also a teaching assistant for the Artificial Intelligence course offered by Ghent University at the Faculty of Sciences.

My main focus of research is the study of adversarial examples. Broadly speaking, adversarial examples are input samples deliberately crafted by an adversary to elicit specific, attacker-chosen predictions from a targeted machine learning model. The intent is usually to cause some form of harm, such as bypassing automated content filters, malware protections or biometric security systems. In my work, I try to devise countermeasures against this form of exploitation.
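For illustration, a classic attack of this kind is the fast gradient sign method (FGSM), which perturbs each input feature by a small budget ε in the direction that increases the model's loss. Below is a minimal sketch on a toy logistic-regression model; the weights, input and ε are invented for the example and do not come from any real system:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, w, b, y_true, eps):
    """Perturb x by eps per feature in the direction that increases the loss."""
    p = sigmoid(w @ x + b)           # predicted probability of class 1
    grad = (p - y_true) * w          # d(cross-entropy)/dx for logistic regression
    return x + eps * np.sign(grad)   # one signed gradient step, bounded by eps

w = np.array([1.0, -2.0, 0.5])       # hypothetical trained weights
b = 0.0
x = np.array([0.4, -0.3, 1.2])       # a clean input, classified as class 1
x_adv = fgsm(x, w, b, y_true=1.0, eps=0.5)

print(sigmoid(w @ x + b) > 0.5)      # True: clean input is class 1
print(sigmoid(w @ x_adv + b) > 0.5)  # False: the small perturbation flips the label
```

Even though no feature changes by more than 0.5, the prediction flips; against deep networks the required perturbation is often imperceptible to humans.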

Aside from research into adversarial examples, I am also interested in issues of fairness in machine learning. In developing and deploying machine learning systems, researchers and practitioners alike are often ignorant of (or deliberately ignore) the disparate impact of their systems on women and minorities. Some of these tools, such as the recommender systems used by Twitter and Facebook, also facilitate the spread of hate and political extremism across the globe. We cannot afford to remain blind to these problems; the field of machine learning must take its social responsibilities seriously.

Research interests

  • Deep learning
  • Adversarial robustness
  • Trustworthiness of machine learning models
  • Algorithmic fairness

Education

Ghent University
2017 - 2023
Ph.D. Computer Science
Ghent University
2015 - 2017
M.Sc. Mathematical Informatics
graduated summa cum laude, supervised by Prof. Yvan Saeys
Ghent University
2012 - 2015
B.Sc. Computer Science


Publications

Calibrated Multi-Probabilistic Prediction as a Defense against Adversarial Attacks
BNAIC/Benelearn, 2021
Jonathan Peck, Bart Goossens, Yvan Saeys


Detecting adversarial manipulation using inductive Venn-ABERS predictors
Neurocomputing, 2020
Jonathan Peck, Bart Goossens, Yvan Saeys
Inline Detection of DGA Domains Using Side Information
IEEE Access, 2020
Raaghavi Sivaguru, Jonathan Peck, Femi Olumofin, Anderson Nascimento, Martine De Cock
Regional Image Perturbation Reduces Lp Norms of Adversarial Examples While Maintaining Model-to-model Transferability
International Conference on Machine Learning (ICML), Uncertainty & Robustness in Deep Learning (UDL), 2020
Utku Ozbulak, Jonathan Peck, Wesley De Neve, Bart Goossens, Yvan Saeys, Arnout Van Messem


Hardening DGA Classifiers Utilizing IVAP
IEEE Big Data, 2019
Charles Grumer, Jonathan Peck, Femi Olumofin, Anderson Nascimento, Martine De Cock
Distillation of Deep Reinforcement Learning Models using Fuzzy Inference Systems
BNAIC/Benelearn, 2019
Arne Gevaert, Jonathan Peck, Yvan Saeys
CharBot: A Simple and Effective Method for Evading DGA Classifiers
IEEE Access, 2019
Jonathan Peck, Claire Nie, Raaghavi Sivaguru, Charles Grumer, Femi Olumofin, Bin Yu, Anderson Nascimento, Martine De Cock
Detecting Adversarial Examples with Inductive Venn-ABERS Predictors
European Symposium on Artificial Neural Networks (ESANN), 2019
Jonathan Peck, Bart Goossens, Yvan Saeys


Lower bounds on the robustness to adversarial perturbations
Neural Information Processing Systems (NeurIPS), 2017
Jonathan Peck, Joris Roels, Bart Goossens, Yvan Saeys
Robustness of Classifiers to Adversarial Perturbations
Ghent University, 2017
Jonathan Peck, Joris Roels, Bart Goossens, Yvan Saeys

Other links

Support queer people in AI 🏳️‍🌈
My thoughts, ramblings and divine insights