Maksym Andriushchenko

Enjoying the gorgeous 🇨🇭 peaks! This one is Rochers de Naye.

[email]     [twitter]     [google scholar]     [github]     [cv]

Short bio. I’m a fourth-year PhD student in computer science at EPFL 🇨🇭, advised by Nicolas Flammarion. I did my MSc at Saarland University and the University of Tübingen, and interned at Adobe Research. My research is supported by the Google PhD Fellowship and the OpenPhil AI Fellowship.

Research interests. My primary research goal is to understand generalization in deep learning. More specifically, I’m interested in the training dynamics of gradient methods (e.g., SGD with large step sizes, sharpness-aware minimization, fine-tuning language models), adversarial robustness (formal guarantees, square attack, fast adversarial training, RobustBench), and out-of-distribution generalization (curious ReLU properties, generalization to image corruptions and digital manipulations).

On Ukraine. Since I’m from Ukraine, I’m often asked about the situation in my country and how one can help. The most effective way is to donate to local Ukrainian organizations helping on the ground, e.g., see this list, which includes both trusted military and humanitarian organizations. You can also host displaced scholars and students from Ukraine, e.g., via the #ScienceForUkraine project, where I’m involved as a volunteer. Finally, you can help simply by spreading the word about the war and going to demonstrations in your city. It’s very important that we don’t normalize annexations of territories, numerous war crimes, mass deportations, and nuclear threats. Otherwise, we’ll end up in a world we don’t really want to be in.

news

Mar 13, 2023 A talk at the OOD Robustness + Generalization Reading Group at CMU about our paper A modern look at the relationship between sharpness and generalization. Slides: pdf, pptx.
Feb 15, 2023 Our new paper A modern look at the relationship between sharpness and generalization is available online! Do flatter minima generalize better? Well, not really.
[Figure: sharpness-vs-generalization summary]
Dec 9, 2022 A talk at the University of Luxembourg about our work with Adobe: ARIA: Adversarially Robust Image Attribution for Content Provenance.
Dec 1, 2022 A talk at the ML and Simulation Science Lab of the University of Stuttgart about RobustBench and SGD with large step sizes learns sparse features.
Nov 28, 2022 Going to NeurIPS’22 in New Orleans. Feel free to ping me if you want to chat!
Oct 28, 2022 A talk at the ELLIS Mathematics of Deep Learning reading group about our ICML’22 paper Towards Understanding Sharpness-Aware Minimization. Slides: pdf, pptx.
Oct 12, 2022 Our paper SGD with large step sizes learns sparse features is available online! TL;DR: loss stabilization achieved via SGD with large step sizes leads to hidden dynamics that promote sparse feature learning. Also see this Twitter thread for a quick summary of the main ideas.
Oct 7, 2022 Recognized as one of the top reviewers at NeurIPS’22. Yay! 🎉
Sep 7, 2022 A talk at the Machine Learning Security Seminar hosted by the University of Cagliari about our paper ARIA: Adversarially Robust Image Attribution for Content Provenance (available on YouTube).
Sep 1, 2022 Truly excited to be selected for the Google PhD Fellowship and the OpenPhil AI Fellowship!