About me
(This site is under construction; apologies for the temporarily limited information.)
I am a Ph.D. candidate in applied physics at CEITEC in Brno, Czech Republic. I also collaborate with KASL and was previously a visiting Ph.D. student at the University of Cambridge, advised by David Krueger.
My current research focuses on foundational topics in machine learning (ML): I am mostly interested in understanding deep learning through empirical and theoretical methods, often grounded in physics. The main motivation is to achieve interpretable and safe ML/AI for science and general use. My work spans overparameterization, loss-landscape geometry, sparsity, and adversarial robustness of deep networks. Before that, I worked on interpretable machine learning applied to spectroscopic data and on physics-inspired learning.
When I’m not busy with ML experiments, you can find me bouldering or cycling. I also enjoy hiking, playing guitar, and reading physics books from my vast collection.
News
Jan. 2025: Input space mode connectivity was accepted to ICLR 2025.
Oct. 2024: Input space mode connectivity was accepted for an oral presentation at SciForDL at NeurIPS 2024.
Aug. 2024: I am attending the IAIFI summer school and workshop at MIT, where I will give a talk on input space mode connectivity.
June 2024: I am visiting KASL $\subset$ CBL, University of Cambridge for four months.
May 2024: I will be at the Youth in High Dimensions workshop at ICTP in Trieste, Italy.
Research interests
- Machine learning foundations
- overparameterization, double descent, NTK
- loss-landscape symmetries, mode connectivity
- sparsity, lottery tickets
- ANN interpretability (for spectroscopic data)
- feature visualization, optimal manifold
- sparsity for (mechanistic) interpretability
- custom loss penalization
- AI safety
- LLM jailbreaking (defenses)
Current projects

Input space mode connectivity
We generalized the concept of loss landscape mode connectivity to the input space of deep neural networks.
ICLR | arXiv | Talk (D. Krueger)
Sparse, interpretable ANNs for spectroscopic data
We study custom loss penalties for MLPs that lead to interpretable, spectroscopically relevant weights in the first layer.
Code
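As a rough illustration of the idea, a first-layer sparsity penalty can be sketched as below. This is a minimal NumPy example, not the project's actual implementation; the tiny MLP, the penalty weight `lam`, and the function names are hypothetical stand-ins.

```python
import numpy as np

def mlp_forward(x, W1, b1, W2, b2):
    """Two-layer MLP with ReLU hidden activations."""
    h = np.maximum(0.0, x @ W1 + b1)
    return h @ W2 + b2

def penalized_loss(x, y, W1, b1, W2, b2, lam=1e-2):
    """Mean squared error plus an L1 penalty on the first-layer
    weights only, encouraging sparse (interpretable) input filters.
    `lam` is a hypothetical penalty strength, chosen for illustration."""
    pred = mlp_forward(x, W1, b1, W2, b2)
    mse = np.mean((pred - y) ** 2)
    return mse + lam * np.abs(W1).sum()
```

Minimizing a loss of this shape drives most first-layer weights toward zero, so the surviving weights can be read off as the input (e.g. spectral) features the model actually uses.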
Lottery tickets vs. double descent
In this solo project, I study intrinsic limits on lottery ticket performance that depend on the initial effective complexity.
Selected past projects
