About me
(The site is under construction! Sorry for the temporarily limited information.)
NEW: For the summer of 2024, I am a visiting Ph.D. student at KASL $\subset$ ML group $\subset$ CBL $\subset$ University of Cambridge, working on input space mode connectivity (advised by D. Krueger).
I am a Ph.D. candidate in applied physics at CEITEC in the Czech Republic. My research focuses on interpretable machine learning (ML) applied to spectroscopic data and on physics-inspired learning. Lately, I have been mostly interested in understanding deep learning through empirical and theoretical methods, usually grounded in physics. The main motivation is to achieve interpretable and safe ML/AI for science and general use. This includes various topics in overparametrization, sparsity, and adversarial robustness of deep networks.
When I’m not busy with ML experiments, you can find me bouldering or cycling. I also enjoy hiking, playing guitar, and reading physics books from my vast collection.
Research interests
- Machine learning foundations
  - overparametrization, double descent, NTK
  - loss-landscape symmetries, mode connectivity
  - sparsity, lottery tickets
- ANN interpretability (for spectroscopic data)
  - feature visualization, optimal manifold
  - sparsity for (mechanistic) interpretability
  - custom loss penalization
Current projects
Sparse, interpretable ANNs for spectroscopic data
We study a custom loss penalization for MLPs that leads to interpretable, spectroscopically relevant weights in the first layer.
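A minimal sketch of the idea, not the actual project code: an MLP for spectra whose task loss is augmented with a penalty on the first-layer weights. The architecture sizes and the L1 penalty are illustrative assumptions; the penalty actually used in the project may differ.

```python
import torch
import torch.nn as nn

class SpectraMLP(nn.Module):
    """Toy MLP for spectra; the first-layer weights are the ones we want interpretable."""
    def __init__(self, n_channels=1000, n_hidden=64, n_classes=5):
        super().__init__()
        self.first = nn.Linear(n_channels, n_hidden)
        self.head = nn.Sequential(nn.ReLU(), nn.Linear(n_hidden, n_classes))

    def forward(self, x):
        return self.head(self.first(x))

def penalized_loss(model, x, y, lam=1e-3):
    # Task loss plus a penalty on the first layer only; L1 is used here purely
    # as a placeholder for the custom, spectroscopy-motivated penalization.
    task = nn.functional.cross_entropy(model(x), y)
    penalty = model.first.weight.abs().sum()
    return task + lam * penalty
```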
Lottery tickets vs. double descent
In this solo project, I study intrinsic limitations of lottery-ticket performance that depend on the initial effective complexity of the network.
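For context, a minimal sketch of the lottery-ticket procedure this project studies (global magnitude pruning followed by rewinding to the original initialization); it is not the project code, and the effective-complexity measure itself is not shown.

```python
import torch
import torch.nn as nn

def magnitude_prune_masks(model, sparsity=0.8):
    # Global magnitude pruning: keep the largest-magnitude weights across all layers.
    all_weights = torch.cat([p.detach().abs().flatten() for p in model.parameters()])
    threshold = torch.quantile(all_weights, sparsity)
    return [(p.detach().abs() > threshold).float() for p in model.parameters()]

def rewind_and_mask(model, init_state, masks):
    # Rewind surviving weights to their values at initialization (the "winning ticket").
    model.load_state_dict(init_state)
    with torch.no_grad():
        for p, m in zip(model.parameters(), masks):
            p.mul_(m)
```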
Weight initialization with simulated spectra
We initialize the first-layer weights of an MLP with simulated pure-element spectra and their mixtures to guide the model towards physics-relevant solutions.
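A minimal sketch of this initialization under assumed shapes (one simulated spectrum per hidden unit); it is not the project code, and the normalization choice is only illustrative.

```python
import numpy as np
import torch
import torch.nn as nn

def init_first_layer_from_spectra(linear: nn.Linear, simulated_spectra):
    """Copy simulated spectra (shape: n_hidden x n_channels) into the first-layer weights."""
    templates = np.asarray(simulated_spectra, dtype=np.float32)
    # Unit-norm each template so all hidden units start at a comparable scale.
    templates /= np.linalg.norm(templates, axis=1, keepdims=True) + 1e-12
    with torch.no_grad():
        linear.weight.copy_(torch.from_numpy(templates))
```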
Selected past projects