Meta-learning three-factor learning rules for reward-driven training of RNNs
Reinforcement learning plasticity rules for reward-driven training of RNNs
started as a postdoc independently (May 2025) 
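As a rough sketch of what a three-factor rule means in this context (an illustrative example with made-up constants, not the rule meta-learned in the project): pre- and postsynaptic activity build a synaptic eligibility trace, and a delayed scalar reward gates the actual weight change; meta-learning would then tune the parameters of such a rule across tasks.

```python
import numpy as np

# Illustrative three-factor update in a rate RNN; all names and constants
# are placeholders, not the project's actual learning rule.
rng = np.random.default_rng(0)
N = 50
W = rng.normal(0, 1 / np.sqrt(N), (N, N))   # recurrent weights
r = np.zeros(N)                              # firing rates
e = np.zeros((N, N))                         # per-synapse eligibility trace
eta, tau_e, dt = 1e-3, 0.5, 0.01

for _ in range(200):
    r = np.tanh(W @ r + rng.normal(0, 0.1, N))
    # factors 1 & 2: pre/post coactivity accumulates into the eligibility trace
    e += dt / tau_e * (np.outer(r, r) - e)

reward = 1.0                                  # factor 3: delayed scalar reward
W += eta * reward * e                         # reward gates the plastic change
```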
started at the Methods for Computational Neuroscience 2024 Summer School, with collaborators who will be disclosed in the future
with myself
started as a postdoc independently (April 2025) 
We identify possible plasticity mechanisms that explain the adaptation of responses to repeated stimulus presentations and the emergence of prediction error responses in a recurrent circuit model with multiple interneuron sub-types
started during my postdoc at TUM
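Purely to illustrate the kind of mechanism such a circuit model can express (a toy example of my own, not the mechanism identified in the project): if repeated presentations potentiate an inhibitory weight onto the excitatory population, the excitatory response adapts across presentations.

```python
# Toy rate circuit: repeated stimuli potentiate inhibition onto E cells,
# so the peak E response shrinks across presentations (response adaptation).
# Circuit, rule, and constants are illustrative only.
dt, tau, eta = 1e-3, 0.02, 0.05
w_ie = 0.5                                # plastic inhibitory weight onto E

def present(w_inh, amp=1.0, steps=400):
    rE = rI = peak = 0.0
    for _ in range(steps):
        rE += dt / tau * (-rE + max(amp - w_inh * rI, 0.0))
        rI += dt / tau * (-rI + max(amp + 0.5 * rE, 0.0))
        peak = max(peak, rE)
    return peak, rI

for rep in range(5):
    peak, rI = present(w_ie)
    w_ie += eta * rI * peak               # Hebbian potentiation of inhibition
    print(f"presentation {rep + 1}: peak E response = {peak:.3f}")
```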
Biologically-plausible training of low-rank RNNs
with Pablo Crespo, Dimitra Maoutsa, Matt Getz
started during my postdoc at TUM
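For context, "low-rank" here refers to recurrent connectivity constrained to an outer product of a few vectors, so the network dynamics are governed by a handful of latent variables; a minimal rank-one construction (architecture only, not the training rule being developed) might look like:

```python
import numpy as np

# Rank-1 RNN: J = m n^T / N, so the recurrent dynamics are summarized by the
# latent variable kappa = n . phi(x) / N. Parameters are illustrative.
rng = np.random.default_rng(1)
N, dt, tau = 500, 1e-3, 0.02
m, n = rng.normal(size=N), rng.normal(size=N)
J = np.outer(m, n) / N                      # rank-1 recurrent weights

x = rng.normal(size=N)
for _ in range(2000):
    x += dt / tau * (-x + J @ np.tanh(x))   # leaky rate dynamics
kappa = n @ np.tanh(x) / N                  # latent summarizing the network state
print(f"latent kappa = {kappa:.3f}")
```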
How different forms of spike-timing-dependent plasticity affect the functional properties of spiking SSNs
with Raul Adell Segarra, Dylan Festa, Dimitra Maoutsa
started during my postdoc at TUM
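For reference, the standard pair-based form of spike-timing-dependent plasticity (one of the forms such a comparison might include; constants are illustrative):

```python
import numpy as np

# Pair-based STDP window: potentiation when the presynaptic spike precedes
# the postsynaptic spike, depression otherwise.
def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau_plus=0.02, tau_minus=0.02):
    """delta_t = t_post - t_pre in seconds; returns the weight change."""
    delta_t = np.asarray(delta_t, dtype=float)
    return np.where(delta_t >= 0,
                    a_plus * np.exp(-delta_t / tau_plus),
                    -a_minus * np.exp(delta_t / tau_minus))

print(stdp_dw([0.005, -0.005]))   # pre-before-post potentiates; reverse depresses
```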
Inference of latent stochastic dynamics through stochastic control
with myself
started as an independent project during my PhD 
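One common way to phrase inference through control (a generic statement of the idea, not necessarily the exact formulation used here): conditioning a latent diffusion on observations is equivalent to simulating the same diffusion with an extra control drift given by the gradient of a log-likelihood of future observations.

```latex
% Generic control view of latent SDE inference (background sketch only).
\begin{aligned}
  \mathrm{d}X_t &= f(X_t)\,\mathrm{d}t + \sigma\,\mathrm{d}W_t
  && \text{latent dynamics}\\
  \mathrm{d}X_t &= \bigl(f(X_t) + u(X_t,t)\bigr)\,\mathrm{d}t + \sigma\,\mathrm{d}W_t
  && \text{observation-conditioned (controlled) dynamics}\\
  u(x,t) &= \sigma\sigma^{\!\top}\,\nabla_x \log h(x,t)
  && h(x,t):\ \text{likelihood of future observations given } X_t = x
\end{aligned}
```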
Introducing geometric inductive biases for learning stochastic systems
extended version of the last project of my PhD thesis 
Inference of neuronal interactions from spike train recordings via a geometric approximation of the inter-spike interval generating function of each recorded neuron
with Jose Casadiego*, Dimitra Maoutsa*, Marc Timme
before PhD 
Introduced deterministic particle dynamics for sampling probability flows of stochastic systems (now known as probability flow ODEs)
with Dimitra Maoutsa, Sebastian Reich, Manfred Opper
PhD project
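For readers unfamiliar with the term: for a diffusion dX_t = f(X_t, t) dt + σ dW_t, the time-marginals p_t that solve the Fokker–Planck equation are also transported by a deterministic ODE whose drift corrects f with the score ∇ log p_t (written below for constant scalar σ; in a particle scheme the score has to be estimated from the particles themselves).

```latex
% Probability flow of a diffusion dX_t = f(X_t,t) dt + sigma dW_t
% (constant scalar sigma for simplicity).
\begin{aligned}
  \partial_t p_t &= -\nabla\!\cdot\!\bigl(f\,p_t\bigr)
                    + \tfrac{\sigma^{2}}{2}\,\Delta p_t
  && \text{(Fokker--Planck equation)}\\[2pt]
  \dot{x}_t &= f(x_t,t) - \tfrac{\sigma^{2}}{2}\,\nabla_x \log p_t(x_t)
  && \text{(deterministic ODE with the same marginals } p_t\text{)}
\end{aligned}
```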
Proposed a non-iterative formulation of Stochastic Path Integral Control
with Dimitra Maoutsa, Manfred Opper
PhD project 
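For background, the standard linearly-solvable setting that path-integral control builds on (the project's non-iterative formulation itself is not reproduced here): with control-affine dynamics, quadratic control cost, and a noise level tied to the control cost, the optimal control follows from a log-transformed expectation over uncontrolled paths.

```latex
% Standard stochastic path-integral control setup (background only).
\begin{aligned}
  \mathrm{d}x_t &= \bigl(f(x_t,t) + u(x_t,t)\bigr)\,\mathrm{d}t + \sigma\,\mathrm{d}W_t,
  \qquad \sigma\sigma^{\!\top} = \lambda I,\\
  J(u) &= \mathbb{E}\Bigl[\phi(x_T)
        + \int_t^T \bigl(V(x_s,s) + \tfrac{1}{2}\lVert u(x_s,s)\rVert^{2}\bigr)\,\mathrm{d}s\Bigr],\\
  u^{*}(x,t) &= \sigma\sigma^{\!\top}\,\nabla_x \log \psi(x,t),
  \qquad
  \psi(x,t) = \mathbb{E}_{u=0}\Bigl[\exp\Bigl(-\tfrac{1}{\lambda}\Bigl(\phi(x_T)
        + \int_t^T V(x_s,s)\,\mathrm{d}s\Bigr)\Bigr)\Bigm|\,x_t = x\Bigr].
\end{aligned}
```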