steve abreu

Bernoulliborg 407

9747 AG Groningen

The Netherlands

s.abreu@rug.nl

I am a PhD student in the AI department of the University of Groningen, working in the MINDS group under the supervision of Prof. Herbert Jaeger and co-supervision of Prof. Elisabetta Chicca. I am part of the CogniGron research center for cognitive computing and am funded by the European project Post-Digital.

My research has a dual focus: advancing machine learning, and exploring novel computing paradigms and hardware for AI, especially those inspired by the brain. I enjoy cross-disciplinary research spanning AI, computer science, neuroscience, physics, and cognitive science.

Within machine learning, I work on efficient ML (enabling AI on low-power hardware), and I am interested in continual learning (e.g. using brain-inspired neuromodulation or plasticity rules), meta-learning, and neurosymbolic programming. To make progress towards well-aligned and interpretable AI, I also work on mechanistic interpretability and reverse-engineering of ML models.

For my PhD, I work on physical and brain-inspired computing, aiming to develop theories that align computation with physics so that we can make better use of neuromorphic chips, photonic devices, and other physical computing systems. Concretely, this means building computational abstractions for physical substrates (e.g. through the Neuromorphic Intermediate Representation, NIR), designing hardware-compatible efficient learning algorithms, and developing principled ways of programming novel AI hardware, confronting challenges such as device mismatch and limited observability.

I am also a hobby photographer, an enthusiastic coffee brewer, and an avid motorcyclist, and I like to read, travel, and go to music and art events.

student supervision

I am available to supervise bachelor's and master's projects at RUG. Please send me an email if you are interested in working on a project related to one of the following topics:

  • efficient ML: minimizing energy and/or latency in ML models
  • large recurrent neural networks, e.g. using state-space models (S4, LRU)
  • interpretability of (multimodal) LLMs, e.g. using conceptors
  • continual lifelong learning on neuromorphic hardware, e.g. with plasticity rules
  • automated machine learning or meta-learning (learning how to learn) in neural networks
  • investigating modularity in neural networks, e.g. in evolved RNNs
  • brain-inspired local learning rules for efficient, hardware-compatible learning, e.g. equilibrium propagation, 3-factor learning, differentiable plasticity, or evolving plasticity rules (see the sketch after this list)
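
To give a concrete feel for the last topic, here is a minimal sketch (plain Python/NumPy) of a reward-modulated Hebbian update, one simple instance of a 3-factor local learning rule. All names, sizes, and the placeholder reward signal are illustrative assumptions, not code from any specific project or library.

    import numpy as np

    # Minimal 3-factor learning sketch: a local Hebbian eligibility trace
    # (factors 1 and 2: pre- and postsynaptic activity) is gated by a global
    # reward/neuromodulatory signal (factor 3). Purely illustrative.
    rng = np.random.default_rng(0)

    n_pre, n_post = 10, 5
    W = 0.1 * rng.standard_normal((n_post, n_pre))  # synaptic weights
    eligibility = np.zeros_like(W)                  # eligibility trace
    eta, tau = 0.01, 0.9                            # learning rate, trace decay

    for step in range(100):
        pre = rng.random(n_pre)                     # presynaptic activity (placeholder input)
        post = np.tanh(W @ pre)                     # postsynaptic activity
        # local Hebbian co-activity accumulates into a decaying eligibility trace
        eligibility = tau * eligibility + np.outer(post, pre)
        # the third factor (here a random placeholder reward) gates the weight update
        reward = rng.random() - 0.5
        W += eta * reward * eligibility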

Take a look at our group’s website for more information on the kind of research we do, and at this page for details on doing your thesis project in our group.

news

Dec 2023 Starting my research internship at Google in Waterloo (Canada), working on efficient and adaptive AR user interfaces with multimodal LLMs.
Oct 2023 Attending NNPC in Hannover. Presenting a poster on neuromorphic programming and abstractions.
Jun 2023 Spent three weeks in Telluride to work on neuromorphic abstractions, representation learning in RL, and other exciting stuff!
Jun 2023 Presenting our neuromorphic cytometry paper at CVPR (event-based vision workshop) in Vancouver.

selected publications

  1. Flow Cytometry With Event-Based Vision and Spiking Neuromorphic Hardware
    Steven Abreu, Muhammed Gouda, Alessio Lugnan, and Peter Bienstman
    In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops 2023
  2. Hands-on reservoir computing: a tutorial for practical implementation
    Matteo Cucchi, Steven Abreu, Giuseppe Ciccone, Daniel Brunner, and Hans Kleemann
    Neuromorphic Computing and Engineering 2022
  3. Automated architecture design for deep neural networks
    Steven Abreu
    arXiv preprint, Aug 2019