Friday, June 23, 2023, 2:00 p.m.

Gradient-Free Optimization of Artificial and Biological Networks using Learning to Learn

  • Dipl. Inf. Alper Yegenoglu - SDL Neuroscience, Jülich Supercomputing Centre, Forschungszentrum Jülich, Institute of Geometry and Applied Mathematics & Informatik i3, RWTH Aachen
  • Location: Kopernikusstrasse 6, 52074 Aachen, Seminar Room 003



The quest to understand intelligence, learning, decision-making, and memory in humans has been a longstanding endeavor in neuroscience. Our brain comprises networks of neurons and other cells, but it remains unclear how these networks are trained to solve specific tasks. In machine learning and artificial intelligence, neural networks are commonly optimized using gradient descent and backpropagation. However, applying this optimization strategy to biological spiking neural networks (SNNs) is challenging because neurons communicate via discrete, binary spikes, which are not differentiable.
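The core obstacle can be illustrated directly: a spiking neuron's output is a threshold (step) function of its membrane potential, whose derivative is zero almost everywhere and undefined at the threshold, so backpropagation receives no useful gradient. A minimal sketch, assuming a simple Heaviside spike nonlinearity (illustrative, not code from the talk):

```python
import numpy as np

# Heaviside step function: an idealized spiking nonlinearity.
def spike(v, threshold=1.0):
    return (v >= threshold).astype(float)

# A central-difference estimate of the derivative is zero everywhere
# except at the threshold, where the function jumps: backpropagation
# through such a nonlinearity carries no usable gradient signal.
v = np.linspace(0.0, 2.0, 5)
eps = 1e-6
numerical_grad = (spike(v + eps) - spike(v - eps)) / (2 * eps)
print(numerical_grad)  # zero almost everywhere
```

Gradient-free methods such as the ones presented in the talk sidestep this problem entirely, since they never differentiate through the spike function.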

In my work, I propose gradient-free optimization techniques that can be applied directly to both artificial and biological neural networks. I utilize metaheuristics such as genetic algorithms and the ensemble Kalman filter (EnKF) to optimize network parameters and train networks to solve specific tasks. This optimization is embedded in the concept of learning to learn (L2L), a two-loop optimization procedure: in the inner loop, the algorithm or network is trained on a family of tasks, while in the outer loop the hyperparameters and parameters are optimized. Initially, I apply the EnKF to a convolutional neural network, achieving high accuracy in digit classification. I then extend this optimization approach to a spiking reservoir network within the Python-based L2L framework. Analyzing the connection weights and the EnKF's covariance matrix offers insights into the optimization process. I further adapt the optimization by integrating the EnKF into the inner loop and updating its hyperparameters with a genetic algorithm in the outer loop. This automated approach suggests alternative configurations, in contrast to manual parameter tuning. Additionally, I present a simulation in which SNNs guide an ant colony in food foraging, resulting in emergent self-coordination and self-organization, and I employ correlation analysis methods to understand the ants' behavior.
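The EnKF parameter update at the heart of the inner loop treats the network weights as an ensemble of candidate parameter vectors and nudges each member toward outputs matching the target, using only forward evaluations. The following is a minimal sketch on a toy linear model; `enkf_update`, the regularization `gamma`, and the toy task are illustrative assumptions, not the author's L2L implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(theta, f, y, gamma=0.01):
    """One ensemble Kalman filter step: move each parameter vector in the
    ensemble `theta` (shape: ensemble_size x n_params) toward producing
    the target output `y`. `f` maps a parameter vector to a model output;
    no gradients of `f` are required."""
    preds = np.array([f(t) for t in theta])        # forward-pass ensemble
    theta_c = theta - theta.mean(axis=0)           # centered parameters
    preds_c = preds - preds.mean(axis=0)           # centered predictions
    n = len(theta)
    C_ty = theta_c.T @ preds_c / (n - 1)           # param-output covariance
    C_yy = preds_c.T @ preds_c / (n - 1)           # output covariance
    K = C_ty @ np.linalg.inv(C_yy + gamma * np.eye(C_yy.shape[0]))
    return theta + (y - preds) @ K.T               # Kalman-style correction

# Toy "inner loop": fit the weights of a linear model to a target output.
w_true = np.array([1.0, -2.0])
x = rng.normal(size=2)
f = lambda w: np.array([w @ x])
y = np.array([w_true @ x])

ensemble = rng.normal(size=(20, 2))                # initial parameter ensemble
for _ in range(50):
    ensemble = enkf_update(ensemble, f, y)
print(np.abs(f(ensemble.mean(axis=0)) - y))        # residual shrinks toward zero
```

In the L2L setting described above, an outer-loop optimizer (here, a genetic algorithm) would tune quantities such as `gamma` or the ensemble size across tasks, rather than a human picking them by hand.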

Through my work, I demonstrate the application of gradient-free optimization techniques within the learning-to-learn framework and highlight their effectiveness in optimizing both biological and artificial neural networks.

The computer science lecturers invite all who are interested to attend.