
Computer Science Colloquia

342nd Computer Science Colloquium

Public defense as part of the doctoral procedure

Mr. Helge Ülo Dinkelbach, M.Sc.

TU Chemnitz
Faculty of Computer Science
Chair of Artificial Intelligence

"Efficient Simulation of Biologically Realistic Neural Networks on Different Parallel Hardware Using Code Generation"

Wednesday, October 2, 2024, 2:00 p.m., Straße der Nationen 62, Böttcher-Bau, room A10.367 (formerly 1/367)

All interested persons are cordially invited!

Poster (PDF)


Abstract

Computational neuroscience is a rapidly developing field that explores the principles of information encoding and decoding in neural systems and seeks to understand the brain at a functional level. Ongoing research in this field leads to models of increasing size and complexity. Modern multi-core CPUs and GPUs offer ever-growing computational power on shared-memory systems. Neural simulators should make this parallel computational power available for the simulation of biologically inspired networks.

Developing such neural simulators demands a good understanding of both models of biologically inspired networks and current hardware architectures. Rate-coded and spiking models place different requirements on an efficient implementation. Furthermore, the models used in computational neuroscience are diverse in terms of the equations they employ. Here one quickly encounters the tension between specialization and generalization in neural simulation frameworks. Code generation approaches, already used in neural simulators such as Brian, GeNN, or ANNarchy, appear to be a suitable solution to this dilemma.
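The code-generation idea can be illustrated with a minimal, hypothetical sketch (this is not the actual API of Brian, GeNN, or ANNarchy): a neuron's update equation, given as a string, is turned into compilable C-like source code for a population update loop.

```python
# Minimal sketch of equation-driven code generation (hypothetical;
# names and template are illustrative, not a real simulator's internals).

TEMPLATE = """\
// Auto-generated update kernel for population '{name}'
void update_{name}(double *v, const double *inp, int n, double dt) {{
    for (int i = 0; i < n; i++) {{
        v[i] += dt * ({rhs});
    }}
}}
"""

def generate_update_kernel(name, equation):
    """Turn an equation of the form 'dv/dt = <rhs>' into C source."""
    lhs, rhs = (s.strip() for s in equation.split("="))
    assert lhs == "dv/dt", "this sketch only handles dv/dt equations"
    # Map symbolic variable names onto per-neuron array accesses.
    rhs_code = rhs.replace("inp", "inp[i]").replace("v", "v[i]")
    return TEMPLATE.format(name=name, rhs=rhs_code)

print(generate_update_kernel("exc", "dv/dt = -v + inp"))
```

In a real simulator the generated source would then be compiled into a shared library and loaded back into the running session; the point here is only that the model description, not hand-written code, drives what gets compiled.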

Code generation allows the generated simulation code to be adapted to the hardware platform in use and to the structure of the network being simulated. In the thesis, we analyzed several implementations of key operations within rate-coded and spiking neural networks and the impact of different data representations on their performance. Based on this knowledge, we automatically select the code templates used for code generation in our neural simulator ANNarchy. With this approach, on shared-memory systems, we achieved a noticeable improvement for rate-coded neural models while achieving comparable performance on spiking model benchmarks.
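The template-selection step described above can be sketched as follows. All names, thresholds, and template files are illustrative assumptions, not ANNarchy's actual internals: the idea is simply that the chosen kernel variant depends on both the target platform and a structural property of the network, such as connectivity density.

```python
# Hypothetical sketch of hardware- and structure-aware template selection.
# Template names and density thresholds are invented for illustration.

def select_spmv_template(target, density):
    """Pick a sparse matrix-vector product variant for the weighted sum
    of a projection, based on the platform and connectivity density."""
    if target == "cuda":
        # Dense connectivity maps well onto coalesced dense GPU kernels;
        # sparse connectivity favors a CSR-based kernel.
        return "dense_gemv.cu" if density > 0.6 else "csr_spmv.cu"
    if target == "openmp":
        # On multi-core CPUs, CSR with per-row parallelism is a safe default;
        # very sparse matrices can benefit from an ELLPACK-like layout.
        return "csr_spmv_omp.cpp" if density > 0.05 else "ellpack_spmv_omp.cpp"
    return "csr_spmv.cpp"  # single-threaded fallback

print(select_spmv_template("cuda", 0.8))     # dense_gemv.cu
print(select_spmv_template("openmp", 0.01))  # ellpack_spmv_omp.cpp
```

The selected template would then be filled in with model-specific equations and compiled, so the same network description can yield different code on different hardware.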