12-13 Mar 2018 Paris (France)

Organized by André Grüning

The workshop aims to bring together researchers from Computational Neuroscience and Machine Learning and to stimulate exchange and collaboration between these two fields. This is important for the following reasons:


1. Computational Neuroscience has made great progress in recent years in identifying and modelling plasticity at the neural, synaptic and system levels. However, the functional and computational roles of these plasticity mechanisms at a behavioural or performance level are often less clear: Why does the brain use a specific plasticity mechanism to support a computational function? And how exactly is computation implemented on top of low-level neuroscientific plasticity processes?


2. On the other hand, Machine Learning has also made great advances, for example with the recent paradigm of deep learning: simulations and cognitive learning models that were abandoned in the nineties due to a lack of computational power can now be modelled and even implemented competitively.


3. However, the information flow between these two fields needs to improve. Exaggerating a bit, the classical stance of a computational neuroscientist is that machine learning approaches to learning and behaviour are not relevant because they are not biologically plausible; likewise, a researcher in machine learning may dismiss computational neuroscience approaches as simply not competitive or performant.


4. Finally, classical AI on conventional computing hardware is reaching the limits of silicon integration, and is still orders of magnitude more power-consuming than natural brains.

For all these reasons it is worthwhile for researchers from Computational Neuroscience and Machine Learning to come together and learn from each other:


- How low-level plasticity might support high-level behavioural and/or technical learning;
- How high-level task-driven optimisation approaches are realised within low-level biological constraints;
- What machine learning can learn from neuroscience, for example sparse and energy-efficient coding with spikes;
- What computational neuroscience can learn from machine learning, for example which computational feature representations can be expected to evolve in a learning system;
- How our insights complement each other in understanding what intelligent behaviour is and how it can be achieved in natural and artificial systems.

 

Registration is free but mandatory due to the limited number of seats.

Speakers 

Anna Bulanova (CNRS UNIC EITN)
Joni Dambre (Ghent University)
Dominik Dold (University of Bern)
Damien Drix (University of Hertfordshire)
Brian Gardner (University of Surrey) *
Olivia Gozel (EPFL)
André Grüning (University of Surrey)
Nikola Kasabov (Auckland University of Technology)
Bipin Rajendran (New Jersey Institute of Technology)
Yulia Sandamirskaya (University of Zürich and ETH Zürich)
Leslie Smith (University of Stirling)
Eleni Vasilaki (University of Sheffield)
Florian Walter (Technical University of Munich)

*via videoconference

 

 
