• Media type: E-Article
  • Title: Beyond spiking networks: The computational advantages of dendritic amplification and input segregation
  • Contributor: Capone, Cristiano; Lupo, Cosimo; Muratore, Paolo; Paolucci, Pier Stanislao
  • Imprint: Proceedings of the National Academy of Sciences, 2023
  • Published in: Proceedings of the National Academy of Sciences
  • Language: English
  • DOI: 10.1073/pnas.2220743120
  • ISSN: 0027-8424; 1091-6490
  • Description: The brain can efficiently learn a wide range of tasks, motivating the search for biologically inspired learning rules to improve current artificial intelligence technology. Most biological models are composed of point neurons and cannot achieve state-of-the-art performance in machine learning. Recent works have proposed that input segregation (neurons receive sensory information and higher-order feedback in segregated compartments) and nonlinear dendritic computation would support error backpropagation in biological neurons. However, these approaches require propagating errors with a fine spatiotemporal structure to all the neurons, which is unlikely to be feasible in a biological network. To relax this assumption, we suggest that bursts and dendritic input segregation provide a natural support for target-based learning, which propagates targets rather than errors. A coincidence mechanism between the basal and the apical compartments allows for generating high-frequency bursts of spikes. This architecture supports a burst-dependent learning rule, based on the comparison between the target bursting activity triggered by the teaching signal and the bursting activity caused by the recurrent connections, thereby providing support for target-based learning. We show that this framework can be used to efficiently solve spatiotemporal tasks, such as context-dependent storage and recall of three-dimensional trajectories, and navigation tasks. Finally, we suggest that this neuronal architecture naturally allows for orchestrating “hierarchical imitation learning”, enabling the decomposition of challenging long-horizon decision-making tasks into simpler subtasks. We show a possible implementation of this in a two-level network, where the high-level network produces the contextual signal for the low-level network.
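  • Note: The description above summarizes the burst-generation and learning mechanism only in words. Below is a minimal, self-contained sketch (not the authors' code) of a burst-dependent, target-based weight update in a rate approximation: all names, dynamics, and constants (W_rec, burst_rate, the sigmoid coincidence model, learning rate, etc.) are illustrative assumptions, not taken from the article.

      # Hypothetical sketch of burst-dependent, target-based learning with
      # segregated basal/apical compartments (rate approximation, assumed model).
      import numpy as np

      rng = np.random.default_rng(0)

      N_in, N_rec = 20, 50     # input (basal) and recurrent population sizes (assumed)
      T = 200                  # number of time steps
      eta = 0.01               # learning rate (assumed)

      W_in = rng.normal(0, 1 / np.sqrt(N_in), (N_rec, N_in))     # basal (sensory) weights, fixed
      W_rec = rng.normal(0, 1 / np.sqrt(N_rec), (N_rec, N_rec))  # recurrent weights, learned
      W_teach = rng.normal(0, 1 / np.sqrt(N_in), (N_rec, N_in))  # apical (teaching) weights, fixed

      def burst_rate(basal, apical):
          """Coincidence mechanism: bursting requires simultaneous basal drive
          (spiking) and apical drive (dendritic amplification); modeled here as
          the product of sigmoidal activations of the two segregated compartments."""
          sig = lambda x: 1.0 / (1.0 + np.exp(-x))
          return sig(basal) * sig(apical)

      x = rng.normal(0, 1, (T, N_in))   # illustrative sensory input stream
      r = np.zeros(N_rec)               # recurrent population rate

      for t in range(T):
          basal = W_in @ x[t] + W_rec @ r      # basal compartment: sensory + recurrent drive
          apical_teach = W_teach @ x[t]        # apical compartment: teaching / contextual signal
          apical_rec = W_rec @ r               # apical drive from recurrent connections alone

          b_target = burst_rate(basal, apical_teach)  # target bursting (teacher present)
          b_rec = burst_rate(basal, apical_rec)       # bursting caused by recurrent input only

          # Burst-dependent, target-based update: move the recurrently generated
          # bursting toward the teacher-induced bursting (targets, not errors,
          # are propagated; no backpropagation of a fine-grained error signal).
          W_rec += eta * np.outer(b_target - b_rec, r)

          r = b_target                         # advance the state with the teacher-clamped activity

    The sketch only illustrates the idea of comparing teacher-triggered and recurrently generated bursts; the article's spiking implementation, tasks (trajectory store/recall, navigation), and two-level hierarchical-imitation setup are not reproduced here.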