Assembly-based STDP: a new learning rule for spiking neural networks inspired by biological assemblies

Bibliographic Details
Main Authors: Vahid Saranirad, Shirin Dora, Martin McGinnity, Damien Coyle
Format: Conference proceeding
Published: 2022
Subjects:
DoB
SNN
Online Access:https://hdl.handle.net/2134/19666077.v1
Description
Summary: Spiking Neural Networks (SNNs), an alternative to sigmoidal neural networks, incorporate time into their operation using discrete signals called spikes. Employing spikes enables SNNs to mimic any feedforward sigmoidal neural network with lower power consumption. Recently, a new type of SNN has been introduced for classification problems, known as the Degree of Belonging SNN (DoB-SNN). DoB-SNN is a two-layer spiking neural network that shows significant potential as an alternative SNN architecture and learning algorithm. This paper introduces a new variant of Spike-Timing Dependent Plasticity (STDP), which is based on assemblies of neurons and extends the DoB-SNN's training algorithm to multilayer architectures. The new learning rule, known as assembly-based STDP, employs the trained DoBs in each layer to train the next layer, building strong connections between neurons from the same assembly while creating inhibitory connections between neurons from different assemblies in two consecutive layers. The performance of the multilayer DoB-SNN is evaluated on five datasets from the UCI machine learning repository. Detailed comparisons on these datasets with other supervised learning algorithms show that the multilayer DoB-SNN achieves better performance on four of the five datasets, and comparable performance on the fifth, relative to multilayer algorithms that employ considerably more trainable parameters.
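
To make the layer-by-layer idea concrete, the following is a minimal sketch of what one assembly-based weight update between two consecutive layers might look like. The abstract does not specify the exact update rule, so the function name, the learning rates a_plus and a_minus, and the vectorized formulation below are all illustrative assumptions rather than the authors' method.

```python
import numpy as np

def assembly_stdp_update(weights, pre_spikes, post_spikes,
                         pre_assembly, post_assembly,
                         a_plus=0.01, a_minus=0.01):
    """Hypothetical assembly-based update for weights between layer l
    (presynaptic) and layer l+1 (postsynaptic).

    weights       : (n_post, n_pre) float array of synaptic weights
    pre_spikes    : (n_pre,) binary vector; 1 where a presynaptic neuron spiked
    post_spikes   : (n_post,) binary vector for the postsynaptic layer
    pre_assembly  : (n_pre,) assembly label of each presynaptic neuron
    post_assembly : (n_post,) assembly label of each postsynaptic neuron
    a_plus/a_minus: assumed potentiation/depression rates (not from the paper)
    """
    # 1 where both the pre- and postsynaptic neurons spiked in this window.
    coincident = np.outer(post_spikes, pre_spikes).astype(float)

    # True where a pre/post pair carries the same assembly label.
    same_assembly = post_assembly[:, None] == pre_assembly[None, :]

    # Strengthen within-assembly connections; push cross-assembly
    # connections toward inhibition, per the abstract's description.
    weights += np.where(same_assembly, a_plus, -a_minus) * coincident
    return weights
```

The sign structure of the update mirrors the abstract: coincident activity within an assembly drives excitation, while coincident activity across assemblies drives the connection negative, so cross-assembly links become inhibitory over training. In the actual DoB-SNN, the assembly labels would come from the trained DoBs of the previous layer rather than being supplied externally as they are here.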