MULTI-SAMPLE ONLINE LEARNING FOR SPIKING NEURAL NETWORKS BASED ON GENERALIZED EXPECTATION MAXIMIZATION
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Jang, Hyeryung | - |
| dc.contributor.author | Simeone, Osvaldo | - |
| dc.date.accessioned | 2023-04-27T19:41:03Z | - |
| dc.date.available | 2023-04-27T19:41:03Z | - |
| dc.date.issued | 2021 | - |
| dc.identifier.issn | 1520-6149 | - |
| dc.identifier.issn | 2379-190X | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/5669 | - |
| dc.description.abstract | Spiking Neural Networks (SNNs) offer a novel computational paradigm that captures some of the efficiency of biological brains by processing information through binary, dynamic neural activations. Probabilistic SNN models are typically trained to maximize the likelihood of the desired outputs by using unbiased estimates of the log-likelihood gradients. While prior work used single-sample estimators obtained from a single run of the network, this paper proposes to leverage multiple compartments that sample independent spiking signals while sharing synaptic weights. The key idea is to use these signals to obtain more accurate statistical estimates of the log-likelihood training criterion, as well as of its gradient. The approach is based on generalized expectation-maximization (GEM), which optimizes a tighter approximation of the log-likelihood using importance sampling. The derived online learning algorithm implements a three-factor rule with global per-compartment learning signals. Experimental results on a classification task on the neuromorphic MNIST-DVS data set demonstrate significant improvements in terms of log-likelihood, accuracy, and calibration when increasing the number of compartments used for training and inference. | - |
| dc.format.extent | 5 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | IEEE | - |
| dc.title | MULTI-SAMPLE ONLINE LEARNING FOR SPIKING NEURAL NETWORKS BASED ON GENERALIZED EXPECTATION MAXIMIZATION | - |
| dc.type | Article | - |
| dc.publisher.location | United States | - |
| dc.identifier.doi | 10.1109/ICASSP39728.2021.9414804 | - |
| dc.identifier.scopusid | 2-s2.0-85115117018 | - |
| dc.identifier.wosid | 000704288404068 | - |
| dc.identifier.bibliographicCitation | 2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), v.2021-June, pp 4080 - 4084 | - |
| dc.citation.title | 2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021) | - |
| dc.citation.volume | 2021-June | - |
| dc.citation.startPage | 4080 | - |
| dc.citation.endPage | 4084 | - |
| dc.type.docType | Proceedings Paper | - |
| dc.description.isOpenAccess | N | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Acoustics | - |
| dc.relation.journalResearchArea | Computer Science | - |
| dc.relation.journalResearchArea | Engineering | - |
| dc.relation.journalResearchArea | Imaging Science & Photographic Technology | - |
| dc.relation.journalWebOfScienceCategory | Acoustics | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Artificial Intelligence | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Software Engineering | - |
| dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
| dc.relation.journalWebOfScienceCategory | Imaging Science & Photographic Technology | - |
| dc.subject.keywordAuthor | Spiking Neural Networks | - |
| dc.subject.keywordAuthor | Variational Learning | - |
| dc.subject.keywordAuthor | Expectation Maximization | - |
| dc.subject.keywordAuthor | Neuromorphic Computing | - |
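The abstract describes a multi-sample estimator: several compartments sharing the same synaptic weights produce independent samples, whose importance weights yield both a tighter log-likelihood bound and per-compartment learning signals for a three-factor rule. A minimal numpy sketch of that idea follows; the function names and the assumption that per-compartment log importance weights are already available are illustrative, not taken from the paper's implementation.

```python
import numpy as np

def multi_sample_log_likelihood(log_w):
    """Importance-sampling estimate of the log-likelihood from K compartments.

    log_w: length-K array of per-compartment log importance weights.
    Returns logsumexp(log_w) - log K, computed stably by subtracting the max.
    """
    K = len(log_w)
    m = np.max(log_w)
    return m + np.log(np.sum(np.exp(log_w - m))) - np.log(K)

def compartment_learning_signals(log_w):
    """Normalized importance weights (a softmax over compartments).

    These play the role of the global per-compartment learning signals
    that modulate the local synaptic updates in a three-factor rule.
    """
    m = np.max(log_w)
    w = np.exp(log_w - m)
    return w / w.sum()
```

With identical weights across compartments the estimate reduces to the single-sample value and the learning signals are uniform; compartments whose samples better explain the target output receive proportionally larger signals.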
