Detailed Information

Cited 2 times in Web of Science; cited 0 times in Scopus

Initialization by using truncated distributions in artificial neural network

Full metadata record
DC Field  Value
dc.contributor.author  Kim, MinJong
dc.contributor.author  Cho, Sungchul
dc.contributor.author  Jeong, Hyerin
dc.contributor.author  Lee, YungSeop
dc.contributor.author  Lim, Changwon
dc.date.accessioned  2023-04-28T02:40:50Z
dc.date.available  2023-04-28T02:40:50Z
dc.date.issued  2019-10
dc.identifier.issn  1225-066X
dc.identifier.issn  2383-5818
dc.identifier.uri  https://scholarworks.dongguk.edu/handle/sw.dongguk/7609
dc.description.abstract  Deep learning has gained popularity for classification and prediction tasks. Neural network layers become deeper as more data becomes available. Saturation is the phenomenon in which the gradient of an activation function approaches 0; it can occur when weight values are too large. The saturation problem, which limits the ability of weights to learn, has received increasing attention. To resolve this problem, Glorot and Bengio (Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249-256, 2010) claimed that efficient neural network training is possible when data flows with sufficient variation between layers, and they proposed an initialization method that sets the variance of each layer's output equal to the variance of its input. In this paper, we propose a new initialization method based on the truncated normal distribution and the truncated Cauchy distribution. We decide where to truncate the distribution while adapting the initialization method of Glorot and Bengio (2010): the output and input variances are made equal by setting them equal to the variance of the truncated distribution. This manipulates the distribution so that the initial weight values neither grow too large nor become too close to zero. To compare the performance of the proposed method with existing methods, we conducted experiments on the MNIST and CIFAR-10 data sets using a DNN and a CNN. The proposed method outperformed existing methods in terms of accuracy.
dc.format.extent  10
dc.language  Korean
dc.language.iso  KOR
dc.publisher  KOREAN STATISTICAL SOC
dc.title  Initialization by using truncated distributions in artificial neural network
dc.type  Article
dc.publisher.location  Republic of Korea
dc.identifier.doi  10.5351/KJAS.2019.32.5.693
dc.identifier.bibliographicCitation  KOREAN JOURNAL OF APPLIED STATISTICS, v.32, no.5, pp. 693-702
dc.citation.title  KOREAN JOURNAL OF APPLIED STATISTICS
dc.citation.volume  32
dc.citation.number  5
dc.citation.startPage  693
dc.citation.endPage  702
dc.type.docType  Article
dc.identifier.kciid  ART002520837
dc.description.isOpenAccess  N
dc.description.journalRegisteredClass  esci
dc.description.journalRegisteredClass  kci
dc.relation.journalResearchArea  Mathematics
dc.relation.journalWebOfScienceCategory  Statistics & Probability
dc.subject.keywordAuthor  initialization
dc.subject.keywordAuthor  saturation
dc.subject.keywordAuthor  Xavier initialization
dc.subject.keywordAuthor  truncated distribution
dc.subject.keywordAuthor  deep learning
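The idea described in the abstract can be illustrated with a minimal sketch: draw weights from a normal distribution with the Xavier variance 2 / (fan_in + fan_out) and truncate it by resampling values outside a cutoff. This is an assumption-laden simplification, not the authors' exact method — the paper chooses the truncation point and corrects the variance to match the truncated distribution, and also considers a truncated Cauchy; the function name, cutoff parameter, and rejection-sampling approach here are illustrative only.

```python
import numpy as np

def truncated_xavier_init(fan_in, fan_out, trunc=2.0, rng=None):
    """Sketch of truncated-normal Xavier-style initialization.

    Draws weights from N(0, 2 / (fan_in + fan_out)) and resamples any
    value outside [-trunc * std, trunc * std], so no initial weight is
    large enough to push the activation into its saturated region.
    (Hypothetical helper; the paper additionally rescales the variance
    to equal that of the truncated distribution.)
    """
    rng = np.random.default_rng() if rng is None else rng
    std = np.sqrt(2.0 / (fan_in + fan_out))  # Xavier/Glorot variance
    w = rng.normal(0.0, std, size=(fan_in, fan_out))
    # Rejection sampling: redraw entries beyond the truncation bound.
    mask = np.abs(w) > trunc * std
    while mask.any():
        w[mask] = rng.normal(0.0, std, size=int(mask.sum()))
        mask = np.abs(w) > trunc * std
    return w

# Example: initialize a 256 -> 128 fully connected layer.
w = truncated_xavier_init(256, 128, trunc=2.0,
                          rng=np.random.default_rng(0))
```

Rejection sampling keeps the sketch dependency-free; `scipy.stats.truncnorm` would give the same distribution in one call.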
Files in This Item
There are no files associated with this item.
Appears in Collections
College of Natural Science > Department of Statistics > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher: Lee, Yung Seop, College of Natural Science (Department of Statistics)
