Initialization by using truncated distributions in artificial neural network
- Authors
- Kim, MinJong; Cho, Sungchul; Jeong, Hyerin; Lee, YungSeop; Lim, Changwon
- Issue Date
- Oct-2019
- Publisher
- KOREAN STATISTICAL SOC
- Keywords
- initialization; saturation; Xavier initialization; truncated distribution; deep learning
- Citation
- KOREAN JOURNAL OF APPLIED STATISTICS, v.32, no.5, pp. 693-702
- Pages
- 10
- Indexed
- ESCI
KCI
- Journal Title
- KOREAN JOURNAL OF APPLIED STATISTICS
- Volume
- 32
- Number
- 5
- Start Page
- 693
- End Page
- 702
- URI
- https://scholarworks.dongguk.edu/handle/sw.dongguk/7609
- DOI
- 10.5351/KJAS.2019.32.5.693
- ISSN
- 1225-066X
2383-5818
- Abstract
- Deep learning has gained popularity for classification and prediction tasks, and neural networks grow deeper as more data become available. Saturation is the phenomenon in which the gradient of an activation function approaches 0; it can occur when weight values are too large, and it has drawn increasing attention because it limits the ability of the weights to learn. To address this problem, Glorot and Bengio (Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249-256, 2010) argued that a neural network trains efficiently when the signal flows well between layers, and proposed an initialization method that makes the variance of each layer's output equal to the variance of its input. In this paper, we propose a new initialization method based on the truncated normal and truncated Cauchy distributions. We decide where to truncate each distribution while adapting the initialization method of Glorot and Bengio (2010): the input and output variances are made equal by setting them equal to the variance of the truncated distribution. Truncation shapes the distribution so that the initial weights are neither too large nor too close to zero. To compare the performance of the proposed method with existing methods, we conducted experiments on the MNIST and CIFAR-10 data sets using a DNN and a CNN. The proposed method outperformed the existing methods in terms of accuracy.
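- To make the variance-matching idea concrete, here is a minimal Python sketch (not the authors' implementation) of Xavier-style initialization from a truncated normal distribution. The +/- 2 sigma cutoff, the function name, and the layer sizes are illustrative assumptions; the paper treats the choice of truncation point as part of the proposed method. The rescaling step sets the variance of the truncated samples to the Xavier target 2 / (fan_in + fan_out); a truncated Cauchy variant would follow the same rescaling pattern, since truncation gives the Cauchy distribution a finite variance.

```python
# A minimal sketch, not the authors' implementation: Xavier-style
# initialization from a truncated normal distribution. The +/- 2 sigma
# cutoff is an assumption for illustration; the paper chooses the
# truncation point as part of the proposed method.
import numpy as np
from scipy.stats import truncnorm

def truncated_normal_xavier(fan_in, fan_out, cutoff=2.0, seed=None):
    """Weights from a truncated normal rescaled to the Xavier variance."""
    target_var = 2.0 / (fan_in + fan_out)      # Glorot & Bengio (2010) target
    dist = truncnorm(-cutoff, cutoff)          # standard normal cut at +/- cutoff
    scale = np.sqrt(target_var) / dist.std()   # match the *truncated* variance
    w = dist.rvs(size=(fan_in, fan_out), random_state=seed)
    return w * scale

W = truncated_normal_xavier(784, 256, seed=0)
print(W.var())  # close to 2 / (784 + 256), i.e., about 0.00192
```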
- Files in This Item
- There are no files associated with this item.
- Appears in Collections
- College of Natural Science > Department of Statistics > 1. Journal Articles
