Optimizing Filter Size in Convolutional Neural Networks for Facial Action Unit Recognition

Author(s):  
Shizhong Han ◽  
Zibo Meng ◽  
Zhiyuan Li ◽  
James O'Reilly ◽  
Jie Cai ◽  
...  

Author(s):  
Habibullah Akbar ◽  
Sintia Dewi ◽  
Yuli Azmi Rozali ◽  
Lita Patricia Lunanta ◽  
Nizirwan Anwar ◽  
...  

2020 ◽  
Vol 10 (9) ◽  
pp. 3286 ◽  
Author(s):  
Fernando Merchan ◽  
Ariel Guerra ◽  
Héctor Poveda ◽  
Héctor M. Guzmán ◽  
Javier E. Sanchez-Galan

We evaluated the potential of using convolutional neural networks to classify spectrograms of Antillean manatee (Trichechus manatus manatus) vocalizations. Spectrograms using binary, linear and logarithmic amplitude formats were considered. Two deep convolutional neural network (DCNN) architectures were tested: linear (fixed filter size) and pyramidal (incremental filter size). Six experiments were devised to test the accuracy obtained for each combination of spectrogram representation and architecture. Results show that binary spectrograms with both linear and pyramidal architectures with dropout provide a classification rate of 94–99% on the training set and 92–98% on the testing set, respectively. The pyramidal network requires shorter training and inference times. Results from the convolutional neural networks (CNNs) are substantially better than those of a signal processing fast Fourier transform (FFT)-based harmonic search approach in terms of accuracy and F1 score. Taken together, these results demonstrate the validity of using spectrograms and DCNNs for manatee vocalization classification. These results can be used to improve future software and hardware implementations for estimating the manatee population in Panama.
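
A minimal PyTorch sketch of the two architecture styles named in this abstract is given below. The depth, channel widths, 128×128 spectrogram input size, and two-class output are illustrative assumptions, not the authors' exact configuration; only the linear vs. pyramidal filter-size contrast and the use of dropout come from the text.

```python
# Hedged sketch of the "linear" (fixed filter size) vs. "pyramidal"
# (incremental filter size) DCNNs described above. Depth, channel widths,
# the 128x128 spectrogram size, and the 2-class output are assumptions.
import torch
import torch.nn as nn

def make_dcnn(kernel_sizes, n_classes=2):
    """Stack Conv-ReLU-MaxPool blocks; kernel_sizes sets one filter size per block."""
    layers, in_ch = [], 1
    for out_ch, k in zip((16, 32, 64), kernel_sizes):
        layers += [nn.Conv2d(in_ch, out_ch, kernel_size=k, padding=k // 2),
                   nn.ReLU(),
                   nn.MaxPool2d(2)]
        in_ch = out_ch
    layers += [nn.Flatten(),
               nn.Dropout(0.5),            # dropout, as in the reported experiments
               nn.LazyLinear(n_classes)]
    return nn.Sequential(*layers)

linear_net = make_dcnn((3, 3, 3))       # linear: same filter size in every layer
pyramidal_net = make_dcnn((3, 5, 7))    # pyramidal: filter size grows with depth

x = torch.randn(8, 1, 128, 128)         # a batch of single-channel spectrograms
print(linear_net(x).shape, pyramidal_net(x).shape)  # torch.Size([8, 2]) twice
```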


IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 77816-77824 ◽  
Author(s):  
Trinh Thi Doan Pham ◽  
Chee Sun Won

2018 ◽  
Author(s):  
Peter K. Koo ◽  
Sean R. Eddy

Although convolutional neural networks (CNNs) have been applied to a variety of computational genomics problems, there remains a large gap in our understanding of how they build representations of regulatory genomic sequences. Here we perform systematic experiments on synthetic sequences to reveal how CNN architecture, specifically convolutional filter size and max-pooling, influences the extent to which sequence motif representations are learned by first-layer filters. We find that CNNs designed to foster hierarchical representation learning of sequence motifs, assembling partial features into whole features in deeper layers, tend to learn distributed representations, i.e. partial motifs. On the other hand, CNNs designed to limit the ability to hierarchically build sequence motif representations in deeper layers tend to learn more interpretable localist representations, i.e. whole motifs. We then validate that this representation learning principle, established from synthetic sequences, generalizes to in vivo sequences.
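
The design contrast this abstract draws, between architectures that foster and architectures that limit hierarchical motif assembly, can be sketched as below for one-hot DNA input. The filter counts, filter lengths, pool sizes, and single output are illustrative assumptions, not the paper's exact models.

```python
# Hedged PyTorch sketch of the two design regimes discussed above for
# one-hot DNA input (4 channels). Filter counts, filter lengths, and
# pool sizes are illustrative; the paper's exact architectures may differ.
import torch
import torch.nn as nn

def motif_cnn(first_pool, n_classes=1):
    return nn.Sequential(
        nn.Conv1d(4, 32, kernel_size=19, padding=9),  # first-layer filters scan for motifs
        nn.ReLU(),
        nn.MaxPool1d(first_pool),   # a large pool limits how deeper layers can reassemble
                                    # partial motifs, pushing whole ("localist") motifs
                                    # into the first layer
        nn.Conv1d(32, 64, kernel_size=5, padding=2),
        nn.ReLU(),
        nn.AdaptiveMaxPool1d(1),
        nn.Flatten(),
        nn.Linear(64, n_classes),
    )

localist_net = motif_cnn(first_pool=50)     # limits hierarchical building (whole motifs)
distributed_net = motif_cnn(first_pool=2)   # fosters hierarchical building (partial motifs)

x = torch.randn(4, 4, 200)                  # batch of one-hot-like sequences, length 200
print(localist_net(x).shape, distributed_net(x).shape)  # torch.Size([4, 1]) twice
```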


2021 ◽  
Author(s):  
Gentian Gashi

Handwriting recognition is the process of automatically converting handwritten text into electronic text (letter codes) usable by a computer. The increased reliance on technology during the international pandemic caused by COVID-19 has showcased the importance of ensuring that information is stored and digitised accurately and efficiently. Interpreting handwriting remains complex for both humans and computers due to varied styles and skewed characters. In this study, we conducted a correlational analysis of the association between filter size and a convolutional neural network's (CNN's) classification accuracy. Testing was conducted on the publicly available MNIST database of handwritten digits (LeCun and Cortes, 2010). The dataset consists of a training set (N=60,000) and a testing set (N=10,000). Using ANOVA, our results indicate a strong correlation (p = .000, P ≤ 0.05) between filter size and classification accuracy. However, this significance is only present when increasing the filter size from 1×1 to 2×2; larger filter sizes were not significant, so a filter size above 2×2 cannot be recommended.
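
A sketch of the kind of filter-size sweep this study describes is shown below, using torchvision's MNIST loader. The channel width, single training epoch, optimizer settings, and the particular set of filter sizes swept are illustrative assumptions, and the ANOVA over repeated runs is omitted.

```python
# Hedged sketch of a filter-size sweep on MNIST (60,000 train / 10,000 test).
# Channel width, one epoch of training, and Adam settings are assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

def mnist_cnn(k):
    flat = 32 * ((28 - k + 1) // 2) ** 2      # map size after a 'valid' conv and 2x2 pool
    return nn.Sequential(
        nn.Conv2d(1, 32, kernel_size=k), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(), nn.Linear(flat, 10),
    )

tfm = transforms.ToTensor()
train = DataLoader(datasets.MNIST(".", train=True, download=True, transform=tfm),
                   batch_size=128, shuffle=True)
test = DataLoader(datasets.MNIST(".", train=False, download=True, transform=tfm),
                  batch_size=256)

for k in (1, 2, 3, 5):                        # candidate filter sizes to compare
    model = mnist_cnn(k)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for x, y in train:                        # one epoch per filter size, for brevity
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    model.eval()
    with torch.no_grad():
        correct = sum((model(x).argmax(1) == y).sum().item() for x, y in test)
    print(f"{k}x{k} filters: test accuracy = {correct / len(test.dataset):.4f}")
```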


2020 ◽  
Vol 10 (1) ◽  
pp. 55-60 ◽  
Author(s):  
Owais Mujtaba Khanday ◽  
Samad Dadvandipour

Over the past few years, deep neural networks (DNNs) have revolutionized computer vision, providing the best results on a large number of problems such as image classification, pattern recognition, and speech recognition. One of the essential deep learning models used for image classification is the convolutional neural network. These networks integrate varying numbers of feature detectors, or so-called filters, across a stack of convolutional layers. They use convolutional and pooling layers for feature abstraction and have neurons arranged in three dimensions: height, width, and depth. Filters of three different sizes were used: 3×3, 5×5, and 7×7. The accuracy on the training data decreased from 100% to 97.8% as the filter size increased, and the accuracy on the test set also decreased: 98.7% for 3×3, 98.5% for 5×5, and 97.8% for 7×7. Over 10 epochs, the loss on the training and test data increased sharply, from 3.4% to 27.6% and from 12.5% to 23.02%, respectively. Thus, filters with smaller dimensions yield lower loss than those with larger dimensions. However, the smaller filter size comes at the cost of computational complexity, which is crucial in the case of larger data sets.
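
To make the comparison concrete, here is a hedged sketch of a small two-block CNN built at each of the three filter sizes named above, reporting its parameter count as one proxy for the cost trade-off mentioned at the end. The channel widths and the 28×28, 10-class input setting are assumptions; only the 3×3 / 5×5 / 7×7 comparison comes from the text.

```python
# Hedged sketch comparing the three filter sizes named above on a small
# two-block CNN. Channel widths and the 28x28 input are assumptions.
import torch
import torch.nn as nn

def cnn(k, in_hw=28, n_classes=10):
    body = nn.Sequential(
        nn.Conv2d(1, 32, kernel_size=k, padding=k // 2), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(32, 64, kernel_size=k, padding=k // 2), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(),
    )
    flat = 64 * (in_hw // 4) ** 2          # two 2x2 pools quarter each spatial dimension
    return nn.Sequential(body, nn.Linear(flat, n_classes))

for k in (3, 5, 7):                        # the three filter sizes compared in the study
    model = cnn(k)
    n_params = sum(p.numel() for p in model.parameters())
    out = model(torch.randn(2, 1, 28, 28))
    print(f"{k}x{k}: {n_params:,} parameters, output shape {tuple(out.shape)}")
```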

