Acoustic traits of bat-pollinated flowers compared to flowers of other pollination syndromes and their echo-based classification using convolutional neural networks

2021 ◽  
Vol 17 (12) ◽  
pp. e1009706
Author(s):  
Ralph Simon ◽  
Karol Bakunowski ◽  
Angel Eduardo Reyes-Vasques ◽  
Marco Tschapka ◽  
Mirjam Knörnschild ◽  
...  

Bat-pollinated flowers have to attract their pollinators in the absence of light, and some species have therefore developed specialized echoic floral parts. These parts are usually concave and act like acoustic retroreflectors, making the flowers acoustically conspicuous to bats. Acoustic plant specializations have only been described for two bat-pollinated species in the Neotropics and one other bat-dependent plant in South East Asia. However, it remains unclear whether other bat-pollinated plant species also show acoustic adaptations. Moreover, acoustic traits have never been compared between bat-pollinated flowers and flowers belonging to other pollination syndromes. To investigate the acoustic traits of bat-pollinated flowers, we recorded a dataset of 32,320 flower echoes, collected from 168 individual flowers belonging to 12 different species; 6 of these species were pollinated by bats and 6 by insects or hummingbirds. We analyzed the spectral target strength of the flowers and trained a convolutional neural network (CNN) on the spectrograms of the flower echoes. We found that bat-pollinated flowers have a significantly higher echo target strength, independent of their size, and differ in their morphology, specifically in the lower variance of their morphological features. Our CNN achieved good classification accuracy (up to 84%) with only one echo/spectrogram when classifying the 12 plant species, both bat-pollinated and otherwise, with bat-pollinated flowers being easier to classify. The higher classification performance of bat-pollinated flowers can be explained by the lower variance of their morphology.
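The target strength compared in this abstract is, in essence, the echo-to-incident sound pressure ratio on a decibel scale. A minimal sketch of that quantity follows; the paper's actual calibration, reference distance, and spectral averaging are not reproduced here, and the pressure values are illustrative only:

```python
import math

def target_strength(echo_rms: float, incident_rms: float) -> float:
    """Target strength in dB: ratio of reflected to incident sound
    pressure. Simplified sketch; calibration details are assumptions."""
    return 20.0 * math.log10(echo_rms / incident_rms)

# A retroreflector-like, concave corolla returns a stronger echo
# (higher, i.e. less negative, target strength) than a flat petal.
ts_bell = target_strength(0.05, 1.0)  # hypothetical bell-shaped flower
ts_flat = target_strength(0.01, 1.0)  # hypothetical flat reflector
```

With these illustrative pressures the concave shape comes out about 14 dB stronger, which is the kind of size-independent difference the abstract reports between pollination syndromes.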

2020 ◽  
Vol 12 (11) ◽  
pp. 1780 ◽  
Author(s):  
Yao Liu ◽  
Lianru Gao ◽  
Chenchao Xiao ◽  
Ying Qu ◽  
Ke Zheng ◽  
...  

Convolutional neural networks (CNNs) have been widely applied to hyperspectral imagery (HSI) classification. However, their classification performance can be limited by the scarcity of labeled data available for training and validation. In this paper, we propose a novel lightweight shuffled group convolutional neural network (abbreviated as SG-CNN) to achieve efficient training with a limited training dataset for HSI classification. SG-CNN consists of SG conv units that employ conventional and atrous convolution in different groups, followed by a channel shuffle operation and a shortcut connection. In this way, SG-CNNs have fewer trainable parameters while still being trained accurately and efficiently with fewer labeled samples. Transfer learning between different HSI datasets is also applied to the SG-CNN to further improve classification accuracy. To evaluate the effectiveness of SG-CNNs for HSI classification, experiments were conducted on three public HSI datasets, with models pretrained on HSIs from different sensors. SG-CNNs with different levels of complexity were tested, and their classification results were compared with fine-tuned ShuffleNetV2, ResNeXt, and their original counterparts. The experimental results demonstrate that SG-CNNs can achieve competitive classification performance when labeled training data are scarce, while efficiently providing satisfactory classification results.
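The channel shuffle operation that follows the group convolutions in each SG conv unit can be pictured on a plain list of channel indices: reshape into (groups, channels_per_group), transpose, and flatten, so the next group convolution sees channels from every group. A pure-Python sketch (operating on channel indices rather than tensors, which is an illustrative simplification):

```python
def channel_shuffle(channels, groups):
    """Interleave channels across groups, as done after group
    convolutions (e.g. in ShuffleNet-style units): reshape to
    (groups, per_group), transpose, flatten."""
    n = len(channels)
    assert n % groups == 0, "channel count must divide evenly into groups"
    per_group = n // groups
    # rows = groups, cols = channels per group
    grid = [channels[g * per_group:(g + 1) * per_group] for g in range(groups)]
    # transposing and flattening interleaves the groups
    return [grid[g][c] for c in range(per_group) for g in range(groups)]

shuffled = channel_shuffle(list(range(6)), groups=2)  # → [0, 3, 1, 4, 2, 5]
```

Without this step, information would never cross group boundaries, which is exactly the limitation the shuffle removes while keeping the parameter savings of grouped convolution.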


2021 ◽  
Vol 14 ◽  
Author(s):  
Kunqiang Qing ◽  
Ruisen Huang ◽  
Keum-Shik Hong

This study decodes consumers' preference levels using a convolutional neural network (CNN) in neuromarketing. Classification accuracy is a critical factor in neuromarketing for evaluating consumers' intentions. Functional near-infrared spectroscopy (fNIRS) is utilized as the neuroimaging modality to measure cerebral hemodynamic responses. In this study, a specific decoding structure, called CNN-based fNIRS-data analysis, was designed to achieve high classification accuracy. Compared to other methods, the main advantages of the proposed method are its automated characteristics, consistent training on the dataset, and learning efficiency. In the experimental procedure, eight healthy participants (four female and four male) viewed commercial advertisement videos of different durations (15, 30, and 60 s), and their cerebral hemodynamic responses were measured. To compare preference classification performance, the CNN was used to extract the most common features, including the mean, peak, variance, kurtosis, and skewness. Across the three video durations, the average classification accuracies for 15, 30, and 60 s videos were 84.3%, 87.9%, and 86.4%, respectively; the accuracy of 87.9% for 30 s videos was the highest. The average classification accuracies across the three preferences were 86.2% for females and 86.3% for males, showing no difference between the groups. Comparing classification performance across the three pairwise combinations (like vs. so-so, like vs. dislike, and so-so vs. dislike) between the two groups, male participants were observed to have more targeted preferences for commercial advertising, and the 88.4% accuracy for "like" vs. "dislike" was the highest of the three pairings. Finally, the pairwise classification performances were as follows: for females, 86.1% (like vs. so-so), 87.4% (like vs. dislike), and 85.2% (so-so vs. dislike); for males, 85.7%, 88.4%, and 85.1%, respectively.
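The five signal features named above (mean, peak, variance, skewness, kurtosis) are standard descriptive statistics of a hemodynamic response window. A plain-Python sketch of their computation; the paper's windowing, channel selection, and any normalization are not reproduced here:

```python
def signal_features(x):
    """Mean, peak, variance, skewness, and kurtosis of one signal
    window. Population (1/n) moments; other conventions exist."""
    n = len(x)
    mean = sum(x) / n
    peak = max(x)
    var = sum((v - mean) ** 2 for v in x) / n
    std = var ** 0.5
    skew = sum(((v - mean) / std) ** 3 for v in x) / n if std else 0.0
    kurt = sum(((v - mean) / std) ** 4 for v in x) / n if std else 0.0
    return {"mean": mean, "peak": peak, "variance": var,
            "skewness": skew, "kurtosis": kurt}

feats = signal_features([0.1, 0.4, 0.9, 0.4, 0.2])  # illustrative samples
```

In the study these features serve as the comparison baseline the CNN's learned representations are measured against.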


Author(s):  
Jonathan Readshaw ◽  
Stefano Giani

Abstract This work presents a convolutional neural network for predicting next-day stock fluctuations from company-specific news headlines. Experiments evaluating model performance with various configurations of word embeddings and convolutional filter widths are reported. The total number of convolutional filters used is far fewer than is common, reducing the dimensionality of the task without loss of accuracy. Furthermore, multiple hidden layers with decreasing dimensionality are employed. A classification accuracy of 61.7% is achieved using pre-learned embeddings that are fine-tuned during training to represent the specific context of this task. Multiple filter widths are also implemented to detect phrases of different lengths that are key for classification. Trading simulations are conducted using the presented classification results. Initial investments are more than tripled over an 838-day testing period using the optimal classification configuration and a simple trading strategy. Two novel methods are presented to reduce the risk of the trading simulations: adjusting the sigmoid class threshold and re-labelling headlines using multiple classes. A combination of these approaches is found to more than double the Average Trade Profit achieved during baseline simulations.
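The first risk-reduction method, adjusting the sigmoid class threshold, amounts to trading only when the network's output probability is confidently away from the decision boundary. A minimal sketch, with illustrative probabilities and threshold values (the paper's exact thresholds and trading rules are not given in the abstract):

```python
def classify_with_threshold(prob_up, threshold=0.5):
    """Only act when the sigmoid output is confident: go long above
    `threshold`, short below its mirror, otherwise stay out."""
    if prob_up >= threshold:
        return "long"
    if prob_up <= 1.0 - threshold:
        return "short"
    return "no-trade"  # too uncertain: skip the trade

# Raising the threshold from 0.5 to 0.7 filters low-confidence trades.
decisions = [classify_with_threshold(p, 0.7) for p in (0.85, 0.55, 0.1)]
# → ["long", "no-trade", "short"]
```

At the default threshold of 0.5 every headline triggers a trade; raising it trades less often but on higher-confidence signals, which is the risk/return trade-off the abstract exploits.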


Author(s):  
Yongping Xing ◽  
Chuangbai Xiao ◽  
Yifei Wu ◽  
Ziming Ding

Sentiment analysis, including aspect-level sentiment classification, is an important basic natural language processing (NLP) task. Aspect-level sentiment analysis can provide complete and in-depth results. Words influence the aspect-level sentiment polarity of a sentence differently depending on their context, and polarity varies across the different aspects of a sentence. Recurrent neural networks (RNNs) are regarded as effective models for handling NLP tasks and have performed well in aspect-level sentiment classification. Extensive literature exists on sentiment classification using convolutional neural networks (CNNs); however, no literature is available on aspect-level sentiment classification using CNNs. In the present study, we develop a CNN model for aspect-level sentiment classification. In our model, attention-based input layers are incorporated into the CNN to introduce aspect information. In our experiments on a benchmark dataset from Twitter, compared with other models, incorporating aspect information into the CNN improves aspect-level sentiment classification performance without using a syntactic parser or other language features.
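The attention-based input layer described here weights each word by its relevance to the target aspect before the convolutional layers see it, typically via a softmax over per-word scores. A hedged sketch of that weighting step; the scoring function itself (e.g. aspect-word embedding similarity) and the example scores are assumptions, not the paper's implementation:

```python
import math

def attention_weights(scores):
    """Numerically stable softmax over per-word aspect-relevance
    scores: words related to the aspect get larger weights."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# "The food was great but the service was slow", aspect = "service":
# hypothetical scores where words near the aspect score highest.
w = attention_weights([0.1, 0.1, 0.1, 0.2, 0.1, 2.0, 1.5, 0.3])
```

Multiplying each word embedding by its weight before convolution lets the same CNN produce different polarities for different aspects of one sentence.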


2021 ◽  
Vol 14 (1) ◽  
pp. 9
Author(s):  
Cuiping Shi ◽  
Xinlei Zhang ◽  
Liguo Wang

With the development of remote sensing scene image classification, convolutional neural networks have become the most commonly used method in this field owing to their powerful feature extraction ability. To improve the classification performance of convolutional neural networks, many studies extract deeper features by increasing the depth and width of the network, which improves classification performance but also increases model complexity. To solve this problem, a lightweight convolutional neural network based on channel multi-group fusion (LCNN-CMGF) is presented. In the proposed LCNN-CMGF method, a three-branch downsampling structure extracts shallow features from remote sensing images. In the deep layers of the network, the channel multi-group fusion structure extracts the abstract semantic features of remote sensing scene images. This structure solves the lack of information exchange between groups caused by group convolution through channel fusion of adjacent features. The four most commonly used remote sensing scene datasets, UCM21, RSSCN7, AID, and NWPU45, were used to carry out a variety of experiments in this paper. The experimental results across the four datasets and multiple training ratios show that the proposed LCNN-CMGF method has significant performance advantages over the compared advanced methods.


2020 ◽  
Vol 36 (5) ◽  
pp. 743-749
Author(s):  
Xingwang Li ◽  
Xiaofei Fan ◽  
Lili Zhao ◽  
Sheng Huang ◽  
Yi He ◽  
...  

Highlights: This study revealed the feasibility of classifying pepper seed varieties using multispectral imaging combined with a one-dimensional convolutional neural network (1D-CNN). Convolutional neural networks were adopted to develop models for prediction of seed varieties, and their performance was compared with KNN and SVM. In this experiment, the SVM classification model performed best, but the 1D-CNN classification model is relatively easy to implement.
Abstract. When non-seed materials are mixed into seeds, or seed varieties of low value are mixed into high-value varieties, growers or businesses suffer losses. Thus, successful discrimination of seed varieties is critical for improving seed value. In recent years, convolutional neural networks (CNNs) have been used in the classification of seed varieties. The feasibility of using multispectral imaging combined with a one-dimensional convolutional neural network (1D-CNN) to classify pepper seed varieties was studied. The total number of samples across the three varieties was 1472, and the average spectral curve between 365 nm and 970 nm of each variety was studied. The data were analyzed using either the full spectral bands or the feature bands selected by the successive projections algorithm (SPA). SPA extracted 9 feature bands from the 19 available bands (430, 450, 470, 490, 515, 570, 660, 780, and 880 nm). The classification accuracies of the three full-band models developed using K-nearest neighbors (KNN), support vector machine (SVM), and 1D-CNN were 85.81%, 97.70%, and 90.50%, respectively. With full bands, SVM and 1D-CNN performed significantly better than KNN, and SVM performed slightly better than 1D-CNN. With feature bands, the testing accuracies of SVM and 1D-CNN were 97.30% and 92.6%, respectively. Although the classification accuracy of the 1D-CNN was not the highest, its ease of operation made it the most feasible method for pepper seed variety prediction.
Keywords: Multispectral imaging, One-dimensional convolutional neural network, Pepper seed, Variety classification.


Sensors ◽  
2019 ◽  
Vol 19 (19) ◽  
pp. 4065 ◽  
Author(s):  
Zhu ◽  
Zhou ◽  
Zhang ◽  
Bao ◽  
Wu ◽  
...  

Soybean variety is connected to stress resistance, as well as nutritional and commercial value. Near-infrared hyperspectral imaging was applied to classify three varieties of soybeans (Zhonghuang37, Zhonghuang41, and Zhonghuang55). Pixel-wise spectra were extracted and preprocessed, and average spectra were also obtained. Convolutional neural network (CNN) models were built using the average spectra and the pixel-wise spectra of different numbers of soybeans. The pixel-wise CNN models performed well in predicting both pixel-wise and average spectra. As the number of soybeans increased, performance improved, with the classification accuracy for each variety exceeding 90%. Traditionally, the number of samples used for modeling is large, and obtaining hyperspectral data from large batches of samples is time-consuming and labor-intensive. To explore the possibility of achieving decent identification results with few samples, a majority vote was also applied to the pixel-wise CNN models to identify a single soybean variety. Prediction maps were obtained to present the classification results intuitively. Models using the pixel-wise spectra of 60 soybeans showed performance equivalent to those using the average spectra of 810 soybeans, illustrating the possibility of discriminating soybean varieties from few samples by acquiring pixel-wise spectra.
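The majority vote over pixel-wise predictions is straightforward: classify every pixel spectrum of one soybean, then take the most frequent label. A minimal sketch (variety names and pixel counts are illustrative):

```python
from collections import Counter

def majority_vote(pixel_predictions):
    """Identify a single seed's variety from per-pixel CNN predictions:
    the most frequent label across the seed's pixels wins."""
    return Counter(pixel_predictions).most_common(1)[0][0]

# One soybean with 9 classified pixel spectra: a few misclassified
# pixels are outvoted by the majority.
variety = majority_vote(["ZH37"] * 6 + ["ZH41"] * 2 + ["ZH55"])
# → "ZH37"
```

This is why pixel-wise modeling needs so few seeds: each seed contributes many spectra, and the vote smooths out per-pixel errors.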


Plant Methods ◽  
2021 ◽  
Vol 17 (1) ◽  
Author(s):  
Xihuizi Liang

Abstract Background Cotton diseases seriously affect the yield and quality of cotton. The type of pest or disease affecting cotton can be determined from the disease spots on the cotton leaves. This paper presents a few-shot learning framework for the cotton leaf disease spot classification task, which can be used to prevent and control cotton diseases in a timely manner. First, disease spots in cotton leaf disease images are segmented by different methods; support vector machine (SVM) segmentation and threshold segmentation are compared, and the more suitable one is discussed. Then, with the segmented disease spot images as input, a disease spot dataset is established, and the cotton leaf disease spots are classified using a classical convolutional neural network classifier, whose structure and framework were designed. Finally, the features of two different images are extracted by a parallel two-way convolutional neural network with weight sharing. The network then uses a loss function to learn a metric space in which similar leaf samples are close to each other and different leaf samples are far apart. In summary, this work can be regarded as a significant reference and benchmark comparison for follow-up studies of few-shot learning tasks in the agricultural field. Results To achieve the classification of cotton leaf spots by small-sample learning, a metric-based learning method was developed to extract cotton leaf spot features and classify the diseased leaves. Threshold segmentation and SVM were compared for extracting the leaf spots. The results showed that both methods extract the leaf spots well; SVM took more time, but the leaf spots it extracted were much more suitable for classification, since the SVM method retains much more leaf spot information, such as color, shape, and texture, which helps in classifying the leaf spots.
In the leaf spot classification process, the two-way parallel convolutional neural network was established to build the leaf spot feature extractor, and the feature classifier was constructed. After establishing the metric space, KNN was used as the spot classifier. For the construction of the convolutional neural networks, commonly used models, including VGG, DenseNet, and ResNet, were selected for comparison, and a spatial structure optimizer (SSO) was introduced for local optimization of the models. Experimentally, it is demonstrated that the classification accuracy of DenseNet is the highest of the compared networks, and the classification accuracy of S-DenseNet is 7.7% higher than DenseNet on average across different numbers of steps. Conclusions As the number of steps increases, the accuracies of DenseNet and ResNet all improve, and with SSO each of these neural networks achieves better performance, though the extent of the improvement varies; DenseNet with SSO improved the most.
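The final classification step, KNN in the learned metric space, can be sketched independently of the embedding network: the two-way network maps leaf-spot images to points where same-disease samples cluster, and a query takes the majority label of its k nearest embedded support samples. The 2-D embeddings and disease labels below are illustrative stand-ins for real network outputs:

```python
def knn_classify(query, embedded_support, k=3):
    """Majority label of the k nearest support embeddings
    (Euclidean distance in the learned metric space)."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(embedded_support, key=lambda s: dist(query, s[0]))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

# Hypothetical embedded support set: (embedding, disease label) pairs.
support = [((0.1, 0.2), "wilt"), ((0.2, 0.1), "wilt"),
           ((0.9, 0.8), "blight"), ((0.8, 0.9), "blight")]
pred = knn_classify((0.15, 0.15), support, k=3)  # → "wilt"
```

Because the classifier itself is non-parametric, new disease classes can be added by embedding a handful of labeled samples, which is what makes the framework few-shot.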


2021 ◽  
Vol 2066 (1) ◽  
pp. 012091
Author(s):  
Xiaojing Fan ◽  
A Runa ◽  
Zhili Pei ◽  
Mingyang Jiang

Abstract This paper studies text classification based on deep learning. To address the overfitting and long training times of CNN text classification models, an SDCNN model is constructed based on a sparse dropout convolutional neural network. Experimental results show that, compared with the CNN, the SDCNN further improves classification performance, with classification accuracy and precision reaching 98.96% and 85.61%, respectively, indicating that the SDCNN has advantages for text classification problems.
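Dropout is the regularization mechanism at the heart of SDCNN's defense against overfitting: randomly zero a fraction of activations during training and rescale the survivors (inverted dropout). The abstract does not specify SDCNN's particular sparsity pattern, so the sketch below is plain dropout, not the paper's exact variant:

```python
import random

def dropout(activations, drop_rate=0.5, seed=None):
    """Inverted dropout: zero each activation with probability
    `drop_rate`, scale survivors by 1/(1-drop_rate) so the expected
    activation is unchanged. Apply only during training."""
    rng = random.Random(seed)
    keep = 1.0 - drop_rate
    return [a / keep if rng.random() < keep else 0.0 for a in activations]

dropped = dropout([1.0] * 10, drop_rate=0.5, seed=0)
```

Rescaling at training time means no correction is needed at inference, where the layer simply passes activations through.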


2021 ◽  
Vol 2021 (11) ◽  
Author(s):  
I.F. Kupryashkin

The results of ten-class classification of MSTAR objects using a VGG-type deep convolutional neural network with eight convolutional layers are presented. The maximum accuracy achieved by the network was 97.91%. In addition, results for the MobileNetV1, Xception, InceptionV3, ResNet50, InceptionResNetV2, and DenseNet121 networks, prepared using the transfer learning technique, are presented. It is shown that, in the problem under consideration, using these pretrained convolutional networks did not improve the classification accuracy, which ranged from 93.79% to 97.36%. It was established that even visually unobservable local features of the terrain background near each type of object can provide a classification accuracy of about 51% (rather than the 10% expected for a ten-alternative classification), even in the absence of the objects and their shadows. The procedure for preparing the training data, which eliminates the influence of the terrain background on the neural network classification result, is described.

