A Postearthquake Multiple Scene Recognition Model Based on Classical SSD Method and Transfer Learning

2020, Vol 9 (4), pp. 238
Author(s): Zhiqiang Xu, Yumin Chen, Fan Yang, Tianyou Chu, Hongyan Zhou

The recognition of postearthquake scenes plays an important role in postearthquake rescue and reconstruction. To overcome the over-reliance on expert visual interpretation and the poor recognition performance of traditional machine learning in postearthquake scene recognition, this paper proposes a postearthquake multiple scene recognition (PEMSR) model based on the classical deep learning Single Shot MultiBox Detector (SSD) method. A labeled postearthquake scene dataset is constructed by segmenting acquired remote sensing images, which are classified into six categories: landslide, houses, ruins, trees, clogged, and ponding. Because the original dataset is insufficient and imbalanced, transfer learning and a data augmentation and balancing strategy are employed in the PEMSR model. The model is evaluated with the precision, recall, and F1 score metrics. Multiple experimental results demonstrate that the PEMSR model outperforms the baselines in postearthquake scene recognition: through transfer learning and the data augmentation strategy, it improves the detection accuracy for every scene compared with the original SSD. In addition, the average detection time of the PEMSR model is only 0.4565 s, far less than the 8.3472 s of the traditional Histogram of Oriented Gradients + Support Vector Machine (HOG+SVM) method.
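
The PEMSR code is not included with this abstract, but the core transfer-learning step, initializing an SSD detector from pretrained weights and retraining its classification head for the six scene classes, can be sketched roughly as below. This is a minimal, hypothetical illustration using recent torchvision APIs, not the authors' implementation; the class list, dataset loader, and hyperparameters are placeholders.

```python
# Hypothetical sketch: fine-tune a COCO-pretrained SSD300 detector on six
# postearthquake scene classes. Not the PEMSR source code.
import torch
import torchvision
from torchvision.models.detection import _utils
from torchvision.models.detection.ssd import SSDClassificationHead

CLASSES = ["background", "landslide", "houses", "ruins", "trees", "clogged", "ponding"]

# Start from pretrained weights (transfer learning).
model = torchvision.models.detection.ssd300_vgg16(weights="COCO_V1")

# Replace the classification head so it predicts our 6 classes (+ background).
in_channels = _utils.retrieve_out_channels(model.backbone, (300, 300))
num_anchors = model.anchor_generator.num_anchors_per_location()
model.head.classification_head = SSDClassificationHead(in_channels, num_anchors, len(CLASSES))

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9, weight_decay=5e-4)

def train_one_epoch(loader):
    model.train()
    # targets: list of dicts with "boxes" (N, 4) and "labels" (N,) per image.
    for images, targets in loader:
        losses = model(images, targets)
        loss = sum(losses.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

Augmented and rebalanced scene tiles would simply be fed through `loader`; the pretrained backbone is what lets a small postearthquake dataset suffice.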

2018, Vol 7 (8), pp. 223
Author(s): Zhidong Zhao, Yang Zhang, Yanjun Deng

Continuous monitoring of the fetal heart rate (FHR) signal has been widely used to provide obstetricians with detailed physiological information about newborns. However, visual interpretation of FHR traces causes inter-observer and intra-observer variability. Therefore, this study proposes novel computerized analysis software for the FHR signal (CAS-FHR), aimed at providing medical decision support. First, to the best of our knowledge, the software extracts the most comprehensive set of features (47) across domains, including the morphological, time, frequency, and nonlinear domains. Then, for the intelligent assessment of fetal state, three representative machine learning algorithms (decision tree (DT), support vector machine (SVM), and adaptive boosting (AdaBoost)) were chosen for the classification stage. To improve performance, feature selection/dimensionality reduction methods (statistical test (ST), area under the curve (AUC), and principal component analysis (PCA)) were used to determine the informative features. Finally, the experimental results showed that AdaBoost had the strongest classification ability, and that the feature set selected with ST outperformed the original feature set, with accuracies of 92% and 89%, sensitivities of 92% and 89%, specificities of 90% and 88%, and F-measures of 95% and 92%, respectively. In summary, the results demonstrate the effectiveness of the proposed approach, a comprehensive analysis of the FHR signal, for the accurate and intelligent prediction of fetal asphyxia in clinical practice.
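
The classification stage described here (statistical-test feature selection followed by DT, SVM, and AdaBoost) maps directly onto standard scikit-learn components. The sketch below is illustrative only and not the CAS-FHR source code; the 47-dimensional feature matrix `X` and fetal-state labels `y` are assumed to be precomputed, and random placeholder data stand in for them.

```python
# Illustrative sketch of the classification stage: ANOVA F-test ("ST") feature
# selection followed by DT, SVM, and AdaBoost classifiers under cross-validation.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.ensemble import AdaBoostClassifier

# X: (n_recordings, 47) FHR feature matrix, y: labels -- placeholders here.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 47)), rng.integers(0, 2, size=200)

selector = SelectKBest(score_func=f_classif, k=20)   # statistical-test selection
classifiers = {
    "DT": DecisionTreeClassifier(max_depth=5),
    "SVM": SVC(kernel="rbf", C=1.0),
    "AdaBoost": AdaBoostClassifier(n_estimators=100),
}
for name, clf in classifiers.items():
    pipe = make_pipeline(StandardScaler(), selector, clf)
    scores = cross_val_score(pipe, X, y, cv=5, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```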


Author(s): Sakshi Ahuja, Bijaya Ketan Panigrahi, Nilanjan Dey, Venkatesan Rajinikanth, Tapan Kumar Gandhi

In the proposed research work, COVID-19 is detected using transfer learning from CT scan images decomposed to three levels with the stationary wavelet transform. A three-phase detection model is proposed to improve the detection accuracy, with the following procedure: Phase 1, data augmentation using stationary wavelets; Phase 2, COVID-19 detection using a pre-trained CNN model; and Phase 3, abnormality localization in the CT scan images. This work considers the well-known pre-trained architectures ResNet18, ResNet50, ResNet101, and SqueezeNet for the experimental evaluation. In this work, 70% of the images are used to train the network and 30% to validate it. The performance of the considered architectures is evaluated by computing the common performance measures.
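
A rough sketch of Phases 1 and 2, stationary wavelet decomposition for augmentation followed by a pretrained CNN with a two-class head, is shown below. It is not the authors' code: the wavelet choice, image size, and use of ResNet18 (one of the architectures they evaluate) are assumptions for illustration.

```python
# Hypothetical sketch: 3-level stationary wavelet decomposition of a CT slice,
# then a pretrained ResNet18 fine-tuned for two classes (COVID-19 / non-COVID).
import numpy as np
import pywt
import torch
import torch.nn as nn
from torchvision import models

def swt_augment(ct_slice, wavelet="haar", level=3):
    """Return the approximation bands of a 3-level stationary wavelet transform."""
    coeffs = pywt.swt2(ct_slice, wavelet, level=level)   # list of (cA, (cH, cV, cD))
    return [cA for cA, _ in coeffs]                      # one augmented image per level

# Pretrained backbone with a new 2-class head (transfer learning).
model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 2)

# Example: turn one 256x256 slice into SWT-augmented 3-channel inputs.
ct_slice = np.random.rand(256, 256).astype(np.float32)  # placeholder image
for band in swt_augment(ct_slice):
    x = torch.from_numpy(band).float().unsqueeze(0).repeat(3, 1, 1).unsqueeze(0)
    logits = model(x)            # shape (1, 2): COVID-19 vs. non-COVID scores
```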


2019, Vol 28 (1), pp. 3-12
Author(s): Jarosław Kurek, Joanna Aleksiejuk-Gawron, Izabella Antoniuk, Jarosław Górski, Albina Jegorowa, ...

This paper presents an improved method for recognizing the drill state from images of holes drilled in a laminated chipboard, using a convolutional neural network (CNN) and data augmentation techniques. Three classes were used to describe the drill state: red, for a drill that is worn out and should be replaced; yellow, for a state in which the system should send a warning to the operator, indicating that this element should be checked manually; and green, denoting a drill that is still in good condition and can continue to be used in the production process. The presented method combines the advantages of transfer learning and data augmentation to improve the accuracy of the resulting evaluations. In contrast to classical deep learning methods, transfer learning requires much smaller training datasets to achieve acceptable results. At the same time, data augmentation customized for drill wear recognition makes it possible to expand the original dataset and improve the overall accuracy. The experiments performed confirm that the presented approach achieves accurate class recognition in the given problem, even with a small original dataset.
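
The transfer-learning-plus-augmentation recipe described here is illustrated below in a hedged sketch: a pretrained backbone (ResNet18 stands in for whichever network the authors used), a frozen feature extractor with a new three-class head, and a simple augmentation pipeline. Folder names and hyperparameters are hypothetical.

```python
# Hedged sketch (not the authors' implementation): transfer learning with data
# augmentation for the three drill-wear classes (green / yellow / red).
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Augmentation tailored to hole images: flips, small rotations, brightness jitter.
train_tf = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(15),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Hypothetical folder layout: drill_holes/{green,yellow,red}/*.png
train_set = datasets.ImageFolder("drill_holes", transform=train_tf)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Pretrained backbone; only the new final layer is trained at first.
model = models.resnet18(weights="IMAGENET1K_V1")
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 3)      # 3 wear classes

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```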


2020, Vol 2020, pp. 1-13
Author(s): Zeming Fan, Mudasir Jamil, Muhammad Tariq Sadiq, Xiwei Huang, Xiaojun Yu

Due to the rapid spread of COVID-19 and the deaths it has caused worldwide, it is imperative to develop a reliable tool for the early detection of this disease. The chest X-ray is currently accepted as one of the reliable means for such detection. However, most of the available methods require large training datasets, and detection accuracy still needs improvement because the acquired images offer only limited boundary segments for symptom identification. In this study, a robust and efficient method based on transfer learning techniques is proposed to identify normal and COVID-19 patients using small training data. Transfer learning builds accurate models in a time-saving way. First, data augmentation was performed to help the network memorize image details. Next, five state-of-the-art transfer learning models, AlexNet, MobileNetv2, ShuffleNet, SqueezeNet, and Xception, with three optimizers, Adam, SGDM, and RMSProp, were implemented at various learning rates, 1e-4, 2e-4, 3e-4, and 4e-4, to reduce the probability of overfitting. All the experiments were performed on publicly available datasets, with several analytical measurements obtained under a 10-fold cross-validation scheme. The results suggest that MobileNetv2 with the Adam optimizer at a learning rate of 3e-4 provides an average accuracy, recall, precision, and F-score of 97%, 96.5%, 97.5%, and 97%, respectively, higher than all other combinations. The proposed method is competitive with the available literature, demonstrating that it could be used for the early detection of COVID-19 patients.
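
The best-performing combination reported here (MobileNetv2 fine-tuned with Adam at a learning rate of 3e-4) can be sketched in a few lines of PyTorch. This is an illustration under assumed conditions, not the authors' code; the binary label layout and the 224x224 input size are assumptions, and the 10-fold cross-validation loop is omitted for brevity.

```python
# Rough sketch of the reported best combination: MobileNetv2 + Adam, lr = 3e-4,
# for a binary normal / COVID-19 chest X-ray classification task.
import torch
import torch.nn as nn
from torchvision import models

model = models.mobilenet_v2(weights="IMAGENET1K_V1")                  # transfer learning
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 2)  # normal vs. COVID-19
optimizer = torch.optim.Adam(model.parameters(), lr=3e-4)
criterion = nn.CrossEntropyLoss()

def train_epoch(loader):
    model.train()
    for x, y in loader:            # x: (B, 3, 224, 224) X-ray batches, y: labels
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
```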


Electronics, 2020, Vol 9 (2), pp. 323
Author(s): Wentao Mao, Di Zhang, Siyu Tian, Jiamei Tang

In recent years, machine learning techniques have proven to be a promising tool for the early fault detection of rolling bearings. In many practical applications, however, whole-life bearing data are not easy to accumulate historically, and insufficient data may yield a poorly trained detection model. If data collected under different working conditions are used to facilitate model training, the data distributions of different bearings are usually quite different, which violates the independent and identically distributed (i.i.d.) assumption and tends to degrade performance. In addition, disturbed by unstable noise under complex conditions, most current detection methods are inclined to raise false alarms, so the reliability of the detection results needs to be improved. To solve these problems, a robust early-fault detection method for bearings is proposed based on deep transfer learning. The method consists of an offline stage and an online stage. In the offline stage, a deep auto-encoder network with domain adaptation weakens the distribution inconsistency of normal-state data among different bearings and yields a common feature representation of the normal state. With the extracted common features, a new state assessment method based on a robust deep auto-encoder network is proposed to evaluate the boundary between the normal state and the early fault state in the low-rank feature space, and the detection model is established by training a support vector machine classifier. In the online stage, as data batches arrive sequentially, the features of the target bearing are extracted using the common representation learnt offline, and online detection is conducted by feeding them into the SVM model. Experimental results on the IEEE PHM Challenge 2012 bearing dataset and the XJTU-SY dataset show that the proposed approach outperforms several state-of-the-art detection methods in terms of detection accuracy and false alarm rate.
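
The key offline-stage ingredient, an auto-encoder whose hidden features are pulled together across bearings by a domain-adaptation penalty, can be sketched as follows. This is our own minimal illustration rather than the paper's network: the layer sizes are arbitrary, the discrepancy term is a crude linear-kernel MMD (mean difference), and the input vectors stand in for preprocessed vibration segments.

```python
# Minimal sketch: a deep auto-encoder trained with reconstruction loss plus an
# MMD-style penalty so that normal-state features from source and target
# bearings share one representation before the SVM detector is trained on them.
import torch
import torch.nn as nn

class AE(nn.Module):
    def __init__(self, d_in=1024, d_hid=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, 256), nn.ReLU(), nn.Linear(256, d_hid))
        self.dec = nn.Sequential(nn.Linear(d_hid, 256), nn.ReLU(), nn.Linear(256, d_in))

    def forward(self, x):
        z = self.enc(x)
        return z, self.dec(z)

def mmd_linear(z_src, z_tgt):
    # Squared distance between feature means (a simple, linear-kernel MMD).
    return ((z_src.mean(0) - z_tgt.mean(0)) ** 2).sum()

ae = AE()
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
x_src = torch.randn(128, 1024)   # normal-state segments, source bearings (placeholder)
x_tgt = torch.randn(128, 1024)   # normal-state segments, target bearing (placeholder)

for _ in range(100):
    z_s, rec_s = ae(x_src)
    z_t, rec_t = ae(x_tgt)
    loss = ((rec_s - x_src) ** 2).mean() + ((rec_t - x_tgt) ** 2).mean() \
           + 0.1 * mmd_linear(z_s, z_t)
    opt.zero_grad()
    loss.backward()
    opt.step()

# The aligned features z then feed the SVM-based detector used in the online stage.
```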


2021, Vol 2021, pp. 1-11
Author(s): Zhiyong Tao, Xinru Zhou, Zhixue Xu, Sen Lin, Yalei Hu, ...

Accuracy and efficiency are essential topics in current biometric feature recognition and security research. This paper proposes a deep neural network using bidirectional feature extraction and transfer learning to improve finger-vein recognition performance. First, we build a new finger-vein database containing the opposite position information of the original one and adopt transfer learning to make the network suitable for our overall recognition framework. Next, the feature extractor is constructed by adjusting the parameters on the unidirectional database, capturing vein features from top to bottom and vice versa. We then concatenate these two features to form the bidirectional finger-vein features, which are classified by a Support Vector Machine (SVM) to realize recognition. Experiments are conducted on the published FV-USM database and the finger-vein database of the Signal and Information Processing Laboratory (FV-SIPL). The accuracy of the proposed algorithm reaches 99.67% and 99.31%, respectively, significantly higher than unidirectional recognition on each database. Compared with the algorithms cited in this paper, the proposed bidirectional-feature model achieves higher accuracy and faster recognition than state-of-the-art frameworks, along with strong practical value.
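
The bidirectional-feature idea, extracting deep features in two scan directions and concatenating them before an SVM, is sketched below. It is not the paper's network: a pretrained ResNet18 stands in for the feature extractor, and the vertical flip stands in for the "opposite position" database.

```python
# Illustrative sketch: deep features from a finger-vein image and its vertically
# flipped copy are concatenated and classified with an SVM.
import torch
from torchvision import models
from sklearn.svm import SVC

backbone = models.resnet18(weights="IMAGENET1K_V1")
backbone.fc = torch.nn.Identity()          # expose the 512-d penultimate features
backbone.eval()

@torch.no_grad()
def bidirectional_features(img):           # img: (3, H, W) float tensor
    top_down = backbone(img.unsqueeze(0))                              # original direction
    bottom_up = backbone(torch.flip(img, dims=[1]).unsqueeze(0))       # reversed direction
    return torch.cat([top_down, bottom_up], dim=1).squeeze(0).numpy()  # 1024-d vector

# feats / labels would be built from the training images; the SVM does recognition:
# svm = SVC(kernel="rbf").fit(feats, labels)
```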


Author(s): Balaji Sreenivasulu, Anjaneyulu Pasala, Gaikwad Vasanth, ...

In computer vision, domain adaptation, or transfer learning, plays an important role because it learns a target classifier using labeled data from a different distribution. Existing research has mostly focused on minimizing the time complexity of neural networks and works effectively on low-level features; however, it does not account for data augmentation time or the cost of labeled data. Moreover, machine learning techniques struggle to obtain large amounts of distributed labeled data. In this study, a pre-trained Inception network is fine-tuned with augmented data. The study has two phases: in the first, the effectiveness of data augmentation for pre-trained Inception networks is investigated; in the second, transfer learning is used to enhance the first-phase results, with a Support Vector Machine (SVM) trained on the features extracted from the Inception layers. Experiments are conducted on a publicly available dataset to estimate the effectiveness of the proposed method. The results show that the proposed method achieves 95.23% accuracy, whereas the existing deep neural network and traditional convolutional network techniques achieve 87.32% and 91.32%, respectively, an improvement of roughly 4-8 percentage points over the existing techniques.
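
A hedged sketch of the two-phase idea follows: augment the images, pass them through a pretrained Inception-v3 network used as a feature extractor, and train an SVM on the extracted features. This is not the authors' code; the augmentation choices and the linear SVM kernel are assumptions.

```python
# Illustrative sketch: Inception-v3 features from augmented images, fed to an SVM.
import torch
from torchvision import models, transforms
from sklearn.svm import SVC

inception = models.inception_v3(weights="IMAGENET1K_V1")
inception.fc = torch.nn.Identity()      # expose the 2048-d pooled features
inception.eval()

augment = transforms.Compose([          # simple augmentation stand-in
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
    transforms.Resize((299, 299)),      # Inception-v3 expects 299x299 inputs
    transforms.ToTensor(),
])

@torch.no_grad()
def extract(img_pil):
    x = augment(img_pil).unsqueeze(0)
    return inception(x).squeeze(0).numpy()   # 2048-d feature vector

# X = [extract(img) for img in train_images]; y = train_labels
# svm = SVC(kernel="linear").fit(X, y)
```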


2019, Vol 36 (10), pp. 1945-1956
Author(s): Qian Li, Shaoen Tang, Xuan Peng, Qiang Ma

Atmospheric visibility is an important element of meteorological observation. With existing methods, defining image features that reflect visibility accurately and comprehensively is difficult. This paper proposes a visibility detection method based on transfer learning with deep convolutional neural networks (DCNN) that addresses the lack of sufficiently large labeled visibility datasets. In the proposed method, each image is first divided into several subregions, which are encoded to extract visual features using a pretrained no-reference image quality assessment neural network. A support vector regression model is then trained to map the extracted features to visibility, and the fusion weight of each subregion is evaluated according to the error analysis of the regression model. Finally, the neural network is in turn fine-tuned with the current detection results to better fit the visibility detection problem. Experimental results demonstrate that the detection accuracy of the proposed method exceeds 90% and satisfies the requirements of daily observation applications.
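
The regression-and-fusion step can be sketched with scikit-learn: one support vector regressor per subregion maps precomputed deep features to a visibility value, and the per-region estimates are fused with error-based weights. This is an illustration of the described pipeline, not the paper's system; the feature arrays, kernel, and weighting rule are assumptions.

```python
# Illustrative sketch: per-subregion SVR models mapping deep features to
# visibility, fused with weights derived from each model's error.
import numpy as np
from sklearn.svm import SVR

def fit_region_models(region_feats, visibility):
    """region_feats: list over regions of (n_samples, d) arrays; visibility: (n_samples,)."""
    models, weights = [], []
    for F in region_feats:
        m = SVR(kernel="rbf", C=10.0).fit(F, visibility)
        err = np.mean(np.abs(m.predict(F) - visibility)) + 1e-6
        models.append(m)
        weights.append(1.0 / err)                 # lower error -> higher fusion weight
    w = np.array(weights) / np.sum(weights)
    return models, w

def predict_visibility(models, w, region_feats_one_image):
    preds = np.array([m.predict(f.reshape(1, -1))[0]
                      for m, f in zip(models, region_feats_one_image)])
    return float(np.dot(w, preds))                # weighted fusion of region estimates
```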


Sensors, 2020, Vol 20 (16), pp. 4430
Author(s): Ze Luo, Huiling Yu, Yizhuo Zhang

The real-time detection of pine cones in Korean pine forests is not only the data basis for the mechanized picking of pine cones, but also one of the important methods for evaluating the yield of Korean pine forests. In recent years, deep-learning methods have achieved a certain level of accuracy in detecting fruit on trees from images, but their overall performance has not been satisfactory, and they have not been applied to pine cone detection. In this paper, a pine cone detection method based on Boundary Equilibrium Generative Adversarial Networks (BEGAN) and You Only Look Once (YOLO) v3 is proposed to address the problems of an insufficient dataset, inaccurate detection results, and slow detection speed. First, we use traditional image augmentation and the generative adversarial network BEGAN to implement data augmentation. Second, we introduce a densely connected network (DenseNet) structure into the backbone of YOLOv3. Third, we expand the detection scale of YOLOv3 and optimize its loss function using the Distance-IoU (DIoU) algorithm. Finally, we conduct a comparative experiment. The experimental results show that the performance of the model can be effectively improved by using BEGAN for data augmentation. Under the same conditions, the improved YOLOv3 model outperforms the Single Shot MultiBox Detector (SSD), Faster R-CNN (faster regions with convolutional neural networks), and the original YOLOv3 model. The detection accuracy reaches 95.3%, and the detection efficiency is 37.8% higher than that of the original YOLOv3.
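
The DIoU term used to refine the YOLOv3 localization loss has a compact form: the IoU minus the squared distance between the two box centers, normalized by the squared diagonal of the smallest enclosing box (the loss is then 1 minus this quantity). A minimal NumPy illustration, not the authors' code, using the usual (x1, y1, x2, y2) box convention:

```python
# Minimal illustration of the Distance-IoU (DIoU) metric for two boxes.
import numpy as np

def diou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b

    # Intersection-over-union.
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter + 1e-9
    iou = inter / union

    # Squared center distance, normalized by the enclosing box's squared diagonal.
    rho2 = ((ax1 + ax2) - (bx1 + bx2)) ** 2 / 4 + ((ay1 + ay2) - (by1 + by2)) ** 2 / 4
    cx1, cy1 = min(ax1, bx1), min(ay1, by1)
    cx2, cy2 = max(ax2, bx2), max(ay2, by2)
    c2 = (cx2 - cx1) ** 2 + (cy2 - cy1) ** 2 + 1e-9

    return iou - rho2 / c2        # DIoU loss is typically 1 - diou(box_a, box_b)

print(diou((0, 0, 10, 10), (2, 2, 12, 12)))
```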


Symmetry, 2019, Vol 11 (5), pp. 606
Author(s): Lvwen Huang, Along He, Mengqun Zhai, Yuxi Wang, Ruige Bai, ...

The fertility detection of Specific Pathogen Free (SPF) chicken embryo eggs in vaccine preparation is a challenging task due to the high similarity among six kinds of hatching embryos (weak, hemolytic, crack, infected, infertile, and fertile). This paper first analyzes the two classification difficulties, feature similarity and subtle variation, across the six kinds of five- to seven-day embryos, and proposes a novel multi-feature fusion architecture based on a Deep Convolutional Neural Network (DCNN) for a small dataset. To avoid overfitting, data augmentation is employed to generate enough training images after the Regions of Interest (ROI) of the original images are cropped. All the augmented ROI images are then fed into pretrained AlexNet and GoogLeNet networks to learn discriminative deep features by transfer learning. After the local Speeded Up Robust Feature (SURF) and Histogram of Oriented Gradients (HOG) features are extracted, the multi-feature fusion of deep features and local features is performed. Finally, a Support Vector Machine (SVM) is trained with the fused features. Experiments show that the proposed method achieves an average classification accuracy of 98.4%, and that the proposed transfer learning offers superior generalization and better classification performance for small-scale agricultural image samples.
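
The fusion step, concatenating deep CNN features with handcrafted local descriptors before the SVM, is sketched below under stated assumptions: AlexNet supplies the deep features (GoogLeNet would be analogous), HOG supplies the local features, and SURF is omitted because it is not available in standard OpenCV builds. This is not the authors' code.

```python
# Hedged sketch of multi-feature fusion: deep AlexNet features concatenated with
# HOG descriptors of a cropped ROI, to be classified by an SVM.
import numpy as np
import torch
from torchvision import models
from skimage.feature import hog
from skimage.transform import resize
from sklearn.svm import SVC

backbone = models.alexnet(weights="IMAGENET1K_V1")
backbone.classifier = backbone.classifier[:-1]     # drop the last FC layer -> 4096-d
backbone.eval()

@torch.no_grad()
def fused_features(gray_roi):                      # gray_roi: 2-D numpy array (cropped ROI)
    img = resize(gray_roi, (224, 224))
    deep_in = torch.from_numpy(np.stack([img] * 3)).float().unsqueeze(0)
    deep = backbone(deep_in).squeeze(0).numpy()    # deep feature (4096-d)
    local = hog(img, pixels_per_cell=(16, 16), cells_per_block=(2, 2))  # HOG feature
    return np.concatenate([deep, local])

# X = [fused_features(roi) for roi in rois]; y = embryo_labels
# svm = SVC(kernel="rbf").fit(X, y)
```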

