Woven Fabric Pattern Recognition and Classification Based on Deep Convolutional Neural Networks

Electronics, 2020, Vol 9 (6), pp. 1048
Author(s): Muhammad Ather Iqbal Hussain, Babar Khan, Zhijie Wang, Shenyi Ding

The weave pattern (texture) of woven fabric is considered an important factor in the design and production of high-quality fabric. Traditionally, the recognition of woven fabric has relied on manual visual inspection, which poses many challenges. Moreover, approaches based on early machine learning algorithms depend directly on handcrafted features, which are time-consuming and error-prone to engineer. Hence, an automated system is needed for the classification of woven fabric to improve productivity. In this paper, we propose a deep learning model based on data augmentation and a transfer learning approach for the classification and recognition of woven fabrics. The model uses a residual network (ResNet), in which the fabric texture features are extracted and classified automatically in an end-to-end fashion. We evaluated the results of our model using metrics such as accuracy, balanced accuracy, and F1-score. The experimental results show that the proposed model is robust and achieves state-of-the-art accuracy even when the physical properties of the fabric are changed. We compared our results with other baseline approaches and a pretrained VGGNet deep learning model; the comparison showed that the proposed method achieves higher accuracy when rotational orientations of the fabric and proper lighting effects are considered.

2021, Vol 11 (1)
Author(s): Rajat Garg, Anil Kumar, Nikunj Bansal, Manish Prateek, Shashi Kumar

Urban area mapping is an important application of remote sensing that aims to estimate both the extent of and change in land cover in urban areas. A major challenge in analyzing Synthetic Aperture Radar (SAR) based remote sensing data is the strong similarity between highly vegetated urban areas or oriented urban targets and actual vegetation. This similarity leads to misclassification of urban areas as forest cover. The present work is a precursor study for the dual-frequency L- and S-band NASA-ISRO Synthetic Aperture Radar (NISAR) mission and aims at minimizing the misclassification of such highly vegetated and oriented urban targets as vegetation with the help of deep learning. In this study, three machine learning algorithms, Random Forest (RF), K-Nearest Neighbour (KNN), and Support Vector Machine (SVM), were implemented along with a deep learning model, DeepLabv3+, for semantic segmentation of Polarimetric SAR (PolSAR) data. It is a general perception that a large dataset is required for the successful implementation of any deep learning model, but in SAR-based remote sensing a major issue is the unavailability of a large benchmark labeled dataset for implementing deep learning algorithms from scratch. In the current work, it is shown that the pre-trained deep learning model DeepLabv3+ outperforms the machine learning algorithms on the land use and land cover (LULC) classification task even with a small dataset, using transfer learning. The highest pixel accuracy of 87.78% and overall pixel accuracy of 85.65% were achieved with DeepLabv3+; Random Forest performed best among the machine learning algorithms with an overall pixel accuracy of 77.91%, while SVM and KNN trailed with overall accuracies of 77.01% and 76.47%, respectively. The highest precision of 0.9228 was recorded for the urban class in the semantic segmentation task with DeepLabv3+, while the machine learning algorithms SVM and RF gave comparable results with precisions of 0.8977 and 0.8958, respectively.


2021
Author(s): Lukman Ismael, Pejman Rasti, Florian Bernard, Philippe Menei, Aram Ter Minassian, ...

BACKGROUND: Functional MRI (fMRI) is an essential tool for the presurgical planning of brain tumor removal, allowing the identification of functional brain networks in order to preserve the patient’s neurological functions. One fMRI technique used to identify functional brain networks is resting-state fMRI (rsfMRI). However, this technique is not routinely used because an expert reviewer is needed to identify each functional network manually.
OBJECTIVE: We aimed to automate the detection of functional brain networks in rsfMRI data using deep learning and machine learning algorithms.
METHODS: We used the rsfMRI data of 82 healthy patients to compare the diagnostic performance of our proposed end-to-end deep learning model against the reference functional networks identified manually by 2 expert reviewers.
RESULTS: Experimental results show a best performance of 86% correct recognition rate for the proposed deep learning architecture, demonstrating its superiority over the other machine learning algorithms tested on the same classification task.
CONCLUSIONS: The proposed end-to-end deep learning model was the best-performing machine learning algorithm. Using this model to automate functional network detection in rsfMRI may broaden the use of rsfMRI, allowing presurgical identification of these networks and thus helping to preserve the patient’s neurological status.
CLINICALTRIAL: Comité de protection des personnes Ouest II, decision reference CPP 2012-25


Sensors, 2020, Vol 20 (10), pp. 2972
Author(s): Qinghua Gao, Shuo Jiang, Peter B. Shull

Hand gesture classification and finger angle estimation are both critical for intuitive human–computer interaction. However, most approaches study them in isolation. We thus propose a dual-output deep learning model to enable simultaneous hand gesture classification and finger angle estimation. Data augmentation and deep learning were used to detect spatial-temporal features via a wristband with ten modified barometric sensors. Ten subjects performed experimental testing by flexing/extending each finger at the metacarpophalangeal joint while the proposed model was used to classify each hand gesture and estimate continuous finger angles simultaneously. A data glove was worn to record ground-truth finger angles. Overall hand gesture classification accuracy was 97.5% and finger angle estimation R² was 0.922, both significantly higher than for existing shallow learning approaches used in isolation. The proposed method could be used in applications related to human–computer interaction and in control environments with both discrete and continuous variables.
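The dual-output idea, one shared feature extractor feeding both a classification head and a regression head, can be sketched as below. Layer sizes, the gesture count, and the use of a 1D convolution over the ten-sensor stream are assumptions; the paper's actual architecture is not reproduced here.

```python
import torch
import torch.nn as nn

class DualOutputNet(nn.Module):
    """Shared backbone with a gesture-classification head and a
    finger-angle regression head (illustrative sizes)."""
    def __init__(self, n_sensors=10, n_gestures=11, n_fingers=5):
        super().__init__()
        self.backbone = nn.Sequential(           # shared spatial-temporal features
            nn.Conv1d(n_sensors, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
        )
        self.cls_head = nn.Linear(32, n_gestures)  # discrete gesture logits
        self.reg_head = nn.Linear(32, n_fingers)   # continuous joint angles

    def forward(self, x):                        # x: (batch, sensors, time)
        h = self.backbone(x)
        return self.cls_head(h), self.reg_head(h)

net = DualOutputNet()
logits, angles = net(torch.randn(4, 10, 50))     # 4 windows of 50 samples
print(logits.shape, angles.shape)  # torch.Size([4, 11]) torch.Size([4, 5])
```

Training would combine a cross-entropy loss on the logits with a regression loss (e.g. MSE) on the angles, weighted against each other.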


Sensors, 2020, Vol 20 (9), pp. 2556
Author(s): Liyang Wang, Yao Mu, Jing Zhao, Xiaoya Wang, Huilian Che

The clinical symptoms of prediabetes are mild and easy to overlook, but prediabetes may develop into diabetes if early intervention is not performed. In this study, a deep learning model, referred to as IGRNet, is developed to effectively detect and diagnose prediabetes in a non-invasive, real-time manner using a 12-lead electrocardiogram (ECG) lasting 5 s. After searching for an appropriate activation function, we compared two mainstream deep neural networks (AlexNet and GoogLeNet) and three traditional machine learning algorithms to verify the superiority of our method. The diagnostic accuracy of IGRNet is 0.781, and the area under the receiver operating characteristic curve (AUC) is 0.777 after testing on the independent test set, which includes a mixed group. Furthermore, the accuracy and AUC are 0.856 and 0.825, respectively, on the normal-weight-range test set. The experimental results indicate that IGRNet diagnoses prediabetes with high accuracy using ECGs, outperforming other existing machine learning methods; this suggests its potential for application in clinical practice as a non-invasive prediabetes diagnosis technology.
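As a rough illustration of classifying a 12-lead, 5 s ECG with a CNN: the sketch below is not IGRNet (whose architecture the abstract does not detail); the 500 Hz sampling rate, layer sizes, and 1D-convolution-over-leads design are assumptions.

```python
import torch
import torch.nn as nn

# Small 1D CNN stand-in: the 12 leads are treated as input channels
# and the convolution runs along the time axis.
model = nn.Sequential(
    nn.Conv1d(12, 32, kernel_size=7, padding=3),
    nn.ReLU(),
    nn.MaxPool1d(4),
    nn.Conv1d(32, 64, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(64, 2),            # prediabetes vs. normal
)

ecg = torch.randn(3, 12, 2500)   # three 5 s recordings at an assumed 500 Hz
logits = model(ecg)
print(logits.shape)  # torch.Size([3, 2])
```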


Author(s): Wei Zhang, Gaoliang Peng, Chuanhao Li, Yuanhang Chen, Zhujun Zhang

Intelligent fault diagnosis techniques have replaced time-consuming and unreliable human analysis, increasing the efficiency of fault diagnosis. Deep learning models can improve the accuracy of intelligent fault diagnosis thanks to their multilayer nonlinear mapping ability. This paper proposes a novel method named Deep Convolutional Neural Networks with Wide First-layer Kernels (WDCNN). The proposed method uses raw vibration signals as input (data augmentation is used to generate more inputs) and uses wide kernels in the first convolutional layer to extract features and suppress high-frequency noise. Small convolutional kernels in the subsequent layers are used for multilayer nonlinear mapping. AdaBN is implemented to improve the domain adaptation ability of the model. The proposed model addresses the problem that the accuracy of CNNs currently applied to fault diagnosis is not very high. WDCNN not only achieves 100% classification accuracy on normal signals but also outperforms the state-of-the-art DNN model based on frequency features under different working loads and in noisy environments.
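The wide-first-kernel idea can be sketched as below: a very wide, strided first convolution over the raw vibration signal followed by small kernels. This is a minimal sketch; the channel counts, padding, 10-class output, and omission of AdaBN are simplifications relative to the paper.

```python
import torch
import torch.nn as nn

class WDCNNSketch(nn.Module):
    """Wide first-layer kernel on raw vibration input, then small kernels."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            # Wide first layer: large receptive field suppresses
            # high-frequency noise directly on the raw signal.
            nn.Conv1d(1, 16, kernel_size=64, stride=16, padding=24),
            nn.BatchNorm1d(16), nn.ReLU(), nn.MaxPool1d(2),
            # Small kernels for multilayer nonlinear mapping.
            nn.Conv1d(16, 32, kernel_size=3, padding=1),
            nn.BatchNorm1d(32), nn.ReLU(), nn.MaxPool1d(2),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):  # x: (batch, 1, signal_length) raw vibration
        return self.classifier(self.features(x))

model = WDCNNSketch().eval()
out = model(torch.randn(2, 1, 2048))  # two raw vibration segments
print(out.shape)  # torch.Size([2, 10])
```

AdaBN would additionally recompute the BatchNorm statistics on target-domain data at test time to adapt across working loads.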


2020, Vol 10 (1)
Author(s): Noha E. El-Attar, Mohamed K. Hassan, Othman A. Alghamdi, Wael A. Awad

Reliance on deep learning techniques has become an important trend in several scientific domains, including biological science, due to their proven efficiency in manipulating big data that are often characterized by non-linear processes and complicated relationships. In this study, Convolutional Neural Networks (CNN), one of the deep learning techniques, have been recruited to classify and predict the biological activities of essential oil-producing plants through their chemical compositions. The model is established based on the available chemical composition information of a set of endemic Egyptian plants and their biological activities. Another type of machine learning algorithm, the Multiclass Neural Network (MNN), has been applied to the same Essential Oils (EO) dataset in order to fairly evaluate the performance of the proposed CNN model. The recorded testing accuracies for CNN and MNN are 98.13% and 81.88%, respectively. Finally, the CNN technique has been adopted as a reliable model for classifying and predicting the bioactivities of Egyptian EO-containing plants. The overall accuracy for the final prediction process is approximately 97%. Hereby, the proposed deep learning model could be utilized as an efficient model for predicting the bioactivities of, at least Egyptian, EO-producing plants.


Sensors, 2020, Vol 20 (21), pp. 6126
Author(s): Tae Hyong Kim, Ahnryul Choi, Hyun Mu Heo, Hyunggun Kim, Joung Hwan Mun

Pre-impact fall detection can detect a fall before a body segment hits the ground. When integrated with a protective system, it can directly prevent injury from hitting the ground. The impact acceleration peak magnitude is one of the key measurement factors that affect the severity of an injury, and it can be used as a design parameter for wearable protective devices. In our study, a novel method is proposed to predict the impact acceleration magnitude after loss of balance using a single inertial measurement unit (IMU) sensor and a sequence-based deep learning model. Twenty-four healthy participants took part in fall experiments. Each participant wore a single IMU sensor on the waist to collect tri-axial accelerometer and angular velocity data. A deep learning method, bi-directional long short-term memory (LSTM) regression, is applied to predict a fall’s impact acceleration magnitude prior to impact (for falls in five directions). To improve prediction performance, data augmentation techniques that increase the size of the dataset are applied. Our proposed model showed a mean absolute percentage error (MAPE) of 6.69 ± 0.33% with an r value of 0.93 when all three types of data augmentation techniques were applied. Additionally, there was a significant reduction in MAPE, by 45.2%, when the number of training datasets was increased 4-fold. These results show that the impact acceleration magnitude can be used as an activation parameter for fall prevention, such as in a wearable airbag system, by optimizing the deployment process to minimize fall injury in real time.
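A bi-directional LSTM regressor over IMU windows, in the spirit of the method above, can be sketched as follows. The hidden size, window length, and regressing from the final time step are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class BiLSTMRegressor(nn.Module):
    """Window of 6-channel IMU samples (3-axis accel + 3-axis gyro) in,
    one scalar (predicted impact-acceleration peak magnitude) out."""
    def __init__(self, n_channels=6, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True,
                            bidirectional=True)
        self.out = nn.Linear(2 * hidden, 1)  # concatenated fwd+bwd states

    def forward(self, x):              # x: (batch, time, channels)
        seq, _ = self.lstm(x)
        return self.out(seq[:, -1])    # regress from the last time step

model = BiLSTMRegressor()
pred = model(torch.randn(8, 100, 6))   # 8 windows of 100 IMU samples
print(pred.shape)  # torch.Size([8, 1])
```

The data augmentation step would multiply the number of such training windows (e.g. by jittering or rotating the sensor signals) before fitting the regressor with an MSE-type loss.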


2020, Vol 2020, pp. 1-11
Author(s): Tianliang Lu, Yanhui Du, Li Ouyang, Qiuyu Chen, Xirui Wang

In recent years, the amount of malware on the Android platform has been increasing, and with the widespread use of code obfuscation technology, the accuracy of antivirus software and traditional detection algorithms is low. Current state-of-the-art research shows that researchers have started applying deep learning methods to malware detection. We propose an Android malware detection algorithm based on a hybrid deep learning model that combines a deep belief network (DBN) and a gated recurrent unit (GRU). First, the Android malware is analyzed: in addition to static features, dynamic behavioral features with strong anti-obfuscation ability are extracted. Then, a hybrid deep learning model is built for Android malware detection. Because the static features are relatively independent, the DBN is used to process them; because the dynamic features have temporal correlation, the GRU is used to process the dynamic feature sequence. Finally, the outputs of the DBN and GRU are fed into a BP neural network, which outputs the final classification results. Experimental results show that, compared with traditional machine learning algorithms, the Android malware detection model based on the hybrid deep learning algorithm has higher detection accuracy and also detects obfuscated malware more effectively.
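The two-branch-plus-fusion structure can be sketched as below. This is a rough stand-in: a plain MLP replaces the DBN (which is normally pre-trained layer-wise as stacked RBMs), and all feature dimensions and hidden sizes are assumptions for illustration.

```python
import torch
import torch.nn as nn

class HybridDetector(nn.Module):
    """Static-feature branch + GRU branch over the dynamic behavior
    sequence, fused by a small fully connected classifier."""
    def __init__(self, n_static=100, n_dyn=20, hidden=64):
        super().__init__()
        self.static_branch = nn.Sequential(   # DBN stand-in (plain MLP)
            nn.Linear(n_static, hidden), nn.ReLU())
        self.dyn_branch = nn.GRU(n_dyn, hidden, batch_first=True)
        self.fuse = nn.Sequential(            # BP-network fusion stage
            nn.Linear(2 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2))             # benign vs. malware

    def forward(self, static_x, dyn_x):
        s = self.static_branch(static_x)      # (batch, hidden)
        _, h = self.dyn_branch(dyn_x)         # h: (1, batch, hidden)
        return self.fuse(torch.cat([s, h[0]], dim=1))

model = HybridDetector()
logits = model(torch.randn(4, 100),           # static feature vectors
               torch.randn(4, 30, 20))        # dynamic behavior sequences
print(logits.shape)  # torch.Size([4, 2])
```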


2019
Author(s): Xinyang Feng, Frank A. Provenzano, Scott A. Small

ABSTRACTDeep learning applied to MRI for Alzheimer’s classification is hypothesized to improve if the deep learning model implicates disease’s pathophysiology. The challenge in testing this hypothesis is that large-scale data are required to train this type of model. Here, we overcome this challenge by using a novel data augmentation strategy and show that our MRI-based deep learning model classifies Alzheimer’s dementia with high accuracy. Moreover, a class activation map was found dominated by signal from the hippocampal formation, a site where Alzheimer’s pathophysiology begins. Next, we tested the model’s performance in prodromal Alzheimer’s when patients present with mild cognitive impairment (MCI). We retroactively dichotomized a large cohort of MCI patients who were followed for up to 10 years into those with and without prodromal Alzheimer’s at baseline and used the dementia-derived model to generate individual ‘deep learning MRI’ scores. We compared the two groups on these scores, and on other biomarkers of amyloid pathology, tau pathology, and neurodegeneration. The deep learning MRI scores outperformed nearly all other biomarkers, including—unexpectedly—biomarkers of amyloid or tau pathology, in classifying prodromal disease and in predicting clinical progression. Providing a mechanistic explanation, the deep learning MRI scores were found to be linked to regional tau pathology, through investigations using cross-sectional, longitudinal, premortem and postmortem data. Our findings validate that a disease’s known pathophysiology can improve the design and performance of deep learning models. Moreover, by showing that deep learning can extract useful biomarker information from conventional MRIs, the advantages of this model extend practically, potentially reducing patient burden, risk, and cost.
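The class activation map (CAM) technique mentioned above localizes which regions drive a classifier's decision. A hedged sketch, assuming an architecture that ends in global average pooling followed by a single linear layer (the paper's actual network is not specified here):

```python
import torch
import torch.nn as nn

# Stand-in backbone and head; real models would be much deeper.
conv = nn.Conv2d(1, 8, kernel_size=3, padding=1)  # last conv stage
fc = nn.Linear(8, 2)                              # 2 classes: AD vs. control

x = torch.randn(1, 1, 32, 32)                     # dummy single-channel MRI slice
fmap = torch.relu(conv(x))                        # (1, 8, 32, 32) feature maps
logits = fc(fmap.mean(dim=(2, 3)))                # global average pool, then linear

# CAM for the predicted class: weight each feature map by that class's
# FC weight and sum over channels, giving a spatial relevance map.
c = logits.argmax(dim=1).item()
cam = torch.einsum("k,bkhw->bhw", fc.weight[c], fmap)
print(cam.shape)  # torch.Size([1, 32, 32])
```

Upsampling `cam` to the input resolution and overlaying it on the MRI is how one would visualize, e.g., hippocampal dominance of the signal.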

