Multi-level Amplified Iterative Training of Semi-Supervision Deep Learning For Glaucoma Diagnosis

Author(s):  
Yu Tang ◽  
Gang Yang ◽  
Dayong Ding ◽  
Gangwei Cheng
2021 ◽  
Vol 13 (8) ◽  
pp. 1602
Author(s):  
Qiaoqiao Sun ◽  
Xuefeng Liu ◽  
Salah Bourennane

Deep learning models have strong feature-learning abilities and have been successfully applied to hyperspectral images (HSIs). However, training most deep learning models requires labeled samples, and collecting labeled samples for HSI is labor-intensive. In addition, single-level features from a single layer are usually considered, which may result in the loss of important information. Using multiple networks to obtain multi-level features is one solution, but at the cost of longer training time and higher computational complexity. To solve these problems, a novel unsupervised multi-level feature extraction framework based on a three-dimensional convolutional autoencoder (3D-CAE) is proposed in this paper. The designed 3D-CAE is stacked from fully 3D convolutional layers and 3D deconvolutional layers, which allows the spectral-spatial information of targets to be mined simultaneously. Moreover, the 3D-CAE can be trained in an unsupervised way without involving labeled samples, and the multi-level features are obtained directly from the encoder layers at different scales and resolutions, which is more efficient than using multiple networks to get them. The effectiveness of the proposed multi-level features is verified on two hyperspectral data sets. The results demonstrate that the proposed method holds great promise for unsupervised feature learning and can further improve hyperspectral classification compared with single-level features.
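As a rough illustration of the architecture described above, here is a minimal PyTorch sketch of a fully 3D convolutional autoencoder whose encoder layers expose features at two scales; the layer widths, kernel sizes, and patch dimensions are illustrative assumptions, not the authors' exact design.

```python
# Minimal sketch of a 3D convolutional autoencoder (3D-CAE) for unsupervised
# spectral-spatial feature learning on HSI patches. Layer widths and kernel
# sizes are illustrative, not the paper's exact architecture.
import torch
import torch.nn as nn

class CAE3D(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: stacked 3D convolutions over (bands, height, width).
        self.enc1 = nn.Sequential(nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU())
        # Decoder: 3D deconvolutions mirror the encoder for reconstruction.
        self.dec1 = nn.Sequential(
            nn.ConvTranspose3d(16, 8, 3, stride=2, padding=1, output_padding=1), nn.ReLU())
        self.dec2 = nn.ConvTranspose3d(8, 1, 3, stride=2, padding=1, output_padding=1)

    def forward(self, x):
        f1 = self.enc1(x)                  # level-1 features (finer scale)
        f2 = self.enc2(f1)                 # level-2 features (coarser scale)
        recon = self.dec2(self.dec1(f2))
        return recon, (f1, f2)             # multi-level features from one network

model = CAE3D()
x = torch.randn(4, 1, 32, 16, 16)          # (batch, channel, bands, H, W) dummy patches
recon, feats = model(x)
loss = nn.functional.mse_loss(recon, x)    # unsupervised reconstruction loss, no labels
loss.backward()
```

Because both feature levels fall out of a single forward pass, no extra networks are needed to obtain them, which is the efficiency argument made in the abstract.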


2018 ◽  
Vol 14 (10) ◽  
pp. 155014771880671 ◽  
Author(s):  
Tao Li ◽  
Hai Wang ◽  
Yuan Shao ◽  
Qiang Niu

With the rapid growth of device-free indoor positioning requirements and the convenience of channel state information (CSI) acquisition, research on CSI-based indoor fingerprint positioning is increasingly valued. In this article, a multi-level fingerprinting approach is proposed that is composed of two levels: the first is realized by deep learning, and the second by an optimal-subcarrier filtering method. This CSI-based method is termed multi-level fingerprinting with deep learning. Deep neural networks are applied in the first level, which includes two phases: an offline training phase and an online localization phase. In the offline training phase, the deep neural networks learn the optimal weights; in the online localization phase, the five positions closest to the target location are obtained through forward propagation. The second level then refines the first level's results with the optimal-subcarrier filtering method. At an error threshold of 0.6 m, the positioning accuracy in two common environments reaches 96% and 93.9%, respectively. The evaluation results show that this method outperforms approaches based on received signal strength as well as the support vector machine method, and is also slightly improved over a plain deep learning method.
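The two-level pipeline can be sketched as follows. This is a hedged PyTorch approximation: the network sizes, the 90-subcarrier input, and the variance-based stand-in for the optimal-subcarrier filter are assumptions, not the paper's exact method.

```python
# Sketch of the two-level pipeline: a DNN maps a CSI fingerprint to scores
# over reference positions, then the top-5 candidates are refined by
# comparing only the most stable subcarriers. Sizes are illustrative.
import torch
import torch.nn as nn

NUM_SUBCARRIERS, NUM_POSITIONS = 90, 64

# Level 1: a feed-forward DNN trained offline on labeled CSI fingerprints.
dnn = nn.Sequential(
    nn.Linear(NUM_SUBCARRIERS, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, NUM_POSITIONS),
)

def localize(csi, db_csi, db_xy):
    """Online phase: forward pass, keep the top-5 candidates, then pick the
    candidate whose stored fingerprint best matches on selected subcarriers."""
    scores = dnn(csi)                          # logits over reference positions
    top5 = torch.topk(scores, k=5).indices     # level-1 candidate positions
    # Level 2 (stand-in for optimal-subcarrier filtering): compare only the
    # most stable subcarriers, approximated here by lowest database variance.
    stable = torch.argsort(db_csi.var(dim=0))[:30]
    dists = ((db_csi[top5][:, stable] - csi[stable]) ** 2).sum(dim=1)
    return db_xy[top5[torch.argmin(dists)]]    # refined (x, y) estimate

db_csi = torch.randn(NUM_POSITIONS, NUM_SUBCARRIERS)   # fingerprint database
db_xy = torch.rand(NUM_POSITIONS, 2) * 10.0            # reference coordinates (m)
print(localize(torch.randn(NUM_SUBCARRIERS), db_csi, db_xy))
```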


Sensors ◽  
2021 ◽  
Vol 21 (23) ◽  
pp. 8080
Author(s):  
Ahmed Shaheen ◽  
Umair bin Waheed ◽  
Michael Fehler ◽  
Lubos Sokol ◽  
Sherif Hanafy

Automatic detection of low-magnitude earthquakes has become an increasingly important research topic in recent years due to a sharp increase in induced seismicity around the globe. The detection of low-magnitude seismic events is essential for microseismic monitoring of hydraulic fracturing, carbon capture and storage, and geothermal operations for hazard detection and mitigation. Moreover, the detection of micro-earthquakes is crucial to understanding the underlying mechanisms of larger earthquakes. Various algorithms, including deep learning methods, have been proposed over the years to detect such low-magnitude events. However, there is still a need for improving the robustness of these methods in discriminating between local sources of noise and weak seismic events. In this study, we propose a convolutional neural network (CNN) to detect seismic events from shallow borehole stations in Groningen, the Netherlands. We train a CNN model to detect low-magnitude earthquakes, harnessing the multi-level sensor configuration of the G-network in Groningen. Each G-network station consists of four geophones at depths of 50, 100, 150, and 200 m. Unlike prior deep learning approaches that use 3-component seismic records only at a single sensor level, we use records from the entire borehole as one training example. This allows us to train the CNN model using moveout patterns of the energy traveling across the borehole sensors to discriminate between events originating in the subsurface and local noise arriving from the surface. We compare the prediction accuracy of our trained CNN model to that of the STA/LTA and template matching algorithms on a two-month continuous record. We demonstrate that the CNN model shows significantly better performance than STA/LTA and template matching in detecting new events missing from the catalog and minimizing false detections. Moreover, we find that using the moveout feature allows us to effectively train our CNN model using only a fraction of the data that would be needed otherwise, saving substantial manual labor in preparing training labels. The proposed approach can be easily applied to other microseismic monitoring networks with multi-level sensors.
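A minimal sketch of the borehole-as-one-example idea, assuming PyTorch: the four geophone levels times three components give 12 input channels, so 1D convolutions can pick up the moveout of energy across sensor depths. The window length, sampling rate, and filter counts are illustrative assumptions.

```python
# CNN that ingests the whole borehole as one example: 4 geophone levels x
# 3 components = 12 input channels, letting filters see moveout across depth.
# Window length and layer sizes are illustrative, not the paper's design.
import torch
import torch.nn as nn

class BoreholeCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(12, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.AdaptiveAvgPool1d(1),                 # pool over time
        )
        self.classify = nn.Linear(64, 2)             # subsurface event vs. local noise

    def forward(self, x):                            # x: (batch, 12, samples)
        return self.classify(self.features(x).squeeze(-1))

window = torch.randn(8, 12, 2000)                    # e.g. 10 s windows at 200 Hz (assumed)
logits = BoreholeCNN()(window)                       # (8, 2) class scores
```

An event from depth should show a consistent moveout across the 12 channels, while surface noise arrives top-down, which is the discriminative cue the channel-stacked input makes available to the filters.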


Author(s):  
Arjun Benagatte Channegowda ◽  
H N Prakash

Providing security in biometrics is a major challenge, and considerable research is devoted to this area. Security can be tightened by using more complex systems, for example by using more than one biometric trait for recognition. In this paper, multimodal biometric models are developed to improve person recognition rates, combining physiological and behavioral biometric characteristics: fingerprint and signature traits are used to develop a multimodal recognition system. Histogram of oriented gradients (HOG) features are extracted from both biometric traits, and feature fusion is applied at two levels. Fingerprint and signature features are fused using concatenation, sum, max, min, and product rules at multi-level stages, and the fused features are used to train a deep neural network model. In the proposed work, multi-level feature fusion for multimodal biometrics with a deep learning classifier is used, and results are analyzed by varying the number of hidden neurons and hidden layers. Experiments are carried out on the SDUMLA-HMT fingerprint datasets (Machine Learning and Data Mining Lab, Shandong University) and the MCYT signature datasets (biometric recognition group), and encouraging results were obtained.
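A small Python sketch of the fusion rules named above, using skimage's hog() as a stand-in feature extractor; the image sizes and the length-trimming for the element-wise rules are illustrative assumptions.

```python
# Fusing fingerprint and signature HOG features with concatenation, sum,
# max, min, and product rules before classification. Sizes are illustrative.
import numpy as np
from skimage.feature import hog

def fuse(fp_feat, sig_feat):
    """Element-wise rules need equal lengths; trim to the shorter vector."""
    n = min(len(fp_feat), len(sig_feat))
    a, b = fp_feat[:n], sig_feat[:n]
    return {
        "concat": np.concatenate([fp_feat, sig_feat]),
        "sum": a + b,
        "max": np.maximum(a, b),
        "min": np.minimum(a, b),
        "product": a * b,
    }

fp_img = np.random.rand(128, 128)      # stand-in fingerprint image
sig_img = np.random.rand(128, 128)     # stand-in signature image
fused = fuse(hog(fp_img), hog(sig_img))
for rule, vec in fused.items():
    print(rule, vec.shape)             # each fused vector can feed the DNN classifier
```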


IoT ◽  
2020 ◽  
Vol 1 (2) ◽  
pp. 494-505
Author(s):  
Radu-Casian Mihailescu ◽  
Georgios Kyriakou ◽  
Angelos Papangelis

In this paper we address the problem of automatic sensor composition for servicing human-interpretable high-level tasks. To this end, we introduce multi-level distributed intelligent virtual sensors (multi-level DIVS) as an overlay framework for a given mesh of physical and/or virtual sensors already deployed in the environment. The goal of multi-level DIVS is twofold: (i) to provide a convenient way for the user to specify high-level sensing tasks; (ii) to construct the computational graph that provides the correct output for a given sensing task. For (i) we resort to a conversational user interface, an intuitive and user-friendly way for the user to express the sensing problem as natural language queries, while for (ii) we propose a deep learning approach that establishes the correspondence between the natural language queries and their virtual sensor representation. Finally, we evaluate and demonstrate the feasibility of our approach in the context of a smart city setup.
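As a toy illustration of (ii), the sketch below scores which deployed sensors a natural-language query needs; the bag-of-words encoder, sensor list, vocabulary, and thresholding are invented for illustration and are not the authors' model.

```python
# Toy query-to-virtual-sensor mapping: a text encoder scores which deployed
# sensors a natural-language sensing task needs; selected sensors would then
# be composed into one aggregate (virtual) output. All names are hypothetical.
import torch
import torch.nn as nn

SENSORS = ["temperature", "humidity", "noise", "air_quality"]
VOCAB = {w: i for i, w in enumerate("how warm humid loud clean is the square".split())}

class QueryToSensors(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.EmbeddingBag(len(VOCAB), 16)   # bag-of-words encoder
        self.head = nn.Linear(16, len(SENSORS))        # one relevance score per sensor

    def forward(self, token_ids):
        return torch.sigmoid(self.head(self.embed(token_ids.unsqueeze(0))))

query = "how warm is the square"
ids = torch.tensor([VOCAB[w] for w in query.split()])
scores = QueryToSensors()(ids)         # untrained here, so the scores are random
selected = [s for s, p in zip(SENSORS, scores[0]) if p > 0.5]
print(selected)                        # sensors to compose into the virtual sensor
```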


Author(s):  
Sayan Sakhakarmi ◽  
Jee Woong Park

A traditional structural analysis of scaffolding structures requires loading conditions that are available only at design time, not during operation. Thus, this study proposes a method for automated safety prediction of scaffolds in operation, implementing a divide-and-conquer technique with deep learning. As a test scaffolding, a four-bay, three-story scaffold model was used; its analysis yielded 1411 unique safety cases. To apply deep learning, a test simulation generated 1,540,000 data samples for pre-training and an additional 141,100 for testing. The cases were sub-divided into 18 categories based on failure modes at both the global and local levels, along with combinations of member failures. Accordingly, the divide-and-conquer technique was applied to the 18 categories, each of which was pre-trained by its own neural network (see the sketch below). On the test datasets, the overall accuracy was 99%; for 82.78% of the 1411 safety cases, the prediction model achieved 100% accuracy, which contributed to the high overall figure. In addition, the high precision, recall, and F1 scores for the majority of the safety cases indicate good model performance and a significant improvement over past research on simpler cases, specifically with respect to accuracy and the number of classifications handled. The results thus suggest that the methodology can be reliably applied to the safety assessment of scaffolding systems more complex than those tested in past studies, and the implemented methodology can easily be replicated for other classification problems.
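A hedged sketch of the divide-and-conquer routing, assuming PyTorch: rather than one network over all 1411 safety cases, a router picks one of the 18 failure-mode categories and a per-category expert network identifies the safety case. The input dimensionality, layer widths, and per-category case count are assumptions.

```python
# Divide-and-conquer classification: one router over 18 failure-mode
# categories, plus one small pre-trained expert network per category.
# Input size (one reading per scaffold member) is an assumption.
import torch
import torch.nn as nn

NUM_MEMBERS, NUM_CATEGORIES = 60, 18
CASES_PER_CATEGORY = 80                          # illustrative per-category count

def mlp(out_dim):
    return nn.Sequential(nn.Linear(NUM_MEMBERS, 128), nn.ReLU(),
                         nn.Linear(128, out_dim))

router = mlp(NUM_CATEGORIES)                     # divide: pick the category
experts = nn.ModuleList([mlp(CASES_PER_CATEGORY) # conquer: one net per category
                         for _ in range(NUM_CATEGORIES)])

def predict(measurements):                       # measurements: (NUM_MEMBERS,)
    x = measurements.unsqueeze(0)
    cat = router(x).argmax(dim=1).item()         # dispatch to one expert
    case = experts[cat](x).argmax(dim=1).item()  # safety case within the category
    return cat, case

print(predict(torch.randn(NUM_MEMBERS)))
```

Splitting the label space this way keeps each expert's output head small, which is one plausible reason the per-category networks can reach high accuracy on most safety cases.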

