DensePILAE: a feature reuse pseudoinverse learning algorithm for deep stacked autoencoder

Author(s):  
Jue Wang ◽  
Ping Guo ◽  
Yanjun Li

Abstract Autoencoders have been widely used as a feature learning technique. In many autoencoder-based works, features of the original input are extracted layer by layer through multi-layer nonlinear mappings, and only the features of the last layer are used for classification or regression, so the features of the earlier layers are not used explicitly. The resulting loss of information and waste of computation are obvious. In addition, Internet of Things applications generally require fast training and inference, but stacked autoencoder models are usually trained with the BP algorithm, which suffers from slow convergence. To solve these two problems, this paper proposes a densely connected pseudoinverse learning autoencoder (DensePILAE) from a feature-reuse perspective. The pseudoinverse learning autoencoder (PILAE) extracts features in the form of an analytic solution, without multiple iterations, so the time cost can be greatly reduced. At the same time, the features of all previous layers in the stacked PILAE are combined as the input of the next layer. In this way, the information of all previous layers is not only preserved without loss, but can also be strengthened and refined, so that better features can be learned. Experimental results on 8 data sets from different domains show that the proposed DensePILAE is effective.
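The closed-form layer training and dense feature reuse described above can be sketched in a few lines of NumPy. The layer widths, tanh activation, random projection for the hidden dimension, and ridge term below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def pilae_layer(X, out_dim, reg=1e-3, rng=None):
    """One pseudoinverse learning autoencoder layer: the weights are
    obtained in closed form via a regularized pseudoinverse, with no
    BP iterations."""
    if rng is None:
        rng = np.random.default_rng(0)
    W = rng.standard_normal((X.shape[1], out_dim)) / np.sqrt(X.shape[1])
    H = np.tanh(X @ W)                        # initial hidden representation
    # regularized pseudoinverse solve for the decoder: H @ D ≈ X
    D = np.linalg.solve(H.T @ H + reg * np.eye(out_dim), H.T @ X)
    # reuse the decoder transpose as the refined encoder weight
    return np.tanh(X @ D.T)

def dense_pilae(X, dims):
    """Stacked PILAE with dense connections: every layer receives the
    concatenation of the input and all previous layers' features, so
    no earlier-layer information is discarded."""
    feats = [X]
    for d in dims:
        feats.append(pilae_layer(np.concatenate(feats, axis=1), d))
    return np.concatenate(feats, axis=1)      # reused features of all layers

X = np.random.default_rng(1).standard_normal((32, 16))
Z = dense_pilae(X, [8, 8])
print(Z.shape)  # (32, 32): 16 input + 8 + 8 reused layer features
```

The dense concatenation is what distinguishes DensePILAE from a plain stacked PILAE, where only the last `H` would be kept.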

Entropy ◽  
2020 ◽  
Vol 22 (9) ◽  
pp. 949
Author(s):  
Jiangyi Wang ◽  
Min Liu ◽  
Xinwu Zeng ◽  
Xiaoqiang Hua

Convolutional neural networks perform strongly in many visual tasks because of their hierarchical structure and powerful feature extraction capabilities. The symmetric positive definite (SPD) matrix has attracted attention in visual classification because of its excellent ability to learn proper statistical representations and distinguish samples carrying different information. In this paper, a deep neural network signal detection method based on spectral convolution features is proposed. In this method, local features extracted by a convolutional neural network are used to construct an SPD matrix, and a deep learning algorithm for SPD matrices is used to detect target signals. Feature maps extracted by two kinds of convolutional neural network models are applied in this study. With this method, signal detection becomes a binary classification problem on the samples. To demonstrate the availability and superiority of this method, simulated and semi-physical simulated data sets are used. The results show that, under low signal-to-clutter ratio (SCR), this method can obtain a gain of 0.5–2 dB over the spectral signal detection method based on a deep neural network on both simulated and semi-physical simulated data sets.
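One common way to build an SPD matrix from CNN local features, consistent with the description above, is the regularized channel covariance of a feature map; the feature-map shape and ridge term here are illustrative assumptions.

```python
import numpy as np

def spd_from_features(fmap, eps=1e-6):
    """Build an SPD matrix from a CNN feature map of shape (C, H, W).

    Each spatial location contributes a C-dimensional local feature;
    the channel covariance of these vectors is symmetric positive
    semi-definite, and a small ridge eps*I makes it strictly SPD.
    """
    C = fmap.shape[0]
    V = fmap.reshape(C, -1)                  # C x (H*W) local features
    V = V - V.mean(axis=1, keepdims=True)    # center per channel
    S = (V @ V.T) / (V.shape[1] - 1)         # channel covariance
    return S + eps * np.eye(C)

fmap = np.random.default_rng(0).standard_normal((8, 5, 5))
S = spd_from_features(fmap)
print(np.allclose(S, S.T), np.all(np.linalg.eigvalsh(S) > 0))  # True True
```

The resulting matrices live on the SPD manifold, which is what allows the SPD-specific deep learning algorithm mentioned above to be applied to them.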


Smart Cities ◽  
2020 ◽  
Vol 3 (2) ◽  
pp. 444-455
Author(s):  
Abdul Syafiq Abdull Sukor ◽  
Latifah Munirah Kamarudin ◽  
Ammar Zakaria ◽  
Norasmadi Abdul Rahim ◽  
Sukhairi Sudin ◽  
...  

Device-free localization (DFL) has become a hot topic in the Internet of Things paradigm. Traditional localization methods focus on locating users through attached wearable devices, which raises privacy concerns and causes physical discomfort, especially for users who must wear and activate those devices daily. DFL instead uses the received signal strength indicator (RSSI) to characterize a user's location from their influence on wireless signals. Existing work relies on statistical features extracted from the wireless signals; however, some features may not perform well across different environments and must be manually designed for a specific application. Data processing is therefore an important step towards producing robust input data for the classification process. This paper presents experimental procedures that use a deep learning approach to automatically learn discriminative features and classify the user's location. Extensive experiments performed in an indoor laboratory environment demonstrate that the approach achieves 84.2% accuracy, outperforming other basic machine learning algorithms.
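A minimal sketch of the data-processing step described above, assuming sliding-window segmentation of the RSSI streams with per-link standardization; the window length, step size, and number of wireless links are illustrative choices, not the paper's settings.

```python
import numpy as np

def rssi_windows(rssi, win=50, step=25):
    """Segment a raw RSSI stream of shape (T, n_links) into overlapping
    windows and standardize each window per link, producing model-ready
    input tensors for a deep learning classifier."""
    out = []
    for s in range(0, rssi.shape[0] - win + 1, step):
        w = rssi[s:s + win].astype(float)
        w = (w - w.mean(axis=0)) / (w.std(axis=0) + 1e-8)  # per-link z-score
        out.append(w)
    return np.stack(out)                     # (n_windows, win, n_links)

# synthetic RSSI stream: 200 samples from 4 wireless links, around -60 dBm
stream = np.random.default_rng(0).normal(-60, 5, size=(200, 4))
X = rssi_windows(stream)
print(X.shape)  # (7, 50, 4)
```

Standardizing per window removes the absolute RSSI level, which varies between environments, and keeps only the signal fluctuations a user's presence induces.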


Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3790
Author(s):  
Zachary Choffin ◽  
Nathan Jeong ◽  
Michael Callihan ◽  
Savannah Olmstead ◽  
Edward Sazonov ◽  
...  

Ankle injuries may increase the risk of injury to the joints of the lower extremity and can lead to various impairments in the workplace. The purpose of this study was to predict ankle angles by developing a footwear pressure sensor and applying a machine learning technique. The footwear sensor consisted of six force sensing resistors (FSRs), a microcontroller and a Bluetooth LE chipset on a flexible substrate. Twenty-six subjects were tested in squat and stoop motions, which are common postures when lifting objects from the floor and pose distinct risks to the lifter. The k-nearest neighbor (kNN) machine learning algorithm was used to create a representative model to predict the ankle angles, and a commercial inertial measurement unit (IMU) sensor system was used for validation. The results showed that the proposed footwear pressure sensor could predict the ankle angles with more than 93% accuracy for squat and 87% accuracy for stoop motions. This study confirmed that the proposed plantar sensor system is a promising tool for the prediction of ankle angles and may thus be used to prevent potential injuries while lifting objects in the workplace.
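The kNN prediction step can be sketched as follows; the synthetic linear mapping from six FSR channels to an angle is purely illustrative stand-in data, not the study's measurements.

```python
import numpy as np

def knn_predict_angle(train_X, train_y, x, k=5):
    """Predict an ankle angle from six FSR readings by kNN regression:
    average the angles of the k nearest training samples in sensor space."""
    d = np.linalg.norm(train_X - x, axis=1)  # Euclidean distance in FSR space
    idx = np.argsort(d)[:k]
    return train_y[idx].mean()

rng = np.random.default_rng(0)
# synthetic stand-in data: 6 FSR channels mapped linearly to an angle
train_X = rng.uniform(0.0, 1.0, size=(200, 6))
coef = np.array([10.0, -5.0, 8.0, 3.0, -2.0, 6.0])
train_y = train_X @ coef

pred = knn_predict_angle(train_X, train_y, rng.uniform(0.0, 1.0, 6))
# with k=1, a training point recovers its own angle exactly
print(knn_predict_angle(train_X, train_y, train_X[0], k=1) == train_y[0])  # True
```

kNN is attractive here because it needs no training phase beyond storing the calibration samples, which suits an embedded footwear device.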


Entropy ◽  
2021 ◽  
Vol 23 (7) ◽  
pp. 859
Author(s):  
Abdulaziz O. AlQabbany ◽  
Aqil M. Azmi

We are living in the age of big data, the majority of which is stream data. Real-time processing of this data requires careful consideration from different perspectives. Concept drift, a change in the data's underlying distribution, is a significant issue, especially when learning from data streams, and it requires learners to adapt to dynamic changes. Random forest is an ensemble approach widely used in classical non-streaming settings of machine learning applications, while the Adaptive Random Forest (ARF) is a stream learning algorithm that has shown promising results in terms of its accuracy and ability to deal with various types of drift. The continuity of incoming instances allows their binomial distribution to be approximated by a Poisson(1) distribution. In this study, we propose a mechanism to increase such streaming algorithms' efficiency by focusing on resampling. Our measure, resampling effectiveness (ρ), fuses the two most essential aspects of online learning: accuracy and execution time. We use six different synthetic data sets, each with a different type of drift, to empirically select the parameter λ of the Poisson distribution that yields the best value of ρ. By comparing the standard ARF with its tuned variations, we show that ARF performance can be enhanced by tackling this important aspect. Finally, we present three case studies from different contexts to test our proposed enhancement method and demonstrate its effectiveness in processing large data sets: (a) Amazon customer reviews (written in English), (b) hotel reviews (in Arabic), and (c) real-time aspect-based sentiment analysis of COVID-19-related tweets in the United States during April 2020. The results indicate that the proposed enhancement yields considerable improvement in most situations.
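The Poisson approximation at the heart of online bagging in ARF can be sketched as follows. The exact definition of the resampling-effectiveness measure ρ is given in the paper and is not reproduced here; only the tunable Poisson(λ) weighting is shown.

```python
import numpy as np

def online_bagging_weights(n_instances, lam=1.0, rng=None):
    """Online bagging for stream learners (as used in ARF): each
    incoming instance is given a Poisson(lambda) weight, approximating
    the binomial resampling of batch bagging without storing the
    stream. lambda = 1 is the classical choice; the study tunes lambda
    to trade off accuracy against execution time."""
    if rng is None:
        rng = np.random.default_rng(0)
    return rng.poisson(lam, size=n_instances)

w = online_bagging_weights(100_000, lam=1.0)
print(abs(w.mean() - 1.0) < 0.05)  # empirical mean is close to lambda: True
```

A weight of 0 means a learner skips the instance entirely, so smaller λ directly reduces per-instance work, which is why tuning λ affects execution time as well as accuracy.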


2000 ◽  
Author(s):  
Magdy Mohamed Abdelhameed ◽  
Sabri Cetinkunt

Abstract The cerebellar model articulation controller (CMAC) is a useful neural network learning technique. It was developed two decades ago but still lacks an adequate learning algorithm, especially when used in a hybrid-type controller. This work presents a simulation study examining the performance of a hybrid-type control system based on the conventional learning algorithm of the CMAC neural network; the study showed that the control system is unstable. A new adaptive learning algorithm for a CMAC-based hybrid-type controller is then proposed. The main features of the proposed learning algorithm, as well as the effects of its newly introduced parameters, have been studied extensively via simulation case studies. The simulation results showed that the proposed learning algorithm is robust in stabilizing the control system while preserving all the known advantages of the CMAC neural network. Part II of this work is dedicated to validating the effectiveness of the proposed CMAC learning algorithm experimentally.
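For readers unfamiliar with CMAC, the conventional scheme the paper starts from can be sketched as overlapping tilings trained by an LMS update. The tiling resolution, shift scheme, and learning rate below are illustrative; the paper's proposed adaptive algorithm is not reproduced here.

```python
import numpy as np

class CMAC:
    """Minimal 1-D CMAC: an input activates one tile in each of
    n_layers shifted uniform tilings; the output is the sum of the
    active weights, trained by the classic LMS rule spread over the
    active tiles."""
    def __init__(self, n_tiles=32, n_layers=8, lo=0.0, hi=1.0, lr=0.5):
        self.w = np.zeros((n_layers, n_tiles + 1))
        self.n_tiles, self.n_layers = n_tiles, n_layers
        self.lo, self.hi, self.lr = lo, hi, lr

    def _active(self, x):
        # each layer is a slightly shifted tiling of the input range
        u = (x - self.lo) / (self.hi - self.lo) * self.n_tiles
        return [int(np.clip(u + l / self.n_layers, 0, self.n_tiles))
                for l in range(self.n_layers)]

    def predict(self, x):
        return sum(self.w[l, t] for l, t in enumerate(self._active(x)))

    def train(self, x, target):
        err = target - self.predict(x)
        for l, t in enumerate(self._active(x)):
            self.w[l, t] += self.lr * err / self.n_layers  # LMS update

cmac = CMAC()
for _ in range(300):                           # repeated sweeps over the data
    for x in np.linspace(0.0, 1.0, 101):
        cmac.train(x, np.sin(2 * np.pi * x))
print(abs(cmac.predict(0.25) - 1.0) < 0.1)     # approximates the sine peak: True
```

The locality of the tile activations is what gives CMAC its fast learning, and also what makes the choice of learning algorithm delicate inside a hybrid controller, which is the problem the paper addresses.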


Geophysics ◽  
2019 ◽  
Vol 84 (2) ◽  
pp. R165-R174 ◽  
Author(s):  
Marcelo Jorge Luz Mesquita ◽  
João Carlos Ribeiro Cruz ◽  
German Garabito Callapino

Estimation of an accurate velocity macromodel is an important step in seismic imaging. We have developed an approach based on coherence measurements and finite-offset (FO) beam stacking. The algorithm is an FO common-reflection-surface tomography, which aims to determine the best layered depth-velocity model by finding the model that maximizes a semblance objective function calculated from the amplitudes in common-midpoint (CMP) gathers stacked over a predetermined aperture. We develop the subsurface velocity model with a stack of layers separated by smooth interfaces. The algorithm is applied layer by layer from the top downward in four steps per layer. First, by automatic or manual picking, we estimate the reflection times of events that describe the interfaces in a time-migrated section. Second, we convert these times to depth using the velocity model via application of Dix’s formula and the image rays to the events. Third, by using ray tracing, we calculate kinematic parameters along the central ray and build a paraxial FO traveltime approximation for the FO common-reflection-surface method. Finally, starting from CMP gathers, we calculate the semblance of the selected events using this paraxial traveltime approximation. After repeating this algorithm for all selected CMP gathers, we use the mean semblance values as an objective function for the target layer. When this coherence measure is maximized, the model is accepted and the process is completed. Otherwise, the process restarts from step two with the updated velocity model. Because the inverse problem we are solving is nonlinear, we use very fast simulated annealing to search the velocity parameters in the target layers. We test the method on synthetic and real data sets to study its use and advantages.
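The semblance objective maximized in the final step can be sketched as the standard semblance coherence of a CMP gather; the window handling here is simplified relative to the paraxial-traveltime alignment the method actually uses.

```python
import numpy as np

def semblance(gather, window):
    """Semblance coherence of a CMP gather over a time window:
    energy of the stacked trace divided by the total trace energy.
    gather has shape (n_traces, n_samples); window is a sample slice.
    Values range from ~1/n_traces (incoherent) to 1 (identical traces)."""
    d = gather[:, window]
    num = np.sum(d.sum(axis=0) ** 2)         # stacked-trace energy
    den = d.shape[0] * np.sum(d ** 2)        # summed individual energies
    return num / den if den > 0 else 0.0

rng = np.random.default_rng(0)
wavelet = np.sin(np.linspace(0, np.pi, 20))
coherent = np.tile(wavelet, (10, 1))         # identical event on all traces
noise = rng.standard_normal((10, 20))        # no coherent event
print(round(semblance(coherent, slice(None)), 6))  # 1.0
print(semblance(noise, slice(None)) < 0.5)         # low for random traces: True
```

In the tomography described above, this coherence is evaluated along the paraxial FO traveltime curves and averaged over the selected CMP gathers to form the objective function for the simulated-annealing search.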


2010 ◽  
Vol 108-111 ◽  
pp. 1070-1074
Author(s):  
Li Ying Wang ◽  
Wei Guo Zhao ◽  
Jian Min Hou

The cascade correlation (CC) algorithm, the CC network structure and the CC network weight-learning algorithm are introduced. Based on the operation data of the Wanjiazhai hydropower station and taking pressure fluctuation into account, a network model of the vibration characteristics is established using the CC algorithm, and the applications of the CC and BP algorithms to the vibration characteristics of the turbine are compared. The results show that the CC algorithm outperforms the BP neural network; the results can be used in the optimal operation of hydropower stations and have practical significance.
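The candidate-recruitment step that distinguishes cascade correlation from BP can be sketched as follows: candidate hidden units are scored by the magnitude of the covariance between their activation and the network's residual error, and the best one is installed. The random candidate pool and synthetic residual below are illustrative.

```python
import numpy as np

def candidate_score(h, resid):
    """Cascade-correlation candidate criterion: |covariance| between a
    candidate unit's activation h and the current residual error."""
    return abs(np.sum((h - h.mean()) * (resid - resid.mean())))

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
resid = X[:, 0] - 0.5 * X[:, 1]              # stand-in residual error

# score a pool of random tanh candidate units; keep the most correlated
cands = [np.tanh(X @ rng.standard_normal(3)) for _ in range(8)]
scores = [candidate_score(h, resid) for h in cands]
best = cands[int(np.argmax(scores))]
print(candidate_score(best, resid) == max(scores))  # True
```

In full CC training, each installed unit's inputs are frozen and the output weights are retrained, so the network grows one hidden unit at a time instead of fixing its topology in advance as BP does.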


2021 ◽  
Vol 22 (Supplement_2) ◽  
Author(s):  
F Ghanbari ◽  
T Joyce ◽  
S Kozerke ◽  
AI Guaricci ◽  
PG Masci ◽  
...  

Abstract Funding Acknowledgements Type of funding sources: Other. Main funding source(s): J. Schwitter receives research support from “Bayer Schweiz AG”. C.N.C. received a grant from Siemens. Gianluca Pontone received institutional fees from General Electric, Bracco, Heartflow, Medtronic, and Bayer. U.J.S. received grants from Astellas, Bayer, and General Electric. This work was supported by the Italian Ministry of Health, Rome, Italy (RC 2017 R659/17-CCM698), and by Gyrotools, Zurich, Switzerland. Background Late gadolinium enhancement (LGE) scar quantification is generally recognized as an accurate and reproducible technique, but it is observer-dependent and time consuming. Machine learning (ML) potentially offers a solution to this problem. Purpose To develop and validate an ML algorithm for scar quantification that fully avoids observer variability, and to apply this algorithm to the prospective international multicentre Derivate cohort. Method The Derivate Registry collected heart failure patients with LV ejection fraction <50% in 20 European and US centres. In the post-myocardial-infarction patients (n = 689), the quality of the LGE short-axis breath-hold images was graded (good, acceptable, sufficient, borderline, poor, excluded) and ground truth (GT) was produced (endo-epicardial contours, 2 remote reference regions, artefact elimination) to determine the mass of non-infarcted myocardium and of dense (≥5 SD above mean-remote) and non-dense scar (>2 SD to <5 SD above mean-remote). Data were divided into a learning set (total n = 573; training: n = 289; testing: n = 284) and a validation set (n = 116). A Ternaus network (loss function = average of Dice and binary cross-entropy) produced 4 outputs (initial prediction, test-time augmentation (TTA), threshold-based prediction (TB), and TTA + TB) representing normal myocardium, non-dense scar, and dense scar (Figure 1). Outputs were evaluated by Dice metrics, Bland-Altman analysis, and correlations.
Results  In the validation and test data sets, both not used for training, the dense scar GT was 20.8 ± 9.6% and 21.9 ± 13.3% of LV mass, respectively. The TTA-network yielded the best results with small biases vs GT (-2.2 ± 6.1%, p < 0.02; -1.7 ± 6.0%, p < 0.003, respectively) and 95%CI vs GT in the range of inter-human comparisons, i.e. TTA yielded SD of the differences vs GT in the validation and test data of 6.1 and 6.0 percentage points (%p), respectively (Fig 2), which was comparable to the 7.7%p for the inter-observer comparison (n = 40). For non-dense scar, TTA performance was similar with small biases (-1.9 ± 8.6%, p < 0.0005, -1.4 ± 8.2%, p < 0.0001, in the validation and test sets, respectively, GT 39.2 ± 13.8% and 42.1 ± 14.2%) and acceptable 95%CI with SD of the differences of 8.6 and 8.2%p for TTA vs GT, respectively, and 9.3%p for inter-observer.  Conclusions  In the large Derivate cohort from 20 centres, performance of the presented ML-algorithm to quantify dense and non-dense scar fully automatically is comparable to that of experienced humans with small bias and acceptable 95%-CI. Such a tool could facilitate scar quantification in clinical routine as it eliminates human observer variability and can handle large data sets.
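The SD-threshold ground-truth rule quoted above (dense scar ≥5 SD above mean-remote, non-dense between 2 and 5 SD) can be written down directly; the intensity values below are synthetic, not patient data.

```python
import numpy as np

def classify_scar(myo, remote_mean, remote_sd):
    """Threshold-based scar classification as used for the ground truth:
    dense scar >= mean-remote + 5*SD, non-dense scar between +2*SD and
    +5*SD, normal myocardium below +2*SD. myo is an array of LGE signal
    intensities inside the myocardial contours."""
    t2 = remote_mean + 2 * remote_sd
    t5 = remote_mean + 5 * remote_sd
    dense = myo >= t5
    non_dense = (myo > t2) & (myo < t5)
    return dense, non_dense

# synthetic intensities; remote myocardium: mean 100, SD 20 -> t2=140, t5=200
myo = np.array([100.0, 130.0, 160.0, 210.0, 300.0])
dense, nd = classify_scar(myo, remote_mean=100.0, remote_sd=20.0)
print(dense.tolist())  # [False, False, False, True, True]
print(nd.tolist())     # [False, False, True, False, False]
```

The ML pipeline replaces the manual contouring and remote-region selection that feed this rule, which is where the observer variability enters.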


Author(s):  
Xu-Qin Li ◽  
Fei Han ◽  
Tat-Ming Lok ◽  
Michael R. Lyu ◽  
Guang-Bin Huang