Least square based ensemble deep learning for inertia tensor identification of combined spacecraft

2020 ◽  
Vol 106 ◽  
pp. 106189
Author(s):  
Weimeng Chu ◽  
Shunan Wu ◽  
Zhigang Wu ◽  
Yuefang Wang


2021 ◽  
Vol 13 (2) ◽  
pp. 274
Author(s):  
Guobiao Yao ◽  
Alper Yilmaz ◽  
Li Zhang ◽  
Fei Meng ◽  
Haibin Ai ◽  
...  

Existing stereo matching algorithms produce a large number of false positive matches, or only a few true positives, across oblique stereo images with a large baseline. This undesired result stems from the complex perspective deformation and radiometric distortion across the images. To address this problem, we propose a novel affine-invariant feature matching algorithm with subpixel accuracy based on an end-to-end convolutional neural network (CNN). In our method, we adopt and modify a Hessian affine network, which we refer to as IHesAffNet, to obtain affine-invariant Hessian regions within a deep learning framework. To improve the correlation between corresponding features, we introduce an empirical weighted loss function (EWLF) based on negative samples selected by K nearest neighbors, and then generate highly discriminative deep learning-based descriptors with our multiple hard network structure (MTHardNets). Following this step, the conjugate features are produced using the Euclidean distance ratio as the matching metric, and the accuracy of the matches is optimized through deep learning transform based least square matching (DLT-LSM). Finally, experiments on large-baseline oblique stereo images acquired by ground close-range and unmanned aerial vehicle (UAV) platforms verify the effectiveness of the proposed approach, and comprehensive comparisons demonstrate that our matching algorithm outperforms state-of-the-art methods in terms of accuracy, distribution, and correct ratio. The main contributions of this article are: (i) the proposed MTHardNets generate high-quality descriptors; and (ii) IHesAffNet produces substantial affine-invariant corresponding features with reliable transform parameters.
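The Euclidean distance-ratio metric mentioned in the abstract is the classic nearest-to-second-nearest ratio test. A minimal sketch, assuming descriptors are rows of numpy arrays; the function name and threshold are illustrative, not from the paper:

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """Accept a match only if the nearest neighbour in desc_b is clearly
    closer than the second nearest (Euclidean distance ratio test)."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        nearest, second = order[0], order[1]
        if dists[nearest] < ratio * dists[second]:
            matches.append((i, int(nearest)))
    return matches
```

A lower ratio rejects more ambiguous matches at the cost of fewer correspondences.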


Author(s):  
Weimeng Chu ◽  
Shunan Wu ◽  
Xiao He ◽  
Yufei Liu ◽  
Zhigang Wu

The identification accuracy of the inertia tensor of a combined spacecraft, composed of a servicing spacecraft and a captured target, is easily affected by measurement noise in the angular rate. Because the operating environment of a combined spacecraft changes frequently in space, this measurement noise can be very complex. In this paper, an inertia tensor identification approach based on deep learning is proposed to improve the ability to identify the inertia tensor of a combined spacecraft in the presence of complex measurement noise. A deep neural network model for identification is constructed and trained with sufficient training data and a designed learning strategy. To verify the identification performance of the proposed deep neural network model, two testing sets with different levels of measurement noise are used for simulation tests. Comparisons are also made among the proposed deep neural network model, the recursive least squares identification method, and a traditional deep neural network model. The results show that the proposed model yields more accurate and stable identification of the inertia tensor of a combined spacecraft in changeable and complex operating environments.
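The recursive least squares baseline the abstract compares against follows a standard update for a linear-in-parameters model y ≈ φᵀθ. A minimal sketch under that generic formulation (all names and shapes are illustrative, not the paper's spacecraft-specific regressor):

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=1.0):
    """One recursive least-squares update for the linear model y ≈ phi·theta.

    theta : current parameter estimate, shape (n,)
    P     : covariance-like matrix, shape (n, n)
    phi   : regressor vector, shape (n,)
    y     : scalar measurement
    lam   : forgetting factor (1.0 = no forgetting)
    """
    phi = phi.reshape(-1, 1)
    gain = P @ phi / (lam + (phi.T @ P @ phi).item())
    err = y - (phi.T @ theta.reshape(-1, 1)).item()
    theta = theta + (gain * err).ravel()
    P = (P - gain @ phi.T @ P) / lam
    return theta, P
```

With noise-free data the estimate converges to the true parameters after a few informative samples; with the complex noise the abstract describes, this sensitivity is exactly what motivates the learned alternative.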


Sensors ◽  
2020 ◽  
Vol 20 (21) ◽  
pp. 6403
Author(s):  
Wenxiang Li ◽  
Chao Kang ◽  
Hengrui Guan ◽  
Shen Huang ◽  
Jinbiao Zhao ◽  
...  

The correction of wavefront aberration plays a vital role in active optics. Traditional correction algorithms based on the deformation of the mirror cannot effectively deal with disturbances in the real system. In this study, a new algorithm called the deep learning correction algorithm (DLCA) is proposed to compensate for wavefront aberrations and improve correction capability. The DLCA consists of an actor network and a strategy unit. The actor network establishes a mapping of active optics systems with disturbances and provides a search basis for the strategy unit, which increases the search speed; the strategy unit optimizes the correction force, which improves the accuracy of the DLCA. Notably, a heuristic search algorithm is applied to reduce the search time in the strategy unit. Simulation results show that the DLCA effectively improves correction capability and has good adaptability. Compared with the least square algorithm (LSA), the proposed algorithm performs better, indicating that the DLCA is more accurate and can be used in active optics. Moreover, the proposed approach offers a new direction for further research on active optics.
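The least square algorithm (LSA) baseline in active optics typically solves for actuator forces from an influence-function matrix that maps forces to surface deformation. A minimal sketch of that idea, with hypothetical names and no disturbance model (which is precisely what the DLCA adds):

```python
import numpy as np

def ls_correction_forces(influence, wavefront_error):
    """Least-squares actuator forces f minimizing ||influence @ f + wavefront_error||.

    influence       : (m, k) matrix, deformation per unit force at m sample points
    wavefront_error : (m,) measured wavefront error to cancel
    """
    f, *_ = np.linalg.lstsq(influence, -wavefront_error, rcond=None)
    return f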


2021 ◽  
Author(s):  
Sayan Kahali ◽  
Satya V.V.N. Kothapalli ◽  
Xiaojian Xu ◽  
Ulugbek S Kamilov ◽  
Dmitriy A Yablonskiy

Purpose: To introduce a Deep-Learning-Based Accelerated and Noise-Suppressed Estimation (DANSE) method for reconstructing quantitative maps of cellular-specific and hemodynamic-specific biological tissue parameters from Gradient-Recalled-Echo (GRE) MRI data with multiple gradient echoes. Methods: The DANSE method adapts a supervised learning paradigm to train a convolutional neural network for robust estimation of these maps, free from the adverse effects of macroscopic (B0) magnetic field inhomogeneities, directly from GRE magnitude images without utilizing phase images. The corresponding ground-truth maps were generated by voxel-by-voxel fitting of a previously developed biophysical quantitative GRE (qGRE) model, accounting for tissue, hemodynamic, and B0-inhomogeneity contributions to the multi-echo GRE signal, using a nonlinear least squares (NLLS) algorithm. Results: We show that the DANSE model efficiently estimates the aforementioned brain maps and preserves all features of the NLLS approach, with significant improvements in noise suppression and computation speed (from many hours to seconds). The noise-suppression feature of DANSE is especially prominent for data with SNR characteristic of typical GRE data (SNR ~ 50), where DANSE-generated maps had errors three times smaller than those of the NLLS method. Conclusions: The DANSE method enables fast reconstruction of magnetic-field-inhomogeneity-free, noise-suppressed quantitative qGRE brain maps. It does not require any information about field inhomogeneities during application; it exploits spatial patterns in the qGRE MRI data and previously gained knowledge from the biophysical model, producing clean brain maps even in environments with high noise levels. These features, along with fast computation, can lead to broad qGRE clinical and research applications.
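The qGRE model fitted above is considerably richer than a single exponential, but the core of a voxel-wise least-squares fit of multi-echo GRE magnitude data can be illustrated with a simplified monoexponential decay, S(TE) = S0 · exp(−R2* · TE). Taking the log turns it into a linear least-squares problem; all names here are illustrative:

```python
import numpy as np

def fit_monoexp_decay(te, signal):
    """Log-linear least-squares fit of S(TE) = S0 * exp(-R2star * TE).

    A simplified stand-in for the voxel-wise NLLS fit described above.
    te     : echo times, shape (n,)
    signal : noiseless-positive magnitudes, shape (n,)
    """
    A = np.stack([np.ones_like(te), -te], axis=1)
    coef, *_ = np.linalg.lstsq(A, np.log(signal), rcond=None)
    return np.exp(coef[0]), coef[1]   # (S0, R2star)
```

Repeating such a fit for every voxel is what makes the NLLS pipeline take hours, versus seconds for a trained network's forward pass.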


2020 ◽  
Vol 27 (2) ◽  
pp. 477-485
Author(s):  
Yixing Huang ◽  
Shengxiang Wang ◽  
Yong Guan ◽  
Andreas Maier

In transmission X-ray microscopy (TXM) systems, the rotation of a scanned sample might be restricted to a limited angular range to avoid collision with other system parts or high attenuation at certain tilting angles. Image reconstruction from such limited-angle data suffers from artifacts caused by missing data. In this work, deep learning is applied to limited-angle reconstruction in TXMs for the first time. Given the challenge of obtaining sufficient real data for training, training a deep neural network from synthetic data is investigated. In particular, U-Net, the state-of-the-art neural network in biomedical imaging, is trained on synthetic ellipsoid data and multi-category data to reduce artifacts in filtered back-projection (FBP) reconstruction images. The proposed method is evaluated on synthetic data and real scanned chlorella data in 100° limited-angle tomography. For synthetic test data, U-Net significantly reduces the root-mean-square error (RMSE) from 2.55 × 10⁻³ µm⁻¹ in the FBP reconstruction to 1.21 × 10⁻³ µm⁻¹ in the U-Net reconstruction, and also improves the structural similarity (SSIM) index from 0.625 to 0.920. With penalized weighted least-squares denoising of the measured projections, the RMSE and SSIM are further improved to 1.16 × 10⁻³ µm⁻¹ and 0.932, respectively. For real test data, the proposed method remarkably improves the 3D visualization of the subcellular structures in the chlorella cell, which indicates its value for nanoscale imaging in biology, nanoscience, and materials science.
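The two metrics reported above can be computed as follows. Note this is a global (single-window) SSIM, simpler than the sliding-window implementation usually used in practice; constants follow the common defaults:

```python
import numpy as np

def rmse(x, y):
    """Root-mean-square error between two images."""
    return float(np.sqrt(np.mean((x - y) ** 2)))

def global_ssim(x, y, data_range=1.0):
    """Single-window SSIM over the whole image (no sliding window)."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

An identical pair of images gives RMSE 0 and SSIM 1; the windowed variant additionally localizes structural differences.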


2020 ◽  
Author(s):  
Yuan-I Chen ◽  
Yin-Jui Chang ◽  
Shih-Chu Liao ◽  
Trung Duc Nguyen ◽  
Jianchen Yang ◽  
...  

Fluorescence lifetime imaging microscopy (FLIM) is a powerful tool to quantify molecular compositions and study molecular states in the complex cellular environment, as the lifetime readings are not biased by fluorophore concentration or excitation power. However, current methods to generate FLIM images are either computationally intensive or unreliable when the number of photons acquired at each pixel is low. Here we introduce a new deep learning-based method termed flimGANE (fluorescence lifetime imaging based on Generative Adversarial Network Estimation) that can rapidly generate accurate and high-quality FLIM images even under photon-starved conditions. We demonstrate that our model is not only 258 times faster than the most popular time-domain least-square estimation (TD_LSE) method but also provides more accurate analysis in barcode identification, cellular structure visualization, Förster resonance energy transfer characterization, and metabolic state analysis. With its advantages in speed and reliability, flimGANE is particularly useful in fundamental biological research and clinical applications where ultrafast analysis is critical.
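The TD_LSE baseline fits each pixel's photon-arrival histogram with an exponential decay. A minimal sketch of the single-exponential case, counts ≈ A · exp(−t/τ), solved in log space (names illustrative; real TD_LSE implementations iterate a nonlinear fit per pixel, which is why they are slow):

```python
import numpy as np

def lsq_lifetime(t, counts):
    """Estimate fluorescence lifetime tau from a decay histogram by
    least-squares fitting counts ≈ A * exp(-t / tau) in log space."""
    A = np.stack([np.ones_like(t), -t], axis=1)
    coef, *_ = np.linalg.lstsq(A, np.log(counts), rcond=None)
    return 1.0 / coef[1]   # tau, in the units of t
```

With few photons per pixel the histogram is noisy and this estimate becomes unreliable, which is the failure mode flimGANE targets.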


2018 ◽  
Vol 4 ◽  
Author(s):  
Dipanjan Ghosh ◽  
Andrew Olewnik ◽  
Kemper Lewis

A critical task in product design is mapping information from the consumer space to the design space. This process largely depends on the designer to identify and relate psychological and consumer-level factors to engineered product attributes. Current methodologies therefore lack a provision to test a designer's cognitive reasoning and may introduce bias through the mapping process. Prior work on Cyber-Empathic Design (CED) supports this mapping by relating user-product interaction data from embedded sensors to psychological constructs. To understand consumer perceptions, a network of psychological constructs is developed using Structural Equation Modeling for parameter estimation and hypothesis testing, making the framework falsifiable in nature. The focus of this technical brief is on automating CED through unsupervised deep learning to extract features from raw data. Additionally, Partial Least Squares Structural Equation Modeling is used with the extracted sensor features as inputs. To demonstrate the effectiveness of the approach, a case study involving sensor-integrated shoes compares three models: a survey-only model (no sensor data), the existing CED approach with manually extracted sensor features, and the proposed deep learning-based CED approach. The deep learning-based approach results in improved model fit.
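Full PLS-SEM involves iterating over a whole network of latent constructs, but the core partial least squares step is easy to sketch: extract a latent component whose scores have maximal covariance with the response. A minimal first-component sketch (names illustrative, not the CED framework's constructs):

```python
import numpy as np

def pls_first_component(X, y):
    """First partial-least-squares component: unit-norm weight vector w in
    the direction of maximal covariance between the X-scores X @ w and y.

    X : (n, p) centered feature matrix (e.g. extracted sensor features)
    y : (n,) centered response
    """
    w = X.T @ y
    w = w / np.linalg.norm(w)
    return w, X @ w   # weights, latent scores
```

Subsequent components are obtained the same way after deflating X by the fitted scores; in PLS-SEM these scores stand in for latent psychological constructs.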


Diagnostics ◽  
2020 ◽  
Vol 10 (8) ◽  
pp. 565 ◽  
Author(s):  
Muhammad Attique Khan ◽  
Imran Ashraf ◽  
Majed Alhaisoni ◽  
Robertas Damaševičius ◽  
Rafal Scherer ◽  
...  

Manual identification of brain tumors is an error-prone and tedious process for radiologists; therefore, it is crucial to adopt an automated system. Binary classification, such as malignant versus benign, is relatively trivial, whereas multimodal brain tumor classification (T1, T2, T1CE, and FLAIR) is a challenging task for radiologists. Here, we present an automated multimodal classification method using deep learning for brain tumor type classification. The proposed method consists of five core steps. In the first step, linear contrast stretching is employed using edge-based histogram equalization and the discrete cosine transform (DCT). In the second step, deep learning feature extraction is performed: using transfer learning, two pre-trained convolutional neural network (CNN) models, VGG16 and VGG19, extract features. In the third step, a correntropy-based joint learning approach is implemented along with an extreme learning machine (ELM) to select the best features. In the fourth step, the partial least squares (PLS)-based robust covariant features are fused into one matrix. Finally, the combined matrix is fed to the ELM for classification. The proposed method was validated on the BraTS datasets, achieving accuracies of 97.8%, 96.9%, and 92.5% on BraTS2015, BraTS2017, and BraTS2018, respectively.
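The extreme learning machine used in the final step is a single-hidden-layer network whose input weights are random and fixed, so only the output weights need solving, in closed form by least squares. A minimal sketch (sizes and names illustrative, not the paper's configuration):

```python
import numpy as np

def elm_fit(X, Y, n_hidden=64, seed=0):
    """Extreme learning machine: random fixed hidden layer, output
    weights solved in closed form by least squares."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # fixed random input weights
    b = rng.standard_normal(n_hidden)                # fixed random biases
    H = np.tanh(X @ W + b)                           # hidden activations
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)     # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

Because training reduces to one linear solve, ELMs are fast to fit, which suits their role here as both a feature selector companion and the final classifier.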

