Multi-Modality Non-rigid Image Registration Using Local Similarity Estimations

Author(s):  
Peter Rogelj ◽  
Wassim El-Hajj-Chehade

In this study, we focus on improving the efficiency and accuracy of non-rigid multi-modality registration of medical images. In this regard, we analyze the potential of the point similarity measurement approach as an alternative to global computation of mutual information (MI), which remains the most renowned multi-modality similarity measure. The improvement capabilities are illustrated using the popular B-spline transformation model. The proposed solution combines three related improvements over the most straightforward implementation: efficient computation of the voxel displacement field, local estimation of similarity, and use of a static image intensity dependence estimate. Five image registration prototypes were implemented to show the contribution and interdependence of the proposed improvements. When all the proposed improvements are applied, a significant reduction of computational cost and increased accuracy are obtained. The concept offers additional improvement opportunities by incorporating prior knowledge and machine learning techniques into the static intensity dependence estimation.
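The global MI baseline that point similarity measures aim to replace can be sketched in a few lines; this is an illustrative NumPy version with an assumed bin count, not the authors' implementation:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Global MI between two images via their joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of image b
    nz = pxy > 0                          # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
img = rng.random((64, 64))
mi_self = mutual_information(img, img)                     # strong dependence
mi_noise = mutual_information(img, rng.random((64, 64)))   # near independence
```

Point similarity approaches evaluate per-voxel contributions instead of this single global score, which is what makes the local similarity estimation described above possible.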

Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1511
Author(s):  
Saeed Mian Qaisar ◽  
Alaeddine Mihoub ◽  
Moez Krichen ◽  
Humaira Nisar

The usage of wearable gadgets is growing in cloud-based health monitoring systems. Signal compression and computational and power efficiency play an imperative part in this scenario. In this context, we propose an efficient method for the diagnosis of cardiovascular diseases based on electrocardiogram (ECG) signals. The method combines multirate processing, wavelet decomposition, frequency content-based subband coefficient selection, and machine learning techniques. Multirate processing and feature selection are used to reduce the amount of information processed, thus reducing the computational complexity of the proposed system relative to equivalent fixed-rate solutions. Frequency content-dependent subband coefficient selection enhances the compression gain and reduces the transmission activity and the computational cost of the subsequent cloud-based classification. We used the MIT-BIH dataset for our experiments. To avoid overfitting and bias, the performance of the considered classifiers is studied using five-fold cross-validation (5CV) and a novel partial blind protocol. The designed method achieves more than a 12-fold computational gain while assuring appropriate signal reconstruction. The compression gain is 13-fold compared to fixed-rate counterparts, and the highest classification accuracies are 97.06% and 92.08% for the 5CV and partial blind cases, respectively. The results suggest the feasibility of detecting cardiac arrhythmias using the proposed approach.
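The frequency content-based selection step can be illustrated with a simplified FFT subband sketch (the paper works on wavelet subbands; the band count and number of kept bands here are assumptions for illustration):

```python
import numpy as np

def select_subbands(signal, n_bands=8, keep=3):
    """Split the power spectrum into equal-width subbands and keep the
    `keep` highest-energy bands (stand-in for wavelet coefficient selection)."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    bands = np.array_split(spectrum, n_bands)
    energy = np.array([band.sum() for band in bands])
    kept = sorted(np.argsort(energy)[::-1][:keep].tolist())
    return kept, energy

t = np.linspace(0, 1, 512, endpoint=False)
ecg_like = np.sin(2 * np.pi * 5 * t) + 0.1 * np.sin(2 * np.pi * 60 * t)
kept, energy = select_subbands(ecg_like)   # low-frequency bands dominate
```

Transmitting only the kept bands is what yields the compression gain: the discarded subbands carry little diagnostic energy.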


Author(s):  
Paul Aljabar ◽  
Robin Wolz ◽  
Daniel Rueckert

The term manifold learning encompasses a class of machine learning techniques that convert data from a high- to a lower-dimensional representation while respecting the intrinsic geometry of the data. The intuition underlying the use of manifold learning in the context of image analysis is that, while each image may be viewed as a single point in a very high-dimensional space, a set of such points for a population of images may be well represented by a sub-manifold of the space that is likely to be non-linear and of significantly lower dimension. Recently, manifold learning techniques have begun to be applied to the field of medical image analysis. This chapter reviews the most popular manifold learning techniques, such as Multi-Dimensional Scaling (MDS), Isomap, Locally Linear Embedding, and Laplacian eigenmaps. It also demonstrates how these techniques can be used for image registration, segmentation, and biomarker discovery from medical images.
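As a concrete example of one of the reviewed techniques, classical MDS can be implemented directly from a matrix of squared pairwise distances (a minimal NumPy sketch, not tied to any particular medical imaging pipeline):

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical MDS: embed points given a squared-distance matrix D by
    eigendecomposing the double-centred Gram matrix."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centring matrix
    B = -0.5 * J @ D @ J                  # Gram matrix of the embedding
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]         # top-k eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

# Points on a line embed into 1-D with pairwise distances preserved.
pts = np.array([[0.0], [1.0], [3.0]])
D = (pts - pts.T) ** 2                    # squared pairwise distances
emb = classical_mds(D, k=1)
```

For Euclidean distances this recovers the original coordinates up to rotation and sign; Isomap follows the same recipe after replacing D with geodesic distances along a neighbourhood graph.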


2021 ◽  
Author(s):  
Thiago Abdo ◽  
Fabiano Silva

The purpose of this paper is to analyze the use of different machine learning approaches and algorithms to be integrated as automated assistance in a tool that aids the creation of new annotated datasets. We evaluate how they scale in an environment without dedicated machine learning hardware. In particular, we study the impact on a dataset with few examples and on one that is under construction. We experiment with a deep learning algorithm (BERT) and with classical learning algorithms that have a lower computational cost (Word2Vec and GloVe combined with RF and SVM). Our experiments show that deep learning algorithms have a performance advantage over classical techniques. However, deep learning algorithms have a high computational cost, making them inadequate for an environment with reduced hardware resources. We conduct simulations using active and iterative machine learning techniques to assist the creation of new datasets, employing the classical learning algorithms because of their lower computational cost. The knowledge gathered through our experimental evaluation aims to support the creation of a tool for building new text datasets.
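A minimal margin-based uncertainty-sampling step, the core of an active learning loop, might look as follows (the nearest-centroid classifier and all data are invented for illustration, not the paper's setup):

```python
import numpy as np

def uncertainty_sampling(X_pool, centroids, batch=2):
    """Return indices of the pool items with the smallest margin between
    their two nearest class centroids, i.e. the most uncertain ones
    under a toy nearest-centroid classifier."""
    dists = np.linalg.norm(X_pool[:, None, :] - centroids[None, :, :], axis=2)
    dists.sort(axis=1)
    margin = dists[:, 1] - dists[:, 0]   # small margin = near the boundary
    return np.argsort(margin)[:batch]

rng = np.random.default_rng(1)
centroids = np.array([[0.0, 0.0], [4.0, 0.0]])           # two invented classes
X_pool = rng.normal(scale=1.5, size=(50, 2)) + np.array([2.0, 0.0])
picked = uncertainty_sampling(X_pool, centroids)         # send these to the annotator
```

The selected items are the ones a human annotator labels next; the classifier is then retrained and the loop repeats, which is why a low per-iteration computational cost matters.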


2020 ◽  
Vol 10 (19) ◽  
pp. 6896
Author(s):  
Paloma Tirado-Martin ◽  
Judith Liu-Jimenez ◽  
Jorge Sanchez-Casanova ◽  
Raul Sanchez-Reillo

Currently, machine learning techniques are successfully applied in biometrics, and in Electrocardiogram (ECG) biometrics specifically. However, not many works deal with different physiological states in the user, which can produce significant heart rate variations, a key issue when working with ECG biometrics. Machine learning techniques simplify the feature extraction process, which can sometimes be reduced to a fixed segmentation. The database applied here includes visits taken on two different days and under three different conditions (sitting down, standing up, and after exercise), which is not common in current public databases. These characteristics allow studying differences among users under different scenarios that may affect the pattern in the acquired data. A Multilayer Perceptron (MLP) is used as a classifier to form a baseline, as it has a simple structure that has provided good results in the state of the art. This work studies its behavior in ECG verification using QRS complexes, finding its best hyperparameter configuration through tuning. The final performance is calculated considering different visits for enrolment and verification. Differentiation of the QRS complexes is also tested, as it is already required for detection, proving that applying a simple first differentiation gives good results in comparison to similar state-of-the-art works. Moreover, it also reduces the computational cost by avoiding complex transformations and using only one type of signal. When applying different numbers of complexes, the best results are obtained with 100 and 187 complexes in enrolment, yielding Equal Error Rates (EER) that range between 2.79–4.95% and 2.69–4.71%, respectively.
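The first differentiation applied to QRS complexes is computationally trivial; a hedged sketch with a synthetic R-wave follows (the segment length and normalisation are assumptions, not the paper's exact preprocessing):

```python
import numpy as np

def qrs_feature(segment):
    """First differentiation of a fixed-length QRS segment, normalised
    in amplitude (a sketch, not the paper's exact pipeline)."""
    d = np.diff(segment)
    return d / (np.linalg.norm(d) + 1e-12)

t = np.linspace(-1, 1, 64)
qrs = np.exp(-(t / 0.1) ** 2)                        # toy R-wave
enrol = qrs_feature(qrs)
genuine = qrs_feature(qrs + 0.01 * np.sin(5 * t))    # same user, slight variation
impostor = qrs_feature(np.roll(qrs, 10))             # different beat morphology
```

A simple correlation of the differentiated segments already separates the genuine beat from the impostor one, which is the kind of cheap, fixed feature the MLP baseline consumes.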


Sensors ◽  
2020 ◽  
Vol 20 (7) ◽  
pp. 1856 ◽  
Author(s):  
Hendrio Bragança ◽  
Juan G. Colonna ◽  
Wesllen Sousa Lima ◽  
Eduardo Souto

Smartphones have emerged as a revolutionary technology for monitoring everyday life, and they have played an important role in Human Activity Recognition (HAR) due to their ubiquity. The sensors embedded in these devices allow human behaviors to be recognized using machine learning techniques. However, not all solutions are feasible for implementation on smartphones, mainly because of their high computational cost. In this context, the proposed method, called HAR-SR, introduces information theory quantifiers as new features extracted from sensor data to create simple activity classification models, thereby increasing efficiency in terms of computational cost. Three public databases (SHOAIB, UCI, WISDM) are used in the evaluation process. The results show that HAR-SR can classify activities with 93% accuracy when using a leave-one-subject-out (LOSO) cross-validation procedure.
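One widely used information theory quantifier that fits this role is permutation entropy; a compact sketch follows (the order and delay values are assumptions, not necessarily those of HAR-SR):

```python
import numpy as np
from math import factorial, log

def permutation_entropy(x, order=3, delay=1):
    """Normalised Bandt-Pompe permutation entropy: low for regular
    signals, close to 1 for irregular ones."""
    n = len(x) - (order - 1) * delay
    counts = {}
    for i in range(n):
        window = x[i:i + order * delay:delay]
        pattern = tuple(np.argsort(window))   # ordinal pattern of the window
        counts[pattern] = counts.get(pattern, 0) + 1
    p = np.array(list(counts.values())) / n
    return float(-(p * np.log(p)).sum() / log(factorial(order)))

regular = np.sin(np.linspace(0, 8 * np.pi, 500))   # periodic, walking-like
irregular = np.random.default_rng(0).random(500)   # noisy signal
```

A single scalar per sensor window like this is far cheaper to compute on a phone than spectral or deep features, which is the efficiency argument above.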


Author(s):  
Axel-Cyrille Ngonga Ngomo ◽  
Mohamed Ahmed Sherif ◽  
Kleanthi Georgala ◽  
Mofeed Mohamed Hassan ◽  
Kevin Dreßler ◽  
...  

Abstract The Linked Data paradigm builds upon a backbone of distributed knowledge bases connected by typed links. The sheer volume of current knowledge bases, as well as their number, poses two major challenges when aiming to support the computation of links across and within them. The first is that tools for link discovery have to be time-efficient when they compute links. The second is that these tools have to produce links of high quality to serve the applications built upon Linked Data well. Solutions to the second problem build upon the efficient computational approaches developed to solve the first and combine them with dedicated machine learning techniques. The current version of the LIMES framework is the product of seven years of research on these two challenges. A series of machine learning techniques and efficient computation approaches were developed and integrated into this framework to address the link discovery problem. The framework combines these diverse algorithms within a generic and extensible architecture. In this article, we give an overview of version 1.7.4 of the open-source release of the framework. In particular, we focus on the architecture of the framework, an intuition of its inner workings, and a brief overview of the approaches it contains. Descriptions of the applications within which the framework has been used complete the paper. Our framework is open source and available under a GNU license at https://github.com/dice-group/LIMES together with a user manual and a developer manual.
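The interplay of the two challenges, time efficiency (here, crude blocking) and link quality (a similarity threshold), can be caricatured in a few lines of pure Python; the data, blocking key, and trigram measure are invented for illustration and are far simpler than the algorithms LIMES actually implements:

```python
def trigrams(s):
    """Character trigrams of a padded, lower-cased label."""
    s = f"  {s.lower()} "
    return {s[i:i + 3] for i in range(len(s) - 2)}

def discover_links(source, target, threshold=0.5):
    """Block target entities by first letter, then link pairs whose
    trigram Jaccard similarity clears the threshold."""
    blocks = {}
    for uri, label in target.items():
        blocks.setdefault(label[0].lower(), []).append((uri, label))
    links = []
    for s_uri, s_label in source.items():
        for t_uri, t_label in blocks.get(s_label[0].lower(), []):
            a, b = trigrams(s_label), trigrams(t_label)
            if len(a & b) / len(a | b) >= threshold:
                links.append((s_uri, "owl:sameAs", t_uri))
    return links

src = {":b1": "Berlin", ":p1": "Paris"}
tgt = {":x1": "berlin", ":x2": "Prague"}
links = discover_links(src, tgt)
```

Blocking prunes the quadratic comparison space (the efficiency challenge); the threshold governs precision versus recall (the quality challenge), which is where the framework's machine learning techniques come in.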


2021 ◽  
Vol 2 ◽  
Author(s):  
Abel Sancarlos ◽  
Morgan Cameron ◽  
Jean-Marc Le Peuvedic ◽  
Juliette Groulier ◽  
Jean-Louis Duval ◽  
...  

Abstract The concept of the “hybrid twin” (HT) has recently received growing interest thanks to the availability of powerful machine learning techniques. This twin concept combines physics-based models, within a model order reduction framework to obtain real-time feedback rates, with data science. Thus, the main idea of the HT is to develop on-the-fly data-driven models to correct possible deviations between measurements and physics-based model predictions. This paper focuses on the computation of stable, fast, and accurate corrections in the HT framework. Furthermore, regarding the delicate and important problem of stability, a new approach is proposed, introducing several subvariants and guaranteeing a low computational cost as well as the achievement of stable time integration.
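The core HT idea, learning a correction on top of a physics-based prediction, can be sketched with a toy one-dimensional example (the polynomial residual model is an assumption for illustration, not the paper's method):

```python
import numpy as np

def fit_correction(inputs, measured, physics_model, degree=2):
    """Fit a small polynomial to the physics-model residuals; the
    corrected twin is physics prediction + learned deviation."""
    residual = measured - physics_model(inputs)
    coeffs = np.polyfit(inputs, residual, degree)
    return lambda x: physics_model(x) + np.polyval(coeffs, x)

def physics(x):
    return 2.0 * x                      # idealised physics-based model

x = np.linspace(0, 1, 30)
truth = 2.0 * x + 0.3 * x ** 2          # "measurements": reality deviates
twin = fit_correction(x, truth, physics)
```

The paper's contribution concerns making such corrections stable under time integration, which a naive fit like this one does not guarantee.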


Electronics ◽  
2021 ◽  
Vol 10 (12) ◽  
pp. 1462
Author(s):  
Gustavo Henrique Bazan ◽  
Alessandro Goedtel ◽  
Oscar Duque-Perez ◽  
Daniel Morinigo-Sotelo

Induction motors are very robust, with low operating and maintenance costs, and are therefore widely used in industry. They are, however, not fault-free, with bearings and rotor bars accounting for about 50% of all failures. This work presents a two-stage approach for the diagnosis of three-phase induction motors based on mutual information measures of the current signals, principal component analysis, and intelligent systems. In the first stage, the fault is identified, and, in the second stage, the severity of the defect is diagnosed. A case study is presented in which different severities of bearing wear and bar breakage are analyzed. To test the robustness of the proposed method, voltage imbalances and load torque variations are considered. The results reveal the promising performance of the proposal, with overall accuracies above 90% in all cases; in many scenarios, 100% of the cases are correctly classified. This work also evaluates different strategies for extracting the signals, showing the possibility of reducing the amount of information needed. The results show a satisfactory trade-off between efficiency and computational cost, with decreases in accuracy of less than 4% while reducing the amount of data by more than 90%, facilitating the efficient use of this method in embedded systems.
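The principal component analysis stage, which enables the reported data reduction, can be sketched with NumPy (synthetic data here, not the authors' current-signal features):

```python
import numpy as np

def pca_reduce(X, k):
    """Project feature vectors onto their top-k principal components and
    report the fraction of variance each retained component explains."""
    Xc = X - X.mean(axis=0)                        # centre the features
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = (S ** 2) / (S ** 2).sum()
    return Xc @ Vt[:k].T, explained[:k]

rng = np.random.default_rng(2)
base = rng.normal(size=(200, 2))                   # 2 latent factors
X = np.hstack([base, base @ rng.normal(size=(2, 6))])  # 8 correlated features
Z, var = pca_reduce(X, 2)                          # 8 -> 2 dimensions
```

When most variance lives in a few components, as in redundant current-signal features, discarding the rest shrinks the data volume with little loss of diagnostic accuracy, mirroring the trade-off reported above.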


2020 ◽  
Author(s):  
Clayton Eduardo Rodrigues ◽  
Cairo Lúcio Nascimento Júnior ◽  
Domingos Alves Rade

A comparative analysis of machine learning techniques for rotating machine fault diagnosis based on vibration spectra images is presented. The feature extraction of different types of faults, such as unbalance, misalignment, shaft crack, rotor-stator rub, and hydrodynamic instability, is performed by processing the spectral image of vibration orbits acquired during the rotating machine run-up. The classifiers are trained with simulation data and tested with both simulation and experimental data. The experimental data are obtained from measurements performed on a rotor-disk system test rig supported on hydrodynamic bearings. To generate the simulated data, a numerical model of the rotating system is developed using the Finite Element Method (FEM). Deep learning, ensemble, and traditional classification methods are evaluated. The ability of the methods to generalize the image classification is evaluated based on their performance in classifying experimental test patterns that were not used during training. The obtained results suggest that, despite its considerable computational cost, the method based on a Convolutional Neural Network (CNN) presents the best performance for the classification of faults based on spectral images.
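The core operation of such a CNN, a 2-D convolution slid over the spectral image, can be sketched in NumPy (the kernel here is a hand-picked edge detector for illustration, not a trained filter):

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D convolution (correlation-style, as in deep learning
    frameworks), the building block of the CNN classifier."""
    kh, kw = kernel.shape
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
    return out

# An edge kernel responds strongly to a vertical intensity step, the kind
# of localized structure a trained CNN learns to detect in orbit spectra.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
edge = conv2d(img, np.array([[-1.0, 1.0]]))
```

Stacking many learned kernels with nonlinearities and pooling is what gives the CNN its generalization ability on unseen experimental patterns, and also its computational cost.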

