Computer vision model for detecting block falls at the Martian north polar region.

Author(s):  
Oleksii Martynchuk ◽  
Lida Fanara ◽  
Ernst Hauber ◽  
Juergen Oberst ◽  
Klaus Gwinner

Dynamic changes of Martian north polar scarps provide valuable insight into the planet's natural climate cycles (Head et al., 2003; Byrne, 2009) [1,2]. Annual avalanches and block falls are among the most noticeable surface processes that can be directly linked with the extent of these dynamics (Fanara et al., 2020) [3]. New remote sensing approaches based on machine learning allow us to make precise records of this mass wasting activity by automatically extracting and analyzing bulk information obtained from satellite imagery. Previous studies have concluded that a Support Vector Machine (SVM) classifier trained on Histograms of Oriented Gradients (HOG) can efficiently detect block falls, even against backgrounds of increased complexity (Fanara et al., 2020) [4]. We hypothesise that this pretrained model can now be used to generate an extended dataset of labelled image data, sufficient in size to support a deep learning approach. Beyond improving the detection model, we also address the image co-registration protocol; prior research has identified it as a substantial bottleneck that reduces the number of suitable images. We plan to overcome these limitations either by extending our model to include multi-sensor data, or by deploying improved methods designed exclusively for optical data (e.g. the COSI-CORR software (Ayoub, Leprince and Avouac, 2017) [5]). The resulting algorithm should be a robust solution capable of improving on the established baselines of 75.1% true positive rate (TPR) and 8.5% false discovery rate (FDR) (Fanara et al., 2020) [4]. The north polar layered deposits (NPLD) are our primary area of interest due to their high level of activity and good satellite image coverage, yet we also plan to apply our pipeline to other surface changes and Martian regions, as well as to other celestial bodies.

References:
1. Head, J. W., Mustard, J. F., Kreslavsky, M. A., Milliken, R. E., Marchant, D. R., 2003. Recent ice ages on Mars. Nature 426, 797–802.
2. Byrne, S., 2009. The polar deposits of Mars. Annu. Rev. Earth Planet. Sci. 37, 535–560.
3. Fanara, L., Gwinner, K., Hauber, E., Oberst, J., 2020. Present-day erosion rate of north polar scarps on Mars due to active mass wasting. Icarus 342, 113434, ISSN 0019-1035.
4. Fanara, L., Gwinner, K., Hauber, E., Oberst, J., 2020. Automated detection of block falls in the north polar region of Mars. Planetary and Space Science 180, 104733, ISSN 0032-0633.
5. Ayoub, F., Leprince, S., Avouac, J.-P., 2009. User's Guide to COSI-CORR: Co-registration of Optically Sensed Images and Correlation. California Institute of Technology, Pasadena, CA, USA, pp. 1–49.
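
As a companion to the HOG+SVM baseline cited above, the following is a minimal sketch of how such a block-fall classifier could be assembled with scikit-image and scikit-learn; the window size, HOG parameters, and placeholder data are illustrative assumptions and not the configuration used by Fanara et al. (2020).

```python
# Minimal sketch of a HOG + SVM change-detection classifier, in the spirit of
# Fanara et al. (2020). All parameters (window size, HOG cells, C) are
# illustrative assumptions, not the values used in the original study.
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

def hog_features(windows, pixels_per_cell=(8, 8), cells_per_block=(2, 2)):
    """Compute HOG descriptors for a stack of grayscale image windows."""
    return np.array([
        hog(w, orientations=9, pixels_per_cell=pixels_per_cell,
            cells_per_block=cells_per_block, block_norm="L2-Hys")
        for w in windows
    ])

# Hypothetical inputs: 64x64 windows cut from co-registered before/after image
# pairs, with labels 1 = block fall, 0 = background.
windows = np.random.rand(200, 64, 64)          # placeholder for real image windows
labels = np.random.randint(0, 2, size=200)     # placeholder for real labels

X = hog_features(windows)
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.25, random_state=0, stratify=labels)

clf = SVC(kernel="rbf", C=10.0, gamma="scale", class_weight="balanced")
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```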

2020 ◽  
Author(s):  
Nalika Ulapane ◽  
Karthick Thiyagarajan ◽  
Sarath Kodagoda

Classification has become a vital task in modern machine learning and Artificial Intelligence applications, including smart sensing. Numerous machine learning techniques are available to perform classification. Similarly, numerous practices, such as feature selection (i.e., selection of a subset of descriptor variables that optimally describe the output), are available to improve classifier performance. In this paper, we consider the case of a given supervised learning classification task that has to be performed making use of continuous-valued features. It is assumed that an optimal subset of features has already been selected. Therefore, no further feature reduction, or feature addition, is to be carried out. Then, we attempt to improve the classification performance by passing the given feature set through a transformation that produces a new feature set which we have named the “Binary Spectrum”. Via a case study example done on some Pulsed Eddy Current sensor data captured from an infrastructure monitoring task, we demonstrate how the classification accuracy of a Support Vector Machine (SVM) classifier increases through the use of this Binary Spectrum feature, indicating the feature transformation’s potential for broader usage.
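
To make the idea of transforming a fixed, continuous-valued feature set before an SVM concrete, below is a hedged sketch in which each feature is expanded into multi-threshold binary indicators; this stand-in transform is only an illustration of the workflow and is not claimed to reproduce the paper's Binary Spectrum definition.

```python
# Illustrative sketch: expand each continuous feature into a vector of binary
# indicators at several thresholds, then train an SVM on the expanded features.
# This stand-in transform is an assumption for illustration only; it is not
# claimed to match the paper's "Binary Spectrum" definition.
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.base import BaseEstimator, TransformerMixin

class MultiThresholdBinarizer(BaseEstimator, TransformerMixin):
    def __init__(self, n_levels=8):
        self.n_levels = n_levels

    def fit(self, X, y=None):
        return self

    def transform(self, X):
        # For each feature x in [0, 1], emit n_levels bits: 1 if x >= k / n_levels.
        thresholds = np.linspace(0.0, 1.0, self.n_levels, endpoint=False)
        return (X[:, :, None] >= thresholds[None, None, :]).reshape(len(X), -1).astype(float)

# Placeholder data standing in for pulsed eddy current features and labels.
X = np.random.rand(120, 6)
y = np.random.randint(0, 2, size=120)

model = make_pipeline(MinMaxScaler(), MultiThresholdBinarizer(n_levels=8),
                      SVC(kernel="rbf", gamma="scale"))
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```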


Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 194
Author(s):  
Sarah Gonzalez ◽  
Paul Stegall ◽  
Harvey Edwards ◽  
Leia Stirling ◽  
Ho Chit Siu

The field of human activity recognition (HAR) often utilizes wearable sensors and machine learning techniques in order to identify the actions of the subject. This paper considers the activity recognition of walking and running using a support vector machine (SVM) trained on principal components derived from wearable sensor data. An ablation analysis is performed in order to select the subset of sensors that yields the highest classification accuracy. The paper also compares principal components across trials to assess the similarity of the trials. Five subjects were instructed to perform standing, walking, running, and sprinting on a self-paced treadmill, and the data were recorded using surface electromyography sensors (sEMGs), inertial measurement units (IMUs), and force plates. When all of the sensors were included, the SVM had over 90% classification accuracy using only the first three principal components of the data with the classes of stand, walk, and run/sprint (combined run and sprint class). It was found that sensors placed only on the lower leg produce higher accuracies than sensors placed on the upper leg. There was a small decrease in accuracy when the force plates were ablated, but the difference may not be operationally relevant. Using only accelerometers without sEMGs was shown to decrease the accuracy of the SVM.
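
A minimal sketch of the described pipeline (standardize, project onto the first three principal components, classify with an SVM) is shown below; the placeholder features and parameters are assumptions, not the study's data or settings.

```python
# Minimal sketch (assumed structure, not the authors' code) of classifying
# activity classes with an SVM trained on the first three principal components
# of wearable-sensor features.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Placeholder feature matrix standing in for windowed IMU/sEMG/force-plate
# features; labels 0 = stand, 1 = walk, 2 = run/sprint (combined class).
X = np.random.rand(300, 40)
y = np.random.randint(0, 3, size=300)

model = make_pipeline(StandardScaler(),
                      PCA(n_components=3),      # keep only the first three PCs
                      SVC(kernel="rbf", gamma="scale"))
scores = cross_val_score(model, X, y, cv=5)
print("cross-validated accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```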


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Andrew P. Creagh ◽  
Florian Lipsmeier ◽  
Michael Lindemann ◽  
Maarten De Vos

The emergence of digital technologies such as smartphones in healthcare applications has demonstrated the possibility of developing rich, continuous, and objective measures of multiple sclerosis (MS) disability that can be administered remotely and out-of-clinic. Deep Convolutional Neural Networks (DCNN) may capture a richer representation of healthy and MS-related ambulatory characteristics from the raw smartphone-based inertial sensor data than standard feature-based methodologies. To overcome the typical limitations associated with remotely generated health data, such as low subject numbers, sparsity, and heterogeneous data, a transfer learning (TL) model from similar large open-source datasets was proposed. Our TL framework leveraged the ambulatory information learned on human activity recognition (HAR) tasks collected from wearable smartphone sensor data. It was demonstrated that fine-tuning TL DCNN HAR models towards MS disease recognition tasks outperformed previous Support Vector Machine (SVM) feature-based methods, as well as DCNN models trained end-to-end, by upwards of 8–15%. A lack of transparency of “black-box” deep networks remains one of the largest stumbling blocks to the wider acceptance of deep learning for clinical applications. Ensuing work therefore aimed to visualise DCNN decisions attributed by relevance heatmaps using Layer-Wise Relevance Propagation (LRP). Through the LRP framework, the patterns captured from smartphone-based inertial sensor data that were reflective of those who are healthy versus people with MS (PwMS) could begin to be established and understood. Interpretations suggested that cadence-based measures, gait speed, and ambulation-related signal perturbations were distinct characteristics that distinguished MS disability from healthy participants. Robust and interpretable outcomes, generated from high-frequency out-of-clinic assessments, could greatly augment the current in-clinic assessment picture for PwMS, to inform better disease management techniques, and enable the development of better therapeutic interventions.
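
The transfer-learning step described above can be sketched as freezing a HAR-pretrained convolutional feature extractor and fine-tuning a new classification head; the PyTorch example below is an illustrative assumption of that structure, not the authors' DCNN architecture.

```python
# Hedged sketch of the transfer-learning idea: take a 1D CNN pretrained on a
# human activity recognition (HAR) task, freeze its convolutional feature
# extractor, and fine-tune a new classification head for a healthy-vs-MS
# label. Architecture and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class HARBackbone(nn.Module):
    """Small 1D CNN over raw tri-axial inertial signals (3 channels)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(3, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=9, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )

    def forward(self, x):
        return self.features(x).squeeze(-1)   # (batch, 64)

backbone = HARBackbone()
# In practice the backbone weights would be loaded from a model pretrained on
# open-source HAR data, e.g. backbone.load_state_dict(torch.load("har_pretrained.pt"))
for p in backbone.parameters():
    p.requires_grad = False                   # freeze pretrained features

head = nn.Linear(64, 2)                       # new head: healthy vs. PwMS
model = nn.Sequential(backbone, head)
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on placeholder data (batch of 8 windows,
# 3 sensor channels, 512 samples each).
x = torch.randn(8, 3, 512)
y = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print("loss:", float(loss))
```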


2021 ◽  
pp. 016173462199809
Author(s):  
Dhurgham Al-karawi ◽  
Hisham Al-Assam ◽  
Hongbo Du ◽  
Ahmad Sayasneh ◽  
Chiara Landolfo ◽  
...  

Significant successes in machine learning approaches to image analysis for various applications have energized strong interest in automated diagnostic support systems for medical images. The evolving in-depth understanding of the way carcinogenesis changes the texture of cellular networks of a mass/tumor has been informing such diagnostic systems with more suitable image texture features and extraction methods. Several texture features have recently been applied to discriminating malignant and benign ovarian masses by analysing B-mode images from ultrasound scans of the ovary, with different levels of performance. However, a comparative performance evaluation of these reported features using common sets of clinically approved images is lacking. This paper presents an empirical evaluation of seven commonly used texture features (histograms, moments of histogram, local binary patterns [256-bin and 59-bin], histograms of oriented gradients, fractal dimensions, and Gabor filter), using a collection of 242 ultrasound scan images of ovarian masses of various pathological characteristics. The evaluation examines not only the effectiveness of classification schemes based on the individual texture features but also the effectiveness of various combinations of these schemes using simple majority-rule decision-level fusion. Support vector machine classifiers trained on the individual texture features, without any specific pre-processing, achieve accuracies between 75% and 85%, with the seven moments and the 256-bin LBP at the lower end and the Gabor filter at the upper end. Combining the classification results of the top k (k = 3, 5, 7) best performing features further improves the overall accuracy to a level between 86% and 90%. These evaluation results demonstrate that each of the investigated image-based texture features provides informative support in distinguishing benign from malignant ovarian masses.
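
The majority-rule decision-level fusion can be sketched as training one SVM per texture-feature family and taking the majority of their predicted labels; the example below assumes binary labels and placeholder feature matrices, not the actual texture descriptors of the study.

```python
# Hedged sketch of decision-level fusion by simple majority vote: one SVM per
# texture-feature family, with the final label taken as the majority of the
# individual predictions. Feature extraction itself is abstracted away here.
import numpy as np
from sklearn.svm import SVC

def majority_vote(predictions):
    """Element-wise majority over a list of equal-length binary prediction arrays."""
    votes = np.sum(np.asarray(predictions), axis=0)
    return (votes * 2 > len(predictions)).astype(int)

# Placeholder per-feature representations of the same images (e.g. LBP
# histograms, HOG, Gabor responses) and shared labels (0 = benign, 1 = malignant).
feature_sets = [np.random.rand(100, d) for d in (59, 256, 81)]
y = np.random.randint(0, 2, size=100)

classifiers = [SVC(kernel="rbf", gamma="scale").fit(X, y) for X in feature_sets]
fused = majority_vote([clf.predict(X) for clf, X in zip(classifiers, feature_sets)])
print("fused training accuracy:", (fused == y).mean())
```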


Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1668
Author(s):  
Zongming Dai ◽  
Kai Hu ◽  
Jie Xie ◽  
Shengyu Shen ◽  
Jie Zheng ◽  
...  

Traditional co-word networks do not discriminate keywords of researcher interest from general keywords. Co-word networks are therefore often too general to provide knowledge of interest to domain experts. Inspired by recent work that uses an automatic method to identify the questions of interest to researchers, such as “problems” and “solutions”, we try to answer a similar question, “what sensors can be used for what kind of applications”, which is of great interest in sensor-related fields. By generalizing such specific questions as “questions of interest”, we built a knowledge network considering researcher interest, called the bipartite network of interest (BNOI). Different from co-word approaches that use exact keywords from a list, BNOI uses classification models to find possible entities of interest. A total of nine feature extraction methods, including N-grams, Word2Vec, and BERT, were used to extract features to train the classification models, including naïve Bayes (NB), support vector machines (SVM) and logistic regression (LR). In addition, a multi-feature fusion strategy and a voting principle (VP) method are applied to assemble the capability of the features and the classification models. Using the abstract text data of 350 remote sensing articles, features are extracted and the models trained. The experiment results show that after removing biased words and using ten-fold cross-validation, the F-measures of “sensors” and “applications” are 93.2% and 85.5%, respectively. It is thus demonstrated that researcher questions of interest can be better answered by the constructed BNOI based on classification results, compared with the traditional co-word network approach.
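
As an illustration of the classification-plus-voting pipeline, the sketch below combines naïve Bayes, a linear SVM, and logistic regression over n-gram features with a hard-voting rule; the toy sentences, labels, and parameters are assumptions rather than the BNOI implementation.

```python
# Hedged sketch of the classification-and-voting idea: n-gram text features
# feeding naive Bayes, a linear SVM, and logistic regression, combined by a
# hard-voting rule. Feature choices and parameters are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import VotingClassifier
from sklearn.pipeline import make_pipeline

# Placeholder sentences standing in for abstract text; label 1 marks a
# sentence mentioning a sensor entity, 0 otherwise.
texts = [
    "A fiber optic sensor was deployed to monitor strain.",
    "The study area covers the northern coastal plain.",
    "LiDAR sensors were used for canopy height estimation.",
    "Results were compared across three seasons.",
]
labels = [1, 0, 1, 0]

ensemble = VotingClassifier(
    estimators=[
        ("nb", MultinomialNB()),
        ("svm", LinearSVC()),
        ("lr", LogisticRegression(max_iter=1000)),
    ],
    voting="hard",
)
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), ensemble)
model.fit(texts, labels)
print(model.predict(["The radar sensor measures soil backscatter."]))
```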


Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 846
Author(s):  
Liang Zhao ◽  
Yu Bao ◽  
Yu Zhang ◽  
Ruidong Ye ◽  
Aijuan Zhang

When the displacement of an object is evaluated using sensor data, its movement back to the starting point can be used to correct the measurement error of the sensor. In medicine, the movements of chest compressions also involve a reciprocating movement back to the starting point. The traditional method of evaluating the effects of chest compression depth (CCD) is to use an acceleration sensor or gyroscope to obtain chest compression movement data; from these data, the displacement value can be calculated and the CCD effect evaluated. However, this evaluation procedure suffers from sensor errors and environmental interference, limiting its applicability. Our objective is to reduce the auxiliary computing devices employed for CCD effectiveness evaluation and improve the accuracy of the evaluation results. To this end, we propose a one-dimensional convolutional neural network (1D-CNN) classification method. First, we use the chest compression evaluation criterion to classify the pre-collected sensor signal data, from which the proposed 1D-CNN model learns classification features. After training, the model is used to classify and evaluate sensor signal data instead of distance measurements; this effectively avoids the influence of pressure occlusion and electromagnetic waves. We collect and label 937 valid CCD results from an emergency care simulator. In addition, the proposed 1D-CNN structure is experimentally evaluated and compared against other CNN models and support vector machines. The results show that after sufficient training, the proposed 1D-CNN model can recognize the CCD results with an accuracy rate of more than 95%. The execution time suggests that the model balances accuracy and hardware requirements and can be embedded in portable devices.
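
A minimal sketch of a 1D-CNN of the kind described, mapping fixed-length compression-signal segments to effectiveness classes, is given below; the layer sizes, segment length, and class count are illustrative assumptions rather than the proposed model's exact structure.

```python
# Hedged sketch of a 1D-CNN that classifies fixed-length chest-compression
# sensor segments into effectiveness classes. Layer sizes, segment length, and
# the number of classes are illustrative assumptions.
import torch
import torch.nn as nn

class CCD1DCNN(nn.Module):
    def __init__(self, n_classes=3, in_channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
            nn.Linear(32, n_classes),
        )

    def forward(self, x):          # x: (batch, channels, samples)
        return self.net(x)

model = CCD1DCNN()
x = torch.randn(16, 1, 256)        # placeholder acceleration segments
logits = model(x)
print(logits.shape)                # torch.Size([16, 3])
```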


2018 ◽  
Vol 10 (8) ◽  
pp. 1285 ◽  
Author(s):  
Reza Attarzadeh ◽  
Jalal Amini ◽  
Claudia Notarnicola ◽  
Felix Greifeneder

This paper presents an approach for retrieval of soil moisture content (SMC) by coupling single-polarization C-band synthetic aperture radar (SAR) and optical data at the plot scale in vegetated areas. The study was carried out at five different sites with dominant vegetation cover located in Kenya. In the initial stage of the process, different features are extracted from single-polarization mode (VV polarization) SAR and optical data. Subsequently, proper selection of the relevant features is conducted on the extracted features. An advanced state-of-the-art machine learning regression approach, the support vector regression (SVR) technique, is used to retrieve soil moisture. This paper takes a new look at soil moisture retrieval in vegetated areas considering the needs of practical applications. In this context, we work at the object level instead of the pixel level, so that a group of pixels (an image object) represents the reality of the land cover at the plot scale. Three approaches, a pixel-based approach, an object-based approach, and a combination of pixel- and object-based approaches, were used to estimate soil moisture. The results show that the combined approach outperforms the other approaches in terms of estimation accuracy (an RMSE of 4.94% and an R² of 0.89, compared with 6.41% and 0.62), flexibility in retrieving the level of soil moisture, and quality of the visual representation of the SMC map.
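
The SVR retrieval step can be sketched as regressing soil moisture on a few object-level SAR/optical features with cross-validated predictions; the feature names, kernel settings, and placeholder data below are assumptions, not the study's configuration.

```python
# Hedged sketch of retrieving soil moisture with support vector regression
# from plot-level (object-level) features; the feature names and kernel
# settings are illustrative assumptions.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import mean_squared_error, r2_score

# Placeholder object-level features (e.g. mean VV backscatter, NDVI, local
# incidence angle) and measured volumetric soil moisture (%).
X = np.random.rand(150, 3)
y = 5 + 30 * np.random.rand(150)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
y_pred = cross_val_predict(model, X, y, cv=5)
rmse = mean_squared_error(y, y_pred) ** 0.5
print("RMSE: %.2f %%, R2: %.2f" % (rmse, r2_score(y, y_pred)))
```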


Author(s):  
Osman Salem ◽  
Alexey Guerassimov ◽  
Ahmed Mehaoua ◽  
Anthony Marcus ◽  
Borko Furht

This paper details the architecture and describes the preliminary experimentation with the proposed framework for anomaly detection in medical wireless body area networks for ubiquitous patient and healthcare monitoring. The architecture integrates novel data mining and machine learning algorithms with modern sensor fusion techniques. Because wireless sensor networks are prone to failures resulting from their limitations (i.e., limited energy resources and computational power), the framework allows the authors to distinguish between irregular variations in the physiological parameters of the monitored patient and faulty sensor data, to ensure reliable operations and real-time global monitoring from smart devices. Sensor nodes are used to measure characteristics of the patient and the sensed data is stored on the local processing unit. Authorized users may access this patient data remotely as long as they maintain connectivity with their application-enabled smart device. Anomalous or faulty measurement data resulting from damaged sensor nodes or caused by malicious external parties may lead to misdiagnosis or even death for patients. The authors' application uses a Support Vector Machine to classify abnormal instances in the incoming sensor data. If found, the authors apply a periodically rebuilt, regressive prediction model to the abnormal instance and determine whether the patient is entering a critical state or a sensor is reporting faulty readings. Using real patient data in the experiments, the results validate the robustness of the proposed framework. The authors further discuss the experimental analysis, which shows that the approach quickly identifies sensor anomalies and, compared with several other algorithms, maintains a higher true positive rate and a lower false negative rate.
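
A simplified, hedged reading of the two-stage idea is sketched below: an SVM flags abnormal readings, and a regression over recent history helps separate a plausible physiological change from a likely faulty sensor. The decision rule, thresholds, and placeholder vitals are illustrative assumptions and not the authors' exact algorithm.

```python
# Hedged sketch: an SVM flags abnormal vital-sign readings, then a simple
# linear regression over the recent history of one vital sign checks whether
# the flagged value follows the patient's trend (possible deterioration) or
# departs sharply from it (possible faulty sensor). All data, thresholds, and
# the decision rule itself are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Placeholder training data: rows of (heart rate, SpO2); label 1 = abnormal.
normal = np.column_stack([rng.normal(75, 8, 150), rng.normal(97, 1, 150)])
abnormal = np.column_stack([rng.normal(120, 15, 50), rng.normal(90, 3, 50)])
X_train = np.vstack([normal, abnormal])
y_train = np.concatenate([np.zeros(150), np.ones(50)])
detector = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)

def check_reading(history, new_reading, tolerance=10.0):
    """Classify a new (heart rate, SpO2) reading; if abnormal, compare the heart
    rate with a linear prediction from its recent history."""
    if detector.predict([new_reading])[0] == 0:
        return "normal"
    t = np.arange(len(history)).reshape(-1, 1)
    predicted_hr = LinearRegression().fit(t, history).predict([[len(history)]])[0]
    if abs(new_reading[0] - predicted_hr) > tolerance:
        return "suspected faulty sensor"
    return "suspected patient deterioration"

recent_hr = np.array([72.0, 74.0, 73.0, 75.0, 74.0])
print(check_reading(recent_hr, np.array([135.0, 88.0])))
```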


2015 ◽  
Vol 11 (6) ◽  
pp. 4 ◽  
Author(s):  
Xianfeng Yuan ◽  
Mumin Song ◽  
Fengyu Zhou ◽  
Yugang Wang ◽  
Zhumin Chen

Support Vector Machines (SVMs) are a set of popular machine learning algorithms that have been successfully applied in diverse domains, but for large training data sets the processing time and computational costs are prohibitive. This paper presents a novel fast training method for SVMs, applied to the fault diagnosis of a service robot. Firstly, sensor data are sampled under different running conditions of the robot and those samples are divided into training sets and testing sets. Secondly, the sampled data are preprocessed and a principal component analysis (PCA) model is established for fault feature extraction. Thirdly, the feature vectors are used to train the SVM classifier, which performs the fault diagnosis of the robot. To speed up the training process of the SVM, on the one hand, sample reduction is performed using the proposed support vectors selection (SVS) algorithm, which preserves good classification accuracy and generalization capability. On the other hand, we take advantage of the parallel computing abilities of the Graphics Processing Unit (GPU) to pre-calculate the kernel matrix, which avoids recalculation during the cross-validation process. Experimental results illustrate that the proposed method can significantly reduce the training time without decreasing the classification accuracy.
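
The kernel pre-calculation idea can be sketched with scikit-learn's precomputed-kernel interface: the Gram matrix is built once (on a GPU this could be done the same way with CuPy) and then reused for training and prediction. The data, gamma value, and class labels below are placeholders.

```python
# Hedged sketch of training an SVM with a precomputed RBF kernel matrix, the
# core idea behind avoiding repeated kernel evaluations during cross
# validation. Data and gamma are illustrative placeholders.
import numpy as np
from sklearn.svm import SVC

def rbf_kernel_matrix(A, B, gamma=0.5):
    """Compute exp(-gamma * ||a - b||^2) for all pairs (a in A, b in B)."""
    sq = (np.sum(A**2, axis=1)[:, None]
          + np.sum(B**2, axis=1)[None, :]
          - 2.0 * A @ B.T)
    return np.exp(-gamma * sq)

# Placeholder PCA feature vectors for training/testing fault-condition samples.
X_train = np.random.rand(300, 10)
y_train = np.random.randint(0, 4, size=300)    # e.g. four robot fault classes
X_test = np.random.rand(50, 10)

K_train = rbf_kernel_matrix(X_train, X_train)  # computed once, reused thereafter
K_test = rbf_kernel_matrix(X_test, X_train)

clf = SVC(kernel="precomputed").fit(K_train, y_train)
print(clf.predict(K_test)[:10])
```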


Author(s):  
Ahmad Iwan Fadli ◽  
Selo Sulistyo ◽  
Sigit Wibowo

Traffic accidents are a very difficult problem to handle on a large scale in a country. Indonesia is one of the most populous developing countries that rely on vehicles for daily activities as the main means of transportation. It is also the country with the largest number of car users in Southeast Asia, so driving safety needs to be considered. Using machine learning classification methods to determine whether a driver is driving safely or not can help reduce the risk of driving accidents. We created a detection system to classify whether the driver is driving safely or unsafely using trip sensor data, which include gyroscope, acceleration, and GPS measurements. The classification methods used in this study are the Random Forest (RF) classification algorithm, the Support Vector Machine (SVM), and the Multilayer Perceptron (MLP), with data preprocessing improved using feature extraction and oversampling methods. This study shows that RF has the best performance, with 98% accuracy, 98% precision, and 97% sensitivity using the proposed preprocessing stages, compared to SVM or MLP.
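
A hedged sketch of the model comparison is shown below: simple random oversampling of the minority (unsafe) class, followed by RF, SVM, and MLP classifiers evaluated on the same held-out split; the placeholder features and the oversampling routine are assumptions, not the study's preprocessing.

```python
# Hedged sketch of the comparison: random oversampling of the minority class
# in the training split, then Random Forest, SVM, and MLP evaluated on the
# same held-out data. Feature extraction from gyroscope/accelerometer/GPS
# windows is abstracted into a placeholder matrix.
import numpy as np
from sklearn.utils import resample
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Placeholder trip features; label 1 = unsafe driving (minority class).
X = np.random.rand(500, 12)
y = (np.random.rand(500) < 0.2).astype(int)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# Simple random oversampling of the minority class in the training split only.
minority = y_train == 1
X_min, y_min = resample(X_train[minority], y_train[minority],
                        n_samples=int((~minority).sum()), random_state=0)
X_bal = np.vstack([X_train[~minority], X_min])
y_bal = np.concatenate([y_train[~minority], y_min])

models = {
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM": SVC(kernel="rbf", gamma="scale"),
    "MLP": MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0),
}
for name, model in models.items():
    model.fit(X_bal, y_bal)
    print(name, "accuracy: %.3f" % accuracy_score(y_test, model.predict(X_test)))
```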

