RIANN—A Robust Neural Network Outperforms Attitude Estimation Filters

AI ◽  
2021 ◽  
Vol 2 (3) ◽  
pp. 444-463
Author(s):  
Daniel Weber ◽  
Clemens Gühmann ◽  
Thomas Seel

Inertial-sensor-based attitude estimation is a crucial technology in various applications, from human motion tracking to autonomous aerial and ground vehicles. Application scenarios differ in the characteristics of the performed motion, the presence of disturbances, and environmental conditions. Since state-of-the-art attitude estimators do not generalize well over these characteristics, their parameters must be tuned to the individual motion characteristics and circumstances. We propose RIANN, a ready-to-use, neural-network-based, parameter-free, real-time-capable inertial attitude estimator, which generalizes well across different motion dynamics, environments, and sampling rates, without the need for application-specific adaptations. We gather six publicly available datasets, of which two are used for method development and training, and four for evaluating the trained estimator in three test scenarios of varying practical relevance. Results show that RIANN outperforms state-of-the-art attitude estimation filters in the sense that it generalizes much better across a variety of motions and conditions in different applications, with different sensor hardware and different sampling frequencies. This holds even when the filters are tuned on each individual test dataset, whereas RIANN was trained on completely separate data and has never seen any of these test datasets. RIANN can be applied directly, without adaptation or training, and is therefore expected to enable plug-and-play solutions in numerous applications, especially when accuracy is crucial but no ground-truth data is available for tuning, or when motion and disturbance characteristics are uncertain. We have made RIANN publicly available.
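As context for what RIANN is compared against, below is a minimal sketch of a classical complementary filter, the kind of conventional attitude estimator whose fixed parameters (here, `gain`) must be tuned per application. The function and its structure are illustrative assumptions, not RIANN's method (RIANN itself is a neural network):

```python
import numpy as np

def complementary_filter(gyro, acc, dt, gain=0.01):
    """Classical roll/pitch estimation baseline: integrate the gyroscope
    and correct drift with the accelerometer's gravity measurement.
    gyro: (N, 3) rad/s, acc: (N, 3) m/s^2. Returns (N, 2) roll/pitch [rad]."""
    roll, pitch = 0.0, 0.0
    out = np.empty((len(gyro), 2))
    for i, (w, a) in enumerate(zip(gyro, acc)):
        # Predict: integrate angular rate (small-angle approximation).
        roll += w[0] * dt
        pitch += w[1] * dt
        # Correct: the accelerometer provides an absolute inclination reference.
        roll_acc = np.arctan2(a[1], a[2])
        pitch_acc = np.arctan2(-a[0], np.hypot(a[1], a[2]))
        roll = (1 - gain) * roll + gain * roll_acc
        pitch = (1 - gain) * pitch + gain * pitch_acc
        out[i] = roll, pitch
    return out
```

The `gain` trades gyroscope drift against accelerometer noise; it is exactly this kind of motion-dependent tuning knob that a generalizing estimator aims to eliminate.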

Sensors ◽  
2021 ◽  
Vol 21 (12) ◽  
pp. 4050
Author(s):  
Dejan Pavlovic ◽  
Christopher Davison ◽  
Andrew Hamilton ◽  
Oskar Marko ◽  
Robert Atkinson ◽  
...  

Monitoring cattle behaviour is core to the early detection of health and welfare issues and to optimising the fertility of large herds. Accelerometer-based sensor systems that provide activity profiles are now used extensively on commercial farms and have evolved to identify behaviours such as the time spent ruminating and eating at an individual-animal level. Acquiring this information at scale is central to informing on-farm management decisions. This paper presents the development of a convolutional neural network (CNN) that classifies cattle behavioural states ('rumination', 'eating', and 'other') using data generated from neck-mounted accelerometer collars. During three farm trials in the United Kingdom (Easter Howgate Farm, Edinburgh, UK), 18 steers were monitored to provide raw acceleration measurements, with ground-truth data provided by muzzle-mounted pressure-sensor halters. A range of neural network architectures is explored, and rigorous hyper-parameter searches are performed to optimise the network. The computational complexity and memory footprint of CNN models are not readily compatible with deployment on low-power processors, which are both memory- and energy-constrained. Thus, progressive reductions of the CNN were executed with minimal loss of performance in order to address these practical implementation challenges, defining the trade-off between model performance and computational complexity and memory footprint to permit deployment on microcontroller architectures. The proposed methodology achieves a compression factor of 14.30 relative to the unpruned architecture yet still classifies cattle behaviours accurately, with an overall F1 score of 0.82 at both FP32 and FP16 precision, while achieving a battery lifetime in excess of 5.7 years.
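The progressive-reduction step can be illustrated with simple magnitude-based pruning; this is a generic sketch of the idea, not the paper's exact pruning schedule or CNN architecture:

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero the smallest-magnitude weights, keeping roughly the top
    (1 - sparsity) fraction. Repeated with increasing sparsity, this
    yields the progressive model reduction described in the abstract."""
    w = weights.copy()
    k = int(round(sparsity * w.size))
    if k == 0:
        return w
    # k-th smallest absolute value becomes the pruning threshold.
    thresh = np.partition(np.abs(w), k - 1, axis=None)[k - 1]
    w[np.abs(w) <= thresh] = 0.0
    return w
```

Zeroed weights can then be stored sparsely (and activations cast to FP16), which is where the memory-footprint savings for microcontroller deployment come from.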


2021 ◽  
pp. 0021955X2110210
Author(s):  
Alejandro E Rodríguez-Sánchez ◽  
Héctor Plascencia-Mora

Traditional modeling of mechanical energy absorption due to compressive loadings in expanded polystyrene foams involves mathematical descriptions derived from stress/strain continuum mechanics models. Nevertheless, most of those models are constrained to strain as the only variable at large-deformation regimes and usually neglect parameters that are important for energy absorption, such as the material density or the rate of the applied load. This work presents a neural-network-based approach that produces models capable of mapping the compressive stress response and energy absorption parameters of an expanded polystyrene foam from its deformation, compressive loading rate, and density. The models are trained with ground-truth data obtained in compressive tests. Two methods for selecting neural network architectures are also presented, one of which is based on a Design of Experiments strategy. The results show that it is possible to obtain a single artificial neural network model that abstracts the stress and energy absorption solution spaces for the conditions studied in the material. Additionally, this model is compared with a phenomenological model, and the results show that the neural network model outperforms it in terms of prediction capability, with errors of around 2% relative to the experimental data. In this sense, it is demonstrated that the presented approach makes it possible to obtain a model capable of reproducing compressive polystyrene foam stress/strain data and, consequently, of simulating its energy absorption parameters.
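A minimal sketch of this kind of multi-input neural-network regression, fitting a one-hidden-layer network to toy stand-in data for (strain, loading rate, density) → stress; the synthetic data, layer sizes, and learning rate are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy stand-in: stress as a smooth function of (strain, load rate, density).
X = rng.uniform(0, 1, size=(200, 3))
y = (X[:, 0] ** 2 + 0.5 * X[:, 1] + 0.3 * X[:, 2])[:, None]

# One-hidden-layer tanh MLP trained with full-batch gradient descent.
W1 = rng.normal(0, 0.5, (3, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.1
losses = []
for _ in range(2000):
    H = np.tanh(X @ W1 + b1)          # hidden activations
    pred = H @ W2 + b2                # linear output for regression
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    # Backpropagate the mean-squared-error loss.
    dpred = 2 * err / len(X)
    dW2 = H.T @ dpred; db2 = dpred.sum(0)
    dH = (dpred @ W2.T) * (1 - H ** 2)
    dW1 = X.T @ dH; db1 = dH.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
```

The Design-of-Experiments architecture selection mentioned in the abstract would treat the layer count and neuron count above as factors in a designed screening experiment rather than fixing them by hand.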


2021 ◽  
Vol 2 (1) ◽  
pp. 1-25
Author(s):  
Srinivasan Iyengar ◽  
Stephen Lee ◽  
David Irwin ◽  
Prashant Shenoy ◽  
Benjamin Weil

Buildings consume over 40% of the total energy in modern societies, and improving their energy efficiency can significantly reduce our energy footprint. In this article, we present WattScale, a data-driven approach to identify the least energy-efficient buildings from a large population of buildings in a city or region. Unlike previous methods, such as least squares, that use point estimates, WattScale uses Bayesian inference to capture the stochasticity in daily energy usage by estimating the distribution of parameters that affect a building, and then compares them with those of similar homes in a given population. WattScale also incorporates a fault-detection algorithm to identify the underlying causes of energy inefficiency. We validate our approach using ground-truth data from different geographical locations, which showcases its applicability in various settings. WattScale has two execution modes, (i) individual and (ii) region-based, which we highlight using two case studies. For the individual execution mode, we present results from a city containing >10,000 buildings and show that more than half of the buildings are inefficient in one way or another, indicating significant potential for energy-improvement measures. Additionally, we provide probable causes of inefficiency and find that 41%, 23.73%, and 0.51% of homes have building-envelope, heating-system, and cooling-system faults, respectively. For the region-based execution mode, we show that WattScale can be extended to millions of homes in the U.S. due to the recent availability of representative energy datasets.
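The per-building parameter estimation and peer comparison can be sketched as follows. Note that WattScale estimates full posterior distributions via Bayesian inference; this stand-in uses a least-squares point estimate and a percentile comparison, and the function names and the degree-day model form are illustrative assumptions:

```python
import numpy as np

def fit_building(hdd, energy):
    """Fit daily energy = base + slope * heating-degree-days by least
    squares. (WattScale instead infers distributions over such
    parameters; this point estimate is only an illustrative stand-in.)"""
    A = np.column_stack([np.ones_like(hdd), hdd])
    (base, slope), *_ = np.linalg.lstsq(A, energy, rcond=None)
    return base, slope

def percentile_rank(value, population):
    """Share of peer homes with a smaller (more efficient) parameter;
    values near 1 flag a home as among the least efficient."""
    return float(np.mean(np.asarray(population) < value))
```

A high heating slope relative to peer homes, for example, would point toward the building-envelope or heating-system faults the abstract quantifies.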


Geosciences ◽  
2018 ◽  
Vol 8 (12) ◽  
pp. 446 ◽  
Author(s):  
Evangelos Alevizos ◽  
Jens Greinert

This study presents a novel approach, based on high-dimensionality hydro-acoustic data, for improving the performance of angular response analysis (ARA) on multibeam backscatter data in terms of acoustic class separation and spatial resolution. This approach is based on the hyper-angular cube (HAC) data structure which offers the possibility to extract one angular response from each cell of the cube. The HAC consists of a finite number of backscatter layers, each representing backscatter values corresponding to single-incidence angle ensonifications. The construction of the HAC layers can be achieved either by interpolating dense soundings from highly overlapping multibeam echo-sounder (MBES) surveys (interpolated HAC, iHAC) or by producing several backscatter mosaics, each being normalized at a different incidence angle (synthetic HAC, sHAC). The latter approach can be applied to multibeam data with standard overlap, thus minimizing the cost for data acquisition. The sHAC is as efficient as the iHAC produced by actual soundings, providing distinct angular responses for each seafloor type. The HAC data structure increases acoustic class separability between different acoustic features. Moreover, the results of angular response analysis are applied on a fine spatial scale (cell dimensions) offering more detailed acoustic maps of the seafloor. Considering that angular information is expressed through high-dimensional backscatter layers, we further applied three machine learning algorithms (random forest, support vector machine, and artificial neural network) and one pattern recognition method (sum of absolute differences) for supervised classification of the HAC, using a limited amount of ground truth data (one sample per seafloor type). Results from supervised classification were compared with results from an unsupervised method for inter-comparison of the supervised algorithms. 
All algorithms (for both the iHAC and the sHAC) produced very similar results, in good agreement (kappa > 0.5) with the unsupervised classification. Only the artificial neural network required the full amount of ground-truth data to produce results comparable to those of the remaining algorithms.
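The sum-of-absolute-differences (SAD) classification of HAC cells can be sketched as follows, assuming one reference angular response per seafloor type; the function name and data shapes are illustrative:

```python
import numpy as np

def classify_sad(cube, references):
    """Assign each cell of a hyper-angular cube (H, W, A) to the seafloor
    class whose reference angular response has the smallest sum of
    absolute differences. `references`: dict name -> (A,) response."""
    names = list(references)
    ref = np.stack([references[n] for n in names])    # (C, A)
    # SAD between every cell's angular response and every reference.
    sad = np.abs(cube[:, :, None, :] - ref).sum(-1)   # (H, W, C)
    return np.array(names, dtype=object)[sad.argmin(-1)]
```

Because every cube cell carries its own angular response, the class label is assigned at cell resolution, which is what yields the finer-scale acoustic maps described above.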


2020 ◽  
Vol 7 ◽  
Author(s):  
Arne Passon ◽  
Thomas Schauer ◽  
Thomas Seel

End-effector-based robotic systems provide easy-to-set-up motion support in the rehabilitation of stroke and spinal-cord-injured patients. However, measurement information is obtained only about the motion of the limb segments to which the systems are attached, and not about the adjacent limb segments. We demonstrate in one particular experimental setup that this limitation can be overcome by augmenting an end-effector-based robot with a wearable inertial sensor. Most existing inertial motion tracking approaches rely on a homogeneous magnetic field and thus fail in indoor environments and near ferromagnetic materials and electronic devices. In contrast, we propose a magnetometer-free sensor fusion method. It uses a quaternion-based algorithm to track the heading of a limb segment in real time by combining the gyroscope and accelerometer readings with position measurements of one point along that segment. We apply this method to an upper-limb rehabilitation robotics use case in which the orientation and position of the forearm and elbow are known, and the orientation and position of the upper arm and shoulder are estimated by the proposed method using an inertial sensor worn on the upper arm. Experimental data from five healthy subjects who performed 282 proper executions of a typical rehabilitation motion and 163 executions with compensation motion are evaluated. Using a camera-based system as ground truth, we demonstrate that the shoulder position and the elbow angle are tracked with median errors around 4 cm and 4°, respectively, and that undesirable compensatory shoulder movements, defined as shoulder displacements greater than ±10 cm for more than 20% of a motion cycle, are detected and classified 100% correctly across all 445 performed motions. The results indicate that wearable inertial sensors and end-effector-based robots can be combined to provide means for effective rehabilitation therapy with detailed and accurate motion tracking for performance assessment, real-time biofeedback, and feedback control of robotic and neuroprosthetic motion support.


2016 ◽  
Vol 2 (1) ◽  
pp. 711-714 ◽  
Author(s):  
Daniel Laidig ◽  
Sebastian Trimpe ◽  
Thomas Seel

We examine the usefulness of event-based sampling approaches for reducing communication in inertial-sensor-based analysis of human motion. To this end, we consider real-time measurement of the knee joint angle during walking, employing a recently developed sensor fusion algorithm. We simulate the effects of different event-based sampling methods on a large set of experimental data, with ground truth obtained from an external motion capture system. This results in a reduced wireless communication load at the cost of a slightly increased error in the calculated angles. The proposed methods are compared in terms of the best balance between these two aspects. We show that the transmitted data can be reduced by 66% while maintaining the same level of accuracy.
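One common event-based scheme, send-on-delta sampling, can be sketched as follows; the paper compares several event-based methods, so this particular variant and its threshold are illustrative:

```python
import numpy as np

def send_on_delta(angles, delta):
    """Event-based sampling: transmit a sample only when it deviates from
    the last transmitted value by more than `delta`; the receiver holds
    the last received value in between. Returns the receiver-side
    reconstruction and the number of transmissions."""
    sent = 0
    last = angles[0]
    recon = np.empty_like(angles)
    for i, a in enumerate(angles):
        if i == 0 or abs(a - last) > delta:
            last = a          # event triggered: transmit this sample
            sent += 1
        recon[i] = last       # receiver holds last transmitted value
    return recon, sent
```

By construction the reconstruction error never exceeds `delta`, which makes the trade-off between transmission count and angle error explicit.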


2018 ◽  
Author(s):  
Madeny Belkhiri ◽  
Duda Kvitsiani

Understanding how populations of neurons represent and compute internal or external variables requires precise and objective metrics for tracing the individual spikes that belong to a given neuron. Despite recent progress in the development of accurate and fast spike sorting tools, the scarcity of ground-truth data makes it difficult to settle on the best-performing spike sorting algorithm. Moreover, the use of different electrode configurations and ways of acquiring the signal (e.g. anesthetized, head-fixed, or freely behaving animal recordings; tetrodes vs. silicon probes) makes it even harder to develop a universal spike sorting tool that performs well without human intervention. Some of the prevalent problems in spike sorting are units separating due to drift, clustering of bursting cells, and nonstationarity in the background noise. The last is particularly problematic in freely behaving animals, where noise from the electrophysiological activity of hundreds or thousands of neurons is intermixed with noise arising from movement artifacts. We address these problems by developing a new spike sorting tool based on a template matching algorithm. The spike waveform templates are used to perform normalized cross correlation (NCC) with the acquired signal for spike detection. The normalization addresses problems with drift, bursting, and nonstationarity of noise, and provides normative scoring to compare different units in terms of cluster quality. Our spike sorting algorithm, D.sort, runs on the graphics processing unit (GPU) to accelerate computations. D.sort is a freely available software package (https://github.com/1804MB/Kvistiani-lab_Dsort).
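The NCC detection step can be sketched as follows for a single channel; D.sort itself is GPU-accelerated and handles whole recordings with many units, so this is only a minimal CPU illustration with an assumed threshold:

```python
import numpy as np

def ncc_detect(signal, template, threshold=0.8):
    """Slide a spike waveform template over the signal and compute the
    normalized cross correlation (NCC) in each window; window start
    indices whose NCC exceeds the threshold are detections. Because both
    window and template are mean-subtracted and unit-normalized, the
    score is invariant to amplitude drift and baseline offset."""
    m = len(template)
    t = template - template.mean()
    t /= np.linalg.norm(t)
    hits = []
    for i in range(len(signal) - m + 1):
        w = signal[i:i + m] - signal[i:i + m].mean()
        n = np.linalg.norm(w)
        if n == 0:
            continue  # flat window: no spike, avoid division by zero
        if float(w @ t) / n > threshold:
            hits.append(i)
    return hits
```

The same normalized score can be reused across units, which is what makes it a normative basis for comparing cluster quality.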


Author(s):  
Jufeng Yang ◽  
Dongyu She ◽  
Ming Sun

Visual sentiment analysis is attracting more and more attention with the increasing tendency to express emotions through visual content. Recent convolutional neural network (CNN) algorithms have considerably advanced emotion classification, which aims to distinguish differences among emotional categories and assigns a single dominant label to each image. However, the task is inherently ambiguous, since an image usually evokes multiple emotions and its annotation varies from person to person. In this work, we address the problem via label distribution learning (LDL) and develop a multi-task deep framework that jointly optimizes both classification and distribution prediction. While the proposed method is best suited to distribution datasets with annotations from different voters, majority voting is widely adopted as the ground truth in this area, and few datasets provide multiple affective labels. Hence, we further exploit two weak forms of prior knowledge, expressed as similarity information between labels, to generate an emotional distribution for each category. Experiments conducted on the distribution datasets (Emotion6, Flickr_LDL, and Twitter_LDL) and on the largest single-emotion datasets (Flickr and Instagram) demonstrate that the proposed method outperforms the state-of-the-art approaches.
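Generating a label distribution from a similarity prior could look roughly like this; the softmax form, the `temperature` parameter, and the similarity matrix are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

def prior_distribution(dominant, similarity, temperature=1.0):
    """Turn a single dominant emotion label into a distribution over all
    labels via a label-similarity prior: a softmax over each label's
    similarity to the dominant one. `similarity` is a (C, C) matrix;
    higher temperature flattens the resulting distribution."""
    s = np.asarray(similarity[dominant], dtype=float) / temperature
    e = np.exp(s - s.max())   # shift for numerical stability
    return e / e.sum()
```

Such soft targets let a distribution-prediction branch be trained even on datasets that only provide a single majority-voted label per image.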


Author(s):  
Thibault Laugel ◽  
Marie-Jeanne Lesot ◽  
Christophe Marsala ◽  
Xavier Renard ◽  
Marcin Detyniecki

Post-hoc interpretability approaches have proven to be powerful tools for generating explanations of the predictions made by a trained black-box model. However, they create the risk of producing explanations that result from artifacts learned by the model instead of actual knowledge from the data. This paper focuses on the case of counterfactual explanations and asks whether the generated instances can be justified, i.e. continuously connected to some ground-truth data. We evaluate the risk of generating unjustified counterfactual examples by investigating the local neighborhoods of the instances whose predictions are to be explained, and show that this risk is quite high for several datasets. Furthermore, we show that most state-of-the-art approaches do not differentiate justified from unjustified counterfactual examples, leading to less useful explanations.
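A discrete stand-in for the justification test (linking a counterfactual to a ground-truth instance through an ε-chain of same-class points) might look like this; the paper's actual criterion is continuous connectivity, so this graph-based proxy is only illustrative:

```python
import numpy as np

def connected(cf, anchor, X_class, eps):
    """Is there an eps-chain from counterfactual `cf` to the ground-truth
    instance `anchor`, passing through same-class points `X_class`?
    An isolated counterfactual (no chain) would count as unjustified."""
    pts = np.vstack([np.asarray(cf, float)[None], np.asarray(X_class, float)])
    anchor = np.asarray(anchor, float)
    reached = np.zeros(len(pts), dtype=bool)
    reached[0] = True  # start the chain at the counterfactual
    changed = True
    while changed:
        changed = False
        for i in range(len(pts)):
            if reached[i]:
                continue
            d = np.linalg.norm(pts[reached] - pts[i], axis=1)
            if (d <= eps).any():  # point i is within eps of the chain
                reached[i] = True
                changed = True
    hit = np.linalg.norm(pts[reached] - anchor, axis=1)
    return bool((hit <= eps).any())
```

Shrinking `eps` makes the test stricter, mirroring how a counterfactual that sits in an isolated artifact region of the model fails to connect to any real data.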


2019 ◽  
Vol 11 (3) ◽  
pp. 286 ◽  
Author(s):  
Jiangqiao Yan ◽  
Hongqi Wang ◽  
Menglong Yan ◽  
Wenhui Diao ◽  
Xian Sun ◽  
...  

Recently, methods based on the Faster region-based convolutional neural network (R-CNN) have been popular for multi-class object detection in remote sensing images due to their outstanding detection performance. These methods generally propose candidate regions of interest (ROIs) through a region proposal network (RPN), and regions with sufficiently high intersection-over-union (IoU) values against the ground truth are treated as positive samples for training. In this paper, we find that the detection results of such methods are sensitive to the choice of IoU threshold. Specifically, detection performance on small objects is poor when a normal, higher threshold is chosen, while a lower threshold results in poor localization accuracy caused by a large quantity of false positives. To address these issues, we propose a novel IoU-adaptive deformable R-CNN framework for multi-class object detection. Specifically, by analyzing the different roles that IoU can play in different parts of the network, we propose an IoU-guided detection framework to reduce the loss of small-object information during training. In addition, an IoU-based weighted loss is designed, which learns the IoU information of positive ROIs to improve detection accuracy effectively. Finally, class-aspect-ratio-constrained non-maximum suppression (CARC-NMS) is proposed, which further improves the precision of the results. Extensive experiments validate the effectiveness of our approach, and we achieve state-of-the-art detection performance on the DOTA dataset.
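The IoU quantity whose threshold choice the paper adapts is computed as follows for axis-aligned boxes (a standard definition, not the paper's code):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as
    (x1, y1, x2, y2). Small objects yield small intersections, so a
    fixed high IoU threshold starves them of positive training samples."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```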

