Tools for integrating inertial sensor data with video bio-loggers, including estimation of animal orientation, motion, and position

2021 ◽  
Vol 9 (1) ◽  
Author(s):  
David E. Cade ◽  
William T. Gough ◽  
Max F. Czapanskiy ◽  
James A. Fahlbusch ◽  
Shirel R. Kahane-Rapport ◽  
...  

Abstract
Bio-logging devices equipped with inertial measurement units (particularly accelerometers, magnetometers, and pressure sensors) have revolutionized our ability to study animals, as the necessary electronics have become smaller and more affordable over the last two decades. These animal-attached tags allow for fine-scale determination of behavior in the absence of direct observation, which is particularly useful in the marine realm, where direct observation is often impossible. Recent devices can also integrate more power-hungry and sensitive instruments such as hydrophones, cameras, and physiological sensors. To convert the raw voltages recorded by bio-logging sensors into biologically meaningful metrics of orientation (e.g., pitch, roll, and heading), motion (e.g., speed and specific acceleration), and position (e.g., depth and spatial coordinates), we developed a series of MATLAB tools and online instructional tutorials. Our tools are adaptable to a variety of devices, though we focus specifically on the integration of the video, audio, 3-axis accelerometer, 3-axis magnetometer, 3-axis gyroscope, pressure, temperature, light, and GPS data that are the standard outputs of Customized Animal Tracking Solutions (CATS) video tags. Our tools were developed and tested on cetacean data but are designed to be modular and adaptable to a variety of marine and terrestrial species. In this text, we describe how to use these tools, the theory and ideas behind their development, and additional tools and ideas for applying the outputs of the process to biological research. We additionally explore and address common errors that can occur during processing and discuss future applications. All code is provided open source and is designed to be useful to both novice and experienced programmers.
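As a minimal illustration of the orientation step described above (not the CATS toolkit itself, which is distributed in MATLAB), pitch and roll can be estimated from a static accelerometer sample and heading from a tilt-compensated magnetometer. The function name, the x-forward/y-right/z-down tag frame, and the at-rest assumption are illustrative choices, not details taken from the paper.

```python
import numpy as np

def orientation_from_accel_mag(acc, mag):
    """Estimate pitch, roll, and heading (radians) from one static
    3-axis accelerometer and magnetometer sample.

    Assumes an x-forward, y-right, z-down tag frame and that the
    accelerometer senses gravity only (tag at rest)."""
    ax, ay, az = acc / np.linalg.norm(acc)
    pitch = np.arcsin(-ax)             # nose-up positive
    roll = np.arctan2(ay, az)
    # De-rotate the magnetometer into the horizontal plane
    # (tilt compensation) before taking the heading angle.
    mx, my, mz = mag / np.linalg.norm(mag)
    sp, cp = np.sin(pitch), np.cos(pitch)
    sr, cr = np.sin(roll), np.cos(roll)
    mx_h = mx * cp + my * sp * sr + mz * sp * cr
    my_h = my * cr - mz * sr
    heading = np.arctan2(-my_h, mx_h)  # 0 = magnetic north
    return pitch, roll, heading
```

With dynamic data, the gravity direction must first be separated from specific acceleration (e.g., by low-pass filtering), which is part of what the full toolkit handles.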

Author(s):  
Magy Seif El-Nasr ◽  
Athanasios V. Vasilakos

With the evolution of intelligent devices, sensors, and ambient intelligent systems, it is not surprising to see many research projects starting to explore the design of intelligent artifacts in the area of art and technology; these projects take the form of art exhibits, interactive performances, and multimedia installations. In this paper, we propose a new architecture for an ambient intelligent dance performance space. Dance is an art form that explores the use of gesture and the body as means of artistic expression. This paper proposes an extension to the medium of expression currently used in dance: we seek to explore the use of the dance environment itself, including the stage lighting and music, as a medium for artistic reflection and expression. To materialize this vision, the performance space will be augmented with several sensors: physiological sensors worn by the dancers, as well as pressure-sensor mats installed on the floor to track the dancers' movements. Data from these sensors will be passed through a three-layered architecture: one layer analyzes the sensor data collected from the physiological and pressure sensors; another layer intelligently adapts the lighting and music to portray the dancer's physiological state, given artistic patterns authored through specifically developed tools; and, lastly, a third layer presents the music and lighting changes in the physical dance environment.
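The three-layered architecture can be sketched as a simple pipeline. Every name, threshold, and cue below is a hypothetical stand-in, since the paper does not specify an implementation.

```python
def analysis_layer(samples):
    """Layer 1: reduce raw physiological samples to a crude arousal
    estimate in [0, 1] (here, a normalized mean heart rate)."""
    hr = sum(s["heart_rate"] for s in samples) / len(samples)
    return min(max((hr - 60) / 120, 0.0), 1.0)

def adaptation_layer(arousal, patterns):
    """Layer 2: map the dancer's state onto an authored artistic pattern."""
    key = "intense" if arousal > 0.5 else "calm"
    return patterns[key]

def presentation_layer(cue):
    """Layer 3: emit lighting/music commands for the physical space."""
    return f"lights={cue['lighting']} music={cue['music']}"

# Illustrative authored patterns and sensor readings.
patterns = {
    "calm": {"lighting": "soft-blue", "music": "ambient"},
    "intense": {"lighting": "strobe-red", "music": "percussive"},
}
samples = [{"heart_rate": 150}, {"heart_rate": 162}]
cue = adaptation_layer(analysis_layer(samples), patterns)
print(presentation_layer(cue))  # prints "lights=strobe-red music=percussive"
```

The layering keeps the artistic authoring tools (layer 2) decoupled from both sensing and actuation, which is the main design point of the proposed architecture.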


2016 ◽  
Vol 251 ◽  
pp. 61-67
Author(s):  
Harald Loose ◽  
Katja Orlowski

The paper deals with the determination of gait parameters using inertial measurement units (IMUs). An IMU incorporates three microelectromechanical sensors: a triple-axis gyroscope, accelerometer, and magnetometer. A standard experimental setup for observing the locomotion system using seven Xsens MTw sensors was developed; the sensors are applied to the lower limbs and the pelvis of the subject. Synchronization of the data from all sensor components (gyroscope, accelerometer, and magnetometer), as well as onboard estimation of orientation, is provided by the Xsens Awinda hardware and software. The strapdown data are received at a rate of 60 Hz. The output of a single IMU allows motion analysis of the sensor unit itself as well as of the limb to which the sensor is mounted. Stable and reliable algorithms for processing the gait data and calculating gait features from a single sensor are developed and evaluated. These algorithms are based on precise determination of each gait cycle: in the middle of the stance phase the foot is stationary on the floor, which predetermines the initial conditions for calculating foot velocities and distances by integration. Various features of the gait cycle are investigated, as well as, for example, dependencies between features or on gait velocity. Applying seven sensors to the limbs of the locomotion system provides measurements of their 3D motion observed in an inertial coordinate system. The limbs are parts of the skeleton and are interconnected by joints; by introducing a skeleton model, the quality of the measurements is evaluated and improved. Joint angles, symmetry ratios, and other gait parameters are determined. These results can be used to analyze the gait of any subject as well as of any cohort.
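The zero-velocity idea behind these gait algorithms (the foot is stationary at mid-stance, which pins down the integration constants) can be sketched as a zero-velocity-update (ZUPT) integrator. The function below is a 1-D illustration with a hypothetical name, not the authors' code.

```python
import numpy as np

def zupt_velocity(acc, stance, dt):
    """Integrate 1-D foot acceleration into velocity, resetting to zero
    at every sample flagged as mid-stance (zero-velocity update).

    acc    : array of accelerations (m/s^2), gravity already removed
    stance : boolean array, True where the foot is flat on the floor
    dt     : sample period in seconds (e.g. 1/60 for 60 Hz data)
    """
    v = np.zeros_like(acc, dtype=float)
    for i in range(1, len(acc)):
        v[i] = 0.0 if stance[i] else v[i - 1] + acc[i] * dt
    return v
```

Without the resets, accelerometer bias would accumulate quadratically in the position estimate; resetting once per gait cycle bounds the drift to a single stride.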


Author(s):  
Mircea Fotino

The use of thick specimens (0.5 μm to 5.0 μm or more) is one of the most resourceful applications of high-voltage electron microscopy in biological research. However, the energy loss experienced by the electron beam in the specimen results in chromatic aberration and thus in a deterioration of the effective resolving power. This sets a limit on the maximum usable specimen thickness when investigating structures requiring a certain resolution level.
An experimental approach is described here in which the deterioration of the resolving power is determined as a function of specimen thickness. In a manner similar to the Rayleigh criterion, in which two image points are considered resolved at the resolution limit when their profiles overlap such that the minimum of one coincides with the maximum of the other, the resolution attainable in thick sections can be measured by the distance from minimum to maximum (or, equivalently, from 10% to 90% of maximum) of the broadened profile of a well-defined step-like object placed on the specimen.
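The 10%-to-90% rise-distance measure described above can be computed directly from a digitized edge profile. The sketch below (function name and sampling assumptions are ours) interpolates the positions of the 10% and 90% crossings of a monotonic edge.

```python
import numpy as np

def rise_distance(profile, x, lo=0.10, hi=0.90):
    """Resolution estimate from a step-edge profile: the distance over
    which the normalized intensity climbs from `lo` to `hi`.

    profile : monotonically increasing edge intensity values
    x       : positions corresponding to `profile` (same units as result)
    """
    p = (profile - profile.min()) / (profile.max() - profile.min())
    x_lo = np.interp(lo, p, x)  # position of the 10% crossing
    x_hi = np.interp(hi, p, x)  # position of the 90% crossing
    return x_hi - x_lo
```

For an ideal linear ramp the 10-90% distance is 0.8 of the full edge width; a thicker specimen broadens the profile and increases this value.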


Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 194
Author(s):  
Sarah Gonzalez ◽  
Paul Stegall ◽  
Harvey Edwards ◽  
Leia Stirling ◽  
Ho Chit Siu

The field of human activity recognition (HAR) often utilizes wearable sensors and machine learning techniques to identify the actions of a subject. This paper considers the recognition of walking and running using a support vector machine (SVM) trained on principal components derived from wearable sensor data. An ablation analysis is performed to select the subset of sensors that yields the highest classification accuracy. The paper also compares principal components across trials to assess the similarity of the trials. Five subjects were instructed to perform standing, walking, running, and sprinting on a self-paced treadmill, and the data were recorded using surface electromyography sensors (sEMGs), inertial measurement units (IMUs), and force plates. When all sensors were included, the SVM achieved over 90% classification accuracy using only the first three principal components of the data with the classes stand, walk, and run/sprint (a combined run and sprint class). Sensors placed only on the lower leg produced higher accuracies than sensors placed on the upper leg. There was a small decrease in accuracy when the force plates were ablated, but the difference may not be operationally relevant. Using only accelerometers without sEMGs was shown to decrease the accuracy of the SVM.
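The core pipeline (projection onto the first three principal components, then classification) can be sketched in a few lines. A tiny nearest-centroid classifier stands in here for the paper's SVM, and the data are synthetic; none of this is the authors' code.

```python
import numpy as np

def pca_project(X, k=3):
    """Project the rows of X onto the first k principal components."""
    Xc = X - X.mean(axis=0)
    # SVD of the centered data; rows of Vt are the principal axes.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def nearest_centroid(train_Z, train_y, Z):
    """Stand-in classifier: assign the label of the closest class centroid."""
    labels = sorted(set(train_y))
    cents = np.array([train_Z[np.array(train_y) == c].mean(axis=0)
                      for c in labels])
    d = np.linalg.norm(Z[:, None, :] - cents[None, :, :], axis=2)
    return [labels[i] for i in d.argmin(axis=1)]

# Synthetic "sensor feature" clusters for two activity classes.
rng = np.random.default_rng(0)
walk = rng.normal(0.0, 0.1, (20, 5))
run = rng.normal(0.0, 0.1, (20, 5)) + 5.0
X = np.vstack([walk, run])
y = ["walk"] * 20 + ["run"] * 20
Z = pca_project(X, k=3)
pred = nearest_centroid(Z, y, Z)
```

In practice each sensor contributes a block of feature columns, so the ablation study amounts to dropping blocks of columns from X and re-running this pipeline.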


2020 ◽  
Vol 53 (2) ◽  
pp. 15990-15997
Author(s):  
Felix Laufer ◽  
Michael Lorenz ◽  
Bertram Taetz ◽  
Gabriele Bleser

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Andrew P. Creagh ◽  
Florian Lipsmeier ◽  
Michael Lindemann ◽  
Maarten De Vos

Abstract
The emergence of digital technologies such as smartphones in healthcare applications has demonstrated the possibility of developing rich, continuous, and objective measures of multiple sclerosis (MS) disability that can be administered remotely and out of clinic. Deep convolutional neural networks (DCNNs) may capture a richer representation of healthy and MS-related ambulatory characteristics from raw smartphone-based inertial sensor data than standard feature-based methodologies. To overcome the typical limitations associated with remotely generated health data, such as low subject numbers, sparsity, and heterogeneous data, a transfer learning (TL) model built from similar large open-source datasets was proposed. Our TL framework leveraged the ambulatory information learned on human activity recognition (HAR) tasks collected from wearable smartphone sensor data. It was demonstrated that fine-tuning TL DCNN HAR models towards MS disease recognition tasks outperformed both previous support vector machine (SVM) feature-based methods and DCNN models trained end-to-end, by upwards of 8–15%. A lack of transparency in "black-box" deep networks remains one of the largest stumbling blocks to the wider acceptance of deep learning for clinical applications. Ensuing work therefore aimed to visualise DCNN decisions, attributed by relevance heatmaps, using Layer-Wise Relevance Propagation (LRP). Through the LRP framework, the patterns captured from smartphone-based inertial sensor data that were reflective of those who are healthy versus people with MS (PwMS) could begin to be established and understood. Interpretations suggested that cadence-based measures, gait speed, and ambulation-related signal perturbations were distinct characteristics that distinguished MS disability from healthy participants.
Robust and interpretable outcomes, generated from high-frequency out-of-clinic assessments, could greatly augment the current in-clinic assessment picture for PwMS, inform better disease management techniques, and enable the development of better therapeutic interventions.
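The transfer-learning setup (freeze a feature extractor pretrained on HAR data, then retrain only the final classification layer on the small target dataset) can be caricatured in numpy. The random projection standing in for the frozen convolutional layers, the logistic head, and all hyperparameters are illustrative only, not the paper's DCNN.

```python
import numpy as np

rng = np.random.default_rng(0)

# A fixed random projection plus ReLU stands in for the frozen,
# HAR-pretrained convolutional layers.
W_frozen = rng.normal(size=(10, 4))

def features(X):
    """'Pretrained' feature extractor: kept frozen during fine-tuning."""
    return np.maximum(X @ W_frozen, 0.0)

def fine_tune_head(X, y, lr=0.1, steps=2000):
    """Retrain only the final logistic layer on the small target
    dataset, which is the essence of the transfer-learning setup."""
    Z = features(X)
    w, b = np.zeros(Z.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(Z @ w + b)))  # sigmoid output
        g = p - y                                # logistic-loss gradient
        w -= lr * Z.T @ g / len(y)
        b -= lr * g.mean()
    return w, b
```

Freezing the extractor means the small clinical dataset only has to fit the few parameters of the head, which is how TL mitigates the low-subject-number problem described above.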


AIP Advances ◽  
2021 ◽  
Vol 11 (8) ◽  
pp. 085005
Author(s):  
Kevin Lauer ◽  
Geert Brokmann ◽  
Mario Bähr ◽  
Thomas Ortlepp

2021 ◽  
Vol 185 ◽  
pp. 282-291
Author(s):  
Nizam U. Ahamed ◽  
Kellen T. Krajewski ◽  
Camille C. Johnson ◽  
Adam J. Sterczala ◽  
Julie P. Greeves ◽  
...  

Sensors ◽  
2021 ◽  
Vol 21 (7) ◽  
pp. 2480
Author(s):  
Isidoro Ruiz-García ◽  
Ismael Navarro-Marchal ◽  
Javier Ocaña-Wilhelmi ◽  
Alberto J. Palma ◽  
Pablo J. Gómez-López ◽  
...  

In skiing it is important to know how the skier accelerates and inclines the skis during a turn, both to avoid injuries and to improve technique. The purpose of this pilot study with three participants was to develop and evaluate a compact, wireless, low-cost system for detecting the inclination and acceleration of skis in the field, based on inertial measurement units (IMUs). To that end, a commercial IMU board was placed on each ski behind the skier's boot. Using an attitude and heading reference system algorithm included on the sensor board, orientation and attitude data for the skis (roll, pitch, and yaw) were obtained by IMU sensor data fusion. Results demonstrate that the proposed IMU-based system can provide reliable, low-drift data for up to 11 min of continuous use in the worst case. Inertial angle data from the IMU-based system were compared with data collected by a video-based 3D-kinematic reference system to evaluate its operation in terms of data correlation and system performance. Correlation coefficients between 0.889 (roll) and 0.991 (yaw) were obtained. Mean biases from −1.13° (roll) to 0.44° (yaw) and 95% limits of agreement from 2.87° (yaw) to 6.27° (roll) were calculated for the 1-min trials. Although low mean biases were achieved, some limitations arose in the system's precision for pitch and roll estimation, which could be due to the low sampling rate allowed by the sensor data fusion algorithm and the initial zeroing of the gyroscope.
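The agreement statistics reported above (mean bias and 95% limits of agreement between the IMU angles and the video-based reference) follow the standard Bland-Altman computation, which is straightforward to reproduce. The function below is a generic sketch, not the authors' analysis code.

```python
import numpy as np

def bland_altman(imu, ref):
    """Mean bias and 95% limits of agreement between an IMU-derived
    angle series and a reference series (Bland-Altman style).

    Returns (bias, (lower_limit, upper_limit)) in the input units."""
    d = np.asarray(imu, dtype=float) - np.asarray(ref, dtype=float)
    bias = d.mean()
    half = 1.96 * d.std(ddof=1)  # half-width of the 95% limits
    return bias, (bias - half, bias + half)
```

A small bias with wide limits of agreement (as found here for roll) indicates no systematic offset but limited per-sample precision, which matches the study's interpretation.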

