Gait Phase Recognition Using Fuzzy Logic Regulation with Multisensor Data Fusion

2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Gao Weidong ◽  
Zhao Zhenwei

The health challenges posed by an aging population and chronic noncommunicable diseases are increasingly severe. Scientific physical exercise is of great significance for preventing chronic disease, intervening in subhealth, and promoting health. However, improper or excessive exercise can cause injury; research shows that the sports injury rate among people who exercise regularly is as high as 85%. To address the low accuracy of single-sensor gait analysis, a real-time gait detection algorithm based on a piezoelectric film and a motion sensor is proposed. On this basis, a gait phase recognition method based on fuzzy logic is proposed, which enhances the ability of gait spatiotemporal measurement. Experimental results show that the proposed gait modeling method based on the ground reaction force (GRF) signal can effectively recognize and quantify various gait patterns. At the same time, the introduction of heterogeneous sensor data fusion can effectively compensate for the accuracy limitations of single-sensor measurement and improve the estimation accuracy of gait spatiotemporal parameters.
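
As a rough illustration of the fuzzy-logic phase recognition described above, the Python sketch below classifies one GRF sample into a gait phase using simple membership functions and a small rule base. The membership breakpoints, the heel/toe channel layout, and the four-phase rule base are illustrative assumptions, not the authors' published design.

# A minimal sketch of fuzzy-logic gait phase recognition from two
# ground-reaction-force (GRF) channels. Breakpoints and the rule base
# are illustrative assumptions.
import numpy as np

def mu_low(x):   # membership in "low pressure", input normalized to [0, 1]
    return np.clip(1.0 - x / 0.4, 0.0, 1.0)

def mu_high(x):  # membership in "high pressure"
    return np.clip((x - 0.3) / 0.4, 0.0, 1.0)

def gait_phase(heel, toe):
    """Classify one sample from normalized heel/toe GRF values."""
    # Rule base: each phase's firing strength is the min (fuzzy AND)
    # of its antecedent memberships.
    strengths = {
        "heel_strike": min(mu_high(heel), mu_low(toe)),
        "mid_stance":  min(mu_high(heel), mu_high(toe)),
        "push_off":    min(mu_low(heel),  mu_high(toe)),
        "swing":       min(mu_low(heel),  mu_low(toe)),
    }
    return max(strengths, key=strengths.get)

print(gait_phase(heel=0.9, toe=0.1))  # -> heel_strike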

Biosensors ◽  
2020 ◽  
Vol 10 (9) ◽  
pp. 109
Author(s):  
Binbin Su ◽  
Christian Smith ◽  
Elena Gutierrez Farewik

Gait phase recognition is of great importance in the development of assistance-as-needed robotic devices, such as exoskeletons. In order for a powered exoskeleton with phase-based control to provide proper assistance to the wearer during gait, the user’s current gait phase must first be identified accurately. Gait phase recognition can potentially be achieved through input from wearable sensors. The deep convolutional neural network (DCNN) is a machine learning approach widely used in image recognition. User kinematics, measured from inertial measurement unit (IMU) output, can be considered an ‘image’, since the data exhibit local ‘spatial’ patterns when the sensor channels are arranged in sequence. We propose a specialized DCNN to distinguish five phases in a gait cycle, based on IMU data and labeled with foot switch information. The DCNN achieved approximately 97% accuracy in an offline evaluation of gait phase recognition. Accuracy was highest in the swing phase and lowest in terminal stance.
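
A minimal PyTorch sketch of the kind of network described, treating an IMU window (channels x time) as a 2-D 'image' and emitting five phase logits. The layer sizes and the 6-channel, 100-sample window are assumptions for illustration, not the authors' architecture.

# Minimal DCNN sketch: IMU window as a 2-D "image" -> five gait phases.
import torch
import torch.nn as nn

class GaitPhaseCNN(nn.Module):
    def __init__(self, n_channels=6, n_phases=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=(3, 5), padding=(1, 2)),
            nn.ReLU(),
            nn.MaxPool2d((1, 2)),                  # pool along time only
            nn.Conv2d(16, 32, kernel_size=(3, 5), padding=(1, 2)),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d((n_channels, 1)),  # collapse the time axis
        )
        self.classifier = nn.Linear(32 * n_channels, n_phases)

    def forward(self, x):            # x: (batch, 1, channels, time)
        f = self.features(x)
        return self.classifier(f.flatten(1))

# One window of 6 IMU channels x 100 samples -> 5 phase logits.
logits = GaitPhaseCNN()(torch.randn(1, 1, 6, 100))
print(logits.shape)  # torch.Size([1, 5])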


Author(s):  
Zude Zhou ◽  
Huaiqing Wang ◽  
Ping Lou

In previous chapters, the engineering and scientific foundations of manufacturing intelligence (such as knowledge-based systems, multi-agent systems, data mining and knowledge discovery, and computational intelligence) were discussed in detail. Sensor integration and data fusion is another important theory of manufacturing intelligence. With the development of integrated systems, there is an urgent need to improve system autonomy and intelligence, as the complexity and scale of these systems continue to grow. Such systems need to be more sensitive to their working environment and internal state, and single-sensor technology can hardly meet these requirements. Multi-sensor and data fusion technology is therefore employed in automated and intelligent manufacturing: when information redundancy and complementarity are used reasonably, it is more comprehensive and accurate than traditional single-sensor technology, and in theory the outputs of multiple sensors validate one another, as illustrated by the sketch below. Multi-sensor integration is a relatively new concept for intelligent manufacturing, and sensor-integration-based intelligent manufacturing is without doubt a development direction for manufacturing in the future. With reference to the information fusion problem of multi-sensor integration systems, this chapter first reviews the development state, technical background, application scope, and basic meaning of multi-sensor integration and data fusion. Secondly, the classification, levels, system structure, and function model of data fusion systems are discussed. The theoretical methods of data fusion are then introduced. Finally, attention is paid to cutting-tool condition detection, machine thermal error compensation, and online detection and error compensation, as these are the main applications of multi-sensor data fusion technology in intelligent manufacturing.
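
To make the redundancy argument concrete: when two sensors independently measure the same quantity, inverse-variance weighted fusion yields an estimate whose variance is lower than either sensor's alone. The sketch below uses made-up measurement values and noise variances; it is an illustration of the general principle, not a method from the chapter.

# Minimal sketch of redundant-sensor fusion by inverse-variance weighting.
def fuse(z1, var1, z2, var2):
    """Optimal linear fusion of two independent unbiased measurements."""
    w1 = var2 / (var1 + var2)          # weight grows as the *other*
    w2 = var1 / (var1 + var2)          # sensor gets noisier
    fused = w1 * z1 + w2 * z2
    fused_var = (var1 * var2) / (var1 + var2)  # always < min(var1, var2)
    return fused, fused_var

print(fuse(20.3, 0.5, 19.8, 0.2))  # fused value and reduced variance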


2012 ◽  
Vol 241-244 ◽  
pp. 993-997
Author(s):  
Xiu Ying Xu ◽  
Cao Jun Huang

To address the shortcomings of traditional tachometry, such as high cost, low precision, and the large error introduced when the land wheel slips, the idea of using multisensor data fusion to measure the speed of machinery operating in the field is proposed. A fusion system structure is introduced and a Kalman-filter data fusion algorithm is applied. The system, built around an ATS665 sensor, GPS, and an MSP430F1121 microcontroller, can be used both for field machinery speed measurement and in vehicle navigation and positioning systems, and offers high reliability, good cost performance, and ease of use.
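
A minimal sketch of the kind of Kalman-filter fusion described: a one-dimensional filter blends a wheel-sensor speed reading (fast but slip-prone) with GPS speed (slower but drift-free). All noise values and readings below are illustrative assumptions, not the paper's tuning.

# 1-D Kalman filter fusing wheel-sensor and GPS speed measurements.
def kalman_speed(wheel, gps, q=0.05, r_wheel=0.30, r_gps=0.10):
    x, p = wheel[0], 1.0                # initial state and covariance
    out = []
    for zw, zg in zip(wheel, gps):
        p += q                          # predict: constant-speed model
        for z, r in ((zw, r_wheel), (zg, r_gps)):
            k = p / (p + r)             # update with each measurement
            x += k * (z - x)
            p *= (1 - k)
        out.append(x)
    return out

wheel = [5.2, 5.6, 6.1, 5.9]            # m/s, with wheel-slip error
gps   = [5.0, 5.1, 5.3, 5.4]            # m/s
print(kalman_speed(wheel, gps))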


2020 ◽  
Vol 17 (2) ◽  
pp. 172988142091176
Author(s):  
Raul Dominguez ◽  
Mark Post ◽  
Alexander Fabisch ◽  
Romain Michalec ◽  
Vincent Bissonnette ◽  
...  

Multisensor data fusion plays a vital role in providing autonomous systems with the environmental information crucial for reliable functioning. In this article, we summarize the modular structure of the newly developed and released Common Data Fusion Framework and explain how it is used. Sensor data are registered and fused within the Common Data Fusion Framework to produce comprehensive 3D environment representations and pose estimations. We first give a complete overview of the framework and of the software components proposed to model this process in a reusable manner, then list the data fusion algorithms provided, and finally exemplify the Common Data Fusion Framework approach through the case of 3D reconstruction from 2D images. The Common Data Fusion Framework has been deployed and tested in various scenarios, including robots performing planetary rover exploration and tracking of orbiting satellites.
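
A minimal sketch of the modular, reusable fusion-component idea such a framework embodies: processing nodes share a uniform interface and are chained into a pipeline (here, 2D features feeding a pose estimate). All class and method names are hypothetical and do not reflect the actual Common Data Fusion Framework API.

# Hypothetical modular fusion pipeline; not the CDFF API.
from abc import ABC, abstractmethod

class FusionNode(ABC):
    @abstractmethod
    def process(self, data):
        """Consume one input sample, return the node's output."""

class FeatureExtractor(FusionNode):
    def process(self, image):
        return {"keypoints": len(image)}              # stand-in for 2D features

class PoseEstimator(FusionNode):
    def process(self, features):
        return {"pose": features["keypoints"] * 0.1}  # stand-in estimate

def run_pipeline(nodes, sample):
    for node in nodes:                  # each node feeds the next, so
        sample = node.process(sample)   # nodes can be reused and recombined
    return sample

print(run_pipeline([FeatureExtractor(), PoseEstimator()], [1, 2, 3]))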


2012 ◽  
Vol 466-467 ◽  
pp. 1222-1226
Author(s):  
Bin Ma ◽  
Lin Chong Hao ◽  
Wan Jiang Zhang ◽  
Jing Dai ◽  
Zhong Hua Han

In this paper, we present an equipment fault diagnosis method based on multi-sensor data fusion, in order to overcome the uncertainty, imprecision, and low reliability that arise when a single sensor is used to diagnose equipment faults. A variety of sensors collect data on the diagnosed object, and the data are fused using Dempster–Shafer (D-S) evidence theory; from the resulting changes in confidence and uncertainty, the method determines whether a fault has occurred. Experimental results show that the D-S evidence theory algorithm reduces the uncertainty of fault diagnosis results and improves diagnostic accuracy and reliability, performing better than fault diagnosis with a single sensor.
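
A minimal sketch of Dempster's rule of combination, the core of the D-S fusion step described above: two sensors assign belief masses over the same fault hypotheses, and combining them concentrates belief and shrinks the uncertainty. The fault names and mass values below are illustrative assumptions.

# Dempster's rule over singleton hypotheses plus an 'unknown' (full-set) mass.
def dempster_combine(m1, m2):
    """Combine two mass functions; 'unknown' intersects with everything."""
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            if a == b:
                combined[a] = combined.get(a, 0.0) + wa * wb
            elif a == "unknown":
                combined[b] = combined.get(b, 0.0) + wa * wb
            elif b == "unknown":
                combined[a] = combined.get(a, 0.0) + wa * wb
            else:
                conflict += wa * wb    # incompatible singleton hypotheses
    return {h: w / (1.0 - conflict) for h, w in combined.items()}

s1 = {"bearing_fault": 0.6, "gear_fault": 0.1, "unknown": 0.3}
s2 = {"bearing_fault": 0.7, "gear_fault": 0.2, "unknown": 0.1}
print(dempster_combine(s1, s2))  # belief concentrates, 'unknown' shrinks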


Sensors ◽  
2020 ◽  
Vol 20 (8) ◽  
pp. 2180
Author(s):  
Prasanna Kolar ◽  
Patrick Benavidez ◽  
Mo Jamshidi

This paper focuses on data fusion, which is fundamental to perception, one of the most important modules in any autonomous system. Over the past decade, there has been a surge in the use of smart/autonomous mobility systems. Such systems serve many areas of life, such as safe mobility for the disabled and for senior citizens, and depend on accurate sensor information in order to function optimally. This information may come from a single sensor or from a suite of sensors of the same or different modalities. We review various types of sensors, their data, and the need to fuse the data to produce the best input for the task at hand, which in this case is autonomous navigation. Obtaining such accurate data requires appropriate technology to read the sensor data, process them, eliminate or at least reduce noise, and then use them for the required tasks. We survey current data processing techniques that implement data fusion using different sensors: LiDAR, which uses light-scanning technology, and stereo/depth, monocular RGB, and time-of-flight (ToF) cameras, which use optical technology. We also review the efficiency of using fused data from multiple sensors, rather than a single sensor, in autonomous navigation tasks such as mapping, obstacle detection and avoidance, and localization. This survey will provide sensor information to researchers who intend to accomplish motion control of a robot, and details the use of LiDAR and cameras for robot navigation.
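
A minimal sketch of the read -> denoise -> fuse flow the survey describes, applied to obstacle ranging: each sensor stream is smoothed, then LiDAR and stereo-depth ranges are blended by confidence. The sensor readings, weights, and the 2.0 m stop threshold are illustrative assumptions.

# Denoise each stream, then fuse ranges with confidence weighting.
import statistics

def denoise(readings):
    """Median filter: robust to single-sample outliers and dropouts."""
    return statistics.median(readings)

def fuse_range(lidar_m, stereo_m, w_lidar=0.8):
    """Weight LiDAR higher: typically lower range noise than stereo depth."""
    return w_lidar * lidar_m + (1.0 - w_lidar) * stereo_m

lidar  = denoise([2.41, 2.39, 9.99, 2.40])   # one spurious return
stereo = denoise([2.55, 2.48, 2.51, 2.60])
d = fuse_range(lidar, stereo)
print(f"obstacle at {d:.2f} m ->", "BRAKE" if d < 2.0 else "proceed")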

