Evaluation of 2D-/3D-Feet-Detection Methods for Semi-Autonomous Powered Wheelchair Navigation

2021 ◽  
Vol 7 (12) ◽  
pp. 255
Author(s):  
Cristian Vilar Giménez ◽  
Silvia Krug ◽  
Faisal Z. Qureshi ◽  
Mattias O’Nils

Powered wheelchairs have enhanced the mobility and quality of life of people with special needs. The next step in their development is to incorporate sensors and electronic systems for new control applications and capabilities that improve usability and operational safety, such as obstacle avoidance or autonomous driving. However, autonomous powered wheelchairs require safe navigation in different environments and scenarios, making their development complex. In our research, we propose instead to develop contactless control for powered wheelchairs, where the position of the caregiver is used as a control reference. Hence, we used a depth camera to recognize the caregiver and simultaneously measure their distance from the powered wheelchair. In this paper, we compare two approaches for real-time object recognition: 3DHOG, a hand-crafted object descriptor based on a 3D extension of the histogram of oriented gradients (HOG), and a convolutional neural network based on YOLOv4-Tiny. To evaluate both approaches, we constructed Miun-Feet, a custom dataset of labeled images of caregivers' feet in different scenarios, with varying backgrounds, objects, and lighting conditions. The experimental results showed that YOLOv4-Tiny outperformed 3DHOG in all the analyzed cases. The results also showed that recognition accuracy did not improve when the depth channel was used, enabling the use of a monocular RGB camera instead of a depth camera and reducing the computational cost and heat dissipation limitations. Hence, the paper proposes an additional method to compute the caregiver's distance and angle from the powered wheelchair (PW) using only the RGB data. This work shows that it is feasible to use the location of the caregiver's feet as a control signal for a powered wheelchair and that a monocular RGB camera suffices to compute their relative positions.
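The abstract does not spell out the monocular distance-and-angle computation, but a standard way to do it uses the pinhole camera model. The sketch below is illustrative only: the focal length, principal point, and assumed real-world foot height are hypothetical placeholders, not values from the paper.

```python
import math

# Illustrative camera intrinsics (not from the paper): focal length in
# pixels and horizontal principal point of a 640x480 RGB image.
FOCAL_PX = 525.0
CX = 320.0

# Assumed real-world height of the detected foot region, in metres.
FOOT_HEIGHT_M = 0.12

def distance_and_angle(bbox):
    """Estimate range and bearing to a detected foot from its RGB bounding box.

    bbox = (x_min, y_min, x_max, y_max) in pixels.
    Range follows from similar triangles: Z = f * H / h_px.
    Bearing follows from the horizontal offset of the box centre.
    """
    x_min, y_min, x_max, y_max = bbox
    h_px = y_max - y_min
    dist = FOCAL_PX * FOOT_HEIGHT_M / h_px        # range in metres
    u = (x_min + x_max) / 2.0                     # box centre column
    angle = math.atan2(u - CX, FOCAL_PX)          # bearing in radians
    return dist, angle

# A 63-pixel-tall box centred on the image axis: 1.0 m dead ahead.
dist, angle = distance_and_angle((300, 200, 340, 263))
```

With known intrinsics and an assumed foot height, range and bearing together give the caregiver's relative position without a depth channel.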

Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Hoanh Nguyen

Vehicle detection is a crucial task in autonomous driving systems. Due to the large variance of scales and heavy occlusion of vehicles in an image, this task is still a challenging problem. Recent vehicle detection methods typically exploit a feature pyramid to detect vehicles at different scales. However, drawbacks in the design prevent the multiscale features from being fully exploited. This paper introduces a feature pyramid architecture to address this problem. In the proposed architecture, an improved region proposal network is designed to generate intermediate feature maps, which are then used to add more discriminative representations to the feature maps generated by the backbone network, while also reducing the computational cost of the network. To generate more discriminative feature representations, this paper introduces a multilayer enhancement module that reweights the feature maps generated by the backbone network to increase the discrimination between foreground objects and background regions in each feature map. In addition, an adaptive RoI pooling module is proposed to pool features from all pyramid levels for each proposal and fuse them for the detection network. Experimental results on the KITTI vehicle detection benchmark and the PASCAL VOC 2007 car dataset show that the proposed approach achieves better detection performance than recent vehicle detection methods.
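As a rough illustration of the adaptive RoI pooling idea (pooling one proposal from every pyramid level and fusing, rather than assigning it to a single level), here is a minimal single-channel sketch using nested lists as feature maps. It is not the paper's implementation: real detectors operate on multi-channel tensors with RoIAlign-style interpolation and learned fusion.

```python
def roi_max_pool(fmap, roi, stride):
    """Max-pool the RoI from one pyramid level.

    fmap: 2D list of floats; roi: (x0, y0, x1, y1) in image coordinates;
    stride: downsampling factor of this level relative to the input image.
    """
    x0, y0, x1, y1 = (c // stride for c in roi)
    vals = [fmap[y][x]
            for y in range(y0, min(y1 + 1, len(fmap)))
            for x in range(x0, min(x1 + 1, len(fmap[0])))]
    return max(vals)

def adaptive_roi_feature(pyramid, roi):
    """Pool the same RoI from every level and fuse by element-wise max,
    instead of assigning the proposal to a single pyramid level.

    pyramid: list of (feature_map, stride) pairs, finest level first.
    """
    return max(roi_max_pool(fmap, roi, s) for fmap, s in pyramid)
```

The point of fusing across levels is that a proposal whose strongest evidence lives on a coarse level is not penalized by being routed only to a fine one.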


Sensors ◽  
2021 ◽  
Vol 21 (10) ◽  
pp. 3327
Author(s):  
Vicente Román ◽  
Luis Payá ◽  
Adrián Peidró ◽  
Mónica Ballesta ◽  
Oscar Reinoso

Over the last few years, mobile robotics has experienced great development thanks to the wide variety of problems that can be solved with this technology. An autonomous mobile robot must be able to operate in a priori unknown environments, planning its trajectory and navigating to the required target points. To this end, it is crucial to solve the mapping and localization problems with accuracy and acceptable computational cost. Omnidirectional vision systems have emerged as a robust choice thanks to the large amount of information they can extract from the environment. The images must be processed to obtain relevant information that permits robustly solving the mapping and localization problems. The classical frameworks to address these problems are based on the extraction, description and tracking of local features or landmarks. More recently, however, a new family of methods has emerged as a robust alternative in mobile robotics. It consists of describing each image as a whole, which leads to conceptually simpler algorithms. While methods based on local features have been extensively studied and compared in the literature, those based on global appearance still merit in-depth study of their performance. In this work, a comparative evaluation of six global-appearance description techniques in localization tasks is carried out, both in terms of accuracy and computational cost. Several sets of images captured in a real environment are used for this purpose, including typical phenomena such as changes in lighting conditions, visual aliasing, partial occlusions and noise.
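A minimal example of the global-appearance idea: describe each image as a whole (here, just a grid of block-mean intensities, a deliberately simplified stand-in for the six descriptors the paper evaluates) and localize by nearest-neighbour search over the map images. All names and values are illustrative.

```python
def global_descriptor(img, grid=2):
    """Describe the whole image by the mean intensity of grid x grid blocks,
    a minimal holistic (global-appearance) descriptor.

    img: 2D list of intensities whose dimensions are divisible by grid.
    """
    h, w = len(img), len(img[0])
    bh, bw = h // grid, w // grid
    desc = []
    for gy in range(grid):
        for gx in range(grid):
            block = [img[y][x]
                     for y in range(gy * bh, (gy + 1) * bh)
                     for x in range(gx * bw, (gx + 1) * bw)]
            desc.append(sum(block) / len(block))
    return desc

def localize(query, map_images):
    """Return the index of the map image whose global descriptor is closest
    (squared Euclidean distance) to the query's descriptor."""
    dq = global_descriptor(query)
    dists = [sum((a - b) ** 2 for a, b in zip(dq, global_descriptor(m)))
             for m in map_images]
    return dists.index(min(dists))
```

Because each image becomes one fixed-length vector, localization reduces to a distance computation per map image, which is the conceptual simplicity the abstract refers to.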


2021 ◽  
pp. 1-13
Author(s):  
Jonghyuk Kim ◽  
Jose Guivant ◽  
Martin L. Sollie ◽  
Torleiv H. Bryne ◽  
Tor Arne Johansen

This paper addresses the fusion of pseudorange/pseudorange-rate observations from the global navigation satellite system (GNSS) with inertial-visual simultaneous localisation and mapping (SLAM) to achieve reliable navigation of unmanned aerial vehicles. This work extends a previous simulation-based study [Kim et al. (2017). Compressed fusion of GNSS and inertial navigation with simultaneous localisation and mapping. IEEE Aerospace and Electronic Systems Magazine, 32(8), 22–36] to a real-flight dataset collected from a fixed-wing unmanned aerial vehicle platform. The dataset consists of measurements from visual landmarks, an inertial measurement unit, and pseudoranges and pseudorange rates. We propose a novel all-source navigation filter, termed compressed pseudo-SLAM, which can seamlessly integrate all available information in a computationally efficient way. In this framework, a local map is dynamically defined around the vehicle, and the vehicle and local landmark states are updated within this region. A global map includes the remaining landmarks and is updated at a much lower rate by accumulating (or compressing) the local-to-global correlation information within the filter. We show that the horizontal navigation error is effectively constrained with one satellite vehicle and one landmark observation. The computational cost is also analysed, demonstrating the efficiency of the method.
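For readers unfamiliar with the GNSS side of such a fusion, the innovation for a single pseudorange observation has the standard form sketched below. The function and the numbers are illustrative, not taken from the paper's filter, and receiver clock drift and atmospheric terms are omitted.

```python
import math

def pseudorange_residual(sat_pos, rx_pos, clock_bias_m, measured_rho_m):
    """Innovation for one GNSS pseudorange: measured pseudorange minus the
    predicted one (geometric satellite-receiver range plus receiver clock
    bias, all in metres). A filter uses this residual to correct the
    vehicle state; positions are 3-tuples in a common ECEF-like frame.
    """
    predicted = math.dist(sat_pos, rx_pos) + clock_bias_m
    return measured_rho_m - predicted
```

Even a single such residual constrains the receiver position along the line of sight to the satellite, which is why one satellite plus one landmark can bound horizontal drift.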


2021 ◽  
Vol 18 (2) ◽  
pp. 172988142110087
Author(s):  
Qiao Huang ◽  
Jinlong Liu

The vision-based road lane detection technique plays a key role in driver assistance systems. While existing lane recognition algorithms have demonstrated detection rates above 90%, validation tests are usually conducted on limited scenarios, and significant gaps still exist when they are applied in real-life autonomous driving. The goal of this article was to identify these gaps and to suggest research directions that can bridge them. A straight-lane detection algorithm based on the linear Hough transform (HT) was used in this study as an example to evaluate possible perception issues under challenging scenarios, including various road types, different weather conditions and shade, changing lighting conditions, and so on. The study found that the HT-based algorithm achieved an acceptable detection rate against simple backgrounds, such as driving on a highway or under conditions with distinguishable contrast between lane boundaries and their surroundings. However, it failed to recognize road dividing lines under varied lighting conditions. The failure was attributed to the binarization process failing to extract lane features before detection. In addition, the existing HT-based algorithm is easily misled by lane-like interference, such as guardrails, railways, bikeways, utility poles, pedestrian sidewalks, buildings, and so on. Overall, these findings support the need for further improvements to current road lane detection algorithms so that they are robust against interference and illumination variations. Moreover, the widely used algorithm has the potential to raise the lane boundary detection rate if an appropriate search range restriction and illumination classification process is added.
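For reference, the linear Hough transform at the heart of the evaluated algorithm can be sketched in a few lines: each edge pixel votes for every (theta, rho) line it could lie on, and the strongest accumulator bin gives the dominant straight boundary. This is a minimal illustration; production pipelines (e.g., OpenCV's HoughLines) add edge detection, tuned bin resolution, and vote thresholds.

```python
import math

def hough_line(points, n_theta=180):
    """Linear Hough transform over a list of (x, y) edge pixels.

    Each pixel votes for all lines rho = x*cos(theta) + y*sin(theta)
    through it; the accumulator bin with the most votes is returned as
    (theta in radians, rho in pixels, vote count).
    """
    acc = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(t, rho)] = acc.get((t, rho), 0) + 1
    (t_best, rho_best), votes = max(acc.items(), key=lambda kv: kv[1])
    return math.pi * t_best / n_theta, rho_best, votes
```

The abstract's failure modes map directly onto this sketch: if binarization produces no lane edge pixels, the accumulator is empty of lane votes, and guardrail or sidewalk edges vote for equally strong lane-like lines.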


2018 ◽  
Vol 144 ◽  
pp. 04010
Author(s):  
Bobin Saji George ◽  
M. Ajmal ◽  
S. R. Deepu ◽  
M. Aswin ◽  
D. Ribin ◽  
...  

Intensifying electronic component power dissipation levels, shortening product design cycle times, and growing requirements for more compact and reliable electronic systems with greater functionality have heightened the need for thermal design tools that enable accurate solutions to be generated and quickly assessed. The present numerical study aims at developing a computational tool in OpenFOAM that can predict the heat dissipation rate and temperature profile of any electronic component in operation. A suitable computational domain with a defined aspect ratio is chosen. For the analysis, the buoyantBoussinesqSimpleFoam solver available in OpenFOAM is used, modified to suit the investigation with specified initial and boundary conditions. An experimental setup was built with the same dimensions used in the numerical study. Thermocouples were calibrated and placed at specified locations. For different heat inputs, the temperatures were recorded at steady state and compared with results from the numerical study.
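The solver's name reflects the Boussinesq approximation, in which density variations are neglected everywhere except in the gravity term, where they vary linearly with temperature. A one-line illustration of the resulting buoyant acceleration follows; the expansion coefficient used in the example is a typical air-like value, not one from the study.

```python
def boussinesq_buoyancy(beta, temp, temp_ref, g=9.81):
    """Buoyant acceleration under the Boussinesq approximation:
    a = -g * beta * (T - T_ref), with beta the thermal expansion
    coefficient (1/K) and T_ref the reference temperature (K)."""
    return -g * beta * (temp - temp_ref)

# Air-like expansion coefficient, 10 K above the reference temperature.
accel = boussinesq_buoyancy(3.4e-3, 310.0, 300.0)
```

This linearization is what makes natural-convection cases like component cooling tractable with an incompressible steady-state (SIMPLE) solver.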


Author(s):  
Arman Khalighi ◽  
Matthew Blomquist ◽  
Abhijit Mukherjee

In recent years, heat dissipation in micro-electronic systems has become a significant design limitation for many component manufacturers. As electronic devices become smaller, the amount of heat generated per unit area increases significantly. Current heat dissipation systems implement forced convection with both air and liquid media. However, nanofluids may present an advantageous cooling solution. In the present study, a model has been developed to estimate the enhancement of heat transfer when nanoparticles are added to a base fluid in a single microchannel. The model assumes a homogeneous nanofluid mixture, with thermo-physical properties based on previous experimental and simulation-based data. The effect of nanofluid concentration on the dynamics of the bubble has been simulated. The results show that the change in bubble contact angle due to deposition of the nanoparticles has a greater effect on wall heat transfer than the change in thermo-physical properties from using the nanofluid.
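The kind of thermo-physical property model the abstract refers to can be illustrated with the classical Maxwell mixture rule for the effective thermal conductivity of a dilute suspension of spherical nanoparticles. The specific correlations used in the paper are not given here, so this is a generic sketch with illustrative water/alumina-like values.

```python
def maxwell_k_eff(k_fluid, k_particle, phi):
    """Maxwell model for the effective thermal conductivity of a dilute,
    homogeneous suspension of spherical particles:

        k_eff = k_f * (k_p + 2 k_f + 2 phi (k_p - k_f))
                    / (k_p + 2 k_f -   phi (k_p - k_f))

    phi is the particle volume fraction (dimensionless).
    """
    num = k_particle + 2.0 * k_fluid + 2.0 * phi * (k_particle - k_fluid)
    den = k_particle + 2.0 * k_fluid - phi * (k_particle - k_fluid)
    return k_fluid * num / den

# 1% alumina-like particles (k ~ 40 W/m K) in a water-like base fluid
# (k ~ 0.6 W/m K): roughly a 3% conductivity enhancement.
k_eff = maxwell_k_eff(0.6, 40.0, 0.01)
```

The modest size of this enhancement at low volume fractions is consistent with the paper's finding that contact-angle changes from particle deposition can dominate the property effect.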


2020 ◽  
pp. 127-135 ◽  
Author(s):  
Sakir Parlakyıldız ◽  
Muhsin Tunay Gencoglu ◽  
Mehmet Sait Cengiz

The main purpose of new studies investigating pantograph-catenary interaction in electric rail systems is to detect malfunctions. In pantograph-catenary interaction studies, cameras with non-contact fault detection methods are used extensively in the literature. However, none of these studies analyse the lighting conditions that improve visual function for cameras. The main subject of this study is to increase the visibility of cameras used in railway systems. In this context, adequate illuminance of the test environment is one of the most important parameters affecting fault detection success: with optimal lighting, the fault detection rate increases. For this purpose, a camera and an 18 W LED luminaire were placed on a wagon, one of the electric rail system elements. This study followed the CIE 140:2019 (2nd edition) standard. With this lighting, it is easier for cameras to detect faults in electric trains on the move. In conclusion, in scientific studies, especially in rail systems, the lighting of mobile test environments such as the pantograph-catenary should be optimal. In environments with improved visibility conditions, the fault detection rate increases.


2020 ◽  
pp. 123-145
Author(s):  
Sushma Jaiswal ◽  
Tarun Jaiswal

In computer vision, object detection is an important and very active area of study. It is applied in numerous fields, such as security surveillance and autonomous driving. Deep-learning-based object detection techniques have developed at a very fast pace and have attracted the attention of many researchers. A major focus of the 21st century is the comprehensive and rigorous development of the object-detection framework. In this investigation, we first examine and evaluate the various object detection approaches and designate the benchmark datasets. We also deliver a wide-ranging overview of object detection approaches in an organized way, covering both one-stage and two-stage detectors. Lastly, we consider the construction of these object detection approaches to suggest directions for further research.


Sensors ◽  
2021 ◽  
Vol 21 (23) ◽  
pp. 7921
Author(s):  
Toshiya Arakawa

Drowsiness is among the important factors that cause traffic accidents; therefore, a monitoring system is necessary to detect the state of a driver's drowsiness. Driver monitoring systems usually detect three types of information: biometric information, vehicle behavior, and the driver's graphic (image-based) information. This review summarizes the research and development trends of drowsiness detection systems based on various methods, and drowsiness detection methods based on the three types of information are discussed. A prospect for arousal-level detection and estimation technology for autonomous driving is also presented. At autonomous driving levels 4 and 5, where the driver is not the primary driving agent, the technology will not be used to detect and estimate wakefulness for accident prevention; rather, it can be used to ensure that the driver has enough sleep to arrive comfortably at the destination.


Author(s):  
Baoquan Wang ◽  
Tonghai Jiang ◽  
Xi Zhou ◽  
Bo Ma ◽  
Fan Zhao ◽  
...  

For anomaly detection in time-series data, supervised methods require labeled data. In existing semi-supervised methods, the range of outlier factors varies with the data, the model and time, so the threshold for determining abnormality is difficult to obtain; in addition, the computational cost of calculating outlier factors from the other data points in the data set is very large. These drawbacks make such methods difficult to apply in practice. This paper proposes a framework named LSTM-VE, which uses clustering combined with a visualization method to roughly label normal data, and then uses the normal data to train a long short-term memory (LSTM) neural network for semi-supervised anomaly detection. The variance error (VE) of the normal-class classification probability sequence is used as the outlier factor. The framework enables anomaly detection based on deep learning to be applied in practice, and using VE avoids the shortcomings of existing outlier factors while achieving better performance. In addition, the framework is easy to extend because the LSTM neural network can be replaced with other classification models. Experiments on labeled and real unlabeled data sets show that the framework outperforms replicator neural networks with reconstruction error (RNN-RS) and has good scalability as well as practicality.
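The VE outlier factor can be sketched as follows: once the trained classifier emits a probability sequence for the normal class over a window, the variance of that sequence serves as the anomaly score. This is a hedged illustration of the idea rather than the paper's exact formulation, and the threshold is application-dependent.

```python
from statistics import pvariance

def variance_error(prob_seq):
    """Outlier factor: the (population) variance of the classifier's
    normal-class probability over a window. A stable, confident sequence
    gives a low score; erratic probabilities give a high one."""
    return pvariance(prob_seq)

def is_anomaly(prob_seq, threshold):
    """Flag a window whose variance error exceeds the chosen threshold."""
    return variance_error(prob_seq) > threshold
```

Because the score depends only on the classifier's own output sequence, it avoids the cost of comparing each point against the rest of the data set, which is the practicality argument the abstract makes.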

