Control Systems and Electronic Instrumentation Applied to Autonomy in Wheelchair Mobility: The State of the Art

Sensors ◽  
2020 ◽  
Vol 20 (21) ◽  
pp. 6326
Author(s):  
Mauro Callejas-Cuervo ◽  
Aura Ximena González-Cely ◽  
Teodiano Bastos-Filho

Automatic wheelchairs have evolved in terms of instrumentation and control, solving the mobility problems of people with physical disabilities. This work establishes the background of the instrumentation and control methods of automatic wheelchairs and prototypes, and proposes a classification within each category. To this end, a search of specialised databases was carried out for articles published between 2012 and 2019; of these, 97 documents were selected based on the inclusion and exclusion criteria. The following categories were proposed for these articles: (a) wheelchair control methods, among which are systems that implement micro-electromechanical sensors (MEMS), surface electromyography (sEMG), electrooculography (EOG), electroencephalography (EEG), and voice recognition; (b) wheelchair instrumentation, among which are obstacle detection systems, artificial vision (image and video), and navigation systems (GPS and GSM). The results of this review tend towards the use of EEG signals, head movements, voice commands, and obstacle-avoidance algorithms. The most common techniques involve classical control and thresholding to move the wheelchair. The discussion was based mainly on the characteristics of the user and the types of control. To conclude, the articles exhibited the existing limitations in their designs and possible solutions, as well as informing the physically disabled community about technological developments in this field.
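The thresholding scheme that the review identifies as the most common control technique can be sketched as follows. The signal names, normalization, and threshold values here are illustrative assumptions, not taken from any of the surveyed systems.

```python
# Sketch of threshold-based command selection over two normalized
# biosignal channels (e.g. sEMG or EOG levels in [-1, 1]).
# Threshold values are assumptions for illustration only.

FORWARD_T = 0.6  # assumed activation threshold for driving forward
TURN_T = 0.3     # assumed activation threshold for turning

def command_from_signals(forward_level, lateral_level):
    """Map two normalized biosignal levels to a discrete
    wheelchair command by simple thresholding."""
    if forward_level > FORWARD_T:
        return "forward"
    if lateral_level > TURN_T:
        return "turn_right"
    if lateral_level < -TURN_T:
        return "turn_left"
    return "stop"

print(command_from_signals(0.8, 0.0))   # forward
print(command_from_signals(0.1, -0.5))  # turn_left
```

Classical controllers then translate the discrete command into wheel velocities; the thresholding stage only decides *which* command the user intends.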

2020 ◽  
Author(s):  
Saraswathi Shanmugam ◽  
Eduardo Assunção ◽  
Ricardo Mesquita ◽  
André Veiros ◽  
Pedro D. Gaspar

A weed plant can be described as a plant that is unwanted at a specific location at a given time. Farmers have fought against weed populations for as long as land has been used for food production. In conventional agriculture, weed control contributes a considerable amount to the overall cost of the produce. Automatic weed detection is one of the viable solutions for efficiently reducing or eliminating chemicals in crop production. Research studies have focused on combining modern approaches with proposed techniques that automatically analyze and evaluate segmented weed images. This study discusses and compares weed control methods and gives special attention to describing current research in automating weed detection and control.
Keywords: Detection, Weed, Agriculture 4.0, Computational vision, Robotics
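The segmentation step mentioned above often starts by separating vegetation from soil; a classic way to do this is the Excess Green index (ExG = 2g − r − b on chromaticity-normalized RGB). The sketch below illustrates that index; the threshold value is an assumption, and real systems tune it or learn the segmentation instead.

```python
import numpy as np

def excess_green_mask(rgb, threshold=0.1):
    """Segment vegetation pixels with the Excess Green index
    ExG = 2g - r - b computed on channel-normalized RGB.
    The threshold here is an illustrative assumption."""
    rgb = rgb.astype(float)
    s = rgb.sum(axis=-1, keepdims=True)
    s[s == 0] = 1.0                      # avoid division by zero on black pixels
    n = rgb / s                          # per-pixel chromaticity (r, g, b)
    exg = 2 * n[..., 1] - n[..., 0] - n[..., 2]
    return exg > threshold

# 2x2 test image: one strongly green pixel, three soil-like pixels
img = np.array([[[20, 200, 30], [120, 110, 100]],
                [[90, 80, 70], [60, 60, 60]]], dtype=np.uint8)
print(excess_green_mask(img).sum())  # 1 vegetation pixel
```

Downstream weed/crop discrimination then operates on the connected regions of this mask rather than on raw pixels.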


Author(s):  
Joel Z. Leibo ◽  
Tomaso Poggio

This chapter provides an overview of biological perceptual systems and their underlying computational principles, focusing on the sensory sheets of the retina and cochlea and exploring how complex feature detection emerges by combining simple feature detectors in a hierarchical fashion. We also explore how the microcircuits of the neocortex implement such schemes, pointing out similarities to progress in the field of machine vision driven by deep learning algorithms. We see signs that engineered systems are catching up with the brain. For example, vision-based pedestrian detection systems are now accurate enough to be installed as safety devices in (for now) human-driven vehicles, and the speech recognition systems embedded in smartphones have become increasingly impressive. While such systems are not entirely biologically based, we note that computational neuroscience, as described in this chapter, makes up a considerable portion of their intellectual pedigree.
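The hierarchical scheme the chapter describes can be sketched in the classic simple-cell/complex-cell style: oriented "simple" detectors respond at each position, and a "complex" unit max-pools over positions to gain tolerance to where the feature appears. The filters and sizes below are illustrative assumptions, not the chapter's model.

```python
import numpy as np

VERT = np.array([[-1, 2, -1]] * 3) / 3.0   # vertical-bar detector
HORZ = VERT.T                              # horizontal-bar detector

def simple_cells(image, kernel):
    """Valid 2-D correlation: each output entry is one
    position-specific 'simple cell' response."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def complex_cell(responses):
    """Max over positions: tolerance to where the feature appears."""
    return responses.max()

img = np.zeros((6, 6))
img[:, 3] = 1.0                            # draw a vertical bar
v = complex_cell(simple_cells(img, VERT))
h = complex_cell(simple_cells(img, HORZ))
print(v > h)  # True: the vertical detector wins on a vertical bar
```

Stacking further layers of the same two operations (tuned filtering, then pooling) is the basic recipe behind both hierarchical models of visual cortex and convolutional networks.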


2021 ◽  
Vol 226 ◽  
pp. 108826
Author(s):  
Chenguang Liu ◽  
Junlin Qi ◽  
Xiumin Chu ◽  
Mao Zheng ◽  
Wei He

2021 ◽  
Vol 787 (1) ◽  
pp. 012027
Author(s):  
Yudian Li ◽  
Jiajie Dong ◽  
Kai Fei ◽  
Hao Song ◽  
Zeyi Li ◽  
...  

2021 ◽  
Vol 7 (4) ◽  
pp. 61
Author(s):  
David Urban ◽  
Alice Caplier

As difficult vision-based tasks like object detection and monocular depth estimation make their way into real-time applications, and as more lightweight solutions for autonomous vehicle navigation systems emerge, obstacle detection and collision prediction remain two very challenging tasks for small embedded devices like drones. We propose a novel lightweight, time-efficient vision-based solution to predict Time-to-Collision from a monocular video camera embedded in a smartglasses device, as a module of a navigation system for visually impaired pedestrians. It consists of two modules: a static data extractor, a convolutional neural network that predicts the obstacle position and distance, and a dynamic data extractor that stacks the obstacle data from multiple frames and predicts the Time-to-Collision with a simple fully connected neural network. This paper focuses on the Time-to-Collision network's ability to adapt, via supervised learning, to new sceneries with different types of obstacles.
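The two-module pipeline described above can be sketched as follows. The feature layout (x, y, distance), the frame count, and the network sizes are illustrative assumptions, not the authors' actual design, and the CNN static extractor is replaced by a stub that returns precomputed obstacle data.

```python
import numpy as np

N_FRAMES = 5          # assumed history length for the dynamic extractor
FEATS_PER_FRAME = 3   # assumed per-frame obstacle data: (x, y, distance)

def static_extractor(frame):
    """Stand-in for the CNN static data extractor: returns
    (x, y, distance) for the obstacle in one frame. Here we just
    read precomputed values; the real module is a CNN."""
    return np.asarray(frame, dtype=float)

class TTCNet:
    """Tiny fully connected network mapping stacked obstacle data
    from N_FRAMES frames to a single Time-to-Collision estimate."""
    def __init__(self, hidden=8, seed=0):
        rng = np.random.default_rng(seed)
        d = N_FRAMES * FEATS_PER_FRAME
        self.W1 = rng.normal(0, 0.1, (hidden, d))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0, 0.1, (1, hidden))
        self.b2 = np.zeros(1)

    def forward(self, stacked):
        h = np.maximum(0.0, self.W1 @ stacked + self.b1)  # ReLU hidden layer
        return (self.W2 @ h + self.b2).item()             # scalar TTC estimate

def predict_ttc(frames, net):
    # Dynamic data extractor: stack per-frame obstacle data from the
    # last N_FRAMES frames, then run the fully connected predictor.
    feats = [static_extractor(f) for f in frames[-N_FRAMES:]]
    return net.forward(np.concatenate(feats))

# Obstacle drifting closer over 5 frames: (x, y, distance_m)
frames = [(0.5, 0.5, 3.0), (0.5, 0.5, 2.6), (0.5, 0.5, 2.2),
          (0.5, 0.5, 1.8), (0.5, 0.5, 1.4)]
ttc = predict_ttc(frames, TTCNet())
print(type(ttc).__name__)  # float; weights are untrained, so the value is arbitrary
```

In the paper's setting, the weights would be fit by supervised learning on labeled video sequences; the sketch only shows the data flow from per-frame obstacle features to a scalar TTC.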

