Robustness of Bio-Inspired Visual Systems for Collision Prediction in Critical Robot Traffic

2021 ◽  
Vol 8 ◽  
Author(s):  
Qinbing Fu ◽  
Xuelong Sun ◽  
Tian Liu ◽  
Cheng Hu ◽  
Shigang Yue

Collision prevention poses a major research and development challenge for intelligent robots and vehicles. This paper investigates the robustness of two state-of-the-art neural network models, inspired by the locust's LGMD-1 and LGMD-2 visual pathways, as fast, low-energy collision alert systems in critical scenarios. Although both neural circuits have been studied and modelled intensively, their capability and robustness in real-time critical traffic scenarios, where physical crashes actually occur, have never been systematically investigated, owing to the difficulty and high cost of replicating risky traffic with many crash occurrences. To close this gap, we use a recently published robotic platform to test the LGMD-inspired visual systems in physical implementations of critical traffic scenarios at low cost and with high flexibility. The proposed visual systems serve as the only collision-sensing modality in each micro mobile robot, which avoids crashes by braking abruptly. The simulated traffic resembles on-road sections, including intersection and highway scenes, in which the roadmaps are rendered by coloured artificial pheromones on a wide LCD screen acting as the floor of an arena. The robots, equipped with light sensors on their undersides, recognise the lanes and signals and follow the paths tightly. The emphasis here is on corroborating the robustness of the LGMD neural system models in different dynamic robot scenes for timely alerts of potential crashes. This study complements previous experimentation on such bio-inspired computations for collision prediction in more critical physical scenarios, and for the first time demonstrates the robustness of LGMD-inspired visual systems in critical traffic towards a reliable collision alert system under constrained computation power. The paper also presents a novel, tractable, and affordable robotic approach to evaluating online visual systems in dynamic scenes.
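The looming-sensitive processing that LGMD models perform can be sketched, in a highly simplified form, as frame differencing with lateral inhibition: excitation from image change is suppressed by a blurred copy of the previous excitation, and the rectified residue is summed into a "membrane potential" that rises as an object expands in view. The grid size, inhibition weight, and alert threshold below are illustrative assumptions, not parameters from the paper's models.

```python
# Minimal, illustrative LGMD-style collision detector on 2-D grayscale frames.
# Excitation is the absolute frame difference; lateral inhibition is a blurred
# copy of the previous excitation subtracted from the current one; the
# "membrane potential" is the normalised sum of the rectified activity.
# All parameters (inhibition weight, threshold) are illustrative guesses.

def frame_diff(prev, curr):
    return [[abs(c - p) for p, c in zip(rp, rc)] for rp, rc in zip(prev, curr)]

def blur(grid):
    # 3x3 box blur, the stand-in for lateral spread of inhibition.
    h, w = len(grid), len(grid[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [grid[x][y]
                    for x in range(max(0, i - 1), min(h, i + 2))
                    for y in range(max(0, j - 1), min(w, j + 2))]
            out[i][j] = sum(vals) / len(vals)
    return out

def lgmd_potential(frames, w_inh=0.6):
    """Return per-frame membrane potentials for a sequence of 2-D frames."""
    potentials = []
    prev_exc = None
    for prev, curr in zip(frames, frames[1:]):
        exc = frame_diff(prev, curr)
        if prev_exc is not None:
            inh = blur(prev_exc)
            exc = [[max(0.0, e - w_inh * v) for e, v in zip(re, ri)]
                   for re, ri in zip(exc, inh)]
        n = len(exc) * len(exc[0])
        potentials.append(sum(map(sum, exc)) / n)
        prev_exc = exc
    return potentials

def collision_alert(potentials, threshold=5.0):
    return any(p > threshold for p in potentials)
```

A looming object covers more new pixels with each frame, so the potential grows over time and eventually crosses the threshold, while a static scene produces no excitation at all; this is the property the robots exploit to trigger abrupt braking.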

2020 ◽  
Author(s):  
Vysakh S Mohan

Edge processing for computer vision systems enables the incorporation of visual intelligence into mobile robotics platforms. Demand for low-power, low-cost, small-form-factor devices is on the rise. This work proposes a unified platform to generate deep learning models compatible with edge devices from Intel, NVIDIA, and XaLogic. The platform enables users to create custom data annotations, train neural networks, and generate edge-compatible inference models. As a testimony to the tool's ease of use and flexibility, we explore two use cases: a vision-powered prosthetic hand and drone vision. Neural network models for these use cases will be built using the proposed pipeline and will be open-sourced. Online and offline versions of the tool, along with the corresponding inference modules for edge devices, will also be made public for users to create custom computer vision use cases.



2019 ◽  
Author(s):  
Courtney J Spoerer ◽  
Tim C Kietzmann ◽  
Johannes Mehrer ◽  
Ian Charest ◽  
Nikolaus Kriegeskorte

Deep feedforward neural network models of vision dominate in both computational neuroscience and engineering. The primate visual system, by contrast, contains abundant recurrent connections. Recurrent signal flow enables recycling of limited computational resources over time, and so might boost the performance of a physically finite brain or model. Here we show: (1) Recurrent convolutional neural network models outperform feedforward convolutional models matched in their number of parameters in large-scale visual recognition tasks on natural images. (2) Setting a confidence threshold, at which recurrent computations terminate and a decision is made, enables flexible trading of speed for accuracy. At a given confidence threshold, the model expends more time and energy on images that are harder to recognise, without requiring additional parameters for deeper computations. (3) The recurrent model's reaction time for an image predicts the human reaction time for the same image better than several parameter-matched and state-of-the-art feedforward models. (4) Across confidence thresholds, the recurrent model emulates the behaviour of feedforward control models in that it achieves the same accuracy at approximately the same computational cost (mean number of floating-point operations). However, the recurrent model can be run longer (higher confidence threshold) and then outperforms parameter-matched feedforward comparison models. These results suggest that recurrent connectivity, a hallmark of biological visual systems, may be essential for understanding the accuracy, flexibility, and dynamics of human visual recognition.

Author summary: Deep neural networks provide the best current models of biological vision and achieve the highest performance in computer vision. Inspired by the primate brain, these models transform the image signals through a sequence of stages, leading to recognition. Unlike brains, in which the outputs of a given computation are fed back into the same computation, these models do not process signals recurrently. The ability to recycle limited neural resources by processing information recurrently could explain the accuracy and flexibility of biological visual systems, which computer vision systems cannot yet match. Here we report that recurrent processing can improve recognition performance compared to similarly complex feedforward networks. Recurrent processing also enables models to behave more flexibly and trade off speed for accuracy. Like humans, the recurrent network models can compute for longer when an object is hard to recognise, which boosts their accuracy. The models' recognition times predicted human recognition times for the same images. The performance and flexibility of recurrent neural network models illustrate that modelling biological vision can help us improve computer vision.
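The speed-accuracy trade-off in point (2) can be illustrated with a toy recurrent readout: per-class evidence accumulates over timesteps, and computation halts as soon as the softmax confidence of the leading class reaches a threshold. The accumulation dynamics and threshold below are illustrative stand-ins, not the paper's actual convolutional architecture.

```python
import math

def softmax(logits):
    # Subtract the max for numerical stability before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def recurrent_readout(evidence_per_step, threshold=0.9, max_steps=50):
    """Accumulate per-class evidence over timesteps; stop once the top
    softmax probability reaches `threshold`. Returns (class, steps used).

    `evidence_per_step(t)` yields a list of per-class logit increments,
    a stand-in for one pass through the recurrent layers.
    """
    logits = None
    for t in range(max_steps):
        step = evidence_per_step(t)
        logits = step if logits is None else [a + b for a, b in zip(logits, step)]
        probs = softmax(logits)
        if max(probs) >= threshold:
            return probs.index(max(probs)), t + 1
    return probs.index(max(probs)), max_steps

# An "easy" image yields strong evidence each step; a "hard" one, weak evidence.
easy = lambda t: [1.0, 0.0, 0.0]
hard = lambda t: [0.2, 0.1, 0.1]
```

The easy input crosses the threshold in a few steps while the hard one needs many more, mirroring the finding that harder images produce longer reaction times; raising the threshold trades speed for accuracy without adding parameters.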


Water ◽  
2020 ◽  
Vol 12 (8) ◽  
pp. 2088
Author(s):  
Minxue He ◽  
Liheng Zhong ◽  
Prabhjot Sandhu ◽  
Yu Zhou

Salinity management is a subject of particular interest in estuarine environments because of the biological significance of salinity and of its variations in time and space. The first step in such management practices is understanding the spatial and temporal variations of salinity and the principal drivers of those variations. This has traditionally been achieved with empirical or process-based models, which can be computationally expensive for complex environmental systems. Model emulation based on data-driven methods offers a viable alternative: it is computationally efficient and can improve accuracy by recognizing patterns and processes that traditional models overlook, underrepresent, or overrepresent. This paper presents a case study of emulating a process-based boundary salinity generator via deep learning for the Sacramento–San Joaquin Delta (Delta), an estuarine environment of significant economic, ecological, and social value on the Pacific coast of northern California, United States. Specifically, the study proposes a range of neural network models, namely (a) multilayer perceptron, (b) long short-term memory network, and (c) convolutional neural network-based models, for estimating the downstream boundary salinity of the Delta on a daily time-step. These models are trained and validated using half of the dataset, from water year 1991 to 2002. They are then evaluated against the process-based boundary salinity generation model over the remaining record, water years 2003 to 2014, across different salinity ranges and types of water years. The results indicate that deep learning neural networks provide competitive or superior results compared with the process-based model, particularly when the outputs of the latter are incorporated as inputs to the former.
The improvements are generally more noticeable in extreme (i.e., wet, dry, and critical) years than in near-normal (i.e., above-normal and below-normal) years, and in the low and medium salinity ranges rather than the high range. Overall, this study indicates that deep learning approaches have the potential to supplement current practices for estimating salinity at the downstream boundary and other locations across the Delta, and thus to guide real-time operations and long-term planning activities in the Delta.
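The evaluation protocol above (train on water years 1991 to 2002, test on 2003 to 2014) amounts to a chronological split of a daily series by water year, with lagged daily inputs feeding the emulator. A minimal sketch of that split and windowing follows; the record shape and window length are illustrative assumptions, not details from the paper.

```python
import datetime

def water_year(date):
    """US water years run 1 Oct to 30 Sep and are named for the ending year."""
    return date.year + 1 if date.month >= 10 else date.year

def chronological_split(records, train_years, test_years):
    """Split (date, features, salinity) records by water year membership."""
    train = [r for r in records if water_year(r[0]) in train_years]
    test = [r for r in records if water_year(r[0]) in test_years]
    return train, test

def lagged_windows(series, lag=7):
    """Turn a daily series into (previous `lag` days, next day) pairs, the
    input/target shape a daily time-step emulator would train on."""
    return [(series[i - lag:i], series[i]) for i in range(lag, len(series))]
```

The same split logic would feed any of the three model families (a)-(c); only the window shaping differs, e.g. a convolutional or LSTM emulator consumes the lagged window as a sequence while a multilayer perceptron flattens it.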


IoT ◽  
2021 ◽  
Vol 2 (4) ◽  
pp. 688-716
Author(s):  
Rachel M. Billings ◽  
Alan J. Michaels

While a variety of image processing studies have quantified the potential performance of neural-network-based models on high-quality still images, relatively few apply those models in a real-time operational context. This paper extends prior work on neural-network-based mask detection algorithms to a real-time, low-power deployable context that is conducive to immediate installation and use. Particularly relevant in the COVID-19 era, with its varying rules on mask mandates, this work applies two neural network models to mask-detection inference in both live (mobile) and recorded scenarios. Furthermore, an experimental dataset was collected in which individuals were encouraged to use presentation attacks against the algorithm, to quantify how such perturbations degrade model performance. The results of evaluation on this dataset are further investigated to identify the degradation caused by poor lighting and image quality, and to test for biases across demographics such as gender and ethnicity. In aggregate, this work validates the immediate feasibility of a low-power, low-cost, real-time mask recognition system.
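A low-power real-time deployment of this kind typically runs a frame-skipping loop: capture, classify every few frames to save compute, and alert on confident detections. The sketch below shows that loop shape only; the classifier and alert hook are placeholders (assumptions), not the paper's two networks.

```python
def run_mask_monitor(frames, classify, alert, every_n=3, min_conf=0.8):
    """Classify every `every_n`-th frame (skipping frames saves power on
    embedded hardware) and raise an alert on confident 'no_mask' results.

    `classify(frame)` -> (label, confidence) and `alert(frame)` are
    placeholders for a real inference model and a notification hook.
    Returns the number of alerts raised.
    """
    alerts = 0
    for i, frame in enumerate(frames):
        if i % every_n:
            continue  # skip this frame entirely; no inference cost
        label, conf = classify(frame)
        if label == "no_mask" and conf >= min_conf:
            alert(frame)
            alerts += 1
    return alerts
```

The `every_n` and `min_conf` knobs are where the paper's power/accuracy constraints would surface: a larger skip stride lowers average power at the cost of detection latency.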


Author(s):  
N Lehrasab ◽  
H. P. B. Dassanayake ◽  
C Roberts ◽  
S Fararooy ◽  
C. J. Goodman

A practical, robust method for fault detection and diagnosis of a class of pneumatic train door commonly found in rapid transit systems is presented. The methodology is intended for a practical system in which computation is distributed across a local data network for economic reasons. The health of the system is ascertained by extracting features from the trajectory profiles of the train door. These feed a low-level fault detection scheme that relies on simple parity equations. Once a fault has been detected, detailed diagnostics are carried out using neural network models. This method of detection and diagnosis is implemented in a distributed architecture, resulting in a practical, low-cost industrial solution. The results of the diagnosis process can be integrated directly into an operator's maintenance information system (MIS), producing a proactive maintenance regime.
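The two-stage scheme described, with cheap parity-equation residuals for low-level detection and a trained classifier for diagnosis, can be sketched as follows. The nominal door trajectory, thresholds, feature choices, and fault labels are illustrative assumptions, not values from the paper.

```python
def parity_residuals(measured, predicted):
    """Parity equation: residual = measurement minus nominal-model prediction."""
    return [m - p for m, p in zip(measured, predicted)]

def fault_detected(residuals, threshold=0.5):
    """Low-level detection: flag a fault if any residual leaves the band."""
    return any(abs(r) > threshold for r in residuals)

def extract_features(trajectory):
    """Features from a door trajectory profile (illustrative choices):
    sample count as a travel-time proxy, peak step velocity, and the
    final-position error against a fully open position of 1.0."""
    velocities = [b - a for a, b in zip(trajectory, trajectory[1:])]
    return (len(trajectory), max(velocities), trajectory[-1] - 1.0)

def diagnose(features, classifier):
    """High-level diagnosis runs only after detection; `classifier` stands
    in for the trained neural network mapping features to a fault class."""
    return classifier(features)
```

Keeping the parity check this cheap is what makes the distributed deployment economical: every door controller can run it continuously, while the heavier neural network diagnosis is invoked only on the rare detected fault.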


2020 ◽  
Vol 5 ◽  
pp. 140-147 ◽  
Author(s):  
T.N. Aleksandrova ◽  
E.K. Ushakov ◽  
A.V. Orlova ◽  
...  
