vision processing
Recently Published Documents

TOTAL DOCUMENTS: 189 (five years: 28)
H-INDEX: 13 (five years: 2)

Electronics, 2021, Vol. 10 (23), pp. 2989. Author(s): Peng Liu, Yan Song

Vision processing chips are widely used in image processing and recognition tasks. Conventionally, they are designed around image signal processing (ISP) units connected directly to the sensors. In recent years, convolutional neural networks (CNNs) have become the dominant tool for many state-of-the-art vision tasks, but a conventional vision processing unit (VPU) cannot execute CNNs at high speed. Conversely, CNN processing units cannot consume RAW sensor images directly and still require an ISP unit, which makes the overall vision system inefficient, with excessive data transmission and redundant hardware. In addition, many CNN processing units offer little flexibility across different CNN operations. To solve these problems, this paper proposes an efficient VPU based on a hybrid processing-element array that handles both CNN acceleration and ISP. Resources are heavily shared within the VPU, and a pipelined workflow accelerates the vision tasks. We implemented the proposed VPU on a Field-Programmable Gate Array (FPGA) platform and tested it on various vision tasks. The results show that the VPU is efficient for both CNN processing and ISP, and that it significantly reduces energy consumption for vision tasks combining the two. Across various CNN tasks, it sustains an average multiply-accumulator (MAC) utilization of over 94% and achieves 163.2 GOPS at 200 MHz.
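As a rough plausibility check on the reported figures (a sketch only; the abstract does not state the MAC-array size), peak throughput for a MAC array follows from ops = 2 × MACs × frequency × utilization:

```python
# Back-of-the-envelope throughput check for a MAC-array CNN accelerator.
# Assumption: each multiply-accumulate counts as 2 operations (1 mul + 1 add),
# as is conventional when quoting GOPS figures.

def gops(num_macs: int, freq_hz: float, utilization: float = 1.0) -> float:
    """Effective throughput in giga-operations per second."""
    return 2 * num_macs * freq_hz * utilization / 1e9

# The abstract reports 163.2 GOPS at 200 MHz with >94% MAC utilization.
# Working backwards, that is consistent with an array of roughly
# 163.2e9 / (2 * 200e6 * 0.94) MACs (a hypothetical size, not stated).
implied_macs = 163.2e9 / (2 * 200e6 * 0.94)
print(round(implied_macs))  # ~434
```

This is only arithmetic on the quoted numbers, not a claim about the paper's actual array dimensions.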


2021, Vol. 21 (9), pp. 2136. Author(s): Yiwen Wang, Alejandro Lleras, Simona Buetti

PLoS ONE, 2021, Vol. 16 (9), pp. e0256211. Author(s): Feng Tian, Minlei Hua, Wenrui Zhang, Yingjie Li, Xiaoli Yang

Previous studies have suggested that virtual reality (VR) can elicit emotions in different visual modes using 2D or 3D headsets. However, the effects of these two visual modes on emotional arousal have not been comprehensively investigated, and the underlying neural mechanisms remain unclear. This paper presents a cognitive psychological experiment conducted to analyze how the two visual modes affect emotional arousal. Forty volunteers were recruited and randomly assigned to two groups; they watched a series of positive, neutral, and negative short VR videos in 2D and 3D while multichannel electroencephalograms (EEG) and skin conductance responses (SCR) were recorded simultaneously. The results indicated that emotional stimulation was more intense in the 3D environment owing to improved perception of the scene: greater emotional arousal was generated, and higher beta-band (21–30 Hz) EEG power was observed in 3D than in 2D. We also found that both hemispheres were involved in stereo vision processing and that the processing was lateralized across the brain.
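The beta-band power comparison the authors describe can be sketched with a simple periodogram estimate. A minimal NumPy-only version follows; the sampling rate and test signal are illustrative, not taken from the paper:

```python
import numpy as np

def band_power(signal: np.ndarray, fs: float, lo: float, hi: float) -> float:
    """Average spectral power of `signal` within [lo, hi] Hz,
    estimated from a single rFFT periodogram."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * signal.size)
    mask = (freqs >= lo) & (freqs <= hi)
    return float(psd[mask].mean())

# Illustrative check: a pure 25 Hz tone should carry far more beta-band
# (21-30 Hz) power than alpha-band (8-13 Hz) power.
fs = 250.0                      # common EEG sampling rate (assumption)
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 25 * t)
print(band_power(eeg, fs, 21, 30) > band_power(eeg, fs, 8, 13))  # True
```

Real EEG pipelines would use Welch averaging over windowed segments and per-channel artifact rejection; this sketch only shows the band-selection idea.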


2021. Author(s): Dhimiter Qendri

This project details the design and implementation of an image processing pipeline targeting real-time video stitching for semi-panoramic video synthesis. The scope of the project includes an analysis of possible approaches, selection of processing algorithms and procedures, design of the experimental hardware setup (including the schematic-capture design of a custom catadioptric panoramic imaging system), and firmware/software development of the vision processing system components. The goal is to develop a frame-stitching IP module and an efficient video registration algorithm capable of synthesizing a semi-panoramic video stream at 30 frames per second (fps) with minimal FPGA resource utilization. The developed components have been validated in hardware. Finally, a number of hybrid architectures that exploit the synergy between the CPU and FPGA sections of the ZYNQ SoC have been investigated and prototyped as alternatives to an all-hardware solution. Keywords: video stitching, panoramic vision, FPGA, SoC, vision system, registration
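The registration step in such a stitching pipeline is often approximated by phase correlation, which recovers a translational offset between overlapping frames from the peak of the inverse cross-power spectrum. A minimal NumPy sketch (the thesis's actual registration algorithm is not specified in this abstract):

```python
import numpy as np

def phase_correlation_shift(ref: np.ndarray, moved: np.ndarray) -> tuple:
    """Estimate the integer (row, col) shift d such that
    ref ~= np.roll(moved, d, axis=(0, 1)), via phase correlation."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(moved)
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12          # normalize to pure phase
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peak coordinates to signed shifts.
    return tuple(int(p) if p <= s // 2 else int(p - s)
                 for p, s in zip(peak, corr.shape))

# Synthetic check: roll a random frame by (5, -3) and recover the shift.
rng = np.random.default_rng(0)
frame = rng.random((64, 64))
shifted = np.roll(frame, (5, -3), axis=(0, 1))
print(phase_correlation_shift(shifted, frame))  # (5, -3)
```

A hardware implementation would map the FFTs onto FPGA IP cores and handle sub-pixel refinement; this sketch only demonstrates the registration principle.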




Sensors, 2021, Vol. 21 (10), pp. 3488. Author(s): Nafisa Mostofa, Christopher Feltner, Kelly Fullin, Jonathan Guilbe, Sharare Zehtabian, ...

In recent years, significant work has been done on technological enhancements for mobility aids (smart walkers). However, most of this work does not address the millions of people who have both mobility and visual impairments. In this paper, we design and study four configurations of smart walkers specifically targeted at the needs of this population. We investigated different sensing technologies (ultrasound sensors, infrared depth cameras, and RGB cameras with computer vision processing), software configurations, and user-interface modalities (haptic and audio). Our experiments show that several engineering choices are viable in the design of such assistive devices. Furthermore, we found that a holistic, end-to-end evaluation of system performance is necessary: beyond a certain point, the quality of the user interface often has a larger impact on overall performance than further gains in sensing accuracy.
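One way to picture the sensing-to-interface coupling the authors evaluate is a simple mapping from a fused obstacle distance to an alert modality. The thresholds, fusion rule, and modality names below are invented for illustration and do not reproduce the paper's four configurations:

```python
# Illustrative obstacle-to-feedback mapping for a smart walker.
# All thresholds and modality names are hypothetical, not from the study.

def fuse_distances(ultrasound_m: float, depth_cam_m: float) -> float:
    """Conservative fusion: trust the nearer of the two obstacle estimates."""
    return min(ultrasound_m, depth_cam_m)

def feedback(distance_m: float) -> str:
    """Map fused obstacle distance to an alert level for haptic/audio cueing."""
    if distance_m < 0.5:
        return "haptic:strong+audio:alarm"   # imminent obstacle
    if distance_m < 1.5:
        return "haptic:gentle"               # obstacle approaching
    return "none"                            # clear path

print(feedback(fuse_distances(0.4, 2.0)))  # haptic:strong+audio:alarm
```

The paper's end-to-end finding suggests that tuning this interface layer can matter more than improving the raw distance estimates feeding it.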


2021. Author(s): Matthew J. Bolton, H. Michael Mogil, Stacie H. Hanes

Weather renders all people vulnerable, but various factors make some more vulnerable than others. What of those vulnerable populations and individuals who cannot take protective actions, or can do so only in limited ways? This paper contributes to the mission of the Weather-Ready Nation (WRN) initiative, established by the U.S. National Weather Service, by shedding light on weather communication considerations for people on the autism spectrum and people with color vision differences. It also discusses ongoing efforts centered on the Deaf and hard-of-hearing and those who are blind or have limited vision, as well as broader problems in weather communication for vulnerable populations at large. The first section defines vulnerability and clarifies associated concepts. The second section, on improving weather communication for autistic people and those who are blind, vision-limited, Deaf, hard-of-hearing, or who process color vision differently, focuses on the importance of recognizing lived, vulnerable-population experience in weather messaging efforts and on the use of language in communicating with these populations.


2021. Author(s): Tanya Amert, Michael Balszun, Martin Geier, F. Donelson Smith, James H. Anderson, ...

Sensors, 2021, Vol. 21 (3), pp. 926. Author(s): Venkatesh Kodukula, Saad Katrawala, Britton Jones, Carole-Jean Wu, Robert LiKamWa

Vision processing on traditional architectures is inefficient due to energy-expensive off-chip data movement. Many researchers advocate pushing processing close to the sensor to substantially reduce data movement. However, continuous near-sensor processing raises the sensor temperature, impairing imaging and vision fidelity. We characterize the thermal implications of using 3D-stacked image sensors with near-sensor vision processing units. Our characterization reveals that near-sensor processing reduces system power but degrades image quality. For reasonable image fidelity, the sensor temperature must stay below a threshold that is situationally determined by application needs. Fortunately, our characterization also identifies opportunities, unique to the needs of near-sensor processing, to regulate temperature based on dynamic visual task requirements and to rapidly increase capture quality on demand. Based on this characterization, we propose and investigate two imaging-aware thermal management strategies: stop-capture-go and seasonal migration. For our evaluated tasks, these policies save up to 53% of system power with negligible performance impact and sustained image fidelity.
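The stop-capture-go policy the authors name can be caricatured as a two-threshold (hysteresis) controller over sensor temperature. A minimal sketch with invented thresholds; the paper derives the real ones from application-specific fidelity requirements:

```python
# Sketch of a stop-capture-go thermal policy with hysteresis.
# Threshold values are illustrative assumptions, not from the paper.

T_STOP = 65.0    # degrees C: pause capture at or above this
T_RESUME = 55.0  # degrees C: resume capture at or below this

def step(capturing: bool, temp_c: float) -> bool:
    """Return whether the sensor should capture on the next frame."""
    if capturing and temp_c >= T_STOP:
        return False          # stop: let the stacked sensor cool
    if not capturing and temp_c <= T_RESUME:
        return True           # go: fidelity threshold restored
    return capturing          # otherwise hold the current state

# Trace a heating/cooling cycle.
temps = [50, 60, 66, 58, 54]
state = True
trace = []
for t in temps:
    state = step(state, t)
    trace.append(state)
print(trace)  # [True, True, False, False, True]
```

The hysteresis gap (here 10 degrees) prevents rapid oscillation around a single threshold; seasonal migration, the paper's second policy, would instead move work between compute sites rather than pausing capture.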

