Single-Image Visibility Restoration: A Machine Learning Approach and Its 4K-Capable Hardware Accelerator

Sensors ◽  
2020 ◽  
Vol 20 (20) ◽  
pp. 5795 ◽  
Author(s):  
Dat Ngo ◽  
Seungmin Lee ◽  
Gi-Dong Lee ◽  
Bongsoon Kang

In recent years, machine vision algorithms have played an influential role as core technologies in several practical applications, such as surveillance, autonomous driving, and object recognition/localization. However, as almost all such algorithms assume clear weather conditions, their performance is severely affected by atmospheric turbidity. Several image visibility restoration algorithms have been proposed to address this issue, and they have proven to be a highly efficient solution. This paper proposes a novel method to recover clear images from degraded ones. To this end, the proposed algorithm uses a supervised machine learning-based technique to estimate the pixel-wise extinction coefficients of the transmission medium and a novel compensation scheme to rectify the post-dehazing false enlargement of white objects. In addition, a corresponding hardware accelerator implemented on a Field Programmable Gate Array chip is presented to facilitate real-time processing, a critical requirement of practical camera-based systems. Experimental results on both synthetic and real image datasets verified the proposed method's superiority over existing benchmark approaches. Furthermore, the hardware synthesis results revealed that the accelerator achieves a processing rate of nearly 271.67 Mpixel/s, enabling it to process 4K videos at 30.7 frames per second in real time.
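The restoration step rests on the standard atmospheric scattering model, I = J·t + A·(1 − t), with transmission t = exp(−β·d) for extinction coefficient β and scene depth d. The sketch below inverts this model for a single channel value; it is a minimal illustration only, and does not reproduce the paper's learned pixel-wise estimator of β:

```python
import math

def restore_pixel(i, airlight, beta, depth, t_min=0.1):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t)
    for one normalized channel value, where t = exp(-beta * depth)."""
    t = max(math.exp(-beta * depth), t_min)  # clamp t to avoid noise blow-up
    return (i - airlight) / t + airlight

# A haze-free pixel (depth 0, so t = 1) passes through unchanged,
# while a hazy pixel is pushed away from the airlight value.
```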

Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5170 ◽  
Author(s):  
Dat Ngo ◽  
Seungmin Lee ◽  
Quoc-Hieu Nguyen ◽  
Tri Minh Ngo ◽  
Gi-Dong Lee ◽  
...  

Vision-based systems operating outdoors are significantly affected by weather conditions, notably atmospheric turbidity. Accordingly, haze removal algorithms, actively researched over the last decade, have come into use as a pre-processing step. Although numerous approaches already exist, an efficient method coupled with a fast implementation is still in great demand. This paper proposes a single image haze removal algorithm with a corresponding hardware implementation to facilitate real-time processing. Contrary to methods that invert the physical model describing the formation of hazy images, the proposed approach mainly exploits computationally efficient image processing techniques such as detail enhancement, multiple-exposure image fusion, and adaptive tone remapping. It therefore possesses low computational complexity while achieving good performance compared to other state-of-the-art methods. Moreover, the low computational cost also yields a compact hardware implementation capable of handling high-quality videos at an acceptable rate, that is, greater than 25 frames per second, as verified on a Field Programmable Gate Array chip. The software source code and datasets are available online for public use.
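The multiple-exposure fusion idea can be illustrated at the single-pixel level. This toy sketch assumes gamma curves as synthetic exposures and a Gaussian "well-exposedness" weight centered at 0.5; the function name and parameters are illustrative assumptions, not the authors' pipeline:

```python
import math

def fuse_exposures(pixel, gammas=(0.5, 1.0, 2.0), sigma=0.2):
    """Fuse synthetic exposures of one normalized pixel value, weighting
    each version by its closeness to mid-gray (a 'well-exposedness' score).
    Toy stand-in for a multiple-exposure image fusion step."""
    versions = [pixel ** g for g in gammas]  # gamma curves simulate exposures
    weights = [math.exp(-((v - 0.5) ** 2) / (2 * sigma ** 2)) for v in versions]
    return sum(w * v for w, v in zip(weights, versions)) / sum(weights)
```

In a full pipeline the same weighting is applied per pixel across the whole image, and the weight maps are normalized and blended, typically across scales, to avoid seams.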


2021 ◽  
Vol 11 (22) ◽  
pp. 10713
Author(s):  
Dong-Gyu Lee

Autonomous driving is a safety-critical application that requires a high-level understanding of computer vision with real-time inference. In this study, we focus on computational efficiency as an important factor, improving the running time and performing multiple tasks simultaneously for practical applications. We propose a fast and accurate multi-task learning-based architecture for joint segmentation of the drivable area and lane lines and classification of the scene. An encoder-decoder architecture efficiently handles input frames through a shared representation. A comprehensive understanding of the driving environment is improved by the generalization and regularization gained from the different tasks. The proposed method learns end-to-end through multi-task learning on the very challenging Berkeley DeepDrive dataset and shows its robustness across three tasks in autonomous driving. Experimental results show that the proposed method outperforms other multi-task learning approaches in both speed and accuracy. The method ran at over 93.81 fps at inference, enabling real-time execution.
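The shared-representation idea, one encoder feeding several task heads so the encoder cost is paid once per frame, can be sketched as a skeleton. The class, encoder, and heads below are invented for illustration; the paper's actual network is a learned encoder-decoder CNN:

```python
class MultiTaskModel:
    """Toy skeleton of a shared-encoder, multi-head architecture:
    one encoder feeds separate heads (e.g., drivable area, lane line,
    scene classification). Hypothetical structure, not the paper's network."""

    def __init__(self, encoder, heads):
        self.encoder = encoder
        self.heads = heads  # dict: task name -> head function

    def forward(self, frame):
        shared = self.encoder(frame)  # computed once, reused by every head
        return {task: head(shared) for task, head in self.heads.items()}

# Stand-in encoder and heads; each head reuses the shared features.
model = MultiTaskModel(
    encoder=lambda x: [v * 2 for v in x],
    heads={"drivable": sum, "lane": max, "scene": len},
)
out = model.forward([1, 2, 3])
```

Because the heads share one backbone pass, adding a task costs only the head's computation, which is the main source of the speed advantage over running separate networks.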


2021 ◽  
Vol 73 (10) ◽  
pp. 45-45
Author(s):  
Martin Rylance

Communication and prediction are symmetrical. Communication, in effect, is prediction about what has happened; prediction is communication about what is going to happen. Few industries contain as many phases, steps, and levels of interface between the start and the end product as the oil and gas industry—field, office, offshore, plant, subsea, downhole—not to mention the disciplinary, functional, managerial, and logistics handovers and boundaries that exist. It is therefore hardly surprising that communication, in all its varied forms, is at the very heart of our business. The papers selected this month demonstrate how improved communication can deliver the prediction required for a variety of reasons, including safety, efficiency, and informational purposes. The application of new and exciting ways of working, partially accelerated by recent events, is leading to breakthrough improvements on all levels. Real-time processing, improved visualization, and predictive and machine-learning methods, as well as improvements in all forms of data communication, are all contributing to incremental enhancements across the board. This month, I encourage the reader to review the selected articles and determine where and how the communication and prediction are occurring and what they are delivering. Then perhaps consider performing an exercise wherein your own day-to-day roles—your own areas of communication, interfacing, and cooperation—are reviewed to see what enhancements you can make as an individual. You may be pleasantly surprised that some simple tweaks to your communication style, frequency, and format can deliver quick wins. In an era of remote working for many individuals, it is an exercise that has some value.

Recommended additional reading at OnePetro: www.onepetro.org.

OTC 30184 - Augmented Machine-Learning Approach of Rate-of-Penetration Prediction for North Sea Oil Field by Youngjun Hong, Seoul National University, et al.

OTC 31278 - A Digital Twin for Real-Time Drilling Hydraulics Simulation Using a Hybrid Approach of Physics and Machine Learning by Prasanna Amur Varadarajan, Schlumberger, et al.

OTC 31092 - Integrated Underreamer Technology With Real-Time Communication Helped Eliminate Rathole in Exploratory Operation Offshore Nigeria by Raphael Chidiogo Ozioko, Baker Hughes, et al.


Author(s):  
David R. Selviah ◽  
Janti Shawash

This chapter celebrates 50 years of first- and higher-order neural network (HONN) implementations in terms of the physical layout and structure of electronic hardware, which offers high-speed, low-latency, compact, low-cost, low-power, mass-produced systems. Low latency is essential for practical applications in real-time control, for which software implementations running on CPUs are too slow. This literature review chapter traces the chronological development of electronic neural networks (ENNs), discussing selected papers in detail, from analog electronic hardware through probabilistic RAM, generalizing RAM, custom silicon Very Large Scale Integrated (VLSI) circuits, neuromorphic chips, and pulse-stream interconnected neurons to Application Specific Integrated Circuits (ASICs) and Zero Instruction Set Chips (ZISCs). Reconfigurable Field Programmable Gate Arrays (FPGAs) are given particular attention, as the most recent generation incorporates Digital Signal Processing (DSP) units to provide full System on Chip (SoC) capability, offering the possibility of real-time, online, and on-chip learning.


2021 ◽  
pp. 83-94
Author(s):  
K. Saad ◽  
A. Sligar ◽  
R. Kipp ◽  
J. Decker ◽  
D. Rey ◽  
...  

2009 ◽  
Vol 36 (2) ◽  
pp. 307-311
Author(s):  
罗凤武 Luo Fengwu ◽  
王利颖 Wang Liying ◽  
涂霞 Tu Xia ◽  
陈厚来 Chen Houlai

Sensors ◽  
2020 ◽  
Vol 20 (3) ◽  
pp. 800 ◽  
Author(s):  
Irshad Khan ◽  
Seonhwa Choi ◽  
Young-Woo Kwon

Detecting earthquakes using smartphones or IoT devices in real time is an arduous and challenging task, not only because of hard real-time constraints but also because earthquake signals resemble non-earthquake signals (i.e., noise or other activities). Moreover, the variety of human activities makes detection even more difficult when a smartphone is used as an earthquake-sensing device. To that end, in this article, we leverage a machine learning technique with earthquake features rather than traditional seismic methods. First, we split the detection task into two categories: a static environment and a dynamic environment. Then, we experimentally evaluate different features and propose the most appropriate machine learning model and features for the static environment to tackle the issue of noisy components and detect earthquakes in real time with a lower false-alarm rate. The proposed model shows promising results not only on the given dataset but also on unseen data, pointing to the generalization characteristics of the model. Finally, we demonstrate that the proposed model can also be used in the dynamic environment if it is trained with a different dataset.
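As an illustration of window-level feature extraction for such a detector, the sketch below computes a few generic amplitude and shape features over one accelerometer window. The specific features are assumptions chosen for illustration, not necessarily the set the paper evaluates:

```python
import statistics

def window_features(samples):
    """Generic features over one accelerometer window: interquartile
    range (spread), peak absolute amplitude, and zero-crossing rate.
    Illustrative only; a real detector feeds such features to a
    trained classifier rather than fixed thresholds."""
    q1, _, q3 = statistics.quantiles(samples, n=4)
    zero_crossings = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0)
    return {
        "iqr": q3 - q1,
        "peak": max(abs(s) for s in samples),
        "zcr": zero_crossings / (len(samples) - 1),
    }
```

Earthquake shaking tends to show sustained, broadband oscillation (high zero-crossing rate with moderate amplitude), whereas phone handling produces short, spiky bursts, which is what makes window features separable by a classifier.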


Electronics ◽  
2019 ◽  
Vol 8 (2) ◽  
pp. 143 ◽  
Author(s):  
Ruidong Wu ◽  
Bing Liu ◽  
Ping Fu ◽  
Junbao Li ◽  
Shou Feng

Matrix multiplication is a critical, time-consuming processing step in many machine learning applications. Due to the diversity of practical applications, the matrix dimensions are generally not fixed. However, most matrix calculation methods based on field programmable gate arrays (FPGAs) currently use fixed matrix dimensions, which limits the flexibility of machine learning algorithms on an FPGA. The bottleneck lies in the limited FPGA resources. Therefore, this paper proposes an accelerator architecture for a matrix computing method with changeable dimensions. A multi-matrix synchronous calculation concept allows matrix data to be processed continuously, which improves the parallel computing characteristics of the FPGA and optimizes computational efficiency. This paper tests matrix multiplication using the support vector machine (SVM) algorithm to verify the performance of the proposed architecture on the ZYNQ platform. The experimental results show that, compared to the software processing method, the proposed architecture increases performance by 21.18 times at 9947 dimensions. The dimensions are changeable up to a maximum value of 2,097,151 without changing the hardware design. The method is also applicable to matrix multiplication in other machine learning algorithms.
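The idea of handling changeable dimensions with fixed compute resources can be mimicked in software by tiling: a fixed tile size stands in for the accelerator's fixed processing elements, while the overall matrix dimensions vary freely. A minimal sketch, not the paper's architecture:

```python
def tiled_matmul(a, b, tile=2):
    """Multiply matrices of arbitrary compatible dimensions using a fixed
    tile size, mirroring how a fixed-resource accelerator streams
    variable-dimension matrices through a small compute core."""
    n, k, m = len(a), len(b), len(b[0])
    c = [[0] * m for _ in range(n)]
    # Iterate over fixed-size tiles; edge tiles are clipped, so any
    # dimension works without changing the tile (i.e., the "hardware").
    for i0 in range(0, n, tile):
        for j0 in range(0, m, tile):
            for p0 in range(0, k, tile):
                for i in range(i0, min(i0 + tile, n)):
                    for j in range(j0, min(j0 + tile, m)):
                        c[i][j] += sum(
                            a[i][p] * b[p][j]
                            for p in range(p0, min(p0 + tile, k))
                        )
    return c
```

On hardware, each tile-sized inner block maps onto the same array of multiply-accumulate units, so growing the matrices only increases the number of tile passes, not the resource usage.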

