How Might Wet Retroreflective Pavement Markings Enable More Robust Machine Vision?

Author(s):  
Adam M. Pike ◽  
Jordan Whitney ◽  
Thomas Hedblom ◽  
Susannah Clear

This study is a preliminary investigation of how the wet retroreflectivity level of pavement markings affects the factors that determine robust feature detection in machine vision and light detection and ranging (LiDAR) systems under continuously wet road conditions. Luminance and Weber contrast of a range of pavement markings were characterized as functions of wet retroreflectivity and distance, based on calibrated charge-coupled device (CCD) camera measurements. Both were found to trend with wet retroreflectivity over the range of distances considered in this study. Artifacts arising from glare sources in wet conditions, and their intensities relative to pavement markings of different wet retroreflectivity levels, were demonstrated. The image data suggest that markings with high wet retroreflectivity may help mitigate the identification of these artifacts as false positives in lane awareness/lane detection algorithms. Because LiDAR presents a viable sensor fusion approach to identifying and avoiding these false positives and artifacts in both nighttime wet and daytime wet road conditions, LiDAR return was characterized on pavement markings comprising both optics designed only for dry retroreflectivity and optics designed to be retroreflective in both dry and wet conditions. Preliminary results suggest that for common pavement marking constructions based on exposed beaded optics that might be completely immersed by a rainstorm or puddling, incorporating high-index (n ≈ 2.4) wet retroreflective beaded optics is likely to be advantageous to both visible-light machine vision systems and LiDAR for detecting those markings in both night and day.
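The Weber contrast characterized in the study is a simple function of the measured luminances; a minimal Python sketch (the function name and example values are illustrative, not the study's data):

```python
def weber_contrast(marking_luminance, pavement_luminance):
    """Weber contrast of a marking against the surrounding pavement.

    Positive values mean the marking is brighter than the background;
    at a fixed background luminance (cd/m^2), contrast grows with
    marking luminance, which in turn trends with wet retroreflectivity.
    """
    if pavement_luminance <= 0:
        raise ValueError("background luminance must be positive")
    return (marking_luminance - pavement_luminance) / pavement_luminance

# Example: a wet-retroreflective marking at 2.5 cd/m^2 over 0.5 cd/m^2 pavement
c = weber_contrast(2.5, 0.5)  # -> 4.0
```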

2011 ◽  
Vol 230-232 ◽  
pp. 1190-1194 ◽  
Author(s):  
Min Kang ◽  
Hou Shang Li ◽  
Xiu Qing Fu

To measure the initial gap between the workpiece and the tool-cathode in electrochemical machining, this paper studies a measurement method based on machine vision. First, a machine-vision measurement system was established. The hardware of the system consisted of a CCD camera, an image data acquisition card, a light source, and a computer. The software of the system was developed in VC++ 6.0. Then, the original digital image of the electrochemical machining initial gap, collected by the CCD camera system, was converted into an image contour through grayscaling, binarization, edge detection, and segmentation. Through system calibration, the physical size of the gap was calculated. Finally, validation experiments were carried out. The experimental results confirmed the feasibility of measuring the electrochemical machining initial gap with machine vision.
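As a rough illustration of the pipeline steps above (thresholding, gap extraction, calibrated conversion), the following Python sketch works on a toy image; the function names, threshold, and calibration factor are assumptions, not the paper's implementation:

```python
def binarize(gray, threshold):
    """Threshold a grayscale image (list of rows) to 0/1."""
    return [[1 if px >= threshold else 0 for px in row] for row in gray]

def gap_width_pixels(binary_row):
    """Count background (0) pixels between the bright tool and
    workpiece edges in one image row -- the gap width in pixels."""
    return sum(1 for px in binary_row if px == 0)

def gap_width_mm(gray, threshold, mm_per_pixel):
    """Average the per-row pixel gap and convert it via the scale
    factor obtained from system calibration."""
    binary = binarize(gray, threshold)
    widths = [gap_width_pixels(row) for row in binary]
    return (sum(widths) / len(widths)) * mm_per_pixel

# Toy image: bright tool on the left, bright workpiece on the right,
# dark 3-pixel gap between them; assumed calibration 0.05 mm/pixel.
img = [[200, 200, 30, 25, 28, 210, 210],
       [205, 198, 31, 27, 26, 208, 212]]
gap = gap_width_mm(img, 128, 0.05)  # about 0.15 mm
```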


Author(s):  
Robert W. Mackin

This paper presents two advances toward the automated three-dimensional (3-D) analysis of thick and heavily overlapped regions in cytological preparations such as cervical/vaginal smears. First, a high-speed 3-D brightfield microscope has been developed, allowing the acquisition of image data at speeds approaching 30 optical slices per second. Second, algorithms have been developed to detect and segment nuclei despite the extremely high image variability and low contrast typical of such regions. The analysis of such regions is inherently a 3-D problem that cannot be solved reliably with conventional 2-D imaging and image analysis methods. High-speed 3-D imaging of the specimen is accomplished by moving the specimen axially relative to the objective lens of a standard microscope (Zeiss) at a speed of 30 steps per second, where the step size is adjustable from 0.2 to 5 μm. The specimen is mounted on a computer-controlled, piezoelectric microstage (Burleigh PZS-100, 68 μm displacement). At each step, an optical slice is acquired using a CCD camera (SONY XC-11/71 IP, Dalsa CA-D1-0256, and CA-D2-0512 have been used) connected to a 4-node array processor system based on the Intel i860 chip.
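The acquisition loop described above (step the piezo stage axially, grab one optical slice per step) can be sketched as follows; the `FakeStage` and `FakeCamera` classes are hypothetical stand-ins for the actual Burleigh microstage and CCD camera drivers:

```python
class FakeStage:
    """Hypothetical stand-in for the piezoelectric microstage driver."""
    def __init__(self):
        self.position_um = 0.0
    def step(self, step_um):
        self.position_um += step_um

class FakeCamera:
    """Hypothetical stand-in for the CCD camera driver."""
    def grab(self):
        # A real driver would return a 2-D image frame; placeholder here.
        return [[0]]

def acquire_stack(stage, camera, n_slices, step_um):
    """Acquire n_slices optical sections spaced step_um apart.

    At 30 steps per second, a 60-slice stack takes about 2 seconds.
    """
    if not 0.2 <= step_um <= 5.0:
        raise ValueError("step size must be within 0.2-5 um")
    stack = []
    for _ in range(n_slices):
        stack.append((stage.position_um, camera.grab()))
        stage.step(step_um)
    return stack

stack = acquire_stack(FakeStage(), FakeCamera(), 5, 1.0)
```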


2021 ◽  
Vol 11 (15) ◽  
pp. 6721
Author(s):  
Jinyeong Wang ◽  
Sanghwan Lee

In increasing manufacturing productivity with automated surface inspection in smart factories, the demand for machine vision is rising. Recently, convolutional neural networks (CNNs) have demonstrated outstanding performance and solved many problems in the field of computer vision. Consequently, many machine vision systems adopt CNNs for surface defect inspection. In this study, we developed an effective data augmentation method for grayscale images in CNN-based machine vision with mono cameras. Our method applies to grayscale industrial images, and we demonstrated outstanding performance on image classification and object detection tasks. The main contributions of this study are as follows: (1) We propose a data augmentation method that can be performed when training CNNs with industrial images taken with mono cameras. (2) We demonstrate that image classification or object detection performance is better when training with industrial image data augmented by the proposed method. Through the proposed method, many machine-vision problems involving mono cameras can be solved effectively with CNNs.
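A minimal example of what a grayscale augmentation step can look like (a generic illustration only; the authors' actual augmentation method is not reproduced here):

```python
import random

def augment_grayscale(img, rng):
    """One possible augmentation for mono-camera industrial images:
    a random horizontal flip plus a random brightness shift, with
    pixel values clipped to the 8-bit range."""
    out = [row[:] for row in img]
    if rng.random() < 0.5:
        out = [row[::-1] for row in out]          # horizontal flip
    shift = rng.randint(-20, 20)                  # brightness jitter
    return [[min(255, max(0, px + shift)) for px in row] for row in out]

rng = random.Random(0)                            # seeded for repeatability
aug = augment_grayscale([[10, 250], [120, 130]], rng)
```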


Robotica ◽  
2000 ◽  
Vol 18 (3) ◽  
pp. 299-303 ◽  
Author(s):  
Carl-Henrik Oertel

Machine vision-based sensing enables automatic hover stabilization of helicopters. The evaluation of image data, produced by a camera looking straight down at the ground, results in a drift-free, autonomous, on-board position measurement system. No additional information about the appearance of the scenery seen by the camera (e.g. landmarks) is needed. The technique applied is a combination of the 4D-approach with two-dimensional template tracking of a priori unknown features.
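The two-dimensional template-tracking step can be illustrated with an exhaustive sum-of-squared-differences search; this is a generic sketch, not the author's 4D-approach implementation:

```python
def ssd(a, b):
    """Sum of squared differences between two equally sized patches."""
    return sum((pa - pb) ** 2
               for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))

def track_template(frame, template):
    """Exhaustive 2-D template search: return the (row, col) offset in
    `frame` where `template` matches best. Frame-to-frame offsets of a
    tracked ground patch yield a relative position measurement."""
    th, tw = len(template), len(template[0])
    best, best_pos = None, (0, 0)
    for r in range(len(frame) - th + 1):
        for c in range(len(frame[0]) - tw + 1):
            patch = [row[c:c + tw] for row in frame[r:r + th]]
            score = ssd(patch, template)
            if best is None or score < best:
                best, best_pos = score, (r, c)
    return best_pos

frame = [[0, 0, 0, 0],
         [0, 9, 8, 0],
         [0, 7, 9, 0],
         [0, 0, 0, 0]]
pos = track_template(frame, [[9, 8], [7, 9]])  # -> (1, 1)
```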


2014 ◽  
Vol 711 ◽  
pp. 333-337 ◽  
Author(s):  
Fang Wang ◽  
Chao Kun Ma ◽  
Shan Qiang Dai ◽  
Chang Chun Li

The authors designed a vision workbench on which the rim rotates with the workbench while a linear CCD camera records the rim's circumference. The paper explores a machine-vision method for locating the rim valve hole. Through image processing, feature recognition, and measurement, the system calculates the position of the valve hole and controls a servo motor to locate the wheel hole automatically. Experiments verified the accuracy of the method.
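Locating the hole in a binarized image can be illustrated by a centroid computation over the hole pixels; a toy sketch, not the paper's actual algorithm:

```python
def hole_centroid(binary):
    """Centroid (row, col) of the hole pixels (value 1) in a binarized
    rim image -- a minimal stand-in for the hole-locating step. The
    column coordinate maps to a rotation angle of the workbench."""
    pts = [(r, c) for r, row in enumerate(binary)
                  for c, px in enumerate(row) if px == 1]
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

img = [[0, 0, 0, 0],
       [0, 1, 1, 0],
       [0, 1, 1, 0]]
center = hole_centroid(img)  # -> (1.5, 1.5)
```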


2021 ◽  
pp. 51-64
Author(s):  
Ahmed A. Elngar ◽  
...  

Feature detection, description, and matching are essential components of various computer vision applications; thus, they have received considerable attention in recent decades. Several feature detectors and descriptors have been proposed in the literature, with a variety of definitions for what kinds of points in an image are potentially interesting (i.e., distinctive attributes). This chapter introduces basic notation and mathematical concepts for detecting and describing image features. It then discusses the properties of ideal features and gives an overview of various existing detection and description methods. Furthermore, it explains some approaches to feature matching. Finally, the chapter discusses the most widely used techniques for performance evaluation of detection algorithms.
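One of the matching approaches such chapters typically cover, nearest-neighbour descriptor matching with a distance-ratio test, can be sketched as follows (the descriptor values are illustrative):

```python
def l2(a, b):
    """Euclidean distance between two descriptor vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour matching with a ratio test: accept a match
    only when the best distance is clearly smaller than the second
    best, rejecting ambiguous correspondences."""
    matches = []
    for i, da in enumerate(desc_a):
        dists = sorted((l2(da, db), j) for j, db in enumerate(desc_b))
        if len(dists) > 1 and dists[0][0] < ratio * dists[1][0]:
            matches.append((i, dists[0][1]))
    return matches

a = [[0.0, 1.0], [5.0, 5.0]]
b = [[0.1, 1.0], [9.0, 9.0], [5.0, 5.1]]
pairs = match_descriptors(a, b)  # -> [(0, 0), (1, 2)]
```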


Author(s):  
Stylianos Asteriadis ◽  
Nikos Nikolaidis ◽  
Ioannis Pitas ◽  
...  

Facial feature localization is an important task in numerous applications of face image analysis, including face recognition and verification, facial expression recognition, driver's alertness estimation, and head pose estimation. Thus, the area has been a very active research field for many years, and a multitude of methods appear in the literature. Depending on the targeted application, the proposed methods have different characteristics and are designed to perform in different setups. A method of general applicability therefore appears to be beyond the current state of the art. This chapter offers an up-to-date literature review of facial feature detection algorithms. A review of the image databases and performance metrics used to benchmark these algorithms is also provided.


Sensors ◽  
2020 ◽  
Vol 20 (9) ◽  
pp. 2457 ◽  
Author(s):  
Jinhan Jeong ◽  
Yook Hyun Yoon ◽  
Jahng Hyon Park

Lane detection and tracking in a complex road environment is one of the most important research areas in highly automated driving systems. Studies on lane detection cover a variety of difficulties, such as shadowy situations, dimmed lane painting, and obstacles that prohibit lane feature detection. There are several hard cases in which lane candidate features are not easily extracted from image frames captured by a driving vehicle. We have carefully selected typical scenarios in which the extraction of lane candidate features can be easily corrupted by road vehicles and road markers, degrading the understanding of road scenes and making decisions difficult. We introduce two main contributions to the interpretation of road scenes in dense traffic environments. First, to obtain robust road scene understanding, we designed a novel framework combining a lane tracker with a camera-and-radar forward vehicle tracker system, which is especially useful in dense traffic situations. We introduce an image template occupancy matching method with the integrated vehicle tracker that makes it possible to avoid extracting irrelevant lane features caused by forward target vehicles and road markers. Second, we present robust multi-lane detection by a tracking algorithm that includes adjacent lanes as well as the ego lane. We performed a comprehensive experimental evaluation with a real dataset comprising problematic road scenarios. The experimental results show that the proposed method is highly reliable for multi-lane detection in the presented difficult situations.
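The idea of discarding lane-candidate features that fall inside tracked forward-vehicle regions can be sketched as follows; the data layout and function name are assumptions for illustration, not the paper's implementation:

```python
def mask_vehicle_regions(points, boxes):
    """Discard lane-candidate points that fall inside tracked forward-
    vehicle bounding boxes -- a simplified stand-in for occupancy-based
    filtering. Points are (x, y); boxes are (x_min, y_min, x_max, y_max)
    in image coordinates."""
    def inside(p, b):
        return b[0] <= p[0] <= b[2] and b[1] <= p[1] <= b[3]
    return [p for p in points if not any(inside(p, b) for b in boxes)]

candidates = [(10, 40), (55, 60), (90, 40)]   # extracted lane features
vehicle_boxes = [(50, 50, 70, 80)]            # one tracked forward vehicle
kept = mask_vehicle_regions(candidates, vehicle_boxes)  # -> [(10, 40), (90, 40)]
```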


Electronics ◽  
2019 ◽  
Vol 8 (1) ◽  
pp. 33 ◽  
Author(s):  
George K. Adam ◽  
Panagiotis A. Kontaxis ◽  
Lambros T. Doulos ◽  
Evangelos-Nikolaos D. Madias ◽  
Constantinos A. Bouroussis ◽  
...  

Although the advent of LEDs can reduce lighting energy consumption in buildings by 50%, lighting controls offer further potential for energy savings. Moreover, lighting controls can help meet the EU near-zero-energy requirements for near zero energy buildings (nZEBs). For this reason, more sophisticated lighting controls must be proposed in order to take full advantage of LEDs and their dimming flexibility. This paper proposes the architecture of an embedded computer camera controller for monitoring and management of image data, which is applied in various control cases, particularly in digitally controlled lighting devices. The proposed system deals with real-time monitoring and management of a GigE camera input. An in-house algorithm developed in MATLAB identifies image areas by their luminance values. The embedded microcontroller is part of a complete lighting control system with an imaging sensor that measures and controls the illumination of several working areas of a room. The power consumption of the proposed lighting system was measured and compared with that of a typical photosensor. The functional performance and operation of the proposed camera control system architecture were evaluated on a BeagleBone Black microcontroller board.
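The control step, mapping camera-measured region luminance to a per-region dimming command, might look like the following toy sketch (the proportional rule and values are assumptions, not the authors' algorithm):

```python
def dimming_level(region_luminances, target_lum):
    """Per-region dimming command in [0, 1] from camera-derived
    luminance: reduce output in regions that exceed the target
    luminance, and drive dark regions at full output."""
    levels = []
    for lum in region_luminances:
        if lum <= 0:
            levels.append(1.0)              # dark region: full output
        else:
            levels.append(max(0.0, min(1.0, target_lum / lum)))
    return levels

# Three working areas measured at 100, 250, and 0 cd/m^2; target 125.
cmds = dimming_level([100.0, 250.0, 0.0], 125.0)  # -> [1.0, 0.5, 1.0]
```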

