Large-Scale Object Detection of Images from Network Cameras in Variable Ambient Lighting Conditions

Author(s):  
Caleb Tung ◽  
Matthew R. Kelleher ◽  
Ryan J. Schlueter ◽  
Binhan Xu ◽  
Yung-Hsiang Lu ◽  
...  
2020 ◽  
Vol 2020 (28) ◽  
pp. 94-99
Author(s):  
Mingkai Cao ◽  
Ming Ronnier Luo ◽  
Guoxiang Liu

A large-scale experiment was conducted to investigate facial image quality on mobile phones. There were 8 original facial images covering 4 skin tone types, with a male and a female image for each type. Each image was captured at 6500 K and then rendered to 5 CCT (correlated colour temperature) and 5 Duv (shift away from the blackbody locus) levels via the CAT02 chromatic adaptation transform, simulating capture under 25 different lighting conditions. Each image was assessed under 9 ambient lighting conditions (including one dark condition) by 90 observers from 3 ethnic groups (Caucasian, Chinese and South Asian), 30 observers per group. A preferred facial skin tone ellipse was established by maximizing the correlation coefficient between the model-predicted probability and the preference percentage from the visual results. The four types of preferred skin tones differed only slightly in hue angle and chroma and concentrated in a small colour region, about 24.7 and 46.1° for the C*ab and hab values respectively. All ethnic groups preferred images taken under illuminants with high CCT (6500-8000 K). It was also found that the chroma of the preferred skin tones slightly increases as the ambient lighting CCT decreases.
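For readers unfamiliar with the CAT02 step mentioned above, the sketch below shows a von Kries-style CAT02 chromatic adaptation in Python. The matrix is the standard CAT02 matrix; the white points and sample values are illustrative placeholders, not values from the study.

```python
# Minimal sketch of a CAT02 (von Kries-style) chromatic adaptation transform,
# of the kind used to re-render images for different lighting conditions.
# White points and the sample colour below are illustrative, not from the study.
import numpy as np

M_CAT02 = np.array([[ 0.7328, 0.4296, -0.1624],
                    [-0.7036, 1.6975,  0.0061],
                    [ 0.0030, 0.0136,  0.9834]])

def cat02_adapt(xyz, xyz_src_white, xyz_dst_white):
    """Adapt tristimulus values from a source to a destination white point."""
    rgb = M_CAT02 @ xyz                     # cone-like responses of the sample
    rgb_ws = M_CAT02 @ xyz_src_white        # source white responses
    rgb_wd = M_CAT02 @ xyz_dst_white        # destination white responses
    rgb_adapted = rgb * (rgb_wd / rgb_ws)   # von Kries scaling
    return np.linalg.inv(M_CAT02) @ rgb_adapted

# Example: adapt a sample from D65 (~6500 K) to a warmer white (illuminant A).
xyz_d65 = np.array([95.047, 100.0, 108.883])
xyz_warm = np.array([109.85, 100.0, 35.58])
print(cat02_adapt(np.array([41.24, 21.26, 1.93]), xyz_d65, xyz_warm))
```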


Author(s):  
Stanley N. Roscoe ◽  
Scott G. Hasler ◽  
Dora J. Dougherty

The proficiency with which pilots can make takeoffs and landings using a periscope as the only source of outside visibility was studied under various conditions of flight. A detailed determination was made of the effects of variations in image magnification upon landing accuracy. Speed of transition to flight by periscope was related to flight experience. Effects of various weather, runway surface, and ambient lighting conditions upon flight by periscope were investigated.


Author(s):  
Hilary Lam ◽  
Sayf Gani ◽  
Randy Mawson ◽  
Jason Young ◽  
Erin Potma

Nighttime visibility is an important consideration in collision reconstruction and personal injury investigation. Decreased contrast in low ambient lighting conditions can greatly affect human perception and response. Because ambient lighting levels change rapidly at dawn and dusk, forensic investigators must have accurate knowledge of the time of day and the cloud conditions at the time of the incident before initiating a nighttime visibility assessment. Previously, human factors experts attempting re-enactments at dawn or dusk had to wait for sky conditions that matched those at the time of the incident, making the investigation of such cases extremely difficult, if not infeasible. In this study, an ambient illumination equivalency tool was developed based on a database of time-lapse light meter readings collected by the authors. This new tool can be used to facilitate nighttime visibility assessments on any day by providing a time adjustment factor that accounts for changes in ambient illuminance due to differences in cloud conditions between the day of the incident and the day of the re-enactment.
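The equivalency idea can be illustrated with a small Python sketch: given illuminance-vs-time logs for the incident day and the re-enactment day, find the re-enactment time whose illuminance matches the incident-time illuminance. The function name and the exponential dusk curves below are assumptions for illustration, not the authors' database or tool.

```python
# Hypothetical sketch of an ambient-illuminance equivalency lookup. The arrays
# are made-up dusk curves, not the authors' light-meter database.
import numpy as np

def equivalent_time(t_incident, t_log_a, lux_log_a, t_log_b, lux_log_b):
    """Return the time on day B with the same illuminance as day A at t_incident.

    Times are minutes after a common reference (e.g. local sunset); the logs are
    assumed monotonic over the dusk window so interpolation is well defined.
    """
    lux_target = np.interp(t_incident, t_log_a, lux_log_a)
    # Invert day B's curve: interpolate time as a function of illuminance.
    order = np.argsort(lux_log_b)
    return np.interp(lux_target, np.array(lux_log_b)[order], np.array(t_log_b)[order])

# Overcast incident day vs. clear re-enactment day (illustrative values, in lux).
t = np.arange(0, 60, 5)                     # minutes after sunset
lux_overcast = 400 * np.exp(-t / 12.0)
lux_clear = 700 * np.exp(-t / 12.0)
print(equivalent_time(20, t, lux_overcast, t, lux_clear))   # roughly 27 minutes
```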


2021 ◽  
Vol 2057 (1) ◽  
pp. 012087
Author(s):  
S V Dvoynishnikov ◽  
V O Zuev ◽  
I K Kabardin ◽  
D V Kulikov ◽  
V V Rahmanov

Abstract This work aims at creating a universal software package for developing and testing triangulation methods that use structured lighting to measure the three-dimensional geometry of objects under difficult ambient lighting conditions. As a result, a software package meeting the stated requirements was created. Lighting is based on the Phong model. A method for preloading objects is implemented to optimize the operation of the software package. An accelerated method for creating shadow maps is proposed and implemented. The developed software package is shown to successfully perform all required functions.
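As a point of reference for the lighting step, the following is a minimal Python sketch of the Phong reflection model evaluated at a single surface point; the material and light parameters are illustrative and not taken from the described software package.

```python
# Minimal sketch of the Phong reflection model for one surface point.
# Material and light parameters are illustrative placeholders.
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def phong_intensity(n, l, v, ka=0.1, kd=0.7, ks=0.4, shininess=32,
                    i_ambient=1.0, i_light=1.0):
    """Ambient + diffuse + specular intensity at a surface point."""
    n, l, v = normalize(n), normalize(l), normalize(v)
    diff = max(np.dot(n, l), 0.0)
    r = 2.0 * np.dot(n, l) * n - l           # reflection of the light direction
    spec = max(np.dot(r, v), 0.0) ** shininess if diff > 0 else 0.0
    return ka * i_ambient + kd * diff * i_light + ks * spec * i_light

# Surface facing up, light and camera both above at an angle.
print(phong_intensity(n=np.array([0, 0, 1.0]),
                      l=np.array([0, 1, 1.0]),
                      v=np.array([1, 0, 1.0])))
```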


2021 ◽  
Author(s):  
Da-Ren Chen ◽  
Wei-Min Chiu

Abstract Machine learning techniques have been used to increase the detection accuracy of cracks in road surfaces. Most studies fail to consider variable illumination conditions on the target of interest (ToI) and focus only on detecting the presence or absence of road cracks. This paper proposes a new road crack detection method, IlumiCrack, which integrates Gaussian mixture models (GMM) and object detection CNN models. This work provides the following contributions: 1) For the first time, a large-scale road crack image dataset covering a range of illumination conditions (e.g., day and night) is prepared using a dashcam. 2) Based on GMM, experimental evaluations on 2 to 4 levels of brightness are conducted to find the optimal classification. 3) The IlumiCrack framework integrates state-of-the-art object detection methods with CNNs to classify road crack images into eight types with high accuracy. Experimental results show that IlumiCrack outperforms state-of-the-art R-CNN object detection frameworks.
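A hypothetical sketch of the GMM brightness-level step might look like the following, clustering frames by mean grey value with scikit-learn; the file names and the choice of three components are assumptions, not details from the paper.

```python
# Hypothetical sketch: cluster dashcam frames into brightness levels using a
# Gaussian mixture over mean grey value. File names are placeholders.
import numpy as np
import cv2
from sklearn.mixture import GaussianMixture

def mean_brightness(path):
    grey = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    return float(grey.mean())

paths = ["frame_0001.jpg", "frame_0002.jpg", "frame_0003.jpg"]  # placeholder files
features = np.array([[mean_brightness(p)] for p in paths])

gmm = GaussianMixture(n_components=3, random_state=0).fit(features)
levels = gmm.predict(features)              # brightness cluster (0..2) per frame
print(dict(zip(paths, levels)))
```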


Author(s):  
Limu Chen ◽  
Ye Xia ◽  
Dexiong Pan ◽  
Chengbin Wang

Deep-learning based navigational object detection is discussed with respect to an active monitoring system for anti-collision between vessels and bridges. The motion-based object detection methods widely used in existing anti-collision monitoring systems are inadequate for complicated and changeable waterways because of their limitations in accuracy, robustness and efficiency. The proposed video surveillance system contains six modules: image acquisition, detection, tracking, prediction, risk evaluation and decision-making; the detection module is discussed in detail. A vessel-exclusive dataset with a large number of image samples is established for neural network training, and an SSD (Single Shot MultiBox Detector) based object detection model with both universality and pertinence is generated through sample filtering, data augmentation and large-scale optimization, making it capable of stable and intelligent vessel detection. Comparison with conventional methods indicates that the proposed deep-learning method shows remarkable advantages in robustness, accuracy, efficiency and intelligence. An in-situ test was carried out at Songpu Bridge in Shanghai, and the results illustrate that the method is qualified for long-term monitoring and provides information support for further analysis and decision making.
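For illustration of the detection step only, the sketch below runs an off-the-shelf SSD model from torchvision on a single frame; the authors trained their own vessel-exclusive SSD, and the image path here is a placeholder.

```python
# Illustrative sketch of SSD inference with torchvision's ssd300_vgg16
# (COCO-pretrained). Not the authors' vessel-exclusive model; the image path
# is a placeholder.
import torch
import torchvision
from torchvision.io import read_image
from torchvision.transforms.functional import convert_image_dtype

model = torchvision.models.detection.ssd300_vgg16(weights="DEFAULT").eval()

frame = convert_image_dtype(read_image("bridge_camera_frame.jpg"), torch.float)
with torch.no_grad():
    detections = model([frame])[0]          # boxes, labels, scores for one image

keep = detections["scores"] > 0.5           # simple confidence threshold
print(detections["boxes"][keep], detections["labels"][keep])
```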


Biology Open ◽  
2020 ◽  
Vol 9 (10) ◽  
pp. bio054452 ◽  
Author(s):  
Evgenia K. Karpova ◽  
Evgenii G. Komyshev ◽  
Mikhail A. Genaev ◽  
Natalya V. Adonyeva ◽  
Dmitry A. Afonnikov ◽  
...  

ABSTRACT A method for automating imago counting and fecundity assessment in Drosophila using mobile devices running the Android operating system is proposed. The traditional manual method of counting the progeny takes a long time and limits the opportunity to conduct large-scale experiments. Thus, the development of computerized methods that automatically produce a quantitative estimate of Drosophila melanogaster fecundity is an urgent requirement. We offer a modification of the mobile application SeedCounter, which analyzes images of objects placed on a standard sheet of paper, for automatic counting of D. melanogaster offspring or quantification of adult flies in any other kind of experiment. The relative average error in the number of flies estimated by the mobile app is about 2% compared with manual counting, and the processing time is six times shorter. A study of the effects of imaging conditions on the accuracy of fly counting showed that lighting conditions do not significantly affect this parameter, and that higher accuracy can be achieved using high-resolution smartphone cameras (8 Mpx and more). These results indicate the high accuracy and efficiency of the suggested method. This article has an associated First Person interview with the first author of the paper.
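The counting idea can be sketched with basic OpenCV operations, thresholding dark objects on a white sheet and counting connected components; this is not the SeedCounter algorithm, and the file name and area bounds are assumptions.

```python
# Rough sketch of counting dark objects (e.g. flies) on a white sheet of paper.
# Not the SeedCounter app's algorithm; file name and area bounds are assumed.
import cv2

img = cv2.imread("flies_on_paper.jpg", cv2.IMREAD_GRAYSCALE)
blur = cv2.GaussianBlur(img, (5, 5), 0)
# Otsu threshold, inverted so that dark flies become foreground.
_, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

n_labels, _, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
# Skip label 0 (background) and filter out specks and paper edges by area.
count = sum(1 for i in range(1, n_labels)
            if 20 <= stats[i, cv2.CC_STAT_AREA] <= 5000)
print(f"Estimated fly count: {count}")
```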


2020 ◽  
Vol 12 (18) ◽  
pp. 3053 ◽  
Author(s):  
Thorsten Hoeser ◽  
Felix Bachofer ◽  
Claudia Kuenzer

In Earth observation (EO), large-scale land-surface dynamics are traditionally analyzed by investigating aggregated classes. The increase in data with a very high spatial resolution enables investigations on a fine-grained feature level which can help us to better understand the dynamics of land surfaces by taking object dynamics into account. To extract fine-grained features and objects, the most popular deep-learning model for image analysis is commonly used: the convolutional neural network (CNN). In this review, we provide a comprehensive overview of the impact of deep learning on EO applications by reviewing 429 studies on image segmentation and object detection with CNNs. We extensively examine the spatial distribution of study sites, employed sensors, used datasets and CNN architectures, and give a thorough overview of applications in EO which used CNNs. Our main finding is that CNNs are in an advanced transition phase from computer vision to EO. Upon this, we argue that in the near future, investigations which analyze object dynamics with CNNs will have a significant impact on EO research. With a focus on EO applications in this Part II, we complete the methodological review provided in Part I.

