UAV Photogrammetry under Poor Lighting Conditions—Accuracy Considerations

Sensors ◽  
2021 ◽  
Vol 21 (10) ◽  
pp. 3531
Author(s):  
Pawel Burdziakowski ◽  
Katarzyna Bobkowska

The use of low-level photogrammetry is very broad, and studies in this field are conducted in many aspects. Most research and applications are based on image data acquired during the day, which seems natural and obvious. However, the authors of this paper draw attention to the potential and possible use of UAV photogrammetry during the darker time of the day. The potential of night-time images has not yet been widely recognized, since correct scene lighting, or the lack of scene light sources, is an obvious issue. The authors have developed typical day- and night-time photogrammetric models. They also present an extensive analysis of the geometry, indicate which element of the process had the greatest impact on degrading the night-time photogrammetric product, and identify which measurable factor directly correlates with image accuracy. The reduction in geometric quality during the night-time tests was strongly affected by the non-uniform distribution of GCPs within the study area. The calibration of non-metric cameras is sensitive to poor lighting conditions, which leads to higher determination errors for each intrinsic orientation and distortion parameter. As evidenced, uniformly illuminated photos can be used to construct a model with lower reprojection error, and each tie point exhibits greater precision. Furthermore, the authors evaluate whether commercial photogrammetric software can reach acceptable image quality and whether the digital camera type impacts interpretative quality. The paper concludes with an extended discussion, conclusions, and recommendations for night-time studies.
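The link between calibration quality and tie-point precision can be illustrated with a simple reprojection-error check. The sketch below is not the authors' pipeline; it uses OpenCV with purely hypothetical intrinsics, distortion coefficients, and tie points to show how the per-image RMS reprojection error reported by photogrammetric software is computed.

```python
import numpy as np
import cv2

# Hypothetical intrinsics and distortion of a non-metric UAV camera
# (illustrative values only, not figures from the paper).
K = np.array([[3600.0, 0.0, 2432.0],
              [0.0, 3600.0, 1824.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.05, 0.02, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3

def reprojection_rmse(object_points, image_points, rvec, tvec):
    """RMS reprojection error of 3D tie points for one image."""
    projected, _ = cv2.projectPoints(object_points, rvec, tvec, K, dist)
    residuals = projected.reshape(-1, 2) - image_points
    return float(np.sqrt((residuals ** 2).sum(axis=1).mean()))

# Simulated 3D tie points (metres) and their measured pixel positions.
obj = (np.random.rand(50, 3) * 20.0).astype(np.float32)
rvec = np.zeros(3)
tvec = np.array([0.0, 0.0, 60.0])
img, _ = cv2.projectPoints(obj, rvec, tvec, K, dist)
img = img.reshape(-1, 2) + np.random.randn(50, 2) * 0.8  # simulated measurement noise

print("RMS reprojection error [px]:", reprojection_rmse(obj, img, rvec, tvec))
```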

Chronos ◽  
2018 ◽  
Vol 32 ◽  
pp. 119-132 ◽  
Author(s):  
Dorina Moullou ◽  
Lambros T Doulos ◽  
Frangiskos V Topalis

The assessment of the performance of ancient lighting devices may provide scholars with valuable information on the lighting conditions that existed in ancient houses and on the level of visual comfort created by the use of those devices. Consequently, it also provides valuable information on the feasibility of activities performed during night-time. This paper focuses on the investigation of the performance of lighting devices, namely lamps and candles, used in Greece during the Roman, Byzantine, and Post-Byzantine eras. At the same time, we provide non-lighting specialists (e.g. archaeologists) with a tool to assess the performance of the lighting devices they study, as well as to estimate the amount of light falling on a surface of interest.
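For a back-of-the-envelope version of such an estimate, the illuminance produced on a surface by a small source of known luminous intensity follows the inverse-square cosine law, E = I·cos(θ)/d². The sketch below is a generic illustration, not the paper's tool; the intensity value for the lamp is a hypothetical placeholder, not a measured figure.

```python
import math

def illuminance(intensity_cd: float, distance_m: float, incidence_deg: float) -> float:
    """Illuminance (lux) on a surface from a point-like source:
    E = I * cos(theta) / d^2 (inverse-square cosine law)."""
    return intensity_cd * math.cos(math.radians(incidence_deg)) / distance_m ** 2

# Hypothetical single-wick oil lamp of ~0.1 cd, read at 0.5 m, with the light
# striking the work surface at 45 degrees (illustrative numbers only).
print(f"{illuminance(0.1, 0.5, 45.0):.2f} lx")
```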


Author(s):  
Xiaolin Tang ◽  
Xiaogang Wang ◽  
Jin Hou ◽  
Huafeng Wu ◽  
Ping He

Introduction: Under complex illumination conditions, such as weak light sources or rapidly changing light, the current gamma transform has two disadvantages when preprocessing face images: first, the transformation parameters must be set based on experience; second, the details of the transformed image are not distinct enough. Objective: To improve the current gamma transform. Methods: This paper proposes a weighted fusion algorithm combining an adaptive gamma transform with edge feature extraction. First, it proposes an adaptive gamma transform for face image preprocessing, in which the transformation parameter is computed from the gray values of the input face image. Second, it applies the Sobel edge detection operator to the transformed image to obtain an edge detection image. Finally, it fuses the adaptively transformed image and the edge detection image with a weighted fusion algorithm to obtain the final result. Results: The contrast of the preprocessed face image is appropriate, and the details of the image are clear. Conclusion: The proposed method can enhance the face image while retaining more facial detail, requires no human-computer interaction, and has lower computational complexity.
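A minimal sketch of this kind of pipeline (adaptive gamma derived from the image's own mean gray value, a Sobel edge map, and a weighted fusion) using OpenCV and NumPy. The gamma heuristic (mapping the mean gray to mid-gray) and the fusion weights below are assumptions for illustration, not the authors' exact parameters.

```python
import cv2
import numpy as np

def adaptive_gamma(gray: np.ndarray) -> np.ndarray:
    """Gamma correction whose exponent is derived from the image itself:
    gamma is chosen so the mean gray level maps to mid-gray, so dark images
    (mean < 0.5) get gamma < 1 and are brightened (assumed heuristic)."""
    mean = gray.mean() / 255.0
    gamma = np.log(0.5) / np.log(mean) if 0.0 < mean < 1.0 else 1.0
    normalized = gray.astype(np.float32) / 255.0
    return np.clip(normalized ** gamma * 255.0, 0, 255).astype(np.uint8)

def sobel_edges(gray: np.ndarray) -> np.ndarray:
    """Gradient magnitude from the Sobel operator, rescaled to 8 bits."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    mag = cv2.magnitude(gx, gy)
    return cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

def enhance_face(gray: np.ndarray, w_gamma: float = 0.8, w_edge: float = 0.2) -> np.ndarray:
    """Weighted fusion of the gamma-corrected image and its edge map."""
    corrected = adaptive_gamma(gray)
    edges = sobel_edges(corrected)
    return cv2.addWeighted(corrected, w_gamma, edges, w_edge, 0)

# face = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)
# result = enhance_face(face)
```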


2019 ◽  
Vol 11 (10) ◽  
pp. 1157 ◽  
Author(s):  
Jorge Fuentes-Pacheco ◽  
Juan Torres-Olivares ◽  
Edgar Roman-Rangel ◽  
Salvador Cervantes ◽  
Porfirio Juarez-Lopez ◽  
...  

Crop segmentation is an important task in Precision Agriculture, where the use of aerial robots with an on-board camera has contributed to the development of new solution alternatives. We address the problem of fig plant segmentation in top-view RGB (Red-Green-Blue) images of a crop grown under difficult open-field circumstances: complex lighting conditions and non-ideal crop maintenance practices defined by local farmers. We present a Convolutional Neural Network (CNN) with an encoder-decoder architecture that classifies each pixel as crop or non-crop using only raw colour images as input. Our approach achieves a mean accuracy of 93.85% despite the complexity of the background and the highly variable visual appearance of the leaves. We make our CNN code available to the research community, along with the aerial image data set and a hand-made ground-truth segmentation with pixel precision, to facilitate comparison among different algorithms.
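As a rough illustration of the kind of encoder-decoder that performs per-pixel crop/non-crop classification from raw RGB input, here is a generic sketch in PyTorch. It is not the authors' released architecture; layer sizes and the input resolution are placeholder choices.

```python
import torch
import torch.nn as nn

class TinyEncoderDecoder(nn.Module):
    """Small encoder-decoder mapping an RGB image to a per-pixel
    crop / non-crop probability map (illustrative, not the paper's network)."""
    def __init__(self):
        super().__init__()
        def block(c_in, c_out):
            return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1),
                                 nn.BatchNorm2d(c_out), nn.ReLU(inplace=True))
        self.encoder = nn.Sequential(block(3, 16), nn.MaxPool2d(2),
                                     block(16, 32), nn.MaxPool2d(2),
                                     block(32, 64))
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, 1))  # one logit per pixel

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinyEncoderDecoder()
rgb = torch.rand(1, 3, 256, 256)          # raw colour image, no hand-crafted features
mask = torch.sigmoid(model(rgb)) > 0.5    # boolean crop mask at the input resolution
print(mask.shape)                         # torch.Size([1, 1, 256, 256])
```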


2012 ◽  
Vol 182-183 ◽  
pp. 2080-2084
Author(s):  
Jie Li ◽  
Xue Xiang Wang ◽  
Hao Liu

Auto white balance (AWB) is an important function of digital cameras. The purpose of white balance is to adjust the image so that it looks as if it was taken under standard lighting conditions. In this paper we present a new technique to detect the reference white point of an image. The technique detects the white point using a dynamic threshold method, making it more flexible and more widely applicable than other algorithms. We tested 50 images taken under different light sources and found that this algorithm is better than, or comparable to, other algorithms in both subjective and objective terms. At the same time, the algorithm has low complexity and can easily be applied in hardware implementations.
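A minimal sketch of a dynamic-threshold style white-point detection followed by per-channel gain correction, working in YCrCb and selecting bright, low-chroma pixels as the reference white. The chroma threshold rule and the brightness percentile below are assumptions for illustration, not the paper's exact criteria.

```python
import cv2
import numpy as np

def dynamic_threshold_awb(bgr: np.ndarray, top_fraction: float = 0.05) -> np.ndarray:
    """White balance by estimating the reference white from near-white pixels
    selected with an image-dependent (dynamic) threshold."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    y, cr, cb = cv2.split(ycrcb)

    # Candidate white points: low chroma deviation, with the threshold scaled by
    # the image's own chroma statistics rather than a fixed constant (assumed rule).
    chroma_dev = np.abs(cr - 128) + np.abs(cb - 128)
    candidates = chroma_dev < (chroma_dev.mean() + chroma_dev.std())

    # Keep only the brightest fraction of the candidates as the reference white.
    cutoff = np.percentile(y[candidates], 100 * (1 - top_fraction))
    white_mask = candidates & (y >= cutoff)

    # Von Kries style per-channel gains so the reference white becomes neutral gray.
    means = bgr[white_mask].mean(axis=0)
    gains = means.mean() / np.maximum(means, 1e-6)
    return np.clip(bgr.astype(np.float32) * gains, 0, 255).astype(np.uint8)

# balanced = dynamic_threshold_awb(cv2.imread("scene.jpg"))
```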


Author(s):  
Michael D. Kutzer ◽  
Levi D. DeVries ◽  
Cooper D. Blas

Additive manufacturing (AM) technologies have become almost universal in concept development, prototyping, and education. Advances in materials and methods continue to extend this technology to small batch and complex part manufacturing for the public and private sectors. Despite the growing popularity of digital cameras in AM systems, use of image data for part monitoring is largely unexplored. This paper presents a new method for estimating the 3D internal structure of fused deposition modeling (FDM) processes using image data from a single digital camera. Relative transformations are established using motion capture, and the 3D model is created using knowledge of the deposition path coupled with assumptions about the deposition cross-section. Results show that part geometry can be estimated and visualized using the methods presented in this work.
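The core geometric idea, sweeping an assumed deposition cross-section along the known toolpath, can be sketched as below. This is only an illustration of the cross-section assumption, not the authors' implementation; the path, bead width, and layer height are hypothetical values.

```python
import numpy as np

def sweep_cross_section(path_xyz: np.ndarray, width: float, height: float, n: int = 12):
    """Approximate the deposited material by sweeping an elliptical
    cross-section (width x height) along the deposition path.
    Returns one ring of vertices per path point (a crude surface mesh)."""
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    ring = np.stack([np.zeros(n), (width / 2) * np.cos(t), (height / 2) * np.sin(t)], axis=1)

    rings = []
    for i, p in enumerate(path_xyz):
        # Local frame: x along the path tangent, ring lying in the normal plane.
        nxt = path_xyz[min(i + 1, len(path_xyz) - 1)]
        prv = path_xyz[max(i - 1, 0)]
        tangent = nxt - prv
        tangent = tangent / (np.linalg.norm(tangent) + 1e-9)
        up = np.array([0.0, 0.0, 1.0])
        side = np.cross(up, tangent); side /= (np.linalg.norm(side) + 1e-9)
        up2 = np.cross(tangent, side)
        R = np.stack([tangent, side, up2], axis=1)  # columns = local axes in world frame
        rings.append(p + ring @ R.T)
    return np.array(rings)

# Hypothetical straight deposition segment, 0.8 mm bead width, 0.2 mm layer height.
path = np.stack([np.linspace(0, 10, 50), np.zeros(50), np.zeros(50)], axis=1)
mesh_rings = sweep_cross_section(path, width=0.8, height=0.2)
print(mesh_rings.shape)  # (50, 12, 3)
```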


2011 ◽  
Vol 383-390 ◽  
pp. 5193-5199 ◽  
Author(s):  
Jian Ying Yuan ◽  
Xian Yong Liu ◽  
Zhi Qiang Qiu

In an optical measuring system with a handheld digital camera, image point matching is very important for three-dimensional (3D) reconstruction. Traditional matching algorithms are usually based on epipolar geometry or multiple base lines. Mistaken matching points cannot be eliminated by epipolar geometry alone, and many matching points are lost with multi-baseline methods. In this paper, a robust algorithm is presented to eliminate mistaken matching feature points in the process of 3D reconstruction from multiple images. The algorithm includes three steps: (1) pre-matching the feature points using epipolar geometry and image topological structure constraints; (2) eliminating the mistaken matching points by the principle of triangulation across multiple images; (3) refining the camera external parameters by bundle adjustment. After the external parameters of every image are refined, steps (1) to (3) are repeated until all the feature points have been matched. Comparative experiments with real image data show that mistaken matching feature points can be effectively eliminated while almost no matching points are lost, a better performance than traditional matching algorithms.
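A minimal sketch of steps (1) and (2), epipolar pre-filtering followed by a triangulation/reprojection check, using OpenCV for a two-view case. The projection matrices and thresholds are illustrative assumptions; the paper's multi-image topological constraints and bundle adjustment (step 3) are not reproduced here.

```python
import cv2
import numpy as np

def epipolar_prefilter(pts1, pts2, max_dist_px=1.5):
    """Step (1): keep correspondences consistent with the epipolar geometry
    estimated by RANSAC on the fundamental matrix."""
    F, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, max_dist_px, 0.999)
    mask = inliers.ravel().astype(bool)
    return pts1[mask], pts2[mask], F

def triangulation_filter(P1, P2, pts1, pts2, max_reproj_px=2.0):
    """Step (2): triangulate each pre-matched pair and reject points whose
    reprojection error in either image exceeds a threshold."""
    X_h = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)        # 4 x N homogeneous
    X = (X_h[:3] / X_h[3]).T                                    # N x 3
    keep = []
    for P, pts in ((P1, pts1), (P2, pts2)):
        proj = (P @ np.hstack([X, np.ones((len(X), 1))]).T).T   # N x 3
        proj = proj[:, :2] / proj[:, 2:3]
        keep.append(np.linalg.norm(proj - pts, axis=1) < max_reproj_px)
    good = keep[0] & keep[1]
    return X[good], good

# pts1, pts2: N x 2 float32 pixel coordinates of candidate matches;
# P1, P2: 3 x 4 projection matrices from the current exterior orientation.
# After filtering, bundle adjustment (step 3) would refine the orientations and iterate.
```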


2021 ◽  
Author(s):  
Zhibin Gao ◽  
Sheng Zhang ◽  
Hsiaohsuan Fang ◽  
Lizhong Li ◽  
Lianfen Huang

2021 ◽  
pp. 30-38
Author(s):  
Sangita Sahana ◽  
Biswanath Roy

This paper presents variations in mesopic adaptation luminance in the presence of ambient light sources alongside the main light source for outdoor lighting applications. The mesopic photometry system is based on peripheral visual tasks, and the adaptation luminance is required to compute the effective mesopic radiance for the measured area. Different lighting conditions were considered to determine the effect of the chromaticity of bright surrounding sources, other than the main light sources, on the state of observer adaptation. The veiling luminance caused by the surrounding sources increases the state of observer adaptation, but not the luminance within the measurement field. It has also been observed that with cool-white surrounding sources the adaptation luminance increases significantly more than with warm-white sources.
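For context, mesopic luminance is commonly computed iteratively from the photopic and scotopic luminances of the same field, as in the CIE 191:2010 MES2 recommended system. The sketch below implements that generic iteration and is not necessarily the exact procedure used in the paper; the example luminances and S/P ratios are hypothetical.

```python
import math

def mesopic_luminance(l_phot: float, l_scot: float, tol: float = 1e-6) -> float:
    """Iterative mesopic luminance per the CIE 191:2010 MES2 system,
    from photopic and scotopic luminances (cd/m^2) of the same field."""
    a, b = 0.7670, 0.3334          # MES2 coefficients
    ratio = 683.0 / 1699.0         # scotopic weighting at the photopic peak (555 nm)
    m, l_mes = 0.5, l_phot
    for _ in range(100):
        l_new = (m * l_phot + (1 - m) * l_scot * ratio) / (m + (1 - m) * ratio)
        m = min(max(a + b * math.log10(l_new), 0.0), 1.0)
        if abs(l_new - l_mes) < tol:
            break
        l_mes = l_new
    return l_mes

# Hypothetical example: photopic luminance 1.0 cd/m^2 with a cool-white source
# (S/P ratio ~2.0) versus a warm-white source (S/P ratio ~1.2).
print(mesopic_luminance(1.0, 2.0), mesopic_luminance(1.0, 1.2))
```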


2020 ◽  
Vol 2020 (11) ◽  
pp. 234-1-234-6
Author(s):  
Nicolai Behmann ◽  
Holger Blume

LED flicker artefacts, caused by unsynchronized irradiation from a pulse-width-modulated LED light source captured by a digital camera sensor with discrete exposure times, place new requirements on both human-viewing and machine vision systems. While the latter only need to capture the relevant information from the light source in a limited number of frames (e.g. a flickering traffic light), human vision is sensitive to illumination modulation in viewing applications such as digital mirror replacement systems. In order to quantify flicker in viewing applications with KPIs related to human vision, we present a novel approach and the results of a psychophysics study on the effect of LED flicker artefacts. Diverse real-world driving sequences were captured with both mirror replacement cameras and a front viewing camera, and potential flicker light sources were masked manually. Synthetic flicker with adjustable parameters is then overlaid on these areas, and the flickering sequences are presented to test persons in a driving environment. Feedback from the testers on flicker perception for different viewing areas, sizes, and frequencies is collected and evaluated.
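The root cause can be illustrated numerically: the brightness a camera records from a PWM-driven LED is the fraction of the exposure window that overlaps the LED's on-phase, which drifts from frame to frame when the PWM and frame timing are unsynchronized. The sketch below uses hypothetical timing values, not parameters from the study.

```python
import numpy as np

def captured_brightness(frame_starts_s, exposure_s, pwm_freq_hz, duty, steps=10000):
    """Fraction of each exposure during which an unsynchronized PWM LED is on.
    Varies frame to frame when the exposure is short relative to the PWM period."""
    period = 1.0 / pwm_freq_hz
    levels = []
    for t0 in frame_starts_s:
        t = t0 + np.linspace(0.0, exposure_s, steps)
        on = ((t % period) / period) < duty      # on-phase of the PWM cycle
        levels.append(on.mean())
    return np.array(levels)

# Hypothetical setup: 30 fps camera, 1 ms exposure, 100 Hz LED PWM at 50 % duty.
frame_starts = np.arange(0, 1.0, 1 / 30)
levels = captured_brightness(frame_starts, 1e-3, 100.0, 0.5)
print(levels.round(2))   # per-frame brightness fluctuates -> visible flicker
```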


2018 ◽  
Vol 51 (7) ◽  
pp. 1128-1138
Author(s):  
R Lasauskaite ◽  
EM Hazelhoff ◽  
C Cajochen

Light exerts a number of non-image-forming effects that are mostly apparent during night-time but can also be seen during daytime. Recently, we showed that exposure to light of higher colour temperature prior to performing a cognitive task leads to a weaker effort-related cardiovascular response compared to exposure to light of lower colour temperature. The present study tested whether presenting light of different colour temperatures during, rather than before, task performance would lead to equivalent changes in effort mobilization. Participants performed a modified Sternberg short-term memory task for eight minutes; after the first four minutes, the lighting was switched to one of four experimental conditions (2800 K, 4000 K, 5000 K, or 6500 K) for the remaining four minutes. We predicted that the effort-related cardiovascular response would strengthen with decreasing colour temperature. The results, however, did not follow this predicted pattern. No significant effects of lighting conditions on subjective measures were observed either. Therefore, we conclude that four minutes might not be enough for light colour temperature to induce changes in effort-related cardiovascular response or to affect subjective ratings of sleepiness and lighting.

