camera sensors
Recently Published Documents

TOTAL DOCUMENTS: 113 (five years: 51)
H-INDEX: 11 (five years: 3)

Author(s):  
Evangelos Alevizos ◽  
Athanasios V Argyriou ◽  
Dimitris Oikonomou ◽  
Dimitrios D Alexakis

Shallow bathymetry inversion algorithms have long been applied to various types of remote sensing imagery with relative success. However, this approach requires imagery with increased radiometric resolution in the visible spectrum. Recent developments in drones and camera sensors allow current inversion techniques to be tested on new types of datasets. This study explores the bathymetric mapping capabilities of fused RGB and multispectral imagery as an alternative to costly hyperspectral sensors. Combining drone-based RGB and multispectral imagery into a single cube dataset provides the radiometric detail necessary for shallow bathymetry inversion. The technique is based on commercial and open-source software and, in contrast to other approaches, does not require reference depth measurements as input. The robustness of the method was tested on three coastal sites with contrasting seafloor types. Suitable end-member spectra representative of the seafloor types of the study area, together with the sun zenith angle, are important parameters in model tuning. The results show good correlation (R² > 0.7) and less than half a metre of error when compared with sonar depth data. Consequently, integrating various types of drone-based imagery can produce centimetre-resolution bathymetry maps at low cost for small-scale shallow areas.
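As a rough illustration of the band-stacking step described above (not the authors' actual pipeline), the following Python sketch stacks co-registered drone RGB and multispectral orthomosaics into a single cube using rasterio; the file names, band counts, and the assumption of identical grids are all hypothetical.

```python
import numpy as np
import rasterio

# Hypothetical, co-registered orthomosaics sharing the same grid (assumption).
RGB_PATH = "drone_rgb.tif"        # 3 bands
MULTISPEC_PATH = "drone_ms.tif"   # e.g. 5 bands (blue, green, red, red-edge, NIR)

with rasterio.open(RGB_PATH) as rgb_src, rasterio.open(MULTISPEC_PATH) as ms_src:
    rgb = rgb_src.read().astype("float32")  # shape: (3, rows, cols)
    ms = ms_src.read().astype("float32")    # shape: (5, rows, cols)
    profile = rgb_src.profile

# Stack all bands into one "cube" so the inversion sees a single multi-band image.
cube = np.concatenate([rgb, ms], axis=0)     # shape: (8, rows, cols)

# Write the fused cube back to disk for the bathymetry inversion step.
profile.update(count=cube.shape[0], dtype="float32")
with rasterio.open("fused_cube.tif", "w", **profile) as dst:
    dst.write(cube)
```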


2021 ◽  
Author(s):  
Sabeeha Mehtab ◽  
Wei Qi Yan ◽  
Ajit Narayanan

2021 ◽  
Vol 2120 (1) ◽  
pp. 012026
Author(s):  
J C Ho ◽  
S K Phang ◽  
H K Mun

Abstract Unmanned aerial vehicles (UAVs) are widely used in many industries today, such as the military, agriculture, and surveillance. However, one of the main challenges for a UAV is navigating through an environment where the global positioning system (GPS) is denied. The purpose of this paper is to find a solution that allows a UAV to navigate in a GPS-denied environment without affecting its flight performance. Two common ways to overcome this challenge are visual odometry (VO) and simultaneous localization and mapping (SLAM). VO has a drawback, however, because camera sensors require good lighting, which degrades performance when the UAV navigates through a low-light environment. Hence, this paper uses 2-D SLAM with a light detection and ranging (LIDAR) sensor, known as LIDAR-based SLAM, to help the UAV navigate in a GPS-denied environment, since SLAM allows the UAV to localize itself and map its surroundings. The approach is fully simulated in MATLAB: the drone navigation is simulated to extract LIDAR data, which is then used to carry out SLAM via pose graph optimization. This work also identifies that, in pose graph optimization, the loop closure threshold and the loop closure search radius play important roles: the loop closure threshold affects the accuracy of the drone trajectory and of the mapped environment compared to ground truth, while the loop closure search radius increases the speed of obtaining the result from pose graph optimization. The main contribution of this work is showing that the processing speed can increase by up to 45% while the estimated trajectory and the mapped surroundings remain close to ground truth.
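The two tuning parameters highlighted in the abstract can be pictured with a small Python sketch (the original work uses MATLAB's pose graph tools; this is only an illustration of the gating idea, and match_score is a hypothetical scan-matching function): candidate loop closures are first limited to poses within the search radius, and only matches whose score exceeds the threshold become loop-closure edges.

```python
import numpy as np

def find_loop_closures(poses, scans, match_score,
                       search_radius=8.0, score_threshold=200.0):
    """Return (i, j) index pairs of poses accepted as loop closures.

    poses: (N, 3) array of [x, y, theta] estimates from scan matching.
    scans: list of N LIDAR scans, one per pose.
    match_score: hypothetical scan-matching function returning a similarity score.
    """
    closures = []
    for i in range(len(poses)):
        for j in range(i):
            # Search radius: only poses already estimated to be close are
            # considered, which shrinks the candidate set and speeds things up.
            dist = np.hypot(poses[i, 0] - poses[j, 0], poses[i, 1] - poses[j, 1])
            if dist > search_radius:
                continue
            # Threshold: only high-confidence scan matches become loop-closure
            # edges; too low a threshold lets bad edges distort the optimized
            # trajectory, too high a threshold misses real revisits.
            if match_score(scans[i], scans[j]) >= score_threshold:
                closures.append((i, j))
    return closures
```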


2021 ◽  
Vol 2042 (1) ◽  
pp. 012114
Author(s):  
Dongjun Mah ◽  
Michael Kim ◽  
Athanasios Tzempelikos

Abstract This study presents the concept of integrating programmable low-cost cameras into the office infrastructure and the building management system (BMS) for real-time, web-based sensing and control of the luminous environment in buildings. Experiments were conducted to evaluate the potential of predicting the luminance field perceived by an office occupant using a programmable, calibrated HDR camera installed on the rear side of a computer monitor or on the wall behind the occupant, for a variety of sky conditions and shading options. The luminance maps generated with Python scripts and OpenCV packages were further processed to extract daylighting and glare metrics using Evalglare. The results showed that: (i) among the camera resolutions compared, 330x330 was the best option for balancing accurate capture of the visual environment and comfort against computational efficiency; (ii) a camera sensor embedded on the rear side of a computer screen captured interior visual conditions consistently similar to those viewed by the occupant, except under sunny conditions without proper shading protection. This prototype study paves the way for luminance monitoring and daylight control using programmable low-cost camera sensors embedded in the office infrastructure.
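A rough Python/OpenCV sketch of the kind of processing described (merging bracketed exposures into an HDR image, converting it to a luminance map, and downsampling to 330x330) is shown below; the file names, exposure times, and the scale factor are assumptions, and a real deployment would require photometric calibration of the camera.

```python
import cv2
import numpy as np

# Hypothetical bracketed exposures from the camera behind the monitor (assumption).
paths = ["exp_1_250.jpg", "exp_1_60.jpg", "exp_1_15.jpg", "exp_1_4.jpg"]
times = np.array([1 / 250, 1 / 60, 1 / 15, 1 / 4], dtype=np.float32)  # seconds
images = [cv2.imread(p) for p in paths]

# Recover the camera response curve and merge the exposures into an HDR radiance map.
calibrate = cv2.createCalibrateDebevec()
response = calibrate.process(images, times)
merge = cv2.createMergeDebevec()
hdr = merge.process(images, times, response)  # float32, BGR, relative radiance

# Approximate luminance from the linear channels (BGR order in OpenCV); the scale
# factor k that maps relative values to cd/m^2 must come from calibration against
# a luminance meter (placeholder value here).
k = 179.0
luminance = k * (0.0722 * hdr[:, :, 0] + 0.7152 * hdr[:, :, 1] + 0.2126 * hdr[:, :, 2])

# Downsample to the 330x330 resolution reported as the best accuracy/speed trade-off.
luminance_330 = cv2.resize(luminance, (330, 330), interpolation=cv2.INTER_AREA)
```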


Sensors ◽  
2021 ◽  
Vol 21 (20) ◽  
pp. 6733
Author(s):  
Min-Joong Kim ◽  
Sung-Hun Yu ◽  
Tong-Hyun Kim ◽  
Joo-Uk Kim ◽  
Young-Min Kim

Today, a great deal of research on autonomous driving technology is being conducted, and various vehicles with autonomous driving functions, such as ACC (adaptive cruise control), are being released. An autonomous vehicle recognizes obstacles ahead by fusing data from various sensors, such as lidar, radar, and camera sensors. As the number of vehicles equipped with such autonomous driving functions increases, securing safety and reliability becomes a major issue. Recently, Mobileye proposed the RSS (responsibility-sensitive safety) model, a white-box mathematical model, to secure the safety of autonomous vehicles and to clarify responsibility in the case of an accident. This paper considers a method of applying the RSS model to a variable-focus camera that can cover the recognition ranges of a lidar sensor and a radar sensor with a single camera sensor. The RSS model variables suitable for the variable-focus camera were defined, their values were determined, and the safe distances for each velocity were derived by applying those values. In addition, taking into account the time required to obtain the data and the time required to change the focal length of the camera, the response time obtained from the derived safe distance was confirmed to be valid.
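For reference, the published RSS longitudinal safe-distance formula that underlies this kind of derivation can be written as a short Python function; the parameter values in the usage line are illustrative assumptions, not the values determined in the paper.

```python
def rss_longitudinal_safe_distance(v_rear, v_front, rho,
                                   a_max_accel, a_min_brake, a_max_brake):
    """Minimum safe following distance from the RSS model.

    v_rear, v_front : speeds of the following and leading vehicle [m/s]
    rho             : response time of the following vehicle [s]
    a_max_accel     : max acceleration of the follower during the response time [m/s^2]
    a_min_brake     : minimum braking the follower is guaranteed to apply [m/s^2]
    a_max_brake     : maximum braking the leader might apply [m/s^2]
    """
    v_after_response = v_rear + rho * a_max_accel
    d = (v_rear * rho
         + 0.5 * a_max_accel * rho ** 2
         + v_after_response ** 2 / (2 * a_min_brake)
         - v_front ** 2 / (2 * a_max_brake))
    return max(d, 0.0)  # the safe distance is never negative


# Illustrative values only: both vehicles at 20 m/s, 0.5 s response time.
print(rss_longitudinal_safe_distance(20.0, 20.0, 0.5,
                                     a_max_accel=3.0, a_min_brake=4.0, a_max_brake=8.0))
```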


2021 ◽  
Author(s):  
Alessandro Avi ◽  
Matteo Zuccatti ◽  
Matteo Nardello ◽  
Nicola Conci ◽  
Davide Brunelli

2021 ◽  
Vol 17 (3) ◽  
Author(s):  
Aaron Abel ◽  
Suci Aulia ◽  
Dadan Nur Ramadhan ◽  
Sugondo Hadiyoso

An automatic parking system is proposed to make the car parking process more efficient in terms of time and cost. Without information on the positions of free parking spaces, drivers take longer to find them. In multi-storey parking lots, officers cannot constantly monitor parking availability directly, so prospective users do not know where the open spaces are. In addition, many parking lots use automatic gate barriers but provide no display of parking space information. Parking system automation can be based on hardware, software, or a combination of the two; to the best of our knowledge, no purely software-based framework has been used for this task. Therefore, this study proposes an automatic parking system based on camera sensors and software, combined into an information system. The proposed method uses simple morphological operations. Based on the test results, the detection accuracy achieved is 100% at light intensities of 3, 15, 30, 60, 120, and 250 lux, with an average processing time of 1.59 seconds. It is hoped that this prototype can be tested under realistic environmental conditions so that it can be deployed in parking lots.
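The abstract does not detail the image-processing chain, but a minimal Python/OpenCV sketch of occupancy detection with simple morphological operations might look like the following; the space coordinates, thresholds, and use of an empty-lot reference image are assumptions for illustration only.

```python
import cv2
import numpy as np

# Hypothetical parking-space regions as (x, y, width, height) in the camera frame.
SPACES = [(40, 60, 110, 220), (170, 60, 110, 220), (300, 60, 110, 220)]
KERNEL = np.ones((5, 5), np.uint8)

def occupied_spaces(frame_bgr, empty_bgr, diff_thresh=40, fill_ratio=0.15):
    """Compare the live frame against a reference image of the empty lot."""
    diff = cv2.absdiff(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY),
                       cv2.cvtColor(empty_bgr, cv2.COLOR_BGR2GRAY))
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    # Morphological opening removes small noise; closing fills gaps inside vehicles.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, KERNEL)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, KERNEL)

    status = []
    for (x, y, w, h) in SPACES:
        roi = mask[y:y + h, x:x + w]
        # A space counts as occupied if enough of its area differs from the reference.
        status.append(cv2.countNonZero(roi) > fill_ratio * w * h)
    return status
```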


2021 ◽  
Vol 2025 (1) ◽  
pp. 012007
Author(s):  
Guobin Xu ◽  
Dongdong Li ◽  
Xiangyang Chen ◽  
Qiankun Li ◽  
Jun Yin
