camera position
Recently Published Documents

TOTAL DOCUMENTS: 159 (51 in the last five years)
H-INDEX: 8 (3 in the last five years)

2021 ◽ Vol 17 (6) ◽ pp. 465-471
Author(s): M.B. Gorobeiko ◽ A.V. Dinets ◽ V.H. Hoperia ◽ K.M. Abdalla

Background. Detection of the parathyroid glands by spectroscopy of their autofluorescence in the near-infrared spectrum (NIRAF) is considered a promising intraoperative tool in addition to their visual identification. The study aimed to evaluate the role of NIRAF, using two different imaging systems, in confirming the parathyroid glands during operations for benign and malignant thyroid and parathyroid tumors. Materials and methods. The study included 62 patients in whom NIRAF was assessed during surgery using two different imaging systems equipped with a near-infrared (NIR) camera. The intravenous fluorophore indocyanine green was administered to amplify the NIRAF signal. Results. Normal parathyroid glands were identified and mobilized after visual inspection in 50 patients (80 %), which was subsequently confirmed by NIRAF spectroscopy. NIRAF detection in the parathyroid glands and their differentiation from lymph nodes was achieved in 8 (13 %) patients with papillary thyroid carcinoma. In 3 (5 %) patients, the parathyroid gland was identified in the area of the postoperative scar using NIRAF but not by visual identification. In 2 (3 %) cases, a NIRAF signal from the parathyroid glands was detected, but with decreased intensity, during their unintentional removal. A strong NIRAF signal from the parathyroid gland was observed with the Fluobeam 800 device after changing the position of the NIR camera to an angle of approximately 45–65° to the area of the parathyroid gland. The Fluobeam LX produced a satisfactory NIRAF signal without any specific changes in camera position. A NIRAF signal was also detected in the tissue of toxic thyroid adenomas. A low-intensity NIRAF signal was detected at sites of thyroid carcinoma invasion into the thyroid capsule. No NIRAF signal was observed from metastatic or normal lymph nodes. Conclusions. NIRAF spectroscopy of the parathyroid glands improves their imaging and verification as an additional method during neck surgery. Its practical value increases in reoperations, owing to the risk of accidental removal of the parathyroid gland, and in the differential diagnosis between the parathyroid gland and metastatic lymph nodes.


Sensors ◽ 2021 ◽ Vol 21 (19) ◽ pp. 6480
Author(s): Kai Guo ◽ Hu Ye ◽ Zinian Zhao ◽ Junhao Gu

In this paper, we propose an efficient closed-form solution to the absolute orientation problem for a camera with unknown focal length, given two 2D–3D point correspondences and the camera position. The problem can be decomposed into two simple sub-problems and solved with angle constraints. A polynomial equation in one variable is solved to determine the focal length, and a geometric approach is then used to determine the absolute orientation. The geometric derivations are easy to understand and significantly improve performance. Rewriting the camera model with the known camera position leads to a simpler and more efficient closed-form solution, and it yields a single solution, without the multi-solution phenomenon of perspective-three-point (P3P) solvers. Experimental results on synthetic data and real images demonstrate that the proposed method performs better in terms of numerical stability, noise sensitivity, and computational speed.
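The angle-constraint idea can be sketched as follows: with the camera centre known, the angle between the two 3D viewing rays is fixed by geometry, so the focal length is the only unknown in the corresponding image-ray angle; once the focal length is found, the rotation follows from aligning the two ray pairs. The sketch below is a minimal illustration under assumed pixel coordinates relative to the principal point; it is not the authors' closed-form solver.

```python
import numpy as np

def pose_from_two_points(C, P1, P2, uv1, uv2):
    """Sketch: recover focal length f and camera-to-world rotation R from two
    2D-3D correspondences and a known camera centre C. uv1, uv2 are pixel
    coordinates relative to the principal point. Illustrative only."""
    uv1, uv2 = np.asarray(uv1, float), np.asarray(uv2, float)
    w1 = (P1 - C) / np.linalg.norm(P1 - C)          # world viewing rays
    w2 = (P2 - C) / np.linalg.norm(P2 - C)
    cos_t = float(w1 @ w2)                          # angle fixed by the 3D geometry

    # Image-side angle: cos_t = (uv1.uv2 + f^2) / (|d1||d2|), d_i = (u_i, v_i, f).
    # Squaring gives a quadratic in x = f^2.
    a1, a2, b = uv1 @ uv1, uv2 @ uv2, uv1 @ uv2
    cos2 = cos_t ** 2
    roots = np.roots([cos2 - 1.0, cos2 * (a1 + a2) - 2.0 * b, cos2 * a1 * a2 - b * b])

    # Keep the positive real root that best satisfies the (unsquared) constraint.
    f, best_err = None, np.inf
    for x in roots:
        if np.isreal(x) and x.real > 0:
            x = x.real
            err = abs((b + x) / np.sqrt((a1 + x) * (a2 + x)) - cos_t)
            if err < best_err:
                f, best_err = np.sqrt(x), err
    if f is None:
        raise ValueError("no admissible focal length")

    # Camera-frame rays for the recovered focal length.
    c1 = np.array([uv1[0], uv1[1], f]); c1 /= np.linalg.norm(c1)
    c2 = np.array([uv2[0], uv2[1], f]); c2 /= np.linalg.norm(c2)

    # Kabsch alignment of the two ray pairs (plus their cross products)
    # gives R with R @ camera_ray = world_ray (camera-to-world).
    P = np.stack([c1, c2, np.cross(c1, c2)])
    Q = np.stack([w1, w2, np.cross(w1, w2)])
    U, _, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return f, R
```

With exact inputs both ray angles match and the alignment step is exact; with noisy inputs it returns a least-squares rotation.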


Author(s): Fernando F. Doria ◽ Felipe B. C. L. Lima ◽ Lucas Real ◽ Vinicius R. G. Oliveira ◽ Rogerio Y. Takimoto ◽ ...
Keyword(s):

Electronics ◽ 2021 ◽ Vol 10 (14) ◽ pp. 1647
Author(s): Wahyu Rahmaniar ◽ Wen-June Wang ◽ Wahyu Caesarendra ◽ Adam Glowacz ◽ Krzysztof Oprzędkiewicz ◽ ...

Localization of indoor aerial robots remains a challenging issue because global positioning system (GPS) signals often cannot reach the interior of buildings. In previous studies, navigation of mobile robots without GPS required building maps to be registered beforehand. This paper proposes a novel framework for indoor positioning of unmanned aerial vehicles (UAVs) in unknown environments using a camera. First, the UAV attitude is estimated to determine whether the robot is moving forward. Then, the camera position is estimated based on optical flow and a Kalman filter. Semantic segmentation using deep learning is carried out to obtain the position of the wall in front of the robot. The distance to the wall is measured by comparing the image size ratio of corresponding feature points between the current and reference wall images. The UAV is also equipped with ultrasonic sensors to measure its distance from the surrounding walls. The ground station receives information from the UAV and displays the obstacles around the UAV and its current location. The algorithm is verified by capturing images with distance information and comparing them with the current image and UAV position. The experimental results show that the proposed method achieves an accuracy of 91.7% and a processing speed of 8 frames per second (fps).
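The optical-flow-plus-Kalman-filter step described above can be sketched with OpenCV's pyramidal Lucas-Kanade tracker and a constant-velocity Kalman filter. The input file name, the state layout, and the use of raw pixel displacement (rather than metric motion) are assumptions for illustration, not the paper's implementation.

```python
import cv2
import numpy as np

# Constant-velocity Kalman filter over 2D position (x, y, vx, vy);
# the measurement is the integrated mean optical-flow displacement.
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-3
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

cap = cv2.VideoCapture("uav_forward.mp4")          # hypothetical onboard video
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
pos = np.zeros(2, np.float32)                      # integrated camera motion (pixels)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Track Shi-Tomasi corners with pyramidal Lucas-Kanade optical flow.
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                 qualityLevel=0.01, minDistance=7)
    if p0 is not None:
        p1, st, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None)
        good = st.ravel() == 1
        if good.any():
            flow = (p1[good] - p0[good]).reshape(-1, 2).mean(axis=0)
            pos += flow
            kf.predict()
            kf.correct(pos.reshape(2, 1))          # smoothed position in kf.statePost
    prev_gray = gray

print("final smoothed position (px):", kf.statePost[:2].ravel())
```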


Teknik ◽ 2021 ◽ Vol 42 (2) ◽ pp. 169-177
Author(s): Faqih Rofii ◽ Gigih Priyandoko ◽ Muhammad Ifan Fanani ◽ Aji Suraji

Models for vehicle detection, classification, and counting based on computer vision and artificial intelligence are constantly evolving. In this study, we present a YOLOv4-based vehicle detection, classification, and counting model. The number of vehicles is counted by assigning a serial identity number to each detected vehicle. Each object is detected and classified, and marked with a bounding box, class label, and confidence score. The system input is a video dataset that considers the camera position, light intensity, and vehicle traffic density. The method counts four vehicle classes: cars, motorcycles, buses, and trucks. Model performance is evaluated using the accuracy, precision, and recall computed from the confusion matrix. The best accuracy, precision, and recall values of 83%, 93%, and 94% were obtained when the model was tested during the day, with the camera positioned at a height of 6 m and a loss of 500. The lowest accuracy, precision, and recall values of 68%, 77%, and 78% were obtained when the model was tested at night, with the camera positioned at a height of 1.5 m and a loss of 900.
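As a small illustration of the evaluation described above, the sketch below computes accuracy, per-class precision, and per-class recall from a confusion matrix over the four vehicle classes. The matrix values are made-up placeholders, not the paper's results.

```python
import numpy as np

classes = ["car", "motorcycle", "bus", "truck"]
# Rows = ground-truth class, columns = predicted class (placeholder counts).
cm = np.array([[90,  5,  2,  3],
               [ 6, 80,  1,  3],
               [ 2,  1, 40,  7],
               [ 4,  2,  5, 60]], dtype=float)

accuracy = np.trace(cm) / cm.sum()            # correct predictions / all predictions
precision = np.diag(cm) / cm.sum(axis=0)      # per predicted class
recall = np.diag(cm) / cm.sum(axis=1)         # per ground-truth class

print(f"accuracy: {accuracy:.2%}")
for name, p, r in zip(classes, precision, recall):
    print(f"{name:>10}  precision {p:.2%}  recall {r:.2%}")
```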


Author(s): M. Buyukdemircioglu ◽ S. Kocaman

Abstract. Spatiotemporal data visualization plays an important role in simulating changes over time and representing dynamic geospatial phenomena. In aerial photogrammetry, image acquisition is the most important stage for obtaining high-quality products and can be affected by various factors such as weather and illumination conditions, imaging geometry, etc. 3D simulation of the aircraft trajectories at the planning stage helps flight planners make better decisions, especially for unmanned aerial vehicle (UAV) missions in areas with mixed land use and land cover, rugged topography, water bodies, restricted areas, etc., since images with poor texture or large differences in scale may deteriorate the quality of the final products. In this study, a geovisualization approach for photogrammetric flights carried out with UAV or airplane platforms was implemented using the CesiumJS Virtual Globe. The measured flight trajectory parameters, such as the image perspective centre coordinates and camera rotations, the time of acquisition, and the interior orientation parameters (IOPs) of the camera, were used for spatiotemporal visualization. In the developed approach, the exterior orientation parameters (EOPs) and IOPs of the images were used to reconstruct the flight paths, the camera position, the footprints of the acquired images on the ground, and the rotation of the aircraft, and to present them precisely in a 3D web environment. The approach was demonstrated with two case studies, one from a UAV flight mission and the other from an airplane flight carried out with a large-format aerial camera.
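A sketch of the footprint reconstruction mentioned above: each image corner is turned into a viewing ray using the IOPs, rotated into the world frame with the EOPs, and intersected with a flat ground plane. The camera model, axis conventions, and numeric values are assumptions for illustration, not the authors' implementation (which renders the result in CesiumJS).

```python
import numpy as np

def image_footprint(C, R_cw, f_mm, sensor_w_mm, sensor_h_mm, z_ground=0.0):
    """Project the four image corners onto the plane z = z_ground.
    C: camera position (world), R_cw: camera-to-world rotation,
    f_mm / sensor_*_mm: simple IOPs. Returns a 4x3 array of ground points."""
    hw, hh = sensor_w_mm / 2.0, sensor_h_mm / 2.0
    corners_cam = np.array([[-hw, -hh, -f_mm],     # rays through the image corners,
                            [ hw, -hh, -f_mm],     # camera looks along -Z
                            [ hw,  hh, -f_mm],
                            [-hw,  hh, -f_mm]])
    footprint = []
    for d_cam in corners_cam:
        d_world = R_cw @ d_cam                     # rotate the ray into the world frame
        s = (z_ground - C[2]) / d_world[2]         # intersect with the ground plane
        footprint.append(C + s * d_world)
    return np.array(footprint)

# Example with made-up EOPs/IOPs: nadir-looking camera 500 m above the ground.
C = np.array([500000.0, 4500000.0, 500.0])
R_cw = np.eye(3)                                   # no rotation, looking straight down
print(image_footprint(C, R_cw, f_mm=24.0, sensor_w_mm=36.0, sensor_h_mm=24.0))
```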


Author(s): F. Ioli ◽ L. Pinto ◽ F. Ferrario

Abstract. Equipping UAVs with lightweight GNSS receivers to estimate the camera position within a photogrammetric block allows the number of ground control points (GCPs) to be reduced, saving field work time and decreasing operational costs. It also makes it possible to build photogrammetric models in morphologically complex areas or in emergency situations. This work proposes a non-intrusive, low-cost procedure to retrieve the coordinates of the camera projection centre with decimetric accuracy. The method was designed and tested with a DJI Matrice 210 V2 quadcopter equipped with a DJI ZENMUSE X5S camera and an Emlid Reach M, a low-cost, single-frequency (L1) GNSS receiver. GNSS observations are post-processed in PPK mode to obtain the UAV trajectory. Synchronization between the camera and the GNSS receiver is achieved by reading the camera trigger timestamps from the flight telemetry data, without requiring an electronic connection between the camera and the GNSS receiver, which may be troublesome with commercial UAVs. Two surveys were carried out, one to calibrate and one to validate the procedure. The validation test showed that the coordinates of the camera projection centres can be obtained with decimetric accuracy. The projection centres can then be used for GNSS-assisted aerial triangulation as input to the bundle block adjustment. Provided that at least one GCP is used, it is possible to reach centimetric accuracy on the ground.
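The synchronization step described above can be sketched as interpolating the post-processed GNSS trajectory at the camera trigger timestamps read from the flight telemetry. The file names, column layout, and constant clock offset are assumptions, not the paper's actual formats.

```python
import numpy as np

# Hypothetical PPK trajectory: columns = GPS time [s], E, N, h (metres).
traj = np.loadtxt("ppk_trajectory.txt")
t_traj, E, N, h = traj[:, 0], traj[:, 1], traj[:, 2], traj[:, 3]

# Hypothetical camera trigger timestamps extracted from the flight telemetry,
# shifted by an assumed constant offset between the camera clock and GPS time.
t_trigger = np.loadtxt("trigger_times.txt") + 0.05

# Linear interpolation of the trajectory at each exposure time gives the
# camera projection-centre coordinates used in GNSS-assisted triangulation.
E_cam = np.interp(t_trigger, t_traj, E)
N_cam = np.interp(t_trigger, t_traj, N)
h_cam = np.interp(t_trigger, t_traj, h)
centres = np.column_stack([E_cam, N_cam, h_cam])
np.savetxt("camera_centres.txt", centres, fmt="%.3f")
```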


2021 ◽ Vol 11 (13) ◽ pp. 6014
Author(s): Kai Guo ◽ Hu Ye ◽ Junhao Gu ◽ Honglin Chen

The aim of the perspective-three-point (P3P) problem is to estimate the extrinsic parameters of a camera, i.e., its orientation and position, from three 2D–3D point correspondences. All P3P solvers exhibit a multi-solution phenomenon, returning up to four solutions, and require a fully calibrated camera. In contrast, in this paper we propose a novel method for intrinsic and extrinsic parameter estimation based on three 2D–3D point correspondences and a known camera position. Our core contribution is to build a new virtual camera system whose frame and image plane are defined by the original 3D points, to build a new intermediate world frame from the original image plane and the original 2D image points, and thereby to convert our problem into a P3P problem. Intrinsic and extrinsic parameter estimation then reduces to solving a frame transformation and the P3P problem. Lastly, we resolve the multi-solution ambiguity using the image resolution. Experimental results on synthetic data and real images demonstrate the accuracy, numerical stability, and uniqueness of the solution for intrinsic and extrinsic parameter estimation.
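To illustrate the multi-solution phenomenon the abstract refers to, the sketch below runs a standard P3P solver (OpenCV's solveP3P, which assumes a calibrated camera) and keeps the candidate whose implied camera centre is closest to a known position. This is a generic disambiguation heuristic for illustration, not the virtual-camera construction proposed in the paper.

```python
import cv2
import numpy as np

def select_p3p_solution(obj_pts, img_pts, K, C_known):
    """obj_pts: 3x3 world points, img_pts: 3x2 pixel points, K: intrinsic matrix,
    C_known: known camera centre. Returns the (R, t) candidate closest to C_known."""
    n, rvecs, tvecs = cv2.solveP3P(obj_pts.astype(np.float64),
                                   img_pts.astype(np.float64),
                                   K, None, flags=cv2.SOLVEPNP_P3P)
    best, best_err = None, np.inf
    for rvec, tvec in zip(rvecs, tvecs):
        R, _ = cv2.Rodrigues(rvec)
        C = (-R.T @ tvec).ravel()          # camera centre implied by this candidate
        err = np.linalg.norm(C - C_known)
        if err < best_err:
            best, best_err = (R, tvec), err
    return best
```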


2021 ◽ Vol 11 (9) ◽ pp. 4040
Author(s): Lulu Niu ◽ Gang Xiong ◽ Xiuqin Shang ◽ Chao Guo ◽ Xi Chen ◽ ...

Foot measurement is necessary for personalized customization. Nowadays, people usually obtain their foot size with a ruler or a foot scanner. However, both have disadvantages: large measurement error and variance when using rulers, and high price and poor convenience when using a foot scanner. To tackle these problems, we obtain foot parameters by 3D foot reconstruction based on mobile phone photography. Firstly, foot images are taken with a mobile phone. Secondly, the Structure-from-Motion (SfM) algorithm is used to recover the corresponding parameters and compute the camera positions to construct a sparse model. Thirdly, Patch-based Multi-View Stereo (PMVS) is adopted to build a dense model. Finally, MeshLab is used to process and measure the foot model. The results show that the experimental error of the 3D foot reconstruction method is around 1 mm, which is tolerable for applications such as shoe tree customization. The experiment proves that the method can construct the 3D foot model efficiently and easily. This technology has broad application prospects in the fields of shoe size recommendation, high-end customized shoes, and medical correction.
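The SfM stage described above can be sketched for two views: match features, estimate the essential matrix, recover the relative camera pose, and triangulate a sparse point cloud. This is a generic two-view illustration with assumed image file names and an assumed intrinsic matrix, not the full pipeline used in the paper (which runs incremental SfM followed by PMVS and MeshLab).

```python
import cv2
import numpy as np

img1 = cv2.imread("foot_01.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical phone photos
img2 = cv2.imread("foot_02.jpg", cv2.IMREAD_GRAYSCALE)
K = np.array([[3000.0, 0, 2000.0],                        # assumed intrinsics (pixels)
              [0, 3000.0, 1500.0],
              [0, 0, 1.0]])

# Detect and match ORB features between the two images.
orb = cv2.ORB_create(5000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
pts1 = np.float64([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float64([kp2[m.trainIdx].pt for m in matches])

# Essential matrix with RANSAC, then the relative camera pose (R, t).
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# Triangulate the inlier matches into a sparse 3D point cloud (up to scale).
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
inl = mask.ravel() > 0
pts4d = cv2.triangulatePoints(P1, P2, pts1[inl].T, pts2[inl].T)
cloud = (pts4d[:3] / pts4d[3]).T                          # Nx3 sparse points
print(cloud.shape)
```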

