Automated Planning of Rooftop PV Systems with Aerial Image Processing

Author(s):  
Francis Carlos dos Santos ◽  
Jesse Thornburg ◽  
Taha Selim Ustun
2020 ◽  
Vol 1659 ◽  
pp. 012003
Author(s):  
Yaocheng Li ◽  
Weidong Zhang ◽  
Yingming Cai ◽  
Zhe Li ◽  
Xiuchen Jiang

2019 ◽  
Vol 7 (2) ◽  
pp. 279
Author(s):  
I Ketut Satria Rahadi ◽  
I Made Anom Sutrisna Wijaya ◽  
I Wayan Tika

Rat pests can cause crop failure in rice fields. Two methods are used to measure the severity of rat pest attacks: field sampling and aerial photography. However, the correlation between the attack levels produced by these two methods was unknown. This study was therefore conducted to determine the relationship between the attack intensity and the attack area of rat pests in rice plants. The stages of this study were a survey of locations attacked by rat pests, preparation of equipment, aerial photography, sampling for the calculation of attack intensity, image processing, calculation of the attack area, regression analysis, and validation. Attack intensity was calculated using an absolute count, while attack area was calculated using the aerial image processing method developed by Widodo. Regression analysis showed that the relationship between attack intensity and attack area has a coefficient of determination of 0.889, with the regression equation y = 1.138x and an error factor of 8.947%. The intensity of rat pest attacks on rice plants obtained by the sampling method is linearly related to the attack area obtained from the aerial photo analysis developed by Widodo.
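As a rough illustration of the through-origin regression reported above (y = 1.138x, coefficient of determination 0.889), the following Python sketch fits the same kind of model to placeholder data; the arrays, variable names, and values are illustrative assumptions, not the study's data or code.

    # Illustrative sketch only: through-origin regression y = b*x between
    # sampled attack intensity (x) and image-derived attack area (y).
    # The data arrays below are placeholders, not the study's measurements.
    import numpy as np

    intensity = np.array([5.0, 12.0, 20.0, 33.0, 41.0])   # hypothetical sampled intensities (%)
    area      = np.array([6.1, 13.5, 22.8, 37.0, 47.5])   # hypothetical image-derived areas (%)

    # Least-squares slope for a line forced through the origin: b = sum(x*y) / sum(x*x)
    b = np.sum(intensity * area) / np.sum(intensity ** 2)
    pred = b * intensity

    # Coefficient of determination of the fit
    ss_res = np.sum((area - pred) ** 2)
    ss_tot = np.sum((area - area.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    print(f"y = {b:.3f}x, R^2 = {r2:.3f}")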


2001 ◽  
Author(s):  
Haiju Lei ◽  
Dehua Li ◽  
Hanping Hu ◽  
Zhaonan Guo

2020 ◽  
Author(s):  
Michael Gomez Selvaraj ◽  
Manuel Valderrama ◽  
Diego Guzman ◽  
Milton Valencia ◽  
Henry Ruiz ◽  
...  

Background: Rapid, non-destructive measurements that predict cassava root yield over the full growing season, across large numbers of germplasm and multiple environments, are a major challenge for cassava breeding programs. Rather than waiting until the harvest season, multispectral imagery from unmanned aerial vehicles (UAVs) can capture canopy metrics and vegetation index (VI) traits at different time points of the growth cycle. Such time-series aerial image processing, combined with an appropriate analytical framework, is essential for automatically extracting phenotypic features from the image data. Many studies have demonstrated the usefulness of advanced remote sensing technologies coupled with machine learning (ML) approaches for accurate prediction of valuable crop traits. Until now, however, cassava has received little to no attention in aerial image-based phenotyping and ML model testing.

Results: To accelerate image processing, an automated image-analysis framework called CIAT Pheno-i was developed to extract plot-level vegetation indices and canopy metrics. Multiple linear regression models were constructed at key growth stages of cassava using ground-truth data and vegetation indices obtained from a multispectral sensor. The spectral indices and features were then combined to develop models and predict cassava root yield using different machine learning techniques. Our results showed that (1) the CIAT Pheno-i image-analysis framework was easier and more rapid than manual methods. (2) Correlation analysis of four phenological stages of cassava revealed that elongation (EL) and late bulking (LBK) were the most useful stages for estimating above-ground biomass (AGB), below-ground biomass (BGB), and canopy height (CH). (3) Multi-temporal analysis revealed that cumulative image-feature information from the EL + early bulking (EBK) stages showed a higher significant correlation (r = 0.77) between the Green Normalized Difference Vegetation Index (GNDVI) and BGB than individual time points. Canopy height measured on the ground correlated well with UAV-based measurements (CHuav, r = 0.92) at the late bulking (LBK) stage. Among the image features, the normalized difference red edge index (NDRE) was consistently highly correlated (r = 0.65 to 0.84) with AGB at the LBK stage. (4) Among the four ML algorithms used in this study, k-Nearest Neighbours (kNN), Random Forest (RF), and Support Vector Machine (SVM) performed best for root yield prediction, with the highest accuracies of R2 = 0.67, 0.66, and 0.64, respectively.

Conclusion: The UAV platforms, time-series image acquisition, automated image-analysis framework (CIAT Pheno-i), and key vegetation indices (VIs) described in this work for estimating phenotypic traits and root yield have great potential as a selection tool in modern cassava breeding programs around the world, accelerating germplasm and varietal selection. The image-analysis software (CIAT Pheno-i) developed in this study can also be applied to other crops to extract phenotypic information rapidly.
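As a hedged sketch of the kind of pipeline the abstract describes (and not the CIAT Pheno-i software itself), the Python fragment below computes plot-level GNDVI and NDRE from band reflectances and cross-validates the three regressors named in the results (kNN, RF, SVM) on synthetic data; all arrays, parameter choices, and the synthetic yield relationship are assumptions for illustration only.

    # Illustrative sketch only (not CIAT Pheno-i): plot-level vegetation indices
    # plus the ML regressors named in the abstract, applied to synthetic data.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsRegressor
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    n_plots = 120
    # Hypothetical mean band reflectances per plot (green, red edge, near infrared)
    green, red_edge, nir = rng.uniform(0.05, 0.6, (3, n_plots))

    # Standard index definitions
    gndvi = (nir - green) / (nir + green)          # Green NDVI
    ndre  = (nir - red_edge) / (nir + red_edge)    # Normalized Difference Red Edge

    X = np.column_stack([gndvi, ndre])
    y = 10 * gndvi + 5 * ndre + rng.normal(0, 0.5, n_plots)  # synthetic "root yield"

    models = {
        "kNN": KNeighborsRegressor(n_neighbors=5),
        "RF":  RandomForestRegressor(n_estimators=200, random_state=0),
        "SVM": SVR(kernel="rbf", C=10.0),
    }
    for name, model in models.items():
        score = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
        print(f"{name}: mean cross-validated R^2 = {score:.2f}")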


2005 ◽  
Vol 21 (1-2) ◽  
pp. 118-123
Author(s):  
Mohammed Sadgal ◽  
Aziz El Fazziki ◽  
Abdellah Ait Ouahman

2021 ◽  
Vol 16 (5) ◽  
pp. 827-839
Author(s):  
Hidehiko Shishido ◽  
Koyo Kobayashi ◽  
Yoshinari Kameda ◽  
Itaru Kitahara

Building damage maps, which show the damage status of buildings, are an essential information source for disaster countermeasures such as evacuation, rescue, and reconstruction. They must therefore be generated as quickly as possible. However, generating a building damage map requires collecting disaster information and estimating the damage situation over a wide area, which is time consuming. (In this paper, disaster information collection refers to capturing aerial images.) In recent years, crowdsourcing has been widely used to understand damage situations. Crowdsourcing accomplishes large-scale work by dividing it into microtasks that anyone can solve and distributing those microtasks among an unspecified number of workers. We believe crowdsourcing is well suited to gathering information and assessing damage situations, as it can scale the type and number of workers and allocate resources according to the size of the disaster, and it has accordingly been used for these purposes in disaster management. Usually, however, the two types of crowdsourcing tasks (i.e., gathering information and assessing the damage) are performed independently; consequently, the collected information is often not used effectively, and linking the two tasks promises more efficient work. This paper proposes a framework for efficiently generating a building damage map by combining the two tasks, collecting information on disaster areas and assessing disaster situations, using aerial image processing. Results from an experiment with a prototype of the proposed framework clarify the applicable range of the collection and assessment crowdsourcing tasks and indicate the feasibility of understanding disaster situations with our method. In addition, artificial intelligence workers can be deployed to support human workers in estimating the damage situation more quickly.
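The following Python sketch illustrates one plausible shape for the two linked crowdsourcing steps described above, splitting an aerial image into tile-sized microtasks and aggregating per-tile damage labels by majority vote; the tile size, label set, and worker responses are assumptions, not the authors' framework.

    # Minimal sketch (not the proposed framework): tiling an aerial image into
    # microtasks and aggregating per-tile damage labels from several workers.
    import numpy as np
    from collections import Counter

    def make_microtasks(image, tile=256):
        """Yield (row, col, patch) microtasks covering the aerial image."""
        h, w = image.shape[:2]
        for r in range(0, h, tile):
            for c in range(0, w, tile):
                yield r, c, image[r:r + tile, c:c + tile]

    def aggregate_labels(worker_labels):
        """Majority vote over labels such as 'intact', 'damaged', 'destroyed'."""
        return Counter(worker_labels).most_common(1)[0][0]

    # Toy usage with a blank "aerial image" and simulated worker answers
    aerial = np.zeros((1024, 1024, 3), dtype=np.uint8)
    damage_map = {}
    for r, c, patch in make_microtasks(aerial):
        answers = ["intact", "damaged", "intact"]   # placeholder worker responses
        damage_map[(r, c)] = aggregate_labels(answers)
    print(len(damage_map), "tiles assessed")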

