Integration of APSIM and PROSAIL models to develop more precise radiometric estimation of crop traits using deep learning

2021 ◽  
Author(s):  
Qiaomin Chen ◽  
Bangyou Zheng ◽  
Tong Chen ◽  
Scott Chapman

Abstract A major challenge for the estimation of crop traits (biophysical variables) from canopy reflectance is the creation of a high-quality training dataset. This can be addressed by using radiative transfer models (RTMs) to generate training datasets representing 'real-world' data in situations with varying crop types and growth status, as well as various observation configurations. However, this approach can lead to "ill-posed" problems related to assumptions in the sampling strategy and to uncertainty in the model, resulting in unsatisfactory inversion results for the retrieval of target variables. To address this problem, this research investigates a practical way to generate higher-quality 'synthetic' training data by integrating a crop growth model (CGM, in this case APSIM) with an RTM (in this case PROSAIL). This allows the uncertainties of the RTM to be controlled by imposing biological constraints on the distribution and co-distribution of related variables. The method was then theoretically validated on two types of synthetic dataset, generated by PROSAIL alone or by the coupling of APSIM and PROSAIL, by comparing estimation precision for leaf area index (LAI), leaf chlorophyll content (Cab), leaf dry matter (Cm) and leaf water content (Cw). Additionally, the capabilities of current deep learning techniques using high-spectral-resolution hyperspectral data were investigated.
The main findings include: (1) A feedforward neural network (FFNN) with an appropriate configuration is a promising technique to retrieve crop traits from input features consisting of 1 nm-wide hyperspectral bands across the 400-2500 nm range and the observation configuration (solar and viewing angles), leading to a precise joint estimation of LAI (RMSE=0.061 m2 m-2), Cab (RMSE=1.42 μg cm-2), Cm (RMSE=0.000176 g cm-2) and Cw (RMSE=0.000319 g cm-2); (2) For model simplification, a narrower 400-1100 nm range without the observation configuration in the FFNN input provided a less precise estimation of LAI (RMSE=0.087 m2 m-2), Cab (RMSE=1.92 μg cm-2), Cm (RMSE=0.000299 g cm-2) and Cw (RMSE=0.001271 g cm-2); (3) The introduction of biological constraints in the training datasets improved FFNN model performance in both average precision and stability, resulting in a much more accurate estimation of LAI (RMSE=0.006 m2 m-2), Cab (RMSE=0.45 μg cm-2), Cm (RMSE=0.000039 g cm-2) and Cw (RMSE=0.000072 g cm-2), and this improvement could be further increased by enriching sample diversity in the training dataset.
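As a minimal sketch of the retrieval setup described above (layer sizes, random weights and the three-angle observation configuration are illustrative assumptions, not the paper's actual architecture), a feedforward network mapping 2101 hyperspectral bands plus viewing geometry to the four traits could look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Input: 2101 hyperspectral bands (400-2500 nm at 1 nm) plus 3 observation
# angles (solar zenith, view zenith, relative azimuth); output: 4 traits
# (LAI, Cab, Cm, Cw). Sizes of the hidden layer are invented for this sketch.
n_bands, n_angles, n_traits, n_hidden = 2101, 3, 4, 64

W1 = rng.normal(0.0, 0.01, (n_bands + n_angles, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.01, (n_hidden, n_traits))
b2 = np.zeros(n_traits)

def ffnn(x):
    """One hidden tanh layer, linear output: traits = tanh(x W1 + b1) W2 + b2."""
    return np.tanh(x @ W1 + b1) @ W2 + b2

# A batch of 5 simulated spectra plus angles (random stand-ins here).
batch = rng.uniform(0.0, 1.0, (5, n_bands + n_angles))
traits = ffnn(batch)
print(traits.shape)  # (5, 4): one (LAI, Cab, Cm, Cw) vector per sample
```

In practice such a network would be trained on the APSIM-PROSAIL synthetic dataset; the forward pass above only shows the input/output structure.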

Plant Methods ◽  
2020 ◽  
Vol 16 (1) ◽  
Author(s):  
Shanjun Luo ◽  
Yingbin He ◽  
Qian Li ◽  
Weihua Jiao ◽  
Yaqiu Zhu ◽  
...  

Abstract Background The accurate estimation of potato yield at regional scales is crucial for food security, precision agriculture, and sustainable agricultural development. Methods In this study, we developed a new method using multi-period relative vegetation indices (rVIs) and relative leaf area index (rLAI) data to improve the accuracy of potato yield estimation based on the weighted growth stage. Two experiments, in the field and in a greenhouse (water and nitrogen fertilizer experiments), were performed in 2018 to obtain spectra and LAI data over the whole potato growth period. The weighted growth stage was then determined by three weighting methods (improved analytic hierarchy process method, IAHP; entropy weight method, EW; and optimal combination weighting method, OCW) and the Slogistic model. A comparison of the estimation performance of rVI-based and rLAI-based models with a single and a weighted stage was completed. Results The results showed that among the six tested rVIs, the relative red edge chlorophyll index (rCIred edge) was the optimal index for the single-stage estimation models in terms of its correlation with potato yield. The most suitable single stage for potato yield estimation was the tuber expansion stage. For the weighted growth stage models, the OCW-LAI model was determined to be the best one to accurately predict potato yield, with an adjusted R2 value of 0.8333 and an estimation error of about 8%. Conclusion This study emphasizes that multi-period or different types of data contribute unequally to the results when they are used together, and these weights need to be considered.
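A toy sketch of the weighted-growth-stage idea follows. All stage weights and rLAI values are invented for illustration, and OCW is approximated as a plain average of the IAHP and EW weights, which is only one possible combining rule, not necessarily the paper's:

```python
# Hypothetical per-stage weights from two weighting methods.
stages = ["seedling", "tuber_initiation", "tuber_expansion", "starch_accumulation"]
w_iahp = [0.10, 0.20, 0.45, 0.25]  # invented IAHP weights
w_ew   = [0.12, 0.18, 0.50, 0.20]  # invented entropy weights

# Combine the two weightings (simple 50/50 average as a stand-in for OCW).
w_ocw = [0.5 * a + 0.5 * b for a, b in zip(w_iahp, w_ew)]
assert abs(sum(w_ocw) - 1.0) < 1e-9  # combined weights stay normalized

# Weighted-stage predictor input: collapse per-stage rLAI into one feature.
rlai = {"seedling": 0.8, "tuber_initiation": 1.1,
        "tuber_expansion": 1.4, "starch_accumulation": 1.2}
weighted_rlai = sum(w * rlai[s] for w, s in zip(w_ocw, stages))
print(round(weighted_rlai, 3))  # 1.232
```

Note how the tuber expansion stage, which the study found most informative, dominates the combined feature through its larger weight.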


2021 ◽  
Vol 39 (15_suppl) ◽  
pp. e15012-e15012
Author(s):  
Mayur Sarangdhar ◽  
Venkatesh Kolli ◽  
William Seibel ◽  
John Peter Perentesis

Background: Recent advances in cancer treatment have revolutionized patient outcomes. However, toxicities associated with anti-cancer drugs remain a concern, with many anti-cancer drugs now implicated in cardiotoxicity. The complete spectrum of cardiotoxicity associated with anti-cancer drugs only becomes evident post-approval. Deep Learning methods can identify novel and emerging safety signals in "real-world" clinical settings. Methods: We used AERS Mine, an open-source data mining platform, to identify drug toxicity signatures in the FDA's Adverse Event Reporting System of 16 million patients. We identified 1.3 million patients on traditional and targeted anti-cancer therapy to analyze therapy-specific cardiotoxicity patterns. The cardiotoxicity training dataset contained 1571 molecules characterized by bioassay against the hERG potassium channel and included 350 toxic compounds with an IC50 < 1 μM. We implemented a Deep Belief Network to extract a deep hierarchical representation of the training data, and an Extra Tree Classifier to predict the toxicity of drug candidates. Drugs were encoded using a 1024-bit Morgan fingerprint representation derived from SMILES with a search radius of 7 atoms. Pharmacovigilance metrics (Relative Risks and safety signals) were used to establish statistical correlation. Results: This analysis identified signatures of arrhythmias and conduction abnormalities associated with common anti-cancer drugs (e.g. atrial fibrillation with ibrutinib, alkylating agents, and immunomodulatory drugs; sinus bradycardia with 5FU, paclitaxel, and thalidomide; sinus tachycardia with anthracyclines). Our analysis also identified a myositis/myocarditis association with newer immune checkpoint inhibitors (e.g., atezolizumab, durvalumab, cemiplimab, avelumab), paralleling earlier signals for pembrolizumab, nivolumab, and ipilimumab.
Deep Learning identified signatures of chemical moieties linked to cardiotoxicity, including common motifs in drugs associated with arrhythmias and conduction abnormalities with an accuracy of 89%. Conclusions: Deep Learning provides a comprehensive insight into emerging cardiotoxicity patterns of approved and investigational drugs, allows detection of ‘rogue’ chemical moieties, and shows promise for novel drug discovery and development.
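As an illustration of the pharmacovigilance metric mentioned above, the Relative Risk compares adverse-event incidence between an exposed cohort and a control cohort. The counts below are invented for the example, not AERS figures:

```python
def relative_risk(exposed_events, exposed_total, control_events, control_total):
    """RR = incidence in the exposed cohort / incidence in the control cohort."""
    return (exposed_events / exposed_total) / (control_events / control_total)

# Hypothetical counts: 120 arrhythmia reports among 10,000 patients on a drug
# versus 30 reports among 10,000 control patients.
rr = relative_risk(120, 10_000, 30, 10_000)
print(rr)  # ~4.0: the event is reported ~4x more often in the exposed cohort
```

An RR well above 1 (with adequate report counts) is the kind of disproportionality signal such platforms flag for review.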


Author(s):  
M. Buyukdemircioglu ◽  
R. Can ◽  
S. Kocaman

Abstract. Automatic detection, segmentation and reconstruction of buildings in urban areas from Earth Observation (EO) data are still challenging for many researchers. The roof is one of the most important elements in a building model. Three-dimensional geographical information system (3D GIS) applications generally require the roof type and roof geometry for performing various analyses on the models, such as energy efficiency. Conventional segmentation and classification methods are often based on features like corners, edges and line segments. In parallel with developments in computer hardware and artificial intelligence (AI) methods, including deep learning (DL), image features can be extracted automatically. As a DL technique, convolutional neural networks (CNNs) can also be used for image classification tasks, but they require a large amount of high-quality training data to obtain accurate results. The main aim of this study was to generate a roof type dataset from very high-resolution (10 cm) orthophotos of Cesme, Turkey, and to classify the roof types using a shallow CNN architecture. The training dataset consists of 10,000 roof images and their labels. Six roof type classes (flat, hip, half-hip, gable, pyramid and complex roofs) were used for the classification in the study area. The prediction performance of the shallow CNN model used here was compared with the results obtained from fine-tuning three well-known pre-trained networks, i.e. VGG-16, EfficientNetB4 and ResNet-50. The results show that although our CNN has slightly lower performance in terms of overall accuracy, it is still acceptable for many applications using sparse data.
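The feature extraction a CNN performs rests on the convolution operation. A minimal pure-Python sketch of one convolutional step (toy image and kernel, unrelated to the actual roof dataset) is:

```python
def conv2d(image, kernel):
    """Valid 2D cross-correlation, the core operation of a CNN layer."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [
        [
            sum(image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw))
            for j in range(out_w)
        ]
        for i in range(out_h)
    ]

# A vertical-edge kernel responds strongly wherever intensity jumps from the
# left half to the right half of this toy 4x4 "image".
img = [[0, 0, 9, 9]] * 4
k = [[-1, 0, 1],
     [-1, 0, 1],
     [-1, 0, 1]]
print(conv2d(img, k))  # [[27, 27], [27, 27]]
```

A trained CNN stacks many such learned kernels, so edge- and corner-like features no longer need to be hand-crafted.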


2020 ◽  
Author(s):  
Haiming Tang ◽  
Nanfei Sun ◽  
Steven Shen

Artificial intelligence (AI) has made notable progress in diagnostic pathology, and a large number of studies applying deep learning models to histopathological images have been published in recent years. While many studies claim high accuracies, they may fall into the pitfalls of overfitting and lack of generalization due to the high variability of histopathological images. We use the example of osteosarcoma to illustrate these pitfalls and how adding variability to the model input can help improve model performance. We use the publicly available osteosarcoma dataset to retrain a previously published classification model for osteosarcoma. We partition the same set of images into training and testing datasets differently than the original study: the test dataset consists of images from one patient, while the training dataset consists of images from all other patients. The performance of the model on the test set under the new partition schema declines dramatically, indicating a lack of model generalization and overfitting. We also show the influence of training data variability on model performance by collecting a minimal dataset of 10 osteosarcoma subtypes as well as benign tissues and benign bone tumors of varying differentiation. We show that adding more and more subtypes into the training data, step by step under the same model schema, yields a series of coherent models with increasing performance. In conclusion, we put forward data preprocessing and collection tactics for histopathological images of high variability to avoid the pitfalls of overfitting and to build deep learning models with higher generalization ability.
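The patient-level partition schema described above can be sketched as follows (image and patient identifiers are invented for illustration):

```python
# Hold out every image from one patient as the test set; train on the rest.
# This prevents near-duplicate tiles from the same patient leaking across
# the train/test boundary, which is what inflates randomly split accuracies.
images = [
    ("img_01", "patient_A"), ("img_02", "patient_A"),
    ("img_03", "patient_B"), ("img_04", "patient_C"),
    ("img_05", "patient_C"), ("img_06", "patient_C"),
]

def leave_one_patient_out(images, test_patient):
    train = [img for img, p in images if p != test_patient]
    test = [img for img, p in images if p == test_patient]
    return train, test

train, test = leave_one_patient_out(images, "patient_C")
print(train, test)  # no patient contributes images to both splits
```

Evaluating under this split measures generalization to an unseen patient rather than memorization of per-patient staining and texture quirks.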


Author(s):  
A. Wichmann ◽  
A. Agoub ◽  
M. Kada

Machine learning methods have gained in importance through the latest developments in artificial intelligence and computer hardware. In particular, approaches based on deep learning have shown that they are able to provide state-of-the-art results for various tasks. However, the direct application of deep learning methods to improve the results of 3D building reconstruction is often not possible due, for example, to the lack of suitable training data. To address this issue, we present RoofN3D, which provides a new 3D point cloud training dataset that can be used to train machine learning models for different tasks in the context of 3D building reconstruction. It can be used, among others, to train semantic segmentation networks or to learn the structure of buildings and the geometric model construction. Further details about RoofN3D and the developed data preparation framework, which enables the automatic derivation of training data, are described in this paper. Furthermore, we provide an overview of other available 3D point cloud training data and of approaches from the current literature that present solutions for applying deep learning to unstructured, non-gridded 3D point cloud data.
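A minimal sketch of what one point-cloud training sample for roof semantic segmentation might look like (the field names, roof type and labels below are illustrative assumptions, not the actual RoofN3D schema):

```python
from collections import Counter

# One hypothetical training sample: unordered 3D points with per-point labels,
# the input/target pairing a semantic segmentation network consumes.
sample = {
    "roof_type": "saddleback",
    "points": [(0.0, 0.0, 5.0), (1.0, 0.0, 5.5), (2.0, 0.0, 5.0)],
    "point_labels": ["face_west", "ridge", "face_east"],
}

# A segmentation target must supply exactly one label per point.
assert len(sample["points"]) == len(sample["point_labels"])

# Label frequencies, e.g. for weighting a class-imbalanced loss.
label_counts = Counter(sample["point_labels"])
print(label_counts)
```

Because such data is unstructured and non-gridded, networks consuming it must be permutation-invariant over the point list, which is what distinguishes point-cloud architectures from image CNNs.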


2020 ◽  
Vol 36 (12) ◽  
pp. 3863-3870
Author(s):  
Mischa Schwendy ◽  
Ronald E Unger ◽  
Sapun H Parekh

Abstract Motivation The use of deep learning for quantitative image analysis is increasing rapidly. However, training accurate, widely deployable deep learning algorithms requires a plethora of annotated (ground truth) data. Image collections must not only contain thousands of images to provide sufficient example objects (i.e. cells), but must also contain an adequate degree of image heterogeneity. Results We present a new dataset, EVICAN (Expert visual cell annotation), comprising partially annotated grayscale images of 30 different cell lines from multiple microscopes, contrast mechanisms and magnifications that is readily usable as training data for computer vision applications. With 4600 images and ∼26,000 segmented cells, our collection offers an unparalleled heterogeneous training dataset for cell biology deep learning application development. Availability and implementation The dataset is freely available (https://edmond.mpdl.mpg.de/imeji/collection/l45s16atmi6Aa4sI?q=). Using a Mask R-CNN implementation, we demonstrate automated segmentation of cells and nuclei from brightfield images with a mean average precision of 61.6% at a Jaccard index above 0.5.
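The Jaccard index used as the matching threshold above is the intersection-over-union of a predicted and a ground-truth mask. A minimal sketch with toy pixel sets:

```python
def jaccard(mask_a, mask_b):
    """Jaccard index (IoU) of two binary masks given as sets of pixel coords."""
    inter = len(mask_a & mask_b)
    union = len(mask_a | mask_b)
    return inter / union if union else 0.0

# Toy ground-truth vs predicted cell mask: a detection typically counts as
# correct when the Jaccard index exceeds 0.5.
truth = {(0, 0), (0, 1), (1, 0), (1, 1)}
pred  = {(0, 1), (1, 0), (1, 1), (2, 1)}
print(jaccard(truth, pred))  # 3 shared pixels / 5 total = 0.6
```

Mean average precision is then computed over detections that clear this overlap threshold.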


2020 ◽  
Vol 12 (24) ◽  
pp. 4193
Author(s):  
Sofia Tilon ◽  
Francesco Nex ◽  
Norman Kerle ◽  
George Vosselman

We present an unsupervised deep learning approach for post-disaster building damage detection that can transfer to different typologies of damage or geographical locations. Previous advances in this direction were limited by insufficient high-quality training data. We propose to use a state-of-the-art Anomaly Detecting Generative Adversarial Network (ADGAN) because it only requires pre-event imagery of buildings in their undamaged state. This approach aids the post-disaster response phase because the model can be developed in the pre-event phase and rapidly deployed in the post-event phase. We used the xBD dataset, containing pre- and post-event satellite imagery of several disaster types, and a custom-made Unmanned Aerial Vehicle (UAV) dataset containing post-earthquake imagery. Results showed that models trained on UAV imagery were capable of detecting earthquake-induced damage. The best-performing model for European locations obtained a recall, precision and F1-score of 0.59, 0.97 and 0.74, respectively. Models trained on satellite imagery were capable of detecting damage on the condition that the training dataset was free of vegetation and shadows. Under this condition, the best-performing model for (wild)fire events yielded a recall, precision and F1-score of 0.78, 0.99 and 0.87, respectively. Compared to other supervised and/or multi-epoch approaches, our results are encouraging. Moreover, in addition to image classifications, we show how contextual information can be used to create detailed damage maps without the need for a dedicated multi-task deep learning framework. Finally, we formulate practical guidelines for applying this single-epoch, unsupervised method to real-world applications.
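The reported F1-scores follow from precision and recall as their harmonic mean; recomputing them from the rounded values above (the small gap to the reported 0.74 comes from the authors presumably using unrounded precision and recall):

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# European earthquake model: precision 0.97, recall 0.59.
print(round(f1(0.97, 0.59), 2))  # 0.73 from the rounded inputs (reported: 0.74)

# (Wild)fire model: precision 0.99, recall 0.78.
print(round(f1(0.99, 0.78), 2))  # 0.87, matching the reported value
```

The large precision/recall gap for the earthquake model shows the detector is conservative: most flagged buildings are truly damaged, but a substantial share of damage goes undetected.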


Electronics ◽  
2021 ◽  
Vol 10 (8) ◽  
pp. 932
Author(s):  
Yueh-Peng Chen ◽  
Tzuo-Yau Fan ◽  
Her-Chang Chao

Traditional watermarking techniques extract the watermark from a suspected image, allowing the copyright information regarding the image owner to be identified by the naked eye or by similarity estimation methods such as bit error rate and normalized correlation. However, this process is subjective and should be made more objective. In this paper, we implement a deep-learning-based model, WMNet, that can accurately identify the watermark copyright. In the past, establishing deep learning models required collecting a large amount of training data. While constructing WMNet, we implemented a simulated process to generate a large number of distorted watermarks, and then collected them to form a training dataset. However, not all watermarks in the training dataset could properly provide copyright information. Therefore, according to the set restrictions, we divided the watermarks in the training dataset into two categories; consequently, WMNet could learn and identify the copyright information that the watermarks contained, so as to assist in the copyright verification process. Even if the retrieved watermark information was incomplete, the copyright information it contained could still be interpreted objectively and accurately. The results show that the method proposed by this study is relatively effective.
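One of the similarity measures mentioned above, the bit error rate, is simply the fraction of mismatched bits between the extracted and the original watermark. A minimal sketch with an invented 8-bit watermark:

```python
def bit_error_rate(original, extracted):
    """Fraction of mismatched bits between two equal-length watermarks."""
    assert len(original) == len(extracted)
    return sum(a != b for a, b in zip(original, extracted)) / len(original)

w_orig = [1, 0, 1, 1, 0, 0, 1, 0]
w_extr = [1, 0, 0, 1, 0, 0, 1, 1]  # 2 bits flipped by image distortion
print(bit_error_rate(w_orig, w_extr))  # 2 mismatches / 8 bits = 0.25
```

Deciding how large a BER still "proves" ownership is exactly the threshold-setting step the paper argues is subjective, which motivates learning the decision instead.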


2020 ◽  
Vol 11 (3) ◽  
pp. 66-79 ◽  
Author(s):  
Miaomiao Ji ◽  
Keke Zhang ◽  
Qiufeng Wu

Soil temperature, one of the critical meteorological parameters, plays a key role in physical, chemical and biological processes in terrestrial ecosystems. Accurate estimation of dynamic soil temperature is crucial for underground soil ecological research. In this work, a hybrid model, SAE-BP, is proposed by combining stacked auto-encoders (SAE) with the back propagation (BP) algorithm to estimate soil temperature from hyperspectral remote sensing data. Experimental results show that the proposed SAE-BP model achieves a more stable and effective performance than existing logistic regression (LR), support vector regression (SVR) and BP neural network models, with an average mean square error (MSE) of 1.926, mean absolute error (MAE) of 0.962 and coefficient of determination (R2) of 0.910. In addition, the effect of hidden structures and labeled training data ratios in SAE-BP is further explored. The SAE-BP model demonstrates its potential on high-dimensional, small hyperspectral datasets, representing a significant contribution to soil remote sensing.
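The three reported evaluation metrics can be computed as follows (the toy soil temperatures below are invented for the example, not the study's data):

```python
def mse(y, yhat):
    """Mean squared error."""
    return sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y)

def mae(y, yhat):
    """Mean absolute error."""
    return sum(abs(a - b) for a, b in zip(y, yhat)) / len(y)

def r2(y, yhat):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_y = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, yhat))
    ss_tot = sum((a - mean_y) ** 2 for a in y)
    return 1 - ss_res / ss_tot

# Toy soil temperatures (degC) vs model predictions, each off by 0.5 degC.
y    = [10.0, 12.0, 14.0, 16.0]
yhat = [10.5, 11.5, 14.5, 15.5]
print(mse(y, yhat), mae(y, yhat), r2(y, yhat))  # 0.25, 0.5, 0.95
```

Reporting all three together, as the paper does, separates average error magnitude (MAE), sensitivity to large errors (MSE) and explained variance (R2).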


Electronics ◽  
2019 ◽  
Vol 8 (3) ◽  
pp. 329 ◽  
Author(s):  
Yong Li ◽  
Guofeng Tong ◽  
Huashuai Gao ◽  
Yuebin Wang ◽  
Liqiang Zhang ◽  
...  

Panoramic images have a wide range of applications in many fields owing to their ability to capture all-round information. Object detection based on panoramic images has certain advantages for environment perception due to the characteristics of panoramic images, e.g., a larger field of view. In recent years, deep learning methods have achieved remarkable results in image classification and object detection, but their performance depends on large amounts of training data; a good training dataset is therefore a prerequisite for these methods to achieve good recognition results. We therefore construct a benchmark named Pano-RSOD for panoramic road scene object detection. Pano-RSOD contains vehicles, pedestrians, traffic signs and guiding arrows, labelled by bounding boxes in the images. Different from traditional object detection datasets, Pano-RSOD contains more objects per panoramic image, and its high-resolution images offer 360-degree environmental perception, more annotations, more small objects and diverse road scenes. State-of-the-art deep learning algorithms were trained on Pano-RSOD for object detection, demonstrating that Pano-RSOD is a useful benchmark and that it provides a better panoramic-image training dataset for object detection tasks, especially for small and deformed objects.
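A bounding-box annotation of the kind described above can be sketched as follows (the field names, file name and coordinates are invented for illustration, not Pano-RSOD's actual format):

```python
# One hypothetical annotation: an object class plus a pixel-space box on a
# high-resolution panorama.
annotation = {
    "image": "pano_000123.jpg",
    "class": "traffic_sign",
    "bbox_xyxy": (4100, 880, 4160, 940),  # x_min, y_min, x_max, y_max (pixels)
}

def xyxy_to_xywh(box):
    """Convert corner format to (x_min, y_min, width, height), the other
    common box convention detection frameworks expect."""
    x1, y1, x2, y2 = box
    return (x1, y1, x2 - x1, y2 - y1)

print(xyxy_to_xywh(annotation["bbox_xyxy"]))  # (4100, 880, 60, 60)
```

The 60x60-pixel box on a multi-thousand-pixel panorama illustrates why small objects dominate such datasets and stress detectors.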

