Accurate Wheat Lodging Extraction from Multi-Channel UAV Images Using a Lightweight Network Model

Sensors ◽  
2021 ◽  
Vol 21 (20) ◽  
pp. 6826
Author(s):  
Baohua Yang ◽  
Yue Zhu ◽  
Shuaijun Zhou

The extraction of wheat lodging is of great significance to post-disaster agricultural production management, disaster assessment and insurance subsidies. At present, the recognition of lodging wheat in actual complex field environments still has low accuracy and poor real-time performance. To address this gap, first, four-channel fusion images, combining RGB with DSM (digital surface model) and RGB with ExG (excess green), were constructed based on the RGB images acquired from an unmanned aerial vehicle (UAV). Second, a Mobile U-Net model that combined a lightweight neural network using depthwise separable convolutions with the U-Net model was proposed. Finally, three data sets (RGB, RGB + DSM and RGB + ExG) were used to train, verify, test and evaluate the proposed model. The results of the experiment showed that the overall accuracy of lodging recognition based on RGB + DSM reached 88.99%, which is 11.8% higher than that of the original RGB and 6.2% higher than that of RGB + ExG. In addition, our proposed model was superior to typical deep learning frameworks in terms of model parameters, processing speed and segmentation accuracy. The optimized Mobile U-Net model has 9.49 million parameters and was 27.3% and 33.3% faster than the FCN and U-Net models, respectively. Furthermore, for RGB + DSM wheat lodging extraction, the overall accuracy of Mobile U-Net was improved by 24.3% and 15.3% compared with FCN and U-Net, respectively. Therefore, the Mobile U-Net model using RGB + DSM could extract wheat lodging with higher accuracy, fewer parameters and stronger robustness.
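The parameter savings behind the depthwise separable convolutions used in Mobile U-Net can be illustrated with a quick count. This is a sketch of the general MobileNet-style factorization, not the paper's exact layer configuration; the function names and channel sizes are illustrative:

```python
def conv_params(c_in, c_out, k):
    """Weights in a standard k x k convolution (bias ignored)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k conv (one filter per input channel)
    followed by a 1 x 1 pointwise conv that mixes channels."""
    return c_in * k * k + c_in * c_out

# Example: a 3 x 3 layer mapping 64 -> 128 channels
standard = conv_params(64, 128, 3)                   # 73728 weights
separable = depthwise_separable_params(64, 128, 3)   # 8768 weights
print(f"reduction factor: {standard / separable:.1f}x")
```

For a 3 × 3 kernel the factorization cuts the weight count by roughly an order of magnitude, which is where lightweight models of this kind get their smaller parameter budgets.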

2020 ◽  
Vol 9 (1) ◽  
pp. 61-81
Author(s):  
Lazhar BENKHELIFA

A new lifetime model, with four positive parameters, called the Weibull Birnbaum-Saunders distribution is proposed. The proposed model extends the Birnbaum-Saunders distribution and provides great flexibility in modeling data in practice. Some mathematical properties of the new distribution are obtained including expansions for the cumulative and density functions, moments, generating function, mean deviations, order statistics and reliability. Estimation of the model parameters is carried out by the maximum likelihood estimation method. A simulation study is presented to show the performance of the maximum likelihood estimates of the model parameters. The flexibility of the new model is examined by applying it to two real data sets.
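The construction can be sketched in standard notation. Assuming the common Weibull-G generator applied to the Birnbaum-Saunders baseline (a plausible reading of the abstract; the paper's exact parameterization may differ):

```latex
% Baseline Birnbaum-Saunders CDF (\Phi is the standard normal CDF):
G(t;\alpha,\beta) = \Phi\!\left(\frac{1}{\alpha}\Big[\sqrt{t/\beta} - \sqrt{\beta/t}\,\Big]\right), \quad t > 0.
% The Weibull-G generator applied to G yields a four-parameter family:
F(t) = 1 - \exp\!\left\{-a\,\Big[\frac{G(t;\alpha,\beta)}{1-G(t;\alpha,\beta)}\Big]^{b}\right\}, \quad a, b, \alpha, \beta > 0.
```

Setting the generator parameters to special values recovers the Birnbaum-Saunders baseline, which is how such families extend the original distribution.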


Author(s):  
Yusuke Tanaka ◽  
Tomoharu Iwata ◽  
Toshiyuki Tanaka ◽  
Takeshi Kurashima ◽  
Maya Okawa ◽  
...  

We propose a probabilistic model for refining coarse-grained spatial data by utilizing auxiliary spatial data sets. Existing methods require that the spatial granularities of the auxiliary data sets are the same as the desired granularity of target data. The proposed model can effectively make use of auxiliary data sets with various granularities by hierarchically incorporating Gaussian processes. With the proposed model, a distribution for each auxiliary data set on the continuous space is modeled using a Gaussian process, where the representation of uncertainty considers the levels of granularity. The fine-grained target data are modeled by another Gaussian process that considers both the spatial correlation and the auxiliary data sets with their uncertainty. We integrate the Gaussian process with a spatial aggregation process that transforms the fine-grained target data into the coarse-grained target data, by which we can infer the fine-grained target Gaussian process from the coarse-grained data. Our model is designed such that the inference of model parameters based on the exact marginal likelihood is possible, in which the variables of fine-grained target and auxiliary data are analytically integrated out. Our experiments on real-world spatial data sets demonstrate the effectiveness of the proposed model.
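The core mechanism, inferring fine-grained values from coarse aggregated observations through a GP prior and an averaging aggregation matrix, can be sketched in a few lines of numpy. This is a generic GP-regression identity, not the paper's full hierarchical model; all sizes, kernel settings and data values are illustrative:

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    """Squared-exponential covariance between two sets of locations."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

# Twelve fine-grained 1-D cells; A averages them into 3 coarse regions
# (the role of the spatial aggregation process).
X = np.linspace(0, 1, 12)[:, None]
A = np.kron(np.eye(3), np.full((1, 4), 0.25))

K = rbf_kernel(X, X, lengthscale=0.2)
noise = 1e-4
y = np.array([0.2, 1.0, 0.5])            # coarse observations, y = A f + eps

# Posterior mean of the fine-grained field f given the coarse data:
S = A @ K @ A.T + noise * np.eye(3)
f_mean = K @ A.T @ np.linalg.solve(S, y)
```

Re-aggregating the posterior mean (`A @ f_mean`) approximately reproduces the coarse observations, which is the consistency property that makes this kind of downscaling well posed.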


2017 ◽  
Vol 49 (4) ◽  
pp. 1072-1087 ◽  
Author(s):  
Yeugeniy M. Gusev ◽  
Olga N. Nasonova ◽  
Evgeny E. Kovalev ◽  
Georgii V. Aizel

Abstract In order to study the possibility of reproducing river runoff using the land surface model Soil Water–Atmosphere–Plants (SWAP) and information based on global data sets, 11 river basins, suggested within the framework of the Inter-Sectoral Impact Model Intercomparison Project and located in various regions of the globe under a wide variety of natural conditions, were used. Schematization of each basin as a set of 0.5° × 0.5° computational grid cells connected by a river network was carried out. Input data, including atmospheric forcing data and land surface parameters based, respectively, on the global WATCH and ECOCLIMAP data sets, were prepared for each grid cell. Simulations of river runoff performed by SWAP with a priori input data showed poor agreement with observations; optimization of a number of model parameters substantially improved the results. The obtained results confirm the universal character of SWAP. The natural uncertainty of river runoff caused by weather noise was estimated and analysed; it can be treated as the lower limit of predictability of river runoff. It was shown that differences in runoff uncertainties obtained for different rivers depend greatly on the natural conditions of a river basin, in particular, on the ratio of the deterministic and random components of the river runoff.


2015 ◽  
Vol 8 (2) ◽  
pp. 295-316 ◽  
Author(s):  
D. Slevin ◽  
S. F. B. Tett ◽  
M. Williams

Abstract. This study evaluates the ability of the JULES land surface model (LSM) to simulate photosynthesis using local and global data sets at 12 FLUXNET sites. Model parameters include site-specific (local) values for each flux tower site and the default parameters used in the Hadley Centre Global Environmental Model (HadGEM) climate model. Firstly, gross primary productivity (GPP) estimates from driving JULES with data derived from local site measurements were compared to observations from the FLUXNET network. When using local data, the model is biased, with total annual GPP underestimated by 16% across all sites compared to observations. Secondly, GPP estimates from driving JULES with data derived from global parameter and atmospheric reanalysis data sets (on scales of 100 km or so) were compared to FLUXNET observations. It was found that model performance decreases further, with total annual GPP underestimated by 30% across all sites compared to observations. When JULES was driven using local parameters and global meteorological data, it was shown that global data could be used in place of FLUXNET data with a 7% reduction in total annual simulated GPP. Thirdly, the global meteorological data sets, WFDEI and PRINCETON, were compared to local data; the WFDEI data set was found to match the local meteorological measurements (FLUXNET) more closely. Finally, the JULES phenology model was tested by comparing results from simulations using the default phenology model to those forced with the remote sensing product MODIS leaf area index (LAI). Forcing the model with daily satellite LAI results in only small improvements in predicted GPP at a small number of sites, compared to using the default phenology model.


Author(s):  
Salman Abbas ◽  
Gamze Ozal ◽  
Saman Hanif Shahbaz ◽  
Muhammad Qaiser Shahbaz

In this article, we present a new generalization of the weighted Weibull distribution using the Topp Leone family of distributions. We have studied some statistical properties of the proposed distribution, including the quantile function, moment generating function, probability generating function, raw moments, incomplete moments, probability weighted moments, and the Rényi and q-th entropies. We have obtained numerical values of these measures to see the effect of the model parameters. The distribution of order statistics for the proposed model has also been obtained. The estimation of the model parameters has been done by using the maximum likelihood method. The effectiveness of the proposed model is analyzed by means of real data sets. Finally, some concluding remarks are given.
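The maximum likelihood step can be illustrated on a plain two-parameter Weibull, a simplified stand-in for the proposed model (whose likelihood carries extra shape parameters). For a fixed shape k the scale MLE has a closed form, so a simple profile-likelihood scan suffices; the grid and sample settings below are illustrative:

```python
import numpy as np

def weibull_profile_mle(x, shapes=np.linspace(0.1, 10, 1000)):
    """Profile-likelihood MLE for a two-parameter Weibull.

    For fixed shape k the scale MLE is lam = (mean(x**k))**(1/k);
    we scan k on a grid and keep the pair maximising the log-likelihood.
    """
    x = np.asarray(x, dtype=float)
    best = None
    for k in shapes:
        lam = np.mean(x**k) ** (1.0 / k)
        ll = np.sum(np.log(k) - k * np.log(lam)
                    + (k - 1) * np.log(x) - (x / lam) ** k)
        if best is None or ll > best[0]:
            best = (ll, k, lam)
    return best[1], best[2]

rng = np.random.default_rng(0)
sample = rng.weibull(2.0, size=5000) * 3.0   # true shape 2, scale 3
k_hat, lam_hat = weibull_profile_mle(sample)
```

With a few thousand observations the recovered shape and scale land close to the true values; for richer families the same idea is usually handed to a numerical optimizer rather than a grid.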


2017 ◽  
Vol 46 (1) ◽  
pp. 41-63 ◽  
Author(s):  
M.E. Mead ◽  
Ahmed Z. Afify ◽  
G.G. Hamedani ◽  
Indranil Ghosh

We define and study a new generalization of the Fréchet distribution called the beta exponential Fréchet distribution. The new model includes thirty-two special models. Some of its mathematical properties, including explicit expressions for the ordinary and incomplete moments, quantile and generating functions, mean residual life, mean inactivity time, order statistics and entropies, are derived. The method of maximum likelihood is proposed to estimate the model parameters. A small simulation study is also reported. Two real data sets are applied to illustrate the flexibility of the proposed model compared with some nested and non-nested models.


2021 ◽  
Vol 13 (17) ◽  
pp. 3411
Author(s):  
Lanxue Dang ◽  
Peidong Pang ◽  
Xianyu Zuo ◽  
Yang Liu ◽  
Jay Lee

Convolutional neural networks (CNNs) have shown excellent performance in hyperspectral image (HSI) classification. However, the structure of CNN models is complex, requiring many training parameters and floating-point operations (FLOPs). This is often inefficient and results in longer training and testing times. In addition, labeled samples of hyperspectral data are limited, and a deep network often causes the over-fitting phenomenon. Hence, a dual-path small convolution (DPSC) module is proposed. It is composed of two 1 × 1 small convolutions with a residual path and a density path. It can effectively extract abstract features from HSI. A dual-path small convolution network (DPSCN) is constructed by stacking DPSC modules. Specifically, the proposed model uses a DPSC module to complete the extraction of spectral and spectral–spatial features successively. It then uses a global average pooling layer at the end of the model to replace the conventional fully connected layer to complete the final classification. In the implemented study, all convolutional layers of the proposed network, except the middle layer, use 1 × 1 small convolutions, which effectively reduces the number of model parameters and increases the speed of the feature extraction process. DPSCN was compared with several current state-of-the-art models. The results on three benchmark HSI data sets demonstrated that the proposed model is of lower complexity, has stronger generalization ability, and has higher classification efficiency.
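The two building blocks the abstract leans on, 1 × 1 convolution and global average pooling, are simple to state in numpy. This is a generic illustration of the operations with two parallel paths, not the DPSC module's actual wiring; shapes and weights are toy values:

```python
import numpy as np

def conv1x1(x, w):
    """A 1 x 1 convolution is a per-pixel linear map over channels:
    x (H, W, C_in) @ w (C_in, C_out) -> (H, W, C_out)."""
    return x @ w

def global_average_pool(x):
    """Replaces a fully connected head: one scalar per channel map."""
    return x.mean(axis=(0, 1))

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 8, 16))              # toy feature map
a = conv1x1(x, rng.normal(size=(16, 32)))    # one small-convolution path
b = conv1x1(x, rng.normal(size=(16, 32)))    # a second, parallel path
features = np.concatenate([a, b], axis=-1)   # merge the two paths
logits = global_average_pool(features)       # 64 channel scores
```

Because a 1 × 1 convolution has only `C_in * C_out` weights and global average pooling has none at all, stacking them keeps the parameter count far below that of large-kernel layers and fully connected heads.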


Author(s):  
J.-I. Kim ◽  
H.-C. Kim

Shapes and surface roughness, which are considered key indicators in understanding Arctic sea-ice, can be measured from the digital surface model (DSM) of the target area. An unmanned aerial vehicle (UAV) flying at low altitudes theoretically enables accurate DSM generation. However, the characteristics of sea-ice, with its textureless surface and incessant motion, make image matching difficult for DSM generation. In this paper, we propose a method for effectively detecting incorrect matches before correcting a sea-ice DSM derived from UAV images. The proposed method variably adjusts the size of the search window to analyze the matching results of the generated DSM and distinguishes incorrect matches. Experimental results showed that the sea-ice DSM produced large errors along the textureless surfaces, and that the incorrect matches could be effectively detected by the proposed method.
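The idea of a variably sized window for spotting incorrect matches can be sketched as a local outlier test on the DSM: where the surface is textureless the window is widened until its statistics are meaningful, and a cell is flagged when it deviates strongly from its neighbourhood. The window logic and thresholds below are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def flag_incorrect_matches(dsm, base_win=3, max_win=9, thresh=2.0):
    """Flag DSM cells that deviate strongly from their local neighbourhood.

    The window grows over textureless (near-constant) areas so the local
    spread stays meaningful; a cell is flagged when it departs from the
    window median by more than `thresh` times the local spread.
    """
    h, w = dsm.shape
    flags = np.zeros(dsm.shape, dtype=bool)
    for i in range(h):
        for j in range(w):
            win = base_win
            while win <= max_win:
                r = win // 2
                patch = dsm[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
                spread = patch.std()
                if spread > 1e-6 or win == max_win:   # enough texture?
                    break
                win += 2                              # widen the window
            med = np.median(patch)
            if spread > 0 and abs(dsm[i, j] - med) > thresh * spread:
                flags[i, j] = True
    return flags

dsm = np.zeros((10, 10))
dsm[5, 5] = 10.0                    # a single gross matching error
flags = flag_incorrect_matches(dsm)
```

On this toy surface only the spurious spike is flagged; flat, textureless cells pass because their deviation from the (widened) window median is zero.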


Author(s):  
Farrukh Jamal ◽  
Hesham Mohammed Reyad ◽  
Muhammad Arslan Nasir ◽  
Christophe Chesneau ◽  
Jamal Abdul Nasir ◽  
...  

A new four-parameter lifetime distribution (called the Topp Leone Weibull-Lomax distribution) is proposed in this paper. Different mathematical properties of the proposed distribution were studied, including the quantile function, ordinary and incomplete moments, probability weighted moments, conditional moments, order statistics, stochastic ordering, and the stress-strength reliability parameter. The regression model and the residual analysis for the proposed model were also carried out. The model parameters were estimated by using the maximum likelihood criterion, and the behaviour of these estimates was examined by conducting a simulation study. The importance and flexibility of the proposed distribution have been demonstrated empirically by using four separate data sets.
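The quantile function mentioned above is what makes simulation from such models straightforward via inverse-transform sampling. A sketch for the Lomax baseline alone (the full Topp Leone Weibull-Lomax quantile composes further transforms on top of this and is not reproduced here; parameter values are illustrative):

```python
import numpy as np

def lomax_quantile(u, alpha, beta):
    """Quantile function of the Lomax distribution:
    Q(u) = beta * ((1 - u)**(-1/alpha) - 1)."""
    return beta * ((1.0 - u) ** (-1.0 / alpha) - 1.0)

# Inverse-transform sampling: push uniforms through the quantile function.
rng = np.random.default_rng(42)
u = rng.uniform(size=100_000)
x = lomax_quantile(u, alpha=3.0, beta=2.0)

# The Lomax median has the closed form beta * (2**(1/alpha) - 1),
# so the empirical median of the sample should land near it.
theoretical_median = 2.0 * (2.0 ** (1.0 / 3.0) - 1.0)
```

The same recipe works for any member of the family once its quantile function is written down, which is why these papers derive it explicitly.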


2019 ◽  
Vol 12 (2) ◽  
pp. 88 ◽  
Author(s):  
Zhongxian Men ◽  
Adam W. Kolkiewicz ◽  
Tony S. Wirjanto

This paper proposes a variant of a threshold stochastic conditional duration (TSCD) model for financial data at the transaction level. It assumes that the innovations of the duration process follow a threshold distribution with a positive support. In addition, it also assumes that the latent first-order autoregressive process of the log conditional durations switches between two regimes. The regimes are determined by the levels of the observed durations, and the TSCD model is specified to be self-excited. A novel Markov chain Monte Carlo (MCMC) method is developed for parameter estimation of the model. For model discrimination, we employ the deviance information criterion, which does not depend directly on the number of model parameters. Duration forecasting is constructed by using an auxiliary particle filter based on the fitted models. Simulation studies demonstrate that the proposed TSCD model and MCMC method work well in terms of parameter estimation and duration forecasting. Lastly, the proposed model and method are applied to two classic data sets that have been studied in the literature, namely IBM and Boeing transaction data.
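The self-excited threshold mechanism can be sketched with a toy simulator: the latent AR(1) log conditional duration switches its parameters according to whether the previous observed duration exceeded the threshold. Parameter values and the exponential innovation are illustrative assumptions, not the paper's specification:

```python
import numpy as np

def simulate_tscd(n, phi=(0.9, 0.6), mu=(0.0, 0.5), sigma=0.2,
                  threshold=1.0, seed=0):
    """Toy self-excited threshold SCD simulation (illustrative only).

    The latent log conditional duration follows an AR(1) whose
    parameters (mu_j, phi_j) depend on the regime j, and the regime is
    chosen by whether the previous observed duration exceeded the
    threshold -- the 'self-excited' mechanism. A unit exponential
    stands in for the positive-support innovation distribution.
    """
    rng = np.random.default_rng(seed)
    log_psi, prev_dur = 0.0, 0.0
    durations = np.empty(n)
    for t in range(n):
        j = int(prev_dur > threshold)            # regime from last duration
        log_psi = mu[j] + phi[j] * log_psi + sigma * rng.normal()
        prev_dur = np.exp(log_psi) * rng.exponential()
        durations[t] = prev_dur
    return durations

x = simulate_tscd(1000)
```

Estimation then runs this logic in reverse: MCMC samples the latent log durations and regime-specific parameters given an observed duration series like `x`.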

