LOCAL ALGORITHM FOR MONITORING TOTAL SUSPENDED SEDIMENTS IN MICRO-WATERSHEDS USING DRONES AND REMOTE SENSING APPLICATIONS. CASE STUDY: TEUSACÁ RIVER, LA CALERA, COLOMBIA

Author(s):
N. A. Sáenz
D. E. Paez
C. Arango

The main objective of this research was to establish an empirical relationship between Total Suspended Sediments (TSS) concentrations and reflectance values obtained from drone aerial photos processed with remote sensing tools. A local mathematical algorithm for the micro-watershed of the Teusacá River at La Calera, Colombia, was developed by computing four band components from consumer-grade cameras, obtaining the corresponding reflectance value of each through procedures for correcting digital camera imagery, and using statistical analysis to study the fit and RMSE of 25 regressions. The assessment compared reflectance values against 34 in-situ measurements, with concentrations between 1.6 and 33 mg L⁻¹, taken from the surface layer of the river in two campaigns. A large data set of empirical and literature-referenced algorithms was used to evaluate the accuracy and precision of the relationship. For TSS estimation, the highest accuracy was achieved using Tassan's algorithm with the BAND X/BAND X ratio. The correlation coefficient of R² = X demonstrates the feasibility of using remotely sensed data from consumer-grade cameras as an effective tool for frequent monitoring and control of water quality parameters, such as Total Suspended Solids, in watersheds that are among the most vulnerable and least compliant with environmental regulations.
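
As a rough illustration of how such a band-ratio relationship can be calibrated, the sketch below fits a Tassan-style log-linear regression between a two-band reflectance ratio and TSS. The arrays, band choice, and coefficients are illustrative assumptions, not the paper's data or results.

```python
import numpy as np
from scipy import stats

# Illustrative stand-ins for the study's data: per-sample band-ratio
# reflectance and matched in-situ TSS concentrations (mg/L). Real values
# would come from the corrected drone imagery and the two field campaigns.
rng = np.random.default_rng(0)
tss = rng.uniform(1.6, 33.0, size=34)        # mg/L, same range as the study
band_ratio = 0.4 + 0.02 * np.log10(tss) + rng.normal(0, 0.005, 34)

# Tassan-style log-linear regression: log10(TSS) ~ a + b * (band ratio).
slope, intercept, r, p, se = stats.linregress(band_ratio, np.log10(tss))
pred = 10 ** (intercept + slope * band_ratio)
rmse = float(np.sqrt(np.mean((pred - tss) ** 2)))
print(f"R^2 = {r ** 2:.3f}, RMSE = {rmse:.2f} mg/L")
```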

1997
Vol 80 (3)
pp. 676-680
Author(s):
Michael Thompson
Philip J Lowthian

Abstract A statistical test was made of the Horwitz function, an empirical relationship between the reproducibility precision of an analytical method and the concentration of the analyte, regardless of the nature of the analyte, the matrix, and the method. The large data set (7502 observations) was compiled by Horwitz from collaborative trials (method performance studies) spanning the period 1915 to 1995. The data followed the Horwitz function well down to concentrations of about 10⁻⁸ (10 ppb), but followed a more stringent specification at lower concentrations. This discrepancy may be due to special circumstances prevailing in collaborative trials at very low concentrations. Deviations of individual observations from the function were largely accounted for by random variations. No consequential improvement in precision with time was found.
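
For reference, the Horwitz function tested here is conventionally written as RSD_R(%) = 2^(1 − 0.5 log₁₀ c), where c is the analyte concentration expressed as a dimensionless mass fraction. A minimal Python check of the predicted reproducibility at the 10⁻⁸ breakpoint noted above:

```python
import math

def horwitz_rsd(c: float) -> float:
    """Predicted reproducibility RSD (%) at mass-fraction concentration c,
    per the classic Horwitz relation RSD_R = 2 ** (1 - 0.5 * log10(c))."""
    return 2 ** (1 - 0.5 * math.log10(c))

# At c = 1e-8 (10 ppb), the breakpoint noted in the abstract:
print(f"{horwitz_rsd(1e-8):.0f} %")  # 32 %
```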


2005
Vol 23 (4)
pp. 1311-1316
Author(s):
E. A. Lvova
V. A. Sergeev
G. R. Bagautdinova

Abstract. Based on a large data set of polar NOAA-type satellite observations, we studied the latitude-MLT shape of the 80 keV proton isotropy boundary (IB) as a function of the solar wind parameters and magnetic activity. Using "snapshots" of isotropy boundaries crossed near-simultaneously at four points, we found that the IB's equatorward expansion, as well as its dawn-dusk shift, depends mostly on the AE index and on the corrected Dst*, whereas the amplitude of the IB daily variation is mostly controlled by the solar wind dynamic pressure. Applying a nonlinear, multi-parametric, least-squares regression procedure, we obtained an empirical relationship describing the IB latitude as a function of MLT and the AE, Pd, and Dst* parameters. Comparing it with predictions from the Tsyganenko-2001 model, we found good agreement during quiet times but some important differences during disturbed periods. Interpretation of these results in terms of the properties of the magnetospheric configuration is briefly discussed.
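
The paper's fitted parameterization is not reproduced in this abstract, so the sketch below only illustrates the nonlinear multi-parametric least-squares step with SciPy's curve_fit; the functional form, coefficients, and synthetic inputs are hypothetical placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical IB-latitude model: a baseline latitude shifted by activity
# (AE, Dst*) plus a daily (MLT) variation whose amplitude scales with the
# solar wind dynamic pressure Pd, echoing the dependencies reported above.
def ib_latitude(X, a0, a1, a2, a3, a4, a5):
    mlt, ae, pd, dst = X
    phase = 2 * np.pi * mlt / 24.0
    return (a0 + a1 * np.log1p(ae) + a2 * dst
            + (a3 + a4 * pd) * np.cos(phase + a5))

# Synthetic stand-in observations (MLT in hours, AE in nT, Pd in nPa, Dst* in nT).
rng = np.random.default_rng(1)
mlt, ae = rng.uniform(0, 24, 200), rng.uniform(0, 1000, 200)
pd, dst = rng.uniform(1, 10, 200), rng.uniform(-150, 0, 200)
lat = ib_latitude((mlt, ae, pd, dst), 68, -1.2, 0.02, 1.0, 0.3, 0.5)
lat += rng.normal(0, 0.3, 200)

params, cov = curve_fit(ib_latitude, (mlt, ae, pd, dst), lat,
                        p0=[68, -1, 0, 1, 0, 0])
print(np.round(params, 2))
```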


2019
Vol 85 (10)
pp. 737-752
Author(s):
Yihua Tan
Shengzhou Xiong
Zhi Li
Jinwen Tian
Yansheng Li

The analysis of built-up areas has always been a popular research topic for remote sensing applications. However, automatic extraction of built-up areas across a wide range of regions remains challenging. In this article, a fully convolutional network (FCN)-based strategy is proposed for built-up area extraction. The proposed algorithm consists of two main steps. First, the remote sensing image is divided into blocks and their deep features are extracted by a lightweight multi-branch convolutional neural network (LMB-CNN). Second, the deep features are rearranged into feature maps that are fed into a well-designed FCN for image segmentation. The FCN is integrated with multi-branch blocks and outputs multi-channel segmentation masks that are used to balance false alarms and missed detections. Experiments demonstrate that the overall classification accuracy of the proposed algorithm reaches 98.75% on the test data set and that it processes data faster than existing state-of-the-art algorithms.
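
As a hedged sketch of the two-step idea, the PyTorch code below pairs a small multi-branch convolutional block with an FCN-style head that outputs multi-channel masks; the layer sizes and branch kernels are illustrative guesses, not the published LMB-CNN architecture.

```python
import torch
import torch.nn as nn

class MultiBranchBlock(nn.Module):
    """Parallel convolutions with different kernel sizes, concatenated."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        b = out_ch // 4
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, b, k, padding=k // 2) for k in (1, 3, 5, 7)
        )
        self.bn = nn.BatchNorm2d(4 * b)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(torch.cat([br(x) for br in self.branches], 1)))

class BuiltUpFCN(nn.Module):
    """Block-wise feature extraction, then FCN-style dense segmentation."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            MultiBranchBlock(3, 32), nn.MaxPool2d(2),
            MultiBranchBlock(32, 64), nn.MaxPool2d(2),
        )
        # 1x1 conv head + upsampling: multi-channel masks at input resolution,
        # whose per-class channels can be thresholded to trade false alarms
        # against missed detections.
        self.head = nn.Conv2d(64, n_classes, 1)
        self.up = nn.Upsample(scale_factor=4, mode="bilinear",
                              align_corners=False)

    def forward(self, x):
        return self.up(self.head(self.features(x)))

masks = BuiltUpFCN()(torch.randn(1, 3, 256, 256))
print(masks.shape)  # torch.Size([1, 2, 256, 256])
```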


2020
Vol 13 (7)
pp. 3439-3463
Author(s):
Jouni Susiluoto
Alessio Spantini
Heikki Haario
Teemu Härkönen
Youssef Marzouk

Abstract. Satellite remote sensing provides a global view of processes on Earth, with unique benefits compared to ground measurements, such as global coverage and enormous data volume. The typical downsides are spatial and temporal gaps and potentially low data quality. Meaningful statistical inference from such data requires overcoming these problems and developing efficient and robust computational tools. We design and implement a computationally efficient multi-scale Gaussian process (GP) software package, satGP, geared towards remote sensing applications. The software is able to handle problems of enormous size and to compute marginals and sample from the random field conditioned on at least hundreds of millions of observations. This is achieved by optimizing the computation, e.g. through randomization and by splitting the problem into parallel local subproblems that aggressively discard uninformative data. We describe the mean function of the Gaussian process by approximating marginals of a Markov random field (MRF). Variability around the mean is modeled with a multi-scale covariance kernel consisting of Matérn, exponential, and periodic components. We also demonstrate how winds can be used to inform covariances locally. The covariance kernel parameters are learned by calculating an approximate marginal maximum likelihood estimate, and the validity of both the multi-scale approach and the method used to learn the kernel parameters is verified in synthetic experiments. We apply these techniques to a moderate-size ozone data set produced by an atmospheric chemistry model and to the very large number of observations retrieved from the Orbiting Carbon Observatory 2 (OCO-2) satellite. The satGP software is released under an open-source license.
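
satGP itself is custom large-scale software, but the kernel structure described above can be mimicked at toy scale with scikit-learn: a Matérn term, an exponential term (Matérn with ν = 0.5), and a periodic term, with hyperparameters learned by marginal likelihood maximization. The data and length scales below are placeholders.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ExpSineSquared, Matern, WhiteKernel

# Multi-scale covariance in the spirit of the paper: smooth Matern component,
# rough exponential component, periodic component, plus observation noise.
kernel = (Matern(length_scale=1.0, nu=1.5)
          + Matern(length_scale=0.1, nu=0.5)     # exponential kernel
          + ExpSineSquared(length_scale=1.0, periodicity=1.0)
          + WhiteKernel(noise_level=0.01))

rng = np.random.default_rng(2)
X = rng.uniform(0, 5, (100, 1))
y = np.sin(2 * np.pi * X[:, 0]) + 0.2 * rng.normal(size=100)

# fit() maximizes the log marginal likelihood over the kernel hyperparameters,
# the same principle the paper applies approximately at much larger scale.
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
mean, std = gp.predict(np.array([[2.5]]), return_std=True)
print(mean, std)
```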


2020
Vol 39 (5)
pp. 6419-6430
Author(s):
Dusan Marcek

To forecast time series data, two methodological frameworks of statistical and computational intelligence modelling are considered. The statistical approach is based on the theory of invertible ARIMA (Auto-Regressive Integrated Moving Average) models with the Maximum Likelihood (ML) estimation method. As a competitor to the statistical forecasting models, we use the classic perceptron-type neural network (NN). To train the NN, the Back-Propagation (BP) algorithm and heuristics such as the genetic and micro-genetic algorithms (GA and MGA) are implemented on the large data set. A comparative analysis of the selected learning methods is performed and evaluated. The experiments indicate that the optimal population size is likely 20, giving the lowest training time among all NNs trained by the evolutionary algorithms, while the prediction accuracy is somewhat lower but still acceptable to managers.
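
A minimal sketch of the statistical side of this comparison, assuming statsmodels: an ARIMA model fitted by maximum likelihood and used for out-of-sample forecasts. The order (1, 1, 1) and the synthetic series are illustrative, not the study's data.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# A random-walk-with-drift series as a stand-in for the study's data.
rng = np.random.default_rng(3)
y = np.cumsum(0.5 + rng.normal(0, 1, 300))

model = ARIMA(y, order=(1, 1, 1))   # invertible ARIMA specification
fitted = model.fit()                # maximum likelihood estimation
print(fitted.forecast(steps=5))     # five-step-ahead forecasts
```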


2020
Vol 38 (4A)
pp. 510-514
Author(s):
Tay H. Shihab
Amjed N. Al-Hameedawi
Ammar M. Hamza

In this paper, to make use of complementary potential in LULC mapping, spatial data were acquired from Landsat 8 OLI sensor images taken in 2019. The images were rectified, enhanced, and then classified using random forest (RF) and artificial neural network (ANN) methods. Optical remote sensing images were used to obtain information on the status of LULC classification and extraction details; the classification of both image types was used to extract features and to analyse LULC in the study area. The results showed that the artificial neural network method outperforms the random forest method. The required image processing for the optical remote sensing data used in LULC mapping, including geometric correction and image enhancement, was carried out. With the ANN method, the overall accuracy was 0.91 and the kappa coefficient 0.89 for the training data set, while the overall accuracy and kappa coefficient for the test data set were 0.89 and 0.87, respectively.
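
A toy version of the RF-versus-ANN comparison, assuming scikit-learn; the synthetic features stand in for per-pixel Landsat 8 OLI band values, and cohen_kappa_score supplies the kappa statistic reported above.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Placeholder for per-pixel spectral features and LULC class labels.
X, y = make_classification(n_samples=2000, n_features=7, n_informative=5,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("RF", RandomForestClassifier(random_state=0)),
                  ("ANN", MLPClassifier(max_iter=1000, random_state=0))]:
    pred = clf.fit(X_tr, y_tr).predict(X_te)
    print(name,
          f"overall accuracy = {accuracy_score(y_te, pred):.2f}",
          f"kappa = {cohen_kappa_score(y_te, pred):.2f}")
```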


2019
Vol 21 (9)
pp. 662-669
Author(s):
Junnan Zhao
Lu Zhu
Weineng Zhou
Lingfeng Yin
Yuchen Wang
...

Background: Thrombin is the central protease of the vertebrate blood coagulation cascade and is closely related to cardiovascular diseases. The inhibitory constant Ki is the most significant property of thrombin inhibitors. Method: This study was carried out to predict the Ki values of thrombin inhibitors from a large data set using machine learning methods. Because machine learning can find non-intuitive regularities in high-dimensional data sets, it can be used to build effective predictive models. A total of 6554 descriptors were collected for each compound, and an efficient descriptor selection method was chosen to find the appropriate descriptors. Four different methods, including multiple linear regression (MLR), K Nearest Neighbors (KNN), Gradient Boosting Regression Tree (GBRT), and Support Vector Machine (SVM), were implemented to build prediction models with the selected descriptors. Results: The SVM model was the best of these methods, with R² = 0.84, MSE = 0.55 for the training set and R² = 0.83, MSE = 0.56 for the test set. Several validation methods, such as the y-randomization test and applicability domain evaluation, were adopted to assess the robustness and generalization ability of the model. The final model shows excellent stability and predictive ability and can be employed for rapid estimation of the inhibitory constant, which is of great help for designing novel thrombin inhibitors.
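
The modelling step can be sketched as follows with scikit-learn's SVR on a selected-descriptor matrix; the random placeholder descriptors and activities below merely stand in for the paper's curated thrombin data set.

```python
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Placeholder descriptor matrix (20 selected descriptors) and surrogate
# Ki-derived targets; real inputs come from the descriptor selection step.
rng = np.random.default_rng(4)
X = rng.normal(size=(500, 20))
y = X @ rng.normal(size=20) + rng.normal(0, 0.5, 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0)).fit(X_tr, y_tr)
for name, Xs, ys in [("train", X_tr, y_tr), ("test", X_te, y_te)]:
    pred = model.predict(Xs)
    print(name, f"R2 = {r2_score(ys, pred):.2f}",
          f"MSE = {mean_squared_error(ys, pred):.2f}")
```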


2021
Vol 11 (1)
Author(s):
Ruolan Zeng
Jiyong Deng
Limin Dang
Xinliang Yu

Abstract. A three-descriptor quantitative structure-activity/toxicity relationship (QSAR/QSTR) model was developed for the skin permeability of a sufficiently large data set consisting of 274 compounds by applying a support vector machine (SVM) together with a genetic algorithm. The optimal SVM model possesses a coefficient of determination R² of 0.946 and a root mean square (rms) error of 0.253 for the training set of 139 compounds, and an R² of 0.872 and an rms error of 0.302 for the test set of 135 compounds. Compared with other models reported in the literature, our SVM model shows better statistical performance while handling more samples in the test set. Therefore, a nonlinear QSAR model for skin permeability was successfully developed by applying an SVM algorithm.
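
To illustrate the GA-plus-SVM workflow named above, here is a deliberately small genetic search for a three-descriptor subset scored by cross-validated R²; the data, population settings, and mutation scheme are simplified placeholders rather than the paper's procedure.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

# Placeholder: 274 compounds, 30 candidate descriptors, 3 of them informative.
rng = np.random.default_rng(5)
X = rng.normal(size=(274, 30))
y = X[:, [3, 11, 22]] @ np.array([1.0, -0.7, 0.5]) + rng.normal(0, 0.3, 274)

def fitness(mask):
    """Cross-validated R2 of an SVR using only the descriptors in `mask`."""
    cols = np.flatnonzero(mask)
    return cross_val_score(SVR(), X[:, cols], y, cv=3, scoring="r2").mean()

# Population of three-descriptor subsets; mutate by swapping one descriptor.
pop = [rng.permutation([1] * 3 + [0] * 27) for _ in range(20)]
for gen in range(10):
    pop.sort(key=fitness, reverse=True)
    children = []
    for parent in pop[:10]:                  # keep the fitter half as parents
        child = parent.copy()
        on = rng.choice(np.flatnonzero(child))
        off = rng.choice(np.flatnonzero(child == 0))
        child[on], child[off] = 0, 1
        children.append(child)
    pop = pop[:10] + children

best = max(pop, key=fitness)
print("selected descriptors:", np.flatnonzero(best))
```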

