Geographic Graph Hybrid Network for Robust Inversion of Particulate Matters

2021 ◽  
Vol 13 (21) ◽  
pp. 4341
Author(s):  
Lianfa Li

Although remote sensors have been increasingly providing dense data and derived reanalysis data for the inversion of particulate matters, the use of these data is considerably limited by the availability of ground monitoring samples and by conventional machine learning models. As regional criteria air pollutants, particulate matters exhibit strong long-range spatial correlation, which conventional machine learning either cannot model or can model only in a limited way. Here, we propose a geographic graph hybrid network that encodes spatial neighborhood features to make robust estimates of coarse and fine particulate matter (PM10 and PM2.5). Based on Tobler’s First Law of Geography and graph convolutions, we constructed the architecture of a geographic graph hybrid network, in which full residual deep layers were connected with graph convolutions to reduce over-smoothing, subject to the PM10–PM2.5 relationship constraint. In a site-based independent test in mainland China (2015–2018), our method achieved much better generalization than typical state-of-the-art methods (improvement in R2: 8–78%; decrease in RMSE: 14–48%). This study shows that the proposed method can encode neighborhood information and make an important contribution to improving the generalization and extrapolation of geo-features with strong spatial correlation, such as PM2.5 and PM10.
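
The architecture described above can be pictured with a short sketch. The following is a minimal, hypothetical PyTorch illustration (not the authors' code) of the idea: a graph convolution over a distance-weighted spatial neighborhood combined with full residual dense layers and a joint PM2.5/PM10 output. Layer sizes, the adjacency construction and the form of the PM10–PM2.5 constraint are assumptions.

import torch
import torch.nn as nn

class GeoGraphHybridNet(nn.Module):
    def __init__(self, in_dim, hidden_dim=64, n_res_blocks=3):
        super().__init__()
        self.gc_weight = nn.Linear(in_dim, hidden_dim)    # graph-convolution weights
        self.self_proj = nn.Linear(in_dim, hidden_dim)    # node's own features bypass the graph
        self.res_blocks = nn.ModuleList([
            nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
                          nn.Linear(hidden_dim, hidden_dim))
            for _ in range(n_res_blocks)])
        self.head = nn.Linear(hidden_dim, 2)              # joint PM2.5 and PM10 outputs

    def forward(self, x, adj):
        # adj: row-normalized adjacency built from inverse distances between sites
        h = torch.relu(adj @ self.gc_weight(x) + self.self_proj(x))
        for block in self.res_blocks:
            h = torch.relu(h + block(h))                  # full residual connections
        pm25, pm10 = self.head(h).unbind(dim=-1)
        return pm25, pm10

# Hypothetical usage: 100 monitoring sites, 10 covariates, random adjacency.
x = torch.randn(100, 10)
adj = torch.rand(100, 100)
adj = adj / adj.sum(dim=1, keepdim=True)                  # row-normalize neighborhood weights
pm25, pm10 = GeoGraphHybridNet(in_dim=10)(x, adj)
# One way to encode the PM10-PM2.5 relationship is a hinge penalty such as
# torch.relu(pm25 - pm10).mean(), since PM2.5 is a subset of PM10.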

2019 ◽  
Vol 11 (11) ◽  
pp. 1378 ◽  
Author(s):  
Lianfa Li

High-resolution spatiotemporal wind speed mapping is useful for atmospheric environmental monitoring, air quality evaluation and wind power siting. Although modern reanalysis techniques can obtain reliable interpolated surfaces of meteorology at a high temporal resolution, their spatial resolutions are coarse, and the local variability of wind speed is difficult to capture due to its volatility. Here, a two-stage approach was developed for robust spatiotemporal estimation of wind speed at a high resolution. The proposed approach consists of geographically weighted ensemble machine learning (Stage 1) and downscaling based on meteorological reanalysis data (Stage 2). The geographically weighted machine learning method is based on three base learners (an autoencoder-based deep residual network, XGBoost and random forest) and incorporates spatial autocorrelation and heterogeneity to boost the ensemble predictions. With reanalysis data, downscaling was introduced in Stage 2 to reduce bias and abrupt (non-natural) spatial variation in the predictions inferred from Stage 1. The autoencoder-based residual network was used in Stage 2 to adjust the difference between the averages of the fine-resolution predicted values and the coarse-resolution reanalysis data to ensure consistency. Using mainland China as a case study, the geographically weighted regression (GWR) ensemble predictions were shown to perform better than the individual learners’ predictions (an improvement of approximately 12–16% in R2 and a decrease of 0.14–0.19 m/s in root mean square error). Downscaling further improved the predictions by reducing inconsistency and producing better (smoother) spatial variation. The proposed approach can also be applied for the high-resolution spatiotemporal estimation of other meteorological parameters or surface variables involving remote sensing images (i.e., reliable coarse-resolution data), ground monitoring data and other relevant factors.
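
As a rough illustration of Stage 1, the sketch below blends three base learners with spatially varying (GWR-style) weights on synthetic data. It is an assumed, simplified stand-in rather than the paper's implementation: sklearn's MLPRegressor and GradientBoostingRegressor replace the autoencoder-based residual network and XGBoost, and the Gaussian kernel and bandwidth are arbitrary choices.

import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))                  # covariates (e.g. reanalysis fields, terrain)
coords = rng.uniform(0, 10, size=(500, 2))     # site coordinates
y = 2 * X[:, 0] + np.sin(coords[:, 0]) + rng.normal(scale=0.3, size=500)  # toy wind speed

learners = [RandomForestRegressor(n_estimators=100, random_state=0),
            GradientBoostingRegressor(random_state=0),
            MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)]
base_preds = np.column_stack([m.fit(X, y).predict(X) for m in learners])

def local_ensemble_pred(i, bandwidth=2.0):
    # Blend the base learners at site i with Gaussian distance weights (GWR-style).
    d2 = np.sum((coords - coords[i]) ** 2, axis=1)
    sw = np.sqrt(np.exp(-d2 / (2 * bandwidth ** 2)))   # sqrt of kernel weights for weighted least squares
    beta, *_ = np.linalg.lstsq(base_preds * sw[:, None], y * sw, rcond=None)
    return float(base_preds[i] @ beta)

print([round(local_ensemble_pred(i), 3) for i in range(3)])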


2021 ◽  
pp. 1-12
Author(s):  
Mukul Kumar ◽  
Nipun Katyal ◽  
Nersisson Ruban ◽  
Elena Lyakso ◽  
A. Mary Mekala ◽  
...  

Over the years, the need to differentiate various emotions in oral communication has played an important role in emotion-based studies. Different algorithms have been proposed to classify kinds of emotion, but there is no measure of the fidelity of the emotion under consideration, primarily because most readily available annotated datasets are produced by actors rather than generated in real-world scenarios. Therefore, the predicted emotion lacks an important aspect called authenticity: whether an emotion is actual or stimulated. In this research work, we have developed a transfer learning and style transfer based hybrid convolutional neural network algorithm to classify both the emotion and the fidelity of the emotion. The model is trained on features extracted from a dataset that contains stimulated as well as actual utterances. We have compared the developed algorithm with conventional machine learning and deep learning techniques using metrics such as accuracy, precision, recall and F1 score. The developed model performs much better than the conventional machine learning and deep learning models. The research aims to dive deeper into human emotion and build a model that understands it as humans do, achieving precision, recall and F1 score values of 0.994, 0.996 and 0.995 for speech authenticity and 0.992, 0.989 and 0.99 for speech emotion classification, respectively.
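
A dual-output classifier of the kind described above can be sketched as a shared CNN backbone with two heads, one for the emotion class and one for authenticity (actual vs. stimulated). The PyTorch sketch below is a hypothetical, minimal illustration, not the authors' model; the input shape, layer sizes and number of emotion classes are assumptions.

import torch
import torch.nn as nn

class EmotionAuthenticityCNN(nn.Module):
    def __init__(self, n_emotions=6):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten())
        self.emotion_head = nn.Linear(32, n_emotions)   # which emotion
        self.authenticity_head = nn.Linear(32, 2)       # actual vs. stimulated

    def forward(self, x):
        h = self.backbone(x)
        return self.emotion_head(h), self.authenticity_head(h)

# Hypothetical batch: 8 spectrogram-like feature maps of size 64x64.
model = EmotionAuthenticityCNN()
emotion_logits, auth_logits = model(torch.randn(8, 1, 64, 64))
loss = nn.CrossEntropyLoss()(emotion_logits, torch.randint(0, 6, (8,))) + \
       nn.CrossEntropyLoss()(auth_logits, torch.randint(0, 2, (8,)))
loss.backward()   # joint training of both heads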


Author(s):  
Tiramareddy Manasa Swetha ◽  
Tekkali Yogitha ◽  
Manche Kuruba Sai Hitha ◽  
Puppala Syamanthika ◽  
S S Poorna ◽  
...  

2021 ◽  
Author(s):  
Rui Liu ◽  
Xin Yang ◽  
Chong Xu ◽  
Luyao Li ◽  
Xiangqiang Zeng

Abstract Landslide susceptibility mapping (LSM) is a useful tool to estimate the probability of landslide occurrence, providing a scientific basis for natural hazard prevention, land use planning, and economic development in landslide-prone areas. To date, a large number of machine learning methods have been applied to LSM, and recently the advanced Convolutional Neural Network (CNN) has been gradually adopted to enhance the prediction accuracy of LSM. The objective of this study is to introduce a CNN based model for LSM and to systematically compare its overall performance with the conventional machine learning models of random forest, logistic regression, and support vector machine. Herein, we selected the Jiuzhaigou region in Sichuan Province, China as the study area. A total of 710 landslides and 12 predisposing factors were stacked to form spatial datasets for LSM. ROC analysis and several statistical metrics, such as accuracy, root mean square error (RMSE), Kappa coefficient, sensitivity, and specificity, were used to evaluate the performance of the models on the training and validation datasets. Finally, the trained models were applied and the landslide susceptibility zones were mapped. Results suggest that both the CNN and the conventional machine-learning based models have a satisfactory performance (AUC: 85.72%–90.17%). The CNN based model exhibits excellent goodness-of-fit and prediction capability, achieving the highest performance (AUC: 90.17%) while also significantly reducing the salt-and-pepper effect, which indicates its great potential for application to LSM.
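
As an illustration of how the 12 stacked predisposing factors can feed a CNN, the sketch below scores small multi-channel patches as landslide susceptibility probabilities. It is a minimal, hypothetical architecture with assumed patch size and layer widths, not the model compared in the study.

import torch
import torch.nn as nn

# 12 input channels, one per predisposing factor layer.
cnn = nn.Sequential(
    nn.Conv2d(12, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(64, 1))          # logit for landslide / non-landslide

# Hypothetical batch: 16 patches of 15x15 cells with 12 factor channels.
patches = torch.randn(16, 12, 15, 15)
susceptibility = torch.sigmoid(cnn(patches)).squeeze(1)
print(susceptibility.shape)    # torch.Size([16]) probabilities in [0, 1]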


2018 ◽  
Vol 32 (1) ◽  
pp. 60-68 ◽  
Author(s):  
Sai Nyan Lin Tun ◽  
Than Htut Aung ◽  
Aye Sandar Mon ◽  
Pyay Hein Kyaw ◽  
Wattasit Siriwong ◽  
...  

Purpose Dust (particulate matter) is very dangerous to our health, as it is not visible to the naked eye. Dust emissions in the ambient environment arise mainly from road traffic, construction and dust-generating working environments. The purpose of this paper is to assess the ambient dust pollution status and the association between PM concentrations and determinant factors such as wind speed, ambient temperature, relative humidity and traffic congestion. Design/methodology/approach A cross-sectional study was conducted over two consecutive months (June and July 2016) at a residential site (Defence Services Liver Hospital, Mingaladon) and a commercial site (Htouk-kyant Junction, Mingaladon), based on the WHO Air Quality Reference Guideline Value (24-hour average). Hourly monitoring of PM2.5 and PM10 concentrations and of determinant factors such as traffic congestion, wind speed, ambient temperature and relative humidity was performed 24 hours a day at both study sites. A CW-HAT200 handheld particulate matter monitoring device was used to assess PM concentrations, temperature and humidity, while traffic congestion was monitored by CCTV cameras. Findings The baseline PM2.5 and PM10 concentrations of the Mingaladon area were 28.50±11.49 µg/m3 and 52.69±23.53 µg/m3, and 61.48 percent of PM2.5 readings and 54.92 percent of PM10 readings exceeded the WHO reference values during the study period. PM concentrations usually peaked in the early morning (3:00 a.m.-5:00 a.m.) and at night (after 9:00 p.m.). PM2.5 concentration depended mainly on traffic congestion and temperature (adjusted R2=0.286), while PM10 concentration depended on traffic congestion and relative humidity (adjusted R2=0.292). Wind speed was negatively correlated with both PM2.5 and PM10 concentrations (r=−0.228 and r=−0.266). Originality/value The air quality of the study area did not reach a satisfactory level. The main cause of increased dust pollution across the study area was high traffic congestion (R2=0.63 and 0.60 for PM2.5 and PM10 concentration, respectively).
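
The regression and correlation analysis reported in the findings can be reproduced in outline with the sketch below, which fits a multiple linear regression of hourly PM2.5 on traffic congestion and temperature and computes the Pearson correlation with wind speed. The data are synthetic and the coefficients are arbitrary assumptions, shown purely to illustrate the form of the analysis.

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 24 * 61                                        # hourly records over two months (hypothetical)
traffic = rng.poisson(200, n)                      # vehicles per hour
temperature = rng.normal(30, 3, n)                 # degrees C
wind_speed = rng.gamma(2.0, 1.0, n)                # m/s
pm25 = 30 + 0.08 * traffic - 0.5 * temperature - 2.0 * wind_speed + rng.normal(0, 5, n)

X = np.column_stack([traffic, temperature])
model = LinearRegression().fit(X, pm25)
r2 = model.score(X, pm25)
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - X.shape[1] - 1)   # adjusted R^2
r_wind = np.corrcoef(wind_speed, pm25)[0, 1]             # negative, as in the study
print(f"adjusted R2 = {adj_r2:.3f}, r(wind, PM2.5) = {r_wind:.3f}")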


2021 ◽  
Vol 5 (1) ◽  
pp. 1-15
Author(s):  
Rubina Shaheen ◽  
Mir Kasi

The report presents the use of artificial intelligence in several administrative agencies. An in-depth thematic analysis of selected institutions was conducted to review current trends: 12 institutions were selected, and their use of artificial intelligence across different departments was described in detail. These analyses yielded five major findings. First, the government applies a wide Artificial Intelligence toolkit spanning federal and state administration; almost half of the federal agencies evaluated (45%) have used AI and associated machine learning (ML) tools. AI tools are already enhancing agency operations across the full span of governance responsibilities, such as carrying out regulatory assignments related to market efficiency, workplace safety, health care and environmental protection; administering government privileges and benefits ranging from intellectual property to disability; accessing, verifying and analyzing risks to public safety and health; extracting essential data from government data streams, including consumer complaints; and communicating with citizens about their rights, welfare, asylum seeking and business ownership. The government's AI toolkit spans the complete scope of Artificial Intelligence techniques, ranging from conventional machine learning to deep learning on natural language and image data. Despite the broad adoption of AI, much still has to be done in this area by the government. Recommendations are also discussed at the end.


Data is the most crucial component of a successful ML system. Once a machine learning model is developed, it becomes obsolete over time because new input data are generated every second. In order to keep our predictions accurate, we need to find a way to keep our models up to date. Our research work involves finding a mechanism which can retrain the model with new data automatically. This research also involves exploring the possibilities of automating machine learning processes. We started this project by training and testing our model using conventional machine learning methods. The outcome was then compared with the outcome of experiments conducted using AutoML methods like TPOT. This helped us find an efficient technique for retraining our models. These techniques can be used in areas where people do not deal with the actual workings of an ML model but only require the outputs of ML processes.
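
As a rough illustration of the AutoML route mentioned above, the sketch below uses TPOT to search for a classification pipeline and export it, so that the exported pipeline (or a fresh search) can be refit whenever new data arrive. The dataset and search budget are illustrative only, not the project's actual configuration.

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from tpot import TPOTClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Small search budget for illustration; real runs would use more generations.
automl = TPOTClassifier(generations=3, population_size=20, random_state=0, verbosity=2)
automl.fit(X_train, y_train)
print("held-out accuracy:", automl.score(X_test, y_test))

# Export the best pipeline; re-running fit() on fresh data retrains it automatically.
automl.export("best_pipeline.py")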


2021 ◽  
Vol 893 (1) ◽  
pp. 012030
Author(s):  
H Harsa ◽  
M N Habibie ◽  
A S Praja ◽  
S P Rahayu ◽  
T D Hutapea ◽  
...  

Abstract A method for forecasting the daily mean rainfall of a month is presented in this paper. The method provides spatial forecasts over Indonesia and employs ensembles of Machine Learning and Artificial Intelligence algorithms as its forecast models. Each spatial grid in the forecast output is processed as an individual dataset; therefore, each location in the forecast output has different stacked ensemble models as well as different model parameter settings, and the best ensemble model is chosen for each spatial grid. The input dataset of the model consists of eight climate indices (the East and West Dipole Mode Index, Outgoing Longwave Radiation, the Southern Oscillation Index, and Nino 1.2, 3, 4 and 3.4) and monthly rainfall reanalysis data, ranging from January 1982 to December 2019. Four assessment procedures were performed on the models: establishing daily mean rainfall as a response function of climate patterns, and forecasts at one-, two- and three-month lead times. The results show that, based on their performance, these non-physical models can be considered a complement to the existing forecast models.
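
The per-grid model selection described above can be outlined with the sketch below: for each grid cell, candidate stacked ensembles are fitted on the eight climate indices and the best one is kept by cross-validation. The data, candidate ensembles and grid count are synthetic assumptions, not the study's configuration.

import numpy as np
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_months, n_indices, n_grids = 456, 8, 4          # 1982-2019 monthly records, 8 climate indices
X = rng.normal(size=(n_months, n_indices))
rainfall = rng.gamma(2.0, 3.0, size=(n_months, n_grids))   # toy daily mean rainfall per grid

candidates = {
    "rf_ridge": StackingRegressor(
        estimators=[("rf", RandomForestRegressor(n_estimators=50, random_state=0))],
        final_estimator=Ridge()),
    "knn_ridge": StackingRegressor(
        estimators=[("knn", KNeighborsRegressor())],
        final_estimator=Ridge()),
}

best_per_grid = {}
for g in range(n_grids):
    scores = {name: cross_val_score(m, X, rainfall[:, g], cv=3).mean()
              for name, m in candidates.items()}
    best_per_grid[g] = max(scores, key=scores.get)          # keep the best ensemble for this grid
print(best_per_grid)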

