The 17-y spatiotemporal trend of PM2.5 and its mortality burden in China

2020 ◽  
Vol 117 (41) ◽  
pp. 25601-25608
Author(s):  
Fengchao Liang ◽  
Qingyang Xiao ◽  
Keyong Huang ◽  
Xueli Yang ◽  
Fangchao Liu ◽  
...  

Investigations on the chronic health effects of fine particulate matter (PM2.5) exposure in China are limited due to the lack of long-term exposure data. Using satellite-driven models to generate spatiotemporally resolved PM2.5 levels, we aimed to estimate high-resolution, long-term PM2.5 and associated mortality burden in China. The multiangle implementation of atmospheric correction (MAIAC) aerosol optical depth (AOD) at 1-km resolution was employed as a primary predictor to estimate PM2.5 concentrations. Imputation techniques were adopted to fill in the missing AOD retrievals and provide accurate long-term AOD aggregations. Monthly PM2.5 concentrations in China from 2000 to 2016 were estimated using machine-learning approaches and used to analyze spatiotemporal trends of adult mortality attributable to PM2.5 exposure. Mean coverage of AOD increased from 56 to 100% over the 17-y period, with the accuracy of long-term averages enhanced after gap filling. Machine-learning models performed well, with a random cross-validation R2 of 0.93 at the monthly level. For the time period outside the model training window, prediction R2 values were estimated to be 0.67 and 0.80 at the monthly and annual levels. Across the adult population in China, long-term PM2.5 exposures accounted for a total of 30.8 (95% confidence interval [CI]: 28.6, 33.2) million premature deaths over the 17-y period, with an annual burden ranging from 1.5 (95% CI: 1.3, 1.6) to 2.2 (95% CI: 2.1, 2.4) million. Our satellite-based techniques provide reliable long-term PM2.5 estimates at a high spatial resolution, enhancing the assessment of adverse health effects and disease burden in China.
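As an illustration only (not the authors' code): a minimal sketch of the general estimation setup, regressing monthly PM2.5 on gap-filled AOD and ancillary covariates with a random-forest learner under random 10-fold cross-validation. All column names and data below are hypothetical and synthetic.

```python
# Hedged sketch: random-forest regression of monthly PM2.5 on AOD and covariates,
# evaluated with random 10-fold cross-validation (column names are hypothetical).
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score, KFold

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "aod": rng.gamma(2.0, 0.3, n),            # gap-filled MAIAC AOD (unitless)
    "temperature": rng.normal(15, 10, n),     # monthly mean temperature (deg C)
    "rel_humidity": rng.uniform(20, 95, n),   # relative humidity (%)
    "elevation": rng.uniform(0, 3000, n),     # site elevation (m)
    "month": rng.integers(1, 13, n),
})
# Synthetic target loosely tied to AOD; the real model is trained on ground PM2.5.
df["pm25"] = 60 * df["aod"] + 0.2 * df["rel_humidity"] + rng.normal(0, 5, n)

X, y = df.drop(columns="pm25"), df["pm25"]
model = RandomForestRegressor(n_estimators=200, min_samples_leaf=5, n_jobs=-1, random_state=0)
cv = KFold(n_splits=10, shuffle=True, random_state=0)
r2_scores = cross_val_score(model, X, y, cv=cv, scoring="r2")
print(f"random CV R2: {r2_scores.mean():.2f}")
```

An out-of-window evaluation, as reported in the abstract, would replace the random split with a split by year.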

Author(s):  
Domenico D'Alelio ◽  
Salvatore Rampone ◽  
Luigi Maria Cusano ◽  
Nadia Sanseverino ◽  
Luca Russo ◽  
...  


2020 ◽  
Vol 13 (9) ◽  
pp. 4669-4681
Author(s):  
Allan C. Just ◽  
Yang Liu ◽  
Meytar Sorek-Hamer ◽  
Johnathan Rush ◽  
Michael Dorman ◽  
...  

Abstract. The atmospheric products of the Multi-Angle Implementation of Atmospheric Correction (MAIAC) algorithm include column water vapor (CWV) at a 1 km resolution, derived from daily overpasses of NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) instruments aboard the Aqua and Terra satellites. We have recently shown that machine learning using extreme gradient boosting (XGBoost) can improve the estimation of MAIAC aerosol optical depth (AOD). Although MAIAC CWV is generally well validated (Pearson's R > 0.97 versus CWV from AERONET sun photometers), it has not yet been assessed whether machine-learning approaches can further improve CWV. Using a novel spatiotemporal cross-validation approach to avoid overfitting, our XGBoost model, with nine features derived from land use terms, date, and ancillary variables from the MAIAC retrieval, quantifies and can correct a substantial portion of measurement error relative to collocated measurements at AERONET sites (26.9 % and 16.5 % decrease in root mean square error (RMSE) for Terra and Aqua datasets, respectively) in the Northeastern USA, 2000–2015. We use machine-learning interpretation tools to illustrate complex patterns of measurement error and describe a positive bias in MAIAC Terra CWV worsening in recent summertime conditions. We validate our predictive model on MAIAC CWV estimates at independent stations from the SuomiNet GPS network where our corrections decrease the RMSE by 19.7 % and 9.5 % for Terra and Aqua MAIAC CWV. Empirically correcting for measurement error with machine-learning algorithms is a postprocessing opportunity to improve satellite-derived CWV data for Earth science and remote sensing applications.
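For illustration, a hedged sketch of the error-correction idea using the xgboost Python package, with scikit-learn's GroupKFold (grouping by station) standing in for the paper's spatiotemporal cross-validation; the features, data, and station grouping below are synthetic assumptions, not the authors' pipeline.

```python
# Sketch: gradient-boosted correction of satellite CWV toward ground truth,
# with group-wise CV by station to limit spatial overfitting (synthetic data).
import numpy as np
from xgboost import XGBRegressor
from sklearn.model_selection import GroupKFold
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
n, n_stations = 8000, 40
station = rng.integers(0, n_stations, n)
maiac_cwv = rng.gamma(3.0, 0.6, n)                      # satellite retrieval (cm)
day_of_year = rng.integers(1, 366, n)
features = np.column_stack([maiac_cwv, day_of_year, station % 7])  # toy ancillary terms
ground_cwv = maiac_cwv * 0.95 + 0.1 * np.sin(day_of_year / 58) + rng.normal(0, 0.1, n)

rmse_raw, rmse_corrected = [], []
for train_idx, test_idx in GroupKFold(n_splits=5).split(features, ground_cwv, groups=station):
    model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
    model.fit(features[train_idx], ground_cwv[train_idx])
    pred = model.predict(features[test_idx])
    rmse_raw.append(np.sqrt(mean_squared_error(ground_cwv[test_idx], maiac_cwv[test_idx])))
    rmse_corrected.append(np.sqrt(mean_squared_error(ground_cwv[test_idx], pred)))

print(f"RMSE before correction: {np.mean(rmse_raw):.3f} cm, after: {np.mean(rmse_corrected):.3f} cm")
```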


Atmosphere ◽  
2021 ◽  
Vol 12 (8) ◽  
pp. 947
Author(s):  
R. Burciaga Valdez ◽  
Mohammad Z. Al-Hamdan ◽  
Mohammad Tabatabai ◽  
Darryl B. Hood ◽  
Wansoo Im ◽  
...  

There is a well-documented association between ambient fine particulate matter air pollution (PM2.5) and cardiovascular disease (CVD) morbidity and mortality. Exposure to PM2.5 can cause premature death and harmful and chronic health effects such as heart attack, diabetes, and stroke. The Environmental Protection Agency sets annual PM2.5 standards to reduce these negative health effects; currently, an annual average level above 12.0 µg/m³ is considered unhealthy. Methods: We examined the association of long-term exposure to PM2.5 and CVD in a cohort of 44,610 individuals who resided in 12 states and were recruited into the Southern Community Cohort Study (SCCS). The SCCS was designed to recruit Black and White participants who received care from Federally Qualified Health Centers; hence, they represent vulnerable individuals from low-income families across this vast region. This study tests whether SCCS participants who lived in locations with elevated ambient PM2.5 concentrations were more likely to report a history of CVD at enrollment (2002–2009). Remotely sensed satellite data integrated with ground monitoring data provide an assessment of the average annual PM2.5 in the urban and rural locations where the SCCS participants resided. We used multilevel logistic regression to estimate the associations between self-reported CVD and exposure to elevated ambient levels of PM2.5. Results: We found a 13.4 percent increase in the odds of reported CVD with exposure to unhealthy levels of PM2.5 at enrollment. The SCCS participants with medical histories of hypertension, hypercholesterolemia, and smoking had, overall, 385 percent higher odds of reported CVD than those without these clinical risk factors. Additionally, Black participants were more likely to live in locations with higher ambient PM2.5 concentrations and to report high levels of clinical risk factors; thus, they may be at greater future risk of CVD. Conclusions: In the SCCS participants, we found a strong relation between exposure to high ambient levels of PM2.5 and self-reported CVD at enrollment.
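As a rough, simplified stand-in for the multilevel model (not the authors' specification): a logistic regression of self-reported CVD on a binary unhealthy-PM2.5 flag and clinical risk factors, with cluster-robust standard errors by a hypothetical residential grouping, fitted on synthetic data.

```python
# Simplified stand-in for a multilevel analysis: logistic regression of
# self-reported CVD on an "unhealthy PM2.5" exposure flag plus clinical risk
# factors, with cluster-robust errors by residential area (synthetic data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 20000
df = pd.DataFrame({
    "pm25_unhealthy": rng.integers(0, 2, n),      # 1 if annual PM2.5 > 12 ug/m3 (toy flag)
    "hypertension": rng.integers(0, 2, n),
    "high_cholesterol": rng.integers(0, 2, n),
    "smoker": rng.integers(0, 2, n),
    "area_id": rng.integers(0, 500, n),           # hypothetical residential cluster
})
logit_p = (-2.5 + 0.13 * df.pm25_unhealthy + 0.6 * df.hypertension
           + 0.5 * df.high_cholesterol + 0.4 * df.smoker)
df["cvd"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = smf.logit("cvd ~ pm25_unhealthy + hypertension + high_cholesterol + smoker", data=df)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["area_id"]}, disp=False)
print(np.exp(result.params))   # odds ratios; >1 indicates higher odds of reported CVD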


Polymers ◽  
2021 ◽  
Vol 13 (11) ◽  
pp. 1768
Author(s):  
Chunhao Yang ◽  
Wuning Ma ◽  
Jianlin Zhong ◽  
Zhendong Zhang

The long-term mechanical properties of viscoelastic polymers are among their most important characteristics. In the present research, machine-learning approaches were proposed for predicting the creep properties of a polyurethane elastomer, considering the effects of creep time, creep temperature, creep stress, and the hardness of the material. The approaches are based on a multilayer perceptron network, random forest, and support vector machine regression, respectively, with a genetic algorithm and k-fold cross-validation used to tune the hyperparameters. The results showed that all three models provided excellent fits to the training set. Moreover, the three models had different prediction capabilities on the testing set depending on which factor was varied. The correlation coefficient values between the predicted and experimental strains on the testing set were larger than 0.913 (mostly larger than 0.998) when a suitable model was chosen.
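A hedged sketch of this kind of comparison, with scikit-learn's GridSearchCV over a small grid standing in for the paper's genetic-algorithm search; the data, feature ranges, and parameter grids are synthetic assumptions.

```python
# Sketch: comparing MLP, random forest, and SVR for creep-strain prediction,
# tuning hyperparameters with k-fold cross-validation (GridSearchCV here stands
# in for a genetic-algorithm search; data and ranges are synthetic).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(3)
n = 1500
X = np.column_stack([
    rng.uniform(0, 1000, n),   # creep time (h)
    rng.uniform(20, 80, n),    # temperature (deg C)
    rng.uniform(1, 10, n),     # stress (MPa)
    rng.uniform(60, 95, n),    # Shore hardness
])
y = 0.002 * X[:, 0]**0.3 * (1 + 0.01 * X[:, 1]) * X[:, 2] / X[:, 3] + rng.normal(0, 1e-3, n)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

candidates = {
    "MLP": (make_pipeline(StandardScaler(), MLPRegressor(max_iter=2000, random_state=0)),
            {"mlpregressor__hidden_layer_sizes": [(32,), (64, 32)]}),
    "RF": (RandomForestRegressor(random_state=0), {"n_estimators": [100, 300]}),
    "SVR": (make_pipeline(StandardScaler(), SVR()), {"svr__C": [1, 10, 100]}),
}
for name, (estimator, grid) in candidates.items():
    search = GridSearchCV(estimator, grid, cv=5, scoring="r2").fit(X_train, y_train)
    print(name, f"test R2 = {search.score(X_test, y_test):.3f}")
```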


2021 ◽  
Vol 3 ◽  
Author(s):  
Muhammad Kaleem ◽  
Aziz Guergachi ◽  
Sridhar Krishnan

Analysis of long-term multichannel EEG signals for automatic seizure detection is an active area of research that has seen the application of methods from different domains of signal processing and machine learning. The majority of approaches developed in this context extract hand-crafted features that are used to train a classifier for eventual seizure detection. Approaches that are data-driven, do not use hand-crafted features, and use small amounts of patients' historical EEG data for classifier training are few in number. The approach presented in this paper falls in the latter category and is based on a signal-derived empirical dictionary approach, which utilizes empirical mode decomposition (EMD) and discrete wavelet transform (DWT) based dictionaries learned using a framework inspired by traditional methods of dictionary learning. Three features associated with traditional dictionary learning approaches, namely projection coefficients, the coefficient vector, and reconstruction error, are extracted from both EMD- and DWT-based dictionaries for automated seizure detection. This is the first time these features have been applied for automatic seizure detection using an empirical dictionary approach. Small amounts of patients' historical multichannel EEG data are used for classifier training, and multiple classifiers are used for seizure detection on newer data. In addition, the seizure detection results are validated using 5-fold cross-validation to rule out any bias in the results. The CHB-MIT benchmark database containing long-term EEG recordings of pediatric patients is used to validate the approach, and seizure detection performance comparable to the state of the art is obtained. Seizure detection is performed using five classifiers, thereby allowing a comparison of the dictionary approaches, the features extracted, and the classifiers used. The best seizure detection performance is obtained using the EMD-based dictionary, the reconstruction-error feature, and a support vector machine classifier, with accuracy, sensitivity, and specificity values of 88.2%, 90.3%, and 88.1%, respectively. Comparison is also made with other recent studies using the same database. The methodology presented in this paper is shown to be computationally efficient and robust for patient-specific automatic seizure detection. A data-driven methodology utilizing a small amount of patients' historical data is hence demonstrated as a practical solution for automatic seizure detection.
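A minimal sketch of the DWT side only: PyWavelets subband energies plus a reconstruction-error feature feeding an SVM under 5-fold cross-validation. The EMD-based dictionaries and the paper's dictionary-learning framework are not reproduced here, and the EEG windows below are synthetic rather than CHB-MIT data.

```python
# Sketch of a DWT feature pipeline for seizure detection: wavelet subband
# energies and a reconstruction-error feature feed an SVM, scored with 5-fold CV.
# EEG windows and labels are synthetic placeholders.
import numpy as np
import pywt
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(4)
fs, win = 256, 4 * 256                      # 4 s windows at 256 Hz
n_windows = 400
signals = rng.normal(0, 1, (n_windows, win))
labels = rng.integers(0, 2, n_windows)      # 1 = "seizure" window (synthetic labels)
signals[labels == 1] += np.sin(2 * np.pi * 5 * np.arange(win) / fs)  # crude ictal-like rhythm

def dwt_features(x, wavelet="db4", level=4, keep=2):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    energies = [np.sum(c**2) for c in coeffs]
    # Reconstruction error when only the 'keep' coarsest subbands are retained.
    truncated = [c if i < keep else np.zeros_like(c) for i, c in enumerate(coeffs)]
    recon = pywt.waverec(truncated, wavelet)[: len(x)]
    return np.array(energies + [np.linalg.norm(x - recon)])

X = np.vstack([dwt_features(s) for s in signals])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, labels, cv=5)
print(f"5-fold CV accuracy: {scores.mean():.3f}")
```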


2018 ◽  
Vol 5 ◽  
pp. 13-30
Author(s):  
Gloria Re Calegari ◽  
Gioele Nasi ◽  
Irene Celino

Image classification is a classical task heavily studied in computer vision and widely required in many concrete scientific and industrial scenarios. Is it better to rely on human eyes, thus asking people to classify pictures, or to train a machine learning system to solve the task automatically? The answer largely depends on the specific case and the required accuracy: humans may be more reliable, especially if they are domain experts, but automatic processing can be cheaper, even if less capable of demonstrating "intelligent" behaviour. In this paper, we present an experimental comparison of different Human Computation and Machine Learning approaches to solve the same image classification task on a set of pictures used in light pollution research. We illustrate the adopted methods and the obtained results, and we compare and contrast them in order to come up with a long-term combined strategy to address the specific issue at scale: while it is hard to ensure the long-term engagement of users needed to rely exclusively on the Human Computation approach, human classification is indispensable for overcoming the "cold start" problem of automated data modelling.


Entropy ◽  
2021 ◽  
Vol 23 (12) ◽  
pp. 1672
Author(s):  
Sebastian Raubitzek ◽  
Thomas Neubauer

Measures of signal complexity, such as the Hurst exponent, the fractal dimension, and the spectrum of Lyapunov exponents, are used in time series analysis to estimate the persistence, anti-persistence, fluctuations, and predictability of the data under study. They have proven beneficial for time series prediction with machine and deep learning, indicating which features may be relevant for predicting time series and for establishing complexity features. Further, the performance of machine learning approaches can be improved by taking the complexity of the data into account, e.g., by adapting the employed algorithm to the inherent long-term memory of the data. In this article, we provide a review of complexity and entropy measures in combination with machine learning approaches. We give a comprehensive review of relevant publications that suggest using fractal or complexity-measure concepts to improve existing machine or deep learning approaches. Additionally, we evaluate applications of these concepts and examine whether they can be helpful in predicting and analyzing time series using machine and deep learning. Finally, we list six ways of combining machine learning and measures of signal complexity, as found in the literature.
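As one concrete example of such a complexity measure, a plain-NumPy rescaled-range (R/S) estimate of the Hurst exponent that could be appended to a feature set for a machine or deep learning model; the window sizes and estimator details are illustrative choices.

```python
# Minimal sketch: rescaled-range (R/S) estimate of the Hurst exponent,
# usable as an extra complexity feature for time series models.
import numpy as np

def hurst_rs(x, window_sizes=(16, 32, 64, 128, 256)):
    x = np.asarray(x, dtype=float)
    rs_values = []
    for n in window_sizes:
        segments = x[: (len(x) // n) * n].reshape(-1, n)
        rs_per_segment = []
        for seg in segments:
            dev = np.cumsum(seg - seg.mean())        # mean-adjusted cumulative sum
            r = dev.max() - dev.min()                # range of cumulative deviations
            s = seg.std()
            if s > 0:
                rs_per_segment.append(r / s)
        rs_values.append(np.mean(rs_per_segment))
    # Slope of log(R/S) versus log(window size) estimates H.
    slope, _ = np.polyfit(np.log(window_sizes), np.log(rs_values), 1)
    return slope

rng = np.random.default_rng(5)
white_noise = rng.normal(size=4096)          # expected H near 0.5
trending = np.cumsum(rng.normal(size=4096))  # strongly persistent series, H near 1
print(hurst_rs(white_noise), hurst_rs(trending))
```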


Author(s):  
Stuti Pandey ◽  
Abhay Kumar Agarwal

In the human body, the heart is the second most vital organ after the brain. Cardiovascular disease can cause long-term impairment or the death of the affected person. In medical science, proper analysis and examination of a cardiovascular disease is a crucial and sophisticated task for saving a human life. Data analytics has arisen because of the absence of sufficient practical tools for exploring the trends and unknown relationships in e-health records; it predicts and extracts information that can ease diagnosis. This survey examines cardiovascular disease prediction systems developed by different researchers. It also reviews the trend of machine learning approaches used in the past decade, together with their results. The related studies compare the performance of various classifiers on distinct datasets.
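For illustration of the kind of classifier comparison such surveys report, a small scikit-learn sketch scoring several common classifier families with 5-fold cross-validation on a synthetic stand-in for an e-health dataset (no real patient data, and not any specific study's setup).

```python
# Illustrative comparison of common classifier families, scored with
# 5-fold cross-validation on a synthetic stand-in dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=13, n_informative=8, random_state=0)
classifiers = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "k-nearest neighbours": KNeighborsClassifier(),
    "naive Bayes": GaussianNB(),
    "SVM": SVC(),
    "random forest": RandomForestClassifier(random_state=0),
}
for name, clf in classifiers.items():
    pipeline = make_pipeline(StandardScaler(), clf)
    scores = cross_val_score(pipeline, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```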


2022 ◽  
Vol 2161 (1) ◽  
pp. 012065
Author(s):  
Payal Soni ◽  
Yogya Tewari ◽  
Deepa Krishnan

Abstract. Prediction of stock prices is one of the most researched topics and gathers interest from academia and industry alike. With the emergence of Artificial Intelligence, various algorithms have been employed to predict equity market movements. Combined applications of statistics and machine learning algorithms have been designed either to predict the opening price of a stock on the very next day or to understand long-term market behaviour in the future. This paper explores the different techniques used in the prediction of share prices, from traditional machine learning and deep learning methods to neural networks and graph-based approaches. It provides a detailed analysis of the techniques employed in predicting stock prices and explores the challenges entailed, along with the future scope of work in the domain.
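As a toy illustration of the simplest setup surveyed, a lag-feature linear-regression baseline for predicting the next day's opening price on a synthetic random-walk series; real studies use richer features and models.

```python
# Toy baseline: predict the next day's opening price from lagged prices with
# plain linear regression (synthetic random-walk prices, not real market data).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(6)
prices = 100 + np.cumsum(rng.normal(0, 1, 1000))      # synthetic daily opening prices

lags = 5
X = np.column_stack([prices[i : len(prices) - lags + i] for i in range(lags)])
y = prices[lags:]                                      # next day's open
split = int(0.8 * len(X))

model = LinearRegression().fit(X[:split], y[:split])
pred = model.predict(X[split:])
print(f"test MAE: {mean_absolute_error(y[split:], pred):.2f}")
```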

