Spatial Interpolation of Air Pollutant and Meteorological Variables in Central Amazonia

Data ◽  
2021 ◽  
Vol 6 (12) ◽  
pp. 126
Author(s):  
Renato Okabayashi Miyaji ◽  
Felipe Valencia de Almeida ◽  
Lucas de Oliveira Bauer ◽  
Victor Madureira Ferrari ◽  
Pedro Luiz Pizzigatti Corrêa ◽  
...  

The Amazon Rainforest is highlighted by the global community both for its extensive vegetation cover, which constantly suffers the effects of anthropic action, and for its substantial biodiversity. This dataset presents meteorological-variable data for the Amazon Rainforest region at a spatial resolution of 0.001° in latitude and longitude, produced through an interpolation process. The original data were obtained from the GoAmazon 2014/5 project, in the Atmospheric Radiation Measurement (ARM) repository, and then processed through mathematical and statistical methods. The dataset presented here can be used in Data Science experiments, such as training models for predicting climate variables or modeling the distribution of species.
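The interpolation step can be prototyped with standard scientific-Python tools. The sketch below grids scattered station readings onto a regular 0.001° latitude/longitude grid; the input file, column names, and the choice of linear interpolation are assumptions for illustration, not the authors' exact procedure.

```python
# A minimal sketch of gridding scattered station observations onto a regular
# 0.001-degree latitude/longitude grid. The CSV file and column names are
# hypothetical placeholders for an export of GoAmazon/ARM station data.
import numpy as np
import pandas as pd
from scipy.interpolate import griddata

obs = pd.read_csv("goamazon_station_obs.csv")          # hypothetical input file
points = obs[["lon", "lat"]].to_numpy()
values = obs["temperature"].to_numpy()                 # any meteorological variable

# Regular 0.001-degree target grid spanning the observation bounding box.
lon_g = np.arange(obs["lon"].min(), obs["lon"].max(), 0.001)
lat_g = np.arange(obs["lat"].min(), obs["lat"].max(), 0.001)
lon2d, lat2d = np.meshgrid(lon_g, lat_g)

# Linear interpolation inside the convex hull; nearest-neighbour fill outside it.
grid = griddata(points, values, (lon2d, lat2d), method="linear")
fill = griddata(points, values, (lon2d, lat2d), method="nearest")
grid = np.where(np.isnan(grid), fill, grid)
```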

2018 ◽  
Vol 18 (12) ◽  
pp. 9121-9145 ◽  
Author(s):  
Die Wang ◽  
Scott E. Giangrande ◽  
Mary Jane Bartholomew ◽  
Joseph Hardin ◽  
Zhe Feng ◽  
...  

Abstract. This study summarizes the precipitation properties collected during the GoAmazon2014/5 campaign near Manaus in central Amazonia, Brazil. Precipitation breakdowns, summary radar rainfall relationships and self-consistency concepts from coupled disdrometer and radar wind profiler measurements are presented. The properties of Amazon cumulus and associated stratiform precipitation are discussed, including segregations according to seasonal (wet or dry regime) variability, cloud echo-top height and possible aerosol influences on the apparent oceanic characteristics of the precipitation drop size distributions. Overall, we observe that the Amazon precipitation straddles behaviors found during previous U.S. Department of Energy Atmospheric Radiation Measurement (ARM) program tropical deployments, with distributions favoring higher concentrations of smaller drops than ARM continental examples. Oceanic-type precipitation characteristics are predominantly observed during the Amazon wet seasons. An exploration of the controls on wet season precipitation properties reveals that wind direction, compared with other standard radiosonde thermodynamic parameters or aerosol count/regime classifications performed at the ARM site, provides a good indicator for those wet season Amazon events having an oceanic character for their precipitation drop size distributions.
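Radar rainfall relationships of the kind summarized here are commonly expressed as a Z–R power law, Z = aR^b, fitted from coincident disdrometer and radar observations. The sketch below shows one such fit; the numeric values are hypothetical placeholders, not campaign data.

```python
# A minimal sketch of fitting a reflectivity-rain rate (Z-R) power law,
# Z = a * R**b. The paired rain rates and reflectivities below are
# hypothetical stand-ins for coincident disdrometer/radar samples.
import numpy as np

rain_rate = np.array([1.2, 3.5, 8.0, 15.0, 30.0, 55.0])            # mm/h (hypothetical)
reflectivity_dbz = np.array([28.0, 34.0, 40.0, 44.0, 48.0, 52.0])  # dBZ (hypothetical)

# Work in linear units, Z [mm^6 m^-3] = 10**(dBZ/10), then fit
# log10(Z) = log10(a) + b * log10(R) by least squares.
z_linear = 10.0 ** (reflectivity_dbz / 10.0)
b, log_a = np.polyfit(np.log10(rain_rate), np.log10(z_linear), 1)
a = 10.0 ** log_a
print(f"Fitted Z-R relation: Z = {a:.1f} * R^{b:.2f}")
```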


2020 ◽  
Author(s):  
Marisel Villafañe-Delgado ◽  
Erik C. Johnson ◽  
Marisa Hughes ◽  
Martha Cervantes ◽  
William Gray-Roncal

Educating the workforce of tomorrow is an increasingly critical challenge for areas such as data science, machine learning, and artificial intelligence. These core skills may revolutionize progress in areas such as health care and precision medicine, autonomous systems and robotics, and neuroscience. Skills in data science and artificial intelligence are in high demand in industrial research and development, but we do not believe that traditional recruiting and training models in industry (e.g., internships, continuing education) are serving the needs of the diverse populations of students who will be required to revolutionize these fields. Our program, the Cohort-based Integrated Research Community for Undergraduate Innovation and Trailblazing (CIRCUIT), targets trailblazing, high-achieving students who face barriers to achieving their goals and becoming leaders in data science, machine learning, and artificial intelligence research. Traditional recruitment practices often miss these ambitious and talented students from nontraditional backgrounds, and these students are at a higher risk of not persisting in research careers. In the CIRCUIT program we recruit holistically, selecting students on the basis of their commitment, potential, and need. We designed a training and support model for our internship, consisting of a compressed data science and machine learning curriculum, a series of professional development workshops, and a team-based robotics challenge. These activities develop the skills these trailblazing students will need to contribute to the dynamic engineering teams of the future.


2021 ◽  
Vol 8 (2) ◽  
pp. 118-125
Author(s):  
Shiwei Yang ◽  
Ashardi Abas

As the country implements its big data strategy and accelerates the construction of a digital China, data science has entered a new and dynamic era, and the demand for data science talent in all walks of life is increasing. Many talent training institutions have added undergraduate programs or degrees in data science, but it remains unclear whether these meet the needs of social and economic development. This article aims to improve the quality and adaptability of data science talent training by conducting an in-depth analysis of the demand for data science talent. The approach rests on data mining: postings describing the demand for data science talent are crawled from recruitment websites, and their core content is analyzed through network relationship visualization, machine learning methods, and a text topic-word extraction model. The result is a comprehensive picture of the demand for data science talent and a reference for talent training units formulating data science talent training models.
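A pipeline of this kind can be sketched with common Python libraries. The example below downloads a listing page and ranks skill terms with TF-IDF; the URL, the CSS selector, and the use of TF-IDF (rather than the authors' specific topic-word extraction model) are illustrative assumptions, and a real crawler must respect the site's terms of use.

```python
# A minimal sketch: collect job postings and surface the most salient terms.
import requests
from bs4 import BeautifulSoup
from sklearn.feature_extraction.text import TfidfVectorizer

def fetch_postings(url: str) -> list[str]:
    """Download one listing page and return the text of each posting block."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    # "div.job-description" is a placeholder selector for the posting text.
    return [node.get_text(" ", strip=True) for node in soup.select("div.job-description")]

postings = fetch_postings("https://example-jobs.com/search?q=data+scientist")  # placeholder URL

# Rank the terms that characterize demand for data-science talent.
vectorizer = TfidfVectorizer(max_features=50, stop_words="english", ngram_range=(1, 2))
tfidf = vectorizer.fit_transform(postings)
top_terms = sorted(zip(vectorizer.get_feature_names_out(), tfidf.sum(axis=0).A1),
                   key=lambda t: t[1], reverse=True)[:15]
print(top_terms)
```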


2018 ◽  
Author(s):  
Brian J. Cox

Abstract. The advent of text mining and natural-text-reading artificial intelligence has opened new research opportunities on the large collections of research publications available through journals and other resources. These systems have begun to identify novel connections or hypotheses thanks to an ability to read and extract information from more literature than a single individual could in a lifetime. Most research publications contain figures where data are represented in a graph. Modern publication guidelines strongly encourage publishing graphs in which all data are displayed, as opposed to summary figures such as bar charts. Figures are often encoded in a graphing language that is interpreted and displayed as graphics. Converting figures in publications back to this underlying code should enable text-based mining to extract the raw data of the graph. Here I show that data from publications more than 15 years old that contain time series data on human patients are extractable from the original publication and can be reassessed using modern tools. This could benefit cases where data sets are not available due to file loss or corruption. It may also create an issue for the publication of human data, as sharing of human data often requires research ethics approval. Author summary: Figures embedded in published research manuscripts are a minable resource, similar to text mining. Figures are text-based code that draws the image; as such, the underlying text of the code can be used to reassemble the original data set.
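When a figure survives as text-based vector graphics (for example, SVG extracted from a PDF), the marker coordinates can be read directly from the markup. The sketch below is one illustrative way to do this; the file name, the assumption that data points are <circle> elements, and the axis-calibration helper are all hypothetical.

```python
# A minimal sketch of recovering plotted points from a text-based vector figure.
import xml.etree.ElementTree as ET

tree = ET.parse("figure1.svg")                 # hypothetical vector figure extracted from a PDF
root = tree.getroot()
ns = {"svg": "http://www.w3.org/2000/svg"}

# Plotted markers are often <circle> elements; their centres carry the data positions (in pixels).
points = [(float(c.get("cx")), float(c.get("cy")))
          for c in root.findall(".//svg:circle", ns)]

def to_data(px, p0_px, p0_val, p1_px, p1_val):
    """Convert a pixel coordinate to data units using two known axis reference
    points (e.g. tick positions and their labelled values), read off the figure."""
    return p0_val + (px - p0_px) * (p1_val - p0_val) / (p1_px - p0_px)
```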


2021 ◽  
Author(s):  
Thomas Pliemon ◽  
Ulrich Foelsche ◽  
Christian Rohr ◽  
Christian Pfister

Based on copies of the original data (source: Oeschger Center for Climate Change Research), we perform climate reconstructions for Paris between 1665 and 1709. The focus lies on the following meteorological variables: temperature, cloudiness, direction of movement of the clouds, precipitation and humidity. Apart from humidity, these meteorological variables were recorded three times a day over the entire period by Louis Morin. Temperature and humidity were measured with instruments, whereas cloud cover, direction of movement of the clouds and precipitation were recorded in a descriptive manner. In addition to the quantitative temperature measurements, the additional meteorological variables allow conclusions about synoptic air movements over Europe. The Late Maunder Minimum is characterised by cold winters and moderate summers. Winter is characterised by a lower frequency of westerly direction of movement of the clouds. This reduction of advection from the ocean leads to cooling in Paris and also to less precipitation in winter. This can be seen very strongly between the last decade of the 17th century (cold) and the first decade of the 18th century (warm). A lower frequency of westerly cloud movement can also be seen in summer, although the influence is stronger in winter; in summer, this reduction leads to moderate to warm temperatures. Thus, the unusually cold winters of the Late Maunder Minimum can be attributed to a lower frequency of westerly direction of movement of the clouds.
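The seasonal frequency analysis behind these statements can be illustrated in a few lines of pandas; the DataFrame below is a hypothetical stand-in for the digitized Morin observations, not the actual dataset.

```python
# A minimal sketch: share of days with westerly cloud movement, by season.
import pandas as pd

obs = pd.DataFrame({
    "date": pd.to_datetime(["1693-01-05", "1693-01-06", "1693-07-10", "1702-01-08"]),
    "cloud_direction": ["W", "NE", "SW", "W"],   # recorded direction of movement of the clouds
})

westerly = {"W", "SW", "NW"}                     # assumed definition of "westerly"
obs["is_westerly"] = obs["cloud_direction"].isin(westerly)
obs["season"] = obs["date"].dt.month.map(
    lambda m: "winter" if m in (12, 1, 2) else "summer" if m in (6, 7, 8) else "other")

print(obs.groupby("season")["is_westerly"].mean())   # westerly fraction per season
```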


2018 ◽  
Vol 53 (3) ◽  
pp. 118-132
Author(s):  
Danielle E. Mitchell ◽  
K. Wayne Forsythe ◽  
Chris H. Marvin ◽  
Debbie A. Burniston

Abstract. Spatial interpolation methods translate sediment contamination point data into informative area-based visualizations. Lake Erie was first sampled in 1971 based on a survey grid of 263 locations. Due to procedural costs, the 2014 survey was reduced to 34 sampling locations, mostly located in deep offshore regions of the lake. Using the 1971 dataset, this study identifies the minimum sampling density at which statistically valid and spatially accurate predictions can be made using ordinary kriging. Randomly down-sampled subsets at 10% intervals of the 1971 survey were created to include at least one set of data points with a smaller sample size than that of the 2014 dataset. Regression analyses of predicted contamination values assessed spatial autocorrelation between kriged surfaces created from the down-sampled subsets and the original dataset. Subsets at 10% and 20% of the original data density accurately predicted 51% and 75% (respectively) of the original dataset's predictions. Subsets representing 70%, 80% and 90% of the original data density accurately predicted 88%, 90% and 97% of the original dataset's predictions. Although all subsets proved to be statistically valid, sampling densities below 0.002 locations/km² are likely to create very generalized contamination maps from which environmental decisions might not be justified.
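The down-sampling experiment can be prototyped with PyKrige. The sketch below kriges a full synthetic point set and a random 20% subset, then regresses the subset's predictions against the full-survey predictions; the coordinates, concentrations, grid spacing, and spherical variogram are assumptions for illustration, not the study's data or settings.

```python
# A minimal sketch of the kriging down-sampling comparison.
import numpy as np
from pykrige.ok import OrdinaryKriging
from scipy.stats import linregress

rng = np.random.default_rng(0)
x, y = rng.uniform(0, 100, 263), rng.uniform(0, 60, 263)   # 263 sampling locations (arbitrary units)
z = rng.lognormal(mean=1.0, sigma=0.5, size=263)           # synthetic contaminant concentrations

gridx, gridy = np.arange(0, 100, 2.0), np.arange(0, 60, 2.0)

def krige(xs, ys, zs):
    """Ordinary kriging onto the common prediction grid."""
    ok = OrdinaryKriging(xs, ys, zs, variogram_model="spherical")
    surface, _ = ok.execute("grid", gridx, gridy)
    return np.asarray(surface).ravel()

full = krige(x, y, z)
keep = rng.choice(263, size=int(0.2 * 263), replace=False)  # random 20% subset
subset = krige(x[keep], y[keep], z[keep])

# R^2 of the regression between subset-based and full-survey predictions.
print(linregress(subset, full).rvalue ** 2)
```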


10.2196/24388 ◽  
2020 ◽  
Vol 5 (1) ◽  
pp. e24388
Author(s):  
Rado Kotorov ◽  
Lianhua Chi ◽  
Min Shen

Background: Due to the COVID-19 pandemic, the demand for remote electrocardiogram (ECG) monitoring has increased drastically in an attempt to prevent the spread of the virus and keep vulnerable individuals with less severe cases out of hospitals. Enabling clinicians to set up remote patient ECG monitoring easily and determining how to classify the ECG signals accurately so relevant alerts are sent in a timely fashion is an urgent problem to be addressed for remote patient monitoring (RPM) to be adopted widely. Hence, a new technique is required to enable routine and widespread use of RPM, as is needed due to COVID-19. Objective: The primary aim of this research is to create a robust and easy-to-use solution for personalized ECG monitoring in real-world settings that is precise, easily configurable, and understandable by clinicians. Methods: In this paper, we propose a Personalized Monitoring Model (PMM) for ECG data based on motif discovery. Motif discovery finds meaningful or frequently recurring patterns in patient ECG readings. The main strategy is to use motif discovery to extract a small sample of personalized motifs for each individual patient and then use these motifs to predict abnormalities in real-time readings of that patient using an artificial logical network configured by a physician. Results: Our approach was tested on 30 minutes of ECG readings from 32 patients. The average diagnostic accuracy of the PMM was always above 90% and reached 100% for some parameters, compared to 80% accuracy for the Generalized Monitoring Models (GMM). Regardless of parameter settings, PMM training models were generated within 3-4 minutes, compared to 1 hour (or longer, with increasing amounts of training data) for the GMM. Conclusions: Our proposed PMM almost eliminates many of the training and small sample issues associated with GMMs. It also addresses accuracy and computational cost issues of the GMM, caused by the uniqueness of heartbeats and training issues. In addition, it addresses the fact that doctors and nurses typically do not have data science training and the skills needed to configure, understand, and even trust existing black box machine learning models.
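Motif discovery on a raw ECG trace can be illustrated with a matrix-profile computation (here via the stumpy library). This is a generic way to locate a frequently recurring beat-length pattern, not the paper's Personalized Monitoring Model itself; the signal, sampling rate, and window length below are synthetic assumptions.

```python
# A minimal sketch of locating a recurring motif in a single-lead ECG-like signal.
import numpy as np
import stumpy

fs = 250                                     # assumed sampling rate (Hz)
t = np.arange(0, 30, 1 / fs)                 # 30 seconds of signal
ecg = np.sin(2 * np.pi * 1.2 * t) + 0.05 * np.random.randn(t.size)  # toy quasi-periodic trace

m = int(0.8 * fs)                            # motif window of roughly one beat
profile = stumpy.stump(ecg, m)               # matrix profile: distance to each window's nearest match

motif_idx = int(np.argmin(profile[:, 0].astype(float)))  # window with the closest repeat
neighbor_idx = int(profile[motif_idx, 1])                 # where that repeat occurs
print(f"Motif at sample {motif_idx}, nearest recurrence at sample {neighbor_idx}")
```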


2019 ◽  
Author(s):  
Miquel Tomas-Burguera ◽  
Sergio M. Vicente-Serrano ◽  
Santiago Beguería ◽  
Fergus Reig ◽  
Borja Latorre

Abstract. Obtaining climate grids for distinct variables is of high importance for developing better climate studies, but also for offering usable products to other researchers and to end users. As a measure of atmospheric evaporative demand (AED), reference evapotranspiration (ETo) is a key variable for understanding both the terrestrial water and energy balances, making it important for climatology, hydrology and agronomy. In spite of its importance, the calculation of ETo is not very common, mainly because data for a large number of climate variables are required and some of them are not commonly available. To solve this problem, a strategy based on the spatial interpolation of climate variables prior to the calculation of ETo with the FAO-56 Penman-Monteith equation was followed to obtain an ETo database for continental Spain and the Balearic Islands covering the 1961–2014 period at a spatial resolution of 1.1 km and at weekly temporal resolution. In this database, values for the radiative and aerodynamic components, as well as the estimated uncertainty associated with ETo, are also provided. The database is available for download in Network Common Data Form (netCDF) format at https://doi.org/10.20350/digitalCSIC/8615 (Tomas-Burguera et al., 2019), and a map visualization tool (http://speto.csic.es) is also available to help users download data for a specific point in comma-separated values (CSV) format. A relevant number of research areas could take advantage of this database. To give only some examples: i) the study of the Budyko curve, which relates rainfall data with evapotranspiration and AED at the watershed scale; ii) the calculation of drought indices that use AED data, such as the SPEI or PDSI; iii) agroclimatic studies related to irrigation requirements; iv) validation of the water and energy balance of climate models; v) the study of the impacts of climate change on AED.
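The per-cell calculation behind the database is the standard FAO-56 Penman-Monteith reference evapotranspiration equation. The sketch below is a minimal daily-timestep implementation of that formula; the function name and the example input values are illustrative, not taken from the database.

```python
# A minimal sketch of the FAO-56 Penman-Monteith reference evapotranspiration equation.
import math

def fao56_eto(t_mean, rn, g, u2, ea, elevation):
    """Daily ETo (mm/day). t_mean in deg C, rn and g in MJ m-2 day-1,
    u2 wind speed at 2 m in m/s, ea actual vapour pressure in kPa, elevation in m."""
    es = 0.6108 * math.exp(17.27 * t_mean / (t_mean + 237.3))        # saturation vapour pressure (kPa)
    delta = 4098 * es / (t_mean + 237.3) ** 2                        # slope of the vapour pressure curve
    pressure = 101.3 * ((293 - 0.0065 * elevation) / 293) ** 5.26    # atmospheric pressure (kPa)
    gamma = 0.000665 * pressure                                      # psychrometric constant (kPa/degC)
    num = 0.408 * delta * (rn - g) + gamma * (900 / (t_mean + 273)) * u2 * (es - ea)
    return num / (delta + gamma * (1 + 0.34 * u2))

# Example with illustrative inputs only.
print(fao56_eto(t_mean=25.0, rn=18.0, g=0.0, u2=2.0, ea=2.1, elevation=300.0))
```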

