The impact of conventional dietary intake data coding methods on foods typically consumed by low-income African-American and White urban populations

2014 · Vol 18 (11) · pp. 1922-1931
Author(s): Marc A Mason, Marie Fanelli Kuczmarski, Deanne Allegro, Alan B Zonderman, Michele K Evans

Abstract. Objective: Analysing dietary data to capture how individuals typically consume foods is dependent on the coding variables used. Individual foods consumed simultaneously, like coffee with milk, are given codes to identify these combinations. Our literature review revealed a lack of discussion about using combination codes in analysis. The present study identified foods consumed at mealtimes and by race when combination codes were or were not utilized. Design: Duplicate analysis methods were performed on separate data sets. The original data set consisted of all foods reported; each food was coded as if it was consumed individually. The revised data set was derived from the original data set by first isolating coded foods consumed as individual items from those foods consumed simultaneously and assigning a code to designate a combination. Foods assigned a combination code, like pancakes with syrup, were aggregated and associated with a food group, defined by the major food component (i.e. pancakes), and then appended to the isolated coded foods. Setting: Healthy Aging in Neighborhoods of Diversity across the Life Span study. Subjects: African-American and White adults with two dietary recalls (n 2177). Results: Differences existed in the lists of foods most frequently consumed by mealtime and race when comparing results based on the original and revised data sets. African Americans reported consumption of sausage/luncheon meat and poultry, while ready-to-eat cereals and cakes/doughnuts/pastries were reported by Whites on recalls. Conclusions: Use of combination codes provided a more accurate representation of how foods were consumed by populations. This information is beneficial when creating interventions and exploring diet–health relationships.
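
To make the combination-code approach concrete, below is a minimal sketch in Python/pandas of how reported foods might be split into individually coded items and combinations aggregated under their major food component. The table, the column names (`combo_id`, `gram_weight`, etc.) and the "heaviest item defines the major component" rule are illustrative assumptions, not the study's actual coding scheme.

```python
import pandas as pd

# Hypothetical recall records: one row per reported food item. combo_id is
# empty for foods eaten alone and shared by foods reported together
# (e.g. pancakes with syrup).
recalls = pd.DataFrame({
    "subject_id":  [1, 1, 1, 2, 2],
    "meal":        ["breakfast", "breakfast", "lunch", "breakfast", "breakfast"],
    "food":        ["pancakes", "syrup", "apple", "coffee", "milk"],
    "food_group":  ["grain", "sweets", "fruit", "beverage", "dairy"],
    "gram_weight": [120.0, 30.0, 150.0, 240.0, 30.0],
    "combo_id":    [10, 10, None, 11, 11],
})

# Foods reported as individual items keep their own codes.
individual = recalls[recalls["combo_id"].isna()].copy()

# Foods sharing a combo_id are collapsed into one record whose food group is
# taken from the major component (here, simply the heaviest item in the combo).
combo_rows = []
for combo_id, group in recalls.dropna(subset=["combo_id"]).groupby("combo_id"):
    major = group.loc[group["gram_weight"].idxmax()]
    combo_rows.append({
        "subject_id": major["subject_id"],
        "meal": major["meal"],
        "food": " with ".join(group["food"]),
        "food_group": major["food_group"],        # defined by the major component
        "gram_weight": group["gram_weight"].sum(),
        "combo_id": combo_id,
    })

# Revised data set: individually coded foods plus the collapsed combinations.
revised = pd.concat([individual, pd.DataFrame(combo_rows)], ignore_index=True)
print(revised)
```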

2020 · Vol 124 (2) · pp. 189-198
Author(s): Liangzi Zhang, Hendriek Boshuizen, Marga Ocké

Abstract. Technology advancements have driven the use of self-administered dietary assessment methods in large-scale dietary surveys. Interviewer-assisted methods generally have a complicated recipe recording procedure enabling the adjustment of a standard recipe. In order to decide if this functionality can be omitted for self-administered dietary assessment, this study aimed to assess the extent of standard recipe modifications in the Dutch National Food Consumption Survey and measure the impact on the food group and nutrient intake distributions of the population when the modifications were disregarded. A two-scenario simulation analysis was conducted. Firstly, the individual recipe scenario omitted the full modifications to the standard recipes made by people who knew their recipes. Secondly, the modified recipe scenario omitted the modifications made by those who partially modified the standard recipe due to their limited knowledge. The weighted percentage differences for the nutrient and food group intake distributions between the scenarios and the original data set were calculated. The highest percentage of energy consumed through mixed dishes was 10 % for females aged 19–79 years. Comparing the combined scenario and the original data set, the average of the absolute percentage difference for the population mean intakes was 1·6 % across all food groups and 0·6 % for nutrients. The soup group (−6·6 %) and DHA (−2·3 %) showed the largest percentage differences. The recipe simplification caused a slight underestimation of the consumed amount of both foods (−0·2 %) and nutrients (−0·4 %). These results are promising for developing self-administered 24-hour recalls or food diary applications without a complex recipe function.
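
As a rough illustration of the comparison described above, the sketch below computes a weighted percentage difference between a simulated scenario and the original data set for the mean intake of one food group. The survey weights, intake values and column names are hypothetical; the actual survey used its own weighting and food grouping.

```python
import numpy as np
import pandas as pd

# Hypothetical per-person mean intakes (g/d) of one food group under the
# original recipe coding and under a simplified-recipe scenario, with survey weights.
intakes = pd.DataFrame({
    "survey_weight": [1.2, 0.8, 1.0, 1.5],
    "original":      [250.0, 180.0, 300.0, 210.0],
    "scenario":      [240.0, 185.0, 290.0, 205.0],
})

orig_mean = np.average(intakes["original"], weights=intakes["survey_weight"])
scen_mean = np.average(intakes["scenario"], weights=intakes["survey_weight"])

# Percentage difference of the scenario relative to the original data set,
# analogous to the food-group and nutrient comparisons reported above.
pct_diff = 100.0 * (scen_mean - orig_mean) / orig_mean
print(f"weighted mean intake difference: {pct_diff:+.1f} %")
```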


2021 · Vol 8 (1)
Author(s): Yahya Albalawi, Jim Buckley, Nikola S. Nikolov

Abstract. This paper presents a comprehensive evaluation of data pre-processing and word embedding techniques in the context of Arabic document classification in the domain of health-related communication on social media. We evaluate 26 text pre-processing techniques applied to Arabic tweets within the process of training a classifier to identify health-related tweets. For this task we use the (traditional) machine learning classifiers KNN, SVM, Multinomial NB and Logistic Regression. Furthermore, we report experimental results with the deep learning architectures BLSTM and CNN for the same text classification problem. Since word embeddings are more typically used as the input layer in deep networks, in the deep learning experiments we evaluate several state-of-the-art pre-trained word embeddings with the same text pre-processing applied. To achieve these goals, we use two data sets: one for both training and testing, and another for testing the generality of our models only. Our results point to the conclusion that only four out of the 26 pre-processing techniques improve the classification accuracy significantly. For the first data set of Arabic tweets, we found that Mazajak CBOW pre-trained word embeddings as the input to a BLSTM deep network led to the most accurate classifier, with an F1 score of 89.7%. For the second data set, Mazajak Skip-Gram pre-trained word embeddings as the input to a BLSTM led to the most accurate model, with an F1 score of 75.2% and an accuracy of 90.7%, compared to an F1 score of 90.8% achieved by Mazajak CBOW for the same architecture but with a lower accuracy of 70.89%. Our results also show that the performance of the best of the traditional classifiers we trained is comparable to that of the deep learning methods on the first data set, but significantly worse on the second data set.
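
For orientation, here is a minimal sketch of one of the traditional baselines named above (TF-IDF features feeding logistic regression), evaluated with an F1 score. The toy tweets and labels are invented, and the sketch does not reproduce the 26 pre-processing techniques or the Mazajak-embedding BLSTM/CNN models from the study.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Toy labelled tweets: 1 = health-related, 0 = not health-related.
tweets = [
    "تطعيم الأطفال مهم للوقاية من الأمراض",   # child vaccination is important
    "مباراة كرة القدم الليلة",                # tonight's football match
    "نصائح للتعامل مع ارتفاع ضغط الدم",       # tips for managing high blood pressure
    "أفضل الأماكن السياحية في الصيف",         # best summer travel destinations
]
labels = [1, 0, 1, 0]

X_train, X_test, y_train, y_test = train_test_split(
    tweets, labels, test_size=0.5, random_state=0, stratify=labels)

# Traditional baseline: word n-gram TF-IDF features + logistic regression.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)
print("F1:", f1_score(y_test, clf.predict(X_test)))
```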


2015 · Vol 8 (1) · pp. 421-434
Author(s): M. P. Jensen, T. Toto, D. Troyan, P. E. Ciesielski, D. Holdridge, ...

Abstract. The Midlatitude Continental Convective Clouds Experiment (MC3E) took place during the spring of 2011, centered in north-central Oklahoma, USA. The main goal of this field campaign was to capture the dynamical and microphysical characteristics of precipitating convective systems in the US Central Plains. A major component of the campaign was a six-site radiosonde array designed to capture the large-scale variability of the atmospheric state with the intent of deriving model forcing data sets. Over the course of the 46-day MC3E campaign, a total of 1362 radiosondes were launched from the enhanced sonde network. This manuscript provides details on the instrumentation used as part of the sounding array, the data processing activities, including quality checks and humidity bias corrections, and an analysis of the impacts of bias correction and algorithm assumptions on the determination of convective levels and indices. It is found that corrections for known radiosonde humidity biases and assumptions regarding the characteristics of the surface convective parcel result in significant differences in the derived values of convective levels and indices in many soundings. In addition, the impact of including the humidity corrections and quality controls on the thermodynamic profiles that are used in the derivation of a large-scale model forcing data set is investigated. The results show a significant impact on the derived large-scale vertical velocity field, illustrating the importance of addressing these humidity biases.
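
As a toy illustration of why humidity corrections matter for derived convective indices, the sketch below computes surface-based CAPE from a synthetic sounding before and after a simple multiplicative relative-humidity adjustment. It assumes the MetPy library; the sounding values and the flat 5 % scaling are invented, whereas the campaign's actual corrections are instrument-specific.

```python
import numpy as np
import metpy.calc as mpcalc
from metpy.units import units

# Synthetic sounding (illustrative values only).
p  = np.array([1000, 925, 850, 700, 500, 400, 300, 250, 200]) * units.hPa
T  = np.array([  28,  24,  20,  10,  -8, -18, -32, -42, -52]) * units.degC
rh = np.array([  70,  65,  60,  50,  40,  35,  30,  30,  30]) * units.percent

def surface_cape(rel_hum):
    """Surface-based CAPE for a given relative-humidity profile."""
    dewpoint = mpcalc.dewpoint_from_relative_humidity(T, rel_hum)
    cape, _cin = mpcalc.surface_based_cape_cin(p, T, dewpoint)
    return cape

# A crude stand-in for a dry-bias correction: scale RH up by 5 % and cap at 100 %.
rh_corrected = np.minimum(rh.magnitude * 1.05, 100) * units.percent

print("CAPE, uncorrected RH:", surface_cape(rh))
print("CAPE, corrected RH:  ", surface_cape(rh_corrected))
```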


2021
Author(s): David Cotton

Introduction

HYDROCOASTAL is a two-year project funded by ESA, with the objective to maximise exploitation of SAR and SARin altimeter measurements in the coastal zone and inland waters, by evaluating and implementing new approaches to process SAR and SARin data from CryoSat-2, and SAR altimeter data from Sentinel-3A and Sentinel-3B. Optical data from the Sentinel-2 MSI and Sentinel-3 OLCI instruments will also be used in generating River Discharge products.

New SAR and SARin processing algorithms for the coastal zone and inland waters will be developed, implemented and evaluated through an initial Test Data Set for selected regions. From the results of this evaluation a processing scheme will be implemented to generate global coastal zone and river discharge data sets.

A series of case studies will assess these products in terms of their scientific impacts.

All the produced data sets will be available on request to external researchers, and full descriptions of the processing algorithms will be provided.

Objectives

The scientific objectives of HYDROCOASTAL are to enhance our understanding of interactions between inland waters and the coastal zone, between the coastal zone and the open ocean, and the small-scale processes that govern these interactions. The project also aims to improve our capability to characterize the variation at different time scales of inland water storage, exchanges with the ocean and the impact on regional sea-level changes.

The technical objectives are to develop and evaluate new SAR and SARin altimetry processing techniques in support of the scientific objectives, including stack processing, filtering and retracking. An improved Wet Troposphere Correction will also be developed and evaluated.

Project Outline

There are four tasks to the project:
- Scientific Review and Requirements Consolidation: review the current state of the art in SAR and SARin altimeter data processing as applied to the coastal zone and to inland waters.
- Implementation and Validation: new processing algorithms will be implemented to generate a Test Data Set, which will be validated against models, in-situ data and other satellite data sets. Selected algorithms will then be used to generate global coastal zone and river discharge data sets.
- Impacts Assessment: the impact of these global products will be assessed in a series of case studies.
- Outreach and Roadmap: outreach material will be prepared and distributed to engage with the wider scientific community, and recommendations will be provided for the development of future missions and future research.

Presentation

The presentation will provide an overview of the project, present the different SAR altimeter processing algorithms that are being evaluated in the first phase of the project, and show early results from the evaluation of the initial test data set.


Author(s): Danlei Xu, Lan Du, Hongwei Liu, Penghui Wang

A Bayesian classifier for sparsity-promoting feature selection is developed in this paper, in which a set of nonlinear mappings of the original data is applied as a pre-processing step. The linear classification model with such mappings from the original input space to a nonlinear transformation space can not only construct a nonlinear classification boundary, but also realize feature selection for the original data. A zero-mean Gaussian prior with Gamma precision and a finite approximation of the Beta process prior are used to promote sparsity in the utilization of features and nonlinear mappings in our model, respectively. We derive the Variational Bayesian (VB) inference algorithm for the proposed linear classifier. Experimental results based on a synthetic data set, a measured radar data set, a high-dimensional gene expression data set and several benchmark data sets demonstrate the aggressive and robust feature selection capability and the comparable classification accuracy of our method compared with some other existing classifiers.
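
A sketch, in my own notation rather than the paper's, of the kind of hierarchy the abstract describes: a Gaussian-Gamma prior shrinks the weights over the nonlinear mappings, while a finite Beta-Bernoulli approximation of the Beta process switches individual mappings on or off; variational Bayes then approximates the posterior over all latent variables.

```latex
% Illustrative sparsity-promoting hierarchy; x_n is an input, \phi(\cdot) a
% vector of K nonlinear mappings, and t_n \in \{0,1\} the class label.
\begin{align}
  t_n \mid \mathbf{w}, \mathbf{z}
      &\sim \mathrm{Bernoulli}\!\left(\sigma\!\big(\mathbf{w}^{\top}
            (\mathbf{z} \odot \phi(\mathbf{x}_n))\big)\right), \\
  w_k \mid \alpha_k &\sim \mathcal{N}\!\left(0, \alpha_k^{-1}\right), \qquad
  \alpha_k \sim \mathrm{Gamma}(a_0, b_0), \\
  z_k \mid \pi_k &\sim \mathrm{Bernoulli}(\pi_k), \qquad
  \pi_k \sim \mathrm{Beta}\!\left(\tfrac{c_0}{K},\; d_0\tfrac{K-1}{K}\right).
\end{align}
```

As K grows, the Beta-Bernoulli pair approaches a Beta process, so only a small subset of mappings (and hence of original features) is expected to remain active a posteriori.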


2009 · Vol 2 (1) · pp. 87-98
Author(s): C. Lerot, M. Van Roozendael, J. van Geffen, J. van Gent, C. Fayt, ...

Abstract. Total O3 columns have been retrieved from six years of SCIAMACHY nadir UV radiance measurements using SDOAS, an adaptation of the GDOAS algorithm previously developed at BIRA-IASB for the GOME instrument. GDOAS and SDOAS have been implemented by the German Aerospace Center (DLR) in version 4 of the GOME Data Processor (GDP) and in version 3 of the SCIAMACHY Ground Processor (SGP), respectively. The processors are being run at the DLR processing centre on behalf of the European Space Agency (ESA). We first focus on the description of the SDOAS algorithm, with particular attention to the impact of uncertainties in the reference O3 absorption cross-sections. Second, the resulting SCIAMACHY total ozone data set is globally evaluated through large-scale comparisons with results from GOME and OMI as well as with ground-based correlative measurements. The various total ozone data sets are found to agree within 2% on average. However, a negative trend of 0.2–0.4% per year has been identified in the SCIAMACHY O3 columns; this probably originates from instrumental degradation effects that have not yet been fully characterized.
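
To illustrate how such a drift can be quantified, the sketch below fits an ordinary least-squares line to a synthetic monthly time series of relative differences between two total-ozone data sets; the slope is the drift in percent per year. The data are generated for illustration and are not SCIAMACHY measurements.

```python
import numpy as np

# Synthetic monthly relative differences (%) between two total-ozone data sets
# over six years, with an imposed -0.3 %/year drift plus noise.
rng = np.random.default_rng(0)
years = np.arange(72) / 12.0                      # six years of monthly means
rel_diff = -0.3 * years + rng.normal(0.0, 0.5, years.size)

# Ordinary least-squares fit: the slope estimates the drift in % per year.
slope, intercept = np.polyfit(years, rel_diff, 1)
print(f"estimated drift: {slope:+.2f} %/year")
```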


2018 · Vol 39 (2) · pp. 329-358
Author(s): Colin Provost, Brian J. Gerber

Abstract. Environmental justice (EJ) has represented an important equity challenge in policymaking for decades. President Clinton's executive order (EO) 12898 in 1994 represented a significant federal action, requiring agencies to account for EJ issues in new rulemakings. We examine the impact of EO 12898 within the larger question of how EOs are implemented in complex policymaking. We argue that presidential preferences will affect bureaucratic responsiveness and fire-alarm oversight. However, EJ policy complexity produces uncertainty, leading to bureaucratic risk aversion and constraining presidential efforts to steer policy. We utilise an original data set of nearly 2,000 final federal agency rules citing EO 12898 and find significant variation in its utilisation across administrations. Uncertainty over the nature of the order has an important influence on bureaucratic responsiveness. Our findings are instructive for the twin influences of political control and policy-making uncertainty and raise useful questions for future EJ and policy implementation research.


2021
Author(s): Ahmed Attia, Matthew Lawrence

Abstract. Distributed Fiber Optics (DFO) technology has become the new face of unconventional well diagnostics. This technology focuses on measuring Distributed Acoustic Sensing (DAS) and Distributed Temperature Sensing (DTS) to give an in-depth understanding of well productivity pre- and post-stimulation. Many different completion design strategies, both on surface and downhole, are used to obtain the best fracture network outcome; however, with complex geological features, different fracture designs, and fracture-driven interactions (FDIs) affecting nearby wells, it is difficult to gain a full understanding of completion design performance for each well. Validating completion designs and building on the learnings found in each data set should be the foundation of developing each field. Capturing a data set with strong evidence of what works and what doesn't can help the operator make better engineering decisions to produce more efficient wells, as well as help gauge the spacing between wells. The focus of this paper is a few case studies in the Bakken which vividly show how infill wells greatly interfered with production output. A DFO system was deployed with a 0.6" OD, 23,000-foot-long carbon fiber rod to acquire DAS and DTS for post-frac flow, completion, and interference evaluation. This paper dives into the DFO measurements taken post-frac to further explain what effects interference with infill wells has on completion designs; the learnings taken from the DFO post-frac work were applied to improve the understanding and awareness of how infill wells will perform on future pad sites. A showcase of three separate data sets from the Bakken will identify how effective DFO technology can be in evaluating and making informed decisions on future frac completions. In this paper we will also show and discuss how DFO can measure real-time FDI events and what measures can be taken to lessen the impact of negative interference caused by infill wells.


2021
Author(s): Gunta Kalvāne, Andis Kalvāns, Agrita Briede, Ilmārs Krampis, Dārta Kaupe, ...

According to the Köppen climate classification, almost the entire area of Latvia belongs to the same climate type, Dfb, which is characterized by a humid continental climate with warm (sometimes hot) summers and cold winters. In recent decades, however, weather conditions on the western coast of Latvia have increasingly been characteristic of a temperate maritime climate. In this area a transition to the climate type Cfb has taken place (and is still ongoing).

Temporal and spatial changes in the temperature and precipitation regime have been examined across the whole territory to identify the breaking point of the climate type shift. We used two types of climatological data sets: gridded daily temperature from the E-OBS data set version 21.0e (Cornes et al., 2018) and direct observations from meteorological stations (data source: Latvian Environment, Geology and Meteorology Centre). The temperature and precipitation regime have changed significantly in the last century; seasonal and regional differences can be observed in the territory of Latvia.

We have digitized and analysed more than 47 thousand phenological records made by volunteers in the period 1970-2018. The study has shown that significant seasonal changes have taken place across the Latvian landscape due to climate change (Kalvāne and Kalvāns, 2021). The largest changes have been recorded for the unfolding (BBCH11) and flowering (BBCH61) phases of plants: almost 90% of the data included in the database demonstrate a negative trend. The winter of 1988/1989 may be considered the breaking point; since then many phases have commonly begun sooner (particularly spring phases), while abiotic autumn phases have been characterized by late years.

The study gives an overview of climate change (including climate type shift) impacts on ecosystems in Latvia, particularly on forests and semi-natural grasslands, and of temporal and spatial changes in vegetation structure and distribution areas.

This study was carried out within the framework of the Impact of Climate Change on Phytophenological Phases and Related Risks in the Baltic Region (No. 1.1.1.2/VIAA/2/18/265) ERDF project and the Climate change and sustainable use of natural resources institutional research grant of the University of Latvia (No. AAP2016/B041//ZD2016/AZ03).

Cornes, R. C., van der Schrier, G., van den Besselaar, E. J. M. and Jones, P. D.: An Ensemble Version of the E-OBS Temperature and Precipitation Data Sets, J. Geophys. Res. Atmos., 123(17), 9391-9409, doi:10.1029/2017JD028200, 2018.

Kalvāne, G. and Kalvāns, A.: Phenological trends of multi-taxonomic groups in Latvia, 1970-2018, Int. J. Biometeorol., doi:10.1007/s00484-020-02068-8, 2021.
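
As a minimal illustration of the kind of trend analysis behind the "almost 90% negative trends" statement, the sketch below fits a linear trend to a synthetic series of flowering onset dates (day of year against observation year); a negative slope means the phase starts earlier. The numbers are invented, not taken from the Latvian phenological database.

```python
import numpy as np
from scipy import stats

# Synthetic phenological series: year and day-of-year of first flowering
# (BBCH61) for one species at one site, 1970-2018.
rng = np.random.default_rng(1)
years = np.arange(1970, 2019)
day_of_year = 150 - 0.2 * (years - 1970) + rng.normal(0.0, 4.0, years.size)

# Linear trend in days per year; a negative slope indicates an earlier onset.
trend = stats.linregress(years, day_of_year)
print(f"trend: {trend.slope:+.2f} days/year (p = {trend.pvalue:.3f})")
```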

