How is “hearing loss” and “hearing aid(s)” represented in the United States newspaper media? (Preprint)

2018 ◽  
Author(s):  
Vinaya Manchaiah ◽  
Pierre Ratinaud ◽  
Eldre Beukes

BACKGROUND News media plays an important role in shaping people's knowledge and opinions about various topics, including health. OBJECTIVE The current study explored how “hearing loss” and “hearing aid(s)” are represented in United States newspaper media. METHODS A cross-sectional study design was selected to analyze publicly available newspaper media data. The data sets were generated from the U.S. Major Dailies database (ProQuest) by searching for these key words in newspapers published during 1990-2017. Cluster analysis (i.e., text pattern analysis) and chi-square tests were performed using Iramuteq software. RESULTS The hearing loss data set had 1,527 texts (i.e., articles). The cluster analysis resulted in seven clusters, named: (1) causes and consequences (26.1%); (2) early identification and diagnosis (9%); (3) health promotion and prevention (22.1%); (4) recreational noise exposure (10.4%); (5) prevalence (14.3%); (6) research and development (12.4%); and (7) cognitive hearing science (5.6%). The hearing aid(s) data set had 2,667 texts. The cluster analysis resulted in eight clusters, named: (1) signal processing (20.2%); (2) insurance (8.9%); (3) prevalence (12.4%); (4) research and development (5.4%); (5) activities and relation (16.2%); (6) environment (13.8%); (7) innovation (12%); and (8) wireless and connectivity (11.1%). Time series analysis of clusters in both data sets indicated a change in the pattern of information presented in newspaper media during 1990-2016 (e.g., cluster 7, focusing on cognitive hearing science in the hearing loss data set, emerged only in 2012 and has grown rapidly since). CONCLUSIONS The text pattern analysis showed that U.S. newspaper media focus on a range of issues when considering “hearing loss” and “hearing aid(s),” and that these patterns and trends change over time. The results can help hearing healthcare professionals understand the presuppositions society in general may hold, as the media has the ability to influence societal perceptions and opinions.
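A minimal analogue of the text pattern analysis described above, for readers unfamiliar with it: cluster articles by vocabulary, then test term-cluster association with a chi-square test. This sketch uses TF-IDF and k-means; Iramuteq itself implements the Reinert descending hierarchical classification, which differs in detail, and the four stand-in articles below are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from scipy.stats import chi2_contingency
import numpy as np

articles = [
    "noise exposure at concerts damages hearing",
    "new hearing aid with wireless connectivity",
    "study links hearing loss and cognition",
    "insurance rarely covers hearing aids",
]  # stand-in corpus; the study analyzed 1,527 and 2,667 articles

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(articles)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Chi-square test: is the word "noise" distributed unevenly across clusters?
term = (np.asarray(X[:, tfidf.vocabulary_["noise"]].todense()) > 0).ravel()
table = np.array([[np.sum(term & (labels == k)), np.sum(~term & (labels == k))]
                  for k in range(2)])
chi2, p, _, _ = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.3f}")
```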

Author(s):  
Fred L. Bookstein

Abstract: A matrix manipulation new to the quantitative study of developmental stability reveals unexpected morphometric patterns in a classic data set of landmark-based calvarial growth. There are implications for evolutionary studies. Among organismal biology’s fundamental postulates is the assumption that most aspects of any higher animal’s growth trajectories are dynamically stable, resilient against the types of small but functionally pertinent transient perturbations that may have originated in genotype, morphogenesis, or ecophenotypy. We need an operationalization of this axiom for landmark data sets arising from longitudinal data designs. The present paper introduces a multivariate approach toward that goal: a method for identification and interpretation of patterns of dynamical stability in longitudinally collected landmark data. The new method is based on an application of eigenanalysis unfamiliar to most organismal biologists: analysis of a covariance matrix of Boas coordinates (Procrustes coordinates without the size standardization) against their changes over time. These eigenanalyses may yield complex eigenvalues and eigenvectors (terms involving $$i=\sqrt{-1}$$); the paper carefully explains how these are to be scattered, gridded, and interpreted by their real and imaginary canonical vectors. For the Vilmann neurocranial octagons, the classic morphometric data set used as the running example here, the analysis yields new empirical findings that offer a pattern analysis of the ways perturbations of growth are attenuated or otherwise modified over the course of developmental time. The main finding, dominance of a generalized version of dynamical stability (negative autoregressions, as announced by the negative real parts of their eigenvalues, often combined with shearing and rotation in a helpful canonical plane), is surprising in its strength and consistency. A closing discussion explores some implications of this novel pattern analysis of growth regulation. It differs in many respects from the usual way covariance matrices are wielded in geometric morphometrics, differences relevant to a variety of study designs for comparisons of development across species.
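A sketch of the core computation, under stated assumptions: regress per-coordinate *changes* on the coordinates themselves, then eigenanalyze the resulting matrix. Negative real parts of the eigenvalues signal the dynamical stability (negative autoregression) the abstract describes; complex conjugate pairs add the rotation/shear component. This is the autoregression idea only, not Bookstein's exact matrix manipulation, and simulated data stand in for the Vilmann octagons.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 6                     # coordinates (e.g., 3 landmarks x 2 dims, Boas-aligned)
A_true = -0.3 * np.eye(p) + 0.1 * rng.standard_normal((p, p))

# Simulate stable linear dynamics: x_{t+1} = x_t + A x_t + noise
X = np.zeros((200, p))
X[0] = rng.standard_normal(p)
for t in range(199):
    X[t + 1] = X[t] + A_true @ X[t] + 0.05 * rng.standard_normal(p)

dX = np.diff(X, axis=0)                               # changes over time
A_hat = np.linalg.lstsq(X[:-1], dX, rcond=None)[0].T  # dx ≈ A_hat @ x

eigvals, eigvecs = np.linalg.eig(A_hat)
print("real parts (negative => stable):", np.round(eigvals.real, 3))
print("rotation/shear present (complex pairs)?", np.iscomplexobj(eigvals))
```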


2007 ◽  
Vol 56 (6) ◽  
pp. 75-83 ◽  
Author(s):  
X. Flores ◽  
J. Comas ◽  
I.R. Roda ◽  
L. Jiménez ◽  
K.V. Gernaey

The main objective of this paper is to present the application of selected multivariable statistical techniques to the analysis of plant-wide wastewater treatment plant (WWTP) control strategies. In this study, cluster analysis (CA), principal component analysis/factor analysis (PCA/FA), and discriminant analysis (DA) are applied to the evaluation matrix data set obtained by simulating several control strategies on the plant-wide IWA Benchmark Simulation Model No. 2 (BSM2). These techniques make it possible to (i) determine natural groups or clusters of control strategies with similar behaviour, (ii) find and interpret hidden, complex, and causal relationships in the data set, and (iii) identify important discriminant variables within the groups found by the cluster analysis. This study illustrates the usefulness of multivariable statistical techniques for both analysis and interpretation of complex multicriteria data sets, and allows an improved use of information for effective evaluation of control strategies.
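A hedged sketch of the three-step workflow named above (CA, PCA, DA) on a stand-in "evaluation matrix": rows represent simulated control strategies, columns represent performance criteria. The data and names are illustrative, not BSM2 outputs.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import AgglomerativeClustering
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
evaluation = np.vstack([rng.normal(loc, 1.0, size=(10, 5))
                        for loc in (0.0, 3.0, 6.0)])  # 30 strategies, 5 criteria

Z = StandardScaler().fit_transform(evaluation)

# (i) cluster analysis: natural groups of control strategies
groups = AgglomerativeClustering(n_clusters=3).fit_predict(Z)

# (ii) PCA: compress correlated criteria into interpretable components
scores = PCA(n_components=2).fit_transform(Z)

# (iii) discriminant analysis: which criteria separate the clusters?
lda = LinearDiscriminantAnalysis().fit(Z, groups)
print("per-criterion discriminant weights:\n", np.round(lda.coef_, 2))
```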


2018 ◽  
Vol 40 ◽  
pp. 06021
Author(s):  
David Abraham ◽  
Tate McAlpin ◽  
Keaton Jones

The movement of bed forms (sand dunes) in large sand-bed rivers is being used to determine the transport rate of bed load. The ISSDOTv2 (Integrated Section Surface Difference Over Time, version 2) methodology uses time-sequenced differences of measured bathymetric surfaces to compute the bed-load transport rate. The method was verified using flume studies [1]. In general, the method provides very consistent and repeatable results, and also shows very good fidelity with most other measurement techniques. Over the last 7 years we have measured, computed, and compiled what we believe to be the most extensive data set anywhere of bed-load measurements on large sand-bed rivers. Most of the measurements have been taken on the Mississippi, Missouri, Ohio, and Snake Rivers in the United States. For cases where multiple measurements were made at varying flow rates, bed-load rating curves have been produced. This paper provides references for the methodology, but is intended more to discuss the measurements, the resulting data sets, and current and potential uses for the bed-load data.
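A sketch of the surface-difference idea behind ISSDOTv2, not the published algorithm: difference two time-sequenced bathymetric grids, sum the scour (erosional) volume, and divide by the elapsed time for a volumetric rate. Grid spacing, survey interval, and the synthetic dune field are all assumptions for illustration.

```python
import numpy as np

cell_area = 1.0              # m^2 per grid cell (assumed 1 m spacing)
dt = 3600.0                  # s between surveys (assumed)

rng = np.random.default_rng(2)
bed_t0 = np.sin(np.linspace(0, 4 * np.pi, 200))[None, :] * np.ones((50, 1))
bed_t1 = np.roll(bed_t0, 5, axis=1) + 0.01 * rng.standard_normal(bed_t0.shape)

dz = bed_t1 - bed_t0                         # surface difference over time
scour = -dz[dz < 0].sum() * cell_area        # volume eroded as dunes migrate
rate = scour / dt                            # volumetric bed-load transport rate
print(f"bed-load rate ≈ {rate:.3f} m^3/s")
```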


2005 ◽  
Vol 2005 (1) ◽  
pp. 143-147
Author(s):  
Daniel R. Norton

ABSTRACT The annual volume of oil spilled into the marine environment by tank vessels (tank barges and tank ships) is analyzed against the total annual volume of oil transported by tank vessels in order to determine any correlational relationship. U.S. Coast Guard data were used for the volume of oil (petroleum) spilled into the marine environment each year by tank vessels. Data from the U.S. Army Corps of Engineers and the U.S. Department of Transportation's (US DOT) National Transportation Statistics (NTS) were used for the annual volume of oil transported via tank vessels in the United States, provided in the form of tonnage and ton-miles, respectively. Each data set has inherent benefits and weaknesses. For the analysis, the volume of oil transported was used as the explanatory variable (x) and the volume of oil spilled into the marine environment as the response variable (y). Both data sets were tested for correlation. A weak relationship (r = −0.38) was found using tonnage, and no further analysis was performed. A moderately strong relationship (r = 0.79) was found using ton-miles. Further analysis using regression and a plot of residuals showed the data to be satisfactory, with no sign of lurking variables, but with the year 1990 being a possible outlier.
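The same style of analysis can be reproduced on made-up numbers: correlate spill volume (y) with ton-miles transported (x), fit a regression line, and inspect residuals for outliers such as the 1990 value mentioned above. All values below are fabricated for illustration.

```python
import numpy as np
from scipy import stats

years = np.arange(1985, 2000)
ton_miles = np.linspace(300, 600, years.size)              # illustrative x
spilled = 0.02 * ton_miles + stats.norm.rvs(0, 1, size=years.size,
                                            random_state=3)
spilled[years == 1990] += 8                                # injected outlier

r, _ = stats.pearsonr(ton_miles, spilled)
slope, intercept, r_val, p_val, stderr = stats.linregress(ton_miles, spilled)
residuals = spilled - (intercept + slope * ton_miles)
print(f"r = {r:.2f}; largest residual in {years[np.argmax(np.abs(residuals))]}")
```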


Geophysics ◽  
1993 ◽  
Vol 58 (9) ◽  
pp. 1281-1296 ◽  
Author(s):  
V. J. S. Grauch

The magnetic data set compiled for the Decade of North American Geology (DNAG) project presents an important digital database that can be used to examine the North American crust. The data represent a patchwork from many individual airborne and marine magnetic surveys. However, the portion of data for the conterminous U.S. has problems that limit the resolution and use of the data. Now that the data are available in digital form, it is important to describe the data limitations more specifically than before. The primary problem is caused by datum shifts between individual survey boundaries. In the western U.S., the DNAG data are generally shifted less than 100 nT. In the eastern U.S., the DNAG data may be shifted by as much as 300 nT and contain regionally shifted areas with wavelengths on the order of 800 to 1400 km. The worst case is the artificial low centered over Kentucky and Tennessee produced by a series of datum shifts. A second significant problem is lack of anomaly resolution, which arises primarily from using survey data that are too widely spaced compared with the flight heights above magnetic sources. Unfortunately, these are the only data available for much of the U.S. Another problem is produced by the lack of a common observation surface between individual pieces of the U.S. DNAG data. The height disparities introduce variations in spatial frequency content that are unrelated to the magnetization of rocks. The spectral effects of datum shifts and the variation of spatial frequency content due to height disparities were estimated for the DNAG data for the conterminous U.S. As a general guideline for digital filtering, the most reliable features in the U.S. DNAG data have wavelengths roughly between 170 and 500 km, or anomaly half‐widths between 85 and 250 km. High‐quality, large‐region magnetic data sets have become increasingly important to meet exploration and scientific objectives. The acquisition of a new national magnetic data set with higher quality at a greater range of wavelengths is clearly in order. The best approach is to refly much of the U.S. with common specifications and reduction procedures. At the very least, magnetic data sets should be remerged digitally using available or newly flown long‐distance flight‐line data to adjust survey levels. In any case, national coordination is required to produce a consistent, high‐quality national magnetic map.
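The filtering guideline above (retain wavelengths of roughly 170-500 km) translates to a band-pass filter in the wavenumber domain. A minimal sketch with numpy's FFT on a synthetic grid follows; the grid size, spacing, and random field are assumptions, and real DNAG processing involves far more care.

```python
import numpy as np

n, dx = 512, 2.0                     # grid size and 2 km spacing (assumed)
rng = np.random.default_rng(4)
grid = rng.standard_normal((n, n))   # stand-in anomaly grid (nT)

kx = np.fft.fftfreq(n, d=dx)         # wavenumbers in cycles per km
ky = np.fft.fftfreq(n, d=dx)
k = np.hypot(*np.meshgrid(kx, ky, indexing="ij"))

lo, hi = 1.0 / 500.0, 1.0 / 170.0    # pass band: wavelengths 170-500 km
mask = (k >= lo) & (k <= hi)
filtered = np.fft.ifft2(np.fft.fft2(grid) * mask).real
print("band-passed grid std:", filtered.std().round(4))
```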


2002 ◽  
Vol 16 (4) ◽  
pp. 137-160 ◽  
Author(s):  
Lawrence J White

I assemble two rarely used data sets to measure aggregate concentration in the U.S. in the 1980s and 1990s. Despite the merger waves of those decades, aggregate concentration declined in the 1980s and the early 1990s, but rose modestly in the late 1990s. The levels at the end of the 1990s were at or below those of the late 1980s and early 1990s. The average firm size and the relative importance of larger size classes of firms increased, however. Gini coefficients for the employment and payroll shares of companies showed moderate but steady increases from 1988 through 1999.
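The Gini coefficients reported above summarize how concentrated employment or payroll is across firms. Below is a standard computation from firm-level values via the mean-absolute-difference formula; the numbers are illustrative, not the paper's data.

```python
import numpy as np

def gini(values) -> float:
    """Gini coefficient of a set of nonnegative values (0 = perfect equality)."""
    v = np.sort(np.asarray(values, dtype=float))
    n = v.size
    total = v.sum()
    # Equivalent to (2 * sum_i i*v_i - (n+1) * sum(v)) / (n * sum(v)), v ascending
    return (2 * np.sum(np.arange(1, n + 1) * v) - (n + 1) * total) / (n * total)

employment = np.array([5, 12, 12, 30, 90, 400])   # employees per firm (invented)
print(f"Gini = {gini(employment):.3f}")            # rises as large firms dominate
```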


2018 ◽  
Vol 23 (3) ◽  
pp. 303-315 ◽  
Author(s):  
Rebecca Rebbe

Neglect is the most common form of reported child maltreatment in the United States: 75.3% of confirmed child maltreatment victims in 2015 were neglected. Despite constituting the majority of reported child maltreatment cases and victims, neglect still lacks a standard definition. In the United States, congruent with the pervasiveness of law in child welfare systems, every state and the District of Columbia has its own statutory definition of neglect. This study used content analysis to compare state statutory definitions with the Fourth National Incidence Study (NIS-4) operationalization of neglect. The resulting data set was then analyzed using cluster analysis, identifying three distinct groups of states based on how they define neglect: minimal, cornerstones, and expanded. The states’ definitions incorporate few of the NIS-4 components. Practice and policy implications of these constructions of neglect definitions are discussed.
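A sketch of the clustering step described above: each state becomes a binary vector of neglect-definition components (NIS-4-style), and states with similar vectors group together. The components and values below are invented for illustration.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# rows = states, columns = definition components (e.g., supervisory,
# medical, educational, emotional neglect) -- illustrative only
definitions = np.array([
    [1, 0, 0, 0],   # "minimal"-style definition
    [1, 1, 0, 0],
    [1, 1, 1, 0],   # "cornerstones"-style
    [1, 1, 1, 1],   # "expanded"-style
    [1, 0, 0, 0],
    [1, 1, 1, 1],
], dtype=bool)

dist = pdist(definitions, metric="jaccard")   # share of disagreeing components
groups = fcluster(linkage(dist, method="average"), t=3, criterion="maxclust")
print("state groupings:", groups)
```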


2014 ◽  
Vol 97 (6) ◽  
pp. 1626-1633 ◽  
Author(s):  
Jonathan R Deeds ◽  
Sara M Handy ◽  
Frederick Fry ◽  
Hudson Granade ◽  
Jeffrey T Williams ◽  
...  

Abstract With the recent adoption of a DNA sequencing-based method for species identification of seafood products by the U.S. Food and Drug Administration (FDA), a library of standard sequences derived from reference specimens with authoritative taxonomic authentication was required. Provided here are details of how the FDA and its collaborators are building this reference standard sequence library, which will be used to confirm the accurate labeling of seafood products sold in interstate commerce in the United States. As an example data set from this library, information is provided for 117 fish reference standards, representing 94 species from 43 families in 15 orders, collected over a 4-year period from the Gulf of Mexico, U.S., and now stored at the Smithsonian Museum Support Center in Suitland, MD.
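A sketch of the identification step such a library enables: compare a query DNA barcode against reference sequences and report the closest match. Simple k-mer similarity stands in here for the FDA's actual sequencing-based protocol, and the sequences and species labels are invented.

```python
def kmers(seq: str, k: int = 8) -> set:
    """All overlapping k-length substrings of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def jaccard(a: set, b: set) -> float:
    """Fraction of shared k-mers between two sequences."""
    return len(a & b) / len(a | b)

# Invented reference barcodes; a real library holds authenticated sequences.
reference = {
    "species A (illustrative)": "ACCTGGTGCTTAGGCTTCATCGTA" * 3,
    "species B (illustrative)": "ACGTTACGGATCCTAGGCATCGAA" * 3,
}

query = "ACCTGGTGCTTAGGCTTCATCGTA" * 3  # barcode read from a seafood product
best = max(reference,
           key=lambda name: jaccard(kmers(query), kmers(reference[name])))
print("closest reference:", best)
```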


2017 ◽  
Author(s):  
Francesca Carisi ◽  
Kai Schröter ◽  
Alessio Domeneghetti ◽  
Heidi Kreibich ◽  
Attilio Castellarin

Abstract. Simplified flood loss models are one important source of uncertainty in flood risk assessments. Many countries experience sparseness or absence of comprehensive high-quality flood loss data sets, which is often rooted in a lack of protocols and reference procedures for compiling loss data after flood events. Such data are an important reference for developing and validating flood loss models. We consider the Secchia river flood event of January 2014, when a sudden levee breach caused the inundation of nearly 52 km² in Northern Italy. For this event we compiled a comprehensive flood loss data set of affected private households, including building footprints, economic value, damage to contents, etc., based on information collected by local authorities after the event. By analysing this data set we tackle the problem of flood damage estimation in Emilia-Romagna (Italy) by identifying empirical uni- and multi-variable loss models for residential buildings and contents. The accuracy of the proposed models is compared with that of several flood damage models reported in the literature, providing additional insights into the transferability of models between different contexts. Our results show that (1) even simple uni-variable damage models based on local data are significantly more accurate than literature models derived for different contexts, and (2) multi-variable models that consider several explanatory variables outperform uni-variable models that use only water depth. However, multi-variable models can only be effectively developed and applied if sufficient and detailed information is available.
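A hedged sketch of the uni- versus multi-variable comparison described above: fit a depth-only damage model and a multi-variable one on synthetic loss records, then compare held-out error. The variables, functional form, and learner are all assumptions, not the paper's models.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(5)
n = 500
depth = rng.uniform(0, 3, n)              # water depth (m)
area = rng.uniform(50, 250, n)            # building footprint (m^2)
value = rng.uniform(0.5, 3.0, n)          # economic value (arbitrary units)
loss = 0.2 * depth * value * area / 100 + 0.02 * rng.standard_normal(n)

X_uni, X_multi = depth[:, None], np.column_stack([depth, area, value])
for name, X in [("uni-variable (depth only)", X_uni), ("multi-variable", X_multi)]:
    Xtr, Xte, ytr, yte = train_test_split(X, loss, random_state=0)
    model = RandomForestRegressor(random_state=0).fit(Xtr, ytr)
    print(name, "MAE:", round(mean_absolute_error(yte, model.predict(Xte)), 4))
```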


2020 ◽  
Vol 7 (1) ◽  
pp. 163-180
Author(s):  
Saagar S Kulkarni ◽  
Kathryn E Lorenz

This paper examines two CDC data sets in order to provide a comprehensive overview, and the social implications, of COVID-19 related deaths within the United States over the first eight months of 2020. By analyzing the first data set for this eight-month period with the variables of age, race, and individual U.S. state, we found correlations between COVID-19 deaths and these three variables. Overall, our multivariable regression model was found to be statistically significant. When analyzing the second CDC data set, we used the same variables with one exception: gender was used in place of race. From this analysis, trends in age and individual states were found to be significant. However, since gender was not found to be significant in predicting deaths, we concluded that gender does not play a significant role in the prognosis of COVID-19 induced deaths, whereas the age of an individual and his/her state of residence potentially play a significant role in determining life or death. Socio-economic analysis of the US population confirms the Qualitative socio-economic Logic based Cascade Hypotheses (QLCH) that education, occupation, and income affect race/ethnicity differently. For a given race/ethnicity, education drives occupation and then income, which determines where a person lives and, in turn, his/her access to healthcare coverage. Considering the socio-economic data-based QLCH framework, we conclude that different races are poised for differing effects of COVID-19 and that Asians and Whites are in a stronger position to combat COVID-19 than Hispanics and Blacks.
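A sketch of the second regression described above: model deaths on age, state, and gender with OLS and inspect p-values. The data are fabricated placeholders (the study used CDC surveillance files), and the effect sizes below are chosen so that gender comes out non-significant, mirroring the reported finding.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n = 300
df = pd.DataFrame({
    "age_group": rng.choice([30, 50, 70, 85], size=n),
    "state": rng.choice(["NY", "TX", "CA", "FL"], size=n),
    "gender": rng.choice(["M", "F"], size=n),
})
# deaths rise with age and differ by state, but not by gender (by construction)
state_effect = df["state"].map({"NY": 40, "TX": 20, "CA": 25, "FL": 30})
df["deaths"] = 2.0 * df["age_group"] + state_effect + rng.normal(0, 10, n)

fit = smf.ols("deaths ~ age_group + C(state) + C(gender)", data=df).fit()
print(fit.pvalues.round(4))   # expect the gender term to be non-significant
```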

