Combining Variables to Improve Subadult Age Estimation

2021 ◽  
Author(s):  
Kyra Stull ◽  
Kerianne Armelli

Anthropologists have reported that combining multiple variables and indicators generally increases precision and reduces bias in age estimates. However, efforts specific to subadult age estimation have primarily focused on estimating the age of the living and therefore on variables and indicators that are active later in ontogeny and easy to image. The current study aimed to determine whether multivariable, single-indicator age-estimation models outperform single-variable age-estimation models throughout ontogeny using the three most common subadult age indicators: diaphyseal dimensions, epiphyseal fusion, and dental development. Data were collected from individuals from South Africa between birth and 12 years (N = 601) using Lodox Statscan radiographic images and from the United States between birth and 20 years (N = 1,277) using computed tomography images. Multivariate adaptive regression splines were used to build the multivariable, single-indicator, and single-variable models. Each subset used for model development had a unique training sample to build the model and a testing sample to ensure that the results were generalizable. The multivariable models showed increased precision and accuracy, reduced bias, and greater consistency across ontogeny compared with the single-variable models for both samples. Eighty-three percent of the independent test models (20/24) had ≥ 93% coverage, and 75% (18/24) of the independent test models had ≥ 95% coverage. Besides providing more information to the resulting age estimate, multivariable models remove any a priori beliefs regarding variable importance and eliminate the need to contrive a final age estimate from multiple single-variable age-estimation models.
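The core idea, that a model fed several measurements can only gain information over a single-measurement model, can be sketched with hinge-function (MARS-style) regression on simulated data. Everything below is hypothetical: the growth curves, noise levels, knot locations, and variable names are illustrative stand-ins, not the study's data or fitted models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated growth data (illustrative only): age in years, with femur and
# humerus diaphyseal lengths as noisy nonlinear functions of age.
age = rng.uniform(0, 12, 300)
femur = 8 + 2.4 * age - 0.05 * age**2 + rng.normal(0, 0.8, 300)
humerus = 6 + 1.9 * age - 0.04 * age**2 + rng.normal(0, 0.8, 300)

def hinge_basis(x, knots):
    """MARS-style hinge functions max(0, x-k) and max(0, k-x) at each knot."""
    cols = [np.ones_like(x)]
    for k in knots:
        cols.append(np.maximum(0, x - k))
        cols.append(np.maximum(0, k - x))
    return np.column_stack(cols)

def fit_predict(predictors, y, knots):
    """Least-squares fit of age on the hinge expansion of the predictors."""
    X = np.hstack([hinge_basis(c, knots) for c in predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X @ beta

knots = [10, 20, 30]  # placeholder knot locations in the length range
pred_single = fit_predict([femur], age, knots)           # one variable
pred_multi = fit_predict([femur, humerus], age, knots)   # two variables

rmse = lambda p: float(np.sqrt(np.mean((age - p) ** 2)))
print(f"single-variable RMSE: {rmse(pred_single):.2f}")
print(f"multivariable RMSE:  {rmse(pred_multi):.2f}")
```

Because the multivariable design matrix contains the single-variable basis as a subset of its columns, its in-sample error can never be worse, which mirrors (in a much simpler setting) the precision gain the abstract reports on held-out test samples.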

2007 ◽  
Vol 12 (1) ◽  
pp. 54-61 ◽  
Author(s):  
Marisa L. Beeble ◽  
Deborah Bybee ◽  
Cris M. Sullivan

While research has found that millions of children in the United States are exposed to their mothers being battered, and that many are themselves abused as well, little is known about the ways in which children are used by abusers to manipulate or harm their mothers. Anecdotal evidence suggests that perpetrators use children in a variety of ways to control and harm women; however, no studies to date have empirically examined the extent to which this occurs. Therefore, the current study examined the extent to which survivors of abuse experienced this, as well as the conditions under which it occurred. Interviews were conducted with 156 women who had experienced recent intimate partner violence. Each of these women had at least one child between the ages of 5 and 12. Most women (88%) reported that their assailants had used their children against them in varying ways. Multiple variables were found to be related to this, including the relationship between the assailant and the children, the extent of physical and emotional abuse inflicted by the abuser on the woman, and the assailant's court-ordered visitation status. Findings point toward the complex situational conditions under which assailants use the children of their partners or ex-partners to continue the abuse, and toward the need for a great deal more research in this area.


Author(s):  
Roger L. Wayson ◽  
Kenneth Kaliski

Modeling road traffic noise levels without including the effects of meteorology may lead to substantial errors. In the United States, the required model is the Traffic Noise Model, which does not include meteorological effects caused by refraction. In response, the Transportation Research Board sponsored NCHRP 25-52, Meteorological Effects on Roadway Noise, to collect highway noise data under different meteorological conditions, document the meteorological effects on roadway noise propagation under different atmospheric conditions, develop best practices, and provide guidance on how to: (a) quantify meteorological effects on roadway noise propagation; and (b) explain those effects to the public. The completed project, with 16 barrier and no-barrier measurement positions adjacent to Interstate 17 (I-17) in Phoenix, Arizona, provided the database that has enabled substantial developments in modeling. This report provides more recent information on the model development, which the noise analyst can apply directly to include meteorological effects, ranging from simple look-up tables to more precise statistical equations.
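The look-up-table end of that spectrum can be sketched as a table of meteorology-dependent offsets applied to a modeled level. The condition classes and dB values below are placeholders chosen for illustration, not adjustments from NCHRP 25-52.

```python
# Hypothetical look-up table: dB(A) offset keyed by (wind class,
# atmospheric stability class). All values are illustrative placeholders.
MET_ADJUST_DB = {
    ("downwind", "stable"): +3.0,
    ("downwind", "neutral"): +1.5,
    ("calm", "neutral"): 0.0,
    ("upwind", "neutral"): -2.0,
    ("upwind", "unstable"): -4.5,
}

def adjusted_level(modeled_dba, wind_class, stability_class):
    """Apply the meteorological offset for the given condition classes."""
    return modeled_dba + MET_ADJUST_DB[(wind_class, stability_class)]

print(adjusted_level(66.0, "downwind", "stable"))  # 69.0
```

In practice the statistical-equation approach replaces the fixed offsets with regression terms in wind speed, wind direction, and stability, but the structure of "modeled level plus meteorological adjustment" is the same.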


Author(s):  
David Berry

Healthcare is fully embracing the promise of Big Data for improving performance and efficiency. Such a paradigm shift, however, brings many unforeseen impacts, both positive and negative. Healthcare has largely looked to business models for inspiration to guide model development and the practical implementation of Big Data. Business models, however, are limited in their application to healthcare, as the two represent a complicated system and a complex system, respectively. Healthcare must therefore look toward other examples of complex systems to better gauge the potential impacts of Big Data. Military systems have many similarities with healthcare, with a wealth of systems research as well as practical field experience from which healthcare can draw. The experience of the United States military with Big Data during the Vietnam War is a case study with striking parallels to issues described in the modern healthcare literature. Core principles can be extracted from this analysis that will need to be considered as healthcare seeks to integrate Big Data into its active operations.


Author(s):  
Shane E. Powers ◽  
William C. Wood

With the renewed interest in the construction of coal-fired power plants in the United States, there has also been increased interest in the methodology used to determine the overall performance of a coal-fired power plant. This methodology is detailed in the ASME PTC 46 (1996) Code, which provides an excellent framework for determining the power output and heat rate of coal-fired power plants. Unfortunately, the power industry has been slow to adopt this methodology, in part because the Code lacks some details regarding the planning needed to design a performance test program for determining coal-fired power plant performance. This paper expands on the ASME PTC 46 (1996) Code by discussing key concepts that need to be addressed when planning an overall plant performance test of a coal-fired power plant. The most difficult aspect of calculating coal-fired power plant performance is integrating the calculation of boiler performance with the calculation of turbine cycle performance and other balance-of-plant aspects. If the performance test is not properly planned, the integration of boiler and turbine data will produce a test result that does not accurately reflect the true performance of the overall plant. This planning must start very early in the development of the test program and be implemented in all stages of the test program design. This paper addresses the necessary planning of the test program, including: • Determination of Actual Plant Performance • Selection of a Test Goal • Development of the Basic Correction Algorithm • Designing a Plant Model • Development of Correction Curves • Operation of the Power Plant during the Test. All nomenclature in this paper follows the ASME PTC 46 definitions for the calculation and correction of plant performance.
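The general shape of a corrected-performance calculation, measured power adjusted by additive terms and multiplicative factors to reference conditions, can be sketched as follows. This is only an illustration of the structure; the numeric values are placeholders and not correction values from the PTC 46 Code.

```python
# Illustrative sketch of a PTC 46-style correction: measured power is
# adjusted by additive corrections (e.g., off-design auxiliary loads)
# and multiplicative factors (e.g., ambient temperature and pressure).
# All numbers below are placeholders, not Code values.

def corrected_power(measured_mw, additive_mw, multiplicative):
    """Apply additive corrections, then multiplicative correction factors."""
    corrected = measured_mw + sum(additive_mw)
    for factor in multiplicative:
        corrected *= factor
    return corrected

p_corr = corrected_power(
    measured_mw=600.0,
    additive_mw=[1.2, -0.4],       # e.g., auxiliary-load adjustments, MW
    multiplicative=[1.01, 0.995],  # e.g., ambient-condition factors
)
print(round(p_corr, 2))  # 603.77
```

The planning difficulty the paper describes lies in deciding which boiler-side and turbine-side effects belong in which correction term so that the same effect is never corrected twice.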


2021 ◽  
Author(s):  
Markus Hrachowitz ◽  
Petra Hulsman ◽  
Hubert Savenije

Hydrological models are often calibrated with respect to flow observations at the basin outlet. As a result, flow predictions may seem reliable, but this is not necessarily the case for the spatiotemporal variability of system-internal processes, especially in large river basins. Satellite observations contain valuable information not only for poorly gauged basins with limited ground observations and for spatiotemporal model calibration, but also for stepwise model development. This study explored the value of satellite observations to improve our understanding of hydrological processes through stepwise model structure adaptation and to calibrate models both temporally and spatially. More specifically, satellite-based evaporation and total water storage anomaly observations were used to diagnose model deficiencies and subsequently to improve the hydrological model structure and the selection of feasible parameter sets. A distributed, process-based hydrological model was developed for the Luangwa River basin in Zambia and calibrated with respect to discharge as a benchmark. This model was modified stepwise by testing five alternative hypotheses related to the process of upwelling groundwater in wetlands, which was assumed to be negligible in the benchmark model, and to the spatial discretization of the groundwater reservoir. Each model hypothesis was calibrated with respect to (1) discharge and (2) multiple variables simultaneously, including discharge and the spatiotemporal variability in the evaporation and total water storage anomalies. The benchmark model calibrated with respect to discharge reproduced this variable well, as well as the basin-averaged evaporation and total water storage anomalies. However, the evaporation in wetland-dominated areas and the spatial variability in the evaporation and total water storage anomalies were poorly modelled.
The model improved the most when upwelling groundwater flow from a distributed groundwater reservoir was introduced and the model was calibrated with respect to multiple variables simultaneously. This study showed that satellite-based evaporation and total water storage anomaly observations provide valuable information for improved understanding of hydrological processes through stepwise model development and spatiotemporal model calibration.
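One common way to implement multi-variable calibration of this kind is to score each candidate parameter set against several efficiency metrics at once and retain the sets closest to the ideal point. The sketch below uses a Nash-Sutcliffe efficiency and a Euclidean distance-to-ideal objective; both the metric choice and the numbers are assumptions for illustration, not the study's exact formulation.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 indicates a perfect fit."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def multi_objective_distance(metrics):
    """Euclidean distance to the ideal point (all efficiencies = 1).
    Smaller is better: a parameter set must be close for all variables
    at once, not just discharge."""
    return float(np.sqrt(sum((1 - m) ** 2 for m in metrics)))

# Hypothetical efficiencies for one parameter set: discharge,
# evaporation, and total water storage anomalies.
d = multi_objective_distance([0.85, 0.60, 0.70])
print(round(d, 3))  # 0.522
```

A set scoring well on discharge alone but poorly on evaporation would sit far from the ideal point and be rejected, which is how the satellite variables constrain the feasible parameter space.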


Circulation ◽  
2020 ◽  
Vol 142 (Suppl_3) ◽  
Author(s):  
Prakash Acharya ◽  
Farhad Sami ◽  
Omar Al-Taweel ◽  
Sagar Ranka ◽  
Brianna Stack ◽  
...  

Introduction: Acute pericarditis accounts for one in every twenty emergency department visits for chest pain, and a majority of these patients are admitted to a hospital. However, apart from small studies, there is a lack of data regarding the incidence and predictors of readmissions in these patients. Methodology: A secondary analysis of the Nationwide Readmission Database for the years 2016-2017 was performed. Patients admitted with a primary diagnosis of acute pericarditis in the first six months of each year were identified based on International Classification of Diseases, Tenth Revision, Clinical Modification (ICD-10-CM) codes and were followed for 180 days. A multivariable Cox regression model was used to delineate the predictors of pericarditis-related readmissions. Results: A total of 21,115 patients were admitted with a primary diagnosis of acute pericarditis. The mean age was 53.3±19 years and 60.83% were male. About 23% of patients had pericardial effusion or tamponade, and 19.4% of patients presenting with pericarditis required pericardiocentesis. The mortality rate during the index admission was 3.21% and the mean length of stay was 6.4±9 days. The rate of all-cause readmission was 30.8% within 180 days, of which 23.8% were pericarditis related. The mean time to readmission for pericarditis was 37.7±41 days. Females were at higher risk of readmission for pericarditis [OR 1.66, CI (1.38-1.99), p<0.001] after adjustment for multiple variables (including connective tissue disease, congestive heart failure, and malignancy). The presence of comorbidities such as diabetes mellitus [HR 1.21, CI (1.01-1.45), p=0.04], obesity [HR 1.27, CI (1.05-1.54), p=0.01], and chronic lung disease [HR 1.32, CI (1.12-1.57), p=0.001] also increased the risk of pericarditis-related readmissions. Moreover, the length of the index hospitalization was significantly higher in patients with pericarditis-related readmissions [5.4±6 vs 1.6±5 days, p<0.001].
Conclusion: Even though the mortality during index admission in patients admitted with pericarditis is low, about 1 in every 3 patients will be readmitted within 180 days. While females account for a minority of initial admissions for pericarditis, their risk of readmission is significantly higher.
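For intuition about hazard ratios like those reported above, a minimal sketch of the unadjusted rate-ratio idea under a constant-hazard assumption. The counts and person-time below are made up, and the study itself used a multivariable Cox regression, which additionally adjusts for covariates; this only illustrates what the ratio compares.

```python
# Under a constant-hazard (exponential) assumption, the hazard ratio
# between two groups reduces to the ratio of event rates per unit
# person-time. All numbers below are illustrative, not study data.
def hazard_ratio(events_a, persontime_a, events_b, persontime_b):
    """Ratio of event rates: group A relative to group B."""
    return (events_a / persontime_a) / (events_b / persontime_b)

# e.g., pericarditis readmissions over equal patient-day follow-up,
# hypothetical female vs. male groups
hr = hazard_ratio(events_a=50, persontime_a=10_000,
                  events_b=30, persontime_b=10_000)
print(round(hr, 2))  # 1.67
```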


2002 ◽  
Vol 1784 (1) ◽  
pp. 108-114 ◽  
Author(s):  
Sunanda Dissanayake ◽  
John Lu

Young drivers have the highest fatality involvement rates of any driver age group within the United States driving population. They also experience a higher percentage of single-vehicle crashes compared with other drivers. When looking at methods of improving this alarming death rate among young drivers, it is important to identify the determinants of higher crash and injury severity. With that intention, this study used the Florida Traffic Crash Database to develop a set of sequential binary logistic regression models to predict the crash severity outcome of single-vehicle fixed-object crashes involving young drivers. Models were organized from the lowest severity level to the highest and vice versa to examine the reliability of the selection process, but it was found that this selection had no considerable impact. The developed models were validated, and their accuracy was tested using crash data not utilized in model development; the results were found to be satisfactory. Factors influential in making a crash severity difference for young drivers were then identified through the models. Factors such as the influence of alcohol or drugs, ejection in the crash, point of impact, rural crash locations, existence of a curve or grade at the crash location, and speed of the vehicle significantly increased the probability of a more severe crash. Restraint device usage and being male clearly reduced the tendency toward high severity, while some other variables, such as weather condition, residence location, and physical condition, were not important at all.
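The sequential binary structure can be sketched as a chain of conditional logistic models: each stage models the probability of exceeding the next severity level, and multiplying along the chain gives unconditional class probabilities. The coefficients and predictors below are hypothetical, not the fitted Florida models.

```python
import math

def logistic(z):
    """Standard logistic function."""
    return 1 / (1 + math.exp(-z))

# Hypothetical stage coefficients: (intercept, alcohol, ejected, restrained)
STAGES = [
    (-1.0, 0.9, 1.2, -0.8),   # P(severity > no injury | all crashes)
    (-1.5, 0.7, 1.0, -0.6),   # P(severity > injury | severity > no injury)
]

def severity_probabilities(alcohol, ejected, restrained):
    """Unconditional probabilities for each severity class, lowest first."""
    probs, p_reach = [], 1.0
    for b0, b1, b2, b3 in STAGES:
        p_exceed = logistic(b0 + b1 * alcohol + b2 * ejected + b3 * restrained)
        probs.append(p_reach * (1 - p_exceed))  # crash stops at this level
        p_reach *= p_exceed                     # crash exceeds this level
    probs.append(p_reach)                       # highest severity class
    return probs

p = severity_probabilities(alcohol=1, ejected=1, restrained=0)
print([round(x, 3) for x in p])
```

By construction the class probabilities sum to one, and fitting the chain in the opposite direction (highest severity first) gives the alternative ordering the study compared.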


2020 ◽  
Vol 48 (2) ◽  
pp. 113-123
Author(s):  
Anthony F. Milano

Objective.— To update trends in incidence, prevalence, short- and long-term survival, and mortality of esophageal cancer using the statistical database of SEER*Stat 8.3.4 for diagnosis years 1973-2014, employing multiple case-selection variables. Methods.— A retrospective, population-based study using nationally representative data from the National Cancer Institute's (NCI) Surveillance, Epidemiology, and End Results (SEER) program to evaluate 83,658 cases of esophageal cancer for diagnosis years 1973-2014, comparing the variables of age, sex, race, stage, grade, cohort entry time-period, disease duration, and two histologic oncotypes. Relative survival statistics were analyzed in two cohorts: 1973-1994 and 1995-2014. Survival statistics were derived from the SEER*Stat Database: Incidence – SEER 9 Regs Research Data, November 2016 Submission (1973-2014) <Katrina/Rita Population Adjustment>, Released April 2017 (Ref. 9). Case frequency and incidence data, derived from the SEER program, were used to design the table format and number of pages for this report. Results.— In a total of 83,658 cases of esophageal cancer in the United States for diagnosis years 1973-2014, the variables of age, sex, race, stage, grade, cohort entry time-period, disease duration, and two histologic oncotypes were compared. Mean age was 66.5 years in males, 70.1 years in females, and 67.2 years overall. More than 85% of incident cases occurred between ages 55 and 85+ years, with the peak in males at 65-69 years (59.4%) and in females at 70-74 years (60.5%). The overall annual US death rate from 1975-2014 increased slightly from 3.69 to 3.99 per 100,000 per year, and excess mortality remains exceedingly high. Of the 83,658 invasive cases, 82.6% were clinically staged and 79.4% were histologically graded.
Conclusions.— Relative frequency, incidence and time trends, and the clinical, demographic, and secular variables of age, sex, race, stage, grade, cohort-entry time-periods, and predominant clinical oncotypes were comparatively analyzed to provide a comprehensive medical-actuarial assessment of esophageal cancer survival and mortality in the 1973-2014 time frame.
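For reference, the relative survival statistic used in SEER analyses is the ratio of observed survival in the cancer cohort to the survival expected in a demographically comparable general population; a minimal sketch with illustrative numbers, not SEER data.

```python
# Relative survival: observed cohort survival divided by expected
# survival for a general population matched on age, sex, and race.
# The values below are illustrative placeholders.
def relative_survival(observed, expected):
    """Ratio of observed to expected survival proportions."""
    return observed / expected

# e.g., 5-year observed survival of 18% vs. 90% expected survival
print(round(relative_survival(0.18, 0.90), 2))  # 0.2
```

A relative survival well below 1 quantifies the excess mortality attributable to the cancer, net of background mortality.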


2009 ◽  
Vol 27 (15_suppl) ◽  
pp. 4047-4047
Author(s):  
M. E. Valsecchi ◽  
J. Leighton ◽  
W. Tester

4047 Background: Colorectal cancer is the fourth most common malignancy in the United States. The single most important prognostic factor is lymph node involvement. Multiple guidelines recommend that a minimum of 12 nodes be sampled in order to ensure accurate staging and treatment. However, this standard-of-care requirement is not always achieved. The objective of this study is to identify potentially modifiable factors that may explain this gap between the optimal approach and routine practice. Methods: The medical charts of all patients treated for stage I-III colorectal cancer between 1999 and 2007 at Albert Einstein Medical Center were reviewed. The association between multiple variables and the presence of ≥12 lymph nodes reported was examined using logistic regression models. Results: A total of 337 patients were included; 173 (51%) had ≥12 lymph nodes retrieved, with a mean of 12.7 (SD±7.6). Demographic characteristics: 78% older than 60 years; 161 patients (47.8%) male; white (27%), black (67%), and other race (6%). In a univariate analysis, the following variables were statistically associated with ≥12 lymph nodes reported: colon size (20.6±14.7 vs. 29.9±23.1 cm, P<.001); mesocolon thickness (3.8±0.9 vs. 4.2±0.9 cm, P<.001); tumor size (4.14±2.3 vs. 4.6±2.1, P=.03); site of tumor (right vs. left, P<.001); pathologist (P=.06); pathologist's assistant (P=.006); type of surgery (right or sub-total colectomy vs. others, P<.001); individual surgeon (P=.009). The results of the multivariate logistic regression analysis, adjusting for age, sex, and race, are presented in the Table. Conclusions: This study showed that multiple factors influence the number of lymph nodes sampled. The roles of the surgeon, the pathologist, and especially the pathologist's assistant are potentially improvable factors with appropriate education. [Table: see text] No significant financial relationships to disclose.

