Interpretation of Compositional Regression with Application to Time Budget Analysis

2018
Vol 47 (2)
pp. 3-19
Author(s):  
Ivo Muller
Karel Hron
Eva Fiserova
Jan Smahaj
Panajotis Cakirpaloglu
...  

Regression with compositional response or covariates, or even regression between parts of a composition, is frequently employed in social sciences. Among other possible applications, it may help to reveal interesting features in time allocation analysis. As individual activities represent relative contributions to the total amount of time, statistical processing of raw data (frequently represented directly as proportions or percentages) using standard methods may lead to biased results. Specific geometrical features of time budget variables are captured by the logratio methodology of compositional data, whose aim is to build (preferably orthonormal) coordinates to be applied with popular statistical methods. The aim of this paper is to present recent tools of regression analysis within the logratio methodology and apply them to reveal potential relationships among psychometric indicators in a real-world data set. In particular, orthogonal logratio coordinates have been introduced to enhance the interpretability of coefficients in regression models.
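
To make the coordinate idea concrete, here is a minimal sketch (not the authors' code) of mapping a time-budget composition into orthonormal pivot (ilr) logratio coordinates and regressing a response on them; the composition parts, sample size, and effect size are invented for illustration.

```python
# Sketch: orthonormal pivot (ilr) coordinates of a composition + OLS regression.
# All data below are simulated; only the coordinate construction is generic.
import numpy as np
from sklearn.linear_model import LinearRegression

def pivot_coordinates(X):
    """Map D-part compositions (rows of X, strictly positive) to D-1
    orthonormal pivot (ilr) coordinates."""
    X = np.asarray(X, dtype=float)
    n, D = X.shape
    logX = np.log(X)
    Z = np.empty((n, D - 1))
    for j in range(D - 1):
        # log of the geometric mean of the remaining parts x_{j+2}, ..., x_D
        gm = logX[:, j + 1:].mean(axis=1)
        Z[:, j] = np.sqrt((D - j - 1) / (D - j)) * (logX[:, j] - gm)
    return Z

rng = np.random.default_rng(0)
# hypothetical time budget: work, leisure, sleep, other (proportions of a day)
comp = rng.dirichlet([4, 2, 3, 1], size=100)
Z = pivot_coordinates(comp)
y = 1.5 * Z[:, 0] + rng.normal(scale=0.3, size=100)   # toy psychometric outcome

model = LinearRegression().fit(Z, y)
print(model.coef_)   # first coefficient: outcome vs. "work relative to the rest"
```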

Author(s):  
K. G. van den Boogaart
P. Filzmoser
K. Hron
M. Templ
R. Tolosana-Delgado

Abstract Compositional data carry their relevant information in the relationships (logratios) between the compositional parts. It is shown how this source of information can be used in regression modeling, where the composition may form the response, the explanatory part, or even both. An essential step in setting up a regression model is deciding how the composition(s) enter the model. Here, balance coordinates are constructed that support an interpretation of the regression coefficients and allow hypotheses of subcompositional independence to be tested. Both classical least-squares regression and robust MM regression are treated, and they are compared across different regression models on a real data set from a geochemical mapping project.
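
As an illustration of the balance idea, the following sketch builds a single balance coordinate contrasting two hypothetical groups of parts and compares ordinary least squares with a robust M-estimator from statsmodels, used here only as a rough stand-in for the MM regression treated above; the grouping, data, and outliers are all invented.

```python
# Sketch: one balance coordinate as a regressor; OLS vs. a robust M-estimator.
import numpy as np
import statsmodels.api as sm

def balance(X, group1, group2):
    """sqrt(r*s/(r+s)) * log(gm(parts in group1) / gm(parts in group2))."""
    logX = np.log(np.asarray(X, dtype=float))
    r, s = len(group1), len(group2)
    return np.sqrt(r * s / (r + s)) * (
        logX[:, group1].mean(axis=1) - logX[:, group2].mean(axis=1)
    )

rng = np.random.default_rng(1)
comp = rng.dirichlet([3, 2, 2, 1, 1], size=200)     # e.g. 5 element concentrations
b = balance(comp, group1=[0, 1], group2=[2, 3, 4])  # hypothetical grouping
y = 0.8 * b + rng.normal(scale=0.5, size=200)
y[:5] += 5                                          # a few gross outliers

X = sm.add_constant(b)
print(sm.OLS(y, X).fit().params)                    # pulled by the outliers
print(sm.RLM(y, X, M=sm.robust.norms.TukeyBiweight()).fit().params)  # downweights them
```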


2021
pp. 095679762097165
Author(s):  
Matthew T. McBee
Rebecca J. Brand
Wallace E. Dixon

In 2004, Christakis and colleagues published an article in which they claimed that early childhood television exposure causes later attention problems, a claim that continues to be frequently promoted by the popular media. Using the same National Longitudinal Survey of Youth 1979 data set ( N = 2,108), we conducted two multiverse analyses to examine whether the finding reported by Christakis and colleagues was robust to different analytic choices. We evaluated 848 models, including logistic regression models, linear regression models, and two forms of propensity-score analysis. If the claim were true, we would expect most of the justifiable analyses to produce significant results in the predicted direction. However, only 166 models (19.6%) yielded a statistically significant relationship, and most of these employed questionable analytic choices. We concluded that these data do not provide compelling evidence of a harmful effect of TV exposure on attention.
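
The multiverse logic can be sketched as a loop over analytic decisions; the following toy example (simulated data and a tiny grid of choices, not the authors' 848-model specification set) fits one model per combination and tallies how many yield a significant TV-exposure coefficient.

```python
# Sketch of a multiverse analysis over analytic choices (all data simulated).
import itertools
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 2108
df = pd.DataFrame({
    "tv_hours": rng.gamma(2.0, 1.5, n),
    "attention": rng.normal(size=n),            # later attention score
    "mom_educ": rng.integers(8, 18, n),
    "birth_order": rng.integers(1, 5, n),
})

cutoffs = [1.0, 1.2, 1.5]                       # "attention problem" thresholds
covariate_sets = ["", " + mom_educ", " + mom_educ + birth_order"]
families = ["logistic", "linear"]               # dichotomized vs. continuous outcome

significant = 0
specs = list(itertools.product(cutoffs, covariate_sets, families))
for cut, covs, fam in specs:
    if fam == "logistic":
        df["outcome"] = (df["attention"] > cut).astype(int)
        fit = smf.logit("outcome ~ tv_hours" + covs, data=df).fit(disp=0)
    else:                                       # cutoff is irrelevant for the linear family
        fit = smf.ols("attention ~ tv_hours" + covs, data=df).fit()
    significant += fit.pvalues["tv_hours"] < 0.05

print(f"{significant}/{len(specs)} specifications significant")
```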


2021
pp. 1-13
Author(s):  
Hailin Liu
Fangqing Gu
Zixian Lin

Transfer learning methods exploit similarities between different datasets to improve performance on a target task by transferring knowledge from source tasks. “What to transfer” is a central research issue in transfer learning. Existing transfer learning methods generally need to specify the shared parameters by incorporating human knowledge. In many real applications, however, it is not known beforehand which parameters can be shared. A transfer learning model is essentially a special multi-objective optimization problem. Consequently, this paper proposes a novel auto-sharing parameter technique for transfer learning based on multi-objective optimization and solves the resulting problem with a multi-swarm particle swarm optimizer. Each task objective is optimized by its own sub-swarm. The current best particle from the sub-swarm of the target task is used to guide the search of the particles of the source tasks, and vice versa. The target and source tasks are thus solved jointly by sharing the information of the best particles, which acts as an inductive bias. Experiments on several synthetic data sets and two real-world data sets (a school data set and a landmine data set) show that the proposed algorithm is effective.
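
A schematic sketch of the multi-swarm idea follows: two sub-swarms each optimize a toy task objective, and the other swarm's best particle enters the velocity update as an extra attractor, mimicking the cross-task guidance described above. The loss functions, coefficients, and swarm settings are placeholders, not the paper's configuration.

```python
# Sketch: two cooperating PSO sub-swarms with a cross-task attractor term.
import numpy as np

rng = np.random.default_rng(3)
dim, n_particles, iters = 5, 20, 200

def loss_target(w): return float(np.sum((w - 1.0) ** 2))   # toy target-task objective
def loss_source(w): return float(np.sum((w - 0.8) ** 2))   # toy source-task objective

def init_swarm():
    pos = rng.normal(size=(n_particles, dim))
    return {"pos": pos, "vel": np.zeros_like(pos),
            "pbest": pos.copy(), "pbest_val": np.full(n_particles, np.inf)}

swarms = {"target": (init_swarm(), loss_target),
          "source": (init_swarm(), loss_source)}

def gbest(swarm):
    return swarm["pbest"][np.argmin(swarm["pbest_val"])]

for _ in range(iters):
    for name, (sw, loss) in swarms.items():
        other = swarms["source" if name == "target" else "target"][0]
        vals = np.array([loss(p) for p in sw["pos"]])
        improved = vals < sw["pbest_val"]
        sw["pbest"][improved] = sw["pos"][improved]
        sw["pbest_val"][improved] = vals[improved]
        r1, r2, r3 = rng.random((3, n_particles, 1))
        sw["vel"] = (0.6 * sw["vel"]
                     + 1.2 * r1 * (sw["pbest"] - sw["pos"])
                     + 1.2 * r2 * (gbest(sw) - sw["pos"])
                     + 0.5 * r3 * (gbest(other) - sw["pos"]))   # cross-task guidance
        sw["pos"] = sw["pos"] + sw["vel"]

print(gbest(swarms["target"][0]))   # pulled toward 1.0, nudged by the source swarm
```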


Author(s):  
Shaoqiang Wang
Shudong Wang
Song Zhang
Yifan Wang

Abstract The aim of this work is to detect dynamic EEG signals automatically and thereby reduce the time cost of epilepsy diagnosis. In recognizing epileptic electroencephalogram (EEG) signals, traditional machine learning and statistical methods require manual feature engineering to achieve good results on a single data set, and the manually selected features may carry bias and cannot guarantee validity and generalizability on real-world data. In practical applications, deep learning methods can largely free practitioners from feature engineering: as data quality and quantity grow, the model learns automatically and continues to improve. In addition, deep learning can extract many features that are difficult for humans to perceive, making the algorithm more robust. Building on the design of the ResNeXt deep neural network, this paper proposes a Time-ResNeXt network structure suited to time-series EEG epilepsy detection. The accuracy of Time-ResNeXt in detecting EEG epilepsy reaches 91.50%. The Time-ResNeXt network structure achieves state-of-the-art performance on the benchmark Bern-Barcelona dataset and has great potential for improving clinical practice.
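
For orientation, here is a rough PyTorch sketch of a ResNeXt-style bottleneck block adapted to 1-D EEG sequences, where grouped convolutions provide the cardinality; the layer sizes and input shape are illustrative and are not taken from the Time-ResNeXt paper.

```python
# Sketch: ResNeXt-style bottleneck block for 1-D signals (illustrative sizes).
import torch
import torch.nn as nn

class ResNeXtBlock1D(nn.Module):
    def __init__(self, channels, cardinality=8, bottleneck=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(channels, bottleneck, kernel_size=1, bias=False),
            nn.BatchNorm1d(bottleneck), nn.ReLU(inplace=True),
            # grouped convolution = aggregated transformations (the "cardinality")
            nn.Conv1d(bottleneck, bottleneck, kernel_size=3, padding=1,
                      groups=cardinality, bias=False),
            nn.BatchNorm1d(bottleneck), nn.ReLU(inplace=True),
            nn.Conv1d(bottleneck, channels, kernel_size=1, bias=False),
            nn.BatchNorm1d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):                      # x: (batch, channels, time)
        return self.relu(x + self.body(x))     # residual connection

x = torch.randn(4, 32, 512)                    # 4 EEG segments, 32 channels, 512 samples
print(ResNeXtBlock1D(32)(x).shape)             # torch.Size([4, 32, 512])
```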


2021
pp. 107110072110581
Author(s):  
Wenye Song
Naohiro Shibuya
Daniel C. Jupiter

Background: Ankle fractures in patients with diabetes mellitus have long been recognized as a challenge to practicing clinicians. Ankle fracture patients with diabetes may experience prolonged healing, higher risk of hardware failure, an increased risk of wound dehiscence and infection, and higher pain scores pre- and postoperatively, compared to patients without diabetes. However, the duration of opioid use among this patient cohort has not been previously evaluated. The purpose of this study is to retrospectively compare the time span of opioid utilization between ankle fracture patients with and without diabetes mellitus. Methods: We conducted a retrospective cohort study using our institution’s TriNetX database. A total of 640 ankle fracture patients were included in the analysis, of whom 73 had diabetes. All dates of opioid use for each patient were extracted from the data set, including the first and last date of opioid prescription. Descriptive analysis and logistic regression models were employed to explore the differences in opioid use between patients with and without diabetes after ankle fracture repair. A 2-tailed P value of .05 was set as the threshold for statistical significance. Results: Logistic regression models revealed that patients with diabetes are less likely to stop using opioids within 90 days, or within 180 days, after repair compared to patients without diabetes. Female sex, neuropathy, and prefracture opioid use are also associated with prolonged opioid use after ankle fracture repair. Conclusion: In our study cohort, ankle fracture patients with diabetes were more likely to require prolonged opioid use after fracture repair. Level of Evidence: Level III, prognostic.
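
A minimal sketch of the kind of model described above might look as follows; the column names, prevalences, and effect sizes are hypothetical stand-ins, not the TriNetX extract.

```python
# Sketch: logistic regression of stopping opioids within 90 days of repair
# on diabetes, sex, neuropathy, and prefracture opioid use (simulated data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 640
df = pd.DataFrame({
    "diabetes": rng.binomial(1, 73 / 640, n),
    "female": rng.binomial(1, 0.5, n),
    "neuropathy": rng.binomial(1, 0.10, n),
    "prefracture_opioid": rng.binomial(1, 0.15, n),
})
logit_p = 1.5 - 0.8 * df["diabetes"] - 0.4 * df["neuropathy"]   # toy effects
df["stopped_within_90d"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

fit = smf.logit("stopped_within_90d ~ diabetes + female + neuropathy"
                " + prefracture_opioid", data=df).fit(disp=0)
print(np.exp(fit.params))   # odds ratios for stopping within 90 days
```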


Politics
2018
Vol 39 (4)
pp. 464-479
Author(s):  
Gert-Jan Put
Jef Smulders
Bart Maddens

This article investigates the effect of candidates exhibiting local personal vote-earning attributes (PVEA) on the aggregate party vote share at the district level. Previous research has often assumed that packing ballot lists with localized candidates increases the aggregate party vote and seat shares. We present a strict empirical test of this argument by analysing the relative electoral swing of ballot lists at the district level, a measure of change in party vote shares that controls for the national party trend and previous party results in the district. The analysis is based on data on 7,527 candidacies from six Belgian regional and federal election cycles between 2003 and 2014, aggregated into an original data set of 223 ballot lists. The ordinary least squares (OLS) regression models do not show a significant effect of candidates with local PVEA on the relative electoral swing of ballot lists. However, the results suggest that ballot lists do benefit electorally if candidates with local PVEA are geographically distributed across different municipalities in the district.
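
One plausible operationalization of a relative electoral swing (an assumption for illustration only; the paper's exact formula may differ) is the district-level change in a party's vote share minus the party's national change between elections:

```python
# Sketch: a simple "relative swing" measure on two hypothetical ballot lists.
import pandas as pd

lists = pd.DataFrame({
    "party_share_t": [0.24, 0.31],          # district results, election t
    "party_share_t_minus_1": [0.22, 0.35],  # district results, previous election
    "national_share_t": [0.26, 0.30],
    "national_share_t_minus_1": [0.25, 0.33],
})
lists["relative_swing"] = ((lists["party_share_t"] - lists["party_share_t_minus_1"])
                           - (lists["national_share_t"] - lists["national_share_t_minus_1"]))
print(lists["relative_swing"])
```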


2018
Vol 35 (8)
pp. 1508-1518
Author(s):  
Rosembergue Pereira Souza
Luiz Fernando Rust da Costa Carmo
Luci Pirmez

Purpose The purpose of this paper is to present a procedure for finding unusual patterns in accredited tests using a rapid processing method for analyzing video records. The procedure uses the temporal differencing technique for object tracking and considers only frames not identified as statistically redundant. Design/methodology/approach An accreditation organization is responsible for accrediting facilities to undertake testing and calibration activities. Periodically, such organizations evaluate accredited testing facilities. These evaluations could use video records and photographs of the tests performed by the facility to judge their conformity to technical requirements. To validate the proposed procedure, a real-world data set with video records from accredited testing facilities in the field of vehicle safety in Brazil was used. The processing time of this proposed procedure was compared with the time needed to process the video records in a traditional fashion. Findings With an appropriate threshold value, the proposed procedure could successfully identify video records of fraudulent services. Processing time was faster than when a traditional method was employed. Originality/value Manually evaluating video records is time consuming and tedious. This paper proposes a procedure to rapidly find unusual patterns in videos of accredited tests with a minimum of manual effort.
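
The general idea can be sketched with OpenCV: compute frame-to-frame temporal differences and skip frames whose change is negligible, passing only the remaining frames to the tracking and pattern checks. The file name and threshold below are placeholders, not the paper's settings.

```python
# Sketch: temporal differencing with redundant-frame skipping (placeholder inputs).
import cv2

cap = cv2.VideoCapture("accredited_test_record.mp4")   # placeholder file name
ok, prev = cap.read()
if not ok:
    raise SystemExit("could not open video")
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
kept = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev)
    prev = gray
    # treat the frame as redundant when the mean absolute change is tiny
    if diff.mean() < 2.0:                               # illustrative threshold
        continue
    kept += 1
    # ... object tracking / unusual-pattern checks would run here ...

cap.release()
print(f"frames retained for analysis: {kept}")
```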


2011
Vol 57 (3)
pp. 223-246
Author(s):  
Solomon A. Tadesse
Burt P. Kotler

Nubian ibex (Capra nubiana) prefer steep terrain in their landscape to reduce the risks of predation and human nuisance disturbance. They also use vigilance and time allocation to manage predation risk. We studied time budgets and habitat selection of Nubian ibex to: (1) identify the habitat variables to which Nubian ibex were behaviorally responsive; (2) investigate how the time budget responses of Nubian ibex were related to season, slope condition, group size, and sex-age structure; and (3) develop behavior-based models that account for variation in the behaviors of Nubian ibex across the landscape and seasons. To quantify time budgets, we made regular field observations of focal individuals of Nubian ibex classified according to their habitat, group size, sex, and age. For each focal observation, we quantified environmental variables thought to influence the behavioral responses of ibex. We then developed behavioral models by correlating the proportion of behaviors measured in focal-animal observations with the influential environmental variables. The behaviors of Nubian ibex varied significantly with sex and age structure, season, habitat type, and slope conditions. Adult females were more vigilant than adult males, especially in spring, which coincides with breeding and nursing activities. Given the characteristics of the habitat, ibex behave so as to minimize risks of predation and human nuisance disturbance while maximizing food intake.


2020
Vol 7 (2)
pp. 524-553
Author(s):  
Sunday Oke
Stephen Chidera Nwafor
Chris Abiodun Ayanladun

In an earlier article, the central composite design was applied to determine the geometrical features of casts in a two-phase transformation process used to produce automobile wheel covers, in which the A356 alloy is reinforced with organic substances to enhance composite properties. This article re-examines the assumptions of that work and revises and expands the response surface optimisation using a different method, the Box-Behnken design (BBD), to facilitate a comprehensive treatment of the sand casting product parameters. Casting geometrical optimisation is modelled in terms of the lengths, breadths, widths, heights, and densities of casts and weight loss, varied at three discrete levels. The parameters are translated into coded levels (−1, 0, 1) with specified actual, minimum and maximum values. The framework, validated against published literature data, indicates its feasibility in a real-life setting. The article assesses the effects of the casting geometry parameters on the responses and examines the predictive accuracy of the regression models deployed. It is concluded that the BBD and the regression models are adequate and predict correctly. The BBD can be applied by composite developers to improve casting dimensional accuracy and economics.
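
For reference, the following sketch (numpy and itertools only) constructs Box-Behnken design points in coded units (−1, 0, +1) for three casting-geometry factors and decodes them to actual values; the factor names and ranges are hypothetical.

```python
# Sketch: Box-Behnken design points in coded units, then decoded to actual values.
import itertools
import numpy as np

def box_behnken(k, n_center=3):
    runs = []
    for i, j in itertools.combinations(range(k), 2):
        # full 2^2 factorial on each factor pair, all other factors at 0
        for a, b in itertools.product((-1, 1), repeat=2):
            row = np.zeros(k)
            row[i], row[j] = a, b
            runs.append(row)
    runs.extend(np.zeros(k) for _ in range(n_center))   # center points
    return np.array(runs)

design = box_behnken(3)                          # e.g. length, width, height (coded)
lows = np.array([80.0, 40.0, 10.0])              # hypothetical low levels, mm
highs = np.array([120.0, 60.0, 20.0])            # hypothetical high levels, mm
actual = lows + (design + 1) / 2 * (highs - lows)   # decode -1..+1 to actual units
print(design.shape, actual[:4])                  # 15 runs for 3 factors (12 + 3 centers)
```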

