Implementing Standard Software with ARIS Models

1999 ◽  
pp. 177-183
Author(s):  
Peter Mattheis ◽  
Wolfram Jost
2019 ◽  
Vol 10 (01) ◽  
pp. 060-065
Author(s):  
Teresa O'Leary ◽  
June Weiss ◽  
Benjamin Toll ◽  
Cynthia Brandt ◽  
Steven Bernstein

Background Investigators conducting prospective clinical trials must report patient flow using the Consolidated Standards of Reporting Trials (CONSORT) statement. Depending on how data are collected, this can be a laborious, time-intensive process. However, because many trials enter data electronically, CONSORT diagrams may be generated in an automated fashion. Objective Our objective was to use off-the-shelf software to develop a technique for generating CONSORT diagrams automatically. Methods During a recent trial, data were entered into FileMaker Pro, a commercially available software package, at enrollment and at three waves of follow-up. Patient-level data were coded to automatically generate CONSORT diagrams for use by the study team. Results From August 2012 to July 2014, 1,044 participants were enrolled. CONSORT diagrams were generated weekly for study team meetings to track follow-ups at 1, 6, and 12 months, at which 960 (92%), 921 (90%), and 871 (88%) participants, respectively, had been contacted or were deceased. Reasons for loss to follow-up were captured at each wave. Conclusion CONSORT diagrams can be generated with standard software for any trial and can facilitate data collection, project management, and reporting.
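
The abstract does not include the FileMaker Pro scripts themselves. As a rough illustration of the idea, the Python sketch below tallies hypothetical patient-level records (column names such as `contacted_1m` and `lost_reason` are invented for the example) and prints a text-only CONSORT flow:

```python
import pandas as pd

# Hypothetical patient-level export; column names are invented for illustration.
records = pd.DataFrame({
    "participant_id": [1, 2, 3, 4, 5, 6],
    "enrolled":       [True, True, True, True, True, True],
    "contacted_1m":   [True, True, True, False, True, True],
    "contacted_6m":   [True, True, False, False, True, True],
    "contacted_12m":  [True, False, False, False, True, True],
    "lost_reason":    [None, "moved", "withdrew", "unreachable", None, None],
})

def consort_counts(df):
    """Tally the boxes of a simple CONSORT flow from patient-level data."""
    counts = {"Enrolled": int(df["enrolled"].sum())}
    for wave in ("1m", "6m", "12m"):
        counts[f"Followed up at {wave}"] = int(df[f"contacted_{wave}"].sum())
    counts["Lost to follow-up"] = int(df["lost_reason"].notna().sum())
    return counts

def print_consort(counts):
    """Print a text-only flow; a production version might render boxes with graphviz."""
    for label, n in counts.items():
        print(f"[ {label}: {n} ]")
        print("    |")
    print("[ Analyzed ]")

print_consort(consort_counts(records))
```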


2005 ◽  
Vol 37 (1) ◽  
pp. 127-132 ◽  
Author(s):  
P. Jedrusik ◽  
H. Schulze ◽  
C. D. Claussen ◽  
K. Golka

2014 ◽  
Vol 14 (9) ◽  
pp. 2321-2335 ◽  
Author(s):  
N. M. Neykov ◽  
P. N. Neytchev ◽  
W. Zucchini

Abstract. Stochastic daily precipitation models are commonly used to generate scenarios of climate variability or change on a daily timescale. The standard models consist of two components describing the occurrence and intensity series, respectively. Binary logistic regression is used to fit the occurrence data, and the intensity series is modeled using a continuous-valued right-skewed distribution, such as gamma, Weibull or lognormal. The precipitation series is then modeled using the joint density, and standard software for generalized linear models can be used to perform the computations. A drawback of these precipitation models is that they do not produce a sufficiently heavy upper tail for the distribution of daily precipitation amounts; they tend to underestimate the frequency of large storms. In this study, we adapted the approach of Furrer and Katz (2008) based on hybrid distributions in order to correct for this shortcoming. In particular, we applied hybrid gamma–generalized Pareto (GP) and hybrid Weibull–GP distributions to develop a stochastic precipitation model for daily rainfall at Ihtiman in western Bulgaria. We report the results of simulations designed to compare the models based on the hybrid distributions and those based on the standard distributions. Some potential difficulties are outlined.
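
The exact Furrer–Katz hybrid density imposes a continuity constraint at the threshold; the sketch below uses a simplified tail-replacement scheme instead (gamma body, generalized Pareto excesses above a fixed threshold), with all covariates and parameter values purely illustrative:

```python
import numpy as np
from scipy import stats
from scipy.special import expit

rng = np.random.default_rng(42)

def simulate_daily_precip(n_days, x, beta_occ, gamma_shape, gamma_scale,
                          u, gp_shape, gp_scale):
    """Simulate daily precipitation: logistic occurrence + gamma body with a GP tail.

    Simplified stand-in for the hybrid gamma-GP model: gamma draws exceeding the
    threshold u are replaced by u + a generalized Pareto excess, which thickens
    the upper tail but is not the exact Furrer-Katz hybrid density.
    """
    # Occurrence: wet/dry indicator from a logistic regression on covariates x.
    p_wet = expit(x @ beta_occ)
    wet = rng.random(n_days) < p_wet

    # Intensity: gamma body, GP tail above the threshold u.
    amounts = stats.gamma.rvs(gamma_shape, scale=gamma_scale, size=n_days,
                              random_state=rng)
    tail = amounts > u
    amounts[tail] = u + stats.genpareto.rvs(gp_shape, scale=gp_scale,
                                            size=tail.sum(), random_state=rng)
    return np.where(wet, amounts, 0.0)

# Illustrative covariates (intercept + seasonal harmonic) and parameter values.
days = np.arange(365)
X = np.column_stack([np.ones(365), np.sin(2 * np.pi * days / 365)])
precip = simulate_daily_precip(365, X, beta_occ=np.array([-0.4, 0.8]),
                               gamma_shape=0.8, gamma_scale=4.0,
                               u=20.0, gp_shape=0.2, gp_scale=8.0)
print(f"wet days: {(precip > 0).sum()}, max daily amount: {precip.max():.1f} mm")
```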


2021 ◽  
Author(s):  
Joshua N. Sampson ◽  
Paul S. Albert ◽  
Mark P. Purdue

Abstract Background: We consider the analysis of nested, matched case-control studies that have multiple biomarker measurements per individual. We propose a simple approach for estimating the marginal relationship between a biomarker measured at a single time point and the risk of an event. We know of no other standard software package that can perform such analyses while explicitly accounting for the matching. Results: We propose an application of conditional logistic regression (CLR) that can include all measurements and uses a robust variance estimator. We compare our approach to other methods, such as performing CLR with only the first measurement, CLR with an average of all measurements, and generalized estimating equations (GEE). In simulations, our approach is significantly more powerful than CLR with one measurement or an average of all measurements, and has power similar to GEE while correctly accounting for the matching. We then apply our approach to the CLUE cohort to show that an increased level of the immune marker sCD27 is associated with non-Hodgkin lymphoma (NHL) and, by evaluating the strength of the association as a function of time until diagnosis, that an increased level is likely an effect of the disease rather than a cause. The approach can be implemented with the R function clogitRV, available at https://github.com/sampsonj74/clogitRV. Conclusion: We offer an approach and software for analyzing matched case-control studies with multiple measurements. We demonstrate that these methods are accurate, precise, and statistically powerful.
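
The authors' implementation is the R function clogitRV linked above. A rough Python analogue of the general idea, using hypothetical data and a cluster bootstrap over matched sets in place of the paper's robust variance estimator, might look like:

```python
import numpy as np
import pandas as pd
from statsmodels.discrete.conditional_models import ConditionalLogit

rng = np.random.default_rng(0)

# Hypothetical matched case-control data: 200 matched sets, one case and one
# control per set, three biomarker measurements per person, in long format.
rows = []
for s in range(200):
    for case in (1, 0):
        for _ in range(3):
            rows.append({"set": s, "case": case,
                         "biomarker": rng.normal(loc=0.5 * case, scale=1.0)})
df = pd.DataFrame(rows)

def fit_clr(data):
    """Conditional logistic regression with every measurement as its own record,
    conditioning on the matched set (groups=)."""
    model = ConditionalLogit(data["case"], data[["biomarker"]], groups=data["set"])
    return float(np.asarray(model.fit().params)[0])

beta = fit_clr(df)

# Cluster bootstrap over matched sets: a simple stand-in for a sandwich variance
# that respects the repeated measurements within each person and set.
sets = df["set"].unique()
boot = []
for _ in range(200):
    sampled = rng.choice(sets, size=len(sets), replace=True)
    bdf = pd.concat(
        [df[df["set"] == s].assign(set=i) for i, s in enumerate(sampled)],
        ignore_index=True)  # relabel sets so duplicated draws stay distinct strata
    boot.append(fit_clr(bdf))

print(f"log-odds ratio per unit of biomarker: {beta:.3f} "
      f"(cluster-bootstrap SE {np.std(boot, ddof=1):.3f})")
```

Resampling whole matched sets keeps each person's repeated measurements together, which is the dependence the paper's robust variance estimator is meant to account for.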


2021 ◽  
Vol 1 (3) ◽  
pp. 76-86
Author(s):  
R.O. Maksimov ◽  
◽  
I.V. Chichekin ◽  

To determine the maximum loads acting in the rear air suspension of a truck at the early stages of design, computer modeling based on solving the equations of rigid-body dynamics was carried out in the RecurDyn software. The components of the developed virtual test bench, including hinges, power connections, drive axles, a wheel-hub assembly with a wheel, and a support platform, are considered in detail. The test bench is controlled by a mathematical model created in the rigid-body dynamics environment and coupled to the solid suspension model through the application's standard software tools. The use of such a test bench makes it possible to determine the loads in the hinges and power connections of the suspension, to determine the mutual positions of the links for each load mode, and to increase the accuracy of the load calculation in comparison with a planar kinematic and force analysis. The mathematical model of the virtual test bench allows numerous parametric studies of the suspension to be carried out without the use of expensive full-scale prototypes. This makes it possible, at the early stages of design, to identify all hazardous modes, select rational parameters for the elements, and reduce design costs. The paper presents the results of modeling the operation of a virtual test bench with an air suspension in the most typical loading modes and identifies the most dangerous ones. The efficiency and adequacy of the mathematical model of the suspension were demonstrated. Examples are given of determining the forces in all the joints of the structure and of selecting the maximum loads for design calculations when designing the air suspension of a vehicle.
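
The RecurDyn multibody model is not reproduced here; as a greatly simplified illustration of estimating suspension loads by time-domain simulation, the sketch below integrates a planar two-degree-of-freedom quarter-car model over a step bump, with all parameters purely illustrative:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative quarter-car parameters (not the truck studied in the paper).
m_s, m_u = 1200.0, 120.0      # sprung / unsprung mass, kg
k_air, c_d = 2.0e5, 1.2e4     # linearized air-spring stiffness (N/m) and damping (N*s/m)
k_t = 1.0e6                   # tyre radial stiffness, N/m

def road(t):
    """Road input: a 0.08 m step bump reached at t = 1 s (one typical load case)."""
    return 0.08 if t >= 1.0 else 0.0

def rhs(t, y):
    # Displacements are measured from static equilibrium, so gravity drops out.
    zs, vs, zu, vu = y
    f_spring = k_air * (zu - zs)          # force transmitted through the air spring
    f_damper = c_d * (vu - vs)
    f_tyre = k_t * (road(t) - zu)
    return [vs, (f_spring + f_damper) / m_s,
            vu, (f_tyre - f_spring - f_damper) / m_u]

sol = solve_ivp(rhs, (0.0, 3.0), [0.0, 0.0, 0.0, 0.0], max_step=1e-3)
zs, vs, zu, vu = sol.y
spring_force = k_air * (zu - zs) + c_d * (vu - vs)
print(f"peak dynamic air-spring force: {np.abs(spring_force).max() / 1000:.1f} kN")
```

A full multibody model additionally resolves the joint reactions and link positions for each load mode, which is what the virtual test bench in the paper provides.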


2014 ◽  
Vol 70 (a1) ◽  
pp. C1269-C1269
Author(s):  
Ethan Merritt

"Tools for validating structural models of proteins are relatively mature and widely implemented. New protein crystallographers are introduced early on to the importance of monitoring conformance with expected φ/ψ values, favored rotamers, and local stereochemistry. The protein model is validated by the PDB at the time of deposition using criteria that are also available in the standard software packages used to refine the model being deposited. By contrast, crystallographers are typically much less familiar with procedures to validate key non-protein components of the model – cofactors, substrates, inhibitors, etc. It has been estimated that as many as a third of all ligands in the PDB exhibit preventable errors of some sort, ranging from minor deviations in expected bond angles to wholly implausible placement in the binding pocket. Following recommendations from the wwPDB Validation Task Force, the PDB recently began validating ligand geometry as an integral part of deposition processing. This means that many crystallographers will soon receive for the first time a ""grade"" on the quality of ligands in the structure they have just deposited. Some will be surprised, as I was following my first PDB deposition of 2014, at how easily bad ligand geometry can slip through the cracks in supposedly robust structure refinement protocols that their lab has used for many years. I will illustrate use of current tools for generating ligand restraints to guide model refinement. One is the jligand+coot+cprodrg pipeline integrated into the CCP4 suite. Another is the Grade web server provided as a community resource by Global Phasing Ltd. Furthermore I will show examples from recent in-house refinements of how things can still go wrong even if you do use these tools, and how we recovered. The new PDB deposition checks may expose errors in your ligand descriptions after the fact. This presentation may help you avoid introducing those errors in the first place."


2006 ◽  
Vol 165 (4) ◽  
pp. 453-463 ◽  
Author(s):  
D. L. Miglioretti ◽  
P. J. Heagerty

Author(s):  
Xinsheng Hu ◽  
Ji Zhou ◽  
Jun Yu ◽  
Baochang Shi ◽  
Zhijian Zong

Abstract A new algorithm for solving the optimal linkage function-generation problem is proposed. The algorithm is simple in form, easy to use, and more reliable and accurate than other algorithms reported for linkage synthesis. Existing standard software for unconstrained differentiable optimization can be used in it directly. Numerical experiments indicate the effectiveness of the new algorithm.
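
The paper's specific algorithm is not given in the abstract; as a generic illustration of posing four-bar function generation as an unconstrained, differentiable optimization, the sketch below fits the Freudenstein coefficients to a set of prescribed input/output angle pairs with scipy (the precision points are arbitrary):

```python
import numpy as np
from scipy.optimize import minimize

# Prescribed input/output angle pairs (precision points); values are illustrative only.
phi = np.radians([40.0, 60.0, 80.0, 100.0, 120.0])   # input crank angles
psi = np.radians([55.0, 68.0, 83.0, 96.0, 108.0])    # desired output rocker angles

def structural_error(k):
    """Sum of squared Freudenstein residuals over all precision points:
    K1*cos(psi) - K2*cos(phi) + K3 - cos(phi - psi)."""
    k1, k2, k3 = k
    r = k1 * np.cos(psi) - k2 * np.cos(phi) + k3 - np.cos(phi - psi)
    return float(np.sum(r ** 2))

# The objective is smooth and unconstrained, so any standard quasi-Newton
# routine (here BFGS) can be applied directly.
res = minimize(structural_error, x0=np.ones(3), method="BFGS")
k1, k2, k3 = res.x

# Link lengths follow from K1 = d/a, K2 = d/c, K3 = (a^2 - b^2 + c^2 + d^2)/(2ac)
# once the ground-link length d is fixed (a = input, b = coupler, c = output link).
print(f"K1={k1:.4f}, K2={k2:.4f}, K3={k3:.4f}, residual={res.fun:.2e}")
```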

