risk functions
Recently Published Documents


TOTAL DOCUMENTS

149
(FIVE YEARS 24)

H-INDEX

20
(FIVE YEARS 2)

2021 ◽  
Vol 506 (2) ◽  
Author(s):  
Đỗ Thu Thảo ◽  
Phạm Thị Lan

Objective: To assess cardiovascular risk in psoriasis patients through the prevalence of cardiovascular risk factors and the estimated 10-year cardiovascular risk using the Framingham score. Subjects and methods: A cross-sectional descriptive study of 306 psoriasis patients and a control group of 306 patients diagnosed with common skin diseases, conducted from August 2020 to August 2021. In both groups, the prevalence of cardiovascular risk factors was described: smoking, alcohol consumption, stress, physical inactivity, overweight/obesity, hypertension, diabetes mellitus, and dyslipidemia. Cardiovascular risk was then calculated from age, sex, smoking status, diabetes, systolic blood pressure (mmHg), total cholesterol, and HDL-C (mmol/L). The cardiovascular risk score was computed with the Framingham Heart Study online calculator: https://framinghamheartstudy.org/fhs-risk-functions/cardiovascular-disease-10-year-risk. Results: Psoriasis patients had significantly higher rates of cardiovascular risk factors than controls: smoking (26.1% vs. 19%; p = 0.033), stress (46.1% vs. 19%; p < 0.01), overweight/obesity (38.6% vs. 28.4%; p = 0.008), hypertension (32.4% vs. 11.8%; p < 0.01), diabetes (17.3% vs. 7.5%; p < 0.01), and dyslipidemia (55.9% vs. 35.3%; p < 0.01). Alcohol consumption and physical inactivity did not differ significantly (36.6% vs. 32.4%, p = 0.269, and 58.2% vs. 56.2%, p = 0.624, respectively). The 10-year cardiovascular risk was higher in the psoriasis group than in the control group (12.7 ± 9.5% vs. 9.1 ± 6.9%; p < 0.01), and the psoriasis group had a higher proportion of high-risk patients (23.9% vs. 13.1%; p < 0.01). Notably, multivariable linear regression showed that psoriasis itself was a factor increasing cardiovascular risk (regression coefficient 1.79; p < 0.01). Conclusion: Psoriasis patients have higher cardiovascular risk than patients with common skin diseases, reflected in a higher prevalence of risk factors such as smoking, stress, overweight/obesity, hypertension, diabetes, and dyslipidemia. The predicted 10-year cardiovascular risk by the Framingham score was higher in the psoriasis group than in controls (12.7% vs. 9.1%; p < 0.01), and in the multivariable linear regression model psoriasis was a factor increasing cardiovascular risk (regression coefficient 1.79; p < 0.01).
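The study above computes 10-year risk with the Framingham general cardiovascular equations via the cited calculator. As a rough illustration of the underlying Cox-model form, risk = 1 - S0^exp(sum(beta_i * x_i) - mean_term), here is a minimal Python sketch. The coefficient dictionary, baseline survival, and mean term below are placeholders for illustration only, not the published sex-specific Framingham values, which should be taken from the Framingham Heart Study publications or the calculator itself.

```python
import math

def framingham_10yr_risk(age, total_chol, hdl, sbp, treated_bp, smoker, diabetic,
                         coef, baseline_survival, mean_sum):
    """Cox-model style 10-year risk: 1 - S0 ** exp(sum(beta * x) - mean_sum).

    `coef`, `baseline_survival`, and `mean_sum` must come from the published
    sex-specific Framingham general CVD equations; the values passed below
    are placeholders for illustration only.
    """
    linear_sum = (
        coef["ln_age"] * math.log(age)
        + coef["ln_tc"] * math.log(total_chol)
        + coef["ln_hdl"] * math.log(hdl)
        + (coef["ln_sbp_treated"] if treated_bp else coef["ln_sbp_untreated"]) * math.log(sbp)
        + coef["smoker"] * smoker
        + coef["diabetic"] * diabetic
    )
    return 1.0 - baseline_survival ** math.exp(linear_sum - mean_sum)

# Placeholder inputs (NOT the published coefficients), shown only to illustrate the call shape.
example_coef = {"ln_age": 2.3, "ln_tc": 1.1, "ln_hdl": -0.9,
                "ln_sbp_untreated": 1.9, "ln_sbp_treated": 2.0,
                "smoker": 0.7, "diabetic": 0.6}
risk = framingham_10yr_risk(age=55, total_chol=5.2, hdl=1.3, sbp=140,
                            treated_bp=False, smoker=1, diabetic=0,
                            coef=example_coef, baseline_survival=0.90, mean_sum=25.0)
print(f"Estimated 10-year CVD risk: {risk:.1%}")
```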


Author(s):  
Max A. Greenberg

While recent scholarship has considered how algorithmic risk assessment is both shaped by and impacts social inequity, public health has not adequately considered the ways that statistical risk functions in the social world. Drawing on ethnographic and interview data collected in interpersonal violence prevention programs, this manuscript theorizes three “other lives” of statistically produced risk factors: the past lives of risk factors as quantifiable lived experience, the professional lives of risk as a practical vocabulary shaping social interactions, and the missing lives of risk as a meaningful social category for those marked as at risk. The manuscript considers how understanding these other lives of statistical risk can help public health scholars better understand barriers to social equity.


2021 ◽  
Author(s):  
Jiota Nusia ◽  
Jia Cheng Xu ◽  
Reimert Sjöblom ◽  
Johan Knälmann ◽  
Astrid Linder ◽  
...  

Aim: The purpose of this study was to develop Injury Risk Functions (IRFs) for the Anterior and Posterior Cruciate Ligament (ACL and PCL, respectively) and the Medial and Lateral Collateral Ligament (MCL and LCL, respectively) in the knee joint, addressing two injury mechanisms of the ligaments: mid-substance failure and ligament insertion detachment. Method: The IRFs were developed from Post-Mortem Human Subject (PMHS) tensile failure strains of Bone-Ligament-Bone (BLB) or dissected Ligament (LIG) preparations. To compensate for the insufficient sample size of experimental data points, virtual failure strains were also generated from the mean and standard deviation of experiments that did not provide specimen-specific results. All virtual and specimen-specific values were then categorised into static- and dynamic-rate groups and tested against theoretical distributions to find the best fit for formulating the ligament IRF (a sketch of this workflow follows below). Results: Nine IRFs were derived (3 for ACL, 2 for PCL, 1 for MCL, and 3 for LCL). Conclusion: These IRFs are, to the best of the authors' knowledge, the first knee ligament injury prediction tool based on PMHS data. The BLB-based IRFs address both failure modes (mid-ligament and attachment failure), while the LIG-based IRFs address mid-ligament failures only. The proposed risk functions can be used to determine the effectiveness of injury prevention measures. Keywords: Injury risk functions, knee ligaments, anterior cruciate ligament, posterior cruciate ligament, medial collateral ligament, lateral collateral ligament.
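A minimal sketch of the general workflow described in the abstract: pool specimen-specific failure strains with "virtual" strains sampled from a reported mean and standard deviation, fit a candidate theoretical distribution, and use its cumulative distribution function as the injury risk function. All numbers are hypothetical, and the Weibull choice is only one of the candidate distributions such a study would compare.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Specimen-specific failure strains from PMHS tests (hypothetical values).
measured_strains = np.array([0.18, 0.22, 0.25, 0.27, 0.31, 0.34])

# "Virtual" strains generated from a study that reported only mean and SD
# (hypothetical mean/SD; the paper's actual aggregation may differ).
virtual_strains = rng.normal(loc=0.28, scale=0.05, size=20)
virtual_strains = virtual_strains[virtual_strains > 0]   # strains must be positive

all_strains = np.concatenate([measured_strains, virtual_strains])

# Fit one candidate theoretical distribution (Weibull shown here; in practice
# several distributions would be compared for goodness of fit).
shape, loc, scale = stats.weibull_min.fit(all_strains, floc=0.0)

def injury_risk(strain):
    """Injury risk function: probability of failure at or below a given strain."""
    return stats.weibull_min.cdf(strain, shape, loc=loc, scale=scale)

print(f"Risk at 25% strain: {injury_risk(0.25):.2f}")
```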


2021 ◽  
Author(s):  
Madelen Fahlstedt ◽  
Shiyang Meng ◽  
Svein Kleiven

Finite element head models are a tool to better understand brain injury mechanisms. Many of these models use strain as output, but with different percentile values such as the 100th, 95th, 90th, and 50th percentiles. Some use the element value, whereas others use the nodal average value for the element. Little is known about how strain post-processing affects injury predictions and the evaluation of different prevention systems. The objective of this study was to evaluate the influence of strain output on injury prediction and ranking. Two models with different mesh densities were evaluated (the KTH Royal Institute of Technology head model and the Total Human Model for Safety (THUMS)). Pulses from reconstructions of American football impacts with and without a diagnosis of mild traumatic brain injury were applied to the models. The 100th, 99th, 95th, 90th, and 50th percentile values of element and nodal-averaged element strain were evaluated in terms of peak values, injury risk functions, injury predictability, correlation in ranking, and linear correlation. The injury risk functions were affected by the strain post-processing, with the 100th percentile element value standing out in particular. The area under the curve (AUC) value was less affected, as were the correlation in ranking (Kendall's tau 0.71-1.00) and the linear correlation (Pearson's r2 0.72-1.00). Given these results, it is important to stress that injury predictions should use the same post-processed strain as the one used to develop the risk function.
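To make the post-processing choices concrete, the sketch below computes the 100th, 99th, 95th, 90th, and 50th percentiles of element strain and of a nodal-averaged element strain on synthetic data. The connectivity, strain values, and the simple averaging scheme (node value as the mean of adjacent element strains, then element value as the mean of its node values) are illustrative assumptions, not the exact procedures of the KTH or THUMS models.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical peak maximum principal strain per element from an FE head model.
element_strain = rng.gamma(shape=2.0, scale=0.05, size=50_000)

# Hypothetical element connectivity: each element lists 4 node indices.
n_nodes = 12_000
connectivity = rng.integers(0, n_nodes, size=(element_strain.size, 4))

# Nodal strain as the mean of strains of elements sharing each node,
# then "nodal-averaged element strain" as the mean of those nodal values per element.
node_sum = np.zeros(n_nodes)
node_cnt = np.zeros(n_nodes)
np.add.at(node_sum, connectivity, element_strain[:, None])
np.add.at(node_cnt, connectivity, 1)
nodal_strain = node_sum / np.maximum(node_cnt, 1)
nodal_avg_element_strain = nodal_strain[connectivity].mean(axis=1)

for p in (100, 99, 95, 90, 50):
    e = np.percentile(element_strain, p)
    n = np.percentile(nodal_avg_element_strain, p)
    print(f"{p:>3}th percentile  element: {e:.3f}  nodal-averaged: {n:.3f}")
```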


Networks ◽  
2021 ◽  
Author(s):  
Evgeny Gurevsky ◽  
Dmitry Kopelevich ◽  
Sergey Kovalev ◽  
Mikhail Y. Kovalyov
Keyword(s):  

2021 ◽  
Author(s):  
Jonathan Yu-Meng Li

The theory of convex risk functions has now been well established as the basis for identifying the families of risk functions that should be used in risk-averse optimization problems. Despite its theoretical appeal, the implementation of a convex risk function remains difficult, because there is little guidance regarding how a convex risk function should be chosen so that it also well represents a decision maker’s subjective risk preference. In this paper, we address this issue through the lens of inverse optimization. Specifically, given solution data from some (forward) risk-averse optimization problem (i.e., a risk minimization problem with known constraints), we develop an inverse optimization framework that generates a risk function that renders the solutions optimal for the forward problem. The framework incorporates the well-known properties of convex risk functions—namely, monotonicity, convexity, translation invariance, and law invariance—as the general information about candidate risk functions, as well as feedback from individuals—which include an initial estimate of the risk function and pairwise comparisons among random losses—as the more specific information. Our framework is particularly novel in that unlike classical inverse optimization, it does not require making any parametric assumption about the risk function (i.e., it is nonparametric). We show how the resulting inverse optimization problems can be reformulated as convex programs and are polynomially solvable if the corresponding forward problems are polynomially solvable. We illustrate the imputed risk functions in a portfolio selection problem and demonstrate their practical value using real-life data. This paper was accepted by Yinyu Ye, optimization.
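The paper's inverse-optimization framework is not reproduced here, but the kind of forward risk-averse problem it takes as input can be illustrated with a standard convex risk function. The sketch below minimizes Conditional Value-at-Risk (a monotone, convex, translation-invariant, law-invariant risk measure) over portfolio weights using the Rockafellar-Uryasev linear program; the return scenarios and confidence level are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)

# Hypothetical return samples for 4 assets over 500 scenarios.
n_scen, n_assets = 500, 4
returns = rng.normal(loc=[0.02, 0.03, 0.04, 0.05],
                     scale=[0.03, 0.05, 0.08, 0.12],
                     size=(n_scen, n_assets))
alpha = 0.95

# Rockafellar-Uryasev LP for min-CVaR: decision variables are [w (assets), t, u (scenarios)],
# minimizing t + 1/((1 - alpha) * N) * sum(u).
c = np.concatenate([np.zeros(n_assets), [1.0],
                    np.full(n_scen, 1.0 / ((1 - alpha) * n_scen))])
# Constraint u_i >= loss_i - t with loss_i = -returns[i] @ w, i.e. -returns[i]@w - t - u_i <= 0.
A_ub = np.hstack([-returns, -np.ones((n_scen, 1)), -np.eye(n_scen)])
b_ub = np.zeros(n_scen)
A_eq = np.concatenate([np.ones(n_assets), [0.0], np.zeros(n_scen)]).reshape(1, -1)
b_eq = [1.0]                                   # weights sum to one
bounds = [(0, None)] * n_assets + [(None, None)] + [(0, None)] * n_scen

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
weights = res.x[:n_assets]
print("min-CVaR weights:", np.round(weights, 3), " CVaR:", round(res.fun, 4))
```

Solution data of this kind (observed weights for known constraints) is what the inverse framework would consume in order to impute a convex risk function consistent with the decision maker's choices.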


Author(s):  
Jared A. Fisher ◽  
Maya Spaur ◽  
Ian D. Buller ◽  
Abigail R. Flory ◽  
Laura E. Beane Freeman ◽  
...  

Geocoding is a powerful tool for environmental exposure assessments that rely on spatial databases. Geocoding processes, locators, and reference datasets have improved over time; however, these improvements have not been well characterized. Enrollment addresses for the Agricultural Health Study, a cohort of pesticide applicators and their spouses in Iowa (IA) and North Carolina (NC), were geocoded in 2012–2016 and then again in 2019. We calculated distances between geocodes in the two periods. For a subset, we computed positional errors using “gold standard” rooftop coordinates (IA; N = 3566) or Global Positioning Systems (GPS) (IA and NC; N = 1258) and compared errors between periods. We used linear regression to model the change in positional error between time periods (improvement) by rural status and population density, and we used spatial relative risk functions to identify areas with significant improvement. Median improvement between time periods in IA was 41 m (interquartile range, IQR: −2 to 168 m) and 9 m (IQR: −80 to 133 m) based on rooftop coordinates and GPS, respectively. Median improvement in NC was 42 m (IQR: −1 to 109 m) based on GPS. Positional error was greater in rural and low-density areas than in towns and more densely populated areas. Areas of significant improvement in accuracy were identified and mapped across both states. Our findings underscore the importance of evaluating the determinants and spatial distributions of errors in geocodes used in environmental epidemiology studies.
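A minimal sketch of the error computation the abstract describes: great-circle distances from each geocoding vintage to gold-standard coordinates, and the median and IQR of the improvement between vintages. All coordinates below are simulated placeholders, not the Agricultural Health Study data.

```python
import numpy as np

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between coordinate arrays given in degrees."""
    r = 6_371_000.0
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dphi = p2 - p1
    dlmb = np.radians(lon2) - np.radians(lon1)
    a = np.sin(dphi / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlmb / 2) ** 2
    return 2 * r * np.arcsin(np.sqrt(a))

# Simulated coordinates: 2012-2016 geocodes, 2019 geocodes, and gold-standard points.
rng = np.random.default_rng(3)
n = 1_000
gold = np.column_stack([rng.uniform(41.0, 43.5, n), rng.uniform(-96.6, -90.1, n)])
old = gold + rng.normal(0, 0.002, gold.shape)      # older, noisier geocodes
new = gold + rng.normal(0, 0.001, gold.shape)      # newer, more accurate geocodes

err_old = haversine_m(old[:, 0], old[:, 1], gold[:, 0], gold[:, 1])
err_new = haversine_m(new[:, 0], new[:, 1], gold[:, 0], gold[:, 1])
improvement = err_old - err_new                    # positive = 2019 geocode is closer

q25, med, q75 = np.percentile(improvement, [25, 50, 75])
print(f"Median improvement {med:.0f} m (IQR {q25:.0f} to {q75:.0f} m)")
```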


Epidemiology ◽  
2020 ◽  
Vol 31 (5) ◽  
pp. 704-712
Author(s):  
Tiffany L. Breger ◽  
Jessie K. Edwards ◽  
Stephen R. Cole ◽  
Michael Saag ◽  
Peter F. Rebeiro ◽  
...  

2020 ◽  
pp. 0148558X2093424 ◽  
Author(s):  
Gary C. Biddle ◽  
Mary L. Z. Ma ◽  
Frank M. Song

For a large sample of U.S. listed firms, we find that unconditional and conditional accounting conservatism help lower bankruptcy risk. We further find that the mitigating effect of accounting conservatism on bankruptcy risk functions via cash enhancement and earnings management mitigation channels. This evidence is relevant to accounting standards setting, financial regulation, financial risk management, and helps explain conservatism’s long-standing presence as a pervasive feature of financial accounting.

