Prediction of sperm extraction in non-obstructive azoospermia patients: a machine-learning perspective

2020 · Vol 35 (7) · pp. 1505–1514
Author(s): A Zeadna, N Khateeb, L Rokach, Y Lior, I Har-Vardi, ...

Abstract
STUDY QUESTION: Can a machine-learning-based model trained on clinical and biological variables support the prediction of the presence or absence of sperm in testicular biopsy in non-obstructive azoospermia (NOA) patients?
SUMMARY ANSWER: Our machine-learning model was able to accurately predict (AUC of 0.8) the presence or absence of spermatozoa in patients with NOA.
WHAT IS KNOWN ALREADY: Patients with NOA can conceive with their own biological gametes using ICSI in combination with successful testicular sperm extraction (TESE). Testicular sperm retrieval is successful in up to 50% of men with NOA. However, to the best of our knowledge, no existing model accurately predicts the success of sperm retrieval in TESE, and machine learning has never been used for this purpose.
STUDY DESIGN, SIZE, DURATION: A retrospective cohort study of 119 patients who underwent TESE in a single IVF unit between 1995 and 2017 was conducted. All patients with NOA who underwent TESE during their fertility treatments were included. Gradient-boosted trees (GBTs) were developed to predict the presence or absence of spermatozoa in patients with NOA, and their accuracy was compared to that of a similar multivariate logistic regression model (MvLRM).
PARTICIPANTS/MATERIALS, SETTING, METHODS: We employed univariate and multivariate binary logistic regression models to predict the probability of successful TESE using a dataset from a retrospective cohort. In addition, we examined various ensemble machine-learning models (GBT and random forest) and evaluated their predictive performance using the leave-one-out cross-validation procedure. A cutoff value for successful/unsuccessful TESE was calculated with receiver operating characteristic (ROC) curve analysis.
MAIN RESULTS AND THE ROLE OF CHANCE: ROC analysis resulted in an AUC of 0.807 ± 0.032 (95% CI 0.743–0.871) for the proposed GBTs and 0.75 ± 0.052 (95% CI 0.65–0.85) for the MvLRM for the prediction of presence or absence of spermatozoa in patients with NOA. The GBT approach and the MvLRM yielded sensitivities of 91% and 97%, respectively, but the GBT approach had a specificity of 51% compared with 25% for the MvLRM. A total of 78 (65.3%) men with NOA experienced successful TESE. FSH, LH, testosterone, semen volume, age, BMI, ethnicity and testicular size on clinical evaluation were included in these models.
LIMITATIONS, REASONS FOR CAUTION: This is a retrospective cohort study, with all the associated inherent biases of such studies. The model was used only for TESE, since micro-TESE is not performed at our center.
WIDER IMPLICATIONS OF THE FINDINGS: Machine-learning models may lay the foundation for a decision support system for clinicians together with their NOA patients concerning TESE. The findings of this study should be confirmed with further, larger and prospective studies.
STUDY FUNDING/COMPETING INTEREST(S): The study was funded by the Division of Obstetrics and Gynecology, Soroka University Medical Center. There are no potential conflicts of interest for any of the authors.
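
The modelling strategy described above can be illustrated with a minimal sketch (not the authors' code) of gradient-boosted trees evaluated with leave-one-out cross-validation and ROC analysis in scikit-learn; the file name, column names and hyperparameters are assumptions, with the feature set taken from the abstract (FSH, LH, testosterone, semen volume, age, BMI, ethnicity and testicular size).

```python
# Minimal sketch: GBT classifier with leave-one-out CV and ROC analysis.
# File name, column names and hyperparameters are placeholders, not the study's data.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import roc_auc_score, roc_curve

df = pd.read_csv("noa_tese_cohort.csv")  # hypothetical file; categoricals assumed numeric-encoded
X = df[["fsh", "lh", "testosterone", "semen_volume",
        "age", "bmi", "ethnicity", "testicular_size"]]
y = df["sperm_retrieved"]                # 1 = spermatozoa found at TESE, 0 = not found

# Leave-one-out cross-validated probability of successful sperm retrieval for each patient
probs = cross_val_predict(
    GradientBoostingClassifier(random_state=0),
    X, y, cv=LeaveOneOut(), method="predict_proba",
)[:, 1]
print("LOO ROC AUC:", roc_auc_score(y, probs))

# Pick an operating cutoff from the ROC curve (here, Youden's J statistic)
fpr, tpr, thresholds = roc_curve(y, probs)
print("Suggested cutoff:", thresholds[np.argmax(tpr - fpr)])
```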

2022 · Vol 8
Author(s): Boshen Yang, Sixuan Xu, Di Wang, Yu Chen, Zhenfa Zhou, ...

Background: Hypertension is a rather common comorbidity among critically ill patients, and hospital mortality might be higher among critically ill patients with hypertension (SBP ≥ 140 mmHg and/or DBP ≥ 90 mmHg). This study aimed to explore the association between ACEI/ARB medication during ICU stay and all-cause in-hospital mortality in these patients.
Methods: A retrospective cohort study was conducted based on data from the Medical Information Mart for Intensive Care IV (MIMIC-IV) database, which contains more than 40,000 patients treated in the ICU between 2008 and 2019 at Beth Israel Deaconess Medical Center. Adults diagnosed with hypertension on admission and those who had high blood pressure (SBP ≥ 140 mmHg and/or DBP ≥ 90 mmHg) during ICU stay were included. The primary outcome was all-cause in-hospital mortality. Patients were divided into ACEI/ARB-treated and non-treated groups during ICU stay. Propensity score matching (PSM) was used to adjust for potential confounders. Nine machine learning models were developed and validated based on 37 clinical and laboratory features of all patients. The model with the best performance was selected based on the area under the receiver operating characteristic curve (AUC), followed by 5-fold cross-validation. After hyperparameter optimization using grid and random hyperparameter search, a final LightGBM model was developed, and Shapley Additive exPlanations (SHAP) values were calculated to evaluate the importance of each feature. The features most closely associated with hospital mortality were presented as significant features.
Results: A total of 15,352 patients were enrolled in this study, among whom 5,193 (33.8%) were treated with ACEI/ARB. A significantly lower all-cause in-hospital mortality was observed among patients treated with ACEI/ARB (3.9 vs. 12.7%), as well as a lower 28-day mortality (3.6 vs. 12.2%). The outcome remained consistent after propensity score matching. Among the nine machine learning models, the LightGBM model had the highest AUC (0.9935). The SHAP plot was employed to make the tuned LightGBM model interpretable, showing that ACEI/ARB use was among the top five significant features associated with hospital mortality.
Conclusions: The use of ACEI/ARB during ICU stay was independently associated with lower all-cause in-hospital mortality in a large and heterogeneous cohort of critically ill hypertensive patients with or without kidney dysfunction.
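
The LightGBM-plus-SHAP workflow described in the Methods can be sketched as follows; this is not the study code, and the file name, column names and hyperparameters are assumptions standing in for the 37 MIMIC-IV features and the in-hospital mortality label.

```python
# Minimal sketch: LightGBM mortality model with SHAP-based feature importance.
# File and column names are hypothetical; hyperparameters are illustrative only.
import pandas as pd
import lightgbm as lgb
import shap
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

df = pd.read_csv("mimic_hypertension_cohort.csv")   # hypothetical extract of the 37 features
X = df.drop(columns=["hospital_death"])
y = df["hospital_death"]                            # 1 = died in hospital, 0 = survived

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = lgb.LGBMClassifier(n_estimators=500, learning_rate=0.05,
                           num_leaves=31, random_state=42)
model.fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# SHAP values rank each feature's contribution, e.g. to check where ACEI/ARB use falls
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test)
```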


2020 · Vol 7 (Supplement_1) · pp. S262–S262
Author(s): Kok Hoe Chan, Bhavik Patel, Iyad Farouji, Addi Suleiman, Jihad Slim

Abstract
Background: Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) infection can lead to many different cardiovascular complications; we were therefore interested in studying prognostic markers in patients with atrial fibrillation/flutter (A. Fib/Flutter).
Methods: A retrospective cohort study of patients with confirmed COVID-19 and either existing or new-onset A. Fib/Flutter who were admitted to our hospital between March 15 and May 20, 2020. Demographic, outcome and laboratory data were extracted from the electronic medical record and compared between survivors and non-survivors. Univariate and multivariate logistic regression were employed to identify the prognostic markers associated with mortality in patients with A. Fib/Flutter.
Results: The total number of confirmed COVID-19 patients during the study period was 350; 37 of them had existing or new-onset A. Fib/Flutter. Twenty-one (57%) expired, and 16 (43%) were discharged alive. The median age was 72 years, with a range of 19 to 100 years. Comorbidities were present in 33 (89%) patients, with hypertension (82%) being the most common, followed by diabetes (46%) and coronary artery disease (30%). New-onset atrial fibrillation was identified in 23 patients (70%), of whom 13 (57%) expired; 29 patients (78%) presented with atrial fibrillation with rapid ventricular response, and 2 patients (5%) with atrial flutter. Mechanical ventilation was required for 8 patients, of whom 6 expired. In univariate analysis, we found a significant difference in baseline ferritin (p=0.04), LDH (p=0.02), neutrophil-lymphocyte ratio (NLR) (p=0.05), neutrophil-monocyte ratio (NMR) (p=0.03) and platelet count (p=0.015) between survivors and non-survivors. In multivariable logistic regression analysis, the only value associated with the odds of survival was a low NLR (odds ratio 0.74; 95% confidence interval 0.53–0.93).
Conclusion: This retrospective cohort study of hospitalized patients with COVID-19 demonstrated an association between increased NLR and death in COVID-19 patients with A. Fib/Flutter. A high NLR has been associated with increased incidence, severity and risk of stroke in atrial fibrillation patients, but to our knowledge, we are the first to demonstrate its use for mortality prediction in COVID-19 patients with A. Fib/Flutter.
Disclosures: Jihad Slim, MD: Abbvie (Speaker's Bureau), Gilead (Speaker's Bureau), Jansen (Speaker's Bureau), Merck (Speaker's Bureau), ViiV (Speaker's Bureau)
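
As a minimal sketch (not the study code) of the multivariable logistic regression used above, the snippet below fits such a model with statsmodels and converts the coefficients to odds ratios with 95% confidence intervals; the file and column names are hypothetical.

```python
# Minimal sketch: multivariable logistic regression with odds ratios and 95% CIs.
# File and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("afib_covid_cohort.csv")
predictors = ["nlr", "nmr", "ferritin", "ldh", "platelets"]
X = sm.add_constant(df[predictors])
y = df["survived"]                       # 1 = discharged alive, 0 = expired

fit = sm.Logit(y, X).fit()

# Exponentiating coefficients and confidence bounds yields odds ratios with 95% CIs
ci = fit.conf_int()
odds_ratios = pd.DataFrame({
    "OR": np.exp(fit.params),
    "2.5%": np.exp(ci[0]),
    "97.5%": np.exp(ci[1]),
})
print(odds_ratios)
```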


2021 · Vol 14 (1)
Author(s): Martine De Cock, Rafael Dowsley, Anderson C. A. Nascimento, Davis Railsback, Jianwei Shen, ...

Abstract
Background: In biomedical applications, valuable data are often split between owners who cannot openly share the data because of privacy regulations and concerns. Training machine learning models on the joint data without violating privacy is a major technological challenge that can be addressed by combining techniques from machine learning and cryptography. When collaboratively training machine learning models with the cryptographic technique named secure multi-party computation, the price paid for keeping the data of the owners private is an increase in computational cost and runtime. A careful choice of machine learning techniques, together with algorithmic and implementation optimizations, is a necessity to enable practical secure machine learning over distributed data sets. Such optimizations can be tailored to the kind of data and machine learning problem at hand.
Methods: Our setup involves secure two-party computation protocols, along with a trusted initializer that distributes correlated randomness to the two computing parties. We use a gradient-descent-based algorithm for training a logistic-regression-like model with a clipped ReLU activation function, and we break down the algorithm into the corresponding cryptographic protocols. Our main contributions are a new protocol for computing the activation function that requires neither secure comparison protocols nor Yao's garbled circuits, and a series of cryptographic engineering optimizations to improve the performance.
Results: For our largest gene expression data set, we train a model that requires over 7 billion secure multiplications; the training completes in about 26.90 s in a local area network. The implementation in this work is a further optimized version of the implementation with which we won first place in Track 4 of the iDASH 2019 secure genome analysis competition.
Conclusions: In this paper, we present a secure logistic regression training protocol and its implementation, with a new subprotocol to securely compute the activation function. To the best of our knowledge, we present the fastest existing secure multi-party computation implementation for training logistic regression models on high-dimensional genome data distributed across a local area network.
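
In the clear, the model described in the Methods is a logistic-regression-like classifier trained by gradient descent with a clipped ReLU in place of the sigmoid. The sketch below shows only that plaintext analogue, assuming the common MPC-friendly approximation min(max(z + 0.5, 0), 1); the paper's actual contribution runs these operations under secure two-party computation with a trusted initializer, which is not reproduced here.

```python
# Plaintext (non-secure) sketch: gradient descent on a logistic-regression-like model
# whose sigmoid is replaced by a clipped ReLU. The secure two-party protocol with a
# trusted initializer is NOT reproduced here.
import numpy as np

def clipped_relu(z):
    # MPC-friendly piecewise-linear surrogate for the logistic sigmoid
    return np.clip(z + 0.5, 0.0, 1.0)

def train(X, y, lr=0.1, epochs=200):
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        preds = clipped_relu(X @ w + b)
        error = preds - y                # gradient of the surrogate loss
        w -= lr * (X.T @ error) / n
        b -= lr * error.mean()
    return w, b

# Synthetic demonstration data (the paper uses distributed gene expression data)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = (X @ rng.normal(size=10) > 0).astype(float)
w, b = train(X, y)
print("training accuracy:", ((clipped_relu(X @ w + b) > 0.5) == y).mean())
```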


Critical Care · 2019 · Vol 23 (1)
Author(s): Edgar Santos, Arturo Olivares-Rivera, Sebastian Major, Renán Sánchez-Porras, Lorenz Uhlmann, ...

Abstract
Objective: Spreading depolarizations (SD) are characterized by breakdown of transmembrane ion gradients and excitotoxicity. Experimentally, N-methyl-d-aspartate receptor (NMDAR) antagonists block a majority of SDs. In many hospitals, the NMDAR antagonist s-ketamine and the GABAA agonist midazolam represent the current second-line combination treatment to sedate patients with devastating cerebral injuries. A pressing clinical question is whether this option should become first-line in sedation-requiring individuals in whom SDs are detected, yet the s-ketamine dose necessary to adequately inhibit SDs is unknown. Moreover, use-dependent tolerance could be a problem for SD inhibition in the clinic.
Methods: We performed a retrospective cohort study of 66 patients with aneurysmal subarachnoid hemorrhage (aSAH) from a prospectively collected database. Thirty-three of the 66 patients received s-ketamine during electrocorticographic neuromonitoring of SDs in neurointensive care. The decision to give s-ketamine was dependent on the need for stronger sedation, so it was expected that patients receiving s-ketamine would have a worse clinical outcome.
Results: S-ketamine application started 4.2 ± 3.5 days after aSAH. The mean dose was 2.8 ± 1.4 mg/kg body weight (BW)/h and thus higher than the dose recommended for sedation. First, patients were divided according to whether they received s-ketamine at any time or not. No significant difference in SD counts was found between groups (negative binomial model using the SD count per patient as outcome variable, p = 0.288). This most likely resulted from the fact that 368 SDs had already occurred in the s-ketamine group before s-ketamine was given. However, in patients receiving s-ketamine, we found a significant decrease in SD incidence when s-ketamine was started (Poisson model with a random intercept per patient, coefficient −1.83 (95% confidence interval −2.17; −1.50), p < 0.001; logistic regression model, odds ratio (OR) 0.13 (0.08; 0.19), p < 0.001). Thereafter, the data were further divided into low-dose (0.1–2.0 mg/kg BW/h) and high-dose (2.1–7.0 mg/kg BW/h) segments. High-dose s-ketamine resulted in a further significant decrease in SD incidence (Poisson model, −1.10 (−1.71; −0.49), p < 0.001; logistic regression model, OR 0.33 (0.17; 0.63), p < 0.001). There was little evidence of SD tolerance to long-term s-ketamine sedation through 5 days.
Conclusions: These results provide a foundation for a multicenter, neuromonitoring-guided, proof-of-concept trial of ketamine and midazolam as a first-line sedative regimen.
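
The count-model comparison described in the Results can be illustrated with a minimal sketch (not the study code): a negative binomial GLM on SD counts per patient, and a random-intercept Poisson model for the within-patient effect of starting s-ketamine, both using statsmodels; the file and column names are hypothetical.

```python
# Minimal sketch: negative binomial GLM for SD counts per patient, plus a
# random-intercept Poisson model for SD incidence before vs. during s-ketamine.
# File and column names are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.genmod.bayes_mixed_glm import PoissonBayesMixedGLM

# One row per patient: total SD count and whether s-ketamine was ever given
per_patient = pd.read_csv("sd_counts_per_patient.csv")
nb_fit = smf.glm("sd_count ~ ketamine_ever", data=per_patient,
                 family=sm.families.NegativeBinomial()).fit()
print(nb_fit.summary())

# One row per recording epoch: SD count and whether s-ketamine was running,
# with a random intercept for each patient
epochs = pd.read_csv("sd_epochs.csv")
mixed = PoissonBayesMixedGLM.from_formula(
    "sd_count ~ ketamine_on", {"patient": "0 + C(patient_id)"}, data=epochs
)
print(mixed.fit_vb().summary())
```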

