Exploring Similarities and Differences of Non-European Migrants among Forensic Patients with Schizophrenia

Author(s):  
David A. Huber ◽  
Steffen Lau ◽  
Martina Sonnweber ◽  
Moritz P. Günther ◽  
Johannes Kirchebner

Migrants diagnosed with schizophrenia are overrepresented in forensic-psychiatric clinics. A comprehensive characterization of this offender subgroup remains to be conducted. The present exploratory study aims to close this research gap. In a sample of 370 inpatients with schizophrenia spectrum disorders who were detained in a Swiss forensic-psychiatric clinic, 653 different variables were analyzed to identify possible differences between native Europeans and non-European migrants. The exploratory data analysis was conducted by means of supervised machine learning. To minimize the multiple testing problem, the detected group differences were cross-validated by applying six different machine learning algorithms to the data set. Subsequently, the variables identified as most influential were used for machine learning algorithm building and evaluation. The combination of two childhood-related factors and three therapy-related factors differentiated native Europeans from non-European migrants with an accuracy of 74.5% and a predictive power of AUC = 0.75 (area under the curve). The AUC could not be enhanced by any of the investigated criminal history or psychiatric history factors. Overall, the migrant subgroup was found to be quite similar to the rest of the offender patients with schizophrenia, which may help to reduce the stigmatization of migrants in forensic-psychiatric clinics. Some of the predictor variables identified may serve as starting points for studies aimed at developing crime prevention approaches in the community setting and risk management strategies tailored to subgroups of offenders with schizophrenia.
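The study does not publish its code, but the cross-validation and variable-selection workflow it describes can be sketched roughly as follows. The data set, the specific classifiers, and the five-variable cutoff below are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch: cross-validate several supervised classifiers on the full variable
# set, then re-fit a compact model on the most influential variables. Data, classifier
# choices, and the five-variable cutoff are placeholders, not those of the study.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=370, n_features=50, random_state=0)  # placeholder data

# Step 1: check that group differences are detected consistently by several algorithms.
candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}
for name, clf in candidates.items():
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: cross-validated AUC = {auc:.2f}")

# Step 2: keep only the most influential variables and evaluate a reduced model,
# mirroring the paper's final five-variable model.
rf = RandomForestClassifier(random_state=0).fit(X, y)
top = np.argsort(rf.feature_importances_)[-5:]
auc_reduced = cross_val_score(
    RandomForestClassifier(random_state=0), X[:, top], y, cv=5, scoring="roc_auc"
).mean()
print(f"reduced model AUC = {auc_reduced:.2f}")
```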

2021 ◽  
Author(s):  
Marc Raphael ◽  
Michael Robitaille ◽  
Jeff Byers ◽  
Joseph Christodoulides

Machine learning algorithms hold the promise of greatly improving live cell image analysis by (1) analyzing far more imagery than can be achieved by more traditional manual approaches and (2) eliminating the subjectivity of researchers and diagnosticians selecting the cells or cell features to be included in the analyzed data set. Currently, however, even the most sophisticated model-based or machine learning algorithms require user supervision, meaning the subjectivity problem is not removed but rather incorporated into the algorithm's initial training steps and then repeatedly applied to the imagery. To address this roadblock, we have developed a self-supervised machine learning algorithm that recursively trains itself directly on the live cell imagery data, thus providing objective segmentation and quantification. The approach incorporates an optical flow algorithm component to self-label cell and background pixels for training, followed by the extraction of additional feature vectors for the automated generation of a cell/background classification model. Because it is self-trained, the software has no user-adjustable parameters and does not require curated training imagery. The algorithm was applied to automatically segment cells from their background for a variety of cell types and five commonly used imaging modalities: fluorescence, phase contrast, differential interference contrast (DIC), transmitted light, and interference reflection microscopy (IRM). The approach is broadly applicable in that it enables completely automated cell segmentation for long-term live cell phenotyping applications, regardless of the input imagery's optical modality, magnification or cell type.
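A hedged sketch of the self-labelling idea described above: pixels that move between consecutive frames are provisionally labelled "cell", confidently static pixels "background", and a classifier is then trained on simple per-pixel features. The Farneback flow call, the thresholds, and the feature set are illustrative assumptions, not the authors' actual software.

```python
# Conceptual sketch only: optical-flow self-labelling followed by a per-pixel classifier.
import numpy as np
import cv2
from scipy.ndimage import uniform_filter
from sklearn.ensemble import RandomForestClassifier

def self_labelled_segmentation(frame_prev, frame_next, motion_thresh=0.5):
    # Frames are assumed to be grayscale uint8 images of the same size.
    # Dense optical flow between consecutive frames gives a per-pixel motion magnitude.
    flow = cv2.calcOpticalFlowFarneback(frame_prev, frame_next, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=-1)

    # Self-labelling: confidently moving pixels -> cell (1), confidently static -> background (0),
    # everything else left unlabelled (-1).
    labels = np.full(magnitude.shape, -1, dtype=int)
    labels[magnitude > motion_thresh] = 1
    labels[magnitude < 0.1 * motion_thresh] = 0

    # Per-pixel feature vectors: raw intensity and a local-mean texture proxy.
    feats = np.stack([frame_next.astype(float),
                      uniform_filter(frame_next.astype(float), size=9)], axis=-1)
    mask = labels >= 0
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(feats[mask], labels[mask])

    # The trained model then classifies every pixel, including those left unlabelled.
    return clf.predict(feats.reshape(-1, 2)).reshape(frame_next.shape)
```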


Data science in healthcare is an innovative and promising field for industry applications. Data analytics is an emerging discipline for exploring medical data sets in order to detect disease. This work is an initial attempt to identify disease with the help of large medical datasets. Using this data science methodology, users can assess their disease risk without visiting a healthcare centre. Healthcare and data science are often linked through finances, as the industry attempts to reduce its expenses with the help of large amounts of data. Data science and medicine are rapidly developing, and it is important that they advance together. Healthcare information is highly valuable to society. The incidence of heart disease in everyday life has increased. Different factors in the human body are therefore monitored to analyse and prevent heart disease. Classifying these factors with machine learning algorithms and predicting the disease is the central task, which relies on supervised learning algorithms such as SVM, Naive Bayes, decision trees, and random forests.
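A minimal sketch of the comparison this abstract describes: the four named supervised classifiers evaluated on a heart-disease feature table. The CSV path and column names are hypothetical placeholders for whatever dataset is actually used.

```python
# Compare SVM, Naive Bayes, decision tree, and random forest on a heart-disease table.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

df = pd.read_csv("heart_disease.csv")          # hypothetical file
X = df.drop(columns=["target"])                # clinical factors (age, blood pressure, cholesterol, ...)
y = df["target"]                               # 1 = heart disease, 0 = healthy

models = {
    "SVM": SVC(),
    "Naive Bayes": GaussianNB(),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Random Forest": RandomForestClassifier(random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: mean cross-validated accuracy = {acc:.2f}")
```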


2021 ◽  
Author(s):  
Omar Alfarisi ◽  
Zeyar Aung ◽  
Mohamed Sassi

Choosing the optimal machine learning algorithm is not an easy decision. To help future researchers, we describe in this paper the best-performing among several candidate algorithms. We built a synthetic data set and performed supervised machine learning runs for five different algorithms. For heterogeneity, we identified Random Forest as the best-performing algorithm.
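A minimal sketch of such an experiment, under the assumption that the target (e.g., a heterogeneity measure) is continuous: five supervised regressors compared on a synthetic data set by cross-validated R². The algorithm choices and data shape are illustrative, not the paper's.

```python
# Compare five supervised regressors on a synthetic data set.
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.svm import SVR
from sklearn.neighbors import KNeighborsRegressor
from sklearn.linear_model import LinearRegression

X, y = make_regression(n_samples=1000, n_features=15, noise=5.0, random_state=0)

regressors = {
    "Random Forest": RandomForestRegressor(random_state=0),
    "Gradient Boosting": GradientBoostingRegressor(random_state=0),
    "SVR": SVR(),
    "k-NN": KNeighborsRegressor(),
    "Linear Regression": LinearRegression(),
}
for name, model in regressors.items():
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean R^2 = {r2:.3f}")
```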


Processes ◽  
2021 ◽  
Vol 9 (8) ◽  
pp. 1411
Author(s):  
Mohamed E. Al-Atroush ◽  
Ashraf M. Hefny ◽  
Tamer M. Sorour

The full-scale static pile loading test is without question the most reliable methodology for estimating the ultimate capacity of large diameter bored piles (LDBP). However, in most cases, the load-settlement curves obtained from LDBP loading tests tend to increase without reaching the failure point or an asymptote. Loading an LDBP until apparent failure is reached is seldom practical because of the significant settlement usually required for full shaft and base mobilization. At the same time, supervised learning algorithms require a large labeled data set to train the machine properly, which makes them well suited, compared with unsupervised algorithms, for sensitivity analysis, forecasting, and prediction. However, providing such a large dataset of LDBP tests loaded to failure can be very complicated. In this paper, a novel practice is proposed to establish the labeled dataset needed to train supervised machine learning algorithms to accurately predict the ultimate capacity of an LDBP. A comprehensive numerical parametric study was carried out to investigate the effect of both pile geometrical and soil geotechnical parameters on the ultimate capacity and settlement of an LDBP. This study was based on field measurements of LDBP tests loaded to failure. Results of the 29 applied models were compared with the calibrated model results, and the variation in LDBP behavior due to a change in any of the hyperparameters was discussed. Accordingly, three primary characteristics were identified to diagnose the failure of LDBPs. Those characteristics were utilized to establish a decision tree, a supervised machine learning algorithm that can be used to predict the ultimate capacity of an LDBP.
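A hedged sketch of the final step described above: a decision tree trained on a labelled table of pile and soil parameters to predict ultimate capacity. The CSV file, column names, and tree depth are hypothetical; the paper's actual feature set comes from its numerical parametric study.

```python
# Decision-tree regression of ultimate capacity from pile geometry and soil parameters.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_absolute_error

df = pd.read_csv("ldbp_parametric_study.csv")                 # hypothetical dataset
features = ["pile_diameter_m", "pile_length_m", "soil_friction_angle_deg",
            "soil_cohesion_kpa", "soil_modulus_mpa"]          # assumed predictors
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["ultimate_capacity_kn"], test_size=0.2, random_state=0)

tree = DecisionTreeRegressor(max_depth=5, random_state=0).fit(X_train, y_train)
print("MAE [kN]:", mean_absolute_error(y_test, tree.predict(X_test)))
```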


Hypertension ◽  
2021 ◽  
Vol 78 (5) ◽  
pp. 1595-1604
Author(s):  
Fabrizio Buffolo ◽  
Jacopo Burrello ◽  
Alessio Burrello ◽  
Daniel Heinrich ◽  
Christian Adolf ◽  
...  

Primary aldosteronism (PA) is the cause of arterial hypertension in 4% to 6% of patients, and 30% of patients with PA are affected by unilateral and surgically curable forms. Current guidelines recommend screening ≈50% of patients with hypertension for PA on the basis of individual factors, while some experts suggest screening all patients with hypertension. To define the risk of PA and tailor the diagnostic workup to the individual risk of each patient, we developed a conventional scoring system and supervised machine learning algorithms using a retrospective cohort of 4059 patients with hypertension. On the basis of 6 widely available parameters, we developed a numerical score and 308 machine learning-based models, selecting the one with the highest diagnostic performance. After validation, we obtained high predictive performance with our score (optimized sensitivity of 90.7% for PA and 92.3% for unilateral PA [UPA]). The machine learning-based model provided the highest performance, with an area under the curve of 0.834 for PA and 0.905 for diagnosis of UPA, and optimized sensitivity of 96.6% for PA and 100.0% for UPA at validation. The application of these predictive tools allowed the identification of a subgroup of patients with a very low risk of PA (0.6% for both models) and a null probability of having UPA. In conclusion, this score and the machine learning algorithm can accurately predict the individual pretest probability of PA in patients with hypertension and circumvent screening in up to 32.7% of patients using the machine learning-based model, without omitting patients with surgically curable UPA.
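An illustrative sketch only, not the published model: a classifier trained on a handful of routinely available parameters, with the decision threshold tuned for high sensitivity so that few true PA cases are missed. The parameter names, the model family, and the 95% sensitivity target are assumptions.

```python
# Train a sensitivity-oriented screening classifier on a hypothetical hypertension cohort.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

df = pd.read_csv("hypertension_cohort.csv")                   # hypothetical cohort table
predictors = ["systolic_bp", "diastolic_bp", "serum_potassium",
              "antihypertensive_drug_count", "age", "sex"]    # assumed 6 parameters
X_train, X_test, y_train, y_test = train_test_split(
    df[predictors], df["primary_aldosteronism"], test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
probs = model.predict_proba(X_test)[:, 1]
print("AUC:", roc_auc_score(y_test, probs))

# Choose the first ROC threshold that reaches the target sensitivity (recall).
fpr, tpr, thresholds = roc_curve(y_test, probs)
threshold = thresholds[np.argmax(tpr >= 0.95)]
print("threshold for >=95% sensitivity:", threshold)
```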


2021 ◽  
Author(s):  
Marian Popescu ◽  
Rebecca Head ◽  
Tim Ferriday ◽  
Kate Evans ◽  
Jose Montero ◽  
...  

This paper presents advancements in machine learning and cloud deployment that enable rapid and accurate automated lithology interpretation. A supervised machine learning technique is described that enables rapid, consistent, and accurate lithology prediction alongside quantitative uncertainty from large wireline or logging-while-drilling (LWD) datasets. To leverage supervised machine learning, a team of geoscientists and petrophysicists made detailed lithology interpretations of wells to generate a comprehensive training dataset. Lithology interpretations were based on deterministic cross-plotting, utilizing and combining various raw logs. This training dataset was used to develop a model and test a machine learning pipeline. The pipeline was applied to a dataset previously unseen by the algorithm to predict lithology. A quality-checking process was performed by a petrophysicist to validate the new predictions delivered by the pipeline against human interpretations. Confidence in the interpretations was assessed in two ways. The prior probability was calculated, a measure of confidence that the input data are recognized by the model. The posterior probability was calculated, which quantifies the likelihood that a specified depth interval comprises a given lithology. The supervised machine learning algorithm ensured that the wells were interpreted consistently by removing interpreter biases and inconsistencies. The scalability of cloud computing enabled a large log dataset to be interpreted rapidly; >100 wells were interpreted consistently in five minutes, yielding a >70% lithological match to the human petrophysical interpretation. Supervised machine learning methods have strong potential for classifying lithology from log data because: 1) they can automatically define complex, non-parametric, multi-variate relationships across several input logs; and 2) they allow the confidence of classifications to be quantified. Furthermore, this approach captured the knowledge and nuances of an interpreter's decisions by training the algorithm on human-interpreted labels. In the hydrocarbon industry, the quantity of generated data is predicted to increase by >300% between 2018 and 2023 (IDC, Worldwide Global DataSphere Forecast, 2019–2023). Additionally, the industry holds vast legacy data. This supervised machine learning approach can unlock the potential of some of these datasets by providing consistent lithology interpretations rapidly, allowing resources to be used more effectively.
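A minimal sketch of the prediction step described above: a probabilistic classifier trained on human-interpreted wells and applied to an unseen well, returning a posterior probability per depth sample for each lithology class. The log names (GR, RHOB, NPHI, DT), file names, and model choice are assumptions for illustration.

```python
# Lithology prediction with per-depth posterior probabilities from wireline logs.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

train = pd.read_csv("interpreted_wells.csv")     # hypothetical human-labelled training wells
new_well = pd.read_csv("unseen_well.csv")        # hypothetical well to be predicted

logs = ["GR", "RHOB", "NPHI", "DT"]              # assumed input curves
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(train[logs], train["lithology"])

# Posterior probability of each lithology class at each depth sample.
posteriors = pd.DataFrame(clf.predict_proba(new_well[logs]),
                          columns=clf.classes_, index=new_well["DEPTH"])
new_well["predicted_lithology"] = clf.predict(new_well[logs])
print(posteriors.head())
```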


Author(s):  
Johannes René Kappes ◽  
David Alen Huber ◽  
Johannes Kirchebner ◽  
Martina Sonnweber ◽  
Moritz Philipp Günther ◽  
...  

The burden of self-injury among offenders undergoing inpatient treatment in forensic psychiatry is substantial. This exploratory study aims to add to the previously sparse literature on the correlates of self-injury in inpatient forensic patients with schizophrenia spectrum disorders (SSD). Employing a sample of 356 inpatients with SSD treated in a Swiss forensic psychiatry hospital, patient data on 512 potential predictor variables were retrospectively collected via file analysis. The dataset was examined using supervised machine learning to distinguish between patients who had engaged in self-injurious behavior during forensic hospitalization and those who had not. Based on a combination of ten variables, including psychiatric history, criminal history, psychopathology, and pharmacotherapy, the final machine learning model was able to discriminate between self-injury and no self-injury with a balanced accuracy of 68% and a predictive power of AUC = 71%. Results suggest that forensic psychiatric patients with SSD who self-injured were younger both at the time of onset and at the time of first entry into the federal criminal record. They exhibited more severe psychopathological symptoms at the time of admission, including higher levels of depression and anxiety and greater difficulty with abstract reasoning. Of all the predictors identified, symptoms of depression and anxiety may be the most promising treatment targets for the prevention of self-injury in inpatient forensic patients with SSD due to their modifiability and should be further substantiated in future studies.
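A hedged sketch of the evaluation described above: a classifier for a relatively rare binary outcome scored with balanced accuracy and AUC. The data set, predictors, class balance, and model are placeholders, not the study's variables.

```python
# Evaluate a classifier on an imbalanced binary outcome with balanced accuracy and AUC.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import balanced_accuracy_score, roc_auc_score

X, y = make_classification(n_samples=356, n_features=30, weights=[0.8, 0.2], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

clf = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print("balanced accuracy:", balanced_accuracy_score(y_test, clf.predict(X_test)))
print("AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```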


Author(s):  
Kazuko Fuchi ◽  
Eric M. Wolf ◽  
David S. Makhija ◽  
Nathan A. Wukie ◽  
Christopher R. Schrock ◽  
...  

A machine learning algorithm that performs multifidelity domain decomposition is introduced. While the design of complex systems can be facilitated by numerical simulations, the determination of appropriate physics couplings and levels of model fidelity can be challenging. The proposed method automatically divides the computational domain into subregions and assigns the required fidelity level, using a small number of high-fidelity simulations to generate training data and low-fidelity solutions as input data. Unsupervised and supervised machine learning algorithms are used to correlate features from low-fidelity solutions to the fidelity assignment. The effectiveness of the method is demonstrated on a problem of viscous fluid flow around a cylinder at Re ≈ 20. Ling et al. built physics-informed invariance and symmetry properties into machine learning models and demonstrated improved model generalizability. Along these lines, we avoid using problem-dependent features such as coordinates of sample points, object geometry, or flow conditions as explicit inputs to the machine learning model. The use of pointwise flow features generates large data sets from only one or two high-fidelity simulations, and the fidelity predictor model achieved 99.5% accuracy at training points. The trained model was shown to be capable of predicting a fidelity map for a problem with an altered cylinder radius. A significant improvement in prediction performance was seen when the inputs were expanded to include multiscale features that incorporate neighborhood information.
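A conceptual sketch of the fidelity-assignment step: pointwise features from a low-fidelity flow solution (here, velocity magnitude and a vorticity proxy), augmented with neighbourhood averages as a crude multiscale feature, are mapped to a per-point fidelity label by a supervised classifier. The feature choices and the labelling rule are illustrative assumptions; the random labels below merely stand in for labels derived from comparing low- and high-fidelity solutions.

```python
# Map pointwise low-fidelity flow features to a per-point fidelity assignment.
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.ensemble import RandomForestClassifier

def build_features(u, v):
    """Stack pointwise and neighbourhood-averaged flow features on a 2-D grid."""
    speed = np.hypot(u, v)
    vort = np.gradient(v, axis=1) - np.gradient(u, axis=0)      # vorticity proxy
    local = [uniform_filter(f, size=5) for f in (speed, vort)]  # multiscale context
    return np.stack([speed, vort, *local], axis=-1).reshape(-1, 4)

# Low-fidelity solution on a grid (placeholder arrays in place of a real solver output).
u = np.random.rand(64, 64)
v = np.random.rand(64, 64)
X = build_features(u, v)

# Training labels would come from comparing low- and high-fidelity solutions;
# here a random label array stands in for that step.
y = np.random.randint(0, 2, size=X.shape[0])

clf = RandomForestClassifier(random_state=0).fit(X, y)
fidelity_map = clf.predict(X).reshape(64, 64)    # per-point fidelity assignment
```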

