Advances in reliability analysis and health prognostics using probabilistic machine learning

2020
Author(s):
Meng Li


2021
pp. 002224372110329
Author(s):  
Nicolas Padilla ◽  
Eva Ascarza

The success of Customer Relationship Management (CRM) programs ultimately depends on the firm's ability to identify and leverage differences across customers, a very difficult task when firms attempt to manage new customers, for whom only the first purchase has been observed. For those customers, the lack of repeated observations poses a structural challenge to inferring unobserved differences across them. This is what we call the “cold start” problem of CRM, whereby companies have difficulty leveraging existing data when they attempt to make inferences about customers at the beginning of their relationship. We propose a solution to the cold start problem by developing a probabilistic machine learning modeling framework that leverages the information collected at the moment of acquisition. The main aspect of the model is that it flexibly captures the latent dimensions that govern the behaviors observed at acquisition as well as future propensities to buy and to respond to marketing actions, using deep exponential families. The model can be integrated with a variety of demand specifications and is flexible enough to capture a wide range of heterogeneity structures. We validate our approach in a retail context and empirically demonstrate the model's ability to identify high-value customers, as well as those most sensitive to marketing actions, right after their first purchase.
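
To make the modeling idea concrete, the sketch below shows how a small deep exponential family linking acquisition-time covariates to latent customer traits and a purchase propensity could be written in a probabilistic programming framework. It is a minimal illustration only, not the authors' specification: the NumPyro framework, the layer sizes, the gamma priors, and the Bernoulli purchase outcome are all assumptions made for the example.

```python
# Minimal sketch (not the authors' implementation) of a two-layer deep
# exponential family: acquisition covariates and latent traits drive a
# binary purchase outcome. All names, sizes, and priors are illustrative.
import jax.numpy as jnp
import numpyro
import numpyro.distributions as dist


def cold_start_def(acquisition_x, purchases=None):
    n, d = acquisition_x.shape
    k_top, k_bottom = 2, 4  # hypothetical latent layer sizes

    # Top layer: gamma-distributed latent traits per customer
    z_top = numpyro.sample(
        "z_top", dist.Gamma(1.0, 1.0).expand([n, k_top]).to_event(2))
    # Weights linking the top layer to the bottom layer
    w = numpyro.sample(
        "w", dist.Gamma(1.0, 1.0).expand([k_top, k_bottom]).to_event(2))

    # Bottom layer: latent traits whose rate depends on the layer above
    z_bottom = numpyro.sample(
        "z_bottom", dist.Gamma(1.0, jnp.dot(z_top, w)).to_event(2))

    # Acquisition covariates plus bottom-layer traits give the purchase propensity
    beta = numpyro.sample("beta", dist.Normal(0.0, 1.0).expand([d]).to_event(1))
    w_out = numpyro.sample("w_out", dist.Normal(0.0, 1.0).expand([k_bottom]).to_event(1))
    logits = acquisition_x @ beta + z_bottom @ w_out
    numpyro.sample("purchases", dist.Bernoulli(logits=logits), obs=purchases)
```

Under these assumptions, the model could be fitted with NumPyro's MCMC or variational machinery, and the posterior over the bottom-layer traits would play the role of the customer-level heterogeneity that is available right after the first purchase.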


2021
Author(s):  
Eunjeong Park ◽  
Kijeong Lee ◽  
Taehwa Han ◽  
Hyo Suk Nam

BACKGROUND: Assessing the symptoms of proximal weakness caused by neurological deficits requires expert knowledge and experienced neurologists. Recent advances in artificial intelligence and the Internet of Things (IoT) have enabled automated systems that emulate physicians' assessments.
OBJECTIVE: This study provides an agreement and reliability analysis of an automated scoring system for evaluating proximal weakness, comparing expert and non-expert use.
METHODS: We collected 144 observations from acute stroke patients in a neurological intensive care unit to measure proximal weakness of the upper and lower limbs. A neurologist performed the gold-standard assessment, and two medical students performed identical tests as non-expert assessments, for both manual and machine learning-based scaling of Medical Research Council (MRC) proximal scores. The system collects signals from sensors attached to patients' limbs and trains a machine learning assessment model using a hybrid of data-level and algorithm-level methods for ordinal, imbalanced, multi-class classification. For the agreement analysis, we investigated the percent agreement of MRC proximal scores and Bland-Altman plots of kinematic features between the expert and non-expert scaling. For the reliability analysis, we analyzed the intra-class correlation coefficients (ICCs) of kinematic features and Krippendorff's alpha of the three observers' scaling.
RESULTS: The mean percent agreement between the gold standard and the non-expert scaling was 0.542 for manual scaling and 0.708 for IoT-assisted machine learning scaling, a 30.63% enhancement. The ICCs of kinematic features measured using sensors ranged from 0.742 to 0.850, whereas Krippendorff's alpha of manual scaling for the three observers was 0.275. Krippendorff's alpha of machine learning scaling increased to 0.445, a 61.82% improvement.
CONCLUSIONS: Automated scaling using sensors and machine learning provided higher inter-rater agreement and reliability in assessing acute proximal weakness. The enhanced assessment supported by the proposed system can serve as a reliable assessment tool for non-experts in various emergent environments.
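
As a rough illustration of the agreement and reliability statistics reported above, the sketch below computes percent agreement against a gold standard and Krippendorff's alpha for three raters on ordinal MRC scores. This is not the authors' analysis code: the score arrays are invented examples, and the third-party krippendorff package is assumed to be available.

```python
# Illustrative sketch: percent agreement of non-experts with a gold-standard
# rater, plus Krippendorff's alpha across three raters of ordinal MRC scores.
# The arrays are made-up examples, not data from the study.
import numpy as np
import krippendorff  # third-party package, assumed installed

expert      = np.array([4, 3, 5, 2, 4, 3, 5, 1])   # gold-standard MRC scores
non_expert1 = np.array([4, 2, 5, 2, 3, 3, 5, 1])
non_expert2 = np.array([3, 3, 5, 2, 4, 2, 4, 1])

# Mean percent agreement of the non-experts with the gold standard
agreement = np.mean([(expert == non_expert1).mean(),
                     (expert == non_expert2).mean()])
print(f"mean percent agreement: {agreement:.3f}")

# Krippendorff's alpha for the three raters (MRC grades treated as ordinal)
ratings = np.vstack([expert, non_expert1, non_expert2])  # shape: (raters, units)
alpha = krippendorff.alpha(reliability_data=ratings,
                           level_of_measurement="ordinal")
print(f"Krippendorff's alpha: {alpha:.3f}")
```

The numbers produced by these toy arrays are arbitrary; the point is only the shape of the computation: raters in rows, observations in columns, and an ordinal level of measurement for the MRC grades.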


2021
Author(s):  
Florian Wellmann ◽  
Miguel de la Varga ◽  
Nilgün Güdük ◽  
Jan von Harten ◽  
Fabian Stamm ◽  
...  

Geological models, as 3-D representations of subsurface structures and property distributions, are used in many economic, scientific, and societal decision processes. These models are built on prior assumptions and imperfect information, and they often result from an integration of geological and geophysical data types with varying quality. These aspects result in uncertainties about the predicted subsurface structures and property distributions, which will affect the subsequent decision process.

We discuss approaches to evaluate uncertainties in geological models and to integrate geological and geophysical information in combined workflows. A first step is the consideration of uncertainties in prior model parameters on the basis of uncertainty propagation (forward uncertainty quantification). When applied to structural geological models with discrete classes, these methods result in a class probability for each point in space, often represented in tessellated grid cells. These results can then be visualized or forwarded to process simulations. Another option is to add risk functions for subsequent decision analyses. In recent work, these geological uncertainty fields have also been used as an input to subsequent geophysical inversions.

A logical extension to these existing approaches is the integration of geological forward operators into inverse frameworks, to enable a full flow of inference for a wider range of relevant parameters. We investigate here specifically the use of probabilistic machine learning tools in combination with geological and geophysical modeling. Challenges exist due to the hierarchical nature of the probabilistic models, but modern sampling strategies allow for efficient sampling in these complex settings. We showcase the application with examples combining geological modeling and geophysical potential field measurements in an integrated model for improved decision making.
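
As a concrete illustration of the forward uncertainty propagation step, the sketch below draws samples from a prior on a single uncertain model parameter (an interface depth), pushes each sample through a toy forward operator, and accumulates per-cell class probabilities and information entropy. The forward_geomodel function is a hypothetical stand-in for a real structural modeling engine, and the prior is made up for illustration.

```python
# Minimal sketch of Monte Carlo forward uncertainty propagation for a
# structural model with discrete geological units. `forward_geomodel` is a
# hypothetical stand-in for an actual geological modeling engine.
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_units, n_draws = 50, 3, 1000

def forward_geomodel(interface_depth):
    """Toy forward operator: assign each cell of a 1-D column to a unit,
    depending on a single uncertain interface depth."""
    depths = np.linspace(0.0, 100.0, n_cells)
    units = np.where(depths < interface_depth, 0, 1)
    units[depths > 80.0] = 2          # a fixed deeper unit, for illustration
    return units

# Propagate the prior on the interface depth through the forward operator
counts = np.zeros((n_cells, n_units))
for _ in range(n_draws):
    depth_sample = rng.normal(loc=40.0, scale=5.0)   # illustrative prior
    units = forward_geomodel(depth_sample)
    counts[np.arange(n_cells), units] += 1

class_probability = counts / n_draws                 # per-cell unit probabilities
information_entropy = -(class_probability *
                        np.log(class_probability + 1e-12)).sum(axis=1)
```

The resulting per-cell class probabilities (and their entropy) are the kind of uncertainty field that can be visualized, forwarded to process simulations, or used as an input to a subsequent geophysical inversion, as described above.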

