Data-driven decision making based on evidential reasoning approach and machine learning algorithms

2021 ◽  
pp. 107622
Author(s):  
Chao Fu ◽  
Che Xu ◽  
Min Xue ◽  
Weiyong Liu ◽  
Shanlin Yang


2020 ◽
Vol 38 (15_suppl) ◽  
pp. 520-520 ◽  
Author(s):  
André Pfob ◽  
Babak Mehrara ◽  
Jonas Nelson ◽  
Edwin G. Wilkins ◽  
Andrea Pusic ◽  
...  

520 Background: Post-surgical satisfaction with breasts is a key outcome for women undergoing cancer-related mastectomy and reconstruction. Current decision making relies on group-level evidence, which may not offer the optimal treatment choice for individuals. We developed and validated machine learning algorithms to predict individual post-surgical satisfaction with breasts, aiming to facilitate individualized, data-driven decision making in breast cancer. Methods: We collected clinical, perioperative, and patient-reported data from 3058 women who underwent breast reconstruction due to breast cancer across 11 sites in North America. We trained and evaluated four algorithms (regularized regression, support vector machine, neural network, regression tree) to predict significant changes in satisfaction with breasts at 2-year follow-up using the validated BREAST-Q measure. Accuracy and area under the receiver operating characteristic curve (AUC) were used to determine algorithm performance in the test sample. Results: Machine learning algorithms were able to accurately predict changes in women’s satisfaction with breasts (see table). Baseline satisfaction with breasts was the most informative predictor of outcome, followed by radiation during or after reconstruction, nipple-sparing and mixed mastectomy, implant-based reconstruction, chemotherapy, unilateral mastectomy, lower psychological well-being, and obesity. Conclusions: We reveal the crucial role of patient-reported outcomes in determining post-operative outcomes and show that machine learning algorithms are suitable for identifying individuals who might benefit from treatment decisions other than those suggested by group-level evidence. We provide a web-based tool for individuals considering mastectomy and reconstruction (importdemo.com). Clinical trial information: NCT01723423. [Table: see text]
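
As a rough illustration of the modeling step described in this abstract, the sketch below compares the four model families named by the authors (regularized regression, support vector machine, neural network, and a tree model, the classification counterpart of the regression tree) on a binary satisfaction-improvement label, reporting accuracy and AUC on a held-out test set. This is not the authors' code; the file name, feature set, and label column are hypothetical placeholders.

```python
# Minimal sketch, assuming a tabular cohort file with baseline BREAST-Q and clinical
# features and a binary "satisfaction_improved" outcome (all names hypothetical).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, roc_auc_score

df = pd.read_csv("breastq_cohort.csv")                       # hypothetical file
X = df.drop(columns=["satisfaction_improved"])               # baseline BREAST-Q + clinical features
y = df["satisfaction_improved"]                              # 1 = clinically significant improvement

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

models = {
    "regularized regression": make_pipeline(StandardScaler(),
                                            LogisticRegression(penalty="l2", max_iter=1000)),
    "support vector machine": make_pipeline(StandardScaler(), SVC(probability=True)),
    "neural network": make_pipeline(StandardScaler(),
                                    MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                                                  random_state=0)),
    "decision tree": DecisionTreeClassifier(max_depth=4, random_state=0),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    proba = model.predict_proba(X_test)[:, 1]
    print(f"{name}: accuracy={accuracy_score(y_test, model.predict(X_test)):.2f}, "
          f"AUC={roc_auc_score(y_test, proba):.2f}")
```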


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Alan Brnabic ◽  
Lisa M. Hess

Background: Machine learning is a broad term encompassing a number of methods that allow the investigator to learn from the data. These methods may permit large real-world databases to be more rapidly translated into applications that inform patient-provider decision making. Methods: This systematic literature review was conducted to identify published observational research that employed machine learning to inform decision making at the patient-provider level. The search strategy was implemented, and studies meeting the eligibility criteria were evaluated by two independent reviewers. Relevant data related to study design, statistical methods, and strengths and limitations were identified; study quality was assessed using a modified version of the Luo checklist. Results: A total of 34 publications from January 2014 to September 2020 were identified and evaluated for this review. Diverse methods, statistical packages, and approaches were used across the identified studies. The most common methods included decision tree and random forest approaches. Most studies applied internal validation, but only two conducted external validation. Most studies utilized one algorithm, and only eight studies applied multiple machine learning algorithms to the data. Seven items on the Luo checklist were not met by more than 50% of the published studies. Conclusions: A wide variety of approaches, algorithms, statistical software, and validation strategies were employed in the application of machine learning methods to inform patient-provider decision making. Multiple machine learning approaches should be used, the model selection strategy should be clearly defined, and both internal and external validation are necessary to ensure that decisions for patient care are made with the highest-quality evidence. Future work should routinely employ ensemble methods incorporating multiple machine learning algorithms.
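
To make the review's recommendation concrete, the sketch below combines several learners in a soft-voting ensemble, performs internal validation with cross-validation on a development cohort, and then checks transportability on a separate external cohort. It is not drawn from any reviewed study; the cohort file names and outcome column are hypothetical.

```python
# Minimal sketch, assuming a development cohort and an independent external cohort,
# each with a binary "outcome" column (names are placeholders).
import pandas as pd
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score
from sklearn.metrics import roc_auc_score

internal = pd.read_csv("development_cohort.csv")    # data used to build the model
external = pd.read_csv("external_cohort.csv")       # independent site or time period
X_int, y_int = internal.drop(columns=["outcome"]), internal["outcome"]
X_ext, y_ext = external.drop(columns=["outcome"]), external["outcome"]

# Ensemble of multiple algorithms, as the review recommends.
ensemble = VotingClassifier(
    estimators=[
        ("tree", DecisionTreeClassifier(max_depth=4, random_state=0)),
        ("forest", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("logit", LogisticRegression(max_iter=1000)),
    ],
    voting="soft",
)

# Internal validation: 5-fold cross-validated AUC on the development cohort.
print("internal AUC:", cross_val_score(ensemble, X_int, y_int, cv=5, scoring="roc_auc").mean())

# External validation: refit on all development data, then score the untouched cohort.
ensemble.fit(X_int, y_int)
print("external AUC:", roc_auc_score(y_ext, ensemble.predict_proba(X_ext)[:, 1]))
```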


Author(s):  
Pragya Paudyal ◽  
B.L. William Wong

In this paper we introduce the problem of algorithmic opacity and the challenges it presents to ethical decision-making in criminal intelligence analysis. Machine learning algorithms have played important roles in the decision-making process over the past decades. Intelligence analysts are increasingly being presented with smart black-box automation that uses machine learning algorithms to find patterns or interesting and unusual occurrences in big data sets. Algorithmic opacity is the lack of visibility into computational processes, such that humans are not able to inspect their inner workings to ascertain for themselves how the results and conclusions were computed. This is a problem that leads to several ethical issues. In the VALCRI project, we developed an abstraction hierarchy and abstraction decomposition space to identify important functional relationships and system invariants in relation to ethical goals. Such explanatory relationships can be valuable for making algorithmic processes transparent during criminal intelligence analysis.


2020 ◽  
Vol 110 ◽  
pp. 91-95 ◽  
Author(s):  
Ashesh Rambachan ◽  
Jon Kleinberg ◽  
Jens Ludwig ◽  
Sendhil Mullainathan

There are widespread concerns that the growing use of machine learning algorithms in important decisions may reproduce and reinforce existing discrimination against legally protected groups. Most of the attention to date on issues of “algorithmic bias” or “algorithmic fairness” has come from computer scientists and machine learning researchers. We argue that concerns about algorithmic fairness are at least as much about questions of how discrimination manifests itself in data, decision-making under uncertainty, and optimal regulation. To fully answer these questions, an economic framework is necessary, and as a result economists have much to contribute.


2021 ◽  
Author(s):  
Ali Nadernezhad ◽  
Jürgen Groll

With the continuous growth of extrusion bioprinting techniques, ink formulations based on rheology modifiers are becoming increasingly popular, as they enable 3D printing of otherwise non-printable, biologically favored materials. However, benchmarking and characterization of such systems are inherently complicated by the variety of rheology modifiers and the differences in the mechanisms by which they induce printability. This study explains induced printability in such formulations using machine learning algorithms that describe the underlying basis for deciding whether a formulation is printable. For this purpose, a library of rheological data and printability scores was produced for 180 different formulations of hyaluronic acid solutions with varying molecular weights and concentrations and three rheology modifiers. A feature screening methodology was applied to collect and separate the impactful features, which consisted of physically interpretable and easily measurable properties of the formulations. In the final step, all relevant features influencing the model’s output were analyzed with advanced yet explainable statistical methods. The outcome provides a guideline for designing new formulations based on data-driven correlations from multiple systems.
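
A minimal sketch of the kind of pipeline the abstract describes: classify formulations as printable from a handful of rheological descriptors, then rank which features drive the decision. The column names stand in for the physically interpretable, easily measurable properties the study refers to and are hypothetical, as is the data file; permutation importance is used here as one explainable way to screen features, not necessarily the authors' exact method.

```python
# Sketch only: printability classification plus feature ranking on hypothetical columns.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.inspection import permutation_importance

data = pd.read_csv("formulation_library.csv")         # 180 formulations, hypothetical file
features = ["zero_shear_viscosity", "yield_stress",   # physically interpretable descriptors
            "shear_thinning_index", "recovery_ratio"]
X, y = data[features], data["printable"]               # printability score thresholded to 0/1

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

clf = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))

# Explainability step: permutation importance shows which rheological features
# most affect the classification, mirroring the feature-screening idea.
imp = permutation_importance(clf, X_test, y_test, n_repeats=30, random_state=0)
for name, score in sorted(zip(features, imp.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```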


2021 ◽  
Vol 73 (09) ◽  
pp. 43-43
Author(s):  
Reza Garmeh

The digital transformation that began several years ago continues to grow and evolve. With new advancements in data analytics and machine-learning algorithms, field developers today see more benefits to upgrading their traditional development workflows to automated artificial-intelligence workflows. The transformation has helped develop more-efficient and truly integrated development approaches. Many development scenarios can be automatically generated, examined, and updated very quickly. These approaches become more valuable when coupled with physics-based integrated asset models that are kept close to actual field performance to reduce uncertainty for reactive decision making. In unconventional basins with enormous completion and production databases, data-driven decisions powered by machine-learning techniques are increasingly popular for solving field development challenges and optimizing cube development. Finding a trend within massive amounts of data requires augmented artificial intelligence, in which machine learning and human expertise are coupled. With slowed activity and uncertainty in the oil and gas industry from the COVID-19 pandemic, and growing pressure for cleaner energy and environmental regulations, operators have had to shift economic modeling toward environmental considerations, predicting operational hazards and planning mitigations. This has highlighted the value of field development optimization, shifting from traditional workflow iterations on data assimilation and sequential decision making to deep-reinforcement-learning algorithms that find the best well placement and well type for the next producer or injector. Operators are trying to adapt to the new environment and enhance their capabilities to efficiently plan, execute, and operate field development plans. Collaboration between different disciplines and integrated analyses are key to the success of optimized development strategies. These selected papers and the suggested additional reading provide a good view of what is evolving in field development workflows using data analytics and machine learning in the era of digital transformation. Recommended additional reading at OnePetro: www.onepetro.org.
SPE 203073 - Data-Driven and AI Methods To Enhance Collaborative Well Planning and Drilling-Risk Prediction by Richard Mohan, ADNOC, et al.
SPE 200895 - Novel Approach To Enhance the Field Development Planning Process and Reservoir Management To Maximize the Recovery Factor of Gas Condensate Reservoirs Through Integrated Asset Modeling by Oswaldo Espinola Gonzalez, Schlumberger, et al.
SPE 202373 - Efficient Optimization and Uncertainty Analysis of Field Development Strategies by Incorporating Economic Decisions in Reservoir Simulation Models by James Browning, Texas Tech University, et al.
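
The column mentions a shift toward deep reinforcement learning for choosing the next well. The toy sketch below uses a much simpler bandit-style value update over a few hypothetical candidate locations, purely to illustrate the sequential, reward-driven framing; a real workflow would couple a deep policy with a reservoir simulator, and every name and number here is made up.

```python
# Toy illustration (not a field workflow): learn which candidate well site pays off best
# by repeatedly "placing" a well, observing a noisy surrogate reward, and updating values.
import random

candidate_locations = ["A", "B", "C", "D"]              # hypothetical candidate well sites
true_value = {"A": 0.2, "B": 0.9, "C": 0.5, "D": 0.4}   # unknown to the learner

q = {loc: 0.0 for loc in candidate_locations}           # estimated value per placement
alpha, epsilon = 0.1, 0.2                               # learning rate, exploration rate

for episode in range(500):
    # Epsilon-greedy choice of the next well to place.
    if random.random() < epsilon:
        loc = random.choice(candidate_locations)
    else:
        loc = max(q, key=q.get)
    # Noisy "production" reward from the chosen placement (stand-in for a simulator run).
    reward = true_value[loc] + random.gauss(0, 0.05)
    q[loc] += alpha * (reward - q[loc])

print("learned placement values:", {k: round(v, 2) for k, v in q.items()})
print("recommended next well:", max(q, key=q.get))
```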


2021 ◽  
Author(s):  
Ram Sunder Kalyanraman ◽  
Xiaoli Chen ◽  
Po-Yen Wu ◽  
Kevin Constable ◽  
Amit Govil ◽  
...  

Ultrasonic and sonic logs are increasingly used to evaluate the quality of cement placement in the annulus behind the pipe and its potential to perform as a barrier. Wireline logs are carried out in widely varying conditions and attempt to evaluate a variety of cement formulations in the annulus. The annulus geometry is complex due to pipe standoff and often affects the behavior (properties) of the cement. Transforming ultrasonic data into a meaningful cement evaluation is also a complex task and requires expertise to ensure the processing is carried out correctly and interpreted correctly. Cement formulations can vary from heavyweight cement to ultralight foamed cements. The ultrasonic log-based evaluation, using legacy practices, works well for cements that are well behaved and well bonded to casing. In such cases, a lightweight cement and a heavyweight cement, when bonded, can be easily discriminated from gas or liquid (mud) through simple quantitative thresholds, resulting in a Solid (S)-Liquid (L)-Gas (G) map. However, ultralight and foamed cements may overlap with mud in quantitative terms. Cements may debond from casing with a gap (either wet or dry), resulting in a very complex log response that may not be amenable to simple threshold-based discrimination of S-L-G. Evaluating the cement sheath and inferring its ability to serve as a barrier are complex tasks. It is therefore imperative that adequate processes mitigate errors in processing and interpretation and bring reliability and consistency. Processing inconsistencies arise when we are unable to correctly characterize the borehole properties, whether due to suboptimal measurements or to assumptions about the borehole environment. Experts can and do recognize inconsistencies in processing and can advise appropriate resolutions to ensure correct processing. The same decision-making criteria that experts follow can be implemented through autonomous workflows. The ability for software to autocorrect is not only possible but significantly improves the reliability of the product for wellsite decisions. In complex situations of debonded cements and ultralight cements, we may need to approach the interpretation from a data-behavior-based perspective, which can be explained by physics and modeling or through field observations by experts. This leads to a novel seven-class annulus characterization [5S-L-G], which we expect will bring improved clarity on annulus behavior. We explain the rationale for this approach by providing a catalog of log responses for the seven classes. In addition, we introduce the ability to carry out such analysis autonomously through machine learning. Such machine learning algorithms are best applied after ensuring the data is correctly processed. We demonstrate the capability through a few field examples. The ability to emulate an "expert" through software makes it possible to autonomously correct processing inconsistencies prior to an autonomous interpretation, thereby significantly enhancing the reliability and consistency of cement evaluation and ruling out issues related to subjectivity, training, and competency.
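
As a loose illustration of the autonomous interpretation step, the sketch below trains a multiclass model that maps processed log attributes to seven annulus classes in the spirit of the proposed [5S-L-G] characterization (five solid/cement states plus liquid and gas). The feature names, class labels, and training catalog are hypothetical placeholders, not the actual measurement channels or the vendor workflow.

```python
# Sketch only: seven-class annulus characterization from processed log features,
# trained on an expert-labeled catalog (all names hypothetical). Illustrative labels
# might be: bonded_heavy, bonded_light, debonded_dry, debonded_wet, contaminated,
# liquid, gas.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

logs = pd.read_csv("annulus_training_catalog.csv")     # expert-labeled catalog, hypothetical
features = ["acoustic_impedance", "flexural_attenuation",
            "casing_thickness", "eccentering"]          # placeholder log attributes
X = logs[features]
y = logs["annulus_class"]                               # one of the seven class labels

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)

# Per-class precision/recall shows where classes (e.g., foamed cement vs. mud) overlap.
print(classification_report(y_test, model.predict(X_test)))
```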

