Decision making under incomplete data using the imprecise Dirichlet model

2007, Vol 44 (3), pp. 322-338
Author(s):  
L.V. Utkin ◽  
Th. Augustin
Author(s):  
JOAQUÍN ABELLÁN ◽  
ANDRÉS R. MASEGOSA

In this paper, we present the following contributions: (i) an adaptation of a precise classifier to imprecise classification for cost-sensitive problems; and (ii) a new measure of the performance of an imprecise classifier. The imprecise classifier is based on a method for building simple decision trees that we have modified for imprecise classification. It uses the Imprecise Dirichlet Model (IDM) to represent information, with upper entropy as the splitting criterion. Our new measure for comparing imprecise classifiers takes errors into account, which no previous measure for classifiers of this type has done. It penalizes wrong predictions using a cost matrix of the errors, given by an expert, and it quantifies the success of an imprecise classifier based on the cardinality of the set of non-dominated states it returns. To evaluate our imprecise classification method and the new measure, we compare against a second imprecise classifier, the Naive Credal Classifier (NCC), a variation of the classic Naive Bayes that uses the IDM, together with a known measure for imprecise classification.
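The IDM interval estimates and the interval-dominance step behind the set of non-dominated states can be sketched as follows. This is a minimal illustration, not the authors' tree-building code; the hyperparameter value and the example class counts are assumptions.

```python
# Sketch of IDM probability intervals and interval dominance;
# s is the IDM hyperparameter (s = 1 is a common choice).

def idm_intervals(counts, s=1.0):
    """Lower/upper class probabilities under the Imprecise Dirichlet Model."""
    n = sum(counts)
    return [(c / (n + s), (c + s) / (n + s)) for c in counts]

def non_dominated(intervals):
    """Classes that are not interval-dominated: class i survives unless
    some other class j has a lower bound exceeding i's upper bound."""
    keep = []
    for i, (_, hi) in enumerate(intervals):
        if all(lo_j <= hi for j, (lo_j, _) in enumerate(intervals) if j != i):
            keep.append(i)
    return keep

counts = [8, 2, 0]           # hypothetical class frequencies in a tree leaf
ivs = idm_intervals(counts)  # [(0.727, 0.818), (0.182, 0.273), (0.0, 0.091)]
print(non_dominated(ivs))    # class 0 dominates the other two
```

When classes are balanced (e.g. counts [3, 3]), both survive and the classifier returns an imprecise, two-element set; the new measure would then weigh the set's cardinality against the cost matrix.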


Author(s):  
Suranga C. H. Geekiyanage ◽  
Dan Sui ◽  
Bernt S. Aadnoy

Drilling industry operations depend heavily on digital information. Data analysis is the process of acquiring, transforming, interpreting, modelling, displaying and storing data with the aim of extracting useful information, so that decision-making, action execution, event detection and incident management can be handled efficiently and reliably. This paper provides an approach to understand, cleanse, improve and interpret post-well or real-time data in order to preserve or enhance data features such as accuracy, consistency, reliability and validity. Data quality management is a process with three major phases. Phase I is a pre-data-quality evaluation that identifies data issues such as missing or incomplete data, non-standard or invalid data, and redundant data. Phase II is an implementation of data quality management practices such as filtering, data assimilation and data reconciliation to improve data accuracy and discover useful information. Phase III is a post-data-quality evaluation, conducted to assure data quality and enhance system performance. In this study, a laboratory-scale drilling rig with a control system capable of drilling is used for data acquisition and quality improvement. Safe and efficient performance of such a control system relies heavily on the quality and sufficient availability of the data obtained while drilling. Pump pressure, top-drive rotational speed, weight on bit, drill-string torque and bit depth are the available measurements. The data analysis is challenged by issues such as noise corruption, time delays, missing or incomplete data and external disturbances. To address these issues, different data quality improvement practices are applied during testing. These techniques help the intelligent system achieve better decision-making and quicker fault detection.
The study on the laboratory-scale drilling rig clearly demonstrates the need for a proper data quality management process and a clear understanding of signal processing methods to carry out intelligent digitalization in the oil and gas industry.
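A minimal sketch of a Phase I check followed by a simple Phase II repair: flag missing samples in one sensor channel, then fill the gaps by linear interpolation between the nearest valid neighbours. This is not the authors' pipeline; the channel name, values and gap-filling choice are invented for the example.

```python
# Phase I: detect missing samples; Phase II: fill gaps by linear interpolation.

def flag_missing(samples):
    """Indices where the sample is absent (None)."""
    return [i for i, v in enumerate(samples) if v is None]

def interpolate_gaps(samples):
    """Fill single- or multi-sample gaps by linear interpolation between
    the nearest valid neighbours; edge gaps are held at the nearest value."""
    out = list(samples)
    i = 0
    while i < len(out):
        if out[i] is None:
            j = i
            while j < len(out) and out[j] is None:
                j += 1                       # end of the gap (exclusive)
            lo = out[i - 1] if i > 0 else out[j]
            hi = out[j] if j < len(out) else out[i - 1]
            for k in range(i, j):
                t = (k - i + 1) / (j - i + 1)
                out[k] = lo + t * (hi - lo)
            i = j
        else:
            i += 1
    return out

pump_pressure = [10.0, None, None, 13.0, 14.0]  # hypothetical channel, two-sample gap
print(flag_missing(pump_pressure))               # [1, 2]
print(interpolate_gaps(pump_pressure))           # [10.0, 11.0, 12.0, 13.0, 14.0]
```

Real drilling channels would also need the noise filtering and time-delay handling the paper mentions; interpolation is only the simplest of the Phase II practices.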


Author(s):  
LEV V. UTKIN

One of the most common performance measures in the selection and management of projects is the Net Present Value (NPV). In this paper, we study the case in which the initial data about the NPV parameters (cash flows and the discount rate) are given as intervals supplied by experts. A method for computing the NPV based on random set theory is proposed, and three conditions of independence of the parameters are taken into account. Moreover, the imprecise Dirichlet model is considered for obtaining more cautious bounds on the NPV. Numerical examples illustrate the proposed approach for computing the NPV.
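A minimal sketch of interval NPV bounds under simple term-wise interval arithmetic, where each cash flow and the discount rate are expert-supplied intervals. This corresponds only to the crudest independence assumption, not to the paper's random set or IDM constructions; the example intervals are invented.

```python
# Term-wise interval bounds on NPV = sum_t cf_t / (1 + r)**t.
# Each term is monotone in cf_t and in r, so its extremes occur
# at interval endpoints.

def npv_bounds(cash_flows, rate):
    """cash_flows: list of (lo, hi) per period t = 0, 1, ...
    rate: (lo, hi) interval for the discount rate."""
    r_lo, r_hi = rate
    lo = hi = 0.0
    for t, (cf_lo, cf_hi) in enumerate(cash_flows):
        vals = [cf / (1.0 + r) ** t
                for cf in (cf_lo, cf_hi) for r in (r_lo, r_hi)]
        lo += min(vals)
        hi += max(vals)
    return lo, hi

# initial outlay of 100, two uncertain inflows, discount rate in [5%, 10%]
flows = [(-100.0, -100.0), (50.0, 60.0), (60.0, 70.0)]
lo, hi = npv_bounds(flows, (0.05, 0.10))
print(round(lo, 2), round(hi, 2))
```

Here the NPV interval straddles zero, so the sign of the project's value is undecided under the given imprecision; the paper's random-set treatment and the IDM then refine or widen such bounds depending on the assumed dependence between parameters.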


2021, Vol 71 (2), pp. 239-256
Author(s):  
Saadat Boulanouar ◽  
Hafaifa Ahmed ◽  
Belhadef Rachid ◽  
Kouzou Abdellah

This paper proposes a decision-making approach based on a fuzzy prognostic system to monitor the vibrations of a gas turbine using real-time information obtained from the installed sensors. The frequently occurring case of incomplete data is taken into account by reconstituting full data records from the incomplete measurements. The proposed fuzzy prognostic system allows analysis of the data obtained from the vibration indicators of a gas turbine for accurate fault identification, in order to avoid performance degradation of such systems. To demonstrate the robustness of the proposed approach, several tests have been performed.
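The fuzzy grading of a vibration indicator can be sketched with triangular membership functions. This is an illustrative toy, not the authors' system; the state names, RMS thresholds and units are invented for the example.

```python
# Fuzzy grading of an RMS vibration level with triangular membership functions.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def vibration_state(rms):
    """Degree of membership of an RMS vibration level (mm/s, hypothetical
    thresholds) in each monitoring state."""
    return {
        "normal":  tri(rms, -1.0, 0.0, 4.0),
        "warning": tri(rms, 2.0, 5.0, 8.0),
        "alarm":   tri(rms, 6.0, 10.0, 14.0),
    }

grades = vibration_state(5.5)
print(max(grades, key=grades.get))  # prints "warning"
```

A full prognostic system would feed such membership grades into fuzzy rules and a defuzzification step, and would first reconstitute any missing sensor samples as the paper describes.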


2021, Vol 2020
Author(s):  
Femi Obasun

This report examines the process of administering vaccines in the United States and a method designed to aid decision-makers. The study method is based on a quantitative representation of how vaccine candidates are administered. The procedure uses the corresponding (incomplete) data, which could in principle feed other decision-making methods. Because the information provided by vaccine manufacturers is somewhat vague, the process entails predicting future values and filling gaps. The study interviewed 1200 vaccinated patients to gather their opinions.

