Incorporating Preferences and Priorities into MCDA: Selecting an Appropriate Scoring and Weighting Technique

Author(s): Kevin Marsh, Praveen Thokala, Axel Mühlbacher, Tereza Lanitis

2017, Vol 33 (1), pp. 111-120
Author(s): Antoni Gilabert-Perramon, Josep Torrent-Farnell, Arancha Catalan, Alba Prat, Manel Fontanet, ...

Objectives: The aim of this study was to adapt and assess the value of a multi-criteria decision analysis (MCDA) framework (EVIDEM) for the evaluation of orphan drugs in Catalonia (Catalan Health Service, CatSalut). Methods: The standard evaluation and decision-making procedures of CatSalut were compared with the EVIDEM methodology and contents. The EVIDEM framework was adapted to the Catalan context, focusing on the evaluation of orphan drugs (PASFTAC program), during a workshop with sixteen PASFTAC members. Criteria weighting was performed using two different techniques (nonhierarchical and hierarchical), and reliability was assessed by retest. Results: The EVIDEM framework and methodology were found useful and feasible for orphan drug evaluation and decision making in Catalonia. All the criteria considered in the development of the CatSalut technical reports and in decision making were covered by the framework, although the framework could improve the reporting of some of them (e.g., "unmet needs" or "nonmedical costs"). Some contextual criteria were removed (e.g., "mandate and scope of healthcare system", "environmental impact") or adapted ("population priorities and access") for CatSalut purposes. Independently of the weighting technique used, the most important evaluation criteria identified for orphan drugs were "disease severity", "unmet needs", and "comparative effectiveness", while "size of the population" had the lowest relevance for decision making. Test-retest analysis showed weight consistency across techniques, supporting reliability over time. Conclusions: MCDA (the EVIDEM framework) could be a useful tool to complement the current evaluation methods of CatSalut, contributing to standardization and pragmatism, providing a method to tackle ethical dilemmas, and facilitating discussions related to decision making.
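
Independently of how the weights are elicited, frameworks such as EVIDEM ultimately combine them with criterion scores through a linear additive value model. The following is a minimal Python sketch of that step, with invented criteria weights and drug scores purely for illustration; the abstract does not report the values actually elicited in the PASFTAC workshop.

```python
import numpy as np

# Illustrative criteria and weights; the actual EVIDEM criteria set and
# the weights elicited in the PASFTAC workshop are NOT reproduced here.
criteria = ["disease severity", "unmet needs",
            "comparative effectiveness", "size of population"]
weights = np.array([0.35, 0.30, 0.25, 0.10])  # assumed values
weights = weights / weights.sum()             # normalize to sum to 1

# Criterion scores for two hypothetical drugs on a common 0-10 scale.
scores = {
    "drug_A": np.array([8, 9, 6, 3]),
    "drug_B": np.array([5, 4, 8, 7]),
}

# Linear additive MCDA value: weighted sum of normalized criterion scores.
for drug, s in scores.items():
    value = float(weights @ (s / 10.0))
    print(f"{drug}: overall value = {value:.3f}")
```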


2018
Author(s): Li Chen

Longitudinal data contain repeated measurements of variables on the same experimental subject, and it is often of interest to analyze the relationships among these variables. Typically, there are one or several longitudinal covariates and a response variable that may be either longitudinal or a time to event. Regression models can be employed to analyze these relationships. Ideally, longitudinal variables would be continuously monitored so that their complete trajectories over time are observed. In practice, however, this is unrealistic, either economically or methodologically. Often one obtains only so-called sparse longitudinal data, where variables are observed intermittently at relatively sparse time points within the study period. Such sparse longitudinal data pose a problem for the analysis of time-to-event responses, where survival analysis, e.g., the Cox model or the additive hazards model, is typically applied: both models require the covariate values of all subjects at risk in order to compute the partial likelihood, and with sparse longitudinal data these observations may not be available. Moreover, if the response variable is also longitudinal, the response and covariates may not be observed together, or at least not close enough in time to be treated as observed simultaneously. Although a wealth of studies has been devoted to longitudinal data analysis, very few have seriously considered and rigorously studied this situation. This dissertation discusses the regression analysis of longitudinal covariates with censored and longitudinal outcomes. Specifically, Chapter 2 targets additive hazards models with sparse longitudinal covariates; Chapter 3 studies partially linear models with longitudinal covariates and responses observed at mismatched time points, also known as asynchronous longitudinal data; and Chapter 4 explores longitudinal data with more complex structures under linear models. The kernel weighting technique is the key idea underlying all of this work: estimators are derived based on kernel weighting, and their asymptotic properties are rigorously examined, along with simulation studies of their finite-sample performance and illustrations using real data sets.
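
The dissertation's estimators embed kernel weights inside estimating equations for the models above, which the abstract does not spell out. The sketch below therefore shows only the core kernel weighting idea: a Nadaraya-Watson-style, kernel-weighted estimate of a sparsely observed covariate at a time point where it was not measured. The data, kernel choice, and bandwidth are all illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(u):
    """Standard Gaussian kernel."""
    return np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)

def kernel_weighted_estimate(t, obs_times, obs_values, bandwidth):
    """Kernel-weighted estimate of a sparsely observed covariate at time t:
    observations closer to t receive larger weights."""
    w = gaussian_kernel((obs_times - t) / bandwidth)
    if w.sum() == 0:
        return np.nan  # no observations effectively close to t
    return np.sum(w * obs_values) / np.sum(w)

# Sparse, asynchronous covariate observations for one subject (illustrative).
obs_times = np.array([0.3, 1.1, 2.8, 4.0])
obs_values = np.array([1.2, 1.5, 2.1, 2.4])

# Response observed at t = 2.0, where no covariate measurement exists.
print(kernel_weighted_estimate(2.0, obs_times, obs_values, bandwidth=0.8))
```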


2015, Vol 42 (6Part6), pp. 3251-3251
Author(s): K Ganezer, M Krmar, I Josipovic

Helix
2020, Vol 10 (2), pp. 147-154
Author(s): Linu Lonappan, Joseph X. Rodrigues, Devika Menon, Lucy J. Gudino

Filomat
2018, Vol 32 (5), pp. 1853-1860
Author(s): Qi Yue, Bingwen Yu, Yongshan Peng, Lei Zhang, Yu Hong

This paper combines the theory of hesitant fuzzy linguistic term sets (HFLTSs) with two-sided matching decision making (TSMDM). The relevant definitions of HFLTSs and two-sided matchings (TSMs) are introduced, and the problem of TSMDM with HFLTSs is presented. To solve this problem, a model of TSMDM with HFLTSs is developed. The AHP method is used to determine the importance degrees of the agents on each side. On this basis, the TSMDM model is converted into a double-goal model with HFLTSs, which is then converted into a double-goal model with scores by applying the proposed score function. The double-goal model is in turn reduced to a single-goal model by applying the linear weighting technique. The TSM scheme is obtained by solving the single-goal model. Finally, an example with sensitivity analysis is provided to illustrate the presented TSM approach.
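
The abstract does not give the models' exact form, but the final linear weighting step, which collapses the double-goal model into a single-goal one, can be sketched on a toy matching instance. The satisfaction matrices (standing in for HFLTS-derived scores), the trade-off weight, and the brute-force solver below are all assumptions for illustration; a real instance would use the paper's score function and a proper optimizer.

```python
import itertools
import numpy as np

# Illustrative satisfaction scores: S_A[i, j] is the score of side-A agent i
# for being matched with side-B agent j, and S_B[i, j] the reverse.
S_A = np.array([[0.7, 0.4, 0.9],
                [0.6, 0.8, 0.3],
                [0.5, 0.9, 0.6]])
S_B = np.array([[0.8, 0.5, 0.6],
                [0.4, 0.9, 0.7],
                [0.7, 0.3, 0.8]])

lam = 0.5  # assumed trade-off weight between the two objectives

best_value, best_match = -np.inf, None
# Brute-force the small example: each permutation is a one-to-one matching.
for perm in itertools.permutations(range(3)):
    total_A = sum(S_A[i, perm[i]] for i in range(3))
    total_B = sum(S_B[i, perm[i]] for i in range(3))
    # Linear weighting collapses the double goal into a single objective.
    value = lam * total_A + (1 - lam) * total_B
    if value > best_value:
        best_value, best_match = value, perm

print("matching:", best_match, "weighted value:", round(best_value, 3))
```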


2016, Vol 10 (9), pp. 245
Author(s): Arash Ghorban Niya Delavar, Zahra Jafari

SVM is a learning algorithm used to analyze data and recognize patterns. An important issue, however, is that duplicate data and its real-time processing have not been handled correctly. For this reason, this paper presents DCSVM+, a method that reduces the data to be classified by applying a weighting technique in SVM+. With respect to the parameters of SVM+, the proposed method achieves an optimal response time. By considering the volume of the data and its density, we were able to partition the data into intervals, a classification that, in the case study investigated, reduces the running time of the SVM+ algorithm. By formulating an objective function for the proposed method, we were able to reduce the duplicate data passed to SVM+ by integrating the parameters with the data classification. Finally, we added a threshold detector (TD) to DCSVM+ so that, with respect to the competency function, processing time is reduced and data processing speed is increased. The resulting algorithm, with its weighting technique applied to the SVM+ function, is thus optimized in terms of efficiency.
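
The abstract does not specify DCSVM+'s objective function or threshold detector, and SVM+ itself (learning using privileged information) differs from a standard SVM. The sketch below therefore illustrates only the general idea of reducing duplicate data through weighting: exact duplicates are collapsed and their multiplicities are passed as sample weights to an ordinary SVM via scikit-learn. The data set is synthetic.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic two-class data, then append exact duplicates of the first 20 rows.
X = rng.normal(size=(60, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X = np.vstack([X, X[:20]])
y = np.concatenate([y, y[:20]])

# Collapse exact duplicates and carry their multiplicity as a sample weight,
# so the classifier trains on each distinct point only once.
X_unique, first_idx, counts = np.unique(
    X, axis=0, return_index=True, return_counts=True)
y_unique = y[first_idx]

clf = SVC(kernel="rbf")
clf.fit(X_unique, y_unique, sample_weight=counts.astype(float))
print("accuracy on full (duplicated) data:", clf.score(X, y))
```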

