Utility Metrics: Recently Published Documents

Total documents: 23 (10 in the last five years)
H-index: 4 (1 in the last five years)

2021, Vol 2093 (1), pp. 012023
Author(s): Yongguang Liu

Abstract: In delay-tolerant networks (DTNs), a packet's utility function is often used to decide how the packet should be replicated. This approach carries considerable uncertainty, however, because it relies on a single performance metric. To reduce the impact of that single-metric uncertainty, the new algorithm introduces multiple utility metrics and a packet replication probability calculation based on entropy weighting: by computing the entropy weight of each metric, the algorithm derives a replication probability for each packet and uses that probability as the packet's replication priority. Because it considers two metrics, expected packet transmission delay and node encounter possibility, the algorithm mitigates the encounter-time-distribution and direction-prediction problems of the original algorithm and reduces the uncertainty of the utility function. Simulation results show that the algorithm lowers the number of packet replications and the average delay while improving the packet delivery rate, further improving overall network performance.
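The abstract names the entropy-weight method but not its formulas. Below is a minimal Python sketch of the standard entropy-weight calculation applied to the two metrics the abstract names (expected transmission delay and node encounter possibility); the function names, the min-max orientation step, and the final normalization are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def entropy_weights(matrix):
    """Standard entropy-weight method: rows are packets, columns are metrics.

    Metrics whose values vary more across packets (lower entropy)
    receive larger weights.
    """
    n = matrix.shape[0]
    # Normalize each column so its entries sum to 1; eps avoids log(0).
    p = matrix / matrix.sum(axis=0, keepdims=True)
    eps = 1e-12
    entropy = -(p * np.log(p + eps)).sum(axis=0) / np.log(n)
    diversity = 1.0 - entropy
    return diversity / diversity.sum()

def replication_probabilities(delay, encounter):
    """Combine the two utility metrics into a per-packet replication priority.

    `delay` is expected transmission delay (lower is better) and
    `encounter` is node-encounter possibility (higher is better),
    both 1-D arrays with one entry per buffered packet.
    """
    def minmax(x, higher_is_better=True):
        # Min-max scale so that larger always means "replicate sooner".
        span = x.max() - x.min()
        scaled = (x - x.min()) / span if span > 0 else np.full_like(x, 0.5)
        return scaled if higher_is_better else 1.0 - scaled

    m = np.column_stack([minmax(delay, higher_is_better=False),
                         minmax(encounter, higher_is_better=True)])
    w = entropy_weights(m)
    score = m @ w
    return score / score.sum()  # normalize to a probability over packets
```

A node would then replicate packets in descending order of this probability when a contact opportunity arises.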


2021
Author(s): Fida Dankar, Mahmoud K. Ibrahim, Leila Ismail

BACKGROUND: Synthetic datasets are gradually emerging as a solution for fast and inclusive health data sharing. Multiple synthetic data generators have been introduced in the last decade, fueled by advances in machine learning, yet their utility is not well understood. A few recent papers have compared the utility of synthetic data generators, but each focused on different evaluation metrics and drew conclusions targeted at specific analyses.

OBJECTIVE: This work aims to understand the overall utility (referred to as quality) of four recent synthetic data generators by identifying multiple criteria for high-utility synthetic data.

METHODS: We investigate commonly used utility metrics for masked-data evaluation and classify them into criteria/categories according to the function they attempt to preserve: attribute fidelity, bivariate fidelity, population fidelity, and application fidelity. We then choose a representative metric from each category based on popularity and consistency. This set of metrics, referred to as the quality criteria, is used to evaluate the overall utility of four recent synthetic data generators across 19 datasets of different sizes and feature counts. Correlations between the identified metrics are also investigated in an attempt to streamline synthetic data utility evaluation.

RESULTS: Our results indicate that a non-parametric machine learning synthetic data generator (Synthpop) provides the best utility values across all quality criteria, along with the highest stability. It displays the best overall accuracy in supervised machine learning and often agrees with the real dataset on which learning model attains the highest accuracy. On another front, our results suggest no strong correlation between the different metrics, which implies that all categories/dimensions are required when evaluating the overall utility of synthetic data.

CONCLUSIONS: The paper used four quality criteria to identify the synthesizer with the best overall utility. The results are promising, with only small decreases in accuracy observed when models trained on the winning synthesizer's output were tested against real datasets (in comparison with models trained on real data). Further research into a single (overall) quality measure would greatly help data holders in optimizing the utility of released datasets.
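The abstract lists four metric categories but not the representative metric chosen for each. As a hedged illustration of the first two categories, the Python sketch below computes an attribute-fidelity score (mean Hellinger distance between per-column marginal distributions) and a bivariate-fidelity score (mean absolute difference between pairwise correlation matrices). Both metric choices, and the function names, are common examples rather than the paper's actual selections; numeric columns are assumed.

```python
import numpy as np
import pandas as pd

def hellinger(p, q):
    """Hellinger distance between two discrete distributions (0 = identical)."""
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

def attribute_fidelity(real: pd.DataFrame, synth: pd.DataFrame, bins=20):
    """Mean Hellinger distance across per-column marginals (lower is better)."""
    dists = []
    for col in real.columns:
        # Histogram both datasets on the same bin edges so the
        # distributions are directly comparable.
        edges = np.histogram_bin_edges(real[col], bins=bins)
        p, _ = np.histogram(real[col], bins=edges)
        q, _ = np.histogram(synth[col], bins=edges)
        dists.append(hellinger(p / (p.sum() or 1), q / (q.sum() or 1)))
    return float(np.mean(dists))

def bivariate_fidelity(real: pd.DataFrame, synth: pd.DataFrame):
    """Mean absolute difference between pairwise correlation matrices."""
    diff = (real.corr() - synth.corr()).abs()
    # Average over the strictly upper triangle (unique variable pairs).
    iu = np.triu_indices_from(diff.values, k=1)
    return float(diff.values[iu].mean())
```

Population fidelity (e.g. distinguishability of real vs. synthetic records) and application fidelity (e.g. train-on-synthetic, test-on-real accuracy) would be scored analogously, each yielding one number per generator-dataset pair.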


Author(s): Matthew Browne, Vijay Rawat, Catherine Tulloch, Cailem Murray-Boyle, Matthew Rockloff

Jurisdictions around the world have a self-declared mandate to reduce gambling-related harm. Historically, however, this concept has suffered from poor conceptualisation and operationalisation. Recent years have seen swift advances in measuring gambling harm, based on the principle that it is a quantifiable decrement to the health and wellbeing of the gambler and those connected to them. This review takes stock of the background and recent developments in harm assessment and summarises recent research that has validated and applied the Short Gambling Harms Screen and related instruments. We recommend that future work build upon the considerable psychometric evidence accumulated for the feasibility of directly eliciting harmful consequences. We also advocate grounding harm measures in scalar changes to public-health utility metrics. Such an approach will avoid misleading pseudo-clinical categorisations, provide accurate population-level summaries of where the burden of harm is carried, and serve to integrate gambling research with the broader field of public health.


Author(s): Manuel Mora, Jorge Marx Gómez, Fen Wang, Edgar Oswaldo Díaz

The current and most widely used IT Service Management (ITSM) frameworks (ITIL v2011, CMMI-SVC, and ISO/IEC 20000) correspond to a rigor-oriented paradigm. However, the high dynamism of business requirements for IT services has fostered the emergence of ITSM frameworks that claim to be agile. In contrast with the software engineering field, where the rigorous and agile development paradigms co-exist because both are well known and well accepted, agile ITSM frameworks remain practically unknown in the ITSM field. This chapter therefore reviews the main emergent agile ITSM frameworks (Lean IT, FitSM, IT4IT, and VeriSM), focusing on the IT service design process category. This category is relevant because an IT service is designed after its strategic business authorization, and the IT service design determines the future warranty and utility metrics for the service. The main findings suggest the need for clear, low-effort agile ITSM frameworks with agile design practices to guide ITSM practitioners in coping with the new digital business environment.


2020, Vol 8 (Suppl 3), pp. A87-A87
Author(s): Bo Wei, John Kang, Miho Kibukawa, Gladys Arreaza, Maureen Maguire, ...

Background: Various biomarkers have been investigated for their ability to identify patients more likely to respond to immunotherapy. Recently, the PD-1 inhibitor pembrolizumab was approved by the FDA for treating patients with unresectable or metastatic solid tumors with high TMB (TMB-H) who have no satisfactory alternative treatment options following progression on prior treatment. The FDA contemporaneously approved the FoundationOne®CDx (F1CDx; Foundation Medicine) as the companion diagnostic for TMB assessment for pembrolizumab. However, multiple comprehensive genomic profiling panels that can measure TMB are currently available or in development. We evaluated the performance of TruSight™ Oncology 500 (TSO500; Illumina) for assessing TMB and its clinical utility, using F1CDx and whole exome sequencing (WES) as reference methods.

Methods: Pretreatment archival tumor samples from patients enrolled in 8 clinical trials of pembrolizumab monotherapy were evaluated for TMB by TSO500, F1CDx QSR pipeline v3.2.0, and WES. Correlation was assessed using Spearman's rank correlation coefficient (ρ). The F1CDx and WES TMB cutpoints were 10 mut/Mb and 175 mut/exome, respectively. The TSO500 cutpoint was selected using the Youden index criterion. Concordance was assessed by calculating the area under the receiver operating characteristic curve (AUROC), positive percentage agreement (PPA), and negative percentage agreement (NPA). Statistical significance of the association of TMB measured by TSO500 with ORR was assessed using logistic regression adjusted for ECOG performance status and cancer type. Clinical utility of the selected TSO500 TMB cutpoint for discriminating responders and nonresponders was assessed by calculating sensitivity, specificity, positive predictive value, negative predictive value, ORR enrichment, and prevalence.

Results: TMB scores were valid for 294/294 patients assessed by TSO500, 269/270 assessed by F1CDx, and 293/294 assessed by WES. TMB assessed by TSO500 correlated well with TMB assessed by F1CDx (ρ=0.76) and WES (ρ=0.74). Using the Youden index criterion, 10 mut/Mb was the TSO500 cutpoint that corresponded with both the F1CDx and WES cutpoints. TSO500 reliably predicted TMB-H and non–TMB-H status as determined by the F1CDx (AUROC=0.99, PPA=97.4%, NPA=93.0%) and WES (AUROC=0.95, PPA=76.2%, NPA=96.1%) cutpoints. TMB measured by TSO500 was significantly associated with ORR (one-sided P<0.0001). Clinical utility metrics were generally similar for TSO500 vs F1CDx (Table 1: Clinical Utility Metrics for the TSO500 TMB Cutpoint Compared with the F1CDx TMB Cutpoint, n=269) and for TSO500 vs WES (Table 2: Clinical Utility Metrics for the TSO500 TMB Cutpoint Compared with the WES TMB Cutpoint, n=293).

Conclusions: TMB assessed by TSO500 is highly correlated and concordant with TMB assessed by F1CDx and WES. Like the validated and approved F1CDx TMB cutpoint of 10 mut/Mb, the TSO500 TMB cutpoint of 10 mut/Mb is predictive of response to pembrolizumab monotherapy.

Acknowledgements: This analysis and all included studies were sponsored by Merck Sharp & Dohme Corp., a subsidiary of Merck & Co., Inc., Kenilworth, NJ, USA.

Ethics Approval: The protocols and all amendments for the studies included in this analysis were approved by the appropriate ethics committee at each participating institution.
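For readers unfamiliar with the concordance statistics used here, the following Python sketch shows how a Youden-index cutpoint and PPA/NPA could be computed against a reference method's TMB-H call. The TMB values are made up for illustration; only the 10 mut/Mb F1CDx cutpoint is taken from the abstract.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def youden_cutpoint(reference_high, test_scores):
    """Pick the test cutpoint maximizing Youden's J = sensitivity + specificity - 1,
    treating the reference method's TMB-H call as ground truth."""
    fpr, tpr, thresholds = roc_curve(reference_high, test_scores)
    j = tpr - fpr
    return thresholds[np.argmax(j)]

def agreement(reference_high, test_high):
    """Positive/negative percentage agreement of a test call vs a reference call."""
    ref = np.asarray(reference_high, dtype=bool)
    test = np.asarray(test_high, dtype=bool)
    ppa = (test & ref).sum() / ref.sum()        # agreement among reference-positives
    npa = (~test & ~ref).sum() / (~ref).sum()   # agreement among reference-negatives
    return ppa, npa

# Illustrative usage with made-up TMB scores (mut/Mb):
f1cdx = np.array([3.2, 12.5, 8.8, 22.0, 9.9, 15.1])
tso500 = np.array([2.9, 13.1, 9.5, 20.4, 10.2, 14.0])
ref_high = f1cdx >= 10                          # F1CDx cutpoint from the abstract
cut = youden_cutpoint(ref_high, tso500)
ppa, npa = agreement(ref_high, tso500 >= cut)
auroc = roc_auc_score(ref_high, tso500)
```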


2020, Vol 10 (1)
Author(s): Tanbir Ahmed, Md Momin Al Aziz, Noman Mohammed

Abstract: According to a recent study, around 99% of hospitals across the US now use electronic health record (EHR) systems. One of the most common types of EHR data is unstructured text, and unlocking the details hidden in this data is critical for improving current medical practices and research endeavors. However, these textual data contain sensitive information that could compromise patient privacy, so medical text cannot be released publicly without privacy-protective measures. De-identification is the process of detecting and removing all sensitive information present in EHRs, and it is a necessary step toward privacy-preserving EHR data sharing. Over the last decade, there have been several proposals to de-identify textual data using manual, rule-based, and machine learning methods. In this article, we propose new methods to de-identify textual data based on a self-attention mechanism and a stacked recurrent neural network; to the best of our knowledge, we are the first to employ this combination for de-identification. Experimental results on three different datasets show that our model outperforms all state-of-the-art mechanisms irrespective of the dataset, and our proposed method is significantly faster than existing techniques. Finally, we introduce three utility metrics to judge the quality of the de-identified data.
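The abstract names its two ingredients, self-attention and a stacked recurrent network, without specifying the architecture. The PyTorch sketch below is one plausible assembly, treating de-identification as token-level sequence labeling; every hyperparameter, the BIO labeling assumption, and the class name DeidTagger are illustrative, not the paper's published design.

```python
import torch
import torch.nn as nn

class DeidTagger(nn.Module):
    """Token-level de-identification tagger: stacked bidirectional LSTM
    followed by self-attention, emitting one PHI/non-PHI label per token.
    Layer choices and sizes are illustrative assumptions."""

    def __init__(self, vocab_size, num_labels, emb_dim=128, hidden=256,
                 rnn_layers=2, attn_heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        # Stacked (multi-layer) bidirectional LSTM over the token sequence.
        self.rnn = nn.LSTM(emb_dim, hidden, num_layers=rnn_layers,
                           bidirectional=True, batch_first=True)
        # Self-attention lets each token attend to the whole note,
        # e.g. tying a bare surname back to an earlier "Dr." mention.
        self.attn = nn.MultiheadAttention(2 * hidden, attn_heads,
                                          batch_first=True)
        self.classify = nn.Linear(2 * hidden, num_labels)

    def forward(self, token_ids, pad_mask=None):
        x = self.embed(token_ids)                  # (batch, seq, emb)
        h, _ = self.rnn(x)                         # (batch, seq, 2*hidden)
        a, _ = self.attn(h, h, h, key_padding_mask=pad_mask)
        return self.classify(a)                    # (batch, seq, num_labels)

# Labels per token, e.g. a BIO scheme over PHI types (NAME, DATE, ID, ...):
model = DeidTagger(vocab_size=30000, num_labels=9)
logits = model(torch.randint(1, 30000, (2, 64)))   # dummy batch of 2 notes
```

Tokens tagged as PHI would then be removed or replaced with surrogates before the notes are shared.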


Author(s): Seyedeh Hamideh Erfani, Reza Mortazavi

The growing popularity of social networks and the increasing need to publish related data mean that privacy protection has become an important and challenging problem in social networks. This paper describes the (k,l)-anonymity model used for social network graph anonymization. The method is based on edge addition and is utility-aware, i.e., it is designed to generate a graph similar to the original one. Different strategies are evaluated to this end, and the results are compared using common utility metrics. The outputs confirm that the naïve idea of adding some random, or even the minimum number of, possible edges does not always produce useful anonymized social network graphs, thus opening up some interesting alternatives for graph anonymization techniques.
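The abstract refers to "common utility metrics" without listing them. As a hedged illustration, the Python sketch below compares an original graph with an edge-added version on a few widely used graph-utility measures using networkx; the specific metrics and the toy edge additions are assumptions, not the paper's evaluation protocol.

```python
import networkx as nx

def graph_utility(original: nx.Graph, anonymized: nx.Graph):
    """Compare an anonymized graph with the original on common utility
    metrics; smaller deltas mean the edge additions preserved structure
    better."""
    metrics = {
        "edges_added": anonymized.number_of_edges() - original.number_of_edges(),
        "clustering_delta": abs(nx.average_clustering(anonymized)
                                - nx.average_clustering(original)),
    }
    # Average shortest path length is only defined on connected graphs.
    if nx.is_connected(original) and nx.is_connected(anonymized):
        metrics["apl_delta"] = abs(
            nx.average_shortest_path_length(anonymized)
            - nx.average_shortest_path_length(original))
    return metrics

# Toy example: "anonymize" by adding edges and measure the utility cost.
g = nx.karate_club_graph()
anon = g.copy()
anon.add_edges_from([(0, 9), (4, 33)])  # stand-in for (k,l)-anonymity edge additions
print(graph_utility(g, anon))
```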

