utility loss
Recently Published Documents

TOTAL DOCUMENTS: 43 (FIVE YEARS: 22)
H-INDEX: 9 (FIVE YEARS: 2)

2021 ◽  
Author(s):  
Hui Jun Zhou ◽  
Guo Fen Luo ◽  
Nasheen Naidoo ◽  
Jian Shen ◽  
Meng Meng Gao ◽  
...  

Abstract
Background: The health of university staff is a major occupational health concern worldwide. Studies have reported low health-related quality of life (HRQOL), low job satisfaction and poor mental health in this occupational group. However, none of the previous studies measured health utility or compared it to a national norm. This study was therefore conducted to gain a deeper understanding of the HRQOL of university staff in China and to identify risk factors influencing their health.
Methods: This was a cross-sectional survey conducted in a public university in China. Participants were interviewed face-to-face about demographic and socioeconomic information and health conditions. The Chinese version of the EQ-5D-5L instrument was used to measure HRQOL and to calculate health utility. The relationship between health utility and sample characteristics was first examined using t-tests and correlation analysis. Multivariate generalized linear models were then applied to evaluate the significance of these associations while adjusting for other variables.
Results: The sample (n=154) had a mean age of 40.65 years and slightly more females (51.30%). The overall prevalence of diseases or symptoms was 81.17%. Participants attained means (SDs) of 0.945 (0.073) for health utility and 83.00 (11.32) on the visual analogue scale. The most affected domain was anxiety/depression, with 40.26% of participants reporting problems, followed by pain/discomfort (37.66%). Fewer than 5% of participants reported problems in each of the mobility, self-care and daily activity domains. Multivariate models revealed that psychological/emotional conditions were associated with the largest utility loss of -0.067 (95% CI: -0.089, -0.045), followed by having a Master's degree or higher (-0.048, 95% CI: -0.090, -0.005) and pain in body parts other than the head, neck and back (-0.034, 95% CI: -0.055, -0.014).
Conclusions: University staff in China may have worse HRQOL than the general population, manifested mainly in the pain/discomfort and anxiety/depression domains. The significant factors for utility loss were having a Master's degree or higher, psychological conditions and pain in body parts other than the head, neck and back. Targeted health promotion policies and programs should be created to benefit this occupational group and society overall.
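EQ-5D-5L value sets commonly derive a single utility value by subtracting level-specific decrements for each of the five dimensions from full health. A minimal sketch of that scoring step, assuming the additive "full health minus decrements" form; the decrement table below is purely illustrative, not the Chinese value set used in the study:

```python
# Illustrative EQ-5D-5L utility scoring. Level 1 = no problems (no
# decrement); levels 2-5 subtract increasingly large decrements.
# These numbers are INVENTED for illustration, not a published value set.
DECREMENTS = {
    "mobility":           [0.0, 0.04, 0.09, 0.21, 0.30],
    "self_care":          [0.0, 0.04, 0.08, 0.17, 0.25],
    "usual_activities":   [0.0, 0.03, 0.07, 0.16, 0.22],
    "pain_discomfort":    [0.0, 0.05, 0.10, 0.20, 0.28],
    "anxiety_depression": [0.0, 0.05, 0.11, 0.21, 0.30],
}

def eq5d5l_utility(levels: dict) -> float:
    """levels maps each dimension to its reported level (1-5)."""
    return 1.0 - sum(DECREMENTS[dim][lvl - 1] for dim, lvl in levels.items())

# A respondent with slight pain (level 2) and moderate anxiety (level 3):
print(eq5d5l_utility({
    "mobility": 1, "self_care": 1, "usual_activities": 1,
    "pain_discomfort": 2, "anxiety_depression": 3,
}))  # 1 - 0.05 - 0.11 = 0.84
```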


2021 ◽  
Vol 26 ◽  
pp. 1-26
Author(s):  
Giulia Bernardini ◽  
Huiping Chen ◽  
Gabriele Fici ◽  
Grigorios Loukides ◽  
Solon P. Pissis

We introduce the notion of reverse-safe data structures. These are data structures that prevent the reconstruction of the data they encode (i.e., they cannot be easily reversed). A data structure D is called z-reverse-safe when there exist at least z datasets with the same set of answers as the ones stored by D. The main challenge is to ensure that D stores as many answers to useful queries as possible, is constructed efficiently, and has size close to the size of the original dataset it encodes. Given a text of length n and an integer z, we propose an algorithm that constructs a z-reverse-safe data structure (z-RSDS) that has size O(n) and answers decision and counting pattern matching queries of length at most d optimally, where d is maximal for any such z-RSDS. The construction algorithm takes O(n^ω log d) time, where ω is the matrix multiplication exponent. We show that, despite the n^ω factor, our engineered implementation takes only a few minutes to finish for million-letter texts. We also show that plugging our method into data analysis applications gives insignificant or no data utility loss. Furthermore, we show how our technique can be extended to support applications under realistic adversary models. Finally, we show a z-RSDS for decision pattern matching queries whose size can be sublinear in n. A preliminary version of this article appeared in ALENEX 2020.
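To make the z-reverse-safety definition concrete, here is a toy brute-force check, not the paper's O(n^ω log d) construction: count how many texts induce exactly the same decision-query answers as a given text.

```python
# A text's "decision answers" for patterns up to length d are simply the
# patterns that occur in it. z-reverse-safety means at least z texts share
# those answers; here we count them exhaustively for tiny inputs.
from itertools import product

def answers(text: str, alphabet: str, d: int) -> frozenset:
    """All patterns of length <= d that occur in `text`."""
    return frozenset(
        "".join(p)
        for k in range(1, d + 1)
        for p in product(alphabet, repeat=k)
        if "".join(p) in text
    )

def reverse_safety(text: str, alphabet: str, d: int) -> int:
    """How many same-length texts induce identical answers (the z)."""
    target = answers(text, alphabet, d)
    return sum(
        answers("".join(t), alphabet, d) == target
        for t in product(alphabet, repeat=len(text))
    )

print(reverse_safety("abab", "ab", d=2))  # 2: "abab" and "baba" are
                                          # indistinguishable from the answers
```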


2021 ◽  
Author(s):  
Jayapradha J ◽  
Prakash M

Abstract
The privacy of individuals plays a vital role when a dataset is disclosed publicly. Privacy-preserving data publishing is the process of releasing an anonymized dataset for various purposes of analysis and research. The data to be published contain several sensitive attributes such as diseases, salary, symptoms, etc. Earlier researchers dealt with datasets assuming they contain only one record per individual (1:1 datasets), an assumption that does not hold in various applications. Later, many researchers concentrated on datasets in which an individual has multiple records (1:M datasets). In this paper, a model called f-slip is proposed that can address various attacks in the 1:M dataset, namely the Background Knowledge (bk) attack, the Multiple Sensitive Attribute correlation attack (MSAcorr), the Quasi-Identifier correlation attack (QIcorr), the Non-Membership correlation attack (NMcorr) and the Membership correlation attack (Mcorr), and provides solutions for these attacks. In f-slip, anatomization is performed to divide the table into two subtables consisting of (i) the quasi-identifiers and (ii) the sensitive attributes. The correlation of the sensitive attributes is computed to anonymize them without breaking the linking relationship. Further, the quasi-identifier table is divided and k-anonymity is implemented on it. An efficient anonymization technique, frequency-slicing (f-slicing), is also developed to anonymize the sensitive attributes. The f-slip model remains consistent as the number of records increases. Extensive experiments were performed on the real-world dataset Informs and prove that the f-slip model outstrips state-of-the-art techniques in terms of utility loss and efficiency, while achieving an optimal balance between privacy and utility.
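The anatomization step can be illustrated with a small sketch. The grouping below is a naive round-robin into groups of size k; the correlation-aware grouping and the f-slicing technique of the actual model are not reproduced here:

```python
# Anatomization: split a table into a quasi-identifier (QID) subtable and
# a sensitive subtable, linked only by a group id, so the exact
# QID-to-sensitive links inside a group are hidden.

records = [
    # (age, zip) are quasi-identifiers; disease is the sensitive attribute.
    {"age": 34, "zip": "10001", "disease": "flu"},
    {"age": 35, "zip": "10002", "disease": "gastritis"},
    {"age": 51, "zip": "20001", "disease": "diabetes"},
    {"age": 52, "zip": "20002", "disease": "flu"},
]

k = 2  # group size
qid_table, sensitive_table = [], []
for i, rec in enumerate(records):
    gid = i // k
    qid_table.append({"gid": gid, "age": rec["age"], "zip": rec["zip"]})
    sensitive_table.append({"gid": gid, "disease": rec["disease"]})

# Within a group, each QID row links equally to every sensitive value, so
# an attacker matching on (age, zip) pins down a disease only with
# probability 1/k (when the group's sensitive values are distinct).
print(qid_table)
print(sensitive_table)
```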


2021 ◽  
Vol 15 (1) ◽  
pp. 1-36
Author(s):  
Cody Kinneer ◽  
David Garlan ◽  
Claire Le Goues

Many software systems operate in environments of change and uncertainty. Techniques for self-adaptation allow these systems to automatically respond to environmental changes, yet they do not handle changes to the adaptive system itself, such as the addition or removal of adaptation tactics. Instead, changes in a self-adaptive system often require a human planner to redo an expensive planning process to allow the system to continue satisfying its quality requirements under different conditions; automated techniques must replan from scratch. We propose to address this problem by reusing prior planning knowledge to adapt to unexpected situations. We present a planner based on genetic programming that reuses existing plans and evaluate this planner on two case-study systems: a cloud-based web server and a team of autonomous aircraft. While reusing material in genetic algorithms has recently been applied successfully in the area of automated program repair, we find that naively reusing existing plans for self-* planning can actually result in a utility loss. Furthermore, we propose a series of techniques to lower the costs of reuse, allowing genetic techniques to leverage existing information to improve utility when replanning for unexpected changes, and we find that coarsely shaped search spaces present profitable opportunities for reuse.
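The reuse idea can be sketched as seeding the initial population of a genetic planner with prior plans. The tactic names and the mixing scheme below are illustrative assumptions, not the paper's planner:

```python
# Plans are modeled as flat tactic sequences. Reuse = copying prior plans
# into the initial population; mixing in random individuals preserves
# diversity, since naive full reuse can trap the search near stale optima
# (the utility loss the abstract reports for naive reuse).
import random

TACTICS = ["add_server", "remove_server", "reroute", "throttle", "wait"]

def random_plan(length: int = 5) -> list:
    return [random.choice(TACTICS) for _ in range(length)]

def initial_population(size: int, prior_plans: list,
                       reuse_fraction: float = 0.5) -> list:
    """Seed part of the population from prior plans, the rest at random."""
    n_reused = int(size * reuse_fraction)
    reused = [random.choice(prior_plans)[:] for _ in range(n_reused)]
    fresh = [random_plan() for _ in range(size - n_reused)]
    return reused + fresh

prior = [["add_server", "reroute", "wait", "wait", "throttle"]]
population = initial_population(size=20, prior_plans=prior)
```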


2020 ◽  
Vol 36 (Supplement_2) ◽  
pp. i903-i910
Author(s):  
Kerem Ayoz ◽  
Miray Aysen ◽  
Erman Ayday ◽  
A Ercument Cicek

Abstract
Motivation: The big data era in genomics promises a breakthrough in medicine, but sharing data in a private manner limits the pace of the field. The widely accepted 'genomic data sharing beacon' protocol provides a standardized and secure interface for querying genomic datasets. The data are only shared if the desired information (e.g. a certain variant) exists in the dataset. Various studies have shown that beacons are vulnerable to re-identification (or membership inference) attacks. As beacons are generally associated with sensitive phenotype information, re-identification creates a significant risk for the participants. Unfortunately, proposed countermeasures against such attacks have failed to be effective, as they do not consider the utility of the beacon protocol.
Results: In this study, for the first time, we analyze the mitigating effect of kinship relationships among beacon participants against re-identification attacks. We argue that having multiple family members in a beacon can garble the information for attacks, since a substantial number of variants are shared among kin-related people. Using family genomes from HapMap and synthetically generated datasets, we show that having one of the parents of a victim in the beacon causes (i) a significant decrease in the power of attacks and (ii) a substantial increase in the number of queries needed to confirm an individual's beacon membership. We also show how the protection effect attenuates when more distant relatives, such as grandparents, are included alongside the victim. Furthermore, we quantify the utility loss due to adding relatives and show that it is smaller compared with flipping-based techniques.
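A toy beacon and a naive counting attack illustrate why a relative's presence garbles the membership signal. Real attacks use likelihood-ratio tests over allele frequencies, which this sketch does not reproduce:

```python
# Beacon semantics: a query answers "yes" iff any genome in the dataset
# carries the variant. The attack counts "yes" answers over the victim's
# variants: ~1.0 suggests membership, ~0.0 absence. A parent shares roughly
# half the victim's variants, pushing the statistic toward an ambiguous 0.5.
import random

def beacon_query(dataset: list, variant: str) -> bool:
    return any(variant in genome for genome in dataset)

def fraction_present(dataset: list, victim: set, n_queries: int = 100) -> float:
    queried = random.sample(sorted(victim), min(n_queries, len(victim)))
    return sum(beacon_query(dataset, v) for v in queried) / len(queried)

victim = {f"rs{i}" for i in range(200)}
parent = set(random.sample(sorted(victim), 100)) | {f"rsP{i}" for i in range(100)}
others = [{f"rsO{j}_{i}" for i in range(200)} for j in range(5)]

print(fraction_present(others, victim))             # ~0.0: victim clearly absent
print(fraction_present(others + [parent], victim))  # ~0.5: the parent's shared
                                                    # variants blur the signal, so
                                                    # more queries are needed
```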


2020 ◽  
Author(s):  
Jean Roch Donsimoni

Abstract
We develop a model where individuals accumulate fatigue from work intensity when choosing hours worked. Fatigue captures the intertemporal costs of labour supply and leads to a utility loss. As fatigue increases, individuals optimally choose to work fewer hours. The model also predicts that if individuals cannot easily shift consumption over time, they will work fewer hours but accumulate more fatigue when work intensity increases. Calibration to 19 European countries provides evidence for the claim that a higher share of the service sector is linked to increasing work fatigue and that public provision of healthcare improves recovery and mental health.
JEL codes: E71, I12, J22
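One plausible way to formalise this description (the functional forms below are illustrative assumptions, not the paper's model) is to treat fatigue as a stock that accumulates with work intensity and depreciates with recovery:

```latex
% Illustrative sketch: F_t = fatigue stock, h_t = hours, e_t = work
% intensity, \phi(F_t) = the utility loss from fatigue.
\max_{\{c_t,\,h_t\}} \; \sum_{t=0}^{\infty} \beta^t
  \bigl[\, u(c_t) - v(h_t) - \phi(F_t) \,\bigr]
\qquad \text{s.t.} \qquad
F_{t+1} = (1-\delta)\, F_t + g(e_t h_t)
```

Here u is increasing and concave, v and φ are increasing and convex, and δ is the recovery rate; a constraint tying c_t to current earnings would be the channel behind the abstract's second prediction.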


2020 ◽  
Author(s):  
Tailin Huang ◽  
Hwa-Lung Yu ◽  
Efthymios Nikolopoulos ◽  
Andreas Langousis ◽  
Jin Zhu ◽  
...  

In most cases, disasters are assessed at the event level, for example by focusing on quantitative surveys of casualties and physical damages and on qualitative root-cause analyses of individual events. Disaster risks are evaluated as expected utility loss, calculated from the probability of occurrence and the potential consequences. However, disaster causes are increasingly complex, usually become entangled quickly with deep social and organizational problems, and their impacts are prolonged and further complicated within the nexus of societal systems. To reduce disaster risk, we propose to consider disasters as inseparable parts of societal operation and of critical resource and service circulation, deviating from the well-established concept that a disaster is simply the tragic outcome of human casualties and property damages. We will therefore develop a novel DR3 analysis framework to address the dynamic change patterns of risks, i.e., "risk dynamics," as a key concept for analyzing risk in complex socio-technical systems. In this proposition, DR3 analysis should consider all components of the socio-technical systems that are susceptible to disaster-induced functional perturbations, and DR3 assessment is associated with the overall state change of the socio-technical systems and the performance controllability of the organizations. The failures of the physical systems and individual human factors in the organizations are critical for comprehensive risk analysis. To achieve this goal, we have established a multidisciplinary team to address vital DR3 issues using a participatory system dynamics modeling approach. Consortium partners will focus on unique disaster cases and test the underlying hypotheses from multiple perspectives. Stakeholders from government agencies and infrastructure service providers will be engaged through continuous and direct involvement in dialogues and activities, supporting the development of risk-dynamics-based DR3 solutions.
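Since the proposal centres on system dynamics modeling, a minimal stock-and-flow sketch may help fix ideas; all parameters are invented for illustration and carry no empirical content:

```python
# A single stock (service functionality) is perturbed by a disaster and
# restored by a recovery inflow proportional to the gap from normal.

def simulate(steps: int = 50, shock_at: int = 5, shock_size: float = 0.6,
             recovery_rate: float = 0.08) -> list:
    functionality = 1.0  # stock: fraction of the normal service level
    trajectory = []
    for t in range(steps):
        if t == shock_at:
            functionality -= shock_size                 # disaster perturbation
        inflow = recovery_rate * (1.0 - functionality)  # recovery toward normal
        functionality = min(1.0, functionality + inflow)
        trajectory.append(functionality)
    return trajectory

# Cumulative utility loss = area between normal service (1.0) and the
# realised trajectory.
traj = simulate()
print(sum(1.0 - f for f in traj))
```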


10.29007/2p7l ◽  
2020 ◽  
Author(s):  
Lamyaa Al-Omairi ◽  
Jemal Abawajy ◽  
Morshed Chowdhury

In recent years, graphs with massive numbers of nodes and edges have become widely used in various application fields, for example social networks, web mining, and transport traffic. Several researchers have shown that reducing the dimensionality is very important in analyzing extensive graph data, and a variety of dimensionality reduction strategies have been applied, including both linear and nonlinear methods. However, it is still not clear to what extent information is lost or preserved when these techniques are applied to reduce the dimensions of large networks. In this study, we measured the utility of graph dimensionality reduction and showed that when the recently proposed HDR method is used to reduce the dimensionality of a graph, the utility loss is small compared with popular linear techniques such as PCA, LDA, FA, and MDS. We measured utility based on three essential network metrics: Average Clustering Coefficient (ACC), Average Path Length (APL), and Average Betweenness (ABW). The results showed that HDR achieved a lower rate of utility loss than the other dimensionality reduction methods. We performed our experiments on three undirected and unweighted graph datasets.
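A sketch of this utility measurement, using PCA as one of the linear baselines; the kNN-graph reconstruction from the embedding is an assumption of this sketch, and HDR itself is not shown:

```python
# Compute ACC, APL and ABW on a graph, embed its adjacency rows with PCA,
# rebuild a graph from the embedding, and compare the metrics before/after.
import networkx as nx
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import kneighbors_graph

def metrics(G: nx.Graph) -> dict:
    return {
        "ACC": nx.average_clustering(G),
        "APL": (nx.average_shortest_path_length(G)
                if nx.is_connected(G) else float("nan")),
        "ABW": float(np.mean(list(nx.betweenness_centrality(G).values()))),
    }

G = nx.karate_club_graph()
before = metrics(G)

# Reduce each node's adjacency row to 2 dimensions, then reconnect every
# node to its nearest neighbours in the reduced space.
X = PCA(n_components=2).fit_transform(nx.to_numpy_array(G))
H = nx.from_scipy_sparse_array(kneighbors_graph(X, n_neighbors=4))
after = metrics(H)

for key in before:
    print(f"{key}: {before[key]:.3f} -> {after[key]:.3f}")
```

The smaller the drift between the two metric sets, the lower the utility loss attributed to the reduction method.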

