Assessing the Limits of Privacy and Data Usage for Web Browsing Analytics

Author(s):  
Daniel Perdices ◽  
Jorge E. Lopez de Vergara ◽  
Ivan Gonzalez
1997 ◽  
Author(s):  
Paul F. Syverson ◽  
Michael G. Reed ◽  
David M. Goldschlag

Author(s):  
Seyed Kourosh Mahjour ◽  
Antonio Alberto Souza Santos ◽  
Manuel Gomes Correia ◽  
Denis José Schiozer

The simulation process under uncertainty needs numerous reservoir models, which can be very time-consuming. Hence, selecting representative models (RMs) that cover the uncertainty space of the full ensemble is required. In this work, we compare two scenario reduction techniques: (1) Distance-based Clustering with Simple Matching Coefficient (DCSMC), applied before the simulation process using reservoir static data, and (2) a metaheuristic algorithm (the RMFinder technique), applied after the simulation process using reservoir dynamic data. We use these two methods as samples to investigate the effect of static and dynamic data usage on the accuracy and speed of the scenario reduction process, focusing on field development purposes. A synthetic benchmark case named UNISIM-II-D, which considers flow-unit modelling, is used. The results showed that both scenario reduction methods are reliable in selecting the RMs for a specific production strategy. However, the RMs obtained from a given strategy using the DCSMC method can be applied to other strategies while preserving the representativeness of the models, whereas the strategy type plays a substantial role in selecting the RMs with the metaheuristic method, so that each strategy has its own set of RMs. In the field development workflow that uses the metaheuristic algorithm, the number of required flow simulation models and the computational time are greater than in the workflow that applies the DCSMC method. Hence, it can be concluded that using static reservoir data in the scenario reduction process can be more reliable during the field development phase.
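As a rough illustration of the DCSMC step described above, the sketch below clusters an ensemble of models on categorical static attributes using the simple matching coefficient (computed here via Hamming distance on category codes) and picks each cluster's medoid as a representative model. The attribute matrix, category encoding, and cluster count are invented placeholders, not the UNISIM-II-D setup or the authors' implementation.

```python
# Sketch: representative-model selection via simple-matching-based clustering.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Hypothetical ensemble: 200 models, 50 categorical static attributes
# (e.g. facies / flow-unit indicators), encoded as small integers.
static_attrs = rng.integers(0, 3, size=(200, 50))

# Hamming distance on category codes = 1 - simple matching coefficient.
dist_condensed = pdist(static_attrs, metric="hamming")
dist_matrix = squareform(dist_condensed)

# Average-linkage hierarchical clustering cut into k clusters (k is arbitrary here).
labels = fcluster(linkage(dist_condensed, method="average"),
                  t=9, criterion="maxclust")

# Representative model per cluster = medoid (smallest total distance
# to the other members of its cluster).
representatives = []
for c in np.unique(labels):
    members = np.where(labels == c)[0]
    medoid = members[dist_matrix[np.ix_(members, members)].sum(axis=1).argmin()]
    representatives.append(medoid)

print("Selected representative models:", sorted(representatives))
```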


Author(s):  
Deyu Tian ◽  
Yun Ma ◽  
Aruna Balasubramanian ◽  
Yunxin Liu ◽  
Gang Huang ◽  
...  

2021 ◽  
Vol 48 (4) ◽  
pp. 37-40
Author(s):  
Nikolas Wehner ◽  
Michael Seufert ◽  
Joshua Schuler ◽  
Sarah Wassermann ◽  
Pedro Casas ◽  
...  

This paper addresses the problem of Quality of Experience (QoE) monitoring for web browsing. In particular, the inference of common Web QoE metrics such as the Speed Index (SI) is investigated. Based on a large dataset collected with open web-measurement platforms on different device types, a unique feature set is designed and used to estimate the RUMSI, an efficient approximation of the SI, with machine-learning-based regression and classification approaches. Results indicate that it is possible to estimate the RUMSI accurately and that, in particular, recurrent neural networks are highly suitable for the task, as they capture the network dynamics more precisely.
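As a rough illustration of the recurrent-network approach mentioned above, the sketch below trains a small LSTM regressor that maps a per-page-load sequence of network features to a single RUMSI estimate. The feature layout, dimensions, loss choice, and training data are hypothetical and do not reproduce the paper's feature set or model.

```python
# Sketch: LSTM regression from per-page-load feature sequences to a RUMSI value.
import torch
import torch.nn as nn

class RumsiLSTM(nn.Module):
    def __init__(self, n_features: int = 8, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # single regression output (RUMSI in ms)

    def forward(self, x):                  # x: (batch, time_slots, n_features)
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1]).squeeze(-1)

# Toy training loop on random data, just to show the wiring.
model = RumsiLSTM()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()                      # MAE, a common choice for QoE regression

features = torch.randn(32, 50, 8)          # 32 page loads, 50 time slots, 8 features
targets = torch.rand(32) * 5000            # hypothetical RUMSI targets in ms

for _ in range(5):
    optim.zero_grad()
    loss = loss_fn(model(features), targets)
    loss.backward()
    optim.step()
print("training MAE (ms):", loss.item())
```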


2021 ◽  
pp. 1-8
Author(s):  
Janessa Mladucky ◽  
Bonnie Baty ◽  
Jeffrey Botkin ◽  
Rebecca Anderson

Introduction: Customer data from direct-to-consumer genetic testing (DTC GT) are often used for secondary purposes beyond providing the customer with test results. Objective: The goals of this study were to determine customer knowledge of secondary uses of data, to understand their perception of the risks associated with these uses, and to determine the extent of customer concerns about privacy. Methods: Twenty DTC GT customers were interviewed about their experiences. The semi-structured interviews were transcribed, coded, and analyzed for common themes. Results: Most participants were aware of some secondary uses of data. All participants felt that data usage for research was acceptable, but the acceptability of non-research purposes varied across participants. The majority of participants were aware of the existence of a privacy policy, but few read most of the privacy statement. When previously unconsidered uses of data were discussed, some participants expressed concern over the privacy protections for their data. Conclusion: When exposed to new information on secondary uses of data, customers express concerns and a desire for improved consent with greater transparency, more opt-out options, improved readability, and more information from direct-to-consumer companies on future uses and potential risks. Effective ways to improve customer awareness of the secondary use, risks of use, and protection of their data should be investigated, and the findings implemented by DTC companies to protect public trust in these practices.


2021 ◽  
Vol 10 (1) ◽  
pp. 30
Author(s):  
Alfonso Quarati ◽  
Monica De Martino ◽  
Sergio Rosim

Open Government Data (OGD) portals, thanks to the presence of thousands of geo-referenced datasets containing spatial information, are of great interest for any analysis or process relating to the territory. For this potential to be realized, users must be able to access these datasets and reuse them. An element often considered to hinder the full dissemination of OGD data is the quality of their metadata. Starting from an experimental investigation conducted on over 160,000 geospatial datasets belonging to six national and international OGD portals, the first objective of this work is to provide an overview of the usage of these portals, measured in terms of dataset views and downloads. Furthermore, to assess the possible influence of metadata quality on the use of geospatial datasets, an assessment of the metadata for each dataset was carried out, and the correlation between these two variables was measured. The results showed a significant underutilization of geospatial datasets and a generally poor quality of their metadata. In addition, only a weak correlation was found between usage and metadata quality, too weak to assert with certainty that the latter is a determining factor of the former.
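As a rough illustration of the correlation analysis described above, the sketch below scores metadata completeness per dataset and computes Spearman's rank correlation against views and downloads. The catalogue columns, toy records, and completeness rule are hypothetical, not the portals' actual metadata schema or the authors' quality model.

```python
# Sketch: correlating a toy metadata-completeness score with dataset usage.
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical catalogue export: one row per geospatial dataset.
catalogue = pd.DataFrame({
    "title":       ["a", "b", "c", "d"],
    "description": ["x", None, "y", None],
    "license":     ["CC-BY", None, "CC-BY", None],
    "keywords":    ["river", "", "basin", ""],
    "views":       [120, 4, 87, 2],
    "downloads":   [30, 0, 15, 0],
})

# Toy completeness score: fraction of metadata fields that are non-empty.
meta_fields = ["title", "description", "license", "keywords"]
filled = catalogue[meta_fields].notna() & catalogue[meta_fields].ne("")
catalogue["quality"] = filled.mean(axis=1)

# Rank correlation between metadata quality and each usage indicator.
for usage_col in ("views", "downloads"):
    rho, p = spearmanr(catalogue["quality"], catalogue[usage_col])
    print(f"quality vs {usage_col}: rho={rho:.2f}, p={p:.3f}")
```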

