Design Preference Prediction With Data Privacy Safeguards: A Preliminary Study

Author(s):  
Alexander Burnap ◽  
Panos Y. Papalambros

Design preference models are used widely in product planning and design development. Their prediction accuracy depends on large amounts of personal user data, including purchase records and other personal choice records. With increased Internet and smart device use, sources of personal data are becoming more varied and their capture more ubiquitous. This situation raises the question of whether there is a trade-off between improving products and compromising individual user privacy. To advance this conversation, we analyze how privacy safeguards may affect design preference modeling. We conduct an experiment using real user data to study the performance of design preference models under different levels of privacy. Results indicate there is a trade-off between accuracy and privacy. However, with enough data, models with privacy safeguards can still be sufficiently accurate to answer population-level design questions.
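The abstract does not specify which safeguard was evaluated; a common choice, shown here purely as an illustration, is ε-differential privacy via the Laplace mechanism applied to a population-level preference count. The `private_preference_share` helper and all parameter values below are assumptions, not the paper's method:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Inverse-CDF sampling of Laplace(0, scale) noise.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_preference_share(choices: list, epsilon: float) -> float:
    """Epsilon-differentially-private estimate of the share of users who
    chose design A (choices are 0/1 indicators).

    Changing one user's choice shifts the count by at most 1, so the
    sensitivity of the sum is 1 and Laplace(1/epsilon) noise suffices.
    """
    noisy_count = sum(choices) + laplace_noise(1.0 / epsilon)
    return noisy_count / len(choices)

# With many users the noise washes out: population-level answers stay
# accurate even under a fairly strict privacy budget.
random.seed(0)
choices = [1] * 7000 + [0] * 3000   # true share = 0.70
est = private_preference_share(choices, epsilon=0.5)
```

With 10,000 simulated choices and ε = 0.5, the noisy estimate stays within a fraction of a percentage point of the true 70% share, which mirrors the paper's finding that population-level questions remain answerable under privacy safeguards given enough data.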

2021 ◽  
Vol 00 (00) ◽  
pp. 1-19
Author(s):  
Diah Yuniarti ◽  
Sri Ariyanti

This study aims to provide recommendations to the government on regulating licensing, content, and data privacy and protection for integrated broadcast-broadband (IBB) operations in Indonesia, referencing Singapore, Japan and Malaysia as case studies, given the need for umbrella regulations for IBB implementation. Singapore and Japan were chosen as countries that have deployed IBB, using the hybrid broadcast broadband television (HbbTV) and Hybridcast standards, respectively. Malaysia was chosen because it is a neighbouring country that has conducted trials of an IBB service bundled with its digital terrestrial television (DTT) service. The qualitative data are analysed using a comparative method. The results show that Indonesia needs to revise its existing Broadcasting Law immediately to accommodate DTT implementation, which is the basis for IBB and for the expansion of broadcasters’ TV business. Learning from Singapore, Indonesia could include over-the-top (OTT) content in its ‘Broadcast Behaviour Guidelines’ and ‘Broadcast Programme Standards’. Data privacy and protection requirements for each entity in the IBB ecosystem are necessary because IBB service user data are vulnerable to leakage. In light of this, ratification of the personal data protection law, as a legal umbrella, needs to be accelerated.


Sensors ◽  
2018 ◽  
Vol 18 (12) ◽  
pp. 4175 ◽  
Author(s):  
Fabio Angeletti ◽  
Ioannis Chatzigiannakis ◽  
Andrea Vitaletti

In the era of the Internet of Things (IoT), drug developers can potentially access a wealth of real-world, participant-generated data that enables better insights and streamlined clinical trial processes. Protection of confidential data is of primary interest when it comes to health data, as a person's medical condition influences their daily, professional, and social life. Current approaches in digital trials entail that private user data are provisioned to the trial investigator, who is considered a trusted party. The aim of this paper is to present the technical requirements and the research challenges in securing the flow and control of personal data and protecting the interests of all involved parties during the first phases of a clinical trial, namely the characterization of potential patients and their possible recruitment. The proposed architecture lets individuals keep their data private during these phases while providing a useful sketch of their data to the investigator. Proof-of-concept implementations are evaluated in terms of the performance achieved in real-world environments.
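As a toy illustration of providing "a useful sketch of their data" rather than raw records, participants could share only coarse aggregates, such as an age histogram, with the investigator during recruitment characterization. The `eligibility_sketch` helper, the bin edges, and the records below are hypothetical and not part of the paper's architecture:

```python
def eligibility_sketch(patients, bins=(0, 40, 60, 120)):
    """Share a coarse age histogram with the trial investigator instead
    of raw patient records. Each patient contributes only an increment
    to one bin, so no individual record leaves the participant's side.
    """
    hist = [0] * (len(bins) - 1)
    for p in patients:
        for i in range(len(bins) - 1):
            if bins[i] <= p["age"] < bins[i + 1]:
                hist[i] += 1
                break
    return hist

# Hypothetical local records; the investigator sees only the counts.
patients = [{"age": 34}, {"age": 52}, {"age": 47}, {"age": 71}]
sketch = eligibility_sketch(patients)
```

The investigator can judge whether enough patients fall in the eligible age range without ever seeing an individual age.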


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Siliang Dong ◽  
Zhixin Zeng ◽  
Yining Liu

Electricity theft occurs from time to time in the smart grid and can cause great losses to the power supplier, so it is necessary to prevent it. Machine learning can be used as an electricity theft detection tool to quickly identify participants suspected of electricity theft; however, directly publishing user data to the detector for machine learning-based detection may expose user privacy. In this paper, we propose a real-time fault-tolerant and privacy-preserving electricity theft detection (FPETD) scheme that combines n-source anonymity and a convolutional neural network (CNN). In our scheme, we design a fault-tolerant raw data collection protocol that collects electricity data while cutting off the correspondence between users and their data, thereby ensuring fault tolerance and data privacy during the electricity theft detection process. Experiments demonstrate that our dimensionality reduction method gives our model an accuracy of 92.86% for detecting electricity theft, which is much better than the alternatives.
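The core idea of cutting the correspondence between users and their data can be caricatured as a shuffle at the collector. This sketch is illustrative only: the names `collect_anonymized` and `flag_suspicious` are assumptions, and the paper's FPETD protocol additionally provides fault tolerance and cryptographic guarantees and uses a CNN rather than the threshold detector shown here:

```python
import random

def collect_anonymized(profiles, seed=None):
    """Toy sketch of n-source anonymity: the collector forwards the
    electricity readings of all n users in random order, severing the
    link between a user's identity and their profile. The detector can
    still flag suspicious profiles, but cannot tell whose they are.
    """
    rng = random.Random(seed)
    batch = list(profiles.values())   # identities are dropped here
    rng.shuffle(batch)                # order is randomized
    return batch

def flag_suspicious(batch, threshold=1.0):
    # Stand-in for the paper's CNN detector: flag profiles whose mean
    # consumption is implausibly low (a common theft signature).
    return [i for i, p in enumerate(batch) if sum(p) / len(p) < threshold]

# Hypothetical daily readings per user.
profiles = {
    "alice": [5.0, 6.0, 5.0],
    "bob":   [0.1, 0.2, 0.1],   # suspiciously low consumption
    "carol": [4.0, 4.5, 5.0],
}
batch = collect_anonymized(profiles, seed=1)
flagged = flag_suspicious(batch)
```

The detector learns that one profile in the batch looks like theft, but the index it flags carries no identity; re-linking a flagged profile to a user would require a separate, authorized step outside the detection path.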


Teknologi ◽  
2021 ◽  
Vol 11 (1) ◽  
pp. 46-58
Author(s):  
Syifa Ilma Nabila Suwandi ◽  
Xavier Wahyuadi Seloatmodjo ◽  
Alexandra Situmorang ◽  
Nur Aini Rakhmawati ◽  
...  

The presence of contact-tracing applications in the community as a means of preventing and overcoming the spread of COVID-19 can pose another risk: the potential danger to data privacy from contact tracing. This research examines user privacy policies through three samples of Android-based contact-tracing applications used to prevent, overcome and control the spread of the COVID-19 virus in today's society, reviewing them against the rules in Presidential Regulation of the Republic of Indonesia No. 95 of 2018 concerning Electronic-Based Government Systems (SPBE). The study was prepared using literature study, observation and qualitative analysis. The data privacy practices of the three samples were compared, then evaluated against the form of privacy policy required by Presidential Regulation No. 95 of 2018 concerning Electronic-Based Government Systems (SPBE) and against the ideal form of data privacy policy described by several experts. Comparative data were obtained through the applications themselves and other electronic media, and were then discussed to evaluate the data privacy policies of the three sample applications. Based on this research, it can be concluded that privacy intervention to deal with damage and save lives is legal as long as its use complies with regulations in the health, disaster, telecommunications, informatics and other related fields, as listed in Presidential Regulation No. 95 of 2018 concerning Electronic-Based Government Systems (SPBE). There also needs to be greater effort to maintain the security and confidentiality of user data privacy, through continuous system and data maintenance, encryption of stored private data in the manager's data warehouse, and additional data privacy policies that can guarantee the security and confidentiality of user data.


Significance: The move comes after Facebook suspended a UK political consulting firm, Cambridge Analytica, following allegations on March 18 that it improperly obtained personal data on 50 million Facebook users that were subsequently used in political campaigns. The incident has reignited debates in the United States and elsewhere over online privacy, targeted messaging and whether tech firms are now too powerful to be left to regulate themselves.

Impacts: First Amendment considerations will limit any efforts to control online political advertising in the United States. Accusations that Facebook facilitated foreign meddling in elections will dog it more than allegations of improper acquisition of user data. Internal criticism of Facebook's practices by employees, former employees and investors may be a greater agent of change than lawmakers.


Author(s):  
Sema Bulat Demir ◽  
Ayten Övür

Nowadays, social media platforms are frequently used on the Internet. When users create an account on these platforms, they are required to accept the data privacy policy. With the approval of the data policy, major problems may arise, such as the observation of every user activity on the platform, violations of the security and protection of personal data, and the sharing of user data with third parties for commercial purposes. In this regard, it is important to examine the privacy policies of social media platforms in detail. In this research, we examined the privacy policies of the five most popular free applications in the communication section of the Google Play Store on January 30th, 2021. The privacy policies of these applications were analyzed with the content analysis method; the research aims to reveal how the data that users provide are utilized, with or without the user's permission.


Author(s):  
A Ismail ◽  
M R Hamzah ◽  
H Hussin ◽  
...  

Big data allows widespread use and exchange of user data, which can lead to privacy breaches. Governments and corporations can combine personal data from different sources to learn a great deal about people, which in turn raises concerns about privacy. This paper provides a conceptual understanding of the antecedents of user privacy concerns and online self-disclosure activities, namely knowledge and perceived risks of big data. Big data knowledge is hypothesized to decrease privacy concerns, while perceived risk is suggested to increase them. Based on this framework, propositions are formulated as a basis for the study that will follow.


2021 ◽  
Author(s):  
Kai Rannenberg ◽  
Sebastian Pape ◽  
Frédéric Tronnier ◽  
Sascha Löbner

The aim of this study was to identify and evaluate different de-identification techniques that may be used in several mobility-related use cases. To do so, four use cases were defined in cooperation with a project partner focused on the legal aspects of the project, as well as with the VDA/FAT working group. Each use case raises different legal and technical issues with regard to the data and information gathered, used and transferred in the specific scenario. The use cases therefore differ in the type and frequency of the data gathered, as well as in the level of privacy and the speed of computation needed. After the use cases were identified, a systematic literature review was performed to identify suitable de-identification techniques for providing data privacy. External databases were also considered, as data that are expected to be anonymous might be re-identified by combining existing data with such external data. For each use case, requirements and possible attack scenarios were created to illustrate where exactly privacy-related issues could occur and how such issues could affect data subjects, data processors or data controllers. Suitable de-identification techniques should be able to withstand these attack scenarios. Based on a series of additional criteria, de-identification techniques are then analyzed for each use case, and possible solutions are discussed individually in chapters 6.1 - 6.2. It is evident that no one-size-fits-all approach to protecting privacy in the mobility domain exists. While all techniques analyzed in detail in this report, e.g., homomorphic encryption, differential privacy, secure multiparty computation and federated learning, can successfully protect user privacy in certain instances, their overall effectiveness differs depending on the specifics of each use case.
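As one concrete example of a simple de-identification step in the mobility domain, GPS coordinates can be coarsened to a grid and the result checked against the classic k-anonymity criterion (every quasi-identifier combination shared by at least k records). The helpers and coordinates below are illustrative assumptions, not taken from the report:

```python
from collections import Counter

def generalize_location(lat, lon, precision=1):
    # Coarsen GPS coordinates to a grid cell -- a basic de-identification
    # step that trades spatial accuracy for privacy.
    return (round(lat, precision), round(lon, precision))

def is_k_anonymous(records, k):
    """Return True if every quasi-identifier combination in `records`
    is shared by at least k records (the k-anonymity criterion)."""
    counts = Counter(records)
    return all(c >= k for c in counts.values())

# Hypothetical trip origins: two near Munich, two near Berlin.
trips = [(48.13, 11.57), (48.11, 11.58), (52.52, 13.41), (52.53, 13.44)]

raw_ok = is_k_anonymous(trips, k=2)      # exact points are all unique
coarse = [generalize_location(lat, lon) for lat, lon in trips]
coarse_ok = is_k_anonymous(coarse, k=2)  # grid cells each cover two trips
```

Exact coordinates fail the check because every trip is unique, while the coarsened cells satisfy 2-anonymity; richer techniques such as differential privacy give stronger, formally quantified guarantees at further cost in utility.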


2020 ◽  
Vol 17 (3) ◽  
pp. 819-834
Author(s):  
Wei Ou ◽  
Jianhuan Zeng ◽  
Zijun Guo ◽  
Wanqin Yan ◽  
Dingwan Liu ◽  
...  

With continuous improvements in computing power, great progress in algorithms and massive growth of data, artificial intelligence technologies have entered a third era of rapid development. However, with these improvements and the arrival of the era of big data, contradictions between data sharing and user data privacy have become increasingly prominent. Federated learning is a technology that can preserve user privacy while training a better model across different data providers. In this paper, we design a vertical federated learning system for Bayesian machine learning with homomorphic encryption. During training, raw data remain local, and only encrypted model information is exchanged. The model trained by this system is comparable (up to 90%) to models trained on a single central server, while preserving privacy. The system can be widely used in risk control, medical, financial, education and other fields. It is of great significance for solving the data-islands problem and protecting users' privacy.
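The privacy goal here, that the coordinator learns aggregate model information but no individual contribution, can be sketched with pairwise additive masking, a lightweight stand-in for the homomorphic encryption the paper actually uses. Function names, the mask range, and the toy updates are assumptions for illustration:

```python
import random

def mask_updates(updates, seed=42):
    """Secure-aggregation sketch: each pair of clients agrees on a random
    mask that one adds and the other subtracts, so individual updates are
    hidden from the server but the masks cancel exactly in the sum.
    """
    n = len(updates)
    rng = random.Random(seed)  # stands in for pairwise key agreement
    masked = [list(u) for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            for d in range(len(updates[0])):
                m = rng.uniform(-100, 100)  # mask shared by clients i and j
                masked[i][d] += m           # client i adds the mask
                masked[j][d] -= m           # client j subtracts it
    return masked

def federated_average(masked):
    # The server averages the masked updates; because every mask appears
    # once with each sign, it recovers the true average without ever
    # seeing a single client's raw update.
    n, dim = len(masked), len(masked[0])
    return [sum(u[d] for u in masked) / n for d in range(dim)]

# Toy two-dimensional model updates from three clients.
updates = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
avg = federated_average(mask_updates(updates))
```

Each masked vector looks like noise in isolation, yet the average comes out equal (up to floating-point rounding) to the average of the plain updates; homomorphic encryption achieves the same effect with cryptographic rather than statistical hiding.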


2021 ◽  
Author(s):  
Stanton Heister ◽  
Kristi Yuthas

Recent increases in security breaches and digital surveillance highlight the need for improved privacy and security, particularly over users’ personal data. Advances in cybersecurity and new legislation promise to improve data protection. Blockchain and distributed ledger technologies provide novel opportunities for protecting user data through decentralized identity and other privacy mechanisms. These systems can allow users greater sovereignty through tools that enable them to own and control their own data. Artificial intelligence provides further possibilities for enhancing system and user security, enriching data sets, and supporting improved analytical models.

