personal data
Recently Published Documents

TOTAL DOCUMENTS: 4879 (five years: 2836)
H-INDEX: 34 (five years: 10)

2022 · Vol 30 (7) · pp. 1-16
Author(s): Zhiqiang Xu, Dong Xiang, Jialiang He

This paper studies the protection of data privacy in news crowdfunding in the era of artificial intelligence. It draws on an encryption algorithm for AI-based data protection and a BP neural network prediction model to analyze data privacy protection in news crowdfunding, and combines these with a questionnaire survey to gauge the public’s awareness of privacy. The results show that artificial intelligence can raise awareness of personal data and privacy, improve the measures and methods used to protect them, and raise the overall effectiveness and level of privacy protection. The survey also found that male college students’ awareness of personal trait information was only 81.1% and of network trace information only 78.5%, while female college students’ awareness of personal credit information was only 78.3%.
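The paper’s quantitative component rests on a BP (backpropagation) neural network prediction model. As a rough illustration of what such a model looks like, the sketch below trains a small feedforward network with plain backpropagation on hypothetical questionnaire-style features; the feature set, target score, and all parameter values are assumptions for illustration, not the authors’ data or configuration.

```python
# A minimal sketch of a BP (backpropagation) neural network prediction model;
# the features, target and hyperparameters are hypothetical placeholders,
# not the authors' survey data.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs: per-respondent questionnaire features (e.g. awareness of
# personal trait information, network traces, personal credit information).
X = rng.random((200, 3))
# Hypothetical target: an overall privacy-protection score in [0, 1].
y = (0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2]).reshape(-1, 1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer, trained by plain backpropagation on mean squared error.
W1 = rng.normal(scale=0.5, size=(3, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros((1, 1))

lr = 0.5
for epoch in range(2000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    y_hat = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the error signal through both layers
    d_out = (y_hat - y) * y_hat * (1 - y_hat)
    d_hid = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out) / len(X); b2 -= lr * d_out.mean(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_hid) / len(X); b1 -= lr * d_hid.mean(axis=0, keepdims=True)

print("final training MSE:", float(np.mean((y_hat - y) ** 2)))
```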


2022 · Vol 14 (1) · pp. 1-10
Author(s): Tooska Dargahi, Hossein Ahmadvand, Mansour Naser Alraja, Chia-Mu Yu

Connected and Autonomous Vehicles (CAVs) are introduced to improve individuals’ quality of life by offering a wide range of services. They collect a huge amount of data and exchange it with each other and with the infrastructure. The collected data usually include sensitive information about the users and the surrounding environment. Therefore, data security and privacy are among the main challenges in this industry. Blockchain, an emerging distributed ledger, has been considered by the research community as a potential solution for enhancing data security, integrity, and transparency in Intelligent Transportation Systems (ITS). However, despite the emphasis of governments on the transparency of personal data protection practices, CAV stakeholders have not been successful in communicating appropriate information to end users regarding the procedures for collecting, storing, and processing their personal data, as well as data ownership. This article provides a vision of the opportunities and challenges of adopting blockchain in ITS from the “data transparency” and “privacy” perspectives. The main aim is to answer the following questions: (1) Considering the amount of personal data collected by CAVs, such as location, how would the integration of blockchain technology affect the transparency, fairness, and lawfulness of personal data processing with respect to the data subjects (one of the main principles in the existing data protection regulations)? (2) How can the trade-off between transparency and privacy be addressed in blockchain-based ITS use cases?
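One pattern often discussed for question (2) is to keep raw personal data off-chain and anchor only a salted hash (commitment) of each record on the ledger, so processing can be audited without publishing the data itself. The sketch below is a minimal illustration of that pattern, not a design proposed by the authors; the record fields, the purpose label, and the in-memory list standing in for a blockchain are all assumptions.

```python
# A minimal sketch of an "off-chain data, on-chain commitment" pattern:
# raw records stay in an access-controlled store, while only a salted SHA-256
# hash plus a purpose label is appended to the ledger. The record fields, the
# purpose string and the in-memory ledger list are illustrative assumptions.
import hashlib
import json
import os

off_chain_store = {}   # raw personal data: access-controlled and erasable
ledger = []            # append-only list standing in for a blockchain

def record_location(vehicle_id, lat, lon, ts):
    """Store the raw record off-chain and anchor its salted hash on-chain."""
    record = {"vehicle": vehicle_id, "lat": lat, "lon": lon, "ts": ts}
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + json.dumps(record, sort_keys=True).encode()).hexdigest()
    off_chain_store[digest] = {"record": record, "salt": salt}
    ledger.append({"hash": digest, "purpose": "traffic_optimisation"})
    return digest

def audit(digest):
    """An authorised auditor recomputes the hash and checks it against the ledger."""
    entry = off_chain_store[digest]
    recomputed = hashlib.sha256(
        entry["salt"] + json.dumps(entry["record"], sort_keys=True).encode()
    ).hexdigest()
    return any(block["hash"] == recomputed for block in ledger)

h = record_location("CAV-001", 53.48, -2.24, 1_650_000_000)
print(audit(h))  # True: processing is verifiable without publishing the location
```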


2022 · Vol 24 (3) · pp. 1-25
Author(s): Nishtha Paul, Arpita Jadhav Bhatt, Sakeena Rizvi, Shubhangi

The frequency of malware attacks on Android apps is increasing day by day. Current studies have revealed startling facts about data harvesting incidents in which users’ personal data are at stake. To preserve user privacy, a permission-induced risk interface, MalApp, is proposed to identify privacy violations arising from the permissions granted during app installation. It comprises a multi-fold process that performs static analysis based on the app’s category. First, reverse engineering is applied to extract app permissions and construct a Boolean-valued permission matrix. Second, the permissions are ranked to identify risky permissions across categories. Third, machine learning and ensemble techniques are incorporated to test the efficacy of the proposed approach on a data set of 404 benign and 409 malicious apps. The empirical study shows that the proposed algorithm achieves a best-case malware detection rate of 98.33%. The highlight of the interface is that any app can be classified as benign or malicious through static analysis, even before it is run.
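The abstract outlines a pipeline built from a Boolean-valued permission matrix, permission ranking, and ensemble classification. The sketch below shows one plausible shape for such a pipeline on a toy, randomly generated dataset; the permission vocabulary, the planted labelling rule, and the choice of a random forest are illustrative assumptions, not the authors’ implementation (which would also require reverse-engineering real APKs to read their manifests).

```python
# A minimal sketch of a permission-matrix-plus-ensemble pipeline on toy data;
# the permission vocabulary, the planted labelling rule and the random forest
# are illustrative assumptions, not the authors' implementation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

PERMISSIONS = ["READ_CONTACTS", "SEND_SMS", "ACCESS_FINE_LOCATION",
               "READ_SMS", "RECORD_AUDIO", "INTERNET"]

def to_permission_vector(requested):
    # One Boolean row of the permission matrix: 1 if the app requests the permission.
    return [1 if p in requested else 0 for p in PERMISSIONS]

# Toy dataset: random permission sets with a planted rule standing in for
# malicious behaviour (combining contacts access with SMS sending).
rng = np.random.default_rng(42)
X = rng.integers(0, 2, size=(800, len(PERMISSIONS)))
y = ((X[:, PERMISSIONS.index("READ_CONTACTS")] == 1) &
     (X[:, PERMISSIONS.index("SEND_SMS")] == 1)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))

# Rank permissions by how strongly they separate the two classes.
for perm, score in sorted(zip(PERMISSIONS, clf.feature_importances_), key=lambda t: -t[1]):
    print(f"{perm:22s} {score:.3f}")

# Classify a hypothetical new app from its requested permissions.
new_app = to_permission_vector({"READ_CONTACTS", "SEND_SMS", "INTERNET"})
print("new app predicted as:", "malicious" if clf.predict([new_app])[0] else "benign")
```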


2022
Author(s): Nishchal J

Recent research has established the possibility of deducing soft-biometric attributes such as age, gender and race from an individual’s face image with high accuracy. Many techniques have been proposed to ensure user privacy, such as visible distortions to the images, manipulation of the original image with new face attributes, face swapping, etc. Although these techniques achieve the goal of user privacy by fooling face recognition models, they do not help users who want to upload their original images without visible distortions or manipulation. The objective of this work is to implement techniques that ensure the privacy of users’ sensitive or personal data in face images by introducing minimal pixel-level distortions, using white-box and black-box perturbation algorithms to fool AI models while maintaining the integrity of the image so that it appears unchanged to the human eye.
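As an illustration of the kind of white-box, minimum-distortion perturbation the abstract refers to, the sketch below applies an FGSM-style step (the sign of the input gradient, scaled by a small epsilon) to a stand-in image and model in PyTorch. The tiny CNN, the random “face” tensor, and the epsilon value are assumptions for illustration; the actual models and perturbation algorithms used in the work are not specified here.

```python
# A minimal sketch of a white-box, FGSM-style perturbation: a tiny
# sign-of-gradient pixel change intended to change a model's prediction while
# staying visually negligible. The stand-in CNN and random "face" tensor are
# assumptions; with an untrained model the prediction flip is not guaranteed.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(                 # stand-in soft-biometric classifier
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
)
model.eval()

image = torch.rand(1, 3, 64, 64)       # stand-in face image, pixels in [0, 1]
with torch.no_grad():
    label = model(image).argmax(dim=1)  # the model's current prediction

# White-box step: gradient of the loss with respect to the input pixels.
image.requires_grad_(True)
loss = nn.functional.cross_entropy(model(image), label)
loss.backward()

epsilon = 4 / 255                      # keeps the per-pixel distortion tiny
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("clean prediction:      ", label.item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
print("max pixel change:      ", (adversarial - image).abs().max().item())
```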


ERA Forum · 2022
Author(s): Teresa Quintel

Financial information can play a key role in tackling money laundering, terrorist financing and combatting serious crime more generally. Preventing and fighting money laundering and the financing of terrorism were top priorities of the European Union’s (EU) Security Strategy for 2020-2025, which might explain the fast developments regarding legislative measures to further regulate anti-money laundering (AML) and counter terrorism financing (CTF). In May 2020, the European Commission put forward an Action Plan to establish a Union policy on combatting money laundering and shortly afterwards proposed a new AML Package.

Financial Intelligence Units (FIUs) play a crucial role in analysing and exchanging information concerning unusual and suspicious transactions, serving as intermediaries between the private sector and law enforcement authorities (LEAs). Such information includes personal data, which is protected under the EU data protection acquis. The latter consists of two main laws: the General Data Protection Regulation (GDPR), which applies to general processing, and the so-called Law Enforcement Directive (LED), which applies when competent law enforcement authorities process personal data for law enforcement purposes.

This article argues that the current legal framework on AML and CTF legislation is unclear on the data protection regime that applies to the processing of personal data by FIUs, and that the proposed AML Package does little or nothing to clarify this dilemma. In order to contribute to the discussion on the applicable data protection framework for FIUs, the assessment puts forward arguments for and against the application of the LED to such processing, taking into account the relevant legal texts on AML and data protection.


2022 · Vol 35 (1) · pp. 101-118
Author(s): Miral-Sabry AlAshry

The purpose of this study is to investigate the effectiveness of the Egyptian Personal Data Protection Law No. 151 of 2020 and its implications for journalistic practice. More specifically, the focal point of the study was to explore how Egyptian journalists interpret the law and its implications for press freedom in Egypt. The underpinning theoretical framework was informed by the authoritarian school of thought. Questionnaires were distributed to 199 journalists from both independent and semi-governmental outlets, representing thirteen official newspapers of Egypt, while in-depth interviews were conducted with three editors, four journalists, and three human rights lawyers. The findings indicate that the government places restrictions on journalists by using the Data Protection Law in matters relating to the media, and that the law is negatively impacting journalists and media houses. It was clear from the findings that journalists see the law as an obstacle to media independence, as it allows the government to exercise greater information control through digital policy and imposes regulatory rules on journalists.

