Privacy breaches
Recently Published Documents

TOTAL DOCUMENTS: 110 (FIVE YEARS: 52)
H-INDEX: 11 (FIVE YEARS: 2)

2022 ◽ Vol 22 (2) ◽ pp. 1-21
Author(s): Syed Atif Moqurrab, Adeel Anjum, Abid Khan, Mansoor Ahmed, Awais Ahmad, ...

Due to the evolution of the Internet of Things, clinical data is growing exponentially and increasingly flows through smart technologies. The resulting big biomedical data is confidential, as it contains patients' personal information and findings. Big biomedical data is usually stored in the cloud, making it convenient to access and share, and data shared for research purposes helps reveal useful and previously unexposed insights. Unfortunately, sharing such sensitive data also creates privacy threats. Clinical data is generally available in textual format (e.g., perception reports). Within natural language processing, many studies have been published to mitigate privacy breaches in textual clinical data, but limitations and shortcomings remain that must be addressed. In this article, a novel framework for textual medical data privacy, Deep-Confidentiality, is proposed. The framework improves Medical Entity Recognition (MER) using deep neural networks and sanitization compared to current state-of-the-art techniques. Moreover, a new, generic utility metric is proposed that overcomes the shortcomings of existing utility metrics and provides a truer representation of sanitized documents relative to the originals. To assess its effectiveness, the framework is evaluated on the i2b2-2010 NLP challenge dataset, which is considered one of the more complex medical datasets for MER. The framework improves MER by 7.8% in recall, 7% in precision, and 3.8% in F1-score over existing deep learning models, and it also improves the data utility of sanitized documents by up to 13.79% at k = 3.
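To make the sanitization and utility ideas concrete, the sketch below replaces detected medical entities with their type labels and scores utility as the cosine similarity between term-frequency vectors of the original and sanitized note. It is only an illustration: the detected_entities dictionary stands in for the deep MER model, and the cosine score stands in for the paper's utility metric, neither of which is reproduced here.

```python
# Minimal sketch: sanitize detected medical entities and score utility.
# This is NOT the Deep-Confidentiality pipeline; the entity list, the
# generalization scheme, and the cosine-similarity utility score are
# illustrative stand-ins for the paper's MER model and utility metric.
from collections import Counter
import math
import re

# Hypothetical output of a medical entity recognizer (MER) for one note.
detected_entities = {"aspirin": "DRUG", "myocardial infarction": "PROBLEM"}

# Generalize each entity to its type label instead of removing it outright,
# so the sanitized note keeps some analytic value.
def sanitize(text: str, entities: dict) -> str:
    for surface, label in entities.items():
        text = re.sub(re.escape(surface), f"<{label}>", text, flags=re.IGNORECASE)
    return text

# Crude utility score: cosine similarity between term-frequency vectors
# of the original and sanitized documents (1.0 = identical content).
def utility(original: str, sanitized: str) -> float:
    a, b = Counter(original.lower().split()), Counter(sanitized.lower().split())
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

note = "Patient prescribed aspirin after suspected myocardial infarction."
clean = sanitize(note, detected_entities)
print(clean)                       # Patient prescribed <DRUG> after suspected <PROBLEM>.
print(round(utility(note, clean), 3))
```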


Sensors ◽ 2022 ◽ Vol 22 (2) ◽ pp. 513
Author(s): Efstratios Chatzoglou, Georgios Kambourakis, Christos Smiliotopoulos

The impact that IoT technologies have on our everyday life is indisputable. Wearables, smart appliances, lighting, security controls, and others make our life simpler and more comfortable. For the sake of easy monitoring and administration, such devices are typically accompanied by smartphone apps, which are becoming increasingly popular and are sometimes even required to operate the device. Nevertheless, the use of such apps may indirectly magnify the attack surface of the IoT device itself and expose the end-user to security and privacy breaches. Therefore, a key question arises: do these apps curtail their functionality to the minimum needed, and are they secure against known vulnerabilities and flaws? In search of concrete answers to this question, this work scrutinizes more than forty chart-topping official Android apps belonging to six diverse mainstream categories of IoT devices. We analyse each app statically, and almost half of them dynamically, after pairing them with real-life IoT devices. The results span several axes, namely sensitive permissions, misconfigurations, weaknesses, vulnerabilities, and other issues, including trackers, manifest data, shared software, and more. The short answer to the posed question is that the majority of such apps remain susceptible to a range of security and privacy issues, which, at least to a significant degree, reflects the general state of this ecosystem.
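A minimal sketch of the static step, assuming the androguard library (3.x-style API; module paths differ across versions) and a purely hypothetical APK filename: it lists the permissions an IoT companion app requests and flags those commonly treated as dangerous.

```python
# Illustrative static-analysis step: flag IoT companion apps that request
# permissions commonly considered dangerous. Assumes the androguard package
# (androguard.misc.AnalyzeAPK, 3.x-style API) and a local APK file whose
# name is purely hypothetical.
from androguard.misc import AnalyzeAPK

DANGEROUS = {
    "android.permission.ACCESS_FINE_LOCATION",
    "android.permission.RECORD_AUDIO",
    "android.permission.READ_CONTACTS",
    "android.permission.CAMERA",
}

def audit_permissions(apk_path: str) -> None:
    a, _dex, _analysis = AnalyzeAPK(apk_path)      # parse APK and DEX files
    requested = set(a.get_permissions())           # permissions declared in the manifest
    risky = sorted(requested & DANGEROUS)
    print(f"{a.get_package()}: {len(requested)} permissions requested")
    for perm in risky:
        print(f"  potentially excessive: {perm}")

audit_permissions("smart_camera_companion.apk")    # hypothetical APK name
```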


Author(s): L. V. Chesnokova

The article examines the spread of social networks, which has brought not only new communication opportunities but also the risk of blurring the boundaries between privacy and publicity. People voluntarily share personal data in exchange for public acceptance, and this information is recorded and studied by various government and commercial institutions. The danger to information privacy, understood as the right to control access to personal information, is aggravated by the peculiarities of online communication, which is characterized by “context collapse”: the merging of different audiences with different norms and values. Content posted on social media remains searchable beyond a specific point in time and situation. Whereas offline communication involves a foreseeable number of interlocutors, social networks carry an “invisible audience”, which leads to information asymmetry. Nevertheless, although most users are aware of the potential dangers of privacy breaches, they continue to share personal information on social networks. This phenomenon is called the privacy paradox. The reasons for this behavior include a lack of technical and social skills, a reluctance to spend time and energy on measures to minimize risk, a desire for wide social connections, and skepticism about the effectiveness of such efforts. User behavior on social networks is influenced primarily by factors such as age and education; young and middle-aged people are the most concerned about preserving privacy, as they have to manage the most complex social relations.


2021 ◽ Vol 26 (4) ◽ pp. 293-298
Author(s): Florin Popescu, George Bucăţa, Sorin Pistol

In general, the Internet relies on complex codes to protect information, but hackers are becoming more adept at defeating such systems. The resulting cyberattacks lead to privacy breaches affecting government officials as well as large corporations, compromising customer data and costing billions of euros per year in total. According to ENISA reports, these numbers are set to rise. Quantum technology is seen by scientists as a revolutionary replacement for standard encryption techniques.


2021 ◽ Vol 28 (1) ◽ pp. e100450
Author(s): Ian A Scott, Stacy M Carter, Enrico Coiera

Objectives: Different stakeholders may hold varying attitudes towards artificial intelligence (AI) applications in healthcare, which may constrain their acceptance if AI developers fail to take them into account. We set out to ascertain evidence of the attitudes of clinicians, consumers, managers, researchers, regulators and industry towards AI applications in healthcare.
Methods: We undertook an exploratory analysis of articles whose titles or abstracts contained the terms ‘artificial intelligence’ or ‘AI’ and ‘medical’ or ‘healthcare’ and ‘attitudes’, ‘perceptions’, ‘opinions’, ‘views’, ‘expectations’. Using a snowballing strategy, we searched PubMed and Google Scholar for articles published 1 January 2010 through 31 May 2021. We selected articles relating to non-robotic clinician-facing AI applications used to support healthcare-related tasks or decision-making.
Results: Across 27 studies, attitudes towards AI applications in healthcare, in general, were positive, more so for those with direct experience of AI, but provided certain safeguards were met. AI applications which automated data interpretation and synthesis were regarded more favourably by clinicians and consumers than those that directly influenced clinical decisions or potentially impacted clinician–patient relationships. Privacy breaches and personal liability for AI-related error worried clinicians, while loss of clinician oversight and inability to fully share in decision-making worried consumers. Both clinicians and consumers wanted AI-generated advice to be trustworthy, while industry groups emphasised AI benefits and wanted more data, funding and regulatory certainty.
Discussion: Certain expectations of AI applications were common to many stakeholder groups, from which a set of dependencies can be defined.
Conclusion: Stakeholders differ in some but not all of their attitudes towards AI. Those developing and implementing applications should consider policies and processes that bridge attitudinal disconnects between different stakeholders.


2021
Author(s): Hans Bruijn

The importance of privacy in our data-driven world can hardly be overestimated. Privacy breaches are not just about disclosing information: personal data is used to profile and manipulate us, sometimes on such a large scale that it affects society as a whole. What can governments do to protect our privacy? In The Governance of Privacy, Hans de Bruijn first analyses the complexity of the governance challenge, using the metaphor of a journey. At the start, users have strong incentives to share data. Harvested data continue the journey that might lead to a privacy breach, but not necessarily: it can also lead to highly valued services. That is why both preparedness at the start of the journey and resilience during the journey are crucial to privacy protection. The book then explores three strategies involving governments, the market, and society. Governments can use the power of the law; they can exploit the power of the market by stimulating companies to compete on privacy; and they can empower society, strengthening its resilience in a data-driven world.


Author(s): A Ismail, M R Hamzah, H Hussin, ...

Big data allows the widespread use and exchange of user data, which raises the possibility of privacy breaches. Governments and corporations will combine personal data from different sources and learn a great deal about individuals, which in turn raises concerns about privacy. This paper provides a conceptual understanding of the antecedents of user privacy concerns and online self-disclosure, namely knowledge and perceived risks of big data. Big data knowledge is hypothesized to decrease privacy concerns, while perceived risk is suggested to increase them. Based on this framework, propositions are formulated as a basis for the study that will follow.


Author(s): Eleanore Hickman, Martin Petrin

AI will change many aspects of the world we live in, including the way corporations are governed. Many efficiencies and improvements are likely, but there are also potential dangers, including the threat of harmful impacts on third parties, discriminatory practices, data and privacy breaches, fraudulent practices and even ‘rogue AI’. To address these dangers, the EU published ‘The Expert Group’s Policy and Investment Recommendations for Trustworthy AI’ (the Guidelines). The Guidelines derive seven principles from their four foundational pillars of respect for human autonomy, prevention of harm, fairness, and explicability. If implemented by businesses, the Guidelines will have a substantial impact on corporate governance. Fundamental questions at the intersection of ethics and law are considered, but because the Guidelines only address the former without (much) reference to the latter, their practical application is challenging for business. Further, while they promote many positive corporate governance principles, including a stakeholder-oriented (‘human-centric’) corporate purpose and diversity, non-discrimination, and fairness, it is clear that their general nature leaves many questions and concerns unanswered. In this paper we examine the potential significance and impact of the Guidelines on selected corporate law and governance issues. We conclude that more specificity is needed on how the principles will harmonise with company law rules and governance principles. However, despite their imperfections, until harder legislative instruments emerge, the Guidelines provide a useful starting point for directing businesses towards establishing trustworthy AI.


Energies ◽ 2021 ◽ Vol 14 (19) ◽ pp. 6384
Author(s): Nasser Kimbugwe, Tingrui Pei, Moses Ntanda Kyebambe

The role of Internet of Things (IoT) networks and systems in our daily life cannot be overstated. IoT is among the fastest-evolving technologies digitizing and interconnecting many domains, and most life-critical and finance-critical systems are now IoT-based. It is therefore paramount that the Quality of Service (QoS) of IoT systems is guaranteed. Traditionally, IoT systems have relied on heuristics, game-theoretic approaches, and optimization techniques for QoS guarantees. However, these methods struggle as the number of users and devices increases or when multicellular situations are considered. Moreover, IoT systems receive and generate huge amounts of data that traditional QoS-assurance methods cannot handle effectively, especially when it comes to extracting useful features from this data. Deep Learning (DL) approaches have been suggested as potential candidates for addressing these challenges and thereby enhancing and guaranteeing QoS in IoT. In this paper, we provide an extensive review of how DL techniques have been applied to enhance QoS in IoT. From the papers reviewed, we note that QoS in IoT-based systems is breached when the security and privacy of the systems are compromised or when IoT resources are not properly managed. This paper therefore examines how Deep Learning has been applied to enhance QoS in IoT by preventing security and privacy breaches of IoT-based systems and by ensuring the proper and efficient allocation and management of IoT resources. We identify the Deep Learning models and technologies described in state-of-the-art research and review papers, and highlight those most used in handling IoT QoS issues. We provide a detailed explanation of QoS in IoT, an overview of DL-based algorithms commonly used to enhance QoS, and a comprehensive discussion of how various DL techniques have been applied for this purpose. We conclude by highlighting emerging areas of research around Deep Learning and its applicability to IoT QoS enhancement, future trends, and the associated challenges.
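As a toy illustration of the kind of DL model such works employ, the sketch below trains a small PyTorch classifier that labels IoT traffic flows as normal or QoS-degrading from a few flow features. The feature set, synthetic labels, and architecture are assumptions made for the example and are not drawn from the surveyed papers.

```python
# Illustrative sketch only: a tiny PyTorch classifier for IoT flow quality.
# Features per flow (all scaled to [0, 1]): packet rate, mean latency,
# jitter, loss ratio. The data and the labelling rule are synthetic.
import torch
import torch.nn as nn

X = torch.rand(256, 4)
y = (X[:, 1] + X[:, 3] > 1.2).long()     # toy rule: high latency + loss => degraded

model = nn.Sequential(
    nn.Linear(4, 16), nn.ReLU(),
    nn.Linear(16, 2),                    # two classes: normal / degraded
)
optim = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(50):                  # short training loop on the toy data
    optim.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optim.step()

with torch.no_grad():
    acc = (model(X).argmax(dim=1) == y).float().mean().item()
print(f"training accuracy on toy flows: {acc:.2f}")
```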


Author(s): Olga Kengni Ngangmo, Ado Adamou Abba Ari, Alidou Mohamadou, Ousmane Thiare, Dina Taiwe Kolyang

Nowadays, cloud computing combined with new-generation networks and the Internet of Things facilitates the networking of numerous smart devices. Moreover, the advent of the smart web requires massive data backup from smart connected devices to the cloud. Unfortunately, publishing much of this data, such as medical information and financial transactions, can lead to serious privacy breaches, which is becoming the most serious issue in the Cloud of Things: passive attacks, for instance, can be launched to gain access to private information. For this reason, several data anonymization techniques have emerged to keep data as confidential as possible. However, these techniques often render the data unusable. Meanwhile, differential privacy, which has been used in a number of cyber-physical systems, has recently emerged as an efficient technique for ensuring the privacy of data stored in the Cloud of Things. In this exploratory paper, we study the differential privacy guarantees of a multi-level anonymization scheme for data graphs. The considered scheme disturbs the structure of the graph by adding false edges, groups the vertices into distinct sets, and permutes the vertices within these groups. In particular, we demonstrate that the data anonymized by this algorithm remain exploitable while guaranteeing the anonymity of users.
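A rough sketch of the three steps just described (false-edge injection, vertex grouping, and within-group permutation) is given below. The edge probability, group size, and relabelling details are illustrative assumptions, not the scheme's actual parameters or its differential-privacy analysis.

```python
# Rough sketch of the three anonymization steps: add fake edges, partition
# the vertices into groups, and permute vertex identifiers within each group.
# All parameters here are illustrative assumptions.
import random

def anonymize_graph(edges: set[tuple[int, int]], n: int,
                    fake_edge_prob: float = 0.05, group_size: int = 4,
                    seed: int = 0) -> tuple[set[tuple[int, int]], dict[int, int]]:
    rng = random.Random(seed)

    # Step 1: structural perturbation, add false edges at random.
    noisy = set(edges)
    for u in range(n):
        for v in range(u + 1, n):
            if (u, v) not in noisy and rng.random() < fake_edge_prob:
                noisy.add((u, v))

    # Step 2: group the vertices into distinct sets of roughly group_size.
    order = list(range(n))
    rng.shuffle(order)
    groups = [order[i:i + group_size] for i in range(0, n, group_size)]

    # Step 3: permute vertex identifiers within each group.
    relabel = {}
    for group in groups:
        shuffled = group[:]
        rng.shuffle(shuffled)
        relabel.update(dict(zip(group, shuffled)))

    remapped = {tuple(sorted((relabel[u], relabel[v]))) for u, v in noisy}
    return remapped, relabel

anon_edges, mapping = anonymize_graph({(0, 1), (1, 2), (2, 3)}, n=8)
print(len(anon_edges), mapping)
```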

