Distortion in Real-World Analytic Processes

2019 ◽  
Author(s):  
Peter Kieseberg ◽  
Lukas Daniel Klausner ◽  
Andreas Holzinger

In discussions on the General Data Protection Regulation (GDPR), anonymisation and deletion are frequently mentioned as suitable technical and organisational methods (TOMs) for privacy protection. The major problem of distortion in machine learning environments, as well as related privacy issues, is rarely mentioned. The Big Data Analytics project addresses these issues.

Medical Law ◽  
2019 ◽  
pp. 420-469
Author(s):  
Emily Jackson

All books in this flagship series contain carefully selected substantial extracts from key cases, legislation, and academic debate, providing students with a stand-alone resource. This chapter first examines the ethical justifications for protecting patient confidentiality and then discusses: the different legal sources of the duty of confidence, including the new General Data Protection Regulation; exceptions to the duty of confidence; and the remedies available for its breach. It briefly considers patients’ rights to gain access to their medical records. Finally, the chapter covers the implications of ‘big data’ and machine learning for healthcare, and the increasing use of mobile technology to generate, store, and transmit health data, known as mHealth.


Symmetry ◽  
2018 ◽  
Vol 10 (10) ◽  
pp. 485 ◽  
Author(s):  
Muhammad Ashfaq Khan ◽  
Md. Rezaul Karim ◽  
Yangwoo Kim

Every day we experience unprecedented data growth from numerous sources, which contributes to big data in terms of volume, velocity, and variability. These datasets impose great challenges on analytics frameworks and computational resources, making it difficult to extract meaningful information in a timely manner. Developing an efficient big data analytics framework is therefore an important research topic. To address these challenges by exploiting non-linear relationships in very large, high-dimensional datasets, machine learning (ML) and deep learning (DL) algorithms are increasingly used in analytics frameworks. Apache Spark is widely used as one of the fastest big data processing engines and helps solve iterative ML tasks through its distributed ML library, Spark MLlib. For real-world research problems, DL architectures such as Long Short-Term Memory (LSTM) networks are an effective way to overcome practical issues such as reduced accuracy, long-term sequence dependencies, and vanishing and exploding gradients in conventional deep architectures. In this paper, we propose an efficient analytics framework: a progressive machine learning technique that merges Spark-based linear models, a Multilayer Perceptron (MLP), and an LSTM in a two-stage cascade structure in order to enhance predictive accuracy. The proposed architecture enables us to organize big data analytics in a scalable and efficient way. To show the effectiveness of the framework, we applied the cascading structure to two different real-life datasets, solving a multiclass and a binary classification problem, respectively. Experimental results show that our analytical framework outperforms state-of-the-art approaches with a high level of classification accuracy.
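
To make the two-stage cascade idea concrete, the following is a minimal sketch in PySpark (pyspark.ml), assuming a Spark-based linear model (logistic regression) as the first stage whose class probabilities are appended to the features before a second-stage multilayer perceptron refines the prediction on a binary task. The input path, column names, and layer sizes are hypothetical, and the paper's LSTM stage is omitted, so this only illustrates the cascade wiring rather than the authors' exact pipeline.

# Minimal sketch of a two-stage cascade in PySpark (pyspark.ml).
# Assumptions: a binary 0/1 "label" column, numeric feature columns, and a
# hypothetical input path; the paper's LSTM stage is not reproduced here.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression, MultilayerPerceptronClassifier
from pyspark.ml.evaluation import MulticlassClassificationEvaluator

spark = SparkSession.builder.appName("cascade-sketch").getOrCreate()

df = spark.read.parquet("input_data.parquet")  # hypothetical dataset
feature_cols = [c for c in df.columns if c != "label"]
data = VectorAssembler(inputCols=feature_cols, outputCol="features").transform(df)
train, test = data.randomSplit([0.8, 0.2], seed=42)

# Stage 1: a fast Spark-based linear model produces first-pass class probabilities.
stage1 = LogisticRegression(featuresCol="features", labelCol="label",
                            probabilityCol="stage1_prob",
                            rawPredictionCol="stage1_raw",
                            predictionCol="stage1_pred")
model1 = stage1.fit(train)

# Append the stage-1 probability vector to the original features.
cascade_assembler = VectorAssembler(inputCols=["features", "stage1_prob"],
                                    outputCol="cascade_features")
train2 = cascade_assembler.transform(model1.transform(train))
test2 = cascade_assembler.transform(model1.transform(test))

# Stage 2: a multilayer perceptron refines the decision on the augmented features.
n_in = len(train2.select("cascade_features").first()["cascade_features"])
stage2 = MultilayerPerceptronClassifier(featuresCol="cascade_features",
                                        labelCol="label",
                                        layers=[n_in, 16, 2],  # 2 output classes (binary case)
                                        seed=42)
model2 = stage2.fit(train2)

accuracy = MulticlassClassificationEvaluator(labelCol="label",
                                             predictionCol="prediction",
                                             metricName="accuracy").evaluate(model2.transform(test2))
print(f"cascade accuracy: {accuracy:.3f}")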


2020 ◽  
Vol 102 (913) ◽  
pp. 199-234
Author(s):  
Nema Milaninia

Advances in mobile phone technology and social media have created a world where the volume of information generated and shared is outpacing the ability of humans to review and use that data. Machine learning (ML) models and “big data” analytical tools have the power to ease that burden by making sense of this information and providing insights that might not otherwise exist. In the context of international criminal and human rights law, ML is being used for a variety of purposes, including to uncover mass graves in Mexico, find evidence of homes and schools destroyed in Darfur, detect fake videos and doctored evidence, predict the outcomes of judicial hearings at the European Court of Human Rights, and gather evidence of war crimes in Syria. ML models are also increasingly being incorporated by States into weapon systems in order to better enable targeting systems to distinguish between civilians, allied soldiers and enemy combatants or even inform decision-making for military attacks.

The same technology, however, also comes with significant risks. ML models and big data analytics are highly susceptible to common human biases. As a result of these biases, ML models have the potential to reinforce and even accelerate existing racial, political or gender inequalities, and can also paint a misleading and distorted picture of the facts on the ground. This article discusses how common human biases can impact ML models and big data analytics, and examines what legal implications these biases can have under international criminal law and international humanitarian law.

