Fake news and manipulated data, the new GDPR, and the future of information

2017 ◽  
Vol 34 (2) ◽  
pp. 81-85 ◽  
Author(s):  
Manny Cohen

Manny Cohen, a veteran of 40 years in research and information and one of the founders of the tech industry area around Shoreditch and Old Street in London, now known as Silicon Roundabout, gives an interesting insight into current and future structures and changes in the information industry. He then discusses the ramifications of a major topic of today: the issues around fake news. What happens when the truth can be changed? Also discussed are the industry requirements around the GDPR (General Data Protection Regulation) and how artificial intelligence is needed to guide the industry's future and the search for the truth.

2019 ◽  
Vol 6 (1) ◽  
pp. 205395171986054 ◽  
Author(s):  
Heike Felzmann ◽  
Eduard Fosch Villaronga ◽  
Christoph Lutz ◽  
Aurelia Tamò-Larrieux

Transparency is now a fundamental principle for data processing under the General Data Protection Regulation. We explore what this requirement entails for artificial intelligence and automated decision-making systems. We address the topic of transparency in artificial intelligence by integrating legal, social, and ethical aspects. We first investigate the ratio legis of the transparency requirement in the General Data Protection Regulation and its ethical underpinnings, showing its focus on the provision of information and explanation. We then discuss the pitfalls with respect to this requirement by focusing on the significance of contextual and performative factors in the implementation of transparency. We show that human–computer interaction and human-robot interaction literature do not provide clear results with respect to the benefits of transparency for users of artificial intelligence technologies due to the impact of a wide range of contextual factors, including performative aspects. We conclude by integrating the information- and explanation-based approach to transparency with the critical contextual approach, proposing that transparency as required by the General Data Protection Regulation in itself may be insufficient to achieve the positive goals associated with transparency. Instead, we propose to understand transparency relationally, where information provision is conceptualized as communication between technology providers and users, and where assessments of trustworthiness based on contextual factors mediate the value of transparency communications. This relational concept of transparency points to future research directions for the study of transparency in artificial intelligence systems and should be taken into account in policymaking.


2021 ◽  
Vol 6 ◽  
Author(s):  
Johannes Langguth ◽  
Konstantin Pogorelov ◽  
Stefan Brenner ◽  
Petra Filkuková ◽  
Daniel Thilo Schroeder

We review the phenomenon of deepfakes, a novel technology enabling inexpensive manipulation of video material through artificial intelligence, in the context of today's wider discussion of fake news. We discuss the foundations and recent developments of the technology, examine how it differs from earlier manipulation techniques, and investigate technical countermeasures. Although the threat of deepfake videos with substantial political impact has been widely discussed in recent years, the political impact of the technology has so far been limited. We investigate reasons for this and extrapolate the types of deepfake videos we are likely to see in the future.


PEDIATRICS ◽  
1967 ◽  
Vol 39 (3) ◽  
pp. 462-462
Author(s):  

The painting Adopting a Child by Frederick Bacon Barwell (1857) calls our attention to the great forward strides effected in adoption procedures. The Committee on Adoptions of the Academy has recommended that, whenever possible, adoptions be consummated through established agencies. While the Committee has not taken the stand that only agency adoptions can be successful, we do believe that adoption agencies, on balance, can render superior service because they have the resources required to serve the diverse persons involved in this delicate and sensitive human relationship. This English genre or narrative painting affords an interesting insight into what is involved in an adoption. If one is permitted to mix retrospective analysis with speculation about the future, one could predict that the adoption portrayed in this picture would terminate in difficulties. The central figures are the natural mother and her child. In modern adoption procedures we recognize the importance of casework for the natural mother in her great decision to yield her child for adoption. The poignant despair revealed in the features of the mother indicates the lack of such necessary help to this unfortunate female. The absence of legal and moral privacy between the natural parent and the adopting parents derogates the possibility of a successful transfer of the child from one home to another.


2020 ◽  
Vol 89 (4) ◽  
pp. 55-72
Author(s):  
Nermin Varmaz

Summary: This article addresses the compliance of the use of Big Data and Artificial Intelligence (AI) by FinTechs with European data protection principles. FinTechs are increasingly replacing traditional credit institutions and are becoming more important in the provision of financial services, especially by using AI and Big Data. The ability to analyze a large amount of different personal data at high speed can provide insights into customer spending patterns, enable a better understanding of customers, or help predict investments and market changes. However, once personal data is involved, a collision with all basic data protection principles stipulated in the European General Data Protection Regulation (GDPR) arises, mostly due to the fact that Big Data and AI meet their overall objectives by processing vast data that lies beyond their initial processing purposes. The author shows that within this ratio, pseudonymization can prove to be a privacy-compliant and thus preferable alternative for the use of AI and Big Data while still enabling FinTechs to identify customer needs.
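The pseudonymization approach the summary describes can be illustrated with a minimal sketch. This is not from the article: the keyed-hash scheme, key name, and sample records below are assumptions chosen for illustration. The idea is that a direct identifier is replaced with a stable pseudonym derived under a secret key kept separate from the data set, so per-customer analysis (e.g. spending patterns) remains possible without exposing the identifier itself.

```python
import hashlib
import hmac

# Assumption for illustration: a secret key managed separately from the data set.
SECRET_KEY = b"stored-separately-from-the-dataset"

def pseudonymize(customer_id: str) -> str:
    """Derive a stable pseudonym from a customer ID using HMAC-SHA256."""
    return hmac.new(SECRET_KEY, customer_id.encode(), hashlib.sha256).hexdigest()[:16]

# Hypothetical sample records.
records = [
    {"customer_id": "DE-1001", "spend_eur": 420.50},
    {"customer_id": "DE-1002", "spend_eur": 87.10},
    {"customer_id": "DE-1001", "spend_eur": 19.99},
]

# Replace identifiers; aggregation per customer still works on the pseudonyms.
pseudonymized = [
    {"customer": pseudonymize(r["customer_id"]), "spend_eur": r["spend_eur"]}
    for r in records
]
```

Because the same customer always maps to the same pseudonym, spending can still be grouped per customer, while re-identification requires the separately held key.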


2019 ◽  
Vol 25 (4) ◽  
pp. 465-481 ◽  
Author(s):  
Adrián Todolí-Signes

Big data, algorithms and artificial intelligence now allow employers to process information on their employees and potential employees in a far more efficient manner and at a much lower cost than in the past. This makes it possible to profile workers automatically and even allows technology itself to replace human resources personnel in making decisions that have legal effects on employees (recruitment, promotion, dismissals, etc.). This entails great risks of worker discrimination and defencelessness, with workers unaware of the reasons underlying any such decision. This article analyses the protections established in the EU General Data Protection Regulation (GDPR) for safeguarding employees against discrimination. One of the main conclusions that can be drawn is that, in the face of the inadequacy of the GDPR in the field of labour relations, there is a need for the collective governance of workplace data protection, requiring the participation of workers’ representatives in establishing safeguards.


2018 ◽  
Vol 35 (2) ◽  
pp. 81-83 ◽  
Author(s):  
Claire Laybats ◽  
John Davies

This article discusses the main changes to data protection regulation introduced by the General Data Protection Regulation (GDPR), which comes into effect on 25 May 2018. It considers the effect on organizations under its jurisdiction through an interview with John Davies, Managing Director of the digital agency Reading Room, and then considers the implications for organizations currently outside the geographical area the GDPR covers. It finally considers the implications for the future as the GDPR becomes established.

