On Facial Recognition and Fundamental Rights in India: A Law and Technology Perspective

2021 ◽  
Author(s):  
Faizan Mustafa ◽  
Utkarsh Leo


2020 ◽ 
Vol 74 ◽  
pp. 03006
Author(s):  
Irena Nesterova

The growing use of facial recognition technologies has put them under the regulatory spotlight around the world. The EU is considering regulating facial recognition technologies as part of its initiative to create an ethical and legal framework for trustworthy artificial intelligence. These technologies are also attracting the attention of EU data protection authorities, e.g. in Sweden and the UK. In May 2019, San Francisco became the first city in the US to ban police and other government agencies from using facial recognition technology, and it was soon followed by other US cities. The paper aims to analyze the impact of facial recognition technology on fundamental rights and values, as well as the development of its regulation in Europe and the US. The paper will reveal how these technologies may significantly undermine fundamental rights, in particular the right to privacy, and may lead to prejudice and discrimination. Moreover, alongside the risks to fundamental rights, the wider impact of these surveillance technologies on democracy and the rule of law needs to be assessed. Although existing laws, in particular the EU General Data Protection Regulation, already impose significant requirements, further guidance and a clear regulatory framework are needed to ensure the trustworthy use of facial recognition technology.


2020 ◽  
Vol 11 (3) ◽  
pp. 375-389
Author(s):  
Isadora Neroni Rezende

Since 2019, over 600 law enforcement agencies across the United States have started using a groundbreaking facial recognition app designed by Clearview AI, a tech start-up which now plans to market its technology in Europe as well. While the Clearview app is an expression of the wider phenomenon of the repurposing of privately held data in the law enforcement context, its use in criminal proceedings is likely to encroach on individuals’ rights in unprecedented ways. Indeed, the Clearview app goes far beyond traditional facial recognition tools. While these have historically been limited to matching government-stored images, Clearview combines its technology with a database of over three billion images published on the Internet. Against this background, this article will review the use of this new investigative tool in light of the European Union (EU) legal framework on privacy and data protection. The proposed assessment will proceed as follows. Firstly, it will briefly assess the lawfulness of Clearview AI’s data scraping practices under the General Data Protection Regulation. Secondly, it will discuss the transfer of scraped data from the company to EU law enforcement agencies under the regime of Directive (EU) 2016/680 (the Police Directive). Finally, it will analyse the compliance of the Clearview app with art 10 of the Police Directive, which lays down the criteria for the lawful processing of biometric data. More specifically, this last analysis will focus on the strict necessity test, as defined in the Charter of Fundamental Rights of the European Union and the European Convention on Human Rights. Following this assessment, it will be argued that the Clearview app’s use in criminal proceedings is highly problematic in light of EU legislation on privacy and data protection.


2021 ◽  
Vol 23 (4) ◽  
pp. 457-484
Author(s):  
Niovi Vavoula

Abstract Over the past three decades, an elaborate legal framework on the operation of EU-Schengen information systems has been developed, so that in the near future a wide range of personal data concerning almost all third-country nationals (TCNs) with an administrative or criminal law link to the EU/Schengen area will be monitored through at least one information system. This article provides a legal analysis of the embedding of Artificial Intelligence (AI) tools in EU-level information systems for TCNs and critically examines the fundamental rights concerns that ensue from the use of AI to manage and control migration. It discusses automated risk assessment and algorithmic profiling used to examine applications for travel authorisations and Schengen visas, the shift towards the processing of facial images of TCNs, and the creation of future-proof information systems that anticipate the use of facial recognition technology. The contribution understands information systems as enabling the datafication of mobility and as security tools in an era in which a foreigner is risky by default. It is argued that a violation of the right to respect for private life is merely the gateway to impacts on a series of other fundamental rights, such as non-discrimination and the right to an effective remedy.


2021 ◽  
Author(s):  
Stephan Schindler

The use of biometric facial recognition by the police is highly controversial. Facial recognition can contribute to the effective combating of crime, but in some cases it entails serious encroachments on fundamental rights. Striking an appropriate balance in this regard is first and foremost the task of the democratically legitimized legislature. Requirements to that end can be found in the German Basic Law, the European Convention on Human Rights, the Charter of Fundamental Rights of the European Union and European data protection law. Against this background, the thesis examines the permissibility of police use of facial recognition in connection with video surveillance to combat crime.


2021 ◽  

New technologies have always challenged the social, economic, legal, and ideological status quo. Constitutional law is no less impacted by such technologically driven transformations, as the state must formulate a legal response to new technologies and their market applications, as well as to the state's own use of new technology. In particular, the development of data collection, data mining, and algorithmic analysis by public and private actors presents unique challenges to public law at the doctrinal as well as the theoretical level. This collection, aimed at legal scholars and practitioners, describes the constitutional challenges created by the algorithmic society. It offers an important synthesis of the state of play in law and technology studies, addressing the challenges for fundamental rights and democracy, the role of policy and regulation, and the responsibilities of private actors. This title is also available as Open Access on Cambridge Core.


Author(s):  
Chrisanthi Nega

Abstract. Four experiments were conducted to investigate the effect of size congruency on facial recognition memory, measured by remember, know, and guess responses. Different study times were employed: extremely short (300 and 700 ms), short (1,000 ms), and long (5,000 ms). With the short study time (1,000 ms), the size congruency effect occurred in knowing; with the long study time, it occurred in remembering. These results support the distinctiveness/fluency account of remembering and knowing as well as the memory systems account, since the size congruency effect that occurred in knowing under conditions facilitating perceptual fluency also occurred independently in remembering under conditions facilitating elaborative encoding. They do not support the idea that remember and know responses reflect differences in trace strength.

