Data governance and citizen participation in the digital welfare state

Data & Policy ◽  
2020 ◽  
Vol 2 ◽  
Author(s):  
Liesbet van Zoonen

Abstract U.S., UK, and European municipalities are increasingly experimenting with data as an instrument for social policy. This movement often pertains to the design of municipal data warehouses, dashboards, and predictive analytics, the latter mostly to identify risk of fraud. This transition to data-driven social policy, captured by the term "digital welfare state," takes place almost completely out of political and social view and escapes democratic decision making. In this article, I zoom in on The Netherlands and show in detail how sound data governance is lacking at three levels: data experiments and practices take place in a so-called "institutional void" without any clear democratic mandate; they are often based on data and analytic models of disputable quality; and they tend to transgress the EU General Data Protection Regulation (GDPR) on privacy and data protection. I also assess that the key stakeholders in this data transition, that is, the citizens whose data are used, are not actively informed, let alone invited to participate. As a result, a practice of top-down monitoring, containment, and control is evolving despite the desire of civil servants in this domain to do "good" with data. In the conclusion, I explore several data and policy alternatives to contribute to a higher-quality and more democratic use of data in the digital welfare state.

2020 ◽  
Vol 28 (4) ◽  
pp. 531-553 ◽  
Author(s):  
Aggeliki Tsohou ◽  
Emmanouil Magkos ◽  
Haralambos Mouratidis ◽  
George Chrysoloras ◽  
Luca Piras ◽  
...  

Purpose The General Data Protection Regulation (GDPR) entered into force in May 2018 to enhance personal data protection. Even though the GDPR brings many advantages for data subjects, it has turned out to be a significant challenge: organizations need to implement long and complex changes to become GDPR compliant, and data subjects are empowered with new rights of which they need to become aware. GDPR compliance is thus a challenging matter for the relevant stakeholders and calls for a software platform that can support their needs. The aim of the Data Governance for Supporting GDPR (DEFeND) EU project is to deliver such a platform. The purpose of this paper is to describe the process, within the DEFeND EU project, for eliciting and analyzing requirements for such a complex platform. Design/methodology/approach The platform needs to satisfy legal and privacy requirements and provide the functionalities that data controllers request for supporting GDPR compliance. Further, it needs to satisfy acceptance requirements to ensure that its users will embrace and use the platform. In this paper, the authors describe the methodology for eliciting and analyzing requirements for such a complex platform by analyzing data obtained from stakeholders in different sectors. Findings The findings present the process for eliciting the DEFeND platform requirements and an indicative sample of those requirements. The authors also describe the implementation of a secondary process for consolidating the elicited requirements into a consistent set of platform requirements. Practical implications The proposed software engineering methodology and data collection tools (i.e. questionnaires) are expected to have a significant impact for software engineers in academia and industry. Social implications It is reported repeatedly that data controllers face difficulties in complying with the GDPR. The study aims to offer mechanisms and tools that can assist organizations in complying with the GDPR, thus offering a significant boost toward the European personal data protection objectives. Originality/value To the best of the authors' knowledge, this is the first paper to provide software requirements for a GDPR compliance platform, including multiple perspectives.


2020 ◽  
Vol 1 (1) ◽  
Author(s):  
Marta Choroszewicz ◽  
Beata Mäihäniemi

This article uses the sociolegal perspective to address current problems surrounding data protection and the experimental use of automated decision-making systems. This article outlines and discusses the hard laws regarding national adaptations of the European General Data Protection Regulation and other regulations as well as the use of automated decision-making in the public sector in six European countries (Denmark, Sweden, Germany, Finland, France, and the Netherlands). Despite its limitations, the General Data Protection Regulation has impacted the geopolitics of the global data market by empowering citizens and data protection authorities to voice their complaints and conduct investigations regarding data breaches. We draw on the Esping-Andersen welfare state typology to advance our understanding of the different approaches of states to citizens’ data protection and data use for automated decision-making between countries in the Nordic regime and the Conservative-Corporatist regime. Our study clearly indicates a need for additional legislation regarding the use of citizens’ data for automated decision-making and regulation of automated decision-making. Our results also indicate that legislation in Finland, Sweden, and Denmark draws upon the mutual trust between public administrations and citizens and thus offers only general guarantees regarding the use of citizens’ data. In contrast, Germany, France, and the Netherlands have enacted a combination of general and sectoral regulations to protect and restrict citizens’ rights. We also identify some problematic national policy responses to the General Data Protection Regulation that empower governments and related institutions to make citizens accountable to states’ stricter obligations and tougher sanctions. The article contributes to the discussion on the current phase of the developing digital welfare state in Europe and the role of new technologies (i.e., automated decision-making) in this phase. 
We argue that states and public institutions should play a central role in strengthening the social norms associated with data privacy and protection as well as citizens’ right to social security.


2021 ◽  
Vol 8 (1) ◽  
pp. 205395172110187
Author(s):  
Luca Marelli ◽  
Giuseppe Testa ◽  
Ine van Hoyweghen

The emergence of a global industry of digital health platforms operated by Big Tech corporations, and its growing entanglements with academic and pharmaceutical research networks, raise pressing questions about the capacity of current data governance models and regulatory and legal frameworks to safeguard the sustainability of the health research ecosystem. In this article, we direct our attention toward the challenges faced by the European General Data Protection Regulation in regulating the potentially disruptive engagement of Big Tech platforms in health research. The General Data Protection Regulation upholds a rather flexible regime for scientific research through a number of derogations to otherwise stricter data protection requirements, while providing a very broad interpretation of the notion of "scientific research". Precisely the breadth of these exemptions, combined with the ample scope of this notion, could provide unintended leeway to the health data processing activities of Big Tech platforms, which have not been immune from carrying out privacy-infringing and socially disruptive practices in the health domain. We thus discuss further finer-grained demarcations to be traced within the broadly construed notion of scientific research, geared to implementing use-based data governance frameworks that distinguish health research activities that should benefit from a facilitated data protection regime from those that should not. We conclude that a "re-purposing" of big data governance approaches in health research is needed if European nations are to promote research activities within a framework of high safeguards for both individual citizens and society.


2019 ◽  
Vol 6 (2) ◽  
pp. 205395171986259 ◽  
Author(s):  
Johannes Starkbaum ◽  
Ulrike Felt

Before the EU General Data Protection Regulation entered into force in May 2018, we witnessed an intense struggle by actors associated with data-dependent fields of science, in particular health-related academia and biobanks, striving for legal derogations for data reuse in research. These actors engaged in a similar line of argument and formed issue alliances to pool their collective power. Using descriptive coding followed by an interpretive analysis, this article investigates the argumentative repertoire of these actors and embeds the analysis in ethical debates on data sharing and biobank-related data governance. We observe efforts to perform a paradigmatic shift in the discourse around the implementation of the General Data Protection Regulation, away from 'protecting data' as the key concern toward 'protecting health' of individuals and societies at large. Instead of data protection, the key risks stressed by health researchers became potential obstacles to research. In line with this, the exchange of information with data subjects is not a key concern in the arguments of biobank-related actors, and it is assumed that patients want 'their' data to be used. We interpret these narratives as a reaction to potential restrictions on data reuse and as part of a broader trend toward Big Data science, as the very idea of biobanking is conceptualized around long-term use of readily prepared data. We conclude that a sustainable implementation of biobanks needs not only to comply with the General Data Protection Regulation but must also proactively re-imagine its relation to citizens and data subjects in order to account for the various ways that science gets entangled with society.


2020 ◽  
Vol 30 (Supplement_5) ◽  
Author(s):  
A Puyol

Abstract The question of governing the use of medical data has become very important. The traditional way of governing it is through data protection (e.g., the General Data Protection Regulation in the EU). However, Prainsack and Buyx (2016, 2017) rightly point out that the strict regulation approach is insufficient for dealing with all abuses related to the use of big data: excessive control can curb the opportunities and benefits of digital technologies for users and society, and control and regulation may be insufficient to manage all risks associated with the use of big data. In opposition to the strict regulation approach, they propose a new one based on solidarity. The solidarity approach entails accepting the impossibility of eliminating the risks of modern data usage, and it implies a formula for compensating those affected by possible abuses: harm mitigation funds. Such funds would help ensure that people who accept those risks and are harmed as a result have appropriate support. I do not question the adequacy of harm mitigation funds, but rather the conception of solidarity chosen to justify them. I argue that this conception of solidarity, based on psychology and moral sociology, has less normative force than the strict regulation approach, which is based on the defence of fundamental rights. If we want the policy of harm mitigation funds to have a normative force similar to that of the strict regulation approach, then we must choose a conception of solidarity based on respect for fundamental rights.


2020 ◽  
pp. medethics-2019-105948
Author(s):  
Concetta Tania Di Iorio ◽  
Fabrizio Carinci ◽  
Jillian Oderkirk ◽  
David Smith ◽  
Manuela Siano ◽  
...  

Background Data processing of health research databases often requires a Data Protection Impact Assessment to evaluate the severity of risk and the appropriateness of measures taken to comply with the European Union (EU) General Data Protection Regulation (GDPR). We aimed to define and apply a comprehensive method for the evaluation of privacy, data governance and ethics among research networks involved in the EU project Bridge Health. Methods Computerised survey among associated partners of the main EU consortia, using a targeted instrument designed by the principal investigator and progressively refined in collaboration with an international advisory panel. Descriptive measures using the percentage of adoption of privacy, data governance and ethical principles as main endpoints were used for the analysis and interpretation of the results. Results A total of 15 centres from 10 European countries provided relevant information on the processing of sensitive data. Major areas of concern were noted for: data linkage (median, range of adoption: 45%, 30%–80%), access and accuracy of personal data (50%, 0%–100%) and anonymisation procedures (56%, 11%–100%). High variability was noted in the application of privacy principles. Conclusions A comprehensive methodology of Privacy and Ethics Impact and Performance Assessment was successfully applied at the international level. The method can help implement the GDPR and expand the scope of the Data Protection Impact Assessment, so that the public benefit of the secondary use of health data can be balanced with respect for personal privacy.

