data transparency
Recently Published Documents

TOTAL DOCUMENTS: 194 (five years: 97)
H-INDEX: 11 (five years: 4)

2022 · Vol 14 (1) · pp. 1-12
Author(s): Sandra Geisler, Maria-Esther Vidal, Cinzia Cappiello, Bernadette Farias Lóscio, Avigdor Gal, ...

A data ecosystem (DE) offers a keystone-player or alliance-driven infrastructure that enables the interaction of different stakeholders and the resolution of interoperability issues among shared data. However, despite years of research in data governance and management, trustworthiness is still affected by the absence of transparent and traceable data-driven pipelines. In this work, we focus on the requirements and challenges that DEs face when ensuring data transparency. Requirements are derived from data and organizational management, as well as from broader legal and ethical considerations. We propose a novel knowledge-driven DE architecture that provides the pillars for satisfying the analyzed requirements. We illustrate the potential of our proposal in a real-world scenario. Finally, we discuss and rate the potential of the proposed architecture in fulfilling these requirements.


2022 · Vol 14 (1) · pp. 1-9
Author(s): Saravanan Thirumuruganathan, Mayuresh Kunjir, Mourad Ouzzani, Sanjay Chawla

The data and Artificial Intelligence revolution has had a massive impact on enterprises, governments, and society alike. It is fueled by two key factors. First, data have become increasingly abundant and are often openly available: enterprises have more data than they can process, and governments are spearheading open data initiatives by setting up portals such as data.gov and releasing large amounts of data to the public. Second, AI engineering is becoming increasingly democratized, with open-source frameworks enabling even an individual developer to engineer sophisticated AI systems. But with such ease of use comes the potential for irresponsible use of data. Ensuring that AI systems adhere to a set of ethical principles is one of the major problems of our age. We believe that data and model transparency has a key role to play in mitigating the deleterious effects of AI systems. In this article, we describe a framework that synthesizes ideas from domains such as data transparency, data quality, and data governance, among others, to tackle this problem. Specifically, we advocate an approach based on automated annotations (of both the data and the AI model), which has a number of appealing properties. The annotations could be used by enterprises to gain visibility into potential issues, prepare data transparency reports, create and ensure policy compliance, and evaluate the readiness of data for diverse downstream AI applications. We propose a model architecture and enumerate the key components that could achieve these requirements. Finally, we describe a number of interesting challenges and opportunities.
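The automated-annotation idea the abstract advocates can be illustrated with a toy sketch: derive machine-readable annotations from a dataset, then check them against a policy before downstream use. All names here (`AnnotatedDataset`, `check_policy`, the annotation fields) are illustrative assumptions, not the authors' actual framework.

```python
# Toy sketch: annotate a dataset automatically, then gate it on a policy.
from dataclasses import dataclass, field

@dataclass
class AnnotatedDataset:
    name: str
    records: list
    annotations: dict = field(default_factory=dict)

def annotate(ds: AnnotatedDataset) -> AnnotatedDataset:
    """Derive simple annotations automatically from the data itself."""
    ds.annotations["row_count"] = len(ds.records)
    # Crude PII heuristic for illustration: any record with an email field.
    ds.annotations["contains_pii"] = any("email" in r for r in ds.records)
    return ds

def check_policy(ds: AnnotatedDataset, policy: dict) -> list:
    """Return a list of policy violations; empty means compliant."""
    violations = []
    if policy.get("forbid_pii") and ds.annotations.get("contains_pii"):
        violations.append("dataset contains PII but policy forbids it")
    if ds.annotations.get("row_count", 0) < policy.get("min_rows", 0):
        violations.append("too few rows for downstream training")
    return violations

ds = annotate(AnnotatedDataset("users", [{"email": "a@b.c"}, {"id": 2}]))
print(check_policy(ds, {"forbid_pii": True, "min_rows": 1}))
```

In this framing, the same annotations could feed both a transparency report (what the data contains) and an automated compliance gate (whether a given AI application may consume it).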


Author(s): Marc-André Gagnon, Matthew Herder, Janice Graham, Katherine Fierlbeck, Anna Danyliuk

Author(s): David Lie, Lisa M. Austin, Peter Yi Ping Sun, Wenjun Qiu

We have a data transparency problem. Currently, one of the main mechanisms we have for understanding data flows is the self-reporting that organizations provide through privacy policies. These suffer from many well-known problems, which are becoming more acute with the increasing complexity of the data ecosystem and the role of third parties – the affiliates, partners, processors, ad agencies, analytic services, and data brokers involved in the contemporary data practices of organizations. In this article, we argue that automating privacy policy analysis can improve the usability of privacy policies as a transparency mechanism. Our argument has five parts. First, we claim that we need to stop thinking of privacy policies as a transparency mechanism that enhances consumer choice and instead see them as one that enhances meaningful accountability. Second, we discuss a research tool that we prototyped, called AppTrans (for Application Transparency), which can detect inconsistencies between the declarations in a privacy policy and the actions the mobile application can potentially take when used. We used AppTrans to test seven hundred applications and found that 59.5 per cent were collecting data in ways that were not declared in their policies. The vast majority of the discrepancies were due to third-party data collection such as advertising and analytics. Third, we outline the follow-on research we did to extend AppTrans to analyse the information sharing of mobile applications with third parties, with mixed results. Fourth, we situate our findings in relation to the third-party issues that came to light in the recent Cambridge Analytica scandal and the calls from regulators for enhanced technical safeguards in managing these third-party relationships. Fifth, we discuss some of the limitations of privacy policy automation as a strategy for enhanced data transparency, and the policy implications of these limitations.
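The core check the abstract attributes to AppTrans – comparing what a policy declares against what the app's code can collect – reduces, at its simplest, to a set difference. The sketch below is a hypothetical illustration of that idea, not the tool's real implementation or API; the data-type names are invented.

```python
# Hypothetical sketch of a policy-vs-behavior consistency check:
# data types the app can collect but the privacy policy never declares.

def undeclared_collection(declared: set, detected: set) -> set:
    """Return data types detected in the app but absent from the policy."""
    return detected - declared

policy_declares = {"location", "contacts"}
code_can_collect = {"location", "contacts", "advertising_id", "device_id"}

print(sorted(undeclared_collection(policy_declares, code_can_collect)))
# → ['advertising_id', 'device_id']
```

Consistent with the abstract's finding, gaps of this kind would typically come from third-party SDKs (advertising, analytics) bundled into the app rather than from the developer's own code.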


2021 · Vol 2 (3) · pp. 157-173
Author(s): Uci Pratiwi, Khana Wijaya, Fajriyah Fajriyah

The development of information technology, especially the internet, has been welcomed by all circles, including the world of organizations. Lemkari Kota Prabumulih is a sports organization in Prabumulih City devoted to the sport of karate. The system used at several Lemkari karate training locations in Prabumulih is still manual: when a payment transaction occurs, management records it in a handwritten ledger and only later recapitulates the records in Microsoft Office Excel. As a result, data-recording errors are frequent, payments are misattributed because most participants are small children, and training time is lost to slow manual record-keeping, so trainers serve paying participants inefficiently. A website-based system for administrative services is therefore needed to facilitate good data management and data transparency for trainers, participants, and participants' parents. The website was built with the PHP (PHP: Hypertext Preprocessor) programming language, MySQL database storage, and UML (Unified Modeling Language) as the design method.
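The payment-recording flow the abstract describes (replace the handwritten ledger with a database insert, replace the Excel recap with a query) can be sketched minimally as follows. This uses Python's `sqlite3` in place of the paper's PHP + MySQL stack, and the table and column names are assumptions for illustration only.

```python
# Minimal sketch of the payment-recording flow, with sqlite3 standing in
# for the paper's PHP + MySQL implementation.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE payments (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        participant TEXT NOT NULL,
        amount INTEGER NOT NULL,
        paid_on TEXT NOT NULL
    )
""")

def record_payment(participant: str, amount: int, paid_on: str) -> None:
    """Replace the handwritten ledger with a durable database insert."""
    conn.execute(
        "INSERT INTO payments (participant, amount, paid_on) VALUES (?, ?, ?)",
        (participant, amount, paid_on),
    )
    conn.commit()

def monthly_total(month: str) -> int:
    """Transparent monthly recap for trainers and parents (the Excel step)."""
    row = conn.execute(
        "SELECT COALESCE(SUM(amount), 0) FROM payments WHERE paid_on LIKE ?",
        (month + "%",),
    ).fetchone()
    return row[0]

record_payment("Andi", 50000, "2021-08-05")
record_payment("Budi", 50000, "2021-08-12")
print(monthly_total("2021-08"))  # 100000
```

Because every payment lands in one queryable table, the same data can back separate views for trainers, participants, and parents, which is the transparency gain the abstract is after.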


2021 · Vol 1 (1) · pp. 1-32
Author(s): Sage Cammers-Goodwin, Naomi Van Stralen

“Transparency” is continually set as a core value for cities as they digitalize. Global initiatives and regulations claim that transparency will be key to making smart cities ethical. Unfortunately, how exactly to achieve a transparent city is quite opaque. Current regulations often only mandate that information be made accessible in the case of personal data collection. While such standards might encourage anonymization techniques, they do not require that publicly collected data be made publicly visible or treated as a matter of public concern. This paper covers three main needs for data transparency in public space. The first, why data visibility is important, sets the stage for why transparency cannot be based solely on personal as opposed to anonymous data collection, as well as what counts as making data transparent. The second, how to make data visible onsite, addresses how to create public space that communicates its sensing capabilities without overwhelming the public. The final section, what regulations are necessary for data visibility, argues that for a transparent public space, government needs to step in to regulate contextual open data sharing, data registries, signage, and data literacy education.

