Data & Civil Rights: Housing Primer

2017 ◽  
Author(s):  
Alex Rosenblat ◽  
Kate Wikelius ◽  
danah boyd ◽  
Seeta Peña Gangadharan ◽  
Corrine Yu

Data has always played an important role in housing policies, practices, and financing. Housing advocates worry that new sources of data are being used to extend longstanding discriminatory practices, particularly in shaping who has access to credit for home ownership and how the rental market is unfolding. Open data practices, while potentially shedding light on housing inequities, are currently more theoretical than actionable. Far too little is known about the ways in which data analytics and other data-related practices may expand or relieve inequities in housing.


2018 ◽  
Vol 119 (1/2) ◽  
pp. 121-134 ◽  
Author(s):  
Christine Urquhart

Purpose: This paper aims to examine the principles that underpin library assessment, the methods used for impact and performance evaluation, how academic libraries should use the findings, and how value frameworks help.
Design/methodology/approach: This is a literature review covering aspects of value (value propositions, value co-creation), value frameworks (including the 2015 ACRL framework and the Holbrook typology, with a worked example), data analytics, and collaborative projects including LibQUAL+ initiatives and the use of balanced scorecard principles (including a values scorecard).
Findings: The use of data analytics in library assessment requires collaboration among library services to develop reliable data sets. Scorecards help ongoing impact and performance evaluation. Queries that arise may require a framework, or logic model, to formulate suitable questions and assemble qualitative and quantitative evidence to answer new questions about the value of library services. The perceived value framework of Holbrook's typology, the values scorecard and the ACRL framework all support the deeper level of inquiry required.
Research limitations/implications: The paper includes examples of how the frameworks might be applied.
Practical implications: A value framework might help data analytic approaches combine qualitative and quantitative data.
Social implications: Impact assessment may require assessing how value is co-created with library users in the use of e-resources and open data.
Originality/value: The study contrasts the varying approaches to impact evaluation and library assessment in academic libraries and examines value frameworks in greater depth.


2019 ◽  
pp. 253-262
Author(s):  
Keeanga-Yamahtta Taylor

Homeownership in the U.S. is often touted as a means to escape poverty, build wealth, and fully participate in American society. However, racism in the broader American society ultimately resulted in a racist housing market that excludes Black people from homeownership and depresses the value of property inhabited by African Americans. The perception that Black buyers are risky has continued to fuel predatory practices in real estate. The author notes that African Americans should not be limited to the rental market because of inequality in the housing market. Instead, she suggests people should question American society, a society in which full citizenship is reliant upon home ownership.


2020 ◽  
Vol 14 (4) ◽  
pp. 623-637
Author(s):  
Anne L. Washington

Purpose: Open data resources contain few signals for assessing their suitability for data analytics. The purpose of this paper is to characterize the uncertainty experienced by open data consumers with a framework based on economic theory.
Design/methodology/approach: Drawing on information asymmetry theory about market exchanges, this paper investigates the practical challenges faced by data consumers seeking to reuse open data. An inductive qualitative analysis of over 2,900 questions asked between 2013 and 2018 on an internet forum identified how a community of 15,000 open data consumers expressed uncertainty about data sources.
Findings: Open data consumers asked direct questions that expressed uncertainty about the availability, interoperability and interpretation of data resources. Questions focused on future value, and some requests were devoted to seeking data that matched known sources. The study proposes a data signal framework that explains uncertainty about open data within the context of control and visibility.
Originality/value: The proposed framework bridges digital government practice to information signaling theory. The empirical evidence substantiates market aspects of open data portals. The paper provides a needed case study of how data consumers experience uncertainty and integrates established theories about risk to improve the reuse of open data.
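
The method here is an inductive qualitative coding of roughly 2,900 forum questions, with the availability, interoperability and interpretation categories emerging from that coding. As a purely illustrative sketch (not the author's instrument), the Python snippet below shows how questions could be tallied against keyword lists for those three categories; the sample questions and keyword lists are invented placeholders.

    # Illustrative only: the paper used hand coding, not keyword matching.
    # The three categories come from its Findings; keywords and questions are made up.
    from collections import Counter

    CATEGORY_KEYWORDS = {
        "availability": ["where can i find", "is there data", "does anyone have"],
        "interoperability": ["format", "join", "merge", "schema", "api"],
        "interpretation": ["what does", "mean", "defined", "documentation"],
    }

    def tag_question(text: str) -> list[str]:
        """Return every uncertainty category whose keywords appear in the question."""
        lowered = text.lower()
        return [cat for cat, words in CATEGORY_KEYWORDS.items()
                if any(w in lowered for w in words)]

    questions = [
        "Where can I find parcel-level assessment data for my county?",
        "What does the 'status' column mean in this release?",
        "Is there an API to join these files with census tracts?",
    ]

    counts = Counter(cat for q in questions for cat in tag_question(q))
    print(counts)  # Counter({'availability': 1, 'interpretation': 1, 'interoperability': 1})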


2020 ◽  
pp. 016555152091851 ◽  
Author(s):  
A Y M Atiquil Islam ◽  
Khurshid Ahmad ◽  
Muhammad Rafi ◽  
Zheng JianMing

The concept of big data has been extensively considered as a technological modernisation in organisations and educational institutes. The purpose of this study is therefore to determine whether the modified technology acceptance model (MTAM) is viable for evaluating the performance of librarians in the use of big data analytics in academic libraries. The study used an empirical research method, collecting data from 211 librarians working in Pakistan's universities. On the basis of the MTAM analysis by structural equation modelling, the performance of the academic libraries was understood through their use of big data. The main influential components of the performance analysis were big data analytics capabilities, perceived ease of access and the usefulness of big data practices in academic libraries. In turn, the utilisation of big data was significantly affected by skills, perceived ease of access and usefulness. The results also suggest that the various components of academic libraries lead to effective organisational performance when linked to big data analytics.
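
The abstract describes fitting a modified technology acceptance model (MTAM) to survey responses from 211 librarians using structural equation modelling. Below is a minimal sketch of how such a model could be specified in Python with the semopy package; the latent constructs, indicator column names and structural paths are assumptions reconstructed from the abstract, not the authors' published model syntax.

    # Hypothetical MTAM-style SEM sketch; construct and column names are placeholders.
    import pandas as pd
    from semopy import Model, calc_stats

    # Measurement model (latent =~ indicators) and structural model (regressions),
    # in lavaan-like syntax. PEOU stands for perceived ease of access.
    MODEL_DESC = """
    PEOU =~ peou1 + peou2 + peou3
    Usefulness =~ use1 + use2 + use3
    Skills =~ skill1 + skill2 + skill3
    BigDataUse =~ bd1 + bd2 + bd3
    Performance =~ perf1 + perf2 + perf3
    BigDataUse ~ Skills + PEOU + Usefulness
    Performance ~ BigDataUse + PEOU + Usefulness
    """

    survey = pd.read_csv("librarian_survey.csv")  # hypothetical 211-row survey file

    model = Model(MODEL_DESC)
    model.fit(survey)
    print(model.inspect())    # path estimates, standard errors, p-values
    print(calc_stats(model))  # fit indices such as CFI and RMSEA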


2019 ◽  
Vol 37 (1) ◽  
pp. 30-42 ◽  
Author(s):  
Miguel-Angel Sicilia ◽  
Anna Visvizi

Purpose: The purpose of this paper is to employ the case of Organization for Economic Cooperation and Development (OECD) data repositories to examine the potential of blockchain technology in the context of addressing basic contemporary societal concerns, such as transparency, accountability and trust in the policymaking process. Current approaches to sharing data employ standardized metadata, in which the provider of the service is assumed to be a trusted party. However, derived data, analytic processes or links from policies are in many cases not shared in the same form, thus breaking the provenance trace and making the repetition of analysis conducted in the past difficult. Similarly, it becomes tricky to test whether certain conditions justifying implemented policies still apply. A higher level of reuse would require a decentralized approach to sharing both data and analytic scripts and software. This could be supported by a combination of blockchain and decentralized file system technology.
Design/methodology/approach: The findings presented in this paper have been derived from the analysis of a case study, i.e. analytics using data made available by the OECD. The set of data the OECD provides is vast and is used broadly. The argument is structured as follows. First, current issues and topics shaping the debate on blockchain are outlined. Then, the main artifacts on which simple or complex analytic results are based are redefined for some concrete purposes. The requirements on provenance, trust and repeatability are discussed with regard to the proposed architecture, and a proof of concept using smart contracts is used for reasoning about relevant scenarios.
Findings: A combination of decentralized file systems and an open blockchain such as Ethereum supporting smart contracts can ascertain that the set of artifacts used for the analytics is shared. This enables the sequence underlying the successive stages of research and/or policymaking to be preserved. This suggests that, in turn and ex post, it becomes possible to test whether the evidence supporting certain findings and/or policy decisions still holds. Moreover, unlike traditional databases, blockchain technology makes it possible to store immutable records, which means the artifacts can be used for further exploitation or repetition of results. In practical terms, the use of blockchain technology creates the opportunity to enhance the evidence-based approach to policy design and policy recommendations that the OECD fosters. That is, it might enable stakeholders not only to use the data available in the OECD repositories but also to assess corrections to a given policy strategy or modify its scope.
Research limitations/implications: Blockchains and related technologies are still maturing, and several questions related to their use and potential remain underexplored. Several issues require particular consideration in future research, including anonymity, scalability and stability of the data repository. This research took OECD data repositories as its example precisely to make the point that more research, and more dialogue between the research and policymaking communities, is needed to embrace the challenges and opportunities blockchain technology generates. Several questions that this research prompts have not been addressed; for instance, how the sharing economy concept could be employed for the specifics of the case in the context of blockchain has not been dealt with.
Practical implications: The practical implications of the research presented here can be summarized in two ways. On the one hand, by suggesting how a combination of decentralized file systems and an open blockchain such as Ethereum supporting smart contracts can ascertain that artifacts are shared, this paper paves the way toward a discussion on how to make this approach and solution a reality. The approach and architecture proposed here would provide a way to increase the scope of the reuse of statistical data and results, and thus would improve the effectiveness of decision making as well as the transparency of the evidence supporting policy.
Social implications: Decentralizing analytic artifacts would add to existing open data practices an additional layer of benefits for different actors, including but not limited to policymakers, journalists, analysts and researchers, without the need to establish centrally managed institutions. Moreover, due to the degree of decentralization and the absence of a single entry point, the vulnerability of data repositories to cyberthreats might be reduced. Simultaneously, by ensuring that artifacts derived from data held in those distributed repositories are made immutable therein, full reproducibility of conclusions concerning the data becomes possible. In data-driven policymaking, it might allow policymakers to devise more accurate ways of addressing pressing issues and challenges.
Originality/value: This paper offers the first blueprint of a form of sharing that complements open data practices with the decentralized approach of blockchain and decentralized file systems. The case of OECD data repositories is used to highlight that while data storage is important, the real added value of blockchain technology rests in the possible change in how we use the data and data sets in the repositories. It would eventually enable a more transparent and actionable approach to linking policy with the supporting evidence. From a different angle, the paper argues that rather than simply data, artifacts from conducted analyses should be made persistent in a blockchain. What is at stake is the full reproducibility of conclusions based on a given set of data, coupled with the possibility of ex post testing of the validity of the assumptions and evidence underlying those conclusions.
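
The architectural claim is that analytic artifacts (data extracts, scripts, results) should be content-addressed and their identifiers anchored immutably, so that conclusions can be re-verified ex post. The Python sketch below illustrates only that registration-and-verification step under stated assumptions: files are hashed locally, and the "ledger" is a plain in-memory list standing in for a decentralized file system plus an Ethereum smart contract, which the sketch does not implement.

    # Minimal provenance sketch: content-address each artifact and record the hashes
    # in an append-only registry. A real deployment would pin the files in a
    # decentralized file system and write the hashes to a smart contract; here the
    # "ledger" is just a local Python list.
    import hashlib
    import time
    from pathlib import Path

    LEDGER: list[dict] = []  # stand-in for an immutable on-chain registry

    def content_hash(path: Path) -> str:
        """SHA-256 of the file contents, the identifier that would be anchored on-chain."""
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def register_analysis(data_file: Path, script_file: Path, result_file: Path) -> dict:
        """Bundle the hashes of data, code and result into one provenance record."""
        record = {
            "timestamp": time.time(),
            "data": content_hash(data_file),
            "script": content_hash(script_file),
            "result": content_hash(result_file),
        }
        LEDGER.append(record)
        return record

    def verify(record: dict, data_file: Path, script_file: Path, result_file: Path) -> bool:
        """Ex post check: do today's artifacts still match what was registered?"""
        return (record["data"] == content_hash(data_file)
                and record["script"] == content_hash(script_file)
                and record["result"] == content_hash(result_file))

    # Hypothetical usage with placeholder file names:
    # rec = register_analysis(Path("oecd_extract.csv"), Path("analysis.py"), Path("results.json"))
    # print(verify(rec, Path("oecd_extract.csv"), Path("analysis.py"), Path("results.json")))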


2017 ◽  
Vol 49 (9) ◽  
pp. 2046-2064 ◽  
Author(s):  
N Henry ◽  
J Pollard ◽  
P Sissons ◽  
J Ferreira ◽  
M Coombes

In 2013, the UK Government announced that seven of the nation’s largest banks had agreed to publish their lending data at the local level across Great Britain. The release of such area-based lending data has been welcomed by advocacy groups and policy makers keen to better understand and remedy geographies of financial exclusion. This paper makes three contributions to debates about financial exclusion. First, it provides the first exploratory spatial analysis of the personal lending data made available; it scrutinises the parameters and robustness of the dataset and evaluates the extent to which the data increase transparency in UK personal lending markets. Second, it uses the data to provide a geographical overview of patterns of personal lending across Great Britain. Third, it uses this analysis to revisit the analytical and political limitations of ‘open data’ in addressing the relationship between access to finance and economic marginalisation. Although a binary policy imaginary of ‘inclusion-exclusion’ has historically driven advocacy for data disclosure, recent literatures on financial exclusion point to the need for more complex and variegated understandings of economic marginalisation. The paper questions the relationship between transparency and data disclosure, the policy push for financial inclusion, and patterns of indebtedness and economic marginalisation in a world where ‘fringe finance’ has become mainstream. Drawing on these literatures, the analysis suggests that data disclosure, and the transparency it affords, is a necessary but not sufficient tool for understanding the distributional implications of variegated access to credit.
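
The dataset behind this analysis is aggregate personal lending published by local area. As a hedged illustration of one elementary step such an exploratory spatial analysis might take, the pandas sketch below joins lending totals to area population and ranks areas by lending per head; the file names and column names are assumptions, not the schema of the published releases.

    # Illustrative only: file and column names are placeholders, not the real schema.
    import pandas as pd

    lending = pd.read_csv("personal_lending_by_postcode_sector.csv")   # hypothetical
    population = pd.read_csv("postcode_sector_population.csv")         # hypothetical

    # Join lending totals to resident population and compute lending per head.
    merged = lending.merge(population, on="postcode_sector", how="inner")
    merged["lending_per_capita"] = merged["personal_loans_gbp"] / merged["population"]

    # Rank areas to flag those with the least reported personal lending, one crude
    # starting point for identifying candidate geographies of financial exclusion.
    lowest = merged.nsmallest(20, "lending_per_capita")
    print(lowest[["postcode_sector", "personal_loans_gbp", "population",
                  "lending_per_capita"]])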


Author(s):  
Daniela Espinoza-Molina ◽  
Charalampos Nikolaou ◽  
Corneliu Octavian Dumitru ◽  
Konstantina Bereta ◽  
Manolis Koubarakis ◽  
...  

2020 ◽  
Author(s):  
Denis Cousineau

Born-Open Data experiments are encouraged for better open science practices. To be adopted, Born-Open Data practices must be easy to implement. Herein, I introduce a package for E-Prime such that the data files are automatically saved to a GitHub repository. The BornOpenData package for E-Prime works seamlessly and performs the upload as soon as the experiment is finished, so that there are no additional steps to perform beyond placing a package call within E-Prime. Because E-Prime files are not standard tab-separated files, I also provide an R function that retrieves the data directly from GitHub into a data frame ready to be analyzed. At this time, there are no standards as to what should constitute an adequate open-access data repository, so I propose a few suggestions that any future Born-Open Data system could follow for easier use by the research community.
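
The retrieval function described above is written in R and tied to the BornOpenData package. As a rough Python analogue of the same step (reading an experiment's data file straight from a raw GitHub URL into a data frame), the sketch below assumes a placeholder repository URL and treats the file as UTF-16, tab-delimited text, a common format for E-Prime exports; it is not the author's function.

    # Rough analogue only; the URL is a placeholder and the UTF-16/tab settings are an
    # assumption about typical E-Prime text exports, not a documented specification.
    import pandas as pd

    RAW_URL = ("https://raw.githubusercontent.com/"
               "some-lab/some-experiment/main/data/subject_001.txt")  # hypothetical

    def fetch_eprime_table(url: str) -> pd.DataFrame:
        """Read a tab-delimited, UTF-16 encoded E-Prime export into a DataFrame."""
        return pd.read_csv(url, sep="\t", encoding="utf-16")

    # df = fetch_eprime_table(RAW_URL)
    # print(df.head())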

