New development: Leveraging ‘big data’ analytics in the public sector

2016 ◽  
Vol 36 (5) ◽  
pp. 385-390 ◽  
Author(s):  
Pandula Gamage


2018 ◽  
Vol 331 ◽  
pp. 301-311
Author(s):  
Gergely László Szőke

Big Data is clearly one of the most used buzzwords of recent years, but the phenomenon genuinely seems set to have a huge effect on many different fields, and it may be regarded as a new wave of the information revolution that began in the 1960s. The potential of exploiting Big Data promises significant benefits (and also new challenges) in both the private and the public sector – this essay focuses on the latter. After a short introduction to Big Data, the paper first sums up the potential uses of Big Data analytics in the public sector. I will then focus on a specific issue within this scope, namely how the use of Big Data and algorithm-based decision-making may affect transparency and access to these data. In particular, I will examine why the transparency of the algorithms is raised as a question at all, and what the current legal framework for potential access to them is.


2020 ◽  
pp. 217-230
Author(s):  
Philip Garnett ◽  
Sarah M. Hughes

In this chapter, Garnett and Hughes focus on the role of big data in accessing information from public inquiries. Looking at the Chelsea Manning court martial in the US and the Leveson Inquiry in the UK, they argue that the manner in which information pertaining to inquiries is made public is, at best, unsatisfactory. They propose a variety of ways in which big data techniques could make this information more accessible, and hence more transparent, to the public.
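A minimal sketch of the kind of technique such proposals might draw on (not the authors' implementation): building a simple full-text inverted index over published inquiry documents, so the public can search the material rather than browse individual files. The document names and tokenization below are illustrative assumptions.

```python
# Hypothetical sketch: a tiny inverted index over inquiry documents.
# Not the authors' method; file names and tokenisation are assumptions.
from collections import defaultdict
import re


def build_index(documents: dict[str, str]) -> dict[str, set[str]]:
    """Map each term to the set of document ids that contain it."""
    index: dict[str, set[str]] = defaultdict(set)
    for doc_id, text in documents.items():
        for term in re.findall(r"[a-z0-9]+", text.lower()):
            index[term].add(doc_id)
    return index


def search(index: dict[str, set[str]], query: str) -> set[str]:
    """Return documents containing every query term (AND semantics)."""
    terms = re.findall(r"[a-z0-9]+", query.lower())
    if not terms:
        return set()
    results = index.get(terms[0], set()).copy()
    for term in terms[1:]:
        results &= index.get(term, set())
    return results


if __name__ == "__main__":
    docs = {
        "leveson_day_12.txt": "Evidence on press regulation and transparency.",
        "manning_hearing_3.txt": "Court martial transcript, access to records.",
    }
    idx = build_index(docs)
    print(search(idx, "transparency"))
```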


EDPACS ◽  
2021 ◽  
pp. 1-20
Author(s):  
Zam Zarina Abdul Jabar ◽  
Muslihah Wook ◽  
Omar Zakaria ◽  
Suzaimah Ramli ◽  
Noor Afiza Mat Razali

Stroke ◽  
2015 ◽  
Vol 46 (suppl_1) ◽  
Author(s):  
Sara J Kavanagh ◽  
Benjamin Bray ◽  
Lizz Paley ◽  
James T Campbell ◽  
Emma Vestesson ◽  
...  

Introduction: The Sentinel Stroke National Audit Programme (SSNAP) is the new national stroke register of England and Wales. It has been designed to harness the power of “Big Data” to produce near real-time data collection, analysis and reporting. Sophisticated data visualization is used to provide customized analytics for clinical teams, administrators, healthcare funders, and stroke survivors and carers. Methods: A portfolio of cutting-edge data visualization outputs, including team-level slidedecks, performance charts, dashboards, and interactive maps, was produced. Visualizations for patients and the public were co-designed with stroke survivors. Stakeholder feedback regarding the accessibility and usefulness of the resources was sought via online polls. Results: Key SSNAP results are made accessible electronically every three months in a range of bespoke graphical formats. Individualized slidedecks and data summaries are produced for every hospital, funding group, and region to enable provider-level performance and quality reporting as well as regional and national benchmarking. Dynamic maps enhance dissemination and use of results. Real-time root cause analysis tools help teams identify areas for improvement. Feedback indicates that these resources offer unprecedented utility for clinical teams, funders, regional and national health bodies, patients, and the public in identifying areas of good practice and areas requiring improvement, highlighting variations, and driving change. Conclusion: SSNAP is a potential new model of healthcare quality measurement that uses recent developments in big data analytics and visualization to provide information on stroke care quality that is more useful to stakeholders. Similar approaches could be used in other healthcare settings and populations.
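As a hedged illustration only (not SSNAP's actual code or data model): the kind of team-level quarterly benchmarking the abstract describes could be produced from patient-level audit records along the following lines. The column names, the door-to-needle metric, and the quarterly grouping are assumptions made for this sketch.

```python
# Hypothetical sketch of per-hospital quarterly benchmarking against a
# national median, in the spirit of the team-level reporting described above.
import pandas as pd


def quarterly_benchmark(records: pd.DataFrame) -> pd.DataFrame:
    """Summarise median door-to-needle time per hospital per quarter and
    compare it with the national median for the same quarter."""
    records = records.copy()
    records["quarter"] = records["admission_date"].dt.to_period("Q")

    per_team = (
        records.groupby(["quarter", "hospital"])["door_to_needle_mins"]
        .median()
        .rename("team_median_mins")
        .reset_index()
    )
    national = (
        records.groupby("quarter")["door_to_needle_mins"]
        .median()
        .rename("national_median_mins")
        .reset_index()
    )
    summary = per_team.merge(national, on="quarter")
    summary["mins_vs_national"] = (
        summary["team_median_mins"] - summary["national_median_mins"]
    )
    return summary


if __name__ == "__main__":
    # Synthetic demo data, not audit results.
    demo = pd.DataFrame({
        "hospital": ["A", "A", "B", "B"],
        "admission_date": pd.to_datetime(
            ["2015-01-10", "2015-02-03", "2015-01-20", "2015-03-15"]
        ),
        "door_to_needle_mins": [45, 55, 60, 40],
    })
    print(quarterly_benchmark(demo))
```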


2017 ◽  
Author(s):  
Robert Brauneis ◽  
Ellen P. Goodman

Emerging across many disciplines are questions about algorithmic ethics – about the values embedded in artificial intelligence and big data analytics that increasingly replace human decision-making. Many are concerned that an algorithmic society is too opaque to be accountable for its behavior. An individual can be denied parole or denied credit, fired or not hired, for reasons she will never know and that cannot be articulated. In the public sector, the opacity of algorithmic decision-making is particularly problematic, both because governmental decisions may be especially weighty and because democratically elected governments bear special duties of accountability. Investigative journalists have recently exposed the dangerous impenetrability of algorithmic processes used in the criminal justice field – dangerous because the predictions they make can be both erroneous and unfair, with none the wiser.

We set out to test the limits of transparency around governmental deployment of big data analytics, focusing our investigation on local and state government use of predictive algorithms. It is here, in local government, that algorithmically determined decisions can be most directly impactful. And it is here that stretched agencies are most likely to hand over the analytics to private vendors, which may make design and policy choices out of sight of the client agencies, the public, or both. To see just how impenetrable the resulting “black box” algorithms are, we filed 42 open records requests in 23 states seeking essential information about six predictive algorithm programs. We selected the most widely used and well-reviewed programs, including those developed by for-profit companies, nonprofits, and academic/private sector partnerships. The goal was to see whether, using the open records process, we could discover what policy judgments these algorithms embody and could evaluate their utility and fairness.

To do this work, we identified what meaningful “algorithmic transparency” entails. We found that in almost every case, it was not provided. Over-broad assertions of trade secrecy were a problem, but contrary to conventional wisdom, they were not the biggest obstacle, and it will not usually be necessary to release the code used to execute predictive models in order to dramatically increase transparency. We conclude that publicly deployed algorithms will be sufficiently transparent only if (1) governments generate appropriate records about their objectives for algorithmic processes and about subsequent implementation and validation; (2) government contractors reveal to the public agency sufficient information about how they developed the algorithm; and (3) public agencies and courts treat trade secrecy claims as the limited exception to public disclosure that the law requires. Although it would require a multi-stakeholder process to develop best practices for record generation and disclosure, we present what we believe are the eight principal types of information that such records should ideally contain.

