Deconstructing datafication’s brave new world

2018 ◽  
Vol 20 (12) ◽  
pp. 4473-4491 ◽  
Author(s):  
Nick Couldry ◽  
Jun Yu

As the World Economic Forum’s definition of personal data as ‘the new “oil” – a valuable resource of the 21st century’ shows, large-scale data processing is increasingly considered the defining feature of the contemporary economy and society. Commercial and governmental discourse on data frequently asserts its benefits, and so legitimates its continuous, large-scale extraction and processing as the starting point for developments in specific industries, and potentially as the basis for societies as a whole. Against the background of the General Data Protection Regulation, this article unravels how general discourse on data covers over the social practices that enable its collection, through an analysis of high-profile business reports and case studies of the health and education sectors. We show how the conceptualisation of data as having a natural basis in the everyday world shields data collection from ethical questioning while endorsing the use and free flow of data under corporate control, at the expense of the potentially negative impacts on personal autonomy and human freedom.

2021 ◽  
pp. 1-9
Author(s):  
Andrew Cormack

Europe’s General Data Protection Regulation (GDPR) has a fearsome reputation as “the law that can fine you €20 million.” But behind that scary slogan lies a text that can be a very helpful guide to designing data processing systems. This paper explores that side of the GDPR: how understanding it can produce more effective - and more trustworthy - systems. Three popular myths often lead designers down the wrong track: that the GDPR is about stopping processing, that it is about users, and that it is about consent. Instead, we consider, from a design perspective, the GDPR’s source material, its Principles, and its Lawful Bases for processing. Three examples - from the field of education, but widely applicable - show how “thinking with GDPR” has improved both the effectiveness and safety of large-scale data processing systems.
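
To make the design-oriented reading concrete, here is a minimal Python sketch of how a system might bind each processing activity to a declared purpose and one of the six lawful bases of GDPR Article 6(1) before any data flows. The class, field, and function names are illustrative assumptions, not constructs from the paper.

```python
from dataclasses import dataclass
from enum import Enum

class LawfulBasis(Enum):
    """The six lawful bases of GDPR Article 6(1)."""
    CONSENT = "consent"
    CONTRACT = "contract"
    LEGAL_OBLIGATION = "legal_obligation"
    VITAL_INTERESTS = "vital_interests"
    PUBLIC_TASK = "public_task"
    LEGITIMATE_INTERESTS = "legitimate_interests"

@dataclass(frozen=True)
class ProcessingActivity:
    """A processing activity declared at design time (names are hypothetical)."""
    purpose: str
    lawful_basis: LawfulBasis
    data_categories: tuple  # e.g. ("vle_clickstream", "assessment_scores")

def authorise(activity: ProcessingActivity, requested_purpose: str) -> bool:
    """Purpose limitation: data may be processed only for the declared purpose."""
    return activity.purpose == requested_purpose

# A hypothetical education example in the spirit of the paper's case studies.
learning_analytics = ProcessingActivity(
    purpose="improve course design",
    lawful_basis=LawfulBasis.LEGITIMATE_INTERESTS,
    data_categories=("vle_clickstream", "assessment_scores"),
)

assert authorise(learning_analytics, "improve course design")
assert not authorise(learning_analytics, "marketing")
```

Making the lawful basis an explicit, immutable part of the design record is one way a system can surface, rather than hide, the choices the paper argues designers should be thinking with.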


2008 ◽  
Vol 25 (5) ◽  
pp. 287-300 ◽  
Author(s):  
B. Martin ◽  
A. Al‐Shabibi ◽  
S.M. Batraneanu ◽  
Ciobotaru ◽  
G.L. Darlea ◽  
...  

Author(s):  
Masato Matsumoto ◽  
Kyle Ruske

Condition ratings of bridge components in the Federal Highway Administration (FHWA)’s Structural Inventory and Appraisal database are determined by bridge inspectors in the field, often by visual confirmation or direct-contact sounding techniques. However, the determination of bridge condition ratings is generally subjective, depending on individual inspectors’ knowledge and experience as well as varying field conditions. There are also access limitations, unsafe working conditions, and the negative impacts of lane closures to account for. This paper describes an alternative method for obtaining informative and diagnostic inspection data for concrete bridge decks: mobile nondestructive bridge deck evaluation technology. The technology uses high-definition infrared and visual imaging to monitor bridge conditions over long-term (or desired) intervals. This combination of instruments benefits from rapid and large-scale data acquisition capabilities. Through its implementation in Japan over the course of two decades, the technology is opening new possibilities in a field with much untapped potential. Findings and lessons learned from our experience in the states of Virginia and Pennsylvania are described as examples of highway-speed mobile nondestructive evaluation in action. To validate the accuracy of delamination detection by the visual and infrared scanning, findings were proofed by physical sounding of the target deck structures.
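
As a rough illustration of the infrared principle involved, the following Python sketch flags thermal anomalies in a deck image: delaminated areas trap air and heat differently from sound concrete, so they show up as hot or cold spots. The threshold value and array shapes are assumptions for demonstration, not the calibrated parameters of the deployed technology.

```python
import numpy as np

def flag_delamination_candidates(thermal: np.ndarray,
                                 contrast_threshold: float = 1.5) -> np.ndarray:
    """Flag pixels whose temperature deviates from the deck-wide mean.

    `thermal` is a 2-D array of surface temperatures in degrees Celsius.
    `contrast_threshold` is an illustrative value, not a figure from the paper.
    Returns a boolean mask of candidate delamination pixels.
    """
    baseline = thermal.mean()
    return np.abs(thermal - baseline) > contrast_threshold

# Example: a synthetic 4x4 deck patch with one warm anomaly.
patch = np.full((4, 4), 20.0)
patch[1, 2] = 23.0
mask = flag_delamination_candidates(patch)
print(mask.sum(), "candidate pixel(s) flagged")  # 1
```

In practice, candidates flagged this way would still be proofed against physical sounding, as the paper describes for the Virginia and Pennsylvania decks.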


2014 ◽  
Vol 26 (6) ◽  
pp. 1316-1331 ◽  
Author(s):  
Gang Chen ◽  
Tianlei Hu ◽  
Dawei Jiang ◽  
Peng Lu ◽  
Kian-Lee Tan ◽  
...  

2018 ◽  
Vol 7 (2.31) ◽  
pp. 240
Author(s):  
S Sujeetha ◽  
Veneesa Ja ◽  
K Vinitha ◽  
R Suvedha

In the existing scenario, a patient has to go to the hospital to take necessary tests, consult a doctor, and buy prescribed medicines, or else use specified healthcare applications. Time is therefore wasted at hospitals and in medical shops, and in the case of healthcare applications, face-to-face interaction with the doctor is not available. These downsides can be addressed by Medimate: an ailment diffusion control system with real-time large-scale data processing. The purpose of Medimate is to establish a teleconference medical system that can be used in remote areas. Medimate is configured for better diagnosis and medical treatment for rural people. The system is installed with a heartbeat sensor, temperature sensor, ultrasonic sensor, and load cell to monitor the patient’s health parameters. Voice instructions are provided for easier access. An application enabling video and voice communication with the doctor through a camera and headphones is installed at both ends. The doctor examines the patient and prescribes the medicines. The medical dispenser delivers medicine to the patient as per the prescription. A QR code is generated for each prescription by Medimate, and that QR code can be used for repeated medical conditions in the future. Medical details are updated on the server periodically.
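
As an illustration of the per-prescription QR-code step, the sketch below uses the widely available Python `qrcode` package. The payload layout is an assumption for demonstration, not Medimate’s actual format; a real deployment would sign or encrypt the payload rather than embed plain text.

```python
import json
import qrcode  # pip install qrcode[pil]

def make_prescription_qr(patient_id: str, items: list, out_path: str) -> None:
    """Encode a prescription as JSON and render it as a QR code image.

    The field names here are hypothetical; a deployed system would also
    authenticate the payload before a dispenser honours it.
    """
    payload = json.dumps({"patient": patient_id, "items": items})
    img = qrcode.make(payload)
    img.save(out_path)

make_prescription_qr(
    "patient-0042",
    [{"drug": "paracetamol", "dose_mg": 500, "times_per_day": 3}],
    "prescription.png",
)
```

Reusing the code for a recurring condition then amounts to scanning the stored image and replaying the decoded prescription to the dispenser.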


2021 ◽  
Vol 15 ◽  
Author(s):  
Jianwei Zhang ◽  
Xubin Zhang ◽  
Lei Lv ◽  
Yining Di ◽  
Wei Chen

Background: Learning discriminative representations from large-scale data sets has seen breakthroughs over recent decades. However, it remains a thorny problem to generate representative embeddings from limited examples, for example, from a class containing only one image. Recently, deep learning-based Few-Shot Learning (FSL) has been proposed, which tackles this problem by leveraging prior knowledge in various ways. Objective: In this work, we review recent advances in FSL from the perspective of high-dimensional representation learning. The results of the analysis can provide insights and directions for future work. Methods: We first present the definition of general FSL. Then we propose a general framework for the FSL problem and give a taxonomy under that framework. We survey two FSL directions: learning policy and meta-learning. Results: We review advanced applications of FSL, including image classification, object detection, image segmentation, and other tasks, as well as the corresponding benchmarks, to provide an overview of recent progress. Conclusion: FSL needs to be further studied in medical imaging, language models, and reinforcement learning in future work. In addition, cross-domain FSL, successive FSL, and associated FSL are more challenging and valuable research directions.
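
One common meta-learning technique of the kind such surveys cover is nearest-prototype classification over episodes, as in prototypical networks. The NumPy sketch below (the function name and the toy episode are illustrative, not from the review) classifies queries in an N-way K-shot episode by distance to per-class mean embeddings.

```python
import numpy as np

def prototype_classify(support: np.ndarray, support_labels: np.ndarray,
                       query: np.ndarray) -> np.ndarray:
    """Nearest-prototype classification for an N-way K-shot episode.

    support: (N*K, D) embeddings; support_labels: (N*K,) class ids;
    query: (Q, D) embeddings. Returns the predicted class id per query.
    """
    classes = np.unique(support_labels)
    # One prototype per class: the mean of its K support embeddings.
    prototypes = np.stack([support[support_labels == c].mean(axis=0)
                           for c in classes])
    # Euclidean distance from every query to every prototype.
    dists = np.linalg.norm(query[:, None, :] - prototypes[None, :, :], axis=-1)
    return classes[dists.argmin(axis=1)]

# A toy 2-way 2-shot episode in a 2-D embedding space.
support = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labels = np.array([0, 0, 1, 1])
query = np.array([[0.05, 0.1], [4.9, 5.2]])
print(prototype_classify(support, labels, query))  # [0 1]
```

The prior knowledge here lives in the embedding function that produced the support and query vectors; the episode itself needs only K examples per class.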


Author(s):  
Amir Basirat ◽  
Asad I. Khan ◽  
Heinz W. Schmidt

One of the main challenges for large-scale computer clouds dealing with massive real-time data is coping with the rate at which unprocessed data accumulates. Transforming big data into valuable information requires a fundamental re-think of the way in which future data management models will need to be developed on the Internet. Unlike existing relational schemes, pattern-matching approaches can analyze data in ways similar to how our brain links information. Such interactions, when implemented in voluminous data clouds, can assist in finding overarching relations in complex and highly distributed data sets. In this chapter, a different perspective on data recognition is considered. Rather than looking at conventional approaches, such as statistical computations and deterministic learning schemes, this chapter focuses on a distributed processing approach for scalable data recognition and processing.
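
The toy Python sketch below illustrates the general idea of distributed, associative pattern matching: pattern elements are sharded across nodes, and recall aggregates partial matches by voting. It is a hypothetical simplification for illustration, not the chapter’s specific recognition scheme.

```python
from collections import defaultdict

class PatternNode:
    """One node of a toy distributed associative memory."""
    def __init__(self):
        self.store = defaultdict(set)  # (position, value) -> pattern ids

    def learn(self, pattern_id, position, value):
        self.store[(position, value)].add(pattern_id)

    def recall(self, position, value):
        return self.store.get((position, value), set())

class DistributedMemory:
    """Shards each pattern element-wise across nodes; recalls by voting."""
    def __init__(self, n_nodes=4):
        self.nodes = [PatternNode() for _ in range(n_nodes)]

    def _node_for(self, position):
        return self.nodes[position % len(self.nodes)]

    def learn(self, pattern_id, pattern):
        for pos, value in enumerate(pattern):
            self._node_for(pos).learn(pattern_id, pos, value)

    def recall(self, pattern):
        votes = defaultdict(int)
        for pos, value in enumerate(pattern):
            for pid in self._node_for(pos).recall(pos, value):
                votes[pid] += 1
        return max(votes, key=votes.get) if votes else None

mem = DistributedMemory()
mem.learn("cat", "0110")
mem.learn("dog", "1001")
print(mem.recall("0111"))  # "cat": the closest stored pattern by element votes
```

Because each node answers only for its own shard, learning and recall both scale out with the number of nodes rather than with a central index.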


Author(s):  
Manjunath Thimmasandra Narayanapppa ◽  
T. P. Puneeth Kumar ◽  
Ravindra S. Hegadi

Recent technological advancements have led to the generation of huge volumes of data from distinct domains (scientific sensors, health care, user-generated content, financial companies, and internet and supply-chain systems) over the past decade. The term big data was coined to capture the meaning of this emerging trend. In addition to its huge volume, big data also exhibits several unique characteristics compared with traditional data. For instance, big data is generally unstructured and requires more real-time analysis. This development calls for new system platforms for data acquisition, storage, and transmission, and for large-scale data processing mechanisms. In recent years, the analytics industry’s interest has expanded towards big data analytics to uncover the potential concealed in big data, such as hidden patterns or unknown correlations. The main goal of this chapter is to explore the importance of machine learning algorithms and of the computational environment, including the hardware and software required to perform analytics on big data.
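
As a small example of the kind of large-scale mechanism the chapter calls for, the Python sketch below computes a Pearson correlation over data that arrives in chunks, accumulating sufficient statistics so that no full data set ever has to sit in memory. The data here is synthetic and the chunk sizes are arbitrary assumptions.

```python
import numpy as np

def streaming_correlation(chunks) -> float:
    """Pearson correlation of two variables computed chunk by chunk.

    Each chunk is an (n, 2) array of paired observations. Only running
    sums are kept, so arbitrarily large inputs can be processed.
    """
    n = sx = sy = sxx = syy = sxy = 0.0
    for chunk in chunks:
        x, y = chunk[:, 0], chunk[:, 1]
        n += len(chunk)
        sx += x.sum(); sy += y.sum()
        sxx += (x * x).sum(); syy += (y * y).sum()
        sxy += (x * y).sum()
    cov = sxy - sx * sy / n
    return cov / np.sqrt((sxx - sx ** 2 / n) * (syy - sy ** 2 / n))

# Simulate a large data set arriving in 100 chunks of 10,000 rows.
rng = np.random.default_rng(0)
def chunks():
    for _ in range(100):
        x = rng.normal(size=10_000)
        y = 0.8 * x + rng.normal(scale=0.6, size=10_000)
        yield np.column_stack([x, y])

print(round(streaming_correlation(chunks()), 3))  # close to 0.8
```

The same accumulate-then-combine pattern is what map-reduce style platforms apply across machines instead of across chunks.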

