Can Workplace Tracking Ever Empower? Collective Sensemaking for the Responsible Use of Sensor Data at Work

2021 ◽  
Vol 5 (GROUP) ◽  
pp. 1-21
Author(s):  
Naja Holten Møller ◽  
Gina Neff ◽  
Jakob Grue Simonsen ◽  
Jonas Christoffer Villumsen ◽  
Pernille Bjørn

People are increasingly subject to the tracking of data about them at their workplaces. Organizations use sensor tracking to generate data on the movement and interaction of their employees in order to monitor and manage workers, yet this data also poses significant risks to individual employees, who may face harms to their job security or pay from such data and from data errors. Working with a large hospital, we developed a set of intervention strategies to enable what we call "collective sensemaking," a term describing worker contestation of sensor tracking data. We did this by participating in the sensor data science team, analyzing data from badges that employees wore over a two-week period, and then bringing the results back to the employees through a series of participatory workshops. We found three key aspects of collective sensemaking important for understanding data from the perspectives of stakeholders: 1) data shadows, for tempering possibilities for design with the realities of data tracking; 2) data transducers, for converting our assumptions about sensor tracking; and 3) data power, for eliciting worker inclusivity and participation. We argue that researchers face what Dourish (2019) called the "legitimacy trap" when designing with large datasets, and that research about work should commit to complementing data-driven studies with in-depth insights to make them useful for all stakeholders, as a corrective to the underlying power imbalance that tracked workers face.

2021 ◽  
Vol 5 (3) ◽  
pp. 1-30
Author(s):  
Gonçalo Jesus ◽  
António Casimiro ◽  
Anabela Oliveira

Sensor platforms used in environmental monitoring applications are often subject to harsh environmental conditions while monitoring complex phenomena. Designing dependable monitoring systems is therefore challenging, given the external disturbances affecting sensor measurements. Even the apparently simple task of outlier detection in sensor data becomes a hard problem, amplified by the difficulty of distinguishing true data errors due to sensor faults from deviations due to natural phenomena, which look like data errors. Existing solutions for runtime outlier detection typically assume that the physical processes can be accurately modeled, or that outliers consist of large deviations that are easily detected and filtered by appropriate thresholds. Other solutions assume that it is possible to deploy multiple sensors providing redundant data to support voting-based techniques. In this article, we propose a new methodology for dependable runtime detection of outliers in environmental monitoring systems, aiming to increase data quality by treating the detected outliers. We propose the use of machine learning techniques to model each sensor's behavior, exploiting the existence of correlated data provided by other related sensors. Using these models, along with knowledge of past processed measurements, it is possible to obtain accurate estimates of the observed environmental parameters and build failure detectors that use these estimates. When a failure is detected, the estimates also allow one to correct the erroneous measurements and hence improve the overall data quality. Our methodology not only distinguishes truly abnormal measurements from deviations due to complex natural phenomena, but also quantifies the quality of each measurement, which is relevant from a dependability perspective. We apply the methodology to real datasets from a complex aquatic monitoring system measuring temperature and salinity, through which we illustrate the process of building the machine learning prediction models using a technique based on Artificial Neural Networks, denoted ANNODE (ANN Outlier Detection). From this application, we also observe the effectiveness of our ANNODE approach for accurate outlier detection in harsh environments. We then validate these positive results by comparing ANNODE with state-of-the-art solutions for outlier detection. The results show that ANNODE outperforms existing solutions in outlier detection accuracy.
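To make the modeling idea concrete, the following is a minimal sketch of residual-based outlier detection using a neural network trained on a correlated neighboring sensor. The sensor layout, network size, training window, and 3-sigma threshold are illustrative assumptions, not the paper's actual ANNODE configuration.

```python
# Minimal sketch of ANN-based outlier detection from correlated sensors.
# Sensor names, window sizes, and the 3-sigma threshold are illustrative
# assumptions, not the paper's actual ANNODE configuration.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic correlated readings: temperature at two nearby stations.
t = np.linspace(0, 20, 2000)
neighbor = 15 + 3 * np.sin(t) + rng.normal(0, 0.1, t.size)   # reference sensor
target = 0.9 * neighbor + 1.5 + rng.normal(0, 0.1, t.size)   # monitored sensor
target[500] += 4.0                                           # injected fault

# Model the monitored sensor from its correlated neighbor.
X = neighbor.reshape(-1, 1)
model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
model.fit(X[:400], target[:400])            # train on a fault-free window

# Residual-based failure detector: flag points far from the estimate.
estimate = model.predict(X)
residual = target - estimate
threshold = 3 * residual[:400].std()
outliers = np.abs(residual) > threshold

# Treat detected outliers by substituting the model estimate.
cleaned = np.where(outliers, estimate, target)
print(f"flagged {outliers.sum()} outlier(s), e.g. index {np.argmax(outliers)}")
```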


2020 ◽  
Vol 185 ◽  
pp. 116282
Author(s):  
Cheng Yang ◽  
Glen T. Daigger ◽  
Evangelia Belia ◽  
Branko Kerkez

2021 ◽  
Vol 11 (7) ◽  
pp. 3110
Author(s):  
Karina Gibert ◽  
Xavier Angerri

In this paper, the results of the INSESS-COVID19 project are presented, part of a special call aimed at helping with the COVID-19 crisis in Catalonia. The technological infrastructure and methodology developed in this project allow the quick screening of a territory for a fast and reliable diagnosis in the face of an unexpected situation, by providing relevant decisional information to support informed decision-making and strategy and policy design. One of the challenges of the project was to extract valuable information from direct participatory processes in which specific target profiles of citizens are consulted, and to distribute participation across the whole territory. Having many variables with a moderate number of citizens involved (in this case about 1000) implies the risk of violating statistical secrecy when multivariate relationships are analyzed, thus putting at risk the anonymity of the participants as well as their safety when vulnerable populations are involved, as is the case in INSESS-COVID19. In this paper, the entire data-driven methodology developed in the project is presented, and the handling of small population subgroups to preserve statistical secrecy is described. The methodology is reusable with any other underlying questionnaire, as the data science and reporting parts are fully automated.
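As an illustration of how small subgroups can be protected when reporting multivariate relationships, the sketch below applies a common minimum-cell-size suppression rule to a cross-tabulation. The threshold of 5 and the column names are assumptions; the project's actual disclosure-control procedure may differ.

```python
# Illustrative sketch of small-cell suppression before reporting a
# cross-tabulation; the threshold of 5 and the column names are assumptions,
# not the INSESS-COVID19 project's actual disclosure-control rules.
import pandas as pd

MIN_CELL = 5  # cells with fewer respondents than this are suppressed

def safe_crosstab(df: pd.DataFrame, row: str, col: str) -> pd.DataFrame:
    """Cross-tabulate two questionnaire variables, masking small cells."""
    table = pd.crosstab(df[row], df[col])
    return table.mask(table < MIN_CELL, other="<5")  # statistical secrecy

# Example with hypothetical survey columns.
df = pd.DataFrame({
    "municipality": ["A"] * 40 + ["B"] * 3,
    "vulnerable":   [True, False] * 20 + [True] * 3,
})
print(safe_crosstab(df, "municipality", "vulnerable"))
```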


Author(s):  
Naipeng Li ◽  
Yaguo Lei ◽  
Nagi Gebraeel ◽  
Zhijian Wang ◽  
Xiao Cai ◽  
...  

2021 ◽  
Vol 8 (1) ◽  
pp. 205395172110194
Author(s):  
Myron A Godinho ◽  
Ann Borda ◽  
Timothy Kariotis ◽  
Andreea Molnar ◽  
Patty Kostkova ◽  
...  

Engaging citizens with digital technology to co-create data, information and knowledge has become a widespread strategy for informing the policy response to COVID-19 and the 'infodemic' of misinformation in cyberspace. This move towards digital citizen participation aligns well with the United Nations' agenda to encourage the use of digital tools to enable data-driven, direct democracy. From data capture to information generation and knowledge co-creation, every stage of the data lifecycle bears important considerations for policy and practice. Drawing on evidence of participatory policy and practice during COVID-19, we outline a framework for citizen 'e-participation' in knowledge co-creation across every stage of the policy cycle. We explore how coupling the generation of information with that of social capital can provide opportunities to collectively build trust in institutions, accelerate recovery and facilitate the 'e-society'. We outline the key aspects of realising this vision of data-driven direct democracy by discussing several examples. Sustaining participatory knowledge co-creation beyond COVID-19 requires that local organisations and institutions (e.g. academia, health and welfare, government, business) incorporate adaptive learning mechanisms into their operational and governance structures and their integrated service models, as well as employ emerging social innovations.


2021 ◽  
Author(s):  
MUTHU RAM ELENCHEZHIAN ◽  
VAMSEE VADLAMUDI ◽  
RASSEL RAIHAN ◽  
KENNETH REIFSNIDER

Our community has developed widespread knowledge of the damage tolerance and durability of composites over the past few decades through various experimental and computational efforts. Several methods have been used to understand damage behavior and hence predict material states such as residual strength (damage tolerance) and life (durability) of these material systems. Electrochemical Impedance Spectroscopy (EIS) and Broadband Dielectric Spectroscopy (BbDS) are such methods, which have been proven to identify damage states in composites. Our previous work using the BbDS method has shown that it can serve as a precursor indicator of damage levels, signaling the onset of the end of life of the material. As a change in a material state variable is triggered by damage development, the rate of change of these states indicates the rate of damage interaction and can effectively predict impending failure. The Data-Driven Discovery of Models (D3M) [1] program aims to develop model discovery systems, enabling users with domain knowledge but no data science background to create empirical models of real, complex processes. D3M methods have been developed extensively over the years in various applications, and their implementation for real-time prediction of complex parameters such as material states in composites needs to be grounded in physics and domain knowledge to be trusted. In this research work, we propose the use of data-driven methods combined with BbDS and progressive damage analysis to identify and hence predict material states in composites subjected to fatigue loads.
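As a hedged illustration of the rate-of-change idea, the sketch below tracks the derivative of a synthetic dielectric state variable over fatigue cycles and raises an alarm when it accelerates. The state trajectory, baseline window, and alarm threshold are hypothetical, not the authors' D3M model.

```python
# Hedged sketch: tracking the rate of change of a dielectric state variable
# over fatigue cycles as a damage precursor. The state values, baseline
# window, and alarm threshold are hypothetical, not the authors' D3M model.
import numpy as np

cycles = np.arange(0, 10000, 100)
# Synthetic BbDS-derived state: slow drift, then accelerating near end of life.
state = 1.0 + 1e-5 * cycles + 5e-9 * (cycles - 7000).clip(0) ** 2

rate = np.gradient(state, cycles)            # d(state)/d(cycle)
baseline = rate[: len(rate) // 2].mean()     # early-life reference rate
alarm = rate > 3 * baseline                  # accelerating damage interaction

first = cycles[np.argmax(alarm)] if alarm.any() else None
print(f"impending-failure alarm first raised at cycle {first}")
```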


2021 ◽  
Author(s):  
Karen Triep ◽  
Alexander Benedikt Leichtle ◽  
Martin Meister ◽  
Georg Martin Fiedler ◽  
Olga Endrich

BACKGROUND: The criteria for the diagnosis of kidney disease outlined in the Kidney Disease: Improving Global Outcomes (KDIGO) guidelines are based on a patient's current, historical and baseline data. The diagnosis of acute (AKI), chronic (CKD) and acute-on-chronic kidney disease requires past measurements of creatinine, back-calculation, and the interpretation of several laboratory values over a certain period. Diagnosis may be hindered by an unclear definition of the individual creatinine baseline and by rough normal ranges set without adjustment for age, ethnicity, comorbidities and treatment. Correct classification of the diagnosis and sufficient staging improve coding, data quality, reimbursement, the choice of therapeutic approach and the patient's outcome.

OBJECTIVE: With the help of a complex rule engine, we apply a data-driven approach to assign the diagnoses of acute, chronic and acute-on-chronic kidney disease.

METHODS: Real-time and retrospective data from the hospital's Clinical Data Warehouse for inpatient and outpatient cases treated between 2014 and 2019 are used. Delta serum creatinine, baseline values, and admission and discharge data are analyzed. A KDIGO-based Structured Query Language (SQL) algorithm applies specific diagnosis (ICD) codes to inpatient stays. To measure the effect on diagnosis, text mining of discharge documentation is conducted.

RESULTS: We show that this approach yields an increased number of diagnoses as well as higher precision in documentation and coding (the unspecific ICD diagnosis N19* as a percentage of generated N19 codes: 17.8% in 2016, 3.3% in 2019).

CONCLUSIONS: Our data-driven method supports the process and reliability of diagnosis and staging and improves the quality of documentation and data. Measuring patients' outcomes will be the next step of the project.
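For readers unfamiliar with the criteria such a rule engine encodes, the following is a simplified sketch of the published KDIGO creatinine-based AKI definition (a rise of at least 0.3 mg/dL within 48 hours, or at least 1.5 times baseline). The hospital's actual rule engine additionally handles baseline back-calculation, CKD staging and ICD code assignment, which this sketch omits.

```python
# Simplified sketch of the published KDIGO AKI creatinine criteria; the
# hospital's rule engine additionally handles baseline back-calculation,
# CKD staging, and ICD coding, which this toy function does not attempt.
from datetime import datetime, timedelta

def is_aki(measurements: list[tuple[datetime, float]], baseline: float) -> bool:
    """measurements: (timestamp, serum creatinine in mg/dL), time-sorted."""
    for i, (t_i, c_i) in enumerate(measurements):
        # Criterion 1: rise >= 0.3 mg/dL within any 48-hour window.
        for t_j, c_j in measurements[i + 1:]:
            if t_j - t_i <= timedelta(hours=48) and c_j - c_i >= 0.3:
                return True
        # Criterion 2: >= 1.5 x baseline (known or presumed to have
        # occurred within the prior 7 days).
        if c_i >= 1.5 * baseline:
            return True
    return False
```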


2021 ◽  
pp. 026638212110619
Author(s):  
Sharon Richardson

During the past two decades, there have been a number of breakthroughs in the fields of data science and artificial intelligence, made possible by advanced machine learning algorithms trained on massive volumes of data. However, their adoption and use in real-world applications remain a challenge. This paper posits that a key limitation in making AI applicable has been a failure to modernise the theoretical frameworks needed to evaluate and adopt its outcomes. Such a need was anticipated with the arrival of the digital computer in the 1950s but has remained unrealised. This paper reviews how the field of data science emerged and led to rapid breakthroughs in the algorithms underpinning research into artificial intelligence. It then discusses the contextual framework now needed to advance the use of AI in real-world decisions that impact human lives and livelihoods.


Author(s):  
Xiangxue Zhao ◽  
Shapour Azarm ◽  
Balakumar Balachandran

Online prediction of dynamical system behavior based on a combination of simulation data and sensor measurement data has numerous applications. Examples include predicting safe flight configurations, forecasting storms and wildfire spread, and estimating railway track and pipeline health conditions. In such applications, high-fidelity simulations may be used to accurately predict a system's dynamical behavior offline ("non-real time"). However, due to their computational expense, these simulations have limited usage for online ("real-time") prediction of a system's behavior. To remedy this, one possible approach is to allocate a significant portion of the computational effort to obtaining data through offline simulations. The offline data can then be combined with online sensor measurements to estimate the system's behavior online with accuracy comparable to that of the offline high-fidelity simulations. The main contribution of this paper is the construction of a fast data-driven spatiotemporal prediction framework that can be used to estimate general parametric dynamical system behavior. This is achieved through three steps. First, high-order singular value decomposition is applied to map high-dimensional offline simulation datasets into a subspace. Second, Gaussian processes are constructed to approximate model parameters in the subspace. Finally, reduced-order particle filtering is used to assimilate sparsely located sensor data to further improve the prediction. The effectiveness of the proposed approach is demonstrated through a case study in which aeroelastic response data obtained for an aircraft through simulations are integrated with measurement data from a few sparsely located sensors. Through this case study, the authors show that, along with dynamic enhancement of the state estimates, one can also realize a reduction in the uncertainty of the estimates.
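The sketch below illustrates the offline portion of such a framework: projecting simulation snapshots onto an SVD subspace and fitting Gaussian processes that map model parameters to subspace coefficients. The dimensions, toy data and RBF kernel are assumptions, and the reduced-order particle filtering step is omitted for brevity.

```python
# Sketch of the offline part of such a framework: project simulation
# snapshots onto an SVD subspace and fit Gaussian processes mapping model
# parameters to subspace coefficients. Dimensions and kernel are assumptions;
# the paper's reduced-order particle filtering step is omitted for brevity.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)

# Offline simulations: 50 parameter samples, each a 200-dim response snapshot.
params = rng.uniform(0, 1, (50, 2))                     # e.g. speed, stiffness
snapshots = np.sin(params @ rng.normal(size=(2, 200)))  # toy simulation data

# Step 1: subspace via SVD (HOSVD reduces to SVD for a 2-D snapshot matrix).
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
k = 5                                    # retained modes
coeffs = snapshots @ Vt[:k].T            # subspace coordinates per simulation

# Step 2: one GP per retained mode, mapping parameters -> coefficients.
gps = [GaussianProcessRegressor(kernel=RBF()).fit(params, coeffs[:, j])
       for j in range(k)]

# Online: estimate the full response for an unseen parameter set.
p_new = np.array([[0.3, 0.7]])
c_new = np.array([gp.predict(p_new)[0] for gp in gps])
response = c_new @ Vt[:k]                # back to full state space
print(response.shape)                    # (200,)
```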

