IoT Data Qualification for a Logistic Chain Traceability Smart Contract

Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2239
Author(s):  
Mohamed Ahmed ◽  
Chantal Taconet ◽  
Mohamed Ould ◽  
Sophie Chabridon ◽  
Amel Bouzeghoub

In the logistic chain domain, the traceability of shipments throughout their entire delivery process, from the shipper to the consignee, involves many stakeholders. From the traceability data, contractual decisions may be taken, such as incident detection, validation of the delivery, or billing. The stakeholders require transparency in the whole process. The combination of the Internet of Things (IoT) and blockchain paradigms helps in the development of automated and trusted systems. In this context, ensuring the quality of the IoT data is an absolute requirement for the adoption of those technologies. In this article, we propose an approach to assess the data quality (DQ) of IoT data sources using a logistic traceability smart contract developed on top of a blockchain. We select the quality dimensions relevant to our context, namely accuracy, completeness, consistency and currentness, and propose their corresponding measurement methods. We also propose a data quality model specific to the logistic chain domain and a distributed traceability architecture. The evaluation of the proposal shows the capacity of the proposed method to assess IoT data quality and ensure user agreement on the data qualification rules. The proposed solution opens new opportunities in the development of automated logistic traceability systems.
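
As a rough illustration of how the four selected dimensions might be scored from a batch of shipment sensor readings, here is a minimal Python sketch; the thresholds, the batch-based scoring, and all names are assumptions for illustration only, and the paper itself implements its qualification rules inside a smart contract rather than in off-chain code.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Reading:
    value: float          # e.g., shipment temperature in degrees Celsius
    timestamp: datetime   # when the sensor produced the reading (timezone-aware)

def qualify(readings: list[Reading], lo: float, hi: float,
            expected_count: int, max_age: timedelta) -> dict[str, float]:
    """Score a batch of IoT readings on four quality dimensions (each in 0.0-1.0)."""
    assert readings, "expects at least one reading"
    now = datetime.now(timezone.utc)
    # Accuracy: share of values inside the contractually plausible range.
    accuracy = sum(lo <= r.value <= hi for r in readings) / len(readings)
    # Completeness: how many of the expected readings actually arrived.
    completeness = min(1.0, len(readings) / expected_count)
    # Consistency: successive readings should not jump implausibly.
    jumps = sum(abs(b.value - a.value) > (hi - lo) / 2
                for a, b in zip(readings, readings[1:]))
    consistency = 1.0 - jumps / max(1, len(readings) - 1)
    # Currentness: share of readings still fresh enough to act on.
    currentness = sum((now - r.timestamp) <= max_age for r in readings) / len(readings)
    return {"accuracy": accuracy, "completeness": completeness,
            "consistency": consistency, "currentness": currentness}
```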

Author(s):  
Prameela Singu ◽  
Rayees Farooq

The objective of the study is to develop a data quality matrix that can be used to measure the quality of data and the response rate from respondents. The study is exploratory in nature and applies a systematic review of the literature extracted from different databases. The study found that all the quadrants of the matrix (i.e., active, risky, non-functional, and deferential) have importance depending upon the nature of the study. The study further suggests that a risky situation can be improved by enhancing the quality of the data collected. The proposed matrix is very helpful in understanding the quantity and quality dimensions of data in survey research. It helps interpret survey results in terms of the fit between data representativeness and desired research outcomes.
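
A toy Python sketch of how such a two-by-two classification could be applied to a survey dataset follows; the cut-off values and, in particular, the mapping of quadrant names to axis combinations are guesses for illustration, not the assignment defined in the study.

```python
def classify(response_rate: float, data_quality: float,
             rate_cut: float = 0.5, quality_cut: float = 0.5) -> str:
    """Place a survey dataset in one of four quadrants by quantity and quality.

    The name-to-quadrant mapping below is only an illustrative guess, not the
    assignment defined in the paper.
    """
    high_rate = response_rate >= rate_cut
    high_quality = data_quality >= quality_cut
    if high_rate and high_quality:
        return "active"
    if high_rate and not high_quality:
        return "risky"            # plenty of responses, but low-quality data
    if not high_rate and high_quality:
        return "deferential"
    return "non-functional"

print(classify(response_rate=0.7, data_quality=0.3))  # -> "risky"
```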


Author(s):  
Mª Ángeles Moraga ◽  
Angélica Caro

Web portals are emerging Internet-based applications that enable access to different sources (providers). Through portals, organizations develop their businesses within an increasingly competitive environment. A decisive factor for this competitiveness, and for achieving users' loyalty, is portal quality. In addition, we live in an information society, and the ability to rapidly define and assess the data quality of Web portals for decision making provides a potential strategic advantage. With this in mind, our work focused on the quality of Web portals. In this article we present a part of it: a portal quality model and the first phases in the development of a data quality model for Web portals.


2015 ◽  
Vol 24 (3) ◽  
pp. 361-369
Author(s):  
Saúl Fagúndez ◽  
Joaquín Fleitas ◽  
Adriana Marotta

The use of sensors has increased enormously in recent years, and they have become a valuable tool in many different areas. In this kind of scenario, data quality becomes an extremely important issue; however, not much attention has been paid to this specific topic, and only a few existing works focus on it. In this paper, we present a proposal for managing data streams from sensors installed in patients' homes in order to monitor their health. It focuses on processing the sensors' data streams while taking data quality into account. To achieve this, a data quality model for this kind of data stream and an architecture for the monitoring system are proposed. Moreover, our work introduces a mechanism for avoiding false alarms generated by data quality problems.
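
To make the idea of quality-aware stream processing concrete, here is a minimal Python sketch (not the authors' architecture) in which implausible readings are treated as data quality problems and a windowed check prevents a single spurious value from raising an alarm; the ranges and thresholds are invented for illustration.

```python
from collections import deque
from statistics import median

def quality_aware_alarms(stream, plausible=(30.0, 45.0), threshold=38.5, window=5):
    """Yield an alarm value only when quality-checked readings exceed the threshold.

    Readings outside the sensor's plausible range are treated as data quality
    problems (e.g., a detached body-temperature sensor) and are discarded
    instead of triggering a false alarm.
    """
    recent = deque(maxlen=window)
    for value in stream:
        if not (plausible[0] <= value <= plausible[1]):
            continue              # implausible value: quality problem, not an emergency
        recent.append(value)
        # Require the windowed median to confirm, so a single spike is not an alarm.
        if len(recent) == window and median(recent) > threshold:
            yield median(recent)

# Example: the lone 60.0 is dropped as a quality problem; sustained high values alarm.
readings = [36.8, 37.0, 60.0, 39.1, 39.2, 39.0, 39.3, 39.1]
print(list(quality_aware_alarms(readings)))
```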


2020 ◽  
Vol 12 (18) ◽  
pp. 7491
Author(s):  
Juan José Bullón Pérez ◽  
Araceli Queiruga-Dios ◽  
Víctor Gayoso Martínez ◽  
Ángel Martín del Rey

Traceability and monitoring of industrial processes are becoming more important to assure the value of final products. Blockchain technology emerged as part of a movement linked to cryptocurrencies and the Internet of Things, providing desirable features such as traceability, authenticity and security to sectors willing to use this technology. In the retail industry, blockchain offers users the possibility to monitor details about the time and place of elaboration, the origin of raw materials, the quality of the materials involved in the manufacturing processes, information on the people or companies that work on it, etc. It allows textile articles to be controlled and monitored, from their initial production or importing steps up to their acquisition by the end consumer, using the blockchain as a means of tracking and identification during the whole process. This technology can also be used by the apparel industry in general and, more specifically, for ready-to-wear clothing, for tracing suppliers and customers along the entire logistics chain. The goal of this paper is to introduce the more recent traceability schemes for the apparel industry, together with the proposal of a framework for ready-to-wear clothing that ensures transparency in the supply chain, clothing authenticity, reliability and integrity, and the validity of the retail final products and of the elements that compose the whole supply chain. To illustrate the proposal, a case study on a women's shirt from an apparel and fashion company, where a private and open blockchain is used for tracing the product, is included. Blockchain actors are proposed for each product stage.
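
As a loose illustration of per-stage traceability records chained together by hashes (a simplification, not the framework proposed in the paper), consider the following Python sketch; the actors, stages, and fields are hypothetical.

```python
import hashlib
import json
import time

def add_stage(chain: list[dict], actor: str, stage: str, details: dict) -> dict:
    """Append a traceability record for one product stage, hash-linked to the previous record."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"actor": actor, "stage": stage, "details": details,
              "timestamp": time.time(), "prev_hash": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    chain.append(record)
    return record

# Hypothetical stages for a ready-to-wear garment, from raw material to sale.
chain: list[dict] = []
add_stage(chain, "fabric supplier", "raw material", {"material": "cotton"})
add_stage(chain, "manufacturer", "production", {"item": "women's shirt"})
add_stage(chain, "retailer", "sale", {"store": "flagship"})
print(len(chain), chain[-1]["prev_hash"] == chain[-2]["hash"])  # 3 True
```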


Sensors ◽  
2021 ◽  
Vol 21 (24) ◽  
pp. 8429
Author(s):  
Ala Arman ◽  
Pierfrancesco Bellini ◽  
Daniele Bologna ◽  
Paolo Nesi ◽  
Gianni Pantaleo ◽  
...  

The Internet of Things has produced many heterogeneous devices and data models for sensors/actuators, both physical and virtual. The corresponding data must be aggregated, and their models have to be related to general knowledge to make them immediately usable by visual analytics tools, APIs, and other devices. In this paper, models and tools for data ingestion and regularization are presented to simplify and enable the automated visual representation of the corresponding data. The problems addressed are related to (i) the regularization of the high heterogeneity of data available from IoT devices (physical or virtual) and KPIs (key performance indicators), thus allowing such data to be reported as elements of hypercubes, and (ii) the possibility of providing final users with an index on views and data structures that can be directly exploited by the graphical widgets of visual analytics tools, according to different operators. The solution analyzes the loaded data to extract and generate the IoT device model, to create the instances of the device, and to generate eventual time series. The whole process allows data to be prepared for visual analytics and dashboarding in a few clicks. The proposed IoT device model is compliant with FIWARE NGSI and is supported by a formal definition of data characterization in terms of value type, value unit, and data type. The resulting data model has been enforced in the Snap4City dashboard wizard and tool, which is a GDPR-compliant multitenant architecture. The solution has been developed and validated by considering six different pilots in Europe that collect big data to monitor and reason about people flows and tourism with the aim of improving quality of service; it has been developed in the context of the HERIT-DATA Interreg project and on top of the Snap4City infrastructure and tools. The model turned out to be capable of meeting all the requirements of HERIT-DATA, while some of the visual representation tools still need to be updated and further developed to add a few features.
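
A minimal Python sketch of the kind of device-model descriptor the abstract mentions (attribute name plus value type, value unit, and data type) is given below; it only mimics the idea of inferring a model from a loaded data sample and is not the Snap4City or FIWARE NGSI implementation.

```python
from dataclasses import dataclass, field

@dataclass
class SensorAttribute:
    name: str          # e.g., "temperature"
    value_type: str    # semantic type of the measured quantity, e.g., "air temperature"
    value_unit: str    # e.g., "celsius"
    data_type: str     # storage type, e.g., "float"

@dataclass
class DeviceModel:
    model_name: str
    attributes: list[SensorAttribute] = field(default_factory=list)

def infer_model(model_name: str, sample: dict[str, float | int | str]) -> DeviceModel:
    """Derive a device model from one loaded sample.

    Only the data type is inferred here; in a real ingestion pipeline the value
    type and value unit would come from a dictionary or user confirmation.
    """
    attrs = [SensorAttribute(name=k, value_type="unknown", value_unit="unknown",
                             data_type=type(v).__name__)
             for k, v in sample.items()]
    return DeviceModel(model_name, attrs)

print(infer_model("people_counter_v1", {"count": 12, "zone": "old town"}))
```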


Symmetry ◽  
2018 ◽  
Vol 10 (12) ◽  
pp. 757
Author(s):  
Olegas Prentkovskis ◽  
Živko Erceg ◽  
Željko Stević ◽  
Ilija Tanackov ◽  
Marko Vasiljević ◽  
...  

The daily requirements and needs imposed on the providers of logistics services imply the need for a higher level of quality. Here, the proper execution of all sustainability processes and activities plays an important role. In this paper, a new three-phase methodology for improving the measurement of service quality has been developed. The first phase is the application of the Delphi method to determine the ranking of the quality dimensions. After that, in the second phase, the weight coefficients of the quality dimensions are determined using FUCOM (the full consistency method). The third phase determines the level of quality using the SERVQUAL (service quality) model, that is, the difference between the established gaps. The new methodology considers the assessment of the quality dimensions by a large number of participants (customers) on the one hand, and experts' assessments on the other. The methodology was verified through research carried out in an express post company. After processing and analyzing the collected data, the Cronbach alpha coefficient was calculated for each dimension of the SERVQUAL model to determine the reliability of the responses. To determine the validity of the results and of the developed methodology, an extensive statistical analysis (ANOVA, Duncan, Signum, and chi-square tests) was carried out. The integration of certain methods and models into the new methodology has demonstrated greater objectivity and more precise results in determining the level of quality of sustainability processes and activities.
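
As an illustration of how the third phase could combine FUCOM weights with SERVQUAL gaps (mean perception minus mean expectation per quality dimension), here is a small Python sketch; the weights and scores are invented, not the paper's data.

```python
from statistics import mean

def weighted_servqual(expectations: dict[str, list[float]],
                      perceptions: dict[str, list[float]],
                      weights: dict[str, float]) -> float:
    """Overall service quality as the weighted sum of per-dimension SERVQUAL gaps.

    Gap = mean perception - mean expectation for each quality dimension; the
    weights are assumed to come from FUCOM and should sum to 1.
    """
    return sum(weights[d] * (mean(perceptions[d]) - mean(expectations[d]))
               for d in weights)

# Illustrative figures only (not the paper's data): 7-point scale responses.
weights = {"tangibles": 0.15, "reliability": 0.30, "responsiveness": 0.20,
           "assurance": 0.20, "empathy": 0.15}
expectations = {d: [6.5, 6.8, 6.7] for d in weights}
perceptions = {d: [6.0, 6.2, 6.1] for d in weights}
print(weighted_servqual(expectations, perceptions, weights))  # negative => expectations unmet
```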


2018 ◽  
Author(s):  
Robab Abdolkhani ◽  
Kathleen Gray ◽  
Ann Borda ◽  
Ruth De Souza

BACKGROUND: The proliferation of advanced wearable medical technologies is increasing the production of Patient-Generated Health Data (PGHD). However, there is a lack of evidence on whether the quality of the data generated from wearables can be used effectively for patient care. For PGHD to be used for decision making by health providers, it needs to be of high quality; that is, it must comply with standards defined by health care organizations and be accurate, consistent, complete and unbiased. Although medical wearables record highly accurate data, other technology issues as well as human factors affect PGHD quality when it is collected and shared under patients' control before it is ultimately used by health care providers.

OBJECTIVE: This paper explores the human and technology factors that impact the quality of PGHD from medical wearables for effective use in clinical care.

METHODS: We conducted semi-structured interviews with 17 PGHD stakeholders in Australia, the US, and the UK. Participants included ten health care providers working with PGHD from medical wearables in diabetes, sleep disorders, and heart arrhythmia, five health IT managers, and two executives. The participants were interviewed about seven data quality dimensions: accuracy, accessibility, coherence, institutional environment, interpretability, relevancy, and timeliness. Open coding of the interview data identified several technology and human issues related to these data quality dimensions in the clinical use of PGHD.

RESULTS: The overarching technology issues mentioned by participants included a lack of advanced functionalities, such as real-time alerts for patients, as well as complicated settings that can result in errors. In terms of PGHD coherence, different wearables have different data capture mechanisms for the same health condition, which creates different formats and makes PGHD interpretation and comparison difficult. Another technology issue, related to the current ICT infrastructure of health care settings, is the lack of real-time PGHD access for health care providers, which reduces the value of PGHD use. In addition, health care providers raised the challenge of where PGHD is stored and who truly owns the data, which affects the feasibility of PGHD access. The human factors included a lack of digital health literacy among patients, which shapes both patients' motivation and their behavior toward PGHD collection. For example, the gaps in data recording shown in the results indicate that the wearable was not used for a period of time. Participants also identified the cost of devices as a barrier to long-term engagement with and use of wearables.

CONCLUSIONS: Using PGHD garnered from medical wearables in clinical contexts is problematic due to low-quality data influenced by technology and human factors. At present, no guidelines have been defined to assess PGHD quality. Hence, there is a need for new solutions to overcome the existing technology-related and human-related barriers and enhance PGHD quality.


Author(s):  
Á. Barsi ◽  
Zs. Kugler ◽  
I. László ◽  
Gy. Szabó ◽  
H. M. Abdulmutalib

The technological developments in remote sensing (RS) during the past decade have contributed to a significant increase in the size of the data user community. For this reason, data quality issues in remote sensing have become significantly more important, particularly in the era of Big Earth data. Dozens of available sensors, hundreds of sophisticated data processing techniques and countless software tools assist the processing of RS data and contribute to a major increase in applications and users. In past decades, the scientific and technological community of the spatial data environment focused on the evaluation of data quality elements computed for the point, line and area geometry of vector and raster data. Stakeholders of data production commonly use standardised parameters to characterise the quality of their datasets. Yet their efforts to estimate quality have not reached the general end-user community running heterogeneous applications, who assume that their spatial data are error-free and best fitted to the specification standards. The non-specialist, general user group has very limited knowledge of how spatial data meet their needs. These parameters, forming the external quality dimensions, imply that the same data system can be of different quality to different users. The large collection of observed information is uncertain to a level that can undermine the reliability of the applications.

Based on a prior paper by the authors (in cooperation within the Remote Sensing Data Quality working group of ISPRS), which established a taxonomy on the dimensions of data quality in the GIS and remote sensing domains, this paper focuses on measures of uncertainty in the remote sensing data lifecycle, with an emphasis on land cover mapping issues. In the paper we introduce how the quality of various combinations of data and procedures can be summarized and how services fit the users' needs.

The present paper gives a theoretical overview of the issue; selected practice-oriented approaches are also evaluated, and finally, widely used quality metrics such as the Root Mean Squared Error (RMSE) and the confusion matrix are discussed. The authors present data quality features of well-defined and poorly defined objects. The central part of the study is land cover mapping, describing its accuracy management model and presenting relevance and uncertainty measures of its influencing quality dimensions. The theory is supported by a case study in which remote sensing technology is used to support the area-based agricultural subsidies of the European Union within the Hungarian administration.
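
For readers unfamiliar with the two metrics mentioned, a short Python sketch of RMSE and a per-class confusion matrix with overall accuracy (as commonly used in land cover accuracy assessment) follows; it is a generic illustration, not the authors' accuracy management model.

```python
from math import sqrt

def rmse(predicted: list[float], reference: list[float]) -> float:
    """Root Mean Squared Error between predicted and reference values."""
    return sqrt(sum((p - r) ** 2 for p, r in zip(predicted, reference)) / len(reference))

def confusion_matrix(mapped: list[str], reference: list[str]) -> dict[tuple[str, str], int]:
    """Counts of (mapped class, reference class) pairs; the diagonal holds correct samples."""
    counts: dict[tuple[str, str], int] = {}
    for m, r in zip(mapped, reference):
        counts[(m, r)] = counts.get((m, r), 0) + 1
    return counts

def overall_accuracy(cm: dict[tuple[str, str], int]) -> float:
    """Share of samples whose mapped class matches the reference class."""
    return sum(v for (m, r), v in cm.items() if m == r) / sum(cm.values())

# Hypothetical land cover example with three classes.
mapped    = ["crop", "crop", "forest", "water", "crop"]
reference = ["crop", "forest", "forest", "water", "crop"]
cm = confusion_matrix(mapped, reference)
print(cm, overall_accuracy(cm))  # 4 of 5 correct -> 0.8
```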


Author(s):  
Alejandro Vaisman

Today, information and timely decisions are crucial for an organization's success. A decision support system (DSS) is a software tool that provides information allowing its users to make decisions in a timely and cost-effective way. This is highly conditioned by the quality of the data involved, usually stored in a data warehouse, and by a sound and complete requirements analysis. In this chapter we show that conventional techniques for requirements elicitation cannot be used for DSS, and we present a methodology, denoted DSS-METRIQ, aimed at providing a single data quality-based procedure for the complete and consistent elicitation of functional (queries) and nonfunctional (data quality) requirements. The outcomes of the process are a set of requirements documents and a specification of the operational data sources that can satisfy such requirements. We review the state of the art in the field and show that, in spite of the tools and methodologies already proposed for the modeling and design of decision support systems, DSS-METRIQ is the first one that supports the whole process by means of an integral technique.


Author(s):  
Hamed Taherdoost ◽  
Ali Hassan

E-service quality is known as a critical factor for the successful implementation and decent performance of any business in an electronic environment. Although much research has been carried out in the field of service quality, there is a clear need for a theoretical model that integrates all aspects of e-service quality. This chapter responds to this need by developing a theoretical model to assess the quality of e-services. In the first phase, e-service quality dimensions were extracted from the literature. Exploratory factor analysis was applied to cluster the factors effectively in developing the conceptual model. A confirmatory approach was then conducted with structural equation modeling to test and validate the proposed model. The contribution of this research is the integration of the various relevant dimensions affecting e-service quality into a unified e-service quality model (eSQM).
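
As a hedged illustration of the exploratory-factor-analysis step, the following Python sketch uses scikit-learn's FactorAnalysis on a hypothetical matrix of Likert-scale responses; the item count, number of factors, and data are assumptions, and the chapter's actual analysis may differ.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Hypothetical survey matrix: rows = respondents, columns = e-service quality items.
rng = np.random.default_rng(0)
responses = rng.integers(1, 8, size=(200, 12)).astype(float)  # 7-point Likert items

# Extract three latent factors from the twelve observed items.
fa = FactorAnalysis(n_components=3, random_state=0)
fa.fit(responses)

# Loadings indicate how strongly each item is associated with each extracted factor;
# items loading on the same factor would be grouped into one e-service quality dimension.
loadings = fa.components_.T           # shape: (n_items, n_factors)
print(np.round(loadings, 2))
```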

