Multivariate Statistics Between Two-Observation Spaces

Author(s):  
Jing Wang ◽  
Jinglin Zhou ◽  
Xiaolu Chen

Abstract As mentioned in the previous chapter, industrial data are usually divided into two categories, process data and quality data, which belong to different measurement spaces. The vast majority of smart manufacturing problems, such as soft measurement, control, monitoring, and optimization, inevitably require modeling the relationships between these two kinds of measurement variables. This chapter's subject is discovering the correlations between variable sets in different observation spaces.
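The abstract does not name the chapter's specific methods, but a standard tool for correlating variable sets in two observation spaces is canonical correlation analysis (CCA). A minimal NumPy sketch (the data and function names are illustrative, not taken from the chapter):

```python
import numpy as np

def _inv_sqrt(S):
    """Inverse matrix square root of a symmetric positive-definite matrix."""
    vals, vecs = np.linalg.eigh(S)
    return vecs @ np.diag(vals ** -0.5) @ vecs.T

def cca(X, Y):
    """Canonical correlations between process variables X and quality variables Y."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx, Syy = X.T @ X / (n - 1), Y.T @ Y / (n - 1)
    Sxy = X.T @ Y / (n - 1)
    Wx, Wy = _inv_sqrt(Sxx), _inv_sqrt(Syy)
    # Singular values of the whitened cross-covariance are the canonical correlations.
    U, rho, Vt = np.linalg.svd(Wx @ Sxy @ Wy)
    return rho, Wx @ U, Wy @ Vt.T  # correlations, X-weights, Y-weights

# Synthetic example: quality data as a noisy linear image of process data.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                                   # process space
Y = X @ rng.normal(size=(4, 3)) + 0.05 * rng.normal(size=(500, 3))  # quality space
rho, A, B = cca(X, Y)
```

Because Y here is almost entirely determined by X, the leading canonical correlation comes out close to 1, which is exactly the kind of cross-space dependence the chapter sets out to discover.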

Author(s):  
Jing Wang ◽  
Jinglin Zhou ◽  
Xiaolu Chen

Abstract The observation data collected from continuous industrial processes usually fall into two main categories, process data and quality data, and the corresponding industrial data analysis applies multivariate statistical techniques to these two types of data.


2021 ◽  
Vol 11 (3) ◽  
pp. 1312
Author(s):  
Ana Pamela Castro-Martin ◽  
Horacio Ahuett-Garza ◽  
Darío Guamán-Lozada ◽  
Maria F. Márquez-Alderete ◽  
Pedro D. Urbina Coronado ◽  
...  

Industry 4.0 (I4.0) is built upon the capabilities of Internet of Things technologies that facilitate the recollection and processing of data. Originally conceived to improve the performance of manufacturing facilities, the field of application for I4.0 has expanded to reach most industrial sectors. To make the best use of the capabilities of I4.0, machine architectures and design paradigms have had to evolve. This is particularly important as the development of certain advanced manufacturing technologies has been passed from large companies to their subsidiaries and suppliers from around the world. This work discusses how design methodologies, such as those based on functional analysis, can incorporate new functions to enhance the architecture of machines. In particular, the article discusses how connectivity facilitates the development of smart manufacturing capabilities through the incorporation of I4.0 principles and resources that in turn improve the computing capacity available to machine controls and edge devices. These concepts are applied to the development of an in-line metrology station for automotive components. The impact on the design of the machine, particularly on the conception of the control, is analyzed. The resulting machine architecture allows for measurement of critical features of all parts as they are processed at the manufacturing floor, a critical operation in smart factories. Finally, this article discusses how the I4.0 infrastructure can be used to collect and process data to obtain useful information about the process.


2018 ◽  
Vol 34 (3) ◽  
pp. 581-597 ◽  
Author(s):  
Asaph Young Chun ◽  
Steven G. Heeringa ◽  
Barry Schouten

Abstract We discuss an evidence-based approach to guiding real-time design decisions during the course of survey data collection. We call it responsive and adaptive design (RAD), a scientific framework driven by cost-quality tradeoff analysis and optimization that enables the most efficient production of high-quality data. The notion of RAD is not new; nor is it a silver bullet to resolve all the difficulties of complex survey design and challenges. RAD embraces precedents and variants of responsive design and adaptive design that survey designers and researchers have practiced over decades. In this paper, we present the four pillars of RAD: survey process data and auxiliary information, design features and interventions, explicit quality and cost metrics, and a quality-cost optimization tailored to survey strata. We discuss how these building blocks of RAD are addressed by articles published in the 2017 JOS special issue and this special section. It is a tale of the three perspectives filling in each other. We carry over each of these three perspectives to articulate the remaining challenges and opportunities for the advancement of RAD. We recommend several RAD ideas for future research, including survey-assisted population modeling, rigorous optimization strategies, and total survey cost modeling.


2013 ◽  
Vol 22 (08) ◽  
pp. 1350070 ◽  
Author(s):  
RAFAL CUPEK ◽  
ADAM ZIEBINSKI ◽  
MACIEJ FRANEK

Contemporary computer systems used in industry are characterized both by an increase in the scale of supported industrial processes, measured by the number of control devices, and by an increasing demand for information describing the underlying processes, measured by the number of tags used in Supervisory Control and Data Acquisition (SCADA) systems and Manufacturing Execution Systems (MES). Classical industrial data servers based on the PC architecture are unreliable, expensive to operate, and difficult to manage. The alternative is the new standard OPC UA communication interface, which simplifies the communication protocol and increases the flexibility of the process data description, allowing OPC UA servers to be implemented directly in embedded devices. This paper presents an innovative approach in the field of industrial data servers, which are used for communication between control systems and SCADA or MES systems. A prototype industrial data server architecture has been implemented, run, and tested on an embedded platform based on an FPGA with a built-in MicroBlaze soft processor. The experimental results presented allow evaluation of the applicability of the proposed solution and the limits of the presented architecture, and may be used to improve the structure of embedded industrial data servers in subsequent implementations.


2020 ◽  
Author(s):  
Milka Gesicho ◽  
Ankica Babic ◽  
Martin Were

Abstract Background The District Health Information Software 2 (DHIS2) is widely used by countries for national-level aggregate reporting of health data. To best leverage DHIS2 data for decision-making, countries need to ensure that data within their systems are of the highest quality. Comprehensive, systematic, and transparent data cleaning approaches form a core component of preparing DHIS2 data for use. Unfortunately, there is a paucity of exhaustive and systematic descriptions of data cleaning processes employed on DHIS2-based data. In this paper, we describe the results of a systematic data cleaning approach applied to a national-level DHIS2 instance, using Kenya as the case example. Methods Broeck et al.'s framework, involving repeated cycles of a three-phase process (data screening, data diagnosis, and data treatment), was employed on six HIV indicator reports collected monthly from all care facilities in Kenya from 2011 to 2018. This resulted in repeated facility reporting instances. Quality dimensions evaluated included reporting rate, reporting timeliness, and indicator completeness of submitted reports, each assessed per facility per year. The various error types were categorized, and Friedman analyses of variance were conducted to examine differences in the distribution of facilities by error type. Data cleaning was done during the treatment phases. Results A generic five-step data cleaning sequence was developed and applied in cleaning HIV indicator data reports extracted from DHIS2. Initially, 93,179 facility reporting instances were extracted for the years 2011 to 2018. Of these, 50.23% had no submitted reports and were removed. Of the remaining reporting instances, over-reporting occurred in 0.03%. Quality issues related to timeliness included scenarios where reports were empty or had data but were never on time; the percentage of reporting instances in these scenarios varied by reporting type. Among submitted reports, the percentage of empty reports also varied by report type, ranging from 1.32% to 18.04%. Report quality varied significantly by facility distribution (p = 0.00) and report type. Conclusions The case instance of Kenya reveals significant data quality issues for HIV reported data that were not detected by the inbuilt error detection procedures within DHIS2. More robust and systematic data cleaning processes should be integrated into current DHIS2 implementations to ensure the highest quality data.
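The screening-diagnosis-treatment cycle described in the Methods can be sketched in plain Python (the report fields, error labels, and sample data are illustrative, not Kenya's actual DHIS2 schema):

```python
# Each reporting instance: one facility's monthly HIV indicator report.
reports = [
    {"facility": "F1", "month": "2017-01", "values": [12, 30, 7]},
    {"facility": "F2", "month": "2017-01", "values": None},  # never submitted
    {"facility": "F3", "month": "2017-01", "values": []},    # submitted but empty
    {"facility": "F1", "month": "2017-02", "values": [15, 28, 9]},
]

def screen(reports):
    """Phase 1 (screening): flag suspect instances (missing or empty submissions)."""
    return [r for r in reports if not r["values"]]

def diagnose(report):
    """Phase 2 (diagnosis): classify the error type behind a flagged instance."""
    return "not_submitted" if report["values"] is None else "empty_report"

def treat(reports):
    """Phase 3 (treatment): remove instances with no submitted report, keep the rest."""
    return [r for r in reports if r["values"] is not None]

flagged = screen(reports)
errors = {r["facility"]: diagnose(r) for r in flagged}
cleaned = treat(reports)
```

In the paper this cycle is repeated per facility per year over six report types; the sketch shows only a single pass of the three phases.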


2021 ◽  
Author(s):  
Kiran Chaudhary ◽  
Mansaf Alam ◽  
Mabrook S. Al-Rakhami ◽  
Abdu Gumaei

Abstract Many consumers are influenced by social media to purchase products and to spend more on their purchases. We obtained data from social media to analyse consumer behaviour, considering consumer data from Facebook, Twitter, LinkedIn, and YouTube. Because social media generate diverse, high-speed, high-volume data, we used big data technology, a recent technology applied in various fields of research. In this paper, we use big data techniques to process and analyse social media data in order to predict consumer behaviour. We analysed consumer behaviour based on specific parameters and criteria, including consumer perceptions of and attitudes towards social media. Before making predictions, we pre-processed the data to obtain quality data, so that quality decisions can be made based on the outcome of our model. We used predictive big data analytics techniques to analyse and predict consumer behaviour.


Author(s):  
Mohamed Kashef ◽  
Yongkang Liu ◽  
Karl Montgomery ◽  
Richard Candell

Abstract Despite the huge efforts to deploy wireless communications technologies in smart manufacturing scenarios, some manufacturing sectors are still slow to adopt them widely. This slowness is partly due to an incomplete understanding of the detailed impact of wireless deployment on physical processes, especially in cases that require low-latency, high-reliability communications. In this article, we introduce an approach to integrating wireless network traffic data and physical process data to evaluate the impact of wireless communications on the performance of a manufacturing factory work cell. The proposed approach is introduced through the discussion of an engineering use case. A testbed that emulates a robotic manufacturing factory work cell is constructed using two collaborative-grade robot arms, machine emulators, and wireless communication devices. All network traffic data are collected, and physical process data, including the robot and machine states and various supervisory control commands, are also collected and synchronized with the network data. The data are then integrated: redundant data are removed, and correlated activities are connected in a graph database. A data model is proposed, developed, and elaborated; the database is then populated with events from the testbed, and the resulting graph is presented. Query commands are then presented as a means to examine and analyze network performance and the relationships among the components of the network. Moreover, we detail the way in which this approach is used to study the impact of wireless communications on the physical processes and illustrate the effect of various wireless network parameters on the performance of the emulated manufacturing work cell. This approach can be deployed as a building block for various descriptive and predictive wireless analysis tools for CPSs.
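As a rough illustration of the integration step (a toy in-memory graph, not the authors' actual data model or graph database), network events can be linked to the physical-process events they coincide with by timestamp proximity:

```python
# Toy events: (timestamp_seconds, description). Names are invented for illustration.
network_events = [(0.10, "cmd packet to robot1"), (2.05, "status packet from robot1")]
process_events = [(0.12, "robot1 starts move"), (2.00, "robot1 reaches machine")]

def build_graph(net, proc, window=0.1):
    """Connect each network event to process events within `window` seconds."""
    edges = []
    for t_n, n in net:
        for t_p, p in proc:
            if abs(t_n - t_p) <= window:
                edges.append((n, p, round(t_p - t_n, 3)))  # edge carries the lag
    return edges

def query(edges, keyword):
    """Minimal 'query command': return edges touching a given component."""
    return [e for e in edges if keyword in e[0] or keyword in e[1]]

graph = build_graph(network_events, process_events)
```

A real deployment would store these nodes and edges in a graph database and express `query` as database queries, but the pairing-by-time idea is the same.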


2019 ◽  
Vol 11 (5) ◽  
pp. 1490 ◽  
Author(s):  
Jonghyuk Kim ◽  
Hyunwoo Hwangbo

In this study, real-time preventive measures were formulated for a crusher process that cannot be automated, because sensors cannot be installed during the production of plastic films, and a real-time early warning system for semi-automated processes was subsequently developed. First, the flow of a typical film process was ascertained. Second, a sustainable plan for real-time forecasting in a process that cannot be automated was developed using the semi-automation method of flexible structure production control (FSPC). Third, during data preprocessing, the process variables most likely to be responsible for failure were selected statistically. Then, a new, unified dataset was created using the link reordering method to transform the time sequence of the continuous process into one time zone. Fourth, a sustainable prediction algorithm was developed using the association rule method along with traditional statistical techniques, and verified using actual data. Finally, the overall developed logic was applied to new production process data to verify its prediction accuracy. The developed real-time early warning system for semi-automated processes contributes significantly to the smart manufacturing process, both theoretically and practically.
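The abstract names the association rule method without detail; a minimal support/confidence rule miner over discretized process events might look like this (the event names, sample runs, and thresholds are invented for illustration, not the paper's actual data):

```python
from itertools import combinations

# Each transaction: discretized events observed in one production run.
runs = [
    {"temp_high", "speed_low", "fail"},
    {"temp_high", "speed_low", "fail"},
    {"temp_high", "speed_ok"},
    {"temp_ok", "speed_low"},
    {"temp_high", "speed_low", "fail"},
]

def support(itemset, transactions):
    """Fraction of transactions containing every item in `itemset`."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def rules(transactions, target="fail", min_support=0.4, min_conf=0.9):
    """Mine rules (antecedent -> target) meeting support and confidence thresholds."""
    items = set().union(*transactions) - {target}
    found = []
    for k in (1, 2):  # single- and two-item antecedents
        for ante in combinations(sorted(items), k):
            a = set(ante)
            s = support(a | {target}, transactions)
            sa = support(a, transactions)
            if sa and s >= min_support and s / sa >= min_conf:
                found.append((ante, target, round(s / sa, 2)))
    return found

warnings = rules(runs)
```

On this toy data, neither condition alone predicts failure confidently enough, but the combination of low speed and high temperature does; an early warning system would watch for such antecedents in the live process stream.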


2012 ◽  
Vol 2012 ◽  
pp. 1-8 ◽  
Author(s):  
Janet E. Squires ◽  
Alison M. Hutchinson ◽  
Anne-Marie Bostrom ◽  
Kelly Deis ◽  
Peter G. Norton ◽  
...  

Researchers strive to optimize data quality in order to ensure that study findings are valid and reliable. In this paper, we describe a data quality control program designed to maximize quality of survey data collected using computer-assisted personal interviews. The quality control program comprised three phases: (1) software development, (2) an interviewer quality control protocol, and (3) a data cleaning and processing protocol. To illustrate the value of the program, we assess its use in the Translating Research in Elder Care Study. We utilize data collected annually for two years from computer-assisted personal interviews with 3004 healthcare aides. Data quality was assessed using both survey and process data. Missing data and data errors were minimal. Mean and median values and standard deviations were within acceptable limits. Process data indicated that in only 3.4% and 4.0% of cases was the interviewer unable to conduct interviews in accordance with the details of the program. Interviewers’ perceptions of interview quality also significantly improved between Years 1 and 2. While this data quality control program was demanding in terms of time and resources, we found that the benefits clearly outweighed the effort required to achieve high-quality data.
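As a rough sketch of the kind of automated checks a data cleaning and processing protocol might include (the field names and acceptable ranges here are illustrative, not the study's actual instrument):

```python
# One record per computer-assisted interview.
records = [
    {"id": 1, "age": 34, "satisfaction": 4},
    {"id": 2, "age": None, "satisfaction": 5},   # missing value
    {"id": 3, "age": 29, "satisfaction": 11},    # out-of-range error
]

# Hypothetical acceptable ranges per survey field.
RANGES = {"age": (18, 90), "satisfaction": (1, 10)}

def quality_report(records):
    """Count missing and out-of-range values per field."""
    report = {field: {"missing": 0, "errors": 0} for field in RANGES}
    for rec in records:
        for field, (lo, hi) in RANGES.items():
            v = rec.get(field)
            if v is None:
                report[field]["missing"] += 1
            elif not lo <= v <= hi:
                report[field]["errors"] += 1
    return report

summary = quality_report(records)
```

Summary statistics such as means, medians, and standard deviations, as the abstract describes, would then be compared against expected limits on the cleaned records.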

