Zuverlässigkeitsanalyse von komplexen Datenverarbeitungsstrukturen mit Hilfe von Fehlerbäumen / Reliability analysis of complex data processing structures by means of fault trees

1977 ◽  
Vol 19 (1-6) ◽  
Author(s):  
W. G. SCHNEEWEISS

Author(s):  
Abou_el_ela Abdou Hussein

Day by day, advances in web technologies have led to tremendous growth in the volume of data generated daily. This mountain of huge, widely spread data sets leads to the phenomenon called big data: a collection of massive, heterogeneous, unstructured, and complex data sets. The big data life cycle can be represented as collecting (capture), storing, distributing, manipulating, interpreting, analyzing, investigating, and visualizing big data. Traditional techniques such as Relational Database Management Systems (RDBMS) cannot handle big data because of their inherent limitations, so advances in computing architecture are required to handle both the data storage requisites and the heavy processing needed to analyze huge volumes and varieties of data economically. Among the many technologies for manipulating big data, one is Hadoop. Hadoop can be understood as an open-source distributed data processing framework that is one of the prominent and well-known solutions to the problem of handling big data. Apache Hadoop is based on the Google File System and the MapReduce programming paradigm. In this paper we survey all big data characteristics, starting from the first three V's, which have been extended over time through research to more than fifty-six V's, and compare researchers' accounts to arrive at the best representation and the most precise clarification of all big data V characteristics. We highlight the challenges facing big data processing and show how to overcome them using Hadoop, and we discuss its use in processing big data sets as a solution for resolving various problems in a distributed, cloud-based environment. This paper mainly focuses on the different components of Hadoop, such as Hive, Pig, and HBase. We also give a complete description of Hadoop's pros and cons, and propose improvements to address Hadoop's problems by choosing a proposed cost-efficient scheduler algorithm for heterogeneous Hadoop systems.
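The MapReduce paradigm that Hadoop borrows from Google can be illustrated with the canonical word-count example. The sketch below is a single-process simulation of the map, shuffle, and reduce phases, not code from the paper; in a real Hadoop job these phases run distributed across the cluster:

```python
from collections import defaultdict

def map_phase(documents):
    """Map: emit a (word, 1) pair for every word in every document."""
    for doc in documents:
        for word in doc.lower().split():
            yield word, 1

def shuffle_phase(pairs):
    """Shuffle: group all emitted values by key, as the framework would."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: sum the counts collected for each word."""
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["big data needs big tools", "hadoop processes big data"]
counts = reduce_phase(shuffle_phase(map_phase(docs)))
print(counts["big"])  # occurrences of "big" summed across all documents
```

The key property exploited by Hadoop is that map and reduce are independent per key, so each phase parallelizes across machines with only the shuffle requiring data movement.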


Diagnostics ◽  
2020 ◽  
Vol 10 (12) ◽  
pp. 1052
Author(s):  
Petr G. Lokhov ◽  
Oxana P. Trifonova ◽  
Dmitry L. Maslov ◽  
Elena E. Balashova

In metabolomics, mass spectrometry is used to detect a large number of low-molecular-weight substances in a single analysis. Such a capacity could have direct application in disease diagnostics. However, this is challenging because of the complexity of the analysis, and finding a way to simplify it while maintaining the diagnostic capability is an urgent task. It has been proposed to use the metabolomic signature without complex data processing (mass peak detection, alignment, normalization, and identification of substances, as well as any complex statistical analysis) to make the analysis simpler and more rapid. Methods: A label-free approach was implemented in the metabolomic signature, which makes measurement of actual or conditional concentrations unnecessary, uses only mass peak relations, and minimizes mass spectra processing. The approach was tested on the diagnosis of impaired glucose tolerance (IGT). Results: The label-free metabolic signature demonstrated a diagnostic accuracy for IGT of 88% (specificity 85%, sensitivity 90%, and area under the receiver operating characteristic curve (AUC) of 0.91), which is considered good quality for diagnostics. Conclusions: It is possible to compile label-free signatures for diseases that allow the disease to be diagnosed in situ, i.e., right at the mass spectrometer, without complex data processing. This achievement makes all mass spectrometers potentially versatile diagnostic devices and accelerates the introduction of metabolomics into medicine.
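The abstract describes the label-free signature only at a high level. As one plausible reading of "uses only mass peak relations", a signature could record which of two peaks is higher and classify a new spectrum by how many of those expected relations it reproduces. The sketch below is purely illustrative; the pair indices, intensities, and threshold are all hypothetical and not taken from the paper:

```python
def signature_relations(spectrum, pairs):
    """Evaluate which peak-pair relations (intensity[i] > intensity[j]) hold."""
    return [spectrum[i] > spectrum[j] for i, j in pairs]

def classify(spectrum, pairs, expected, threshold=0.5):
    """Label a spectrum by the fraction of expected relations it reproduces."""
    observed = signature_relations(spectrum, pairs)
    hits = sum(o == e for o, e in zip(observed, expected))
    return "IGT" if hits / len(pairs) > threshold else "control"

# Hypothetical raw peak intensities at fixed mass bins; note that no
# normalization or peak identification is applied, matching the label-free idea.
signature_pairs = [(0, 1), (2, 3), (4, 0)]
expected = [True, False, True]

sample = [9.0, 4.0, 2.0, 7.0, 12.0]
print(classify(sample, signature_pairs, expected))  # "IGT"
```

Because only relations between peaks are used, the method is insensitive to overall intensity scaling, which is one way such a signature could avoid normalization entirely.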


2012 ◽  
Vol 253-255 ◽  
pp. 2091-2096
Author(s):  
Yan Feng Tang ◽  
Hui Mei Li ◽  
Xiang Kai Liu ◽  
Shao Qing Liu

The Bayesian method was introduced into vehicle fault data processing. Parameter estimation and selection of the optimal distribution model based on the Bayesian method were studied, and an example was given. This provides a reference for the application of Bayesian methods in large, complicated systems such as vehicle equipment.
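The abstract does not give the paper's formulas. The general flavor of Bayesian parameter estimation for fault data can be illustrated with the simplest conjugate case: a Beta prior on a failure probability updated by Binomial trial data. All numbers below are hypothetical:

```python
# Conjugate Beta-Binomial update for a component failure probability p.
# With prior Beta(a, b), after observing k failures in n trials the
# posterior is Beta(a + k, b + n - k), with mean (a + k) / (a + b + n).

def posterior_mean(a, b, k, n):
    """Posterior mean of p under a Beta(a, b) prior and k failures in n trials."""
    return (a + k) / (a + b + n)

# Hypothetical vehicle-fault data: 3 failures observed in 50 test runs,
# combined with a weakly informative Beta(1, 9) prior (prior mean 0.1).
print(round(posterior_mean(1.0, 9.0, 3, 50), 4))  # 0.0667
```

The appeal for large, complicated systems is visible even in this toy case: sparse field data (3 failures) is blended with prior engineering knowledge rather than relied on alone.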


Sensors ◽  
2019 ◽  
Vol 19 (23) ◽  
pp. 5125 ◽  
Author(s):  
Liu ◽  
Li ◽  
Yu ◽  
Zhao ◽  
Zhang

Radio frequency identification (RFID) has shown its potential in human–machine interaction thanks to its inherent identification function and the physical information carried by its signals, but complex data processing and poor input accuracy restrict its application and promotion in practical use. This paper proposes a novel finger-controlled passive RFID tag design for human–machine interaction. The tag antenna is based on a dipole antenna with a separated T-match structure, which allows the state of the tag to be adjusted by the press of a finger. The state of the proposed tag can be recognized directly from the code received by the RFID reader, so no complex data processing is needed. Since the code is hardly affected by the surroundings, the proposed tag is suitable for use as a wireless switch or control button in multiple scenarios. Moreover, arrays of the proposed tag with rational arrangements could yield a series of battery-free manual control devices, such as a wireless keyboard, a remote controller, and a wireless gamepad. As an example application of the proposed tag array, a 3 × 4 array of the finger-controlled tag, arranged like a keypad, is presented to constitute a simple passive RFID keyboard; it achieves precise, convenient, quick, and practical command and text input into machines by pressing the tags with fingers. Simulations and measurements of the proposed tag and tag array have been carried out to validate their performance in human–machine interaction.
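On the reader side, "recognized directly from the code" suggests that decoding reduces to a lookup from received tag codes to keys. The sketch below illustrates that idea for the 3 × 4 keypad; the code values and key layout are hypothetical, not taken from the paper:

```python
# Hypothetical mapping from the code reported by the RFID reader to a key
# on the 3 x 4 keypad. Pressing a tag changes its antenna state, so the
# reader receives that tag's code, which decodes directly to a character.
KEYPAD = {
    "E001": "1", "E002": "2", "E003": "3",
    "E004": "4", "E005": "5", "E006": "6",
    "E007": "7", "E008": "8", "E009": "9",
    "E010": "*", "E011": "0", "E012": "#",
}

def decode(read_events):
    """Turn a stream of reader codes into typed text; no signal processing."""
    return "".join(KEYPAD.get(code, "?") for code in read_events)

print(decode(["E004", "E002"]))  # "42"
```

This is the design's selling point in miniature: all the complexity lives in the tag antenna's mechanical state change, leaving the software side a constant-time table lookup.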


2017 ◽  
Vol 50 (3) ◽  
pp. 959-966 ◽  
Author(s):  
J. Filik ◽  
A. W. Ashton ◽  
P. C. Y. Chang ◽  
P. A. Chater ◽  
S. J. Day ◽  
...  

A software package for the calibration and processing of powder X-ray diffraction and small-angle X-ray scattering data is presented. It provides a multitude of data processing and visualization tools as well as a command-line scripting interface for on-the-fly processing and the incorporation of complex data treatment tasks. Customizable processing chains permit the execution of many data processing steps to convert a single image or a batch of raw two-dimensional data into meaningful data and one-dimensional diffractograms. The processed data files contain the full data provenance of each process applied to the data. The calibration routines can run automatically even for high energies and also for large detector tilt angles. Some of the functionalities are highlighted by specific use cases.


2012 ◽  
Vol 461 ◽  
pp. 418-420
Author(s):  
Yi Min Mo ◽  
Xin Shun Tong ◽  
Li Hua Yang

The wide application of information technology has greatly improved work efficiency, but it has also led to the accumulation of large, complex data. How to extract valuable information from vast amounts of data is a key issue in data processing. This paper studies the application of data mining technology in tobacco commercial enterprises from three aspects: market demand forecasting, customer relationship management, and historical data processing. It analyzes how to use data mining technology to make full use of large amounts of data and provide a basis for tobacco commercial enterprises' decision-making.

