AIDS data processing and analysis system

1967 ◽  
Author(s):  
C. FAIRBANK


2010 ◽
Vol 36 ◽  
pp. 103-108
Author(s):  
Kuan Fang He ◽  
Ping Yu Zhu ◽  
Xue Jun Li ◽  
D.M. Xiao

Distributed optical fiber sensing technology has been applied to dam safety monitoring. The most typical instrument is the Swiss DiTeSt-STA202 distributed optical fiber analyzer, which can simultaneously measure strain and temperature at thousands of points along a fiber and is suitable for measuring objects up to several kilometers long. However, its functions are limited to data acquisition and the most basic conversion of strain and temperature, with no deeper data processing capability, and its interface is neither user-friendly nor adaptable. In this paper, secondary development of the DiTeSt-STA202 is carried out with Delphi and ADO technology, based on an analysis of the instrument's data storage types and formats. The resulting data processing and analysis system operates efficiently and accurately.
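The paper implements its post-processing in Delphi with ADO; as a language-neutral illustration of the same idea, the sketch below reads stored strain/temperature records from a database and applies one processing step beyond raw conversion. The table and column names (measurement, position_m, strain_ue, temp_c) are hypothetical, not the analyzer's actual storage schema.

```python
# Hypothetical post-processing of DiTeSt-STA202 records pulled from a
# database, analogous to the paper's Delphi/ADO secondary development.
# Table and column names are illustrative, not the instrument's real schema.
import sqlite3
from statistics import mean, stdev

def load_profile(db_path: str):
    """Read one strain/temperature profile ordered by fiber position."""
    with sqlite3.connect(db_path) as conn:
        return conn.execute(
            "SELECT position_m, strain_ue, temp_c "
            "FROM measurement ORDER BY position_m"
        ).fetchall()

def flag_strain_anomalies(rows, k: float = 3.0):
    """A processing step beyond raw conversion: flag points whose strain
    deviates from the profile mean by more than k standard deviations."""
    strains = [r[1] for r in rows]
    mu, sigma = mean(strains), stdev(strains)
    return [r for r in rows if abs(r[1] - mu) > k * sigma]
```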


Author(s):  
Ponomarenko ◽  
Teleus

The subject of the research is an approach to using business intelligence for integrated data processing and analysis in order to optimize a company's activities. The purpose of this article is to study the peculiarities of using BI systems as one of the advanced approaches to processing and analyzing the large data sets that are continuously accumulated from various sources. Methodology. The research methodology comprises system-structural and comparative analysis (to study the application of BI systems when working with large data sets); the monographic method (the study of various software solutions on the business intelligence market); and economic analysis (to assess the possibility of using business intelligence systems to strengthen companies' competitive positions). The scientific novelty consists in identifying the features of using the business analytics model in modern conditions to optimize companies' activities through the use of complex, and in many cases unstructured, information. The main directions of working with big data are disclosed, from the stage of collection and storage in specialized repositories through to comprehensive analysis of the information. The main advantages of using dashboards to present research results are given. A comprehensive analysis of software products on the business intelligence market has been carried out. Conclusions. The use of business intelligence allows companies to optimize their activities by making effective management decisions. The availability of a large number of BI tools allows a company to adapt its analysis system to the available data and the company's existing needs. Software solutions make it possible to build dashboards around a selected system of indicators.
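As a loose illustration of the dashboard-indicator idea discussed above, the following sketch aggregates raw records into the kind of key indicators a BI dashboard tile would display; the data and column names are invented for the example.

```python
# Minimal sketch: raw records accumulated from several sources are
# aggregated into dashboard indicators. All names and figures are invented.
import pandas as pd

sales = pd.DataFrame({
    "region":  ["North", "North", "South", "South"],
    "channel": ["web", "retail", "web", "retail"],
    "revenue": [120_000, 95_000, 80_000, 110_000],
    "orders":  [1_500, 900, 1_100, 1_000],
})

# One aggregate per dashboard tile: total revenue, order count, average check.
kpis = (sales.groupby("region")
             .agg(revenue=("revenue", "sum"),
                  orders=("orders", "sum"))
             .assign(avg_check=lambda d: d.revenue / d.orders))
print(kpis)
```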


2017 ◽  
Vol 14 (1) ◽  
pp. 64-68 ◽  
Author(s):  
Peng Shi ◽  
Li Li

The functions of a network analysis system include the detection and analysis of network data streams. Based on the results of the analysis, we monitor network incidents and avoid security risks, which improves network performance and increases network availability. Because data flows through the network continuously, the defining characteristic of a network analysis system is that it is a real-time system. The high demands of network data analysis and network fault handling mean the system must process real-time network data very efficiently. Stream computing is a technique designed specifically for processing real-time data streams. Its core idea is that the value of data decreases with the passage of time, so data must be processed as soon as it appears. We therefore use stream computing to design a network analysis system that meets these real-time requirements. Moreover, stream computing frameworks have been widely welcomed in the field for their good scalability, ease of use, and flexibility. In this paper, we first introduce the characteristics of data processing based on stream computing and of traditional data processing, point out their differences, and introduce the technique of stream computing. Then we present the architecture of a network analysis system designed on the basis of stream computing. The architecture has two main components: a logic processing layer and a communication layer. We describe the characteristics and functions of each component in detail and introduce the system's load-balancing algorithm. Finally, through experiments, we verify the effectiveness of the system's dynamic expansion and load balancing.
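As a minimal sketch of the dispatch problem this architecture faces, the code below routes packets from a communication layer to logic-processing workers while keeping each flow on a single worker. The least-loaded policy is an assumed stand-in, since the abstract does not specify the paper's actual load-balancing algorithm.

```python
# Sketch of flow dispatch between a communication layer and logic-processing
# workers. Least-loaded assignment is an assumption, not the paper's algorithm.
import heapq

class LeastLoadedDispatcher:
    def __init__(self, n_workers: int):
        # Heap of (current_load, worker_id): the lightest worker pops first.
        self.heap = [(0, w) for w in range(n_workers)]
        heapq.heapify(self.heap)
        self.flow_to_worker: dict[str, int] = {}

    def route(self, flow_id: str) -> int:
        """Keep all packets of one flow on one worker; assign each new
        flow to the currently least-loaded worker."""
        if flow_id in self.flow_to_worker:
            return self.flow_to_worker[flow_id]
        load, worker = heapq.heappop(self.heap)
        heapq.heappush(self.heap, (load + 1, worker))
        self.flow_to_worker[flow_id] = worker
        return worker

dispatcher = LeastLoadedDispatcher(n_workers=4)
for packet in ["10.0.0.1:443", "10.0.0.2:80", "10.0.0.1:443"]:
    print(packet, "->", dispatcher.route(packet))
```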


2015 ◽  
Vol 17 (4) ◽  
pp. 327-330 ◽  
Author(s):  
Xiaodan Zhang ◽  
Chundong Hu ◽  
Peng Sheng ◽  
Yuanzhe Zhao ◽  
Deyun Wu ◽  
...  

2020 ◽  
Author(s):  
Alicia Clum ◽  
Marcel Huntemann ◽  
Brian Bushnell ◽  
Brian Foster ◽  
Bryce Foster ◽  
...  

ABSTRACT
The DOE JGI Metagenome Workflow performs metagenome data processing, including assembly; structural, functional, and taxonomic annotation; and binning of metagenomic datasets that are subsequently included in the Integrated Microbial Genomes and Microbiomes (IMG/M) comparative analysis system (I. Chen, K. Chu, K. Palaniappan, M. Pillay, A. Ratner, J. Huang, M. Huntemann, N. Varghese, J. White, R. Seshadri, et al., Nucleic Acids Research, 2019) and provided for download via the Joint Genome Institute (JGI) Data Portal (https://genome.jgi.doe.gov/portal/). This workflow scales to run on thousands of metagenome samples per year, which can vary in the complexity of their microbial communities and in sequencing depth. Here we describe the tools, databases, and parameters used at each step of the workflow, to help with interpretation of the metagenome data available in IMG and to enable researchers to apply this workflow to their own data. We use 20 publicly available sediment metagenomes to illustrate the computing requirements for the different steps and to highlight typical results of data processing. The workflow modules for read filtering and metagenome assembly are available as a Workflow Description Language (WDL) file (https://code.jgi.doe.gov/BFoster/jgi_meta_wdl.git). The workflow modules for annotation and binning are provided as a service to the user community at https://img.jgi.doe.gov/submit and require filling out the project and associated metadata descriptions in the Genomes OnLine Database (GOLD) (S. Mukherjee, D. Stamatis, J. Bertsch, G. Ovchinnikova, H. Katta, A. Mojica, I. Chen, N. Kyrpides, and T. Reddy, Nucleic Acids Research, 2018).

IMPORTANCE
The DOE JGI Metagenome Workflow is designed for processing metagenomic datasets starting from Illumina fastq files. It performs data pre-processing, error correction, assembly, structural and functional annotation, and binning. The results of processing are provided in several standard formats, such as fasta and gff, and can be used for subsequent integration into the Integrated Microbial Genomes (IMG) system, where they can be compared to a comprehensive set of publicly available metagenomes. As of 7/30/2020, 7,155 JGI metagenomes have been processed by the JGI Metagenome Workflow.
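The actual pipeline is distributed as the WDL file linked above; the sketch below only schematizes the stage ordering the abstract describes (read filtering, assembly, annotation, binning), with each stage consuming the previous stage's output. The command strings are placeholders, not the real JGI tool invocations.

```python
# Schematic of the workflow's stage ordering only; the real pipeline is the
# WDL file at https://code.jgi.doe.gov/BFoster/jgi_meta_wdl.git. The command
# strings below are placeholders, not the actual JGI tool invocations.
import subprocess
from pathlib import Path

STAGES = [
    ("filter",   "filter_reads {inp} {out}"),      # read QC / error correction
    ("assemble", "assemble_contigs {inp} {out}"),  # metagenome assembly
    ("annotate", "annotate_contigs {inp} {out}"),  # structural/functional annotation
    ("bin",      "bin_contigs {inp} {out}"),       # genome binning
]

def run_workflow(fastq: str, workdir: str = "work") -> Path:
    """Chain the stages: each one reads the previous stage's output."""
    current = Path(fastq)
    for name, template in STAGES:
        out = Path(workdir) / f"{name}.out"
        out.parent.mkdir(parents=True, exist_ok=True)
        subprocess.run(template.format(inp=current, out=out),
                       shell=True, check=True)
        current = out
    return current
```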


Author(s):  
Ding Xiong ◽  
Lu Yan ◽  
Peng Qiong

Physiological status data is important for supporting athletes in training and competition. The Physiological Plan system designed and implemented in this article is divided into a hardware layer, a data processing layer, an algorithm layer, and an interface layer. The hardware layer adopts the Berkeley Tricorder platform. The data processing layer integrates the data of each sensor using the adapter pattern. The algorithm layer includes a filtering algorithm, a peak detection algorithm, and an outlier detection algorithm. At the interface layer, the coach interacts with the athlete and the results are presented to the coach. The system collects and analyzes the athletes' ECG, EMG, and 3D acceleration simultaneously and feeds the resulting analysis back to the coaches, solving the problem that athletes' physiological data previously could not be collected and analyzed in real time. The experimental results show that the system can effectively assist coaches in monitoring and analyzing the athletes' state during training and competition.
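A minimal sketch of the three algorithm-layer steps named above, applied to a generic 1-D sensor signal; the window size, thresholds, and sample values are assumptions for illustration, not the paper's parameters.

```python
# Sketch of the algorithm layer: filtering, peak detection, outlier detection.
# Window sizes, thresholds, and the sample signal are illustrative assumptions.
from statistics import mean, stdev

def moving_average(signal, window=5):
    """Filtering: simple moving-average smoothing."""
    half = window // 2
    return [mean(signal[max(0, i - half): i + half + 1])
            for i in range(len(signal))]

def detect_peaks(signal, threshold):
    """Peak detection: local maxima above an amplitude threshold."""
    return [i for i in range(1, len(signal) - 1)
            if signal[i] > threshold
            and signal[i] >= signal[i - 1]
            and signal[i] >= signal[i + 1]]

def detect_outliers(signal, k=3.0):
    """Outlier detection: samples more than k standard deviations from the mean."""
    mu, sigma = mean(signal), stdev(signal)
    return [i for i, x in enumerate(signal) if abs(x - mu) > k * sigma]

ecg = [0.1, 0.2, 1.8, 0.2, 0.1, 0.15, 2.1, 0.1, 9.9, 0.2]
smoothed = moving_average(ecg)
print(detect_peaks(ecg, threshold=1.0), detect_outliers(ecg))
```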


1998 ◽  
Vol 14 (2) ◽  
pp. 211-222 ◽  
Author(s):  
Aki Salo ◽  
Paul N. Grimshaw

Eight trials each of 7 athletes (4 women and 3 men) were videotaped and digitized in order to investigate the variation sources and kinematic variability of video motion analysis in sprint hurdles. Mean coefficients of variation (CVs) of individuals ranged from 1.0 to 92.2% for women and from 1.2 to 209.7% for men. There were 15 and 14 variables, respectively, in which mean CVs revealed less than 5% variation. In redigitizing, CVs revealed <1.0% for 12 variables for the women's trials and 10 variables for the men's trials. These results, together with variance components (between-subjects, within-subject, and redigitizing), showed that one operator and the analysis system together produced repeatable values for most of the variables. The most repeatable variables by this combination were displacement variables. However, further data processing (e.g., differentiation) appeared to have some unwanted effects on repeatability. Regarding the athletes' skill, CVs showed that athletes can reproduce most parts of their performance within certain (reasonably low) limits.
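For reference, the coefficient of variation reported throughout this abstract is the standard deviation expressed as a percentage of the mean, CV = (s / mean) * 100%. A short sketch, with illustrative trial values rather than the study's data:

```python
# The repeatability statistic used in the abstract: coefficient of variation,
# CV = (standard deviation / mean) * 100%. Trial values below are illustrative.
from statistics import mean, stdev

def coefficient_of_variation(trials):
    """Per-variable CV over an athlete's repeated trials, in percent."""
    return stdev(trials) / mean(trials) * 100.0

hurdle_clearance_m = [0.31, 0.33, 0.32, 0.30, 0.32, 0.31, 0.33, 0.32]  # 8 trials
print(f"CV = {coefficient_of_variation(hurdle_clearance_m):.1f}%")
```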

