Towards a computing model for the LHCb Upgrade

2019 ◽  
Vol 214 ◽  
pp. 03045
Author(s):  
Concezio Bozzi ◽  
Stefan Roiser

The LHCb experiment will be upgraded for data taking in LHC Run 3. The foreseen trigger output bandwidth of a few GB/s will result in data sets of tens of PB per year, which need to be efficiently streamed and stored offline for low-latency data analysis. In addition, simulation samples up to two orders of magnitude larger than those currently produced are envisaged, with a significant impact on offline computing and storage resources. This contribution discusses the offline computing model and the required offline resources for the LHCb upgrade, as resulting from the above requirements.
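
A quick back-of-the-envelope check connects the two numbers quoted above. The sketch below assumes roughly 5×10⁶ seconds of LHC live time per year, a commonly used round figure that is an assumption here rather than a number from the paper:

```python
# Back-of-the-envelope check: trigger output bandwidth -> yearly data volume.
# The live time (~5e6 s of data taking per year) is an assumed round figure,
# not a number taken from this contribution.

def yearly_volume_pb(bandwidth_gb_s: float, live_seconds: float = 5e6) -> float:
    """Yearly raw data volume in PB for a given trigger output bandwidth."""
    return bandwidth_gb_s * live_seconds / 1e6  # 1 PB = 1e6 GB

for bw in (2, 5, 10):  # "a few GB/s"
    print(f"{bw:>2} GB/s -> ~{yearly_volume_pb(bw):.0f} PB/year")
# 2 GB/s -> ~10 PB/year; 10 GB/s -> ~50 PB/year, i.e. "tens of PB per year"
```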

2014 ◽  
Vol 31 (8) ◽  
pp. 085012 ◽  
Author(s):  
Carlos Filipe Da Silva Costa ◽  
César Augusto Costa ◽  
Odylio Denys Aguiar

2022 ◽  
Vol 2146 (1) ◽  
pp. 012016
Author(s):  
Tianjun Wang ◽  
Cengceng Wang ◽  
Jiangtao Guo ◽  
Dildar Alim

Abstract Today's society is experiencing an information explosion, and visualization technology (VT) is an inevitable product of the development of the information society. With the emergence of multimedia products such as computers, networks, and communications, humans are paying more and more attention to data processing. Many countries have already begun research in this area and have achieved remarkable results. VT is a core part of data analysis, also known as information processing and storage technology, and it has extensive and important applications in the field of data management. However, because the key information hidden in data is often immersed in massive volumes of it, the data must be filtered efficiently, and visual data analysis technology is a crucial part of this. This article adopts an experimental analysis method that aims to provide a new way to solve the problems of traditional technology, and the challenges that may arise in the future, by further understanding existing visual data analysis technology and its development trends. According to the research results, the recognition rate of the optimized color visualization features under different classifiers is higher than that of the original emotional features. It can be seen that visual analysis technology is not limited to data sets with physical meaning, but can also be applied to abstract feature sets such as emotional features.
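
As a rough illustration of the comparison reported in the results (recognition rates of two feature sets under several classifiers), the sketch below runs a few scikit-learn classifiers on a synthetic data set and a noisier copy of it. The data and classifier choices are assumptions for illustration, not the paper's setup:

```python
# Illustrative comparison of "optimized" vs. "original" features under
# several classifiers. All data here is synthetic; the paper's actual
# features and classifiers are not specified, so this is a sketch only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier

X_opt, y = make_classification(n_samples=500, n_features=20,
                               n_informative=10, random_state=0)
rng = np.random.default_rng(0)
X_orig = X_opt + rng.normal(scale=2.0, size=X_opt.shape)  # noisier stand-in

for name, clf in {"SVM": SVC(),
                  "random forest": RandomForestClassifier(random_state=0),
                  "k-NN": KNeighborsClassifier()}.items():
    acc_orig = cross_val_score(clf, X_orig, y, cv=5).mean()
    acc_opt = cross_val_score(clf, X_opt, y, cv=5).mean()
    print(f"{name}: original={acc_orig:.3f}, optimized={acc_opt:.3f}")
```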


2010 ◽  
Vol 41 (01) ◽  
Author(s):  
HP Müller ◽  
A Unrath ◽  
A Riecker ◽  
AC Ludolph ◽  
J Kassubek

2018 ◽  
Vol 2 (1) ◽  
pp. 43
Author(s):  
Suwignyo Suwignyo ◽  
Abdul Rachim ◽  
Arizal Sapitri

Ice is water cooled below 0 °C and used as a complement in drinks. Ice can be found almost everywhere, including at the Wahid Hasyim Sempaja roadside. In a preliminary test, 5 ice cube samples were found to be contaminated with Escherichia coli. The purpose of this study was to determine the relationship between hygiene and sanitation and the presence of Escherichia coli in home-industry ice cubes at the Wahid Hasyim roadside, Samarinda. This research used a quantitative survey method. The population in this study was all of the sellers at the 2nd Wahid Hasyim roadside. The sample size was determined with the Krejcie and Morgan table, yielding 44 samples selected by cluster random sampling. The instruments were questionnaires, observation and laboratory tests. Data analysis was carried out at the univariate and bivariate levels (using the Fisher test, p = 0.05). The conclusion of this study is that there is a relationship between choosing raw materials (p = 0.03) and storing raw materials (p = 0.03) and the presence of Escherichia coli. There was no relationship between processing raw materials into ice cubes and the presence of Escherichia coli (p = 0.15). Ice cube sellers are advised to maintain hygiene and sanitation in the selection, processing and storage of ice cubes.
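
The bivariate analysis named above (Fisher's exact test at p = 0.05) amounts to testing a 2×2 table of hygiene practice versus contamination. A minimal sketch with invented counts, not the study's data:

```python
# Minimal sketch of the bivariate analysis described above: Fisher's
# exact test on a 2x2 table of hygiene practice vs. E. coli presence.
# The counts below are invented for illustration; the study's data
# are not reproduced here.
from scipy.stats import fisher_exact

#                E. coli present, E. coli absent
table = [[12, 8],   # poor raw-material handling
         [5, 19]]   # good raw-material handling

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
if p_value < 0.05:  # the study's significance threshold
    print("association is significant at p = 0.05")
```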


IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 2134-2144 ◽  
Author(s):  
Chengyuan Huang ◽  
Jiao Zhang ◽  
Tao Huang

2014 ◽  
Vol 1 (2) ◽  
pp. 293-314 ◽  
Author(s):  
Jianqing Fan ◽  
Fang Han ◽  
Han Liu

Abstract Big Data bring new opportunities to modern society and challenges to data scientists. On the one hand, Big Data hold great promise for discovering subtle population patterns and heterogeneities that are not possible with small-scale data. On the other hand, the massive sample size and high dimensionality of Big Data introduce unique computational and statistical challenges, including scalability and storage bottlenecks, noise accumulation, spurious correlation, incidental endogeneity and measurement errors. These challenges are distinctive and require new computational and statistical paradigms. This paper gives an overview of the salient features of Big Data and of how these features drive paradigm change in statistical and computational methods as well as computing architectures. We also provide various new perspectives on Big Data analysis and computation. In particular, we emphasize the viability of the sparsest solution in a high-confidence set and point out that the exogeneity assumptions in most statistical methods for Big Data cannot be validated due to incidental endogeneity. They can lead to wrong statistical inferences and consequently wrong scientific conclusions.
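
One of the challenges listed, spurious correlation, is easy to demonstrate: with many independent variables and a modest sample size, the largest sample correlation with a target can be substantial purely by chance. A minimal sketch, where the dimensions and sample size are arbitrary illustrative choices:

```python
# Demonstration of spurious correlation in high dimensions: the maximum
# sample correlation between an outcome and many *independent* predictors
# grows with the number of predictors, purely by chance.
import numpy as np

rng = np.random.default_rng(42)
n = 50  # sample size

for p in (10, 100, 1000, 10000):  # number of independent predictors
    X = rng.standard_normal((n, p))
    y = rng.standard_normal(n)  # independent of every column of X
    Xc, yc = X - X.mean(axis=0), y - y.mean()
    # Pearson correlation of y with each column of X
    corrs = Xc.T @ yc / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc))
    print(f"p={p:>6}: max |corr| = {np.abs(corrs).max():.2f}")
# The maximum spurious correlation grows with p even though X and y
# are generated independently.
```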


2018 ◽  
Vol 20 (1) ◽  
Author(s):  
Tiko Iyamu

Background: Over the years, big data analytics has been carried out statically, in a programmed way that does not allow for the interpretation of data sets from a subjective perspective. This approach limits an understanding of why and how data sets manifest themselves in the various forms that they do, which has a negative impact on the accuracy, redundancy and usefulness of data sets and, in turn, affects the value of operations and the competitive effectiveness of an organisation. Also, the current single approach lacks the detailed examination of data sets that big data deserve in order to improve purposefulness and usefulness.

Objective: The purpose of this study was to propose a multilevel approach to big data analysis, including an examination of how a sociotechnical theory, actor network theory (ANT), can be used complementarily with analytic tools for big data analysis.

Method: Qualitative methods were employed from an interpretivist perspective.

Results: From the findings, a framework that offers big data analytics at two levels, micro- (strategic) and macro- (operational), was developed. Based on the framework, a model was developed that can be used to guide the analysis of heterogeneous data sets that exist within networks.

Conclusion: The multilevel approach ensures a fully detailed analysis, which is intended to increase accuracy, reduce redundancy and put the manipulation and manifestation of data sets into perspective for improved organisational competitiveness.


Author(s):  
Gautam Das

In recent years, advances in data collection and management technologies have led to a proliferation of very large databases. These large data repositories typically are created in the hope that, through analysis such as data mining and decision support, they will yield new insights into the data and the real-world processes that created them. In practice, however, while the collection and storage of massive datasets has become relatively straightforward, effective data analysis has proven more difficult to achieve. One reason that data analysis successes have proven elusive is that most analysis queries, by their nature, require aggregation or summarization of large portions of the data being analyzed. For multi-gigabyte data repositories, this means that processing even a single analysis query involves accessing enormous amounts of data, leading to prohibitively expensive running times. This severely limits the feasibility of many types of analysis applications, especially those that depend on timeliness or interactivity.
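
A standard response to this problem, and a common theme in approximate query processing, is to estimate aggregates from a small random sample instead of scanning the full repository. The sketch below illustrates the general idea with a SUM query; it is an illustration of the technique, not a specific method taken from this text:

```python
# Sampling-based approximate aggregation: estimate a SUM over a large
# table from a small uniform sample, scaling up by the sampling fraction.
# Illustrative only; not a method taken from the text.
import random

population = [random.gauss(100, 25) for _ in range(10_000_000)]  # stand-in "table"
exact_sum = sum(population)

k = 10_000  # sample size: 0.1% of the rows
sample = random.sample(population, k)
estimated_sum = sum(sample) * (len(population) / k)  # scale up by 1/fraction

rel_error = abs(estimated_sum - exact_sum) / exact_sum
print(f"exact={exact_sum:.3e}, estimate={estimated_sum:.3e}, "
      f"relative error={rel_error:.2%}")
# Reading 0.1% of the data typically yields an estimate within a fraction
# of a percent of the true sum, which is what makes interactive analysis
# over multi-gigabyte repositories feasible.
```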


F1000Research ◽  
2014 ◽  
Vol 3 ◽  
pp. 146 ◽  
Author(s):  
Guanming Wu ◽  
Eric Dawson ◽  
Adrian Duong ◽  
Robin Haw ◽  
Lincoln Stein

High-throughput experiments are routinely performed in modern biological studies. However, extracting meaningful results from massive experimental data sets is a challenging task for biologists. Projecting data onto pathway and network contexts is a powerful way to unravel patterns embedded in seemingly scattered large data sets and assist knowledge discovery related to cancer and other complex diseases. We have developed a Cytoscape app called “ReactomeFIViz”, which utilizes a highly reliable gene functional interaction network and human curated pathways from Reactome and other pathway databases. This app provides a suite of features to assist biologists in performing pathway- and network-based data analysis in a biologically intuitive and user-friendly way. Biologists can use this app to uncover network and pathway patterns related to their studies, search for gene signatures from gene expression data sets, reveal pathways significantly enriched by genes in a list, and integrate multiple genomic data types into a pathway context using probabilistic graphical models. We believe our app will give researchers substantial power to analyze intrinsically noisy high-throughput experimental data to find biologically relevant information.
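
The enrichment analysis mentioned above ("pathways significantly enriched by genes in a list") is conventionally computed with a hypergeometric test. The sketch below shows that standard calculation with invented counts; it is not ReactomeFIViz's implementation:

```python
# Pathway enrichment via the hypergeometric test, the standard calculation
# behind "pathways significantly enriched by genes in a list".
# All counts below are invented for illustration.
from scipy.stats import hypergeom

N = 20000  # background genes (e.g. the whole annotated genome)
K = 150    # genes annotated to the pathway of interest
n = 300    # genes in the user's input list
k = 12     # overlap: input genes that fall in the pathway

# P(overlap >= k) if the input list were drawn at random from the background
p_value = hypergeom.sf(k - 1, N, K, n)
print(f"enrichment p-value = {p_value:.2e}")
# In practice this is repeated for every pathway and the p-values are
# corrected for multiple testing (e.g. Benjamini-Hochberg FDR).
```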

