Research on the load management and assessment system for the high energy-consuming enterprises based on key index data analysis

Author(s):  
Fei Hao ◽  
Zhen Yuan ◽  
Quan Gu ◽  
Kai Huang
2021 ◽  
pp. 102877
Author(s):  
Sunwoo Lee ◽  
Kai-yuan Hou ◽  
Kewei Wang ◽  
Saba Sehrish ◽  
Marc Paterno ◽  
...  

Author(s):  
Valentina Avati ◽  
Milosz Blaszkiewicz ◽  
Enrico Bocchi ◽  
Luca Canali ◽  
Diogo Castro ◽  
...  

2018 ◽  
Vol 1 (2) ◽  
Author(s):  
Agista Ayu Aksari

On 1 July 2012, State-Owned Enterprises (SOEs) became Value Added Tax (VAT) collectors under Ministry of Finance Regulation No. 85/PMK.03/2012, which appoints SOEs to collect, deposit, and report VAT and Sales Tax on Luxury Goods and sets out the procedures for doing so. The purpose of this research is to determine the difference between an SOE acting as a VAT collector and one that is not. The object of this research is PT Pelabuhan Indonesia III Cabang Benoa. The data analysis compares the calculation and reporting of VAT before the company became a VAT collector and after it became one. The results show that before becoming a collector, VAT at PT Pelabuhan Indonesia III Cabang Benoa was charged directly by the tax authority (fiskus) under an official assessment system, whereas after its designation as a collector (wapu) the company operates under a self-assessment system. Tax receipts as a VAT collector are issued in three copies, compared with two copies before; for the SSP (tax payment slip), four copies were used before the company became a collector. DOI 10.5281/zenodo.1214932


2021 ◽  
Vol 251 ◽  
pp. 04020
Author(s):  
Yu Hu ◽  
Ling Li ◽  
Haolai Tian ◽  
Zhibing Liu ◽  
Qiulan Huang ◽  
...  

Daisy (Data Analysis Integrated Software System) has been designed for the analysis and visualisation of X-ray experiments. To address the requirements of the Chinese radiation facilities community, which span an extensive range from purely algorithmic problems to scientific computing infrastructure, Daisy provides a cloud-native platform to support on-site data analysis services with fast feedback and interaction. Furthermore, its plug-in based architecture makes it convenient to process the expected high-throughput data flow in parallel at next-generation facilities such as the High Energy Photon Source (HEPS). The objectives, functionality and architecture of Daisy are described in this article.
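The plug-in based processing the abstract describes can be sketched as a minimal pipeline in which each analysis step is a registered plug-in; the class and method names below are purely illustrative assumptions, not Daisy's actual API.

```python
# Illustrative sketch of a plug-in style analysis pipeline; names are
# hypothetical and do not reflect the real Daisy interfaces.

class Plugin:
    """Base class: each analysis step implements process()."""
    def process(self, data):
        raise NotImplementedError

class Normalize(Plugin):
    """Scale values so they sum to 1."""
    def process(self, data):
        total = sum(data)
        return [x / total for x in data] if total else data

class Threshold(Plugin):
    """Keep only values at or above a cut."""
    def __init__(self, cut):
        self.cut = cut
    def process(self, data):
        return [x for x in data if x >= self.cut]

def run_pipeline(plugins, data):
    # Each plug-in consumes the previous step's output.
    for p in plugins:
        data = p.process(data)
    return data

result = run_pipeline([Normalize(), Threshold(0.25)], [1.0, 2.0, 2.0])
print(result)  # [0.4, 0.4]
```

Because each step is an independent object behind a common interface, stages can be added, reordered, or distributed across workers without touching the others, which is the property that makes such a design attractive for high-throughput facilities.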


10.14311/1718 ◽  
2013 ◽  
Vol 53 (1) ◽  
Author(s):  
Aleksander Filip Żarnecki ◽  
Lech Wiktor Piotrowski ◽  
Lech Mankiewicz ◽  
Sebastian Małek

The Luiza analysis framework for GLORIA is based on the Marlin package, originally developed for data analysis in the International Linear Collider (ILC), a High Energy Physics (HEP) project. HEP experiments have to deal with enormous amounts of data, so distributed data analysis is essential. The Marlin framework concept seems well suited to the needs of GLORIA. The idea (and large parts of the code) taken from Marlin is that every computing task is implemented as a processor (module) that analyzes the data stored in an internal data structure, and its output is added back to that collection. The advantage of this modular approach is that it keeps things as simple as possible. Each step of the full analysis chain, e.g. from raw images to light curves, can be processed step by step, and the output of each step is still self-consistent and can be fed into the next step without any manipulation.
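The processor-chain pattern described above can be sketched in a few lines: each processor reads from a shared data structure and appends its output back to it, so the next processor finds everything it needs there. The names are illustrative assumptions, not the actual Marlin or Luiza API.

```python
# Minimal sketch of a Marlin-style processor chain; class names are
# hypothetical, not the real Luiza/Marlin interfaces.

class Event(dict):
    """Shared internal data structure passed along the chain."""

class RawToCalibrated:
    """First step: turn raw pixel values into calibrated ones."""
    def process(self, event):
        # Output is added back to the shared collection.
        event["calibrated"] = [x * 1.5 for x in event["raw"]]

class CalibratedToCurve:
    """Second step: reduce calibrated values to a light-curve point."""
    def process(self, event):
        event["light_curve"] = sum(event["calibrated"])

def run_chain(processors, event):
    # Each step's output stays in the event, self-consistent for the next step.
    for p in processors:
        p.process(event)
    return event

evt = run_chain([RawToCalibrated(), CalibratedToCurve()],
                Event(raw=[1.0, 2.0]))
print(evt["light_curve"])  # 4.5
```

The intermediate collections (`calibrated` here) remain in the event after the chain runs, which is what lets any step's output be inspected or fed into a different downstream processor without extra bookkeeping.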


2020 ◽  
Vol 245 ◽  
pp. 06042
Author(s):  
Oliver Gutsche ◽  
Igor Mandrichenko

A columnar data representation is known to be an efficient way to store data, particularly when analysis typically touches only a small fragment of the available data structures. Formats such as Apache Parquet go a step further by also splitting the data horizontally, which allows data analysis to be parallelized easily. Building on the general idea of columnar data storage, and working on the [LDRD Project], we have developed a striped data representation which, we believe, is better suited to the needs of High Energy Physics data analysis. A traditional columnar approach allows for efficient analysis of complex structures. While keeping all the benefits of columnar data representations, the striped mechanism goes further by enabling easy parallelization of computations without requiring special hardware. We present an implementation and some performance characteristics of such a data representation mechanism using either a distributed NoSQL database or a local file system, unified under the same API and data representation model. The representation is efficient and at the same time simple enough to allow a common data model and API for a wide range of underlying storage mechanisms, such as distributed NoSQL databases and local file systems. Striped storage adopts NumPy arrays as its basic data representation format, which makes it easy and efficient to use in Python applications. The Striped Data Server is a web service that hides the server implementation details from the end user, easily exposes data to WAN users, and makes it possible to use well-known, mature data caching solutions to further increase data access efficiency. We consider the Striped Data Server to be the core of an enterprise-scale data analysis platform for High Energy Physics and similar areas of data processing.
We have been testing this architecture with a 2 TB dataset from a CMS dark matter search and plan to expand it to multiple 100 TB or even PB-scale datasets. We present the striped format, the Striped Data Server architecture, and performance test results.
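The core idea of the striped layout, as the abstract describes it, can be sketched with NumPy: each column is split horizontally into fixed-size stripes, and each stripe can then be processed independently. The function names and stripe size below are illustrative assumptions, not the actual Striped Data Server API.

```python
import numpy as np

# Illustrative sketch of a "striped" columnar layout: one column,
# split horizontally into contiguous NumPy stripes. Names are
# hypothetical, not the real Striped Data Server interfaces.

def stripe_column(column, stripe_size):
    """Split one column into a list of contiguous NumPy stripes."""
    arr = np.asarray(column)
    return [arr[i:i + stripe_size] for i in range(0, len(arr), stripe_size)]

def map_stripes(func, stripes):
    # Each stripe could be handed to a separate worker;
    # here we simply map over them serially.
    return [func(s) for s in stripes]

pt = [10.0, 20.0, 30.0, 40.0, 50.0]
stripes = stripe_column(pt, stripe_size=2)    # 3 stripes of 2, 2, 1 rows
partial = map_stripes(np.sum, stripes)        # independent per-stripe reductions
print(float(sum(partial)))  # 150.0
```

Because every stripe is a plain NumPy array of the same column, per-stripe partial results (sums, histograms, selections) combine trivially, which is what makes the layout parallelization-friendly without special hardware.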


Author(s):  
Tadeusz Wibig

Standard experimental data analysis is based mainly on conventional, deterministic inference. The complexity of modern physics problems has become so large that new ideas in the field are highly appreciated. In this paper, the author analyzes a problem of contemporary high-energy physics concerning the estimation of some parameters of an observed complex phenomenon. The article compares the performance of Natural and Artificial Neural Networks with a standard statistical method of data analysis and minimization. The general concept of the relations between CI and standard (external) classical and modern informatics was realized and studied by utilizing Natural Neural Networks (NNN), Artificial Neural Networks (ANN), and the MINUIT minimization package from CERN. The idea of Autonomic Computing was followed by using the brains of high school students involved in the Roland Maze Project. Some preliminary results of the comparison are given and discussed.


2011 ◽  
Vol 128-129 ◽  
pp. 1217-1221
Author(s):  
Quan Le Liu ◽  
Wei Chen

The number of official cars has increased by more than 20% per year, requiring ever more energy to meet official-car demand. To investigate the energy-saving potential of official cars in China, this paper introduces a strategy, developed from a systemic viewpoint, for reducing their energy consumption, based on an analysis of the causes of their high energy use. The results show that the fuel efficiency of official cars can be realized only by reducing their quantity and maintenance costs and by lowering engine displacement and frequency of use.

