configuration file
Recently Published Documents


TOTAL DOCUMENTS: 48 (five years: 9)

H-INDEX: 2 (five years: 1)

2021 ◽  
Vol 2066 (1) ◽  
pp. 012055
Author(s):  
Xiao Tao

Abstract With the advent of the big data era, substation automation technology has matured over more than ten years of application and development. In recent years, in the transformation and construction of the transmission and distribution network, a large number of substations have adopted modern technology, which has greatly advanced the technological modernization of transmission, distribution, and transformer construction, enhanced the reliability of transmission and distribution scheduling, and reduced the total cost of substation construction. The application of electronic transformers has also effectively promoted research on digital substations. To achieve real-time monitoring and testing of the substation system, and thereby grasp the working status of substation equipment more quickly, this paper presents the research and design of an optical fiber signal analyzer for substation system testing. By analyzing the IEC 61850 protocol and the characteristics of embedded systems and embedded operating systems, a modular software design of the analyzer's sampled-value module is carried out, and parsing of the substation configuration description (SCD) file is designed and realized. The results show that, in practical applications, parsing CID files one by one is cumbersome and prone to omissions; based on this consideration, the SCD configuration file containing the information for the entire site was ultimately selected.
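The SCD file named above is an XML document defined by IEC 61850-6, listing every intelligent electronic device (IED) in the station. As a minimal sketch of the parsing step the paper describes (the sample document and helper name below are illustrative, not the paper's implementation), the IED entries of a simplified SCD file can be extracted like this:

```python
# Sketch: extract IED entries from a simplified SCD document, assuming the
# IEC 61850-6 layout of <SCL><IED name="..."/></SCL>. The sample below is
# illustrative, not a real substation export.
import xml.etree.ElementTree as ET

SAMPLE_SCD = """<SCL xmlns="http://www.iec.ch/61850/2003/SCL">
  <IED name="PROT_1" manufacturer="VendorA"/>
  <IED name="MU_1" manufacturer="VendorB"/>
</SCL>"""

NS = {"scl": "http://www.iec.ch/61850/2003/SCL"}

def list_ieds(scd_text):
    """Return (name, manufacturer) pairs for every IED in the SCD document."""
    root = ET.fromstring(scd_text)
    return [(ied.get("name"), ied.get("manufacturer"))
            for ied in root.findall("scl:IED", NS)]
```

Because the SCD file already aggregates every device, one pass over it replaces the per-device CID parsing the abstract warns against.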


2021 ◽  
Vol 5 (OOPSLA) ◽  
pp. 1-30
Author(s):  
Jialu Zhang ◽  
Ruzica Piskac ◽  
Ennan Zhai ◽  
Tianyin Xu

The behavior of large systems is guided by their configurations: users set parameters in the configuration file to dictate which parts of the system code are executed. However, it is often the case that, although some parameters are set in the configuration file, they do not influence the system's runtime behavior, thus failing to meet the user's intent. Moreover, such misconfigurations rarely lead to an error message or an exception. We introduce the notion of silent misconfigurations, which are prohibitively hard to identify due to (1) the lack of feedback and (2) the complex interactions between configurations and code. This paper presents ConfigX, the first tool for detecting silent misconfigurations. The main challenge is to understand the complex interactions between configurations and the code they affect. Our goal is to derive a specification describing the non-trivial interactions between configuration parameters that lead to silent misconfigurations. To this end, ConfigX uses static analysis to determine which parts of the system code are associated with configuration parameters. ConfigX then infers the connections between configuration parameters by analyzing their associated code blocks. We design customized control- and data-flow analyses to derive a specification of configurations, and we conduct reachability analysis to eliminate spurious rules and reduce false positives. Evaluated on five real-world datasets across three widely used systems (Apache, vsftpd, and PostgreSQL), ConfigX detected more than 2,200 silent misconfigurations. We additionally conducted a user study in which we ran ConfigX on misconfigurations reported on user forums by real-world users. ConfigX detected the issues and suggested repairs for those misconfigurations; our solutions were accepted and confirmed in interactions with the users who originally posted the problems.
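To make the notion of a silent misconfiguration concrete: a parameter set in the configuration file but never consulted by any reachable code produces no error, yet has no effect. ConfigX derives this with control- and data-flow analysis; the toy sketch below (hypothetical names, not ConfigX's actual analysis) only checks whether each key is referenced anywhere in the source text:

```python
# Toy illustration of the silent-misconfiguration idea: flag configuration
# keys that the program text never references. Real detection (as in ConfigX)
# requires static control- and data-flow analysis, not a substring check.
def find_silent_settings(config, source_code):
    """Return config keys that never appear in the code that loads them."""
    return sorted(k for k in config if k not in source_code)

# Example: the second key is set by the user but ignored by the code.
config = {"max_clients": "50", "anon_upload_enable": "YES"}
code = 'limit = int(cfg["max_clients"])'
```

The substring check deliberately over-simplifies; the paper's point is exactly that real parameter-to-code links are indirect, which is why a flow analysis is needed.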


2021 ◽  
Vol 14 (9) ◽  
pp. 5487-5506
Author(s):  
Haipeng Lin ◽  
Daniel J. Jacob ◽  
Elizabeth W. Lundgren ◽  
Melissa P. Sulprizio ◽  
Christoph A. Keller ◽  
...  

Abstract. Emissions are a central component of atmospheric chemistry models. The Harmonized Emissions Component (HEMCO) is a software component for computing emissions from a user-selected ensemble of emission inventories and algorithms. It allows users to re-grid, combine, overwrite, subset, and scale emissions from different inventories through a configuration file and with no change to the model source code. The configuration file also maps emissions to model species with appropriate units. HEMCO can operate in offline stand-alone mode, but more importantly it provides an online facility for models to compute emissions at runtime. HEMCO complies with the Earth System Modeling Framework (ESMF) for portability across models. We present a new version here, HEMCO 3.0, that features an improved three-layer architecture to facilitate implementation into any atmospheric model and improved capability for calculating emissions at any model resolution including multiscale and unstructured grids. The three-layer architecture of HEMCO 3.0 includes (1) the Data Input Layer that reads the configuration file and accesses the HEMCO library of emission inventories and other environmental data, (2) the HEMCO Core that computes emissions on the user-selected HEMCO grid, and (3) the Model Interface Layer that re-grids (if needed) and serves the data to the atmospheric model and also serves model data to the HEMCO Core for computing emissions dependent on model state (such as from dust or vegetation). The HEMCO Core is common to the implementation in all models, while the Data Input Layer and the Model Interface Layer are adaptable to the model environment. Default versions of the Data Input Layer and Model Interface Layer enable straightforward implementation of HEMCO in any simple model architecture, and options are available to disable features such as re-gridding that may be done by independent couplers in more complex architectures. 
The HEMCO library of emission inventories and algorithms is continuously enriched through user contributions so that new inventories can be immediately shared across models. HEMCO can also serve as a general data broker for models to process input data not only for emissions but for any gridded environmental datasets. We describe existing implementations of HEMCO 3.0 in (1) the GEOS-Chem “Classic” chemical transport model with shared-memory infrastructure, (2) the high-performance GEOS-Chem (GCHP) model with distributed-memory architecture, (3) the NASA GEOS Earth System Model (GEOS ESM), (4) the Weather Research and Forecasting model with GEOS-Chem (WRF-GC), (5) the Community Earth System Model Version 2 (CESM2), and (6) the NOAA Global Ensemble Forecast System – Aerosols (GEFS-Aerosols), as well as the planned implementation in the NOAA Unified Forecast System (UFS). Implementation of HEMCO in CESM2 contributes to the Multi-Scale Infrastructure for Chemistry and Aerosols (MUSICA) by providing a common emissions infrastructure to support different simulations of atmospheric chemistry across scales.
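The three-layer flow described above can be sketched in miniature (hypothetical names and scalar "grids"; this is not the HEMCO API, which is implemented in Fortran): the Data Input Layer reads inventory entries from the parsed configuration, the Core combines and scales them on the HEMCO grid, and the Model Interface Layer re-grids only if the host model needs it.

```python
# Illustrative sketch of HEMCO 3.0's three-layer separation. All names and
# data shapes are hypothetical; real HEMCO operates on gridded fields.
def data_input_layer(config_entries):
    """Read (species, base_value, scale) entries from a parsed config file."""
    return [(e["species"], e["base"], e.get("scale", 1.0))
            for e in config_entries]

def hemco_core(entries):
    """Combine overlapping inventories per species on the HEMCO grid."""
    totals = {}
    for species, base, scale in entries:
        totals[species] = totals.get(species, 0.0) + base * scale
    return totals

def model_interface_layer(emissions, regrid=lambda x: x):
    """Serve emissions to the host model, re-gridding only if needed."""
    return {s: regrid(v) for s, v in emissions.items()}

# Two inventories for the same species, one with a scale factor applied.
entries = data_input_layer([
    {"species": "CO", "base": 2.0, "scale": 1.5},
    {"species": "CO", "base": 1.0},
])
```

Keeping the Core independent of both outer layers is what lets the same emissions code run unchanged inside GEOS-Chem Classic, GCHP, CESM2, and the other host models listed above.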




2021 ◽  
pp. 181-193
Author(s):  
Shashank Shukla

2021 ◽  
pp. 239-253
Author(s):  
Shashank Shukla

2020 ◽  
Vol 96 (3s) ◽  
pp. 270-276
Author(s):  
Н.М. Малышев ◽  
С.В. Рыбкин

An EDA (Electronic Design Automation) module for end-to-end design was created, with support for debugging and verification of FPGA (field-programmable gate array) projects. Based on this module, an HDL syntax analyzer was built that forms a parse tree from the source code and compiles it into internal objects. In addition, the paper covers the synthesis of HDL abstractions into library components of the vendor's devices and highlights the methods used to build the synthesizer.
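The token-to-parse-tree stage described above can be illustrated in miniature (a simplified, Verilog-like module header; real HDL parsing handles a full grammar, and these helper names are hypothetical):

```python
# Toy sketch of the HDL front end: tokenize a simplified module header and
# build a small parse tree that downstream stages could compile into
# internal objects.
import re

def tokenize(src):
    """Split HDL-like source into identifiers and punctuation tokens."""
    return re.findall(r"[A-Za-z_]\w*|[(),;]", src)

def parse_module(tokens):
    """Parse 'module name ( port , ... ) ;' into a dict-shaped parse tree."""
    assert tokens[0] == "module", "expected a module declaration"
    name = tokens[1]
    ports = [t for t in tokens[3:tokens.index(")")] if t != ","]
    return {"kind": "module", "name": name, "ports": ports}
```

A parse tree in this shape is convenient precisely because later synthesis passes can walk it node by node when mapping abstractions onto vendor library components.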


Author(s):  
Dr. Manish L Jivtode

The broker architecture became popular for client-server systems. Representational State Transfer (REST) is the architecture of the World Wide Web and uses the HTTP protocol; web services based on Servlet and ASMX technology have been replaced by Windows Communication Foundation (WCF) web services, of which SOAP and REST are the two kinds. REST is lightweight compared to SOAP and has therefore emerged as the popular technology for building distributed applications in the cloud. In this paper, the study is conducted by exposing an HTTP endpoint address with an HTTP relay binding (webHttpRelayBinding) and a CRUD contract defined through an interface. The interface is decorated with the WebGet and WebInvoke attributes, and the WCF configuration file is created using XML tags for use with the REST web service.
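The paper's service is written in C# with WCF; as a language-neutral sketch of the same REST CRUD contract (WebGet corresponds to HTTP GET for reads, WebInvoke to POST/PUT/DELETE for writes), here is a minimal in-memory resource with one branch per verb. All names are illustrative, not the paper's actual endpoints.

```python
# Minimal REST-style CRUD dispatch over an in-memory store, mirroring the
# WebGet (read) / WebInvoke (write) split in a WCF REST contract.
store = {}

def handle(method, key, body=None):
    """Dispatch an HTTP-style verb to the matching CRUD operation."""
    if method == "GET":
        return store.get(key)          # read   (WebGet)
    if method in ("POST", "PUT"):
        store[key] = body              # create / update (WebInvoke)
        return body
    if method == "DELETE":
        return store.pop(key, None)    # delete (WebInvoke)
    raise ValueError(f"unsupported method {method}")
```

Mapping each CRUD operation onto a standard HTTP verb, rather than a custom SOAP operation, is what makes the REST variant lightweight: the uniform interface needs no additional envelope or contract negotiation.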


One of the factors in the reliability of services is authentication, which decides who can access which services. Since big data offers a wide variety of services, authentication becomes one of the main criteria for consideration. This chapter outlines the features of security services in terms of the requirements and issues in business services. It also gives some background on services in the cloud and on the interaction between clients and cloud services, emphasizing security services. The authentication procedure with the authentication protocol Kerberos SPNEGO, which is offered as a security service in Hadoop, is introduced. The configuration details in a typical browser (Mozilla Firefox) are detailed, the usage of the Linux command curl is introduced, and the kinit command for obtaining tickets from the key distribution center is outlined. The procedure for accessing the server from Java code is also given. A section on server-side configuration covers the Maven repository, which holds all the necessary library JAR files organized as local, central, and remote. The configuration is explained with a typical XML file, and the usage of the Simple Logging Facade for Java is introduced. The configuration has many parameters with their values, and these are tabulated for better comprehension. The use of an LDAP server, following the Lightweight Directory Access Protocol, is introduced, and the provision for multi-scheme configuration is outlined with an example configuration file. The facilities available to provide advanced security features using a signer secret provider are highlighted, with appropriate examples of parameter names and values.
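The XML configuration the chapter refers to is Hadoop's HTTP authentication setup. As a hedged illustration, a core-site.xml fragment of the kind described might look as follows; the property names come from Hadoop's HTTP authentication documentation, while the realm, principal, and file paths are placeholders:

```xml
<!-- Illustrative core-site.xml fragment enabling Kerberos SPNEGO for the
     Hadoop web consoles. EXAMPLE.COM and the file paths are placeholders. -->
<configuration>
  <property>
    <name>hadoop.http.authentication.type</name>
    <value>kerberos</value>
  </property>
  <property>
    <name>hadoop.http.authentication.kerberos.principal</name>
    <value>HTTP/_HOST@EXAMPLE.COM</value>
  </property>
  <property>
    <name>hadoop.http.authentication.kerberos.keytab</name>
    <value>/etc/security/keytabs/spnego.service.keytab</value>
  </property>
  <property>
    <name>hadoop.http.authentication.signature.secret.file</name>
    <value>/etc/security/http_secret</value>
  </property>
</configuration>
```

With such a configuration in place, a client first obtains a ticket with kinit and can then use curl's negotiate support, as the chapter describes, to reach the protected endpoints.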

