Decontamination Method Data Base

Author(s):  
E. P. Emets ◽  
G. Yu. Kolomeytsev ◽  
P. P. Polouektov ◽  
V. V. Shirokov ◽  
A. N. Yakovets

Abstract The Decontamination Method Data Base (DMDb) is a versatile decision-support information system: a computer-based tool that assists in selecting the best metal-surface decontamination technologies, procedures and agents. It is intended for trained experts in radiochemistry and for specialists in physics, chemistry, physical chemistry and related fields. The database has been developed to meet the typical demands of users specialising in the decontamination of radiologically contaminated materials and covers:
• detergent compositions for a specific contaminant;
• detergent compositions for a specific surface being decontaminated;
• detergent compositions for cleaning a specific surface of a specific contaminant;
• decontamination parameters in view of the composition of a detergent;
• corrosion effects under different conditions;
• etc.
Accordingly, the major data entity of the database is a unit record describing a decontamination technology for a specific contaminated surface. The unit record contains information on the surface material, the contamination, the decontamination effectiveness, the corrosion impacts and a literature source. This data presentation makes it possible to implement a record-selection algorithm that helps a “lay” user approach the best decontamination decision. Analysis of the unit record's subject and structure served as the basis for developing a relational “Tables – Relationships” model and a tentative user interface. Experimental records were entered to optimize users' inquiries on the composition and corrosion effects of detergents under various conditions.
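The unit record described above maps naturally onto a single relational table. The following is a minimal sketch in Python with `sqlite3`; the column names, the sample row, and the "best detergent for this surface/contaminant" query are assumptions illustrating the abstract's description, not the DMDb's actual schema.

```python
import sqlite3

# Sketch of the DMDb "unit record" as one relational table, with
# fields inferred from the abstract: surface material, contaminant,
# detergent composition, effectiveness, corrosion impact, source.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE unit_record (
        id            INTEGER PRIMARY KEY,
        surface       TEXT NOT NULL,   -- material being decontaminated
        contaminant   TEXT NOT NULL,   -- radiological contaminant
        detergent     TEXT NOT NULL,   -- detergent composition
        effectiveness REAL,            -- e.g. decontamination factor
        corrosion     TEXT,            -- corrosion impact in given conditions
        source        TEXT             -- literature reference
    )
""")

# Hypothetical sample record (values invented for illustration).
conn.execute(
    "INSERT INTO unit_record "
    "(surface, contaminant, detergent, effectiveness, corrosion, source) "
    "VALUES ('stainless steel', 'Cs-137', 'oxalic acid solution', "
    "120.0, 'low', 'hypothetical ref.')"
)

# A "lay"-user inquiry: most effective detergent for a surface/contaminant pair.
row = conn.execute(
    "SELECT detergent, effectiveness FROM unit_record "
    "WHERE surface = ? AND contaminant = ? "
    "ORDER BY effectiveness DESC LIMIT 1",
    ("stainless steel", "Cs-137"),
).fetchone()
print(row[0])  # → oxalic acid solution
```

Ranking by the stored effectiveness value is one simple way a record-selection algorithm could steer a non-expert toward the best available option.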

2013 ◽  
Vol 734-737 ◽  
pp. 3071-3074
Author(s):  
Guo Dong Zhang ◽  
Zhong Liu

Aiming at the phenomenon that chaff and corner reflectors released by a surface ship can interfere with the target selection of a missile seeker, this paper proposes a multi-target selection method based on prior information about the distribution of false targets and a Support Vector Machine (SVM). By analyzing the distribution law of the false targets, we obtain two classification principles, which are used to train an SVM on the characteristics of true and false targets. The trained SVM is then applied in the seeker's target selection. The method has the advantages of simple implementation and high classification accuracy, and the simulation experiment in this paper confirms its correctness and effectiveness.
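The train-then-select workflow can be sketched as follows. This is a toy illustration, not the paper's method: the two features, the cluster parameters, and the assumption that decoys separate cleanly from ship echoes are all invented for demonstration.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in features (illustrative only), e.g. echo extent and
# radial-velocity spread, assuming chaff/reflector returns cluster
# differently from real ship returns.
true_targets = rng.normal([1.0, 5.0], 0.3, size=(50, 2))    # ship-like
false_targets = rng.normal([3.0, 1.0], 0.3, size=(50, 2))   # decoy-like

X = np.vstack([true_targets, false_targets])
y = np.array([1] * 50 + [0] * 50)  # 1 = true target, 0 = false target

# Train the classifier offline on labeled true/false target characteristics.
clf = SVC(kernel="rbf").fit(X, y)

# Seeker-side selection: keep only candidates the trained SVM labels true.
candidates = np.array([[1.1, 4.8], [2.9, 1.2]])
labels = clf.predict(candidates)
print(labels)  # expect [1 0] with this well-separated synthetic data
```

In practice the classification principles derived from the false-target distribution would determine which features to feed the SVM; here the features are placeholders.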


The Ring ◽  
2015 ◽  
Vol 37 (1) ◽  
pp. 3-18
Author(s):  
Leonid Dinevich

Abstract The algorithm for bird radar echo selection was developed in Israel and has been successfully used for many years to monitor birds during periods of massive intercontinental migration, in order to ensure flight safety in civil and military aviation. However, it has been found that under certain meteorological conditions the bird echo selection algorithm does not filter out false signals formed by atomized clouds and atmospheric inhomogeneities. Moreover, although the algorithm is designed to identify and sift out false signals, some useful echoes from smaller birds are erroneously sifted out as well. This paper presents some additional features of radar echoes reflected from atmospheric formations that can be taken into account to prevent the loss of useful bird echoes. These additional features are based on the polarization, fluctuation and Doppler characteristics of a reflected signal. By taking these features into account we can reduce the number of false signals and increase the accuracy of the bird echo selection algorithm. The paper presents methods for using radar echoes to identify species and sizes of birds, together with recommendations on using the data to ensure flight safety during periods of massive intercontinental bird migration.
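Combining polarization, fluctuation and Doppler characteristics into an acceptance test could look like the sketch below. This is not the paper's algorithm: the feature names, the bird-like ranges, and the all-features-must-agree rule are assumptions chosen only to illustrate how multiple echo characteristics can jointly reject non-bird signals.

```python
def is_bird_echo(depol_ratio, fluctuation_rate_hz, doppler_speed_ms):
    """Accept an echo only if all three assumed features fall in
    bird-like ranges (thresholds invented for illustration)."""
    # Polarization: birds are assumed to depolarize more than cloud droplets.
    polarization_ok = depol_ratio > 0.2
    # Fluctuation: wing-beat-like amplitude modulation of the return.
    fluctuation_ok = 2.0 < fluctuation_rate_hz < 20.0
    # Doppler: radial speed in a plausible migratory range.
    doppler_ok = 5.0 < doppler_speed_ms < 30.0
    return polarization_ok and fluctuation_ok and doppler_ok

print(is_bird_echo(0.35, 8.0, 15.0))  # bird-like echo → True
print(is_bird_echo(0.05, 0.5, 3.0))   # cloud-like echo → False
```

Requiring several independent characteristics to agree is what lets such a filter drop cloud returns without also discarding the weaker echoes of smaller birds.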


2016 ◽  
Vol 27 (2) ◽  
pp. 27-48
Author(s):  
András Benczúr ◽  
Gyula I. Szabó

This paper introduces a generalized data base concept that unites the relational and semi-structured data models. As an important theoretical result, we found a quadratic decision algorithm for the implication problem of functional and join dependencies defined on the united data model. As a practical contribution, we present a normal form for the new data model as a tool for data base design. With our novel representation of regular expressions, a more effective searching method could be developed. XML elements are described by XML schema languages such as a DTD or an XML Schema definition. The instances of these elements are semi-structured tuples, where a semi-structured tuple is an ordered list of (attribute: value) pairs. We may think of a semi-structured tuple as a sentence of a formal language, in which the values are the terminal symbols and the attribute names are the non-terminal symbols. In the authors' former work (Szabó and Benczúr, 2015), they introduced the notion of the extended tuple as a sentence of a regular language generated by a grammar whose non-terminal symbols are the attribute names of the tuple. Sets of extended tuples are the extended relations. The authors then introduced the dual language, which generates the tuple types allowed to occur in extended relations, and defined functional dependencies (regular FDs - RFDs) over extended relations. In this paper they rephrase the RFD concept by directly using regular expressions over attribute names to define extended tuples. With the help of a special vertex-labeled graph associated with regular expressions, the substring selection for the projection operation can be specified. Normalization for regular schemas is more complex than in the relational model, because the schema of an extended relation can contain an infinite number of tuple types.
However, the authors can also define selection, projection and join operations on extended relations, so a lossless-join decomposition can be performed. They extend their previous model to handle XML schema indicators as well, e.g., numerical constraints, and add line and set constructors in order to support more general projection and selection operators. This model establishes a query language with table-join functionality for collected XML element data.
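The core idea that a tuple type is a regular expression over attribute names can be illustrated with a small membership check. This is a simplification of the authors' construction, assuming that an extended tuple conforms to a schema exactly when its attribute sequence is a sentence of the schema's regular language; the schema and attribute names below are invented for illustration.

```python
import re

# Hypothetical schema: one "name", one or more "phone"s, an optional
# "email" -- a regular expression over attribute names, as in the
# extended-tuple model described above.
schema = re.compile(r"name(phone)+(email)?")

def conforms(extended_tuple):
    """Return True if the tuple's attribute sequence is a sentence
    of the schema's regular language (values are ignored here)."""
    attr_sequence = "".join(attr for attr, _ in extended_tuple)
    return schema.fullmatch(attr_sequence) is not None

# A semi-structured tuple is an ordered list of (attribute, value) pairs.
t1 = [("name", "Ada"), ("phone", "123"), ("phone", "456"), ("email", "a@b.c")]
t2 = [("name", "Ada"), ("email", "a@b.c")]  # missing the required phone

print(conforms(t1))  # → True
print(conforms(t2))  # → False
```

Note that such a schema admits infinitely many tuple types (one per repetition count of `phone`), which is exactly why normalization is harder here than in the classical relational model.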


1970 ◽  
Vol 3 (2) ◽  
pp. 142
Author(s):  
Pauline Atherton ◽  
Karen B. Miller

A project at Syracuse University utilized MOLDS, a generalized computer-based interactive retrieval program, with a portion of the Library of Congress MARC Pilot Project tapes as a data base. The system, written in FORTRAN, was used in both a batch and an on-line mode. It formed part of a computer laboratory for library science students during 1968-1969. This report describes the system and its components and points out its advantages and disadvantages.


1987 ◽  
Vol 51 (2) ◽  
pp. 121-133
Author(s):  
Paul S. Speck

This section is based on a selection of article abstracts from a comprehensive business literature data base. Marketing-related abstracts from over 125 journals (both academic and trade) are reviewed by JM staff. Descriptors for each entry are assigned by JM staff. Each issue of this section represents three months of entries into the data base. JM wishes to thank Data Courier Inc. for use of the ABI/INFORM business data base. Each entry has an identifying number. Cross-references appear immediately under each subject heading. Requests for specific articles should be directed to the specific publication named or to Data Courier Inc. (800/626-2823). Abstracts of the articles are contained in the ABI/INFORM data base, which is available through many on-line search vendors.

