variable list
Recently Published Documents


TOTAL DOCUMENTS: 8 (FIVE YEARS: 3)
H-INDEX: 3 (FIVE YEARS: 2)

Entropy ◽  
2019 ◽  
Vol 21 (10) ◽  
pp. 1022 ◽  
Author(s):  
Igal Sason

This paper is focused on the derivation of data-processing and majorization inequalities for f-divergences, and their applications in information theory and statistics. For the accessibility of the material, the main results are first introduced without proofs, followed by exemplifications of the theorems with further related analytical results, interpretations, and information-theoretic applications. One application refers to the performance analysis of list decoding with either fixed or variable list sizes; some earlier bounds on the list decoding error probability are reproduced in a unified way, and new bounds are obtained and exemplified numerically. Another application is related to a study of the quality of approximating a probability mass function, induced by the leaves of a Tunstall tree, by an equiprobable distribution. The compression rates of finite-length Tunstall codes are further analyzed for asserting their closeness to the Shannon entropy of a memoryless and stationary discrete source. Almost all the analysis is relegated to the appendices, which form the major part of this manuscript.
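
A note for orientation: the abstract assumes familiarity with f-divergences and the data-processing inequality. The standard textbook forms (given here for reference, not quoted from the paper itself) are

    D_f(P \| Q) \;=\; \sum_{x \in \mathcal{X}} Q(x)\, f\!\left(\frac{P(x)}{Q(x)}\right),
    \qquad f \text{ convex}, \; f(1) = 0,

and, whenever P_Y and Q_Y are obtained by passing P_X and Q_X through the same stochastic kernel W_{Y|X},

    D_f(P_Y \,\|\, Q_Y) \;\le\; D_f(P_X \,\|\, Q_X).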


2019 ◽  
Vol 93 (11) ◽  
pp. 2293-2313 ◽  
Author(s):  
R. Zajdel ◽  
K. Sośnica ◽  
M. Drożdżewski ◽  
G. Bury ◽  
D. Strugarek

Abstract The Satellite Laser Ranging (SLR) network struggles with some major limitations, including an inhomogeneous global station distribution and the uneven performance of SLR sites. The International Laser Ranging Service (ILRS) prepares a time-variable list of the best-performing stations, denoted 'core sites', and recommends using them for the terrestrial reference frame (TRF) datum realization in SLR processing. Here, we check how different approaches to the TRF datum realization using minimum constraint conditions (MCs), together with the selection of datum-defining stations, affect the estimated SLR station coordinates, the terrestrial scale, Earth rotation parameters (ERPs), and geocenter coordinates (GCC). The analyses are based on the processing of SLR observations to LAGEOS-1/-2 collected between 2010 and 2018. We show that it is essential to reject outlying stations from the reference frame realization to maintain a high quality of SLR-based products. We test station selection criteria based on the Helmert transformation of the network w.r.t. the a priori SLRF2014 coordinates to reject misbehaving stations from the list of datum-defining stations. A 25 mm threshold proves optimal for eliminating epoch-wise temporal deviations while retaining a sufficient number of datum-defining stations. The station selection algorithm also indicates that some stations not included in the list of ILRS core sites could be considered as potential core stations in the TRF datum realization. When using a robust station selection for the datum definition, we can improve the station coordinate repeatability by 8%, 4%, and 6% for the North, East, and Up components, respectively. The global distribution of datum-defining stations is also crucial for the estimation of ERPs and GCC. When just two core stations are excluded from the SLR network, the amplitude of the annual signal in the GCC estimates changes by up to 2.2 mm, and the noise of the estimated pole coordinates increases substantially.
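
As a rough illustration of the station-screening idea described above, the sketch below fits a linearized seven-parameter Helmert transformation between estimated and a priori station coordinates and flags stations whose post-fit residuals exceed a 25 mm threshold. It is a simplified, hypothetical reconstruction (the coordinate arrays, the small-angle sign convention, and the drop-worst iteration are assumptions), not the authors' processing chain.

import numpy as np

def helmert_residuals(est, apr):
    """Fit a linearized 7-parameter Helmert transformation (3 translations,
    1 scale, 3 small rotations) mapping 'est' onto 'apr' and return the
    per-station residual norms. 'est' and 'apr' are (N, 3) arrays in metres."""
    d = (apr - est).ravel()                    # observed coordinate differences
    rows = []
    for x, y, z in est:
        # parameter order: tx, ty, tz, scale, rx, ry, rz (one common sign convention)
        rows.append([1, 0, 0, x,  0, -z,  y])
        rows.append([0, 1, 0, y,  z,  0, -x])
        rows.append([0, 0, 1, z, -y,  x,  0])
    A = np.array(rows, dtype=float)
    params, *_ = np.linalg.lstsq(A, d, rcond=None)
    res = d - A @ params                       # post-fit residuals per coordinate
    return np.linalg.norm(res.reshape(-1, 3), axis=1)

def select_datum_stations(names, est, apr, threshold=0.025):
    """Iteratively drop the worst station until all residuals fall below
    'threshold' (0.025 m, i.e. 25 mm, with coordinates in metres)."""
    keep = list(range(len(names)))
    while True:
        r = helmert_residuals(est[keep], apr[keep])
        worst = int(np.argmax(r))
        # stop when the network is clean or too small to constrain 7 parameters sensibly
        if r[worst] <= threshold or len(keep) <= 4:
            return [names[i] for i in keep]
        del keep[worst]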


JAMIA Open ◽  
2019 ◽  
Vol 2 (4) ◽  
pp. 516-520
Author(s):  
Katelyn A McKenzie ◽  
Suzanne L Hunt ◽  
Genevieve Hulshof ◽  
Dinesh Pal Mudaranthakam ◽  
Kayla Meyer ◽  
...  

Abstract Objective: Managing registries with continual data collection poses challenges, such as following reproducible research protocols and guaranteeing data accessibility. The University of Kansas (KU) Alzheimer's Disease Center (ADC) maintains one such registry: Curated Clinical Cohort Phenotypes and Observations (C3PO). We created an automated and reproducible process by which investigators gain access to C3PO data.
Materials and Methods: Data were entered into Research Electronic Data Capture (REDCap). Each month, data belonging to the Uniform Data Set (UDS), that is, data also collected at other ADCs, were uploaded to the National Alzheimer's Coordinating Center (NACC). Each quarter, NACC cleaned, curated, and returned the UDS to the KU Data Management and Statistics (DMS) Core, where it was stored in C3PO alongside other quarterly curated site-specific data. Investigators seeking to use C3PO submitted a research proposal and requested variables via the publicly accessible and searchable data dictionary. The DMS Core used this variable list and an automated SAS program to create a subset of C3PO.
Results: C3PO contained 1913 variables stored in 15 datasets. From 2017 to 2018, 38 data requests were completed for several KU departments and other research institutions. Completing data requests became more efficient; C3PO subsets were produced in under 10 seconds.
Discussion: The data management strategy outlined above facilitated reproducible research practices, which are fundamental to the future of research because they allow replication and verification.
Conclusion: We created a transparent, automated, and efficient process for extracting subsets of data from a registry in which data were changing daily.
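
The abstract describes the extraction step only at a high level (the production system used an automated SAS program). A minimal Python analogue of the same idea, filtering a set of registry tables down to an investigator's requested variable list, might look like the following; the directory layout, file names, and the "participant_id" column are hypothetical.

import pandas as pd
from pathlib import Path

def extract_subset(registry_dir, requested_variables, output_dir):
    """For each dataset in the registry, keep only the ID column plus any
    requested variables that the dataset actually contains, and write the
    result as a per-dataset extract for the investigator."""
    registry_dir, output_dir = Path(registry_dir), Path(output_dir)
    output_dir.mkdir(parents=True, exist_ok=True)
    requested = set(requested_variables)
    for dataset in sorted(registry_dir.glob("*.csv")):   # one file per registry dataset (assumption)
        df = pd.read_csv(dataset)
        keep = [c for c in df.columns if c in requested or c == "participant_id"]
        if len(keep) > 1:                                 # skip datasets with no requested variables
            df[keep].to_csv(output_dir / dataset.name, index=False)

# Hypothetical usage: the variable list would come from the public data dictionary request.
# extract_subset("c3po_datasets", ["age_at_visit", "moca_total", "apoe_genotype"], "request_042")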


2017 ◽  
Vol 66 (4) ◽  
pp. 3012-3023 ◽  
Author(s):  
Leonel Arevalo ◽  
Rodrigo C. de Lamare ◽  
Raimundo Sampaio-Neto

2012 ◽  
Vol 239-240 ◽  
pp. 1399-1403
Author(s):  
Ai Zhen Liu ◽  
Li Yun Chen ◽  
Xing Yue Du ◽  
Xiu Feng Gao

Choosing a rational number of Integrated Testing Vehicles is key to reducing missile testing cost and testing time. Based on an analysis of the characteristics of missile testing, a mathematical model was designed and an optimization method based on a Genetic Algorithm was proposed. To address the particularities of the problem, a variable-list encoding method, a special crossover operator operating on a single chromosome, a special transformation operator, and a mutation operator were designed. The experimental results show that the optimal solutions found by this algorithm achieve a good balance between cost and testing time and can meet the requirement of accurate support in future warfare.
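
To make the encoding idea concrete, here is a small, self-contained sketch of a genetic algorithm over variable-length list chromosomes, with a toy fitness that trades off cost against testing time. The operators, constants, and fitness function are illustrative stand-ins, not the paper's actual design.

import random

# Toy setting: each gene is one vehicle's testing capacity.
# Cost grows with the number of vehicles; testing time shrinks with total capacity.
TOTAL_WORK = 100.0          # abstract units of testing work (assumption)
VEHICLE_COST = 5.0          # cost per vehicle (assumption)

def fitness(chrom):
    """Smaller is better: weighted sum of vehicle cost and testing time."""
    capacity = sum(chrom)
    time = TOTAL_WORK / max(capacity, 1e-6)
    cost = VEHICLE_COST * max(len(chrom), 1)
    return 0.5 * cost + 0.5 * time

def random_chromosome():
    return [random.uniform(1.0, 10.0) for _ in range(random.randint(1, 8))]

def crossover(a, b):
    """One-point crossover with independent cut points, so offspring length varies."""
    ca, cb = random.randint(0, len(a)), random.randint(0, len(b))
    return a[:ca] + b[cb:], b[:cb] + a[ca:]

def mutate(chrom, rate=0.2):
    chrom = [g * random.uniform(0.8, 1.2) if random.random() < rate else g for g in chrom]
    if random.random() < rate:                       # structural mutation: add or drop a gene
        if len(chrom) > 1 and random.random() < 0.5:
            chrom.pop(random.randrange(len(chrom)))
        else:
            chrom.append(random.uniform(1.0, 10.0))
    return chrom

def evolve(pop_size=50, generations=200):
    pop = [random_chromosome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]               # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            c1, c2 = crossover(a, b)
            children += [mutate(c1), mutate(c2)]
        pop = parents + children[: pop_size - len(parents)]
    return min(pop, key=fitness)

best = evolve()
print(len(best), round(fitness(best), 2))   # number of vehicles and combined cost/time score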


2005 ◽  
Vol 20 (1) ◽  
pp. 14-23 ◽  
Author(s):  
Tamara L. Thomas ◽  
Edbert B. Hsu ◽  
Hong K. Kim ◽  
Sara Colli ◽  
Guillermo Arana ◽  
...  

Abstract Objectives: No universally accepted methods for the objective evaluation of the function of the Incident Command System (ICS) in disaster exercises currently exist. An ICS evaluation method for disaster simulations was derived and piloted.
Methods: A comprehensive variable list for ICS function was created, and four distinct ICS evaluation methods (quantitative and qualitative) were derived and piloted prospectively during an exercise. Delay times for key provider-victim interactions were recorded through a system of data collection using participant- and observer-based instruments. Two different post-exercise surveys (commanders, other participants) were used to assess knowledge and perceptions of assigned roles, organization, and communications. Direct observation by trained observers and a structured debriefing session also were employed.
Results: A total of 45 volunteers participated in the exercise, which included 20 mock victims. First, mean, and last victim delay times (from exercise initiation) were 2.1, 4.0, and 9.3 minutes (min) until triage, and 5.2, 11.9, and 22.0 min for scene evacuation, respectively. First, mean, and last victim delay times to definitive treatment were 6.0, 14.5, and 25.0 min. Mean times to triage (and ranges) for scene Zones I (nearest entrance), II (intermediate), and III (ground zero) were 2.9 (2.0–4.0), 4.1 (3.0–5.0), and 5.2 (3.0–9.0) min, respectively. The lowest-acuity (Green) victims had the shortest mean times for triage (3.5 min), evacuation (4.0 min), and treatment (10.0 min), while the highest-acuity (Red) victims had the longest mean times for all measures; these patterns are consistent with independent rather than ICS-directed rescuer activities. Specific ICS problem areas were identified.
Conclusions: A structured, objective, quantitative evaluation of ICS function can identify deficiencies that can become the focus for subsequent improvement efforts.


SIMULATION ◽  
1972 ◽  
Vol 18 (5) ◽  
pp. 179-187 ◽  
Author(s):  
Per A. Holst

An interpretive hybrid programming language, HYBRID, has been developed by The Foxboro Company for use in a small hybrid computer system. The TELETYPE-oriented language comprises a command structure resembling FOCAL and offers hybrid instructions similar to HOI. It provides features seldom found in software for small (mini) computers with limited core, such as computed (variable) part and step numbers in go-to instructions, foreground-background operational modes, monitor functions, and a full assortment of hybrid and interrupt-handling capabilities. The HYBRID program occupies approximately 4K of core in a PDP-7 18-bit computer and includes routines for drum (disc) and paper-tape input-output of source and data files. It permits assembly (machine) language routines to be called by source-program commands. The variable list (Symbol Table) is referenced in a binary-search mode, which results in fast execution as well as efficient running of the source program. The language contains a number of error-checking and source-program protection provisions to make it as user-friendly and fault-tolerant as possible.
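
On the symbol-table lookup mentioned above: keeping the variable list sorted and probing it with a binary search is what keeps lookups fast. A present-day sketch of the same idea follows; the class, method names, and storage layout are illustrative, not the original PDP-7 implementation.

from bisect import bisect_left

class SymbolTable:
    """Sorted variable list probed by binary search, in the spirit of the
    HYBRID interpreter's Symbol Table (illustrative modern rendering)."""

    def __init__(self):
        self._names = []    # kept sorted so bisect can locate entries in O(log n)
        self._values = []

    def define(self, name, value):
        i = bisect_left(self._names, name)
        if i < len(self._names) and self._names[i] == name:
            self._values[i] = value            # redefine an existing variable
        else:
            self._names.insert(i, name)        # insertion keeps the list sorted
            self._values.insert(i, value)

    def lookup(self, name):
        i = bisect_left(self._names, name)
        if i < len(self._names) and self._names[i] == name:
            return self._values[i]
        raise KeyError(f"undefined variable: {name}")

table = SymbolTable()
table.define("X1", 3.14)
print(table.lookup("X1"))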

