The Sage System Training Program for the Air Defense Command

Author(s):  
John T. Rowell ◽  
Eugene R. Streich

This article describes the development and implementation of a program to train a large-scale, semi-automatic data processing system known as SAGE (Semi-Automatic Ground Environment). Particular attention is given to describing the air defense system, the methodology used to satisfy the training requirements, the conduct of the training program in operational settings, and the results of various studies of training effectiveness. Of significance was the emergence of a unique conceptual formulation of system training principles and of associated simulation techniques.

2016 ◽  
Vol 23 (6) ◽  
pp. 595-604 ◽  
Author(s):  
Jae Hyoung Cho ◽  
Hun-Sung Kim ◽  
Seung Hyun Yoo ◽  
Chang Hee Jung ◽  
Woo Je Lee ◽  
...  

Introduction: The aim of this study was to improve the quality of diabetes control and to evaluate the efficacy and safety of an Internet-based integrated healthcare system for diabetes management. Methods: We conducted a large-scale, multi-centre, randomized clinical trial involving 484 patients. Patients in the intervention group (n = 244) were treated with the Internet-based system for six months, while the control group (n = 240) received the usual outpatient management over the same period. HbA1c, blood chemistries, anthropometric parameters, and adverse events were assessed at the beginning of the study, after three months, and at the end of the study. Results: There were no significant baseline differences between the groups in demographics or clinical parameters. At the six-month follow-up, HbA1c levels had decreased significantly in the intervention group, from 7.86 ± 0.69% to 7.55 ± 0.86% (p < 0.001), compared with 7.81 ± 0.66% to 7.70 ± 0.88% in the control group. The reduction in postprandial glucose was predominant. A subgroup with baseline HbA1c above 8% and good compliance achieved an HbA1c reduction of 0.8 ± 1.05%. Glucose control and waist-circumference reduction were more effective in females and in subjects older than 40 years. There were no adverse events associated with the intervention. Discussion: This e-healthcare system was effective for glucose control and body-composition improvement, without associated adverse events, in a multi-centre trial. It may be effective in improving diabetes control in the general population.
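The headline effect can be checked by simple arithmetic on the point estimates reported in the abstract (a sketch using only the stated means; the ± values are dispersions and are not recomputed here):

```python
# Point estimates from the abstract (HbA1c, percentage points).
intervention_delta = 7.86 - 7.55   # intervention group, baseline -> 6 months
control_delta = 7.81 - 7.70        # control group, baseline -> 6 months

# Difference-in-differences: extra reduction seen under the intervention.
did = intervention_delta - control_delta

print(f"{intervention_delta:.2f}")  # 0.31
print(f"{control_delta:.2f}")       # 0.11
print(f"{did:.2f}")                 # 0.20
```

So the intervention group's mean HbA1c fell about 0.2 percentage points more than the control group's over the same six months.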


2021 ◽  
Vol 48 (3) ◽  
pp. 128-129
Author(s):  
Sounak Kar ◽  
Robin Rehrmann ◽  
Arpan Mukhopadhyay ◽  
Bastian Alt ◽  
Florin Ciucu ◽  
...  

We analyze a data-processing system with n clients producing jobs that are processed in batches by m parallel servers; the system throughput critically depends on the batch size and on a corresponding sub-additive speedup function that arises from overhead amortization. In practice, throughput optimization relies on numerical searches for the optimal batch size, which is computationally cumbersome. In this paper, we model this system as a closed queueing network, assuming certain forms of service speedup; a standard Markovian analysis yields the optimal throughput in ω(n⁴) time. Our main contribution is a mean-field model that has a unique, globally attractive stationary point, derivable in closed form. This point characterizes the asymptotic throughput as a function of the batch size, which can be calculated in O(1) time. Numerical settings from a large commercial system show that this asymptotic optimum is accurate in practical finite regimes.
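The trade-off can be illustrated with a toy closed model (an assumption for illustration, not the paper's model): n clients, m servers, and a sub-additive speedup s(b) = b^γ with γ < 1, where each busy server completes jobs at rate s(b) and at most min(m, n/b) servers can be kept busy. The cumbersome numerical search and a closed-form optimum then agree:

```python
# Hypothetical parameters: n clients, m servers, sub-additive exponent gamma.
n, m, gamma = 1000, 10, 0.7

def speedup(b):
    # Sub-additive speedup from overhead amortization: s(b) = b**gamma.
    return b ** gamma

def throughput(b):
    # min(m, n / b) busy servers, each completing jobs at rate s(b).
    return min(m, n / b) * speedup(b)

# The cumbersome route: numerical search over all feasible batch sizes.
b_search = max(range(1, n + 1), key=throughput)

# The closed-form optimum this toy model admits: b* = n / m
# (larger batches starve servers; smaller ones waste speedup).
b_closed = n // m

print(b_search, b_closed)  # 100 100
```

In this toy model the search and the closed form coincide at b* = n/m; the paper's contribution is a mean-field analysis that yields such a closed-form characterization for its actual service model.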


2016 ◽  
Vol 16 (1) ◽  
Author(s):  
Sumona Chaudhury ◽  
Lauren Arlington ◽  
Shelby Brenan ◽  
Allan Kaijunga Kairuki ◽  
Amunga Robson Meda ◽  
...  

2014 ◽  
Vol 687-691 ◽  
pp. 3733-3737
Author(s):  
Dan Wu ◽  
Ming Quan Zhou ◽  
Rong Fang Bie

Massive image processing places heavy demands on processor performance and memory capacity: it calls for a high-performance processor and large-capacity memory, and a single processor (or a single core) with traditional memory cannot meet them. This paper introduces cloud computing into a massive image processing system. The cloud expands the system's virtual space, conserves local computing resources, and improves the efficiency of image processing. The system's processor is a multi-core DSP parallel processor, and a visualization window for parameter setting and result output was developed with VC software. Simulation yields the image-processing speed curve and the system's image-adaptation curve, providing a technical reference for the design of large-scale image processing systems.
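The parallel pattern the paper relies on — splitting a large image into independent tiles and processing them concurrently — can be sketched as follows (a minimal illustration in Python with a thread pool and a toy invert kernel; the paper's system runs such kernels on multi-core DSP hardware):

```python
from concurrent.futures import ThreadPoolExecutor

def invert_band(band):
    """Example per-tile kernel: invert 8-bit pixel values."""
    return [[255 - px for px in row] for row in band]

def process_image(image, n_workers=4):
    """Split the image into horizontal bands and process them in parallel."""
    rows = len(image)
    step = max(1, rows // n_workers)
    bands = [image[i:i + step] for i in range(0, rows, step)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        processed = pool.map(invert_band, bands)
    # Stitch the processed bands back together in order.
    return [row for band in processed for row in band]

image = [[0, 128, 255]] * 8          # toy 8x3 "image"
result = process_image(image)
print(result[0])                     # [255, 127, 0]
```

Because the bands are independent, the same decomposition scales from threads on one machine to the distributed, cloud-backed setting the paper describes.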


2014 ◽  
Vol 11 (S308) ◽  
pp. 87-96
Author(s):  
Oliver Hahn

I review the nature of three-dimensional collapse in the Zeldovich approximation, how it relates to the underlying nature of the three-dimensional Lagrangian manifold, and how it naturally gives rise to a hierarchical structure formation scenario that progresses through collapse from voids to pancakes, filaments, and then halos. I then discuss how variations of the Zeldovich approximation (based on the gravitational or the velocity potential) have been used to define classifications of the cosmic large-scale structure into dynamically distinct parts. Finally, I turn to recent efforts to devise new approaches relying on tessellations of the Lagrangian manifold to follow the fine-grained dynamics of the dark matter fluid into the highly non-linear regime, both to extract the maximum amount of information from existing simulations and to devise new simulation techniques for cold collisionless dynamics.
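For reference, the standard form of the Zeldovich approximation underlying this picture maps Lagrangian coordinates q to Eulerian positions x along the initial displacement field:

```latex
% Zeldovich map, with D_+(t) the linear growth factor and \psi the displacement potential:
\mathbf{x}(\mathbf{q},t) = \mathbf{q} - D_+(t)\,\nabla_{\mathbf{q}}\psi(\mathbf{q})

% The density follows from the Jacobian of the map, with
% \lambda_1 \ge \lambda_2 \ge \lambda_3 the eigenvalues of the
% deformation tensor \partial^2\psi / \partial q_i \partial q_j:
\rho(\mathbf{q},t) =
  \frac{\bar{\rho}}
       {\bigl(1 - D_+\lambda_1\bigr)\bigl(1 - D_+\lambda_2\bigr)\bigl(1 - D_+\lambda_3\bigr)}
```

Collapse along an axis occurs when the corresponding factor 1 − D₊λᵢ reaches zero, so the number of positive eigenvalues orders the hierarchy the abstract describes: none gives a void, one a pancake, two a filament, and three a halo.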


IEEE Access ◽  
2015 ◽  
Vol 3 ◽  
pp. 2341-2351 ◽  
Author(s):  
Zhuofeng Zhao ◽  
Weilong Ding ◽  
Jianwu Wang ◽  
Yanbo Han

1993 ◽  
Vol 2 (4) ◽  
pp. 133-144 ◽  
Author(s):  
Jon B. Weissman ◽  
Andrew S. Grimshaw ◽  
R.D. Ferraro

The conventional wisdom in the scientific computing community is that the best way to solve large-scale, numerically intensive scientific problems on today's parallel MIMD computers is to use Fortran or C programmed in a data-parallel style with low-level message-passing primitives. This approach inevitably leads to nonportable codes and extensive development time, and restricts parallel programming to the domain of the expert programmer. We believe that these problems are not inherent to parallel computing but are the result of the programming tools used. We show that comparable performance can be achieved with little effort if better tools that present higher-level abstractions are used. The vehicle for our demonstration is a 2D electromagnetic finite element scattering code we have implemented in Mentat, an object-oriented parallel processing system. We briefly describe the application, Mentat, and the implementation, and present performance results for both the Mentat and a hand-coded parallel Fortran version.
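The "higher-level abstraction" argument can be illustrated with a small sketch (not Mentat itself — a Python analogue with futures and a hypothetical ElementSolver class): method invocations return futures and the runtime schedules them, so the caller writes no explicit message passing:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical worker object: each method call is independent work that a
# Mentat-like runtime could dispatch to a different processor transparently.
class ElementSolver:
    def local_matrix(self, element):
        # Stand-in for assembling one finite-element contribution.
        return sum((x + 1) ** 2 for x in element)

solver = ElementSolver()
elements = [[0, 1], [2, 3], [4, 5]]

with ThreadPoolExecutor() as pool:
    # Asynchronous method invocation: each call yields a future, and the
    # runtime (here, the pool) manages scheduling, not the caller.
    futures = [pool.submit(solver.local_matrix, e) for e in elements]
    results = [f.result() for f in futures]

print(results)  # [5, 25, 61]
```

Contrast this with the message-passing style the abstract criticizes, where the programmer would partition the elements, exchange boundary data, and gather results by hand.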

