A Rocks Based Visualization Cluster Platform Design and Application for Bridge Health Monitoring

2012 ◽  
Vol 178-181 ◽  
pp. 2213-2218
Author(s):  
Feng Chen ◽  
Ming Liu ◽  
Xiao Ying Han ◽  
Wei Chen

As high-performance computing (HPC) becomes a part of the scientific computing landscape, visualizing HPC results has become a critical field of its own. This paper describes a visualization cluster solution developed for a bridge health monitoring system. First, LCD displays, computers with NVIDIA graphics cards, and 1G and 10G switches are used to build the hardware platform; second, the Linux operating system, Rocks cluster management software, and CGLX middleware are used to display multimedia and 3D data; finally, the OpenSceneGraph 3D graphics engine is used to write high-performance parallel 3D programs. This approach can be used not only for parallel computing but also for parallel 3D modeling and display. Application results on bridge health monitoring are given at the end.
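As an illustration of the final step, the sketch below shows a minimal OpenSceneGraph viewer in C++. It is a hedged example only: the model filename bridge_model.osg is hypothetical, and the abstract does not describe the actual program structure or the CGLX-distributed rendering used on the cluster.

```cpp
// Minimal OpenSceneGraph viewer (sketch, not the authors' actual code).
// Loads a 3D model from disk and runs an interactive viewer loop.
#include <osgDB/ReadFile>
#include <osgViewer/Viewer>

int main(int argc, char** argv)
{
    // Hypothetical bridge model file; any OSG-readable format would work.
    osg::ref_ptr<osg::Node> model = osgDB::readNodeFile("bridge_model.osg");
    if (!model)
        return 1; // model could not be loaded

    osgViewer::Viewer viewer;
    viewer.setSceneData(model.get());
    return viewer.run(); // enters the rendering/event loop
}
```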

Author(s):  
Vijayalakshmi Kakulapati ◽  
V.V.S.S.S. Balaram ◽  
P. Vijay Krishna

This research identifies different patients' online behaviours and similarities that can help improve patient-clinician communication, discover future or additional threats, and raise awareness of causes and consequences. To scale the model, the prototype relies on a High Performance Computing (HPC) platform running the Hadoop file system to store patient data at distributed locations, and the MapReduce paradigm with machine learning algorithms is deployed to detect symptoms. In this approach the authors protect patients' online data from privacy issues, addressing the difficulty with a new approach that utilises new similarity measures between patients. The authors also provide a research investigation of grouping behaviour as it is affected by different series representations, different distance similarity measures, the number of genuine patients, the number of online doctors available, the similarity among patient symptoms, minimizing the feasibility, the number of patients per sitting, and the number of clusters to form.
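Since the abstract does not specify which similarity measures are used, the following C++ sketch shows one plausible choice, cosine similarity over symptom vectors; the vectors, their length, and the binary encoding are assumptions for illustration only.

```cpp
// Sketch of a patient-to-patient similarity measure (assumed: cosine similarity
// over symptom vectors); not necessarily the measure defined in the paper.
#include <cmath>
#include <iostream>
#include <vector>

double cosineSimilarity(const std::vector<double>& a, const std::vector<double>& b)
{
    double dot = 0.0, normA = 0.0, normB = 0.0;
    for (std::size_t i = 0; i < a.size() && i < b.size(); ++i) {
        dot   += a[i] * b[i];
        normA += a[i] * a[i];
        normB += b[i] * b[i];
    }
    if (normA == 0.0 || normB == 0.0)
        return 0.0; // no recorded symptoms: treat as no similarity
    return dot / (std::sqrt(normA) * std::sqrt(normB));
}

int main()
{
    // Hypothetical binary symptom vectors (1 = symptom present) for two patients.
    std::vector<double> patientA = {1, 0, 1, 1, 0};
    std::vector<double> patientB = {1, 1, 1, 0, 0};
    std::cout << "similarity = " << cosineSimilarity(patientA, patientB) << '\n';
    return 0;
}
```

In a MapReduce setting, pairwise similarities of this kind would typically be computed in the map phase and aggregated into groups in the reduce phase, though the paper's exact pipeline is not described in the abstract.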


Author(s):  
Mark H. Ellisman

The increased availability of High Performance Computing and Communications (HPCC) offers scientists and students the potential for effective remote interactive use of centralized, specialized, and expensive instrumentation and computers. Examples of instruments that may be usefully controlled from a distance are increasing. Some in current use include telescopes, networks of remote geophysical sensing devices and, more recently, the intermediate high voltage electron microscope developed at the San Diego Microscopy and Imaging Resource (SDMIR) in La Jolla. In this presentation the imaging capabilities of a specially designed JEOL 4000EX IVEM will be described. This instrument was developed mainly to facilitate the extraction of 3-dimensional information from thick sections. In addition, progress will be described on a project now underway to develop a more advanced version of the Telemicroscopy software we previously demonstrated as a tool for providing remote access to this IVEM (Mercurio et al., 1992; Fan et al., 1992).


MRS Bulletin ◽  
1997 ◽  
Vol 22 (10) ◽  
pp. 5-6
Author(s):  
Horst D. Simon

Recent events in the high-performance computing industry have concerned scientists and the general public regarding a crisis or a lack of leadership in the field. That concern is understandable considering the industry's history from 1993 to 1996. Cray Research, the historic leader in supercomputing technology, was unable to survive financially as an independent company and was acquired by Silicon Graphics. Two ambitious new companies that introduced new technologies in the late 1980s and early 1990s, Thinking Machines and Kendall Square Research, were commercial failures and went out of business. And Intel, which introduced its Paragon supercomputer in 1994, discontinued production only two years later.

During the same time frame, scientists who had finished the laborious task of writing scientific codes to run on vector parallel supercomputers learned that those codes would have to be rewritten if they were to run on the next-generation, highly parallel architecture. Scientists who are not yet involved in high-performance computing are understandably hesitant about committing their time and energy to such an apparently unstable enterprise.

However, beneath the commercial chaos of the last several years, a technological revolution has been occurring. The good news is that the revolution is over, leading to five to ten years of predictable stability, steady improvements in system performance, and increased productivity for scientific applications. It is time for scientists who were sitting on the fence to jump in and reap the benefits of the new technology.

