Allocating processing power to minimize time costs in parallel software systems

Author(s):  
B. Qin ◽  
H.A. Sholl ◽  
R.A. Ammar


2013 ◽  
Vol 2013 ◽  
pp. 1-13 ◽  
Author(s):  
Tao Sun ◽  
Xinming Ye

Modeling and testing parallel software systems are difficult because parallel behaviors cause the number of states and execution sequences to grow dramatically. This paper presents a model reduction method based on Coloured Petri Nets (CPN) that generates a functionality-equivalent and trace-equivalent model of smaller scale; model-based testing of parallel software systems becomes much easier once the model has been reduced. Specifically, a formal model of the software system specification is constructed with CPN. The places in the model are then divided into input places, output places, and internal places, and the transitions into input transitions, output transitions, and internal transitions. Internal places and internal transitions can be reduced when their preconditions match, with additional operations applied to preserve functionality equivalence and trace equivalence. If a reduced place and transition lie in a parallel structure, many execution sequences are removed from the state space. We prove the equivalence and analyze the reduction effect, so that the same testing result is obtained with a much lower testing workload. Finally, case studies and a performance analysis show that the method is effective.
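The classification and reduction steps the abstract describes can be sketched on a toy net. This is an illustrative assumption, not the paper's actual CPN formalism: arcs are (source, target) pairs, a node with both incoming and outgoing arcs is treated as internal, and a 1-in/1-out internal place is reduced by fusing its surrounding transitions, which removes one interleaving point from the state space.

```python
def classify(nodes, arcs):
    """Split nodes into input / internal / output lists by arc direction."""
    sources = {s for s, _ in arcs}
    targets = {t for _, t in arcs}
    ins = [n for n in nodes if n in sources and n not in targets]
    outs = [n for n in nodes if n in targets and n not in sources]
    internal = [n for n in nodes if n in sources and n in targets]
    return ins, internal, outs

def reduce_internal_place(place, arcs):
    """Fuse the two transitions around a 1-in/1-out internal place,
    removing the place (one interleaving point) from the net."""
    pre = [s for s, t in arcs if t == place]
    post = [t for s, t in arcs if s == place]
    if len(pre) != 1 or len(post) != 1:
        return arcs  # precondition not met: leave the net unchanged
    fused = pre[0] + "+" + post[0]
    reduced = []
    for s, t in arcs:
        if place in (s, t):
            continue  # drop the arcs through the reduced place
        s = fused if s in (pre[0], post[0]) else s
        t = fused if t in (pre[0], post[0]) else t
        reduced.append((s, t))
    return reduced

# Toy sequence p0 -> t1 -> p1 -> t2 -> p2: p1 is internal and reducible.
arcs = [("p0", "t1"), ("t1", "p1"), ("p1", "t2"), ("t2", "p2")]
reduced = reduce_internal_place("p1", arcs)  # [("p0","t1+t2"), ("t1+t2","p2")]
```

The paper's actual method must also rewrite token colors and guards to keep functionality and trace equivalence; the sketch only shows why removing an internal place shrinks the interleaving space.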


2021 ◽  
Author(s):  
Ricardo Paharsingh

Cloud computing services are built on the premise of high availability and are sold to customers who expect reduced costs, particularly for failures and maintenance. At the Infrastructure as a Service (IaaS) layer, resources are sold to customers as virtual machines (VMs) with CPU and memory specifications. Neither resource is necessarily guaranteed, because virtual machines can share the same hardware. If resources are not allocated properly, one virtual machine may, for example, consume too much CPU, reducing the processing power available to the other virtual machines and causing response-time failures. A response-time failure occurs when a request made to a server does not complete on time. In this research, a framework is developed that integrates hardware, software, and response-time failures. The framework allows the cloud purchaser to test the system under stressed conditions, allocating more or fewer virtual machines to determine the availability of the system; it also allows the cloud provider to evaluate separately the availability of the hardware and of other software systems.
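The contention mechanism behind response-time failures can be sketched numerically. This is a minimal model under assumed names and numbers, not the thesis's framework: when VMs oversubscribe a host's CPU, each VM's effective share shrinks proportionally, service times stretch, and requests that exceed their deadline count as response-time failures.

```python
def effective_share(host_cores, vm_cores_requested):
    """CPU share each VM actually gets when the host may be oversubscribed."""
    demanded = sum(vm_cores_requested)
    scale = min(1.0, host_cores / demanded)  # proportional throttling
    return [c * scale for c in vm_cores_requested]

def availability(work_units, deadline, shares):
    """Fraction of VMs whose request finishes on time; a request that
    exceeds the deadline is a response-time failure."""
    on_time = sum(1 for s in shares if work_units / s <= deadline)
    return on_time / len(shares)

# 4 VMs each asking for 2 cores on an 8-core host: no contention.
ok = availability(10.0, 6.0, effective_share(8, [2, 2, 2, 2]))
# 6 such VMs on the same host: each gets 8/12 of its request and runs slower.
overloaded = availability(10.0, 6.0, effective_share(8, [2] * 6))
```

Sweeping the VM count in this way mirrors the purchaser-side experiment the abstract describes: adding VMs until the measured availability drops below the target.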


1989 ◽  
Vol 31 (4-5) ◽  
pp. 485-495 ◽  
Author(s):  
G. Fox ◽  
W. Furmanski ◽  
J. Koller

2014 ◽  
Vol 23 (01) ◽  
pp. 206-211 ◽  
Author(s):  
L. Lenert ◽  
G. Lopez-Campos ◽  
L. J. Frey

Summary
Objectives: Given the quickening pace at which variant disease drivers are discovered from combined patient genotype and phenotype data, the objective is to provide methodology, using big data technology, to support the definition of deep phenotypes in medical records.
Methods: As the vast stores of genomic information grow with next-generation sequencing, the importance of deep phenotyping increases. The growth of genomic data and the adoption of Electronic Health Records (EHR) in medicine provide a unique opportunity to integrate phenotype and genotype data into medical records. How collections of clinical findings and other health-related data are leveraged to form meaningful phenotypes is an active area of research. Longitudinal data stored in EHRs provide a wealth of information that can be used to construct patient phenotypes. We focus on a practical data-integration problem in identifying deep phenotypes within EHR data, and describe big data approaches that enable scalable markup of EHR events for semantic and temporal similarity analysis, supporting the identification of phenotype and genotype relationships.
Conclusions: Stead and colleagues' 2005 concept of using light standards to increase the productivity of software systems by riding the wave of hardware/processing power is described as a harbinger for designing future healthcare systems. The big data solution, using flexible markup, provides a route to better utilization of processing power for organizing patient records in genotype and phenotype research.
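Temporal similarity over marked-up EHR events can be illustrated with a toy comparison. All names, codes, and the scoring rule here are assumptions for the sketch, not the paper's method: events are (concept code, day) pairs, and two patients score higher when the same code occurs in both records within a tolerance window.

```python
def temporal_similarity(events_a, events_b, window_days=30):
    """Jaccard-style score: a concept code matches when it occurs in
    both records within `window_days` of each other."""
    matched = set()
    for code_a, day_a in events_a:
        for code_b, day_b in events_b:
            if code_a == code_b and abs(day_a - day_b) <= window_days:
                matched.add(code_a)
    all_codes = {c for c, _ in events_a} | {c for c, _ in events_b}
    return len(matched) / len(all_codes) if all_codes else 0.0

# Hypothetical ICD-10 codes with onset days relative to an index date.
patient_a = [("E11.9", 0), ("I10", 14), ("N18.3", 400)]
patient_b = [("E11.9", 10), ("I10", 200)]
score = temporal_similarity(patient_a, patient_b)
# Only E11.9 matches within the window; I10 events are 186 days apart.
```

Scoring pairs of patients this way is one simple route from event markup to the phenotype clustering the methods section alludes to; a real system would also weight semantic closeness between codes.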


2015 ◽  
pp. 1312-1332
Author(s):  
Abraham Pouliakis ◽  
Stavros Archondakis ◽  
Efrossyni Karakitsou ◽  
Petros Karakitsos

Cloud computing is changing the way enterprises, institutions, and people understand, perceive, and use current software systems. It is the concept of creating a computing grid over Internet facilities so that resources such as computer software and hardware can be shared. Cloud-based system architectures provide many advantages in terms of scalability, maintainability, and massive data processing. By means of cloud computing technology, cytopathologists can efficiently manage imaging units using the latest software and hardware without paying prohibitive prices for them. Cloud computing systems used by cytopathology departments can follow public, private, hybrid, or community deployment models. By using cloud applications, infrastructure, storage services, and processing power, cytopathology laboratories can avoid heavy spending on the maintenance of costly applications and on image storage and sharing. Cloud computing allows imaging flexibility and may be used to create a virtual mobile office. Security and privacy issues must be addressed to ensure the wide implementation of cloud computing in the near future. Cloud computing is not yet widely used for the various tasks related to cytopathology; however, there are numerous fields to which it can be applied. The envisioned advantages for everyday laboratory workflow, and eventually for patients, are significant. This chapter explores these prospects.

