A Comparative Analysis of Distributed and Parallel Computing

In the age of emerging technologies, the amount of data is increasing very rapidly. With this massive increase in data, the level of computation is also increasing. Traditionally, a computer executes instructions sequentially, but times have changed and technology has advanced. We are now managing gigantic data centers that perform billions of executions on a consistent schedule. In fact, if we dive deep into processor architecture and mechanisms, even a sequential machine works in parallel. Parallel computing is growing rapidly as a substitute for distributed computing. The performance-to-functionality ratio of parallel systems is high, and their I/O usage is lower because of the ability to perform all operations simultaneously. On the other hand, the performance-to-functionality ratio of distributed systems is low, and their I/O usage is higher because of the inability to perform all operations simultaneously. In this paper, an overview of distributed and parallel computing is given. The basic concepts of these two computing models are discussed, along with the pros and cons of distributed and parallel computing models. Considering many of these aspects, we conclude that parallel systems are better than distributed systems.
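To make the sequential-versus-parallel contrast concrete, the sketch below (an illustration, not taken from the paper; the workload and function name are hypothetical) runs the same CPU-bound task once sequentially and once across a pool of worker processes:

```python
# Minimal sketch: sequential vs. process-parallel execution of a CPU-bound task.
# Illustrative only; the workload and timings are hypothetical.
import time
from multiprocessing import Pool

def busy_sum(n: int) -> int:
    """A deliberately CPU-bound task."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    tasks = [2_000_000] * 8

    start = time.perf_counter()
    sequential = [busy_sum(n) for n in tasks]   # one task after another
    t_seq = time.perf_counter() - start

    start = time.perf_counter()
    with Pool() as pool:                        # one worker process per core by default
        parallel = pool.map(busy_sum, tasks)    # same tasks, executed simultaneously
    t_par = time.perf_counter() - start

    assert sequential == parallel
    print(f"sequential: {t_seq:.2f}s  parallel: {t_par:.2f}s")
```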

Author(s):  
Annika Hinze ◽  
Jean Bacon ◽  
Alejandro Buchmann ◽  
Sharma Chakravarthy ◽  
Mani Chandi ◽  
...  

This chapter is a panel discussion in writing. The field of event-based systems attracts researchers from a number of different backgrounds: distributed systems, streaming data, databases, middleware, and sensor networks. One of the consequences is that everyone comes to the field with a slightly different mindset and different expectations and goals. In this chapter, we try to capture some of the voices that are influential in our field. Seven panellists from academia and industry were invited to answer and discuss questions about event-based systems. The questions were distributed via email, to which each participant replied with an initial set of answers. In a second round, every panellist was given the opportunity to expand their statement and discuss the contributions of the other panellists. The questions asked can be grouped into two types. Questions in the first group refer to each participant’s understanding of the basic concepts of event-based systems (EBS), the pros and cons of EBS, typical assumptions of the field, and how they understood EBS to fit into the overall landscape of software architectures. The second group of questions pointed to the future of EBS, possible killer applications, and the challenges that EBS researchers in academia and industry need to address in the medium and long term. The next section gives each panellist’s initial statements as well as their comments on other participants’ contributions. Each participant’s section starts with a short introduction of the panellist and their work. In the final section, we compare and reflect on the statements and discussions presented by the seven panellists.


2017 ◽  
Vol 19 (01) ◽  
pp. 23-32 ◽  
Author(s):  
Anna H. Glenngård ◽  
Anders Anell

Aim: To study (a) the covariation between patient-reported experience measures (PREMs) and registered process measures of access and continuity when ranking providers in a primary care setting, and (b) whether registered process measures or PREMs provide more or less information about potential linkages between levels of access and continuity and explanatory variables. Background: Access and continuity are important objectives in primary care. They can be measured through registered process measures or PREMs. These measures do not necessarily converge in terms of outcomes. Patient views are affected by factors that do not necessarily reflect the quality of services. Results from surveys are often uncertain due to low response rates, particularly in vulnerable groups. The quality of process measures, on the other hand, may be influenced by registration practices, and such measures are often easier to manipulate. With increased transparency and use of quality measures for management and governance purposes, knowledge about the pros and cons of using different measures to assess performance across providers is important. Methods: Four regression models were developed with registered process measures and PREMs of access and continuity as dependent variables. Independent variables were characteristics of providers as well as geographical location and the degree of competition facing providers. Data were taken from two large Swedish county councils. Findings: Although the ranking of providers is sensitive to the measure used, the results suggest that providers performing well with respect to one measure also tended to perform well with respect to the other. As process measures are easier and quicker to collect, they may be looked upon as the preferred option. PREMs were better than process measures for exploring factors that contributed to variation in performance across providers in our study; however, if the purpose of comparison is continuous learning and development of services, a combination of PREMs and registered measures may be the preferred option. Above all, our findings point towards the importance of a pre-analysis of the measures in use: exploring their pros and cons for different purposes before they are put into practice.
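As a rough illustration of the methods, the sketch below sets up two of the four provider-level regressions described above, one with a PREM and one with a registered process measure as the dependent variable. The file name, variable names, and provider characteristics are hypothetical, not the authors' actual data:

```python
# Sketch of the regression set-up: a provider-level outcome regressed on
# provider characteristics, location and competition. Column names hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

providers = pd.read_csv("providers.csv")  # hypothetical provider-level dataset

# Model with patient-reported access (PREM) as the dependent variable.
prem_model = smf.ols(
    "prem_access ~ list_size + private + share_elderly + rural + competition_index",
    data=providers,
).fit()

# Model with the registered process measure of access as the dependent variable.
process_model = smf.ols(
    "registered_access ~ list_size + private + share_elderly + rural + competition_index",
    data=providers,
).fit()

print(prem_model.summary())
print(process_model.summary())
```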


VLSI Design ◽  
1999 ◽  
Vol 9 (1) ◽  
pp. 29-54 ◽  
Author(s):  
Sotirios G. Ziavras

Extensive comparative analysis is carried out of various mesh-connected architectures that contain sparse broadcast buses for low-cost, high-performance parallel computing. The two basic architectures differ in the implementation of bus intersections. The first architecture simply allows row/column bus crossovers, whereas the second architecture implements such intersections with switches that introduce further flexibility. Both architectures have lower cost than the mesh with multiple broadcast, which has buses spanning each row and each column, but the former architectures maintain to a high extent the powerful properties of the latter mesh. The architecture that employs switches for the creation of separable buses is even shown to often perform better than the higher-cost mesh with multiple broadcast. Architectures with separable buses that employ store-and-forward routing often perform better than architectures with contiguous buses that employ the high-cost wormhole routing technique. These architectures are evaluated with reference to cost and to efficiency in implementing several important operations and application algorithms. The results show that these architectures are very promising alternatives to the mesh with multiple broadcast, while their implementation is cost-effective and feasible.


MAUSAM ◽  
2022 ◽  
Vol 53 (2) ◽  
pp. 119-126
Author(s):  
R. K. MALL ◽  
B. R. D. GUPTA

Actual evapotranspiration of the wheat crop during different years from 1978-79 to 1992-93 was measured daily in Varanasi, Uttar Pradesh, using a lysimeter. In this study, three evapotranspiration computing models, namely Doorenbos and Pruitt, Thornthwaite, and Soil Plant Atmosphere Water (SPAW), have been used. A comparison of the three methods shows that the SPAW model is better than the other two for evapotranspiration estimation. In the present study, the MBE (Mean Bias Error), RMSE (Root Mean Square Error) and t-statistic have also been computed for a better evaluation of model performance.
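For reference, a minimal sketch of these error statistics is given below. It assumes the commonly used t-statistic formulation t = sqrt((n - 1) * MBE^2 / (RMSE^2 - MBE^2)), and the daily values are hypothetical, not the study's measurements:

```python
# Sketch of the error statistics for comparing modelled and lysimeter-measured
# evapotranspiration. Data arrays are hypothetical.
import numpy as np

def mbe(predicted: np.ndarray, observed: np.ndarray) -> float:
    """Mean Bias Error: average of (predicted - observed)."""
    return float(np.mean(predicted - observed))

def rmse(predicted: np.ndarray, observed: np.ndarray) -> float:
    """Root Mean Square Error."""
    return float(np.sqrt(np.mean((predicted - observed) ** 2)))

def t_stat(predicted: np.ndarray, observed: np.ndarray) -> float:
    """t-statistic combining MBE and RMSE (smaller indicates better agreement)."""
    n = len(observed)
    b, r = mbe(predicted, observed), rmse(predicted, observed)
    return float(np.sqrt((n - 1) * b ** 2 / (r ** 2 - b ** 2)))

# Hypothetical daily values (mm/day): lysimeter observations vs. one model's estimates.
observed = np.array([3.1, 4.0, 4.6, 5.2, 4.8])
modelled = np.array([3.0, 4.2, 4.4, 5.0, 4.9])

print(mbe(modelled, observed), rmse(modelled, observed), t_stat(modelled, observed))
```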


2018 ◽  
Vol 2 (2) ◽  
pp. 38
Author(s):  
Radinka Dynand Mahessara ◽  
Budi Rustandi Kartawinata

This research is based on the phenomenon of a new investment instrument, the cryptocurrency Bitcoin, which has become popular in recent years. Based on that phenomenon, this research tries to determine which of three instruments (Bitcoin, stocks, and gold) is the best investment. The research method is quantitative, with a comparative descriptive design. In this study, the authors use the Sharpe, Treynor, and Jensen index approaches to evaluate performance. The indices are compared using data for each investment instrument over the period January 2014 to August 2017. The study sample consists of the prices of each investment instrument during the study period. The results show that Bitcoin is the best instrument because, based on the Sharpe, Treynor, and Jensen values, its return is better than that of the other two instruments.
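A minimal sketch of the three indices, assuming their standard textbook definitions (mean excess return over total risk for Sharpe, over beta for Treynor, and the CAPM residual for Jensen's alpha); the return series and risk-free rate below are hypothetical, not the study's data:

```python
# Sketch of Sharpe, Treynor and Jensen performance indices. Hypothetical data.
import numpy as np

def sharpe(returns, rf):
    """Sharpe index: mean excess return per unit of total risk (std. dev.)."""
    return (returns - rf).mean() / returns.std(ddof=1)

def beta(returns, market):
    """Systematic risk: covariance with the market over market variance."""
    return np.cov(returns, market, ddof=1)[0, 1] / market.var(ddof=1)

def treynor(returns, market, rf):
    """Treynor index: mean excess return per unit of systematic risk (beta)."""
    return (returns - rf).mean() / beta(returns, market)

def jensen_alpha(returns, market, rf):
    """Jensen's alpha: return above what CAPM predicts for the asset's beta."""
    return (returns - rf).mean() - beta(returns, market) * (market - rf).mean()

# Hypothetical monthly returns for one instrument and the market benchmark.
bitcoin = np.array([0.12, -0.05, 0.20, 0.08, -0.02, 0.15])
market = np.array([0.02, 0.01, 0.03, -0.01, 0.02, 0.01])
rf = 0.004  # hypothetical monthly risk-free rate

print("Sharpe:", sharpe(bitcoin, rf))
print("Treynor:", treynor(bitcoin, market, rf))
print("Jensen alpha:", jensen_alpha(bitcoin, market, rf))
```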


2014 ◽  
Vol 26 (7) ◽  
pp. 1340-1361 ◽  
Author(s):  
Xiangxiang Zeng ◽  
Xingyi Zhang ◽  
Tao Song ◽  
Linqiang Pan

Spiking neural P systems with weights are a new class of distributed and parallel computing models inspired by spiking neurons. In such models, a neuron fires when its potential equals a given value (called a threshold). In this work, spiking neural P systems with thresholds (SNPT systems) are introduced, where a neuron fires not only when its potential equals the threshold but also when its potential is higher than the threshold. Two types of SNPT systems are investigated. In the first one, we consider that the firing of a neuron consumes part of the potential (the amount of potential consumed depends on the rule to be applied). In the second one, once a neuron fires, its potential vanishes (i.e., it is reset to zero). The computation power of the two types of SNPT systems is investigated. We prove that the systems of the former type can compute all Turing computable sets of numbers and the systems of the latter type characterize the family of semilinear sets of numbers. The results show that the firing mechanism of neurons has a crucial influence on the computation power of the SNPT systems, which also answers an open problem formulated in Wang, Hoogeboom, Pan, Păun, and Pérez-Jiménez (2010).
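The difference between the two firing disciplines can be illustrated with a toy single-neuron simulation (an informal sketch under simplified assumptions, not the formal SNPT definition from the paper):

```python
# Toy sketch of the two firing disciplines for a single neuron with threshold T:
# the neuron fires whenever its potential is >= T; in the first variant firing
# consumes a rule-specific amount, in the second the potential is reset to zero.
from dataclasses import dataclass

@dataclass
class Neuron:
    potential: float
    threshold: float

    def fire_consume(self, consumed: float) -> bool:
        """Variant 1: firing consumes part of the potential (rule-dependent amount)."""
        if self.potential >= self.threshold:
            self.potential -= consumed
            return True   # a spike is emitted
        return False

    def fire_reset(self) -> bool:
        """Variant 2: firing resets the potential to zero."""
        if self.potential >= self.threshold:
            self.potential = 0.0
            return True
        return False

n = Neuron(potential=5.0, threshold=2.0)
while n.fire_consume(consumed=2.0):   # fires at 5.0 and 3.0, stops below threshold
    print("spike, remaining potential:", n.potential)
```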


Author(s):  
A. V. Crewe

We have become accustomed to differentiating between the scanning microscope and the conventional transmission microscope according to the resolving power which the two instruments offer. The conventional microscope is capable of a point resolution of a few angstroms and line resolutions of periodic objects of about 1Å. On the other hand, the scanning microscope, in its normal form, is not ordinarily capable of a point resolution better than 100Å. Upon examining reasons for the 100Å limitation, it becomes clear that this is based more on tradition than reason, and in particular, it is a condition imposed upon the microscope by adherence to thermal sources of electrons.


Author(s):  
Maxim B. Demchenko

The sphere of the unknown, supernatural and miraculous is one of the most popular subjects of everyday discussion in Ayodhya – the last of the provinces of the Mughal Empire to enter the British Raj, in 1859, and in the distant past the space of many legendary and mythological events. Mostly these discussions concern encounters with inhabitants of the “other world” (spirits, ghosts, jinns), as well as miraculous healings following magic rituals, meetings with the so-called saints of different religions (Hindu sadhus, Sufi dervishes), and incomprehensible and frightening natural phenomena. According to the author’s observations, ideas of the unknown are codified and structured better in Avadh than in other parts of India. Local people can clearly determine whether they have witnessed a bhut or a jinn, and whether a disease is caused by witchcraft or by other reasons. Perhaps that is due to the presence in the holy town of a persistent tradition of katha, the public presentation of plots from the Ramayana epic in narrative, poetic and performative forms. But are the events and phenomena in question a miracle for the Avadhvasis, the residents of Ayodhya and its environs, or are they so commonplace that they do not surprise or fascinate? That is exactly the subject of this essay, written on the basis of materials collected by the author in Ayodhya during the period 2010-2019. The author would like to express his appreciation to Mr. Alok Sharma (Faizabad) for his advice and cooperation.


HortScience ◽  
1998 ◽  
Vol 33 (3) ◽  
pp. 452c-452 ◽  
Author(s):  
Schuyler D. Seeley ◽  
Raymundo Rojas-Martinez ◽  
James Frisby

Mature peach trees in pots were treated with nighttime temperatures of –3, 6, 12, and 18 °C for 16 h and a daytime temperature of 20 °C for 8 h until the leaves abscised in the colder treatments. The trees were then chilled at 6 °C for 40 to 70 days. Trees were removed from chilling at 40, 50, 60, and 70 days and placed in a 20 °C greenhouse under increasing daylength, spring conditions. Anthesis was faster and shoot length increased with longer chilling treatments. Trees exposed to –3 °C pretreatment flowered and grew best with 40 days of chilling. However, they did not flower faster or grow better than the other treatments with longer chilling times. There was no difference in flowering or growth between the 6 and 12 °C pretreatments. The 18 °C pretreatment resulted in slower flowering and very little growth after 40 and 50 days of chilling, but growth was comparable to other treatments after 70 days of chilling.


2020 ◽  
Vol 27 (3) ◽  
pp. 178-186 ◽  
Author(s):  
Ganesan Pugalenthi ◽  
Varadharaju Nithya ◽  
Kuo-Chen Chou ◽  
Govindaraju Archunan

Background: N-glycosylation is one of the most important post-translational mechanisms in eukaryotes. N-glycosylation predominantly occurs in the N-X-[S/T] sequon, where X is any amino acid other than proline. However, not all N-X-[S/T] sequons in proteins are glycosylated. Therefore, accurate prediction of N-glycosylation sites is essential to understand the N-glycosylation mechanism. Objective: Our motivation is to develop a computational method to predict N-glycosylation sites in eukaryotic protein sequences. Methods: We report a random forest method, Nglyc, to predict N-glycosylation sites from protein sequence using 315 sequence features. The method was trained on a dataset of 600 N-glycosylation sites and 600 non-glycosylation sites, and tested on a dataset containing 295 N-glycosylation sites and 253 non-glycosylation sites. Nglyc predictions were compared with the NetNGlyc, EnsembleGly and GPP methods. Further, the performance of Nglyc was evaluated using human and mouse N-glycosylation sites. Results: The Nglyc method achieved an overall training accuracy of 0.8033 with all 315 features. Performance comparison with the NetNGlyc, EnsembleGly and GPP methods shows that Nglyc performs better than the other methods, with high sensitivity and specificity. Conclusion: Our method achieved an overall accuracy of 0.8248 with 0.8305 sensitivity and 0.8182 specificity. The comparison study shows that our method performs better than the other methods. The applicability and success of our method were further evaluated using human and mouse N-glycosylation sites. The Nglyc method is freely available at https://github.com/bioinformaticsML/Ngly.
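A rough sketch of this kind of set-up, assuming pre-computed 315-dimensional feature vectors and using scikit-learn's random forest; the file names and hyperparameters are hypothetical, not necessarily those of Nglyc:

```python
# Sketch: random forest trained on fixed-length sequence feature vectors
# (315 features; 600 positive and 600 negative training sites). Hypothetical files.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, recall_score

# Hypothetical pre-computed feature matrices: rows are candidate N-X-[S/T] sequons,
# columns are sequence-derived features; labels are 1 (glycosylated) / 0 (not).
X_train = np.load("train_features.npy")   # shape (1200, 315)
y_train = np.load("train_labels.npy")
X_test = np.load("test_features.npy")     # shape (548, 315)
y_test = np.load("test_labels.npy")

clf = RandomForestClassifier(n_estimators=500, random_state=0)
clf.fit(X_train, y_train)

pred = clf.predict(X_test)
print("accuracy:", accuracy_score(y_test, pred))
print("sensitivity:", recall_score(y_test, pred, pos_label=1))
print("specificity:", recall_score(y_test, pred, pos_label=0))
```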

