Web Distributed Computing Systems Implementation and Modeling

Author(s):  
Fabio Boldrin ◽  
Chiara Taddia ◽  
Gianluca Mazzini

This article proposes a new approach for distributed computing. The main novelty consists in the exploitation of Web browsers as clients, thanks to the availability of JavaScript, AJAX and Flex. The described solution has two main advantages: it is client-free, so no additional programs have to be installed to perform the computation, and it requires low CPU usage, so client-side computation is not invasive for users. The solution is developed using both AJAX and Adobe® Flex® technologies, embedding a pseudo-client into a Web page that hosts the computation. While users browse the hosting Web page, the computation takes place by resolving single sub-problems and sending the solutions to the server-side part of the system. Our client-free solution is an example of a highly resilient and self-administered system that is able to organize process scheduling and error management in an autonomic manner. A mathematical model has been developed for this solution. The main goals of the model are to describe and classify different categories of problems on the basis of their feasibility, and to find the limits in the dimensioning of the scheduling system within which this approach remains convenient. The new architecture has been tested through different performance metrics by implementing two examples of distributed computing: the cracking of an RSA cryptosystem through the factorization of the public key, and the computation of the correlation index between samples in genetic data sets. Results have shown good feasibility of this approach both in a closed environment and in an Internet environment, in a typical real-world situation.
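The pseudo-client embedded in the hosting page can be pictured as a small loop that pulls a sub-problem from the server, works on it without blocking the browser, and posts the partial solution back via an AJAX-style request. The sketch below illustrates this for the RSA factorization example; the /task and /result endpoints, the Task shape and the trial-division range split are assumptions for illustration, not the paper's actual protocol.

```typescript
// Minimal sketch of a browser-embedded pseudo-client. The endpoints and
// message formats are hypothetical; the sub-problem shown is trial division
// over an assigned range, in the spirit of the RSA factorization example.

interface Task {
  id: string;
  modulus: bigint;   // RSA modulus to factor
  from: bigint;      // start of the candidate-divisor range
  to: bigint;        // end of the candidate-divisor range (exclusive)
}

async function fetchTask(): Promise<Task> {
  const res = await fetch("/task");                  // AJAX-style request to the scheduler
  const t = await res.json();
  return { id: t.id, modulus: BigInt(t.modulus), from: BigInt(t.from), to: BigInt(t.to) };
}

// Search the assigned range for a divisor, yielding periodically so the
// computation stays non-invasive while the user keeps browsing the page.
async function solve(task: Task): Promise<bigint | null> {
  let found: bigint | null = null;
  for (let d = task.from; d < task.to; d += 1n) {
    if (task.modulus % d === 0n) { found = d; break; }
    if (d % 10000n === 0n) {
      await new Promise((r) => setTimeout(r, 0));    // hand control back to the browser UI
    }
  }
  return found;
}

async function reportResult(task: Task, divisor: bigint | null): Promise<void> {
  await fetch("/result", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ id: task.id, divisor: divisor?.toString() ?? null }),
  });
}

// Main loop: keep pulling sub-problems while the hosting page is open.
export async function runPseudoClient(): Promise<void> {
  while (true) {
    const task = await fetchTask();
    const divisor = await solve(task);
    await reportResult(task, divisor);
  }
}
```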


2019 ◽  
Vol 11 (1) ◽  
pp. 38-45
Author(s):  
Himawan Wijaya

The shift in internet usage from desktop and laptop computers to mobile devices has changed the way browsers and web pages display information. Internet users generally want quick access times when visiting a website page to obtain the desired information. In the research described in this journal article, the researchers identify and explain several important factors that influence the access speed of a website page, analyzing them from a technical point of view. The main discussion of this study focuses on the evaluation of technical factors, from the programming side (server-side and client-side programming) to the design of the web page's user interface, using minified CSS together with AJAX technology. The goal of this study is to identify how much influence the technical factors mentioned above have on the speed of visitor access to a web page, apart from other technical factors such as internet network speed, the user's device and the area from which the user accesses the website page.
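As a concrete illustration of the two techniques the study evaluates, the sketch below loads a page fragment asynchronously with an AJAX-style request instead of reloading the whole page, and references a minified stylesheet; the URLs, element IDs and data shapes are hypothetical and not taken from the study itself.

```typescript
// Hypothetical example of the two techniques under evaluation:
// a minified stylesheet (smaller transfer, fewer bytes to parse) and an
// AJAX request that updates only part of the page instead of reloading it.
// In the page <head>: <link rel="stylesheet" href="/css/style.min.css">

async function loadArticleList(containerId: string): Promise<void> {
  const container = document.getElementById(containerId);
  if (!container) return;

  // Fetch only the JSON data needed for this section of the page.
  const res = await fetch("/api/articles?limit=10");
  const articles: { title: string; url: string }[] = await res.json();

  // Update the DOM in place; the rest of the page is untouched.
  container.innerHTML = articles
    .map((a) => `<li><a href="${a.url}">${a.title}</a></li>`)
    .join("");
}

loadArticleList("article-list");
```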


2012 ◽  
pp. 24-47
Author(s):  
V. Gimpelson ◽  
G. Monusova

Using different cross-country data sets and simple econometric techniques, we study public attitudes towards the police. More positive attitudes are more likely to emerge in countries that have better-functioning democratic institutions, are less prone to corruption, and enjoy more transparent and accountable police activity. These factors have a stronger impact on public opinion (trust and attitudes) than objective crime rates or the density of policemen. Citizens tend to place more trust in those (policemen) with whom they share common values and over whom they can exercise some control. The latter is a function of democracy. In authoritarian countries — “police states” — this tendency may not work directly. When we move from semi-authoritarian countries to openly authoritarian ones, the trust in the police measured by surveys can also rise. As a result, trust appears to be U-shaped along the quality-of-government axis. This phenomenon can be explained by two simple facts. First, publicly available information concerning police activity in authoritarian countries is strongly controlled; second, the police itself is better controlled by authoritarian regimes, which fear a (for them) dangerous erosion of this institution.


2020 ◽  
Author(s):  
Andri Nirwana

Abstract: The phenomenon of people forcibly taking COVID-19 corpses from the hospital so that the fardhu kifayah rites could be performed by the family and the community suggests that there is public doubt about the management of tajhiz mayat (preparation of the dead) carried out by the hospital. The circulation of a video of the ruku' movement during a funeral prayer performed by individuals at the hospital added to the public's doubts about the hospital. To address this problem, this research uses a descriptive-analytical approach, formulating the question of how COVID-19 bodies are handled in Banda Aceh; this question is answered with several theories and data sets from the field, leading to a conclusion that answers the problem formulation. Theoretically, the spread of COVID-19 is very fast: the virus measures only about 0.1 micrometers and is present in body fluids, especially the nasopharyngeal and oropharyngeal fluids of infected people. Fluids in a COVID-19 corpse can escape through every opening of the body, such as the mouth, nose, eyes and rectum, which is why special techniques are required in its management. The fardhu kifayah rites for COVID-19 bodies should be carried out by trained ustad and trained health workers so that transmission is stopped. The results of this study conclude that the management of the bodies of Muslims who died at Zainal Abidin Hospital in Banda Aceh was in accordance with the fatwa of the Aceh Ulama Council (MPU), and that the bodies were handled by trained ustad and health workers.


Author(s):  
Kostyantyn Kharchenko

An approach to organizing the execution of automated calculations using web services (in particular, REST services) is reviewed. The proposed solution simplifies the introduction of new functionality in applied systems built according to service-oriented architecture and microservice architecture principles. Its main idea is the maximal separation of server-side logic development from client-side logic: clients are used to set abstract computation goals without any dependencies on existing applied services. A centralized scheme (orchestration) is used to organize the computations, and the set of rules used to build, in multiple steps, a concrete computational scenario from an abstract goal is stored in a knowledge base. A computing-task execution subsystem is included in the software architecture of the applied system. This subsystem is composed of a service that processes incoming execution requests, a service registry and an orchestration service. Clients send requests to the execution subsystem without any references to the real services to be called. The service registry searches the knowledge base for the corresponding input request template, and then the abstract operation description matching that template is looked up. Each abstract operation may already have an implementation in the form of a workflow composed of invocations of the operations of real applied services. If no corresponding workflow exists in the database, a workflow implementation can be synthesized dynamically from the input and output data and from the functionality descriptions of the abstract operation and the registered applied services. The workflows are executed by the orchestrator service. Thus, new functions can be added on the client side without any changes on the server side; conversely, adding new services can change how the calculations are executed without updating the clients.
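A minimal sketch of the described execution subsystem is given below: a client states an abstract goal, the knowledge base maps it to an abstract operation, a stored or dynamically synthesized workflow is selected, and the orchestrator runs the workflow over the registered applied services. All type names, the matching logic and the step format are illustrative assumptions; the article does not prescribe a concrete API.

```typescript
// Illustrative sketch of the execution subsystem: request processing,
// knowledge-base lookup and workflow orchestration. Names and shapes
// are assumptions for illustration only.

interface AbstractGoal { template: string; inputs: Record<string, unknown>; }

interface WorkflowStep { service: string; operation: string; args: string[]; }
interface Workflow { steps: WorkflowStep[]; }

interface KnowledgeBase {
  findAbstractOperation(template: string): string | undefined;         // request template -> abstract operation id
  findWorkflow(operationId: string): Workflow | undefined;             // existing implementation, if any
  synthesizeWorkflow(operationId: string, inputs: string[]): Workflow; // built from I/O and functionality descriptions
}

interface AppliedServices {
  invoke(service: string, operation: string, args: Record<string, unknown>): Promise<unknown>;
}

// The client only states an abstract goal; it never references real services.
export async function executeGoal(
  goal: AbstractGoal,
  kb: KnowledgeBase,
  services: AppliedServices
): Promise<unknown> {
  const operationId = kb.findAbstractOperation(goal.template);
  if (!operationId) throw new Error(`No abstract operation for template ${goal.template}`);

  // Reuse a stored workflow or synthesize one dynamically.
  const workflow =
    kb.findWorkflow(operationId) ??
    kb.synthesizeWorkflow(operationId, Object.keys(goal.inputs));

  // Orchestrator: run the steps in order, threading intermediate results through a context.
  const context: Record<string, unknown> = { ...goal.inputs };
  let lastResult: unknown = undefined;
  for (const step of workflow.steps) {
    const args: Record<string, unknown> = {};
    for (const name of step.args) args[name] = context[name];
    lastResult = await services.invoke(step.service, step.operation, args);
    context[`${step.service}.${step.operation}`] = lastResult;  // expose result to later steps
  }
  return lastResult;
}
```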


2021 ◽  
pp. 089443932110122
Author(s):  
Dennis Assenmacher ◽  
Derek Weber ◽  
Mike Preuss ◽  
André Calero Valdez ◽  
Alison Bradshaw ◽  
...  

Computational social science uses computational and statistical methods in order to evaluate social interaction. The public availability of data sets is thus a necessary precondition for reliable and replicable research. These data allow researchers to benchmark the computational methods they develop, test the generalizability of their findings, and build confidence in their results. When social media data are concerned, data sharing is often restricted for legal or privacy reasons, which makes the comparison of methods and the replicability of research results infeasible. Social media analytics research, consequently, faces an integrity crisis. How is it possible to create trust in computational or statistical analyses, when they cannot be validated by third parties? In this work, we explore this well-known, yet little discussed, problem for social media analytics. We investigate how this problem can be solved by looking at related computational research areas. Moreover, we propose and implement a prototype to address the problem in the form of a new evaluation framework that enables the comparison of algorithms without the need to exchange data directly, while maintaining flexibility for the algorithm design.
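One way to picture such a framework is an interface in which the algorithm travels to the data holder rather than the data travelling to the researcher, and only aggregate evaluation metrics are returned. The sketch below is a hypothetical illustration of that idea, not the prototype described in the article.

```typescript
// Hypothetical illustration of evaluation without direct data exchange:
// the researcher submits an algorithm, the data holder runs it locally on
// the restricted data set, and only aggregate metrics leave the site.

interface Document { text: string; label: string; }   // e.g. a labelled social media post

// The algorithm under evaluation, trained and applied entirely at the data holder's site.
interface Algorithm {
  fit(train: Document[]): void;
  predict(texts: string[]): string[];
}

interface Metrics { accuracy: number; n: number; }

// Run by the data holder; the researcher only ever sees the returned Metrics.
export function evaluate(algorithm: Algorithm, train: Document[], test: Document[]): Metrics {
  algorithm.fit(train);
  const predictions = algorithm.predict(test.map((d) => d.text));
  const correct = predictions.filter((p, i) => p === test[i].label).length;
  return { accuracy: correct / test.length, n: test.length };
}
```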


2021 ◽  
pp. 053901842110191
Author(s):  
Loes Knaapen

When science is evaluated by bureaucrats and administrators, it is usually done by quantified performance metrics, for the purpose of economic productivity. Olof Hallonsten criticizes both the means (quantification) and purpose (economization) of such external evaluation. I share the concern that such neoliberal performance metrics are shallow, over-simplified and inaccurate, but differ in how best to oppose this reductionism. Hallonsten proposes to replace quantitative performance metrics with qualitative in-depth evaluation of science, which would keep evaluation internal to scientific communities. I argue that such qualitative internal evaluation will not be enough to challenge current external evaluation since it does little to counteract neoliberal politics, and fails to provide the accountability that science owes the public. To assure that the many worthy purposes of science (i.e. truth, democracy, well-being, justice) are valued and pursued, I argue science needs more and more diverse external evaluation. The diversification of science evaluation can go in many directions: towards both quantified performance metrics and qualitative internal assessments and beyond economic productivity to value science’s broader societal contributions. In addition to administrators and public servants, science evaluators must also include diverse counterpublics of scientists: civil society, journalists, interested lay public and scientists themselves. More diverse external evaluation is perhaps no more accurate than neoliberal quantified metrics, but by valuing the myriad contributions of science and the diversity of its producers and users, it is hopefully less partial and perhaps more just.

