Literature Review on Cloud Application

Author(s):  
N. Swetha ◽  
Dr. V. Divya

A cloud application is software whose processing logic runs in the cloud, with data and work shared between two systems: the client side and the server side. Some processing takes place on the end user's local hardware, but most data storage resides on a remote server, which is one of the major perks of using a cloud application; in some cases a cloud application can even serve a local device with no storage space of its own. Cloud applications interact with their users through a web browser, and this encourages organizations to move their infrastructure to the cloud to gain the benefits of digital transformation. Cloud applications make it easier for clients to move and manage their data safely, and they provide the flexibility that emerging organizations need to survive in the digital market. As cloud applications have grown in sophistication, many papers have been published on their various branches. This research paper focuses on the evolution and long-term trends of cloud applications. Its findings should help enterprises that are uncertain about whether to adopt the cloud.

Author(s):  
Subrata Acharya

There is a need to be able to verify plaintext HTTP content transfers. Common sense dictates that authentication and sensitive content should always be protected by SSL/HTTPS, but there is still great exploitation potential in the modification of static content in transit. Pre-computed signatures and client-side verification offer integrity protection of HTTP content in applications where SSL is not feasible. In this chapter, the authors demonstrate a mechanism by which a Web browser or other HTTP client can verify that content transmitted over an untrusted channel has not been modified. Verifiable HTTP is not intended to replace SSL. Rather, it is intended to be used in applications where SSL is not feasible, specifically when serving high-volume static content and/or content from non-secure sources such as Content Distribution Networks. Finally, the authors find content verification to be effective, with server-side overhead similar to SSL. With future optimizations such as native browser support, content verification could achieve comparable client-side efficiency.
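
A minimal sketch of what such client-side verification might look like in a browser, using the Web Crypto API; the resource-plus-".sig" URL convention and the RSA key handling are assumptions for illustration, not the chapter's actual protocol:

```js
// Hypothetical sketch: verify a static HTTP resource against a
// pre-computed detached signature before using it.

// Import the verification key once, obtained over a trusted channel
// (e.g. shipped with the application).
async function importVerifyKey(spkiBytes) {
  return crypto.subtle.importKey(
    "spki", spkiBytes,                                  // DER SubjectPublicKeyInfo
    { name: "RSASSA-PKCS1-v1_5", hash: "SHA-256" },
    false, ["verify"]
  );
}

async function fetchVerified(url, publicKey) {
  // Fetch the resource and its detached signature over the untrusted channel.
  const [body, sig] = await Promise.all([
    fetch(url).then((r) => r.arrayBuffer()),
    fetch(url + ".sig").then((r) => r.arrayBuffer()),
  ]);
  // Reject the content if it was modified in transit.
  const ok = await crypto.subtle.verify("RSASSA-PKCS1-v1_5", publicKey, sig, body);
  if (!ok) throw new Error(`Integrity check failed for ${url}`);
  return body;
}
```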


2020 ◽  
Vol 1 (2) ◽  
pp. 127
Author(s):  
Indra Gita Anugrah ◽  
Muhamad Aldi Rifai Imam Fakhruddin

The security of an application is the most important problem in an information system integration process. Authentication and authorization are usually carried out using Single Sign On (SSO), and these methods are used to secure data in a system. The authentication and authorization processes take place on the client side (web browser) in the form of cookies and on the server side (web server) in the form of a session. Sessions and cookies are valuable assets in the authentication and authorization process because they contain the data required for the login process, so they need to be secured. A session holds an encrypted combination of the username and password, while cookies store login information so that the user retains access according to the privileges granted to them. Because sessions and cookies play such an important role in authentication and authorization, a way of securing the data they carry is needed. One such way is to use a REST API and an Auth Token.
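
A minimal sketch of what the proposed approach could look like, assuming a Node/Express REST API and an HMAC-signed token; the paper does not spell out the token format, so the scheme and endpoint names below are illustrative:

```js
// Secure a REST API with a signed Auth Token instead of relying on the
// raw session cookie alone.
const crypto = require("crypto");
const express = require("express");

const SECRET = process.env.TOKEN_SECRET || "dev-only-secret"; // server-side key
const app = express();
app.use(express.json());

function sign(payload) {
  const body = Buffer.from(JSON.stringify(payload)).toString("base64url");
  const mac = crypto.createHmac("sha256", SECRET).update(body).digest("base64url");
  return `${body}.${mac}`;
}

function verify(token) {
  const [body, mac] = token.split(".");
  if (!body || !mac) return null;
  const expected = crypto.createHmac("sha256", SECRET).update(body).digest("base64url");
  if (mac.length !== expected.length ||
      !crypto.timingSafeEqual(Buffer.from(mac), Buffer.from(expected))) return null;
  return JSON.parse(Buffer.from(body, "base64url").toString());
}

// Login: exchange credentials for a token.
app.post("/login", (req, res) => {
  // ...credentials would be checked against the user store here...
  res.json({ token: sign({ user: req.body.username, exp: Date.now() + 3600000 }) });
});

// Protected endpoint: the signed token, not the cookie, carries the proof.
app.get("/api/data", (req, res) => {
  const raw = (req.headers.authorization || "").replace("Bearer ", "");
  const claims = raw && verify(raw);
  if (!claims || claims.exp < Date.now()) return res.sendStatus(401);
  res.json({ user: claims.user });
});

app.listen(3000);
```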


2021 ◽  
Vol 2094 (3) ◽  
pp. 032045
Author(s):  
A Y Unger

Abstract. A new design pattern intended for distributed cloud-based information systems is proposed. The pattern is based on the traditional client-server architecture. The server side is divided into three principal components: data storage, application server, and cache server. Each component can be used to deploy parts of several independent information systems, thus realizing a shared-resource approach. A strategy for the separation of competencies between the client and the server is proposed. The strategy assumes that the client side is responsible for application logic while the server side is responsible for data storage consistency and data access control. Data protection is ensured by means of two particular approaches: at the entity level and at the transaction level. The application programming interface for data access is presented at the level of identified transaction descriptors.
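
As a rough illustration of an API "at the level of identified transaction descriptors", the sketch below has the client open a transaction, receive an opaque descriptor from the server, and route all entity reads and writes through it; every name here is hypothetical, not the paper's actual interface:

```js
// Hypothetical client for a descriptor-based data-access API: the server
// issues the descriptor and enforces entity-level and transaction-level
// protection; the client never fabricates a descriptor itself.
class DataClient {
  constructor(baseUrl) { this.baseUrl = baseUrl; }

  async begin() {
    const r = await fetch(`${this.baseUrl}/tx`, { method: "POST" });
    const { descriptor } = await r.json(); // opaque, server-issued
    return descriptor;
  }

  async read(descriptor, entityId) {
    const r = await fetch(`${this.baseUrl}/tx/${descriptor}/entity/${entityId}`);
    if (r.status === 403) throw new Error("access denied at entity level");
    return r.json();
  }

  async write(descriptor, entityId, value) {
    await fetch(`${this.baseUrl}/tx/${descriptor}/entity/${entityId}`, {
      method: "PUT",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(value),
    });
  }

  // Whether the transaction commits is decided server-side.
  async commit(descriptor) {
    const r = await fetch(`${this.baseUrl}/tx/${descriptor}/commit`, { method: "POST" });
    return r.ok;
  }
}
```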


Author(s):  
J. Masó ◽  
A. Zabala ◽  
I. Serral ◽  
X. Pons

Abstract. Current map viewers that run on modern web browsers mainly request images generated on the fly on the server side and transferred in a pictorial format that the browser can display (PNG or JPEG). In the OGC WMS standard this is done for the whole map view, while in WMTS it is done per tile. The user cannot fine-tune a personalized visualization or perform data analysis on the client side. Remote sensing data is structured in bands that are visualized individually (manually adjusting contrast), combined into RGB compositions, or presented as spectral indices. When these operations are not available in map browsers, professionals are forced to download hundreds of gigabytes of remote sensing imagery just to take a good look at the data before deciding whether it fits a purpose. One possible solution is a web service that performs these operations on the server side (https://www.sentinel-hub.com). This paper proposes instead that the server communicate the data values to the client in a format the client can process directly, using two additions in HTML5: canvas editing and array buffers. On the client side, the user can interact with a JavaScript interface, changing symbolizations and performing some analytical operations without having to request any data from the server again. As a bonus, the user is able to query the data in a more dynamic way, applying spatial filters, creating histograms, generating animations of a time series, or performing complex calculations among bands of the different loaded datasets.
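
A minimal sketch of this client-side processing, assuming the server delivers raw little-endian Float32 band values (the URLs, image size, and band format are illustrative assumptions):

```js
// Fetch raw band values as ArrayBuffers, compute a spectral index (NDVI)
// in JavaScript, and paint the result onto an HTML5 canvas.
async function renderNdvi(redUrl, nirUrl, width, height, canvas) {
  // Receive actual data values, not a server-rendered picture.
  const [redBuf, nirBuf] = await Promise.all([
    fetch(redUrl).then((r) => r.arrayBuffer()),
    fetch(nirUrl).then((r) => r.arrayBuffer()),
  ]);
  const red = new Float32Array(redBuf);
  const nir = new Float32Array(nirBuf);

  // Symbolize on the client: NDVI mapped to a gray ramp. Re-running this
  // with another index or stretch needs no new server request.
  const ctx = canvas.getContext("2d");
  const img = ctx.createImageData(width, height);
  for (let i = 0; i < width * height; i++) {
    const ndvi = (nir[i] - red[i]) / (nir[i] + red[i] || 1); // avoid /0
    const gray = Math.round(((ndvi + 1) / 2) * 255);         // [-1,1] -> [0,255]
    img.data[4 * i] = img.data[4 * i + 1] = img.data[4 * i + 2] = gray;
    img.data[4 * i + 3] = 255; // opaque
  }
  ctx.putImageData(img, 0, 0);
}
```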


Author(s):  
Xunhua Wang ◽  
Hua Lin

Unlike existing password authentication mechanisms on the web that use passwords for client-side authentication only, password-authenticated key exchange (PAKE) protocols provide mutual authentication. In this article, we present an architecture for integrating existing PAKE protocols into the web. Our integration design consists of a client-side part and a server-side part. First, we implement the PAKE client-side functionality with a web browser plug-in, which provides a secure implementation base. The plug-in has a log-in window that can be customized by a user when the plug-in is installed. By checking the user-specific information in a log-in window, an ordinary user can easily detect a fake log-in window created by mobile code. The server-side integration comprises a web interface and a PAKE server. After a successful PAKE mutual authentication, the PAKE plug-in receives a one-time ticket and passes it to the web browser. The web browser authenticates itself by presenting this ticket over HTTPS to the web server. The plug-in then fades away, and subsequent web browsing remains the same as usual, requiring no extra user education. Our integration design supports centralized log-ins for web applications from different web sites, making it appropriate for digital identity management. A prototype has been developed to validate our design. Since PAKE protocols use passwords for mutual authentication, we believe that the deployment of this design will significantly mitigate the risk of phishing attacks.
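
A hedged sketch of the ticket hand-off described above; the PAKE run itself is elided, and the endpoint name, message shape, and postMessage channel are illustrative assumptions rather than the article's design:

```js
// Placeholder for the plug-in's PAKE run (e.g. an SRP-style exchange): on
// success, both sides share a key and the server issues a one-time ticket.
async function runPakeExchange(username, password) {
  throw new Error("PAKE exchange not sketched here");
}

// Plug-in side: complete PAKE, then hand the ticket to the page.
async function pakeLogin(username, password) {
  const ticket = await runPakeExchange(username, password); // mutual auth happens here
  window.postMessage({ type: "pake-ticket", ticket }, window.location.origin);
}

// Page side: redeem the ticket over HTTPS. The server marks it used, so a
// replayed ticket is worthless, and browsing then proceeds as usual.
window.addEventListener("message", async (ev) => {
  if (ev.origin !== window.location.origin || !ev.data || ev.data.type !== "pake-ticket") return;
  const res = await fetch("/session", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ ticket: ev.data.ticket }),
    credentials: "include", // server sets the usual session cookie on success
  });
  if (res.ok) location.reload();
});
```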


Author(s):  
Vojtěch Toman

With the growing interest in end-to-end XML web application development models, many web applications are becoming predominantly XML-based, requiring XML processing capabilities not only on the server side but often also on the client side. This paper discusses the potential benefits of using XProc for XML pipeline processing in the web browser and describes the development of a JavaScript-based XProc implementation.
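
To make the idea concrete, the sketch below shows what invoking an XProc pipeline from browser JavaScript might look like; the XProcEngine API is hypothetical and does not reproduce the paper's actual implementation:

```js
// An XProc pipeline declaring its processing steps (here, one XSLT step).
const pipelineXml = `
<p:pipeline xmlns:p="http://www.w3.org/ns/xproc" version="1.0">
  <p:xslt>
    <p:input port="stylesheet">
      <p:document href="to-html.xsl"/>
    </p:input>
  </p:xslt>
</p:pipeline>`;

async function renderArticle() {
  const engine = new XProcEngine();              // assumed constructor
  const pipeline = engine.compile(pipelineXml);  // assumed compile step
  const sourceDoc = new DOMParser().parseFromString(
    await fetch("article.xml").then((r) => r.text()),
    "application/xml"
  );
  // Run the pipeline entirely in the browser; no server round-trip.
  const resultDoc = pipeline.run({ source: sourceDoc }); // assumed run step
  document.body.append(document.importNode(resultDoc.documentElement, true));
}
```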


2020 ◽  
Vol 10 (5) ◽  
pp. 314
Author(s):  
Jingbin Yuan ◽  
Jing Zhang ◽  
Lijun Shen ◽  
Dandan Zhang ◽  
Wenhuan Yu ◽  
...  

Recently, with the rapid development of electron microscopy (EM) technology and the increasing demands of neuron circuit reconstruction, the scale of reconstruction data has grown significantly. This brings many challenges, one of which is how to effectively manage large-scale data so that researchers can mine valuable information. For this purpose, we developed a data management module with two parts: a storage and retrieval module on the server side and an image cache module on the client side. On the server side, Hadoop and HBase are introduced to handle massive data storage and retrieval. The pyramid model is adopted to store electron microscope images, representing the image at multiple resolutions. A block storage method is proposed to store volume segmentation results. We design a spatial location-based retrieval method that obtains images and segments layer by layer rapidly, achieving constant time complexity. On the client side, a three-level image cache module is designed to reduce latency when acquiring data. Through theoretical analysis and practical tests, our tool shows excellent real-time performance when handling large-scale data. Additionally, the server side can be used as the backend of other similar software or as a public database for managing shared datasets, showing strong scalability.
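
A hedged sketch of a tiered client-side tile cache in the spirit of the paper's image cache module; the levels chosen here (decoded tiles, raw bytes, server fetch) and the pyramid key scheme are assumptions, and eviction is omitted for brevity:

```js
// Tiles are keyed by pyramid coordinates: layer, resolution level, row, col.
const tileKey = (layer, level, row, col) => `${layer}/${level}/${row}/${col}`;

class TieredTileCache {
  constructor(fetchTile) {
    this.l1 = new Map();        // level 1: decoded tiles, fastest to display
    this.l2 = new Map();        // level 2: raw bytes awaiting decode
    this.fetchTile = fetchTile; // level 3: on a full miss, go to the server
  }

  async get(layer, level, row, col) {
    const key = tileKey(layer, level, row, col);
    if (this.l1.has(key)) return this.l1.get(key);       // L1 hit
    let bytes = this.l2.get(key);
    if (!bytes) {                                        // L2 miss: fetch
      bytes = await this.fetchTile(layer, level, row, col);
      this.l2.set(key, bytes);
    }
    const img = await createImageBitmap(new Blob([bytes])); // decode once
    this.l1.set(key, img);
    return img;
  }
}
```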


2015 ◽  
Vol 25 (09n10) ◽  
pp. 1611-1632
Author(s):  
Haiping Xu ◽  
Deepti Bhalerao

Despite the popularity and many advantages of cloud data storage, there are still major concerns about data stored in the cloud, such as security, reliability, and confidentiality. In this paper, we propose a reliable and secure distributed cloud data storage scheme using Reed-Solomon codes. Different from existing approaches that achieve data reliability with redundancy on the server side, our proposed mechanism relies on multiple cloud service providers (CSPs) and protects users' cloud data from the client side. In our approach, we view multiple cloud-based storage services as virtual independent disks for storing redundant data encoded with erasure codes. Since each CSP has no access to a user's complete data, the data stored in the cloud cannot be easily compromised. Furthermore, the failure or disconnection of a CSP will not result in the loss of a user's data, as the missing data pieces can be readily recovered. To demonstrate the feasibility of our approach, we developed a prototype distributed cloud data storage application using three major CSPs. The experimental results show that, besides its reliability and security benefits, the application outperforms each individual CSP for uploading and downloading files.
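
As a simplified stand-in for the paper's Reed-Solomon encoding, the sketch below stripes a file across providers with a single XOR parity shard, which tolerates the loss of any one provider (real Reed-Solomon codes tolerate more); the client-side principle is the same:

```js
// Client-side striping: split and encode before upload, so no single CSP
// ever holds the complete file.
function encodeShards(data, n) {
  // Split the file into n equal data shards (zero-padded at the end).
  const size = Math.ceil(data.length / n);
  const shards = [];
  for (let i = 0; i < n; i++) {
    const s = new Uint8Array(size);
    s.set(data.subarray(i * size, (i + 1) * size));
    shards.push(s);
  }
  // Parity shard: byte-wise XOR of all data shards.
  const parity = new Uint8Array(size);
  for (const s of shards) for (let j = 0; j < size; j++) parity[j] ^= s[j];
  shards.push(parity); // upload each of the n+1 shards to a different CSP
  return shards;
}

function recoverShard(remaining, size) {
  // Any single lost shard (data or parity) is the XOR of all the others.
  const out = new Uint8Array(size);
  for (const s of remaining) for (let j = 0; j < size; j++) out[j] ^= s[j];
  return out;
}
```

With three CSPs, as in the paper's prototype, this would correspond to two data shards plus one parity shard, so the failure of any one provider is survivable.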


Author(s):  
Chris Maloney ◽  
Alf Eaton ◽  
Jeff Beck

JATS4R (jats4r.org) is a group that provides guidelines for tagging scholarly articles in JATS XML to maximize machine-readability and the potential for content reuse. When the group formalizes a recommendation, we encode the rules in Schematron. For checking instance documents against the rules, we have implemented a validation tool (hosted at http://jats4r.org/validator/). When an instance document is processed, it is first parsed with a JavaScript implementation of xmllint, then validated against the DTD, if one is specified. The validator then checks the document against the Schematron rules, and generates a report in Schematron Validation Report Language XML (SVRL). To avoid the maintenance costs of hosting a server-side tool, the validation tool is written in JavaScript, using an emscripten port of libxml, and Saxon-CE as the client-side XSLT processor. This allows it to be hosted on a static site and run entirely within the user’s web browser. The XSLT files used for validation are generated from the Schematron rulesets offline, and an HTML report is generated from the SVRL validation results using a further XSLT transformation.
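
An orchestration sketch of that client-side pipeline; the wrapper functions xmllintParse and applyXslt are hypothetical stand-ins for the emscripten xmllint port and Saxon-CE, whose real APIs differ:

```js
// Run the whole validation flow in the browser: parse/DTD-validate, apply
// the Schematron rules (pre-compiled offline to XSLT), then render SVRL
// results as an HTML report with a further XSLT transformation.
async function validateJats(xmlText) {
  // 1. Parse, and DTD-validate if a DTD is declared, with the
  //    emscripten-compiled xmllint.
  const parseReport = xmllintParse(xmlText, { validateDtd: true }); // assumed wrapper

  // 2. Check against the JATS4R Schematron rules; since the rulesets were
  //    compiled to XSLT offline, this is just an XSLT transformation
  //    producing SVRL.
  const schematronXslt = await fetch("jats4r-rules.xsl").then((r) => r.text());
  const svrl = applyXslt(xmlText, schematronXslt); // assumed Saxon-CE wrapper

  // 3. Render the SVRL as a human-readable HTML report.
  const reportXslt = await fetch("svrl-to-html.xsl").then((r) => r.text());
  const htmlReport = applyXslt(svrl, reportXslt);

  return { parseReport, svrl, htmlReport };
}
```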


Author(s):  
Konstantinos Evangelidis ◽  
Theofilos Papadopoulos

Semantic Web technologies have been increasingly adopted by the geospatial community during the last decade through the utilization of open standards for expressing and serving geospatial data. This has also been dramatically assisted by ever-increasing access to and usage of geographic mapping and location-based services via smart devices in people's daily activities. In this paper we explore the developmental framework of a pure JavaScript client-side GIS platform exclusively based on invocable geospatial Web services. We also extend the use of JavaScript on the server side by deploying a Node server acting as a bridge between open-source WPS libraries and popular geoprocessing engines. The vehicle for such an exploration is a cross-platform Web browser capable of interpreting JavaScript commands to achieve interaction with geospatial providers. The tool is a generic Web interface providing capabilities for acquiring spatial datasets, composing layouts, and applying geospatial processes. Ideally, the end user has only to identify the services that satisfy a geo-related need and place them in the appropriate order. The final output may act as a potential collector of freely available geospatial web services. Its server-side components may exploit geospatial processing suppliers, thereby composing a lightweight, fully transparent, open Web GIS platform.
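
A minimal sketch of the Node bridge role, assuming an Express endpoint and a GDAL command-line tool as the geoprocessing engine; the endpoint shape and process name are illustrative assumptions:

```js
// Accept a WPS-style request from the browser client and forward it to a
// local geoprocessing engine.
const express = require("express");
const { execFile } = require("child_process");

const app = express();
app.use(express.json());

// The browser client POSTs { process, inputs } picked from a capabilities
// document; the bridge translates it into an engine invocation.
app.post("/wps/execute", (req, res) => {
  const { process: proc, inputs } = req.body;
  if (proc !== "hillshade") return res.status(400).send("unknown process");
  // Example translation: run GDAL's hillshade on a server-side dataset.
  execFile("gdaldem", ["hillshade", inputs.dem, "/tmp/out.tif"], (err) => {
    if (err) return res.status(500).send(err.message);
    res.sendFile("/tmp/out.tif"); // result streams back to the JavaScript client
  });
});

app.listen(8080);
```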

