Performance Driven Development Framework for Web Applications

2017 ◽  
Vol 9 (1) ◽  
pp. 75
Author(s):  
K. S. Shailesh ◽  
P. V. Suresh

The performance of web applications is of paramount importance, as it affects end-user experience and business revenue. Web Performance Optimization (WPO) deals with front-end performance engineering. Web performance influences customer loyalty, search engine ranking, site traffic, repeat visits and overall online revenue. In this paper we survey state-of-the-art tools, techniques and methodologies covering various aspects of web performance optimization. We identify key web performance patterns and propose a novel web performance driven development framework, elaborating on the techniques associated with each of its phases.
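To make the idea of a performance driven development cycle concrete, the sketch below (not taken from the paper; metric names and thresholds are illustrative) shows how a front-end performance budget could gate a build in continuous integration, one common WPO practice.

```python
# Minimal sketch (not from the paper): gating a build on a front-end
# performance budget, one way a performance-driven development cycle
# could be enforced in CI. Metric names and thresholds are illustrative.
PERFORMANCE_BUDGET = {
    "first_contentful_paint_ms": 1800,
    "time_to_interactive_ms": 3500,
    "total_page_weight_kb": 1600,
}

def check_budget(measured: dict) -> list[str]:
    """Return a list of budget violations for the measured metrics."""
    violations = []
    for metric, limit in PERFORMANCE_BUDGET.items():
        value = measured.get(metric)
        if value is not None and value > limit:
            violations.append(f"{metric}: {value} exceeds budget {limit}")
    return violations

if __name__ == "__main__":
    # Example values as might be collected from a synthetic test run.
    measured = {"first_contentful_paint_ms": 2100,
                "time_to_interactive_ms": 3300,
                "total_page_weight_kb": 1750}
    for v in check_budget(measured):
        print("BUDGET VIOLATION:", v)
```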

2021 ◽  
Vol 5 (OOPSLA) ◽  
pp. 1-26
Author(s):  
Arjun Pitchanathan ◽  
Christian Ulmann ◽  
Michel Weber ◽  
Torsten Hoefler ◽  
Tobias Grosser

Presburger arithmetic provides the mathematical core for the polyhedral compilation techniques that drive analytical cache models, loop optimization for ML and HPC, formal verification, and even hardware design. Polyhedral compilation is widely regarded as being slow due to the potentially high computational cost of the underlying Presburger libraries. Researchers typically use these libraries as powerful black-box tools, but the perceived internal complexity of these libraries, caused by the use of C as the implementation language and a focus on end-user-facing documentation, holds back broader performance-optimization efforts. With FPL, we introduce a new library for Presburger arithmetic built from the ground up in modern C++. We carefully document its internal algorithmic foundations, use lightweight C++ data structures to minimize memory management costs, and deploy transprecision computing across the entire library to effectively exploit machine integers and vector instructions. On a newly-developed comprehensive benchmark suite for Presburger arithmetic, we show a 5.4x speedup in total runtime over the state-of-the-art library isl in its default configuration and 3.6x over a variant of isl optimized with element-wise transprecision computing. We expect that the availability of a well-documented and fast Presburger library will accelerate the adoption of polyhedral compilation techniques in production compilers.
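As a rough illustration of the transprecision idea described above (explicitly not FPL's C++ implementation), the following sketch keeps coefficient vectors in fast 64-bit machine integers and promotes them to arbitrary-precision integers only when an addition would overflow; a real library would detect overflow with hardware or compiler support rather than by recomputing exactly.

```python
# Illustrative sketch of transprecision computing: stay in machine integers
# on the fast path and promote to arbitrary precision only on overflow.
import numpy as np

INT64_MIN, INT64_MAX = -(2**63), 2**63 - 1

def add_coeffs(a, b):
    """Add two coefficient vectors, staying in int64 when safe."""
    if isinstance(a, np.ndarray) and isinstance(b, np.ndarray):
        # Check for overflow using exact Python integers, then decide.
        exact = [int(x) + int(y) for x, y in zip(a, b)]
        if all(INT64_MIN <= v <= INT64_MAX for v in exact):
            return a + b                      # fast vectorized int64 path
        return exact                          # promote to big integers
    # At least one operand is already in the big-integer representation.
    return [int(x) + int(y) for x, y in zip(a, b)]

small = np.array([1, 2, 3], dtype=np.int64)
huge = np.array([INT64_MAX, 0, 0], dtype=np.int64)
print(add_coeffs(small, small))   # stays on the int64 fast path
print(add_coeffs(small, huge))    # promoted to Python big integers
```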


Semantic Web ◽  
2021 ◽  
pp. 1-16
Author(s):  
Esko Ikkala ◽  
Eero Hyvönen ◽  
Heikki Rantala ◽  
Mikko Koho

This paper presents a new software framework, Sampo-UI, for developing user interfaces for semantic portals. The goal is to provide the end-user with multiple application perspectives to Linked Data knowledge graphs, and a two-step usage cycle based on faceted search combined with ready-to-use tooling for data analysis. For the software developer, the Sampo-UI framework makes it possible to create highly customizable, user-friendly, and responsive user interfaces using current state-of-the-art JavaScript libraries and data from SPARQL endpoints, while saving substantial coding effort. Sampo-UI is published on GitHub under the open MIT License and has been utilized in several internal and external projects. The framework has been used thus far in creating six published and five forthcoming portals, mostly related to the Cultural Heritage domain, that have had tens of thousands of end-users on the Web.
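The sketch below gives a flavour of the kind of faceted query a Sampo-UI perspective issues against a SPARQL endpoint; Sampo-UI itself is a JavaScript framework, so this Python example is only an illustration, and the endpoint URL and query are placeholders.

```python
# Minimal sketch of a faceted SPARQL query: count items per facet value.
# The endpoint URL is hypothetical; Sampo-UI's own data layer is JavaScript.
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "https://example.org/sparql"   # placeholder endpoint

QUERY = """
SELECT ?type (COUNT(?item) AS ?count) WHERE {
  ?item a ?type .
} GROUP BY ?type ORDER BY DESC(?count) LIMIT 10
"""

sparql = SPARQLWrapper(ENDPOINT)
sparql.setQuery(QUERY)
sparql.setReturnFormat(JSON)
results = sparql.query().convert()

for binding in results["results"]["bindings"]:
    print(binding["type"]["value"], binding["count"]["value"])
```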


2021 ◽  
Vol 13 (2) ◽  
pp. 50
Author(s):  
Hamed Z. Jahromi ◽  
Declan Delaney ◽  
Andrew Hines

Content is a key influencing factor in Web Quality of Experience (QoE) estimation. A web user's satisfaction can be influenced by how long it takes to render and visualize the visible parts of a web page in the browser, referred to as the Above-the-fold (ATF) time. SpeedIndex (SI) has been widely used to estimate the perceived loading speed of ATF content and as a proxy metric for Web QoE estimation. Web application developers have been actively introducing innovative interactive features, such as animated and multimedia content, aiming to capture users' attention and improve the functionality and utility of web applications. However, the literature shows that, for websites with animated content, the ATF time estimated using state-of-the-art metrics may not accurately match the completed ATF time as perceived by users. This study introduces a new metric, Plausibly Complete Time (PCT), that estimates the ATF time perceived by users for websites with and without animations. PCT can be integrated with SI and with web QoE models. The accuracy of the proposed metric is evaluated on two publicly available datasets. The proposed metric shows a high positive Spearman's correlation (rs = 0.89) with the perceived ATF reported by users for websites with and without animated content. The study demonstrates that using PCT as a KPI in QoE estimation models improves the robustness of QoE estimation in comparison to using the state-of-the-art ATF time metric. Furthermore, experimental results show that estimating SI using PCT improves the robustness of SI for websites with animated content. The PCT estimation allows web application designers to identify where poor design has significantly increased ATF time and to refactor their implementation before it impacts the end-user experience.
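For reference, the standard SpeedIndex mentioned above is the integral of (1 - visual completeness) over time; the sketch below computes it from sampled visual-progress values. It is not the paper's PCT metric, which additionally accounts for animated content.

```python
# SpeedIndex from sampled visual-completeness values (standard definition,
# lower is better); not the paper's PCT metric.
def speed_index(samples):
    """samples: list of (time_ms, visual_completeness in [0, 1]),
    sorted by time and ending when the viewport is fully rendered."""
    si = 0.0
    for (t0, vc0), (t1, _) in zip(samples, samples[1:]):
        si += (1.0 - vc0) * (t1 - t0)   # area above the progress curve
    return si

# Example: a page that renders most ATF content early scores better (lower).
fast = [(0, 0.0), (500, 0.8), (1000, 1.0)]
slow = [(0, 0.0), (500, 0.2), (1000, 1.0)]
print(speed_index(fast))   # 600.0 ms
print(speed_index(slow))   # 900.0 ms
```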


MRS Bulletin ◽  
1995 ◽  
Vol 20 (8) ◽  
pp. 40-48 ◽  
Author(s):  
J.H. Westbrook ◽  
J.G. Kaufman ◽  
F. Cverna

Over the past 30 years we have seen a strong but uncoordinated effort both to increase the availability of numeric materials-property data in electronic media and to make the resultant mass of data more readily accessible and searchable for the end-user engineer. For numeric property data inquiries, the end user is best able to formulate the question and to judge the utility of the answer, in contrast to textual or bibliographic data, for which information specialists can expeditiously carry out searches.

Despite the best efforts of several major programs, there remains a shortfall in comprehensiveness and a gap between the goal of easy access to all the world's numeric databases and what can presently be achieved. The task has proven thornier, and therefore much more costly, than anyone envisioned, and computer access to data for materials scientists and engineers is still inadequate compared, for example, with the situation for molecular biologists or astronomers. However, progress has been made. More than 100 materials databases are listed and categorized by Wawrousek et al., addressing several types of applications including fundamental research, materials selection, component design, process control, materials identification and equivalency, expert systems, and education. Standardization is improving and access has become easier.

In the discussion that follows, we will examine several characteristics of available information and delivery systems to assess their impact on the successes and limitations of the available products. The discussion will include the types and uses of the data, issues of data reliability and quality, the formats in which data need to be accessed, and the media available for delivery. We will then focus on the state of the art by giving examples of the three major media through which broad electronic access to numeric properties has emerged: on-line systems, workstations, and disks, both floppy and CD-ROM. We will also cite some resources indicating where to look for numeric property data.


2022 ◽  
Vol 40 (2) ◽  
pp. 1-31
Author(s):  
Masoud Mansoury ◽  
Himan Abdollahpouri ◽  
Mykola Pechenizkiy ◽  
Bamshad Mobasher ◽  
Robin Burke

Fairness is a critical system-level objective in recommender systems that has been the subject of extensive recent research. A specific form of fairness is supplier exposure fairness, where the objective is to ensure equitable coverage of items across all suppliers in the recommendations provided to users. This is especially important in multistakeholder recommendation scenarios, where utilities must be optimized not just for the end user but also for other stakeholders, such as item sellers or producers, who desire a fair representation of their items. This type of supplier fairness is sometimes pursued by increasing aggregate diversity to mitigate popularity bias and improve the coverage of long-tail items in recommendations. In this article, we introduce FairMatch, a general graph-based algorithm that works as a post-processing approach after recommendation generation to improve exposure fairness for items and suppliers. The algorithm iteratively adds high-quality items that have low visibility, or items from suppliers with low exposure, to the users' final recommendation lists. A comprehensive set of experiments on two datasets and comparisons with state-of-the-art baselines show that FairMatch, while significantly improving exposure fairness and aggregate diversity, maintains an acceptable level of recommendation relevance.
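To illustrate the general idea of exposure-aware post-processing (a simplified toy, not the graph-based FairMatch algorithm itself), the sketch below swaps low-exposure candidate items into a user's top-k list when their predicted relevance remains acceptable.

```python
# Toy exposure-aware reranker: promote low-exposure but still-relevant items
# from the candidate tail over the most-exposed items in the top-k head.
from collections import Counter

def rerank(candidates, k, exposure, min_score):
    """candidates: list of (item_id, score) sorted by score descending.
    exposure: Counter of how often each item has been recommended so far."""
    head = [item for item, s in candidates[:k]]
    tail = [(item, s) for item, s in candidates[k:] if s >= min_score]
    tail.sort(key=lambda x: exposure[x[0]])       # least-exposed first
    for item, _ in tail:
        most_exposed = max(head, key=lambda i: exposure[i])
        if exposure[item] < exposure[most_exposed]:
            head[head.index(most_exposed)] = item
    exposure.update(head)                         # record the new exposure
    return head

exposure = Counter({"A": 50, "B": 40, "C": 2, "D": 0})
print(rerank([("A", 0.9), ("B", 0.8), ("C", 0.7), ("D", 0.3)],
             k=2, exposure=exposure, min_score=0.6))   # ['C', 'B']
```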


2021 ◽  
Vol 4 ◽  
pp. 1-8
Author(s):  
Jesse Friend ◽  
Mathias Jahnke ◽  
Niels Walen ◽  
Gernot Ramminger

Abstract. Web applications which are high functioning, efficient, and meet the performance demands of the client are essential in modern cartographic workflows. With more and more complex spatial data being integrated into web applications, such as time-related features, it is essential to harmonize the means of data presentation so that the end product is aligned with the needs of the end-user. In this paper we present a Web GIS application, built as a microservice, which displays various time-series visualizations to the user to streamline intuitiveness and functionality. The prototype provides a solution which could help to understand the various ways in which current web and spatial analysis methods can be combined to create visualizations that add value to existing spatial data in cartographic workflows.
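As a rough sketch of such a microservice (the route, fields and data are illustrative and not taken from the paper's prototype), the example below serves time-stamped point observations as a GeoJSON FeatureCollection that a map client could animate over time.

```python
# Hypothetical time-series endpoint for a Web GIS microservice, returning
# GeoJSON features filtered by date. Data and route names are placeholders.
from flask import Flask, jsonify, request

app = Flask(__name__)

# Tiny in-memory stand-in for a spatio-temporal data source.
OBSERVATIONS = [
    {"station": "S1", "lon": 11.57, "lat": 48.14, "time": "2021-01-01", "value": 3.2},
    {"station": "S1", "lon": 11.57, "lat": 48.14, "time": "2021-01-02", "value": 4.1},
]

@app.route("/timeseries")
def timeseries():
    """Return observations for one day as a GeoJSON FeatureCollection."""
    day = request.args.get("date")
    features = [
        {
            "type": "Feature",
            "geometry": {"type": "Point", "coordinates": [o["lon"], o["lat"]]},
            "properties": {"station": o["station"], "time": o["time"], "value": o["value"]},
        }
        for o in OBSERVATIONS
        if day is None or o["time"] == day
    ]
    return jsonify({"type": "FeatureCollection", "features": features})

if __name__ == "__main__":
    app.run(port=5000)
```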


2017 ◽  
Vol 14 (4) ◽  
pp. 1-32 ◽  
Author(s):  
Shashank Gupta ◽  
B. B. Gupta

This article introduces a distributed intelligence network of Fog computing nodes and Cloud data centres that protects smart devices against XSS vulnerabilities in Online Social Networks (OSNs). The cloud data centres compute the features of the JavaScript code, inject them in the form of comments and save them in the script nodes of the Document Object Model (DOM) tree. The network of Fog devices re-executes the feature computation and comment injection process on the HTTP response message and compares the resulting comments with those calculated in the cloud data centres. Any observed divergence signals the injection of XSS worms at the fog nodes located at the edge of the network. Such worms are mitigated by executing nested context-sensitive sanitization on the malicious JavaScript variables embedded in them. The prototype of the authors' work was developed in a Java development framework and installed on the virtual machines of Cloud data centres (typically located at the core of the network) and on the nodes of Fog devices (positioned at the edge of the network). Vulnerable OSN-based web applications were used to evaluate the XSS worm detection capability of the authors' framework, and the evaluation results revealed that it detects the injection of XSS worms with a high precision rate and low rates of false positives and false negatives.
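The sketch below captures the comparison step conceptually (the authors' prototype is implemented in Java, and their feature extraction is richer than a simple digest): the cloud side embeds a feature digest of each script as a trailing comment, and the fog node recomputes the digest over the script it actually received, treating any divergence as a possible injected worm.

```python
# Conceptual cloud/fog comparison step: embed a digest of each script as a
# comment, recompute it at the fog node, and flag any divergence.
import hashlib
import re

def script_features(js_code: str) -> str:
    """A trivial stand-in for the paper's JavaScript feature extraction."""
    return hashlib.sha256(js_code.encode()).hexdigest()

def annotate(js_code: str) -> str:
    """Cloud side: append the feature digest as a trailing comment."""
    return f"{js_code}\n/*features:{script_features(js_code)}*/"

def verify(annotated_js: str) -> bool:
    """Fog side: recompute features over the received code and compare."""
    match = re.search(r"/\*features:([0-9a-f]+)\*/\s*$", annotated_js)
    if not match:
        return False
    code = annotated_js[:match.start()].rstrip("\n")
    return script_features(code) == match.group(1)

original = annotate("renderFeed(user);")
tampered = original.replace("renderFeed(user);",
                            "renderFeed(user); stealCookies();")
print(verify(original))   # True: digests agree
print(verify(tampered))   # False: divergence signals a possible XSS worm
```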


Entropy ◽  
2020 ◽  
Vol 22 (4) ◽  
pp. 416
Author(s):  
Yingying Ma ◽  
Youlong Wu ◽  
Chengqiang Lu

Name ambiguity, arising because many people share an identical name, often deteriorates the performance of information integration, document retrieval and web search. In academic data analysis, author name ambiguity likewise degrades analysis performance. To address this problem, the author name disambiguation task divides the documents associated with an ambiguous author name into several groups, each corresponding to a real-life person. Existing methods usually use either the attributes of documents or the relationships between documents and co-authors. However, feature-extraction methods based on attributes make models inflexible, while solutions based on relationship graphs ignore the information contained in the features. In this paper, we propose a novel name disambiguation model based on representation learning that incorporates both attributes and relationships. Experiments on a public real-world dataset demonstrate the effectiveness of our model and show that it outperforms several state-of-the-art graph-based methods. We also increase the interpretability of our method through information theory and show that this analysis can be helpful for model selection and for monitoring training progress.
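A minimal sketch of the general approach (not the paper's model): represent each document by attribute features combined with a simple co-author-relationship signal, then cluster the documents so that each cluster corresponds to one real-life person behind the ambiguous name.

```python
# Toy name disambiguation: combine an attribute view (title TF-IDF) with a
# relationship view (co-author Jaccard similarity) and cluster the documents.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering

docs = [
    {"title": "deep learning for protein folding", "coauthors": {"Li", "Chen"}},
    {"title": "neural networks predict protein structure", "coauthors": {"Li"}},
    {"title": "power grid optimization with convex methods", "coauthors": {"Wang"}},
]

# Attribute view: TF-IDF over titles.
tfidf = TfidfVectorizer().fit_transform(d["title"] for d in docs).toarray()

# Relationship view: Jaccard similarity of co-author sets, used as features.
def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

rel = np.array([[jaccard(di["coauthors"], dj["coauthors"]) for dj in docs]
                for di in docs])

features = np.hstack([tfidf, rel])
labels = AgglomerativeClustering(n_clusters=2).fit_predict(features)
print(labels)   # documents with the same label are assigned to one person
```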

