Evaluating the Impact of Design Patterns on Software Maintainability: An Empirical Evaluation

Author(s):  
Hen Kian Jun ◽  
Muhammad Ehsan Rana


Author(s):  
Tran Thanh Luong ◽  
Le My Canh

JavaScript has become increasingly popular in recent years because of its rich feature set: it is dynamic, interpreted, and object-oriented, with first-class functions. Furthermore, JavaScript is designed around an event-driven, non-blocking I/O model that boosts overall application performance, especially in the case of Node.js. To take advantage of these characteristics, many design patterns that implement asynchronous programming in JavaScript have been proposed. However, choosing the right pattern and writing good asynchronous code is a challenge, and poor choices easily lead to less robust applications and low-quality source code. Extending our previous work on exception handling code smells in JavaScript and on exception handling code smells in JavaScript asynchronous programming with promises, this research studies the impact of three JavaScript asynchronous programming patterns on the quality of source code and applications.
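The abstract does not name the three patterns it evaluates; as an illustration only, the sketch below (TypeScript) contrasts the three asynchronous styles most commonly compared in JavaScript code: error-first callbacks, Promises, and async/await. The fetchDataCallback and fetchDataPromise helpers are hypothetical stand-ins for any asynchronous operation.

// Illustrative sketch only (not from the paper): the same asynchronous call
// written in three styles. The fetch* helpers are hypothetical stand-ins.

// 1. Error-first callback style.
function fetchDataCallback(
  url: string,
  done: (err: Error | null, body?: string) => void
): void {
  setTimeout(() => done(null, `response from ${url}`), 10);
}

// 2. Promise style: errors propagate through .catch().
function fetchDataPromise(url: string): Promise<string> {
  return new Promise<string>((resolve) =>
    setTimeout(() => resolve(`response from ${url}`), 10)
  );
}

// 3. async/await style: built on Promises, reads sequentially,
//    and uses ordinary try/catch for error handling.
async function main(): Promise<void> {
  fetchDataCallback("https://example.org", (err, body) => {
    if (err) console.error("callback error:", err);
    else console.log("callback:", body);
  });

  fetchDataPromise("https://example.org")
    .then((body) => console.log("promise:", body))
    .catch((err) => console.error("promise error:", err));

  try {
    console.log("await:", await fetchDataPromise("https://example.org"));
  } catch (err) {
    console.error("await error:", err);
  }
}

main();

Each style handles failures differently, which is exactly where the exception handling code smells studied in the earlier work tend to appear.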


2021 ◽  
Vol 11 (2) ◽  
pp. 796
Author(s):  
Alhanoof Althnian ◽  
Duaa AlSaeed ◽  
Heyam Al-Baity ◽  
Amani Samha ◽  
Alanoud Bin Dris ◽  
...  

Dataset size is a major concern in the medical domain, where a lack of data is common. This study aims to investigate the impact of dataset size on the overall performance of supervised classification models. We examined the performance of six models widely used in the medical field, namely support vector machine (SVM), neural networks (NN), C4.5 decision tree (DT), random forest (RF), AdaBoost (AB), and naïve Bayes (NB), on eighteen small medical UCI datasets. We further implemented three dataset size reduction scenarios on two large datasets and analyzed the performance of the models when trained on each resulting dataset with respect to accuracy, precision, recall, F-score, specificity, and area under the ROC curve (AUC). Our results indicate that the overall performance of classifiers depends on how well a dataset represents the original distribution rather than on its size. Moreover, we found that the most robust models for limited medical data are AB and NB, followed by SVM, and then RF and NN, while the least robust model is DT. Furthermore, an interesting observation is that a model that is robust to limited data does not necessarily provide the best performance compared with other models.
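As a quick reference for the metrics listed above, the following sketch (our own TypeScript illustration, not the study's pipeline) computes accuracy, precision, recall, specificity, and F-score from a binary confusion matrix.

// Illustrative only: per-class metrics from a binary confusion matrix.
interface Confusion { tp: number; fp: number; fn: number; tn: number; }

function metrics({ tp, fp, fn, tn }: Confusion) {
  const accuracy    = (tp + tn) / (tp + fp + fn + tn);
  const precision   = tp / (tp + fp);
  const recall      = tp / (tp + fn);        // also called sensitivity
  const specificity = tn / (tn + fp);
  const fScore      = (2 * precision * recall) / (precision + recall);
  return { accuracy, precision, recall, specificity, fScore };
}

// Example: 80 true positives, 10 false positives, 20 false negatives, 90 true negatives.
console.log(metrics({ tp: 80, fp: 10, fn: 20, tn: 90 }));

AUC, the sixth metric, additionally requires the classifier's ranking scores rather than hard predictions, so it is omitted here.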


2015 ◽  
Author(s):  
Fabián Vera ◽  
Casee Lemons ◽  
Ming Zhong ◽  
William D. Holcomb ◽  
Randy F. LaFollette

Abstract This study compares reservoir characteristics, completion methods, and production for 431 wells in six counties producing from the Wichita-Albany reservoir to assess the major factors in production optimization and derive estimates of ultimate recovery. The purpose of the study is to analyze completion design patterns across the study area by combining public and proprietary data for mining. Integrating several analyses of different natures and their respective methods, such as statistics, geology, and engineering, creates a modern approach as well as a more holistic point of view when certain measurements are missing from the data set. Furthermore, multivariate statistical analysis allows the impact of particular completion and stimulation parameters on the production outcome to be modeled while averaging out the impact of all other variables in the system. In addition to completion type, more than 18 predictor variables were examined, including treatment parameters such as fracture fluid volume, year of completion, cumulative perforated length, proppant type, proppant amount, and county location, among others. In this sense, this contribution seems unique in unifying statistical, engineering, and geological perspectives into a singular point of view. This work also provides complementary views for well production consideration.
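The "averaging out" the abstract refers to is the usual interpretation of coefficients in a multivariate model; in our notation (not the authors'), with production outcome y and completion and stimulation predictors x_1, ..., x_p for well i:

y_i = \beta_0 + \beta_1 x_{i1} + \dots + \beta_p x_{ip} + \varepsilon_i

where each \beta_j estimates the marginal effect of predictor j on production with the remaining predictors held fixed.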


Author(s):  
Sandhya Saisubramanian ◽  
Ece Kamar ◽  
Shlomo Zilberstein

Agents operating in unstructured environments often create negative side effects (NSE) that may not be easy to identify at design time. We examine how various forms of human feedback or autonomous exploration can be used to learn a penalty function associated with NSE during system deployment. We formulate the problem of mitigating the impact of NSE as a multi-objective Markov decision process with lexicographic reward preferences and slack. The slack denotes the maximum deviation from an optimal policy, with respect to the agent's primary objective, that is allowed in order to mitigate NSE as a secondary objective. Empirical evaluation of our approach shows that the proposed framework can successfully mitigate NSE and that different feedback mechanisms introduce different biases, which influence the identification of NSE.
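Written in the standard form of a lexicographic multi-objective MDP with slack (our notation; the abstract does not give the formulation explicitly), the mitigation problem is:

\min_{\pi} V_{N}^{\pi} \quad \text{subject to} \quad V_{1}^{\pi^{*}} - V_{1}^{\pi} \le \delta

where V_1^{\pi} is the expected return of policy \pi on the primary objective, \pi^{*} is a policy optimal for the primary objective alone, \delta is the slack, and V_N^{\pi} is the expected cumulative NSE penalty learned from feedback or exploration.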


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Mahdi Ghaemi Asl ◽  
Muhammad Mahdi Rashidi ◽  
Alireza Ghorbani

Purpose
This paper aims to investigate the impact of market structure and market share on the performance of the Islamic banks operating in the Iranian banking system, based on the structure-conduct-performance (SCP) paradigm.

Design/methodology/approach
The market structure of the Iranian Islamic banking system is evaluated using econometric methods to test the validity of the traditional SCP paradigm. For this purpose, the authors estimate a simple regression model consisting of several independent variables, such as market share, bank size, real gross domestic product, liquidity and the Herfindahl-Hirschman index as a proxy for concentration, and one dependent variable, namely profit, as a proxy for performance. The panel data include a sample of 22 Islamic banks operating from 2006 to 2019. Data are extracted from the balance sheets of the Islamic banks and the time-series databases of the Central Bank of Iran and the World Bank.

Findings
The study's findings indicate that both concentration and market share have a positive impact on the performance of banks in the Iranian Islamic banking system. This result contradicts both the traditional SCP and the efficient-structure hypotheses; however, it confirms the existence of an oligopoly or cartel in the Iranian Islamic banking system, in which a few banks try to gain the highest share of profit and maintain their market share by colluding with one another. This result also contradicts other studies of the market structure of the Iranian banking system, which claimed that banks in Iran operate under monopolistic competition. In addition, it shows that the privatization of some banks in Iran has not improved competition in the Iranian banking system.

Originality/value
This paper is a pioneering empirical study analyzing market structure, concentration and collusion in Iranian Islamic banking based on the SCP paradigm. The results support the existence of collusive behavior among the Islamic banks in Iran, which is not aligned with Sharia. This study clearly shows the difference between ideal Islamic banking and Islamic banking as practiced in Islamic countries, and indicates that merely prohibiting some operations, such as receiving interest, gambling and bearing excessive risk, is not enough. In fact, the Islamic banking system should be based on Sharia rules in all aspects, and much more modification and study are needed to achieve an appropriate Islamic banking system. Possible modifications to overcome the cartel-like market structure and collusive behavior in the Iranian Islamic banking system include making the Iranian banking system more transparent, letting foreign banks enter it and minimizing government intervention in it.
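The abstract does not report the exact specification; one plausible panel regression consistent with the variables it lists (our notation, not the authors') is:

\text{Profit}_{it} = \beta_0 + \beta_1\,\text{HHI}_{t} + \beta_2\,\text{MarketShare}_{it} + \beta_3\,\text{Size}_{it} + \beta_4\,\text{GDP}_{t} + \beta_5\,\text{Liquidity}_{it} + \varepsilon_{it}

where i indexes banks and t years; \beta_1 captures the concentration effect and \beta_2 the market-share effect discussed in the findings, both of which are reported as positive.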


Trials ◽  
2015 ◽  
Vol 16 (S2) ◽  
Author(s):  
Royes Joseph ◽  
Julius Sim ◽  
Reuben Ogollah ◽  
Martyn Lewis

2013 ◽  
Vol 427-429 ◽  
pp. 2223-2228
Author(s):  
Xue Bin Wang ◽  
Pei Peng Liu ◽  
Cheng Long Li ◽  
Qing Feng Tan

The I2P network has been widely used to protect user privacy via an open network of onion routers run by volunteers. To the best of our knowledge, the behavior of peers in the I2P network has not been well investigated in existing research. In this paper, we first present a simple and effective way to collect peers in the I2P network. Each day, over 82% of the peers were collected, compared with the figures published on the statistics website. With the data collected, we perform an empirical evaluation, revealing the phenomena of IP address aliasing and identity aliasing. As identity aliasing negatively impacts the availability and anonymity of the I2P network, we introduce the family concept into the I2P network to mitigate this impact.
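As an illustration of the identity-aliasing analysis (our sketch, not the authors' code; the record shape is hypothetical), the following TypeScript groups collected peer identities by IP address and reports addresses announcing more than one router identity.

// Illustrative only: flag IP addresses that host multiple router identities,
// the "identity aliasing" pattern discussed above. The PeerRecord shape is hypothetical.
interface PeerRecord { routerId: string; ip: string; }

function findIdentityAliasing(peers: PeerRecord[]): Map<string, string[]> {
  const byIp = new Map<string, Set<string>>();
  for (const { routerId, ip } of peers) {
    if (!byIp.has(ip)) byIp.set(ip, new Set());
    byIp.get(ip)!.add(routerId);
  }
  // Keep only IPs announcing more than one distinct router identity.
  const aliased = new Map<string, string[]>();
  for (const [ip, ids] of byIp) {
    if (ids.size > 1) aliased.set(ip, [...ids]);
  }
  return aliased;
}

// Example usage with toy data:
console.log(findIdentityAliasing([
  { routerId: "r1", ip: "192.0.2.1" },
  { routerId: "r2", ip: "192.0.2.1" },
  { routerId: "r3", ip: "192.0.2.9" },
]));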

