SOFTWARE REUSABILITY MODEL FOR PROCEDURE BASED DOMAIN-SPECIFIC SOFTWARE COMPONENTS

Author(s):  
PARVINDER SINGH SANDHU ◽  
HARDEEP SINGH

Automatic reusability appraisal helps in evaluating the quality of developed or developing reusable software components and in identifying reusable components in existing legacy systems, which can save the cost of developing software from scratch. However, the question of how to identify reusable components in existing systems has remained relatively unexplored. In this paper, we present a two-tier approach that studies both the structural attributes of a component and its usability, or relevancy, to a particular domain. We evaluate the Probabilistic Latent Semantic Analysis (PLSA) approach, LSA's Singular Value Decomposition (SVD) technique, LSA's Semi-Discrete Matrix Decomposition (SDD) technique, and the Naïve Bayes approach for determining the domain relevancy of software components. The approach exploits the fact that feature-vector codes can be viewed as documents containing terms (the identifiers present in the components), so text-modeling methods that capture co-occurrence information in low-dimensional spaces can be applied. In this research work, the structural attributes of software components are explored using software metrics, and software quality is inferred by a Neuro-Fuzzy (NF) inference engine that takes the metric values as input. The influence of different factors on reusability is studied, and the condition for the optimum reusability index is derived using Taguchi analysis. The NF system is optimized by selecting the initial rule base through a modified ID3 decision tree algorithm combined with the results of the Taguchi analysis. The calculated reusability value enables good-quality code to be identified automatically. The reusability values obtained are found to be close to those from the manual analysis traditionally performed by programmers or repository managers, so the developed system can be used to enhance the productivity and quality of software development.
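As a rough illustration of the domain-relevancy idea (treating the identifiers of a component as the terms of a document and applying a low-rank text model), the following Python sketch uses TF-IDF, truncated SVD (an LSA-style decomposition), and cosine similarity. The component identifier lists and the domain query are purely illustrative assumptions, not the paper's data or exact method.

```python
# Hedged sketch: LSA-style domain relevancy scoring for software components.
# Identifiers extracted from each component are treated as a "document";
# a truncated SVD projects them into a low-dimensional latent space and
# relevancy is the cosine similarity to a domain description. All data here
# is illustrative, not the corpus used in the paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

components = [
    "open_file read_buffer close_file parse_header",           # file-handling component
    "matrix_multiply vector_add transpose normalize",          # numeric component
    "connect_socket send_packet receive_packet close_socket",  # networking component
]
domain_query = ["read parse file buffer header"]                # target domain description

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(components + domain_query)

svd = TruncatedSVD(n_components=2, random_state=0)              # low-dimensional LSA space
Z = svd.fit_transform(X)

relevancy = cosine_similarity(Z[:-1], Z[-1:]).ravel()           # component vs. domain query
for name, score in zip(["file", "numeric", "network"], relevancy):
    print(f"{name:8s} domain relevancy = {score:.3f}")
```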

Author(s):  
М.А. Нокель ◽  
Н.В. Лукашевич

The results of an experimental study of adding bigrams to topic models and taking into account the similarity between them and unigrams are presented. A novel PLSA-SIM algorithm, a modification of the original PLSA (Probabilistic Latent Semantic Analysis) topic modeling algorithm, is proposed. The proposed algorithm incorporates bigrams and takes into account the similarity between them and their unigram components. Various word association measures are analyzed for selecting top-ranked bigrams and integrating them into topic models. As target text collections, articles from Russian electronic banking magazines, the English parts of the parallel corpora Europarl and JRC-Acquis, and the English digital archive of research papers in computational linguistics (ACL Anthology) are chosen. The computational experiments show that there exists a subgroup of the tested measures that rank bigrams in such a way that their inclusion into the proposed PLSA-SIM algorithm significantly improves the quality of the resulting topic models for all collections. A novel unsupervised iterative algorithm named PLSA-ITER is also proposed for adding the most relevant bigrams. The computational experiments show a further improvement in the quality of topic models compared to the original PLSA algorithm.
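As a hedged sketch of the bigram-selection step only (not the PLSA-SIM algorithm itself), the following Python fragment ranks bigrams with one association measure (PMI, via NLTK, standing in for the various measures the paper compares) and joins top-ranked bigrams into single pseudo-terms before the documents reach a topic model. The toy corpus is an assumption.

```python
# Hedged sketch of the preprocessing idea behind adding bigrams to topic models:
# rank bigrams with an association measure (PMI here) and join the top-ranked
# ones into single pseudo-terms. This is not the PLSA-SIM algorithm itself,
# only an illustrative bigram-selection step.
from nltk.collocations import BigramAssocMeasures, BigramCollocationFinder

docs = [
    "interest rate rises as central bank tightens policy".split(),
    "the central bank keeps the interest rate unchanged".split(),
    "credit risk grows when the interest rate is high".split(),
]

finder = BigramCollocationFinder.from_documents(docs)
top_bigrams = set(finder.nbest(BigramAssocMeasures.pmi, 5))   # top-5 bigrams by PMI

def merge_bigrams(tokens, bigrams):
    """Replace occurrences of selected bigrams with joined pseudo-terms."""
    out, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) in bigrams:
            out.append(tokens[i] + "_" + tokens[i + 1])
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out

for d in docs:
    print(merge_bigrams(d, top_bigrams))
```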


Author(s):  
P. Devendran ◽  
P. Ashoka Varthanan

Abstract The welding operation determines product quality standards in all metalwork products such as automobiles, aerospace vehicles, and many more. Automating the process with robots makes welding quality more reliable. In this research work, the GMAW operation is automated with the “Fanuc Robot Arc mate 100iC/12” robot. Material characteristics of the weldments, such as ultimate tensile strength, hardness, and impact strength, are predicted using a fuzzy system with triangular membership functions (TrMF) and trapezoidal membership functions (TMF). The simulated results are validated against experimental work: the experiments are designed using an L18 orthogonal array, and material characteristics are studied using fractography tests. The fuzzy system is trained on the experimental results with an IF-THEN rule base built from the L18 orthogonal array. The inference system predicts the mechanical properties of the weldments with high accuracy and a low error rate.
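For reference, the two membership-function shapes named above (triangular and trapezoidal) are commonly defined as in the Python sketch below; the breakpoints in the example are illustrative, not the paper's welding-parameter ranges.

```python
# Hedged sketch: the triangular (TrMF) and trapezoidal (TMF) membership
# functions used by fuzzy inference systems. Breakpoints are illustrative,
# not the welding-parameter ranges from the study.

def triangular(x, a, b, c):
    """Triangular membership: rises from a to a peak at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def trapezoidal(x, a, b, c, d):
    """Trapezoidal membership: rises a..b, flat b..c, falls c..d."""
    if x <= a or x >= d:
        return 0.0
    if x < b:
        return (x - a) / (b - a)
    if x <= c:
        return 1.0
    return (d - x) / (d - c)

# Example: fuzzify a hypothetical welding current of 180 A against a "medium" set.
print(triangular(180, a=150, b=200, c=250))          # 0.6
print(trapezoidal(180, a=140, b=170, c=230, d=260))  # 1.0
```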


Author(s):  
Jyoti Aggarwal ◽  
Manoj Kumar

Component-Based Software Systems (CBSS) have become a widespread and popular approach for developing reusable software applications. A software component has several important attributes, but reusability is the most frequently cited one. Software components can be reused in the development of other software applications, which reduces the time and effort of the software development process. As the number of software components grows, so does the need to identify software metrics for quantitative analysis of the different aspects of components. Reusability depends on several factors, each of which has a different impact on the reusability of a software component. In this paper, a study is performed to identify the major reusability factors and the software metrics for measuring them. This work makes it easier to measure the reusability of software components, so that software developers can measure the degree to which the features of an application can be reused in developing other applications. In this way, reusable software components can be conveniently identified, compared, and reused in an effective and efficient manner.
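A hedged illustration of how factor-level scores might be combined into a single reusability index follows; the factor names, weights, and values are assumptions for illustration only, not the metrics catalogued in the paper.

```python
# Hedged illustration: combining normalized factor scores into a single
# reusability index with a weighted sum. Factor names, weights, and values
# are assumptions for illustration, not the paper's metric catalogue.

def reusability_index(scores, weights):
    """Weighted average of factor scores, each expected in [0, 1]."""
    total_weight = sum(weights.values())
    return sum(scores[f] * weights[f] for f in weights) / total_weight

component_scores = {          # hypothetical normalized measurements
    "understandability": 0.8,
    "adaptability": 0.6,
    "portability": 0.9,
    "independence": 0.7,      # e.g., low coupling
}
factor_weights = {            # hypothetical relative importance
    "understandability": 0.3,
    "adaptability": 0.3,
    "portability": 0.2,
    "independence": 0.2,
}

print(f"Reusability index = {reusability_index(component_scores, factor_weights):.2f}")
```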


2021 ◽  
Vol 40 (3) ◽  
Author(s):  
HyunSeung Koh ◽  
Mark Fienup

Library chat services are an increasingly important communication channel for connecting patrons to library resources and services. Analysis of chat transcripts could provide librarians with insights into improving services. Unfortunately, chat transcripts consist of unstructured text data, making it impractical for librarians to go beyond simple quantitative analysis (e.g., chat duration, message count, word frequencies) with existing tools. As a stepping-stone toward a more sophisticated chat transcript analysis tool, this study investigated the application of different types of topic modeling techniques to one academic library's chat reference data collected from April 10, 2015, to May 31, 2019, with the goal of extracting the most accurate and easily interpretable topics. In this study, topic accuracy and interpretability (the quality of topic outcomes) were quantitatively measured with topic coherence metrics. Additionally, qualitative accuracy and interpretability were assessed by the librarian author of this paper based on a subjective judgment of whether topics aligned with frequently asked questions or easily inferable themes in academic library contexts. The study found that, under this qualitative human evaluation, Probabilistic Latent Semantic Analysis (pLSA) produced more accurate and interpretable topics, a result not necessarily aligned with the quantitative evaluation using all three types of topic coherence metrics. Interestingly, the commonly used Latent Dirichlet Allocation (LDA) did not necessarily perform better than pLSA. Likewise, semi-supervised techniques with human-curated anchor words, Correlation Explanation (CorEx) and guided LDA (GuidedLDA), did not necessarily perform better than the unsupervised Dirichlet Multinomial Mixture (DMM). Finally, the study found that, across the different techniques, using the entire transcript, including both sides of the interaction between the library patron and the librarian, produced higher-quality topic outcomes than using only the initial question asked by the patron.
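A small Python sketch of the quantitative side of such an evaluation, assuming gensim: fit a topic model and score it with a coherence measure. LDA and the c_v measure stand in here for the several techniques and coherence metrics compared in the study, and the toy chat-like texts are invented.

```python
# Hedged sketch of the quantitative evaluation step: fit a topic model and
# score it with a topic-coherence metric. gensim's LDA and the c_v measure
# stand in for the several techniques/metrics compared in the study; the
# toy chat-like texts are illustrative.
from gensim.corpora import Dictionary
from gensim.models import LdaModel, CoherenceModel

texts = [
    ["renew", "book", "due", "date", "library", "account"],
    ["find", "article", "database", "journal", "access"],
    ["printing", "help", "printer", "library", "card"],
    ["renew", "loan", "overdue", "fine", "account"],
    ["database", "login", "access", "journal", "proxy"],
]

dictionary = Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]

lda = LdaModel(corpus, num_topics=2, id2word=dictionary, random_state=0, passes=10)

coherence = CoherenceModel(model=lda, texts=texts, dictionary=dictionary,
                           coherence="c_v").get_coherence()
print(f"c_v topic coherence: {coherence:.3f}")
```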


Software metrics have been used to evaluate inheritance and to help designers focus on product quality and cost estimation across all lifecycle stages of the final product. By applying measurement across the different levels of a class hierarchy, inheritance can be evaluated together with reuse to obtain a better assessment of the abstraction levels of an object-oriented system. In this paper, a new hierarchical inheritance metric, PLHIM (Per Level of Hierarchical Inheritance Metric), is proposed that measures program quality across the different levels of object-orientedness. The main idea behind the proposed metric is to use measurement as a criterion for improving software development at different levels and minimizing risk, and this is demonstrated on example problems in C++ and Java.
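As a hedged illustration of measuring inheritance per level of a class hierarchy (not the PLHIM definition itself, which the paper formulates for C++ and Java problems), the following Python sketch counts classes at each depth of a toy inheritance tree.

```python
# Hedged illustration: walking a class hierarchy and counting classes at each
# inheritance level. The toy hierarchy and the simple per-level count are
# illustrative only, not the PLHIM metric definition from the paper.
from collections import defaultdict

class Shape: pass
class Polygon(Shape): pass
class Ellipse(Shape): pass
class Triangle(Polygon): pass
class Rectangle(Polygon): pass
class Square(Rectangle): pass

def depth(cls, root):
    """Number of inheritance edges from root down to cls."""
    return cls.__mro__.index(root)

levels = defaultdict(list)
for cls in [Shape, Polygon, Ellipse, Triangle, Rectangle, Square]:
    levels[depth(cls, Shape)].append(cls.__name__)

for lvl in sorted(levels):
    print(f"level {lvl}: {levels[lvl]}")   # e.g. level 3: ['Square']
```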


Author(s):  
Feidu Akmel ◽  
Ermiyas Birihanu ◽  
Bahir Siraj

Software systems are software products or applications that support business domains such as manufacturing, aviation, health care, insurance, and so on. Software quality is a means of measuring how software is designed and how well it conforms to that design. Among the variables considered for software quality are correctness, product quality, scalability, completeness, and absence of bugs. However, the quality standards used differ from one organization to another, so it is better to apply software metrics to measure software quality. Attributes gathered from source code through software metrics can serve as inputs to a software defect predictor. Software defects are errors introduced by software developers and stakeholders. Finally, this study surveys the application of machine learning to software defect data gathered from previous research works.
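A minimal sketch, assuming scikit-learn, of the metrics-based defect-prediction setup described: static-code metric values as features, a binary defect label as target, and a standard classifier. The tiny dataset and the choice of a random forest are illustrative assumptions, not the setup of any surveyed study.

```python
# Hedged sketch of a metrics-based defect predictor: static-code metrics
# (e.g., lines of code, cyclomatic complexity, coupling) are the features,
# a binary "defective" label is the target, and a standard classifier is
# trained on them. The tiny dataset and the RandomForest choice are
# illustrative assumptions.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# columns: [lines_of_code, cyclomatic_complexity, coupling]
X = [
    [120, 4, 3], [540, 18, 9], [80, 2, 1], [300, 11, 6],
    [760, 25, 12], [150, 5, 2], [420, 14, 8], [60, 1, 1],
]
y = [0, 1, 0, 1, 1, 0, 1, 0]   # 1 = module had a reported defect

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```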


Author(s):  
Simar Preet Singh ◽  
Rajesh Kumar ◽  
Anju Sharma ◽  
S. Raji Reddy ◽  
Priyanka Vashisht

Background: The fog computing paradigm has recently emerged and gained considerable attention in the present era of the Internet of Things. The growing number of devices leads to packets flowing everywhere on the Internet. To address this situation and provide computation at the network edge, fog computing has become a present-day necessity that improves traffic management and helps avoid critical congestion. Methods: For research purposes, there are several ways to implement fog computing scenarios, e.g., real-time implementation, implementation using emulators, and implementation using simulators. The present study describes the various simulation and emulation tools for implementing fog computing scenarios. Results: The review shows that iFogSim is the simulator most researchers use in their work. Among emulators, EmuFog is used more widely than the other available emulators. This may be due to the ease of implementation and the user-friendly nature of these tools and of the languages they are based on. Using such tools improves the research experience and leads to better quality-of-service parameters (such as bandwidth, network, and security). Conclusion: There are many fog computing simulators and emulators based on different platforms and programming languages. The paper concludes that the two main simulation and emulation tools in the area of fog computing are iFogSim and EmuFog. The accessibility and ease of use of these tools improve the research experience and lead to better quality-of-service parameters.

