Constructing a Knowledge-Based Quality Attributes Relationship Matrix to Identify Conflicts in Non-Functional Requirements

2020 ◽  
Vol 17 (1) ◽  
pp. 122-129
Author(s):  
Unnati S. Shah ◽  
Sankita J. Patel ◽  
Devesh C. Jinwala

A critical success factor in requirements engineering is recognizing conflicts among non-functional requirements (NFRs). Recent approaches identify conflicts in NFRs using a quality attributes relationship matrix (QARM), which represents how one quality attribute undermines (-) or supports (+) the achievement of other quality attributes. However, static QARMs are not always obvious and may vary with the available standards, the development context, and the stakeholders involved. In addition, these matrices do not contain relations for emergent quality attributes such as recoverability, context awareness, and mobility, among others. Furthermore, identifying conflicts in NFRs using the matrix without knowing the purpose of the NFRs in the system may produce false conflict identifications. Hence, the aim of our research is to identify relations and influences between NFR attributes (a.k.a. quality attributes) from available unconstrained natural-language documents through automated natural language processing and machine learning, which helps to deal with false conflict identification.
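The QARM idea described above can be sketched as a pairwise lookup table. The attribute names and the +/- entries below are hypothetical illustrations, not the matrix from the paper:

```python
# Illustrative sketch of a static Quality Attributes Relationship Matrix (QARM).
# "-" means the first attribute undermines the second; "+" means it supports it.
# The specific entries are invented for illustration only.
QARM = {
    ("security", "performance"): "-",
    ("security", "reliability"): "+",
    ("usability", "security"): "-",
}

def relation(attr_a, attr_b):
    """Look up the pairwise relation, treating the matrix as symmetric."""
    return QARM.get((attr_a, attr_b)) or QARM.get((attr_b, attr_a))

def find_conflicts(required_attrs):
    """Flag attribute pairs whose matrix entry marks a potential conflict."""
    conflicts = []
    for i, a in enumerate(required_attrs):
        for b in required_attrs[i + 1:]:
            if relation(a, b) == "-":
                conflicts.append((a, b))
    return conflicts

print(find_conflicts(["security", "performance", "usability"]))
```

As the abstract notes, such a static lookup can report a false conflict when the purpose of each NFR in the system is ignored, which is the gap the proposed NLP-based approach targets.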

2021 ◽  
Vol 54 (2) ◽  
pp. 1-37
Author(s):  
Dhivya Chandrasekaran ◽  
Vijay Mago

Estimating the semantic similarity between text data is one of the challenging and open research problems in the field of Natural Language Processing (NLP). The versatility of natural language makes it difficult to define rule-based methods for determining semantic similarity measures. To address this issue, various semantic similarity methods have been proposed over the years. This survey article traces the evolution of such methods beginning from traditional NLP techniques such as kernel-based methods to the most recent research work on transformer-based models, categorizing them based on their underlying principles as knowledge-based, corpus-based, deep neural network–based methods, and hybrid methods. Discussing the strengths and weaknesses of each method, this survey provides a comprehensive view of existing systems in place for new researchers to experiment and develop innovative ideas to address the issue of semantic similarity.
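Among the families surveyed above, corpus-based measures are the simplest to illustrate: represent each text as a bag-of-words count vector and compare the vectors with cosine similarity. This is a minimal sketch with invented example sentences, not a method from the survey itself:

```python
# Minimal corpus-based similarity sketch: bag-of-words count vectors
# compared by cosine similarity. Example sentences are illustrative only.
from collections import Counter
from math import sqrt

def cosine_similarity(text_a, text_b):
    """Cosine of the angle between the two bag-of-words count vectors."""
    va, vb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm_a = sqrt(sum(c * c for c in va.values()))
    norm_b = sqrt(sum(c * c for c in vb.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

print(round(cosine_similarity("the cat sat on the mat",
                              "the cat lay on the mat"), 2))
```

Knowledge-based, deep neural network-based, and hybrid methods replace these raw count vectors with taxonomy distances or learned embeddings, but the comparison step is often still a cosine.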


Author(s):  
Saravanakumar Kandasamy ◽  
Aswani Kumar Cherukuri

Quantifying the semantic similarity between concepts is an essential part of domains such as Natural Language Processing, Information Retrieval, and Question Answering, enabling a better understanding of texts and their relationships. Over the last few decades, many measures have been proposed that incorporate various corpus-based and knowledge-based resources. WordNet and Wikipedia are two such knowledge-based resources. The contribution of WordNet to these domains is enormous, owing to its richness in defining a word and all of its relationships with other words. In this paper, we propose an approach to quantify the similarity between concepts that exploits the synsets and gloss definitions of different concepts in WordNet. Our method considers the gloss definitions, the contextual words that help define a word, the synsets of those contextual words, and the confidence of occurrence of a word in another word's definition when calculating similarity. Evaluation on different gold-standard benchmark datasets shows the efficiency of our system in comparison with existing taxonomical and definitional measures.
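The core intuition of such gloss-based measures can be sketched as word overlap between definitions. Real systems draw glosses and synsets from WordNet; the tiny hand-written glosses here are placeholders for illustration only, and the paper's actual measure also weights contextual words and synset evidence:

```python
# Minimal gloss-overlap similarity sketch. The glosses below are invented
# stand-ins for WordNet gloss definitions; a real implementation would
# query WordNet and also use synsets of the contextual words.
STOPWORDS = {"a", "an", "the", "of", "that", "is", "used", "for", "to"}

GLOSSES = {
    "car": "a motor vehicle with four wheels used for transporting people",
    "automobile": "a wheeled motor vehicle used to carry passengers",
    "banana": "an elongated edible fruit with yellow skin",
}

def content_words(text):
    """Lower-case the gloss and drop stopwords, keeping content words."""
    return {w for w in text.lower().split() if w not in STOPWORDS}

def gloss_similarity(a, b):
    """Jaccard overlap of the content words in the two glosses."""
    wa, wb = content_words(GLOSSES[a]), content_words(GLOSSES[b])
    return len(wa & wb) / len(wa | wb)

print(gloss_similarity("car", "automobile") > gloss_similarity("car", "banana"))
```

Even this crude overlap ranks "car" closer to "automobile" than to "banana"; the confidence weighting described in the abstract refines exactly this kind of score.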


Author(s):  
Azleena Mohd Kassim ◽  
Yu-N Cheah

Information Technology (IT) is often employed to put knowledge management policies into operation. However, many of these tools require human intervention when it comes to deciding how the knowledge is to be managed. The Semantic Web may be an answer to this issue, but many Semantic Web tools are not readily available to the regular IT user. Another problem is that typical efforts to apply or reuse knowledge via a search mechanism do not necessarily link to other relevant pages. Blogging systems appear to address some of these challenges, but the browsing experience can be further enhanced by providing links to other relevant posts. In this chapter, the authors present a semantic blogging tool called SEMblog to identify, organize, and reuse knowledge based on the Semantic Web and ontologies. The SEMblog methodology brings together technologies such as Natural Language Processing (NLP), Semantic Web representations, and the ubiquity of the blogging environment to produce a more intuitive way to manage knowledge, especially in the areas of knowledge identification, organization, and reuse. Based on detailed comparisons with other similar systems, the uniqueness of SEMblog lies in its ability to automatically generate keywords and semantic links.


Author(s):  
Iraj Mantegh ◽  
Nazanin S. Darbandi

Robotic alternatives to many manual operations fall short in application due to the difficulty of capturing the manual skill of an expert operator. One of the main problems to be solved, if robots are to become flexible enough for various manufacturing needs, is end-user programming. An end-user with little or no technical expertise in robotics needs to be able to communicate the manufacturing task to the robot efficiently. This paper proposes a new method for robot task planning using concepts from Artificial Intelligence. Our method is based on a hierarchical knowledge representation and propositional logic, which allow an expert user to incrementally integrate process and geometric parameters with the robot commands. The objective is to provide an intelligent, programmable agent such as a robot with a knowledge base about the attributes of human behaviors in order to facilitate the commanding process. The focus of this work is on robot programming for manufacturing applications, where industrial manipulators are typically programmed in low-level languages. This work presents a new method based on Natural Language Processing (NLP) that allows a user to generate robot programs using a natural language lexicon and task information. This will enable a manufacturing operator (for example, in painting) who may be unfamiliar with robot programming to employ the agent easily for manufacturing tasks.


Author(s):  
KOH TOH TZU

Since the end of last year, researchers at the Institute of Systems Science (ISS) have been considering a more ambitious project as part of the Institute's multilingual programming objective. This project examines the domain of Chinese business letter writing. With the problem defined as generating Chinese letters to meet business needs, investigations suggest an intersection of three possible approaches: knowledge engineering, form processing, and natural language processing. This paper reports some of the findings and documents the design and implementation issues that have arisen and been tackled as prototyping work has progressed.


2019 ◽  
Vol 9 (2) ◽  
pp. 3985-3989 ◽  
Author(s):  
P. Sharma ◽  
N. Joshi

The purpose of word sense disambiguation (WSD) is to determine, computationally, the proper meaning of a lexeme in its available context and the relationships between lexical items. This is done using natural language processing (NLP) techniques, which handle input from machine translation (MT), NLP-specific documents, or output text. MT automatically translates text from one natural language into another. Application areas for WSD include information retrieval (IR), lexicography, MT, text processing, and speech processing. In this article, we investigate Hindi WSD using a knowledge-based technique, which incorporates word knowledge from external knowledge resources to remove the ambiguity of words. In this experiment, we developed a WSD tool based on a knowledge-based approach using the Hindi WordNet. The tool uses the knowledge-based Lesk algorithm for Hindi WSD. Our proposed system gives an accuracy of about 71.4%.
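The Lesk idea used above can be sketched in a few lines: pick the sense whose dictionary gloss shares the most words with the surrounding context. The tool described works over the Hindi WordNet; the English word, senses, and glosses below are hypothetical stand-ins for illustration:

```python
# Simplified Lesk sketch: choose the sense whose gloss overlaps most with
# the context. The sense inventory below is invented for illustration;
# the actual tool draws senses and glosses from the Hindi WordNet.
SENSES = {
    "bank": {
        "bank#1": "a financial institution that accepts deposits and lends money",
        "bank#2": "sloping land beside a body of water such as a river",
    }
}

def simplified_lesk(word, context):
    """Return the sense of `word` whose gloss shares the most words with `context`."""
    context_words = set(context.lower().split())
    best_sense, best_overlap = None, -1
    for sense, gloss in SENSES[word].items():
        overlap = len(context_words & set(gloss.lower().split()))
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense

print(simplified_lesk("bank", "he sat on the river bank watching the water"))
```

Here the context words "river" and "water" overlap with the gloss of the second sense, so it is selected; reported accuracies such as the 71.4% above come from evaluating this kind of gloss-overlap decision against manually sense-tagged data.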


2017 ◽  
Vol 2 (3) ◽  
pp. 243
Author(s):  
Mustika Lukman Arief

<p><em>This study uses a descriptive qualitative approach. The data are primary data obtained through a field survey. The objects of the research are the employees of PT. Putra Mustika Prajasa Cargo (PMPC) Jakarta, with the whole population of 137 people sampled directly; the sampling technique is the census method. Knowledge has become a powerful force for winning the competition in this knowledge-economy era. Executives and managers who run and direct organizations need the support of information and knowledge to develop corporate strategy and make better decisions in order to maintain and build a strong competitive edge. How an organization manages information and knowledge will be a critical success factor in becoming the market leader. Knowledge that embodies experience, intuition, best practices, and lessons learned becomes an intangible asset that can be used to achieve goals and objectives. A knowledge management system that collects, processes, stores, retrieves, communicates, and shares data, information, and knowledge becomes a tool for creating value for the organization and its shareholders, and accelerates the cultural development needed to become a knowledge-based organization. The results show that knowledge is a major source of competitive advantage and that companies should put knowledge management into operational practice.</em></p><p><em>The paper also reviews the existing literature on the issues discussed: Is knowledge the main competitive advantage? How should a company put knowledge management into practice? Will knowledge management practice generate and sustain competitive advantage? These are the related issues and the focus of this paper.</em></p>


Author(s):  
Roy Rada

The techniques of artificial intelligence include knowledge-based, machine learning, and natural language processing techniques. The discipline of investing requires data identification, asset valuation, and risk management. Artificial intelligence techniques apply to many aspects of financial investing, and published work has shown an emphasis on the application of knowledge-based techniques for credit risk assessment and machine learning techniques for stock valuation. However, in the future, knowledge-based, machine learning, and natural language processing techniques will be integrated into systems that simultaneously address data identification, asset valuation, and risk management.


Author(s):  
Gorkem Eken ◽  
Gozde Bilgin ◽  
Irem Dikmen ◽  
M. Talat Birgonul

Portfolio management comprises identifying, prioritizing, authorizing, managing, and controlling projects, programs, and other related work to achieve specific strategic business objectives. Utilizing a knowledge-based portfolio management approach can be a critical success factor for construction companies. This research aims to present a taxonomy to facilitate the learning process within a knowledge-based project portfolio management system. The taxonomy supports the codification and classification of lessons revealed during the life cycle of projects, enhancing their retrieval. Within this context, following a detailed literature review, the taxonomy is structured under four main categories: "project", "process", "actor", and "resource". The categories enable tagging of lessons learned at the intended level of detail and facilitate the retrieval and reuse of those lessons in forthcoming projects. In this paper, we present the structure of the proposed taxonomy and discuss how it can be used to improve portfolio management in construction.


2020 ◽  
Vol 30 (2) ◽  
pp. 155-174
Author(s):  
Tim Hutchinson

Purpose - This study aims to provide an overview of recent efforts relating to natural language processing (NLP) and machine learning applied to archival processing, particularly appraisal and sensitivity reviews, and to propose functional requirements and workflow considerations for transitioning from experimental to operational use of these tools.
Design/methodology/approach - The paper has four main sections: (1) a short overview of the NLP and machine learning concepts referenced in the paper; (2) a review of the literature reporting on NLP and machine learning applied to archival processes; (3) an overview and commentary on key existing and developing tools that use NLP or machine learning techniques for archives; and (4) a discussion, informed by this review and analysis, of functional requirements and workflow considerations for NLP and machine learning tools for archival processing.
Findings - Applications for processing e-mail have received the most attention so far, although most initiatives have been experimental or project based. It now seems feasible to branch out and develop more generalized tools for born-digital, unstructured records. Effective NLP and machine learning tools for archival processing should be usable, interoperable, flexible, iterative, and configurable.
Originality/value - Most implementations of NLP for archives have been experimental or project based. The main exception that has moved into production is ePADD, which includes robust NLP features through its named entity recognition module. This paper takes a broader view, assessing the prospects and possible directions for integrating NLP tools and techniques into archival workflows.

