Quality Assurance Techniques in SRS Documents

Software requirements are the foundation of any software development process: requirements documents capture the needs of customers in natural language. Traditionally, the content of these documents is checked manually through reviews, inspections, and walkthroughs to reduce ambiguity. In recent years, advances in automated natural language analysis have made it possible to automate these activities. Text mining and text analysis techniques now make automated processing of requirements documents feasible: analyses that once took weeks can be completed in minutes. This has opened numerous possibilities for requirements quality assurance, including automated model checking, automated rule checking, automated test case execution, and measurement automation. Most current tools are still at an experimental stage, and more are expected to emerge; there is a definite need for further research in this field.
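Automated rule checking of the kind described can be sketched in a few lines. The rule set below (the vague-term list and the imperative-keyword check) is purely illustrative; production requirements checkers use much larger, configurable rule bases.

```python
import re

# Illustrative "weak words" commonly flagged by requirements quality
# checkers; real tools use far larger, configurable term lists.
VAGUE_TERMS = {"fast", "user-friendly", "efficient", "appropriate", "etc"}

def check_requirement(text: str) -> list[str]:
    """Return rule violations found in a single requirement sentence."""
    issues = []
    words = set(re.findall(r"[a-zA-Z-]+", text.lower()))
    for term in sorted(VAGUE_TERMS & words):
        issues.append(f"ambiguous term: '{term}'")
    if not {"shall", "must"} & words:
        issues.append("no imperative keyword (shall/must)")
    return issues
```

A sentence such as "The system should be fast and user-friendly." would be flagged both for its vague adjectives and for lacking an imperative keyword.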

IEEE Access ◽  
2021 ◽  
Vol 9 ◽  
pp. 137613-137634
Author(s):  
Issa Atoum ◽  
Mahmoud Khalid Baklizi ◽  
Izzat Alsmadi ◽  
Ahmed Ali Otoom ◽  
Taha Alhersh ◽  
...  

Author(s):  
Faisal Mokammel ◽  
Eric Coatanea ◽  
Francois Christophe ◽  
Mohamed Ba Khouya ◽  
Galina Medyna

In engineering design, the needs of stakeholders are often captured and expressed in natural language (NL). While this facilitates tasks such as sharing information with non-specialists, it raises several problems, including ambiguity, incompleteness, understandability, and testability. Traditionally, these issues were managed through tedious procedures such as reading requirements documents and looking for errors, but new approaches are being developed to assist designers in collecting, analysing, and clarifying requirements. The quality of the end product is strongly related to the clarity of requirements, so requirements should be managed carefully. This paper proposes to combine diverse requirements quality measures found in the literature, integrating these metrics coherently in a single software tool. It also proposes a new metric for clustering requirements based on their similarity, which improves the quality of the requirements model. The proposed methodology is tested on a case study; the results show that the tool provides designers with insight into the quality of individual requirements as well as a holistic assessment of the entire set of requirements.
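Similarity-based clustering of requirements, of the kind the paper's new metric enables, can be sketched with term-frequency vectors and cosine similarity. The greedy single-pass scheme and the 0.5 threshold here are assumptions for illustration, not the paper's actual metric:

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster(requirements: list[str], threshold: float = 0.5) -> list[list[str]]:
    """Greedy single-pass clustering: each requirement joins the first
    cluster whose representative is similar enough, else starts a new one."""
    clusters, reps = [], []
    for req in requirements:
        vec = Counter(req.lower().split())
        for i, rep in enumerate(reps):
            if cosine(vec, rep) >= threshold:
                clusters[i].append(req)
                break
        else:
            clusters.append([req])
            reps.append(vec)
    return clusters
```

Two requirements that differ in only one word ("log every login" vs. "log every logout") end up in the same cluster, while an unrelated requirement starts a cluster of its own.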


Author(s):  
J. F. M. Burg ◽  
R. P. van de Riet

In this paper it is argued that CASE environments could and should be enhanced considerably by using theories and knowledge from linguistics. The environments should 'know' about the language of their users and the domains they are used for. By basing the modeling techniques supported by the CASE tool on linguistic theories and by incorporating Natural Language parsing and generating tools, the CASE environment is able to handle the users' language in an accurate way. More specifically, the CASE environment deals with the meaning of words, instead of the meaningless strings themselves. These meanings, which are retrieved from an online lexicon, are linked to the words used in both the requirements documents and the conceptual models in order to achieve a certain degree of consistency between the two. The base structure of these models is automatically derived by analyzing the textual requirements documents that describe the domain under consideration. This Natural Language analysis consists of parsing the texts and retrieving the word meanings that correspond to concepts that may be of interest for modeling the domain accurately. Furthermore, the resulting models can be validated by people who are not familiar with the modeling notations used, by paraphrasing the models into Natural Language sentences. This paper mainly focuses on the benefits gained by using linguistic knowledge in CASE environments, on the philosophy behind this approach, and on three specific Natural Language components: the lexicon, Natural Language analysis, and text generation for requirements validation. This article is based on the paper "Truly Intelligent CASE Environments Profit from Linguistics" [7], which was presented during the SEKE conference in Madrid (June 1997).
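The idea of resolving words to shared lexicon meanings before checking consistency between requirements and conceptual models can be illustrated with a toy lexicon. The concept identifiers and the containment check below are hypothetical simplifications, not the paper's actual mechanism:

```python
# Toy lexicon mapping surface words to concept identifiers; a real CASE
# environment would retrieve meanings from an online lexicon instead.
LEXICON = {
    "customer": "Person#Customer",
    "client": "Person#Customer",   # synonym resolves to the same concept
    "order": "Artifact#Order",
}

def concepts(words):
    """Map words to lexicon concepts, ignoring words not in the lexicon."""
    return {LEXICON[w] for w in words if w in LEXICON}

def consistent(requirement_words, model_terms):
    """A simple consistency notion: every concept mentioned in the
    requirements must also appear in the conceptual model."""
    return concepts(requirement_words) <= concepts(model_terms)
```

Because "client" and "customer" resolve to the same concept, a requirement that mentions clients is consistent with a model that speaks of customers; string comparison alone would miss this.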


2020 ◽  
Vol 0 (0) ◽  
Author(s):  
Fridah Katushemererwe ◽  
Andrew Caines ◽  
Paula Buttery

This paper describes an endeavour to build natural language processing (NLP) tools for Runyakitara, a group of four closely related Bantu languages spoken in western Uganda. In contrast with major world languages such as English, for which corpora are comparatively abundant and NLP tools are well developed, computational linguistic resources for Runyakitara are in short supply. First, therefore, we need to collect corpora for these languages before we can proceed to the design of a spell-checker, grammar-checker and applications for computer-assisted language learning (CALL). We explain how we are collecting primary data for a new Runya Corpus of speech and writing, we outline the design of a morphological analyser, and discuss how we can use these new resources to build NLP tools. We are initially working with Runyankore–Rukiga, a closely-related pair of Runyakitara languages, and we frame our project in the context of NLP for low-resource languages, as well as CALL for the preservation of endangered languages. We put our project forward as a test case for the revitalization of endangered languages through education and technology.
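A rule-based morphological analyser of the kind outlined can be sketched as prefix stripping over noun-class prefixes. The two prefixes below (omu-/aba-, the common Bantu class 1/2 "person" prefixes, as in omuntu/abantu) are a deliberately tiny illustration; a real Runyakitara analyser must encode far richer morphology:

```python
# Minimal illustrative prefix table; a production analyser for
# Runyankore-Rukiga would cover the full noun-class and verb systems.
PREFIXES = {
    "aba": "class 2 (plural, people)",
    "omu": "class 1 (singular, person)",
}

def analyse(word: str) -> dict:
    """Strip a known noun-class prefix and report the stem and class."""
    for prefix, gloss in PREFIXES.items():
        if word.startswith(prefix):
            return {"prefix": prefix, "stem": word[len(prefix):], "class": gloss}
    return {"prefix": None, "stem": word, "class": "unknown"}
```

Pairing singular and plural forms then reduces to comparing stems: analyse("omuntu") and analyse("abantu") both yield the stem "ntu".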


Author(s):  
TIAN-SHUN YAO

Based on a word-based theory of natural language processing, a word-based Chinese language understanding system has been developed. Informed by psychological language analysis and the features of the Chinese language, this theory is presented together with a description of the computer programs based on it. The heart of the system is a Total Information Dictionary and the World Knowledge Source used by the system. The purpose of this research is to develop a system that can understand not only individual Chinese sentences but also whole texts.


Complexity ◽  
2019 ◽  
Vol 2019 ◽  
pp. 1-20
Author(s):  
Paulo A. M. Barbosa ◽  
Plácido R. Pinheiro ◽  
Francisca R. V. Silveira ◽  
Marum Simão Filho

During the software development process, the decision maker (DM) must master many variables inherent in the process. Software releases represent the order in which a set of requirements is implemented and delivered to the customer. Structuring and enumerating a set of releases with prioritized requirements is a challenging task because each requirement has its own characteristics, such as technical precedence, the cost of implementation, and the importance that one or more customers attach to it, among other factors. To facilitate the selection and prioritization of releases, the decision maker may adopt support tools. One established field of study for this type of problem is Search-Based Software Engineering (SBSE), which uses metaheuristics to find good solutions with respect to a set of well-defined objectives and constraints. In this paper, we seek to broaden the options for solving the Next Release Problem by using methods from Verbal Decision Analysis (VDA). We generate a problem instance and submit it to both the VDA and SBSE methods. To validate the research, we compare the results obtained through VDA with the SBSE results, and we present and discuss them in the respective sections.
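The Next Release Problem itself is easy to state: choose a subset of requirements that maximizes delivered value subject to a cost budget and precedence constraints. The greedy value/cost heuristic below is a hypothetical baseline for illustration, not the VDA or SBSE methods evaluated in the paper:

```python
def next_release(reqs: list[dict], budget: int):
    """Greedy heuristic for the Next Release Problem: repeatedly pick the
    best remaining requirement by value/cost ratio, respecting the budget
    and precedence (a requirement is eligible only once all of its
    prerequisites have been selected)."""
    selected, cost = set(), 0
    candidates = sorted(reqs, key=lambda r: r["value"] / r["cost"], reverse=True)
    changed = True
    while changed:                      # repeat until no pick is possible
        changed = False
        for r in candidates:
            if (r["id"] not in selected
                    and cost + r["cost"] <= budget
                    and set(r.get("requires", [])) <= selected):
                selected.add(r["id"])
                cost += r["cost"]
                changed = True
    return selected, cost
```

With a budget of 8, a high-ratio requirement and its dependent are chosen first, and a third requirement that would exceed the budget is left for a later release. Metaheuristics such as genetic algorithms explore the same search space but can escape the local optima this greedy rule gets stuck in.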


2020 ◽  
Author(s):  
Joshua Conrad Jackson ◽  
Joseph Watts ◽  
Johann-Mattis List ◽  
Ryan Drabble ◽  
Kristen Lindquist

Humans have been using language for thousands of years, but psychologists seldom consider what natural language can tell us about the mind. Here we propose that language offers a unique window into human cognition. After briefly summarizing the legacy of language analyses in psychological science, we show how methodological advances have made these analyses more feasible and insightful than ever before. In particular, we describe how two forms of language analysis—comparative linguistics and natural language processing—are already contributing to how we understand emotion, creativity, and religion, and overcoming methodological obstacles related to statistical power and culturally diverse samples. We summarize resources for learning both of these methods, and highlight the best way to combine language analysis techniques with behavioral paradigms. Applying language analysis to large-scale and cross-cultural datasets promises to provide major breakthroughs in psychological science.

