Assessment of Software Testing and Quality Assurance in Natural Language Processing Applications and a Linguistically Inspired Approach to Improving It

Author(s): K. Bretonnel Cohen, Lawrence E. Hunter, Martha Palmer
2021, Vol 23 (08), pp. 295-304
Author(s): Sai Deepak Reddy Konreddy

The number of applications being built and deployed every day is increasing by leaps and bounds. To ensure the best user and client experience, an application needs to be free of bugs and other service issues, which makes testing a critical phase of application development and deployment. Testing is broadly divided into two parts: manual testing and automation testing. In manual testing, an individual tester is given guidance for exercising the software and reports each finding as "passed" or "failed" accordingly; this kind of testing, however, is costly and time-consuming. Automation testing was introduced to eliminate these shortcomings, but its scope is narrow and its applications are limited. Meanwhile, Artificial Intelligence has been making inroads into many domains and showing significant impact in them. This paper discusses the core principles of Natural Language Processing that can be used in software testing and provides a glimpse of how Natural Language Processing and software testing will evolve in the future. The focus is mainly on test case prioritization, predicting manual test case failures, and generating test cases from requirements using NLP. The research indicates that NLP will improve software testing outcomes, and that NLP-based testing will usher in a new era for the work of software testers in the not-too-distant future.
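As a rough illustration of the first of these ideas, the sketch below ranks test cases by TF-IDF similarity between their natural-language descriptions and the text of a changed requirement. The data, names, and the choice of TF-IDF are illustrative assumptions, not taken from the paper.

# Minimal sketch of NLP-based test case prioritization, assuming a
# TF-IDF bag-of-words model; all test cases and the requirement below
# are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

test_cases = [
    "Verify the account is locked after three failed password attempts",
    "Check invoice PDF export formatting",
    "Ensure a password reset email is sent within one minute",
]
changed_requirement = "Users must be locked out after repeated failed password attempts"

# Embed the test descriptions and the requirement in one TF-IDF space.
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(test_cases + [changed_requirement])

# Similarity of each test case to the changed requirement.
scores = cosine_similarity(matrix[:-1], matrix[-1]).ravel()

# Run the most relevant tests first.
for score, tc in sorted(zip(scores, test_cases), reverse=True):
    print(f"{score:.2f}  {tc}")

In practice, a prioritizer of this kind would typically combine such text similarity with code coverage and failure-history signals rather than rely on it alone.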


2021
Author(s): Tajmilur Rahman, Joshua Nwokeji, Richard Matovu, Stephen Frezza, Harika Sugnanam, ...

Author(s): Pingchuan Ma, Shuai Wang, Jin Liu

Natural language processing (NLP) models are increasingly used in sensitive application domains, including credit scoring, insurance, and loan assessment. Hence, it is critical to know that the decisions made by NLP models are free of unfair bias toward particular subpopulation groups. In this paper, we propose a novel framework employing metamorphic testing, a well-established software testing scheme, to test NLP models and find discriminatory inputs that provoke fairness violations. Furthermore, inspired by recent breakthroughs in the certified robustness of machine learning, we formulate NLP model fairness in a practical setting as (ε, k)-fairness and accordingly smooth the model predictions to mitigate fairness violations. We demonstrate our technique on popular (commercial) NLP models and successfully flag thousands of discriminatory inputs that can cause fairness violations. We further enhance the evaluated models by adding a certified fairness guarantee at a modest cost.
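As a minimal sketch of the metamorphic idea (the paper's actual framework, models, and transformations are not reproduced here), the toy example below swaps protected-attribute terms in an input and flags a fairness violation when a hypothetical scoring model's output shifts by more than a tolerance.

# Illustrative metamorphic fairness test: a fairness-preserving
# transformation of the input should leave the model's decision
# unchanged; a large shift flags a discriminatory input.
# `score_applicant` is a hypothetical stand-in, not the paper's model.

PROTECTED_SWAPS = {"he": "she", "his": "her", "him": "her"}

def mutate(text: str) -> str:
    """Metamorphic transformation: swap protected-attribute terms."""
    return " ".join(PROTECTED_SWAPS.get(tok, tok) for tok in text.lower().split())

def score_applicant(text: str) -> float:
    """Stand-in for a credit-scoring NLP model, deliberately biased
    so the test below has something to flag."""
    return 0.8 if "he" in text.lower().split() else 0.5

def violates_fairness(text: str, tol: float = 0.05) -> bool:
    """Metamorphic relation: |f(x) - f(mutate(x))| should stay within tol."""
    return abs(score_applicant(text) - score_applicant(mutate(text))) > tol

print(violates_fairness("he repaid his loan on time"))  # True: bias caught

Smoothing the prediction over many such perturbed variants, roughly the spirit of the paper's (ε, k)-fairness mitigation, would then bound how much any small set of protected-attribute swaps can move the score.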


2021, Vol 28 (3), pp. i-ii

This third issue (September 2020) of CIT. Journal of Computing and Information Technology comprises four papers from the regular section, tackling topics from the areas of software testing and debugging, machine learning, natural language processing, and business process management.

