Soft Computing Techniques
Recently Published Documents


TOTAL DOCUMENTS: 1138 (FIVE YEARS: 594)

H-INDEX: 37 (FIVE YEARS: 20)

2021 ◽  
pp. 350-356
Author(s):  
Manju Duhan ◽  
Pradeep Kumar Bhatia

Maintainability is a crucial software quality factor, and it can be measured with the help of software metrics. In this paper, the authors derive a new approach for measuring software maintainability based on hybrid metrics, which combine the advantages of static and dynamic metrics in an object-oriented environment: dynamic metrics capture run-time features of object-oriented languages, such as run-time polymorphism and dynamic binding, which static metrics do not cover. To achieve this, the authors propose a model based on static and hybrid metrics that measures maintainability using soft computing techniques. The proposed neuro-fuzzy model trained well and produced accurate predictions, with an MAE of 0.003 and an RMSE of 0.009 on hybrid metrics. The model was further validated on two test datasets, where it again performed well with hybrid metrics.
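As a rough illustration of the evaluation reported above, here is a minimal sketch in which scikit-learn's MLPRegressor stands in for the paper's neuro-fuzzy model (which is not reproduced); the hybrid-metric features, synthetic data, and target are hypothetical, not the authors' dataset.

```python
# Hedged sketch: MLPRegressor as a stand-in for the neuro-fuzzy model,
# evaluated with the same MAE/RMSE measures used in the abstract.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(42)

# Hypothetical hybrid metrics: static (e.g., WMC, DIT) plus dynamic
# (e.g., run-time coupling counts), one row per class under study.
X = rng.random((200, 6))
# Hypothetical maintainability index in roughly [0, 1], with small noise.
y = X @ rng.random(6) / 6 + rng.normal(0, 0.01, 200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
model.fit(X_train, y_train)

pred = model.predict(X_test)
print("MAE: ", mean_absolute_error(y_test, pred))
print("RMSE:", mean_squared_error(y_test, pred) ** 0.5)
```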


2021 ◽  
Vol 11 (18) ◽  
pp. 8290
Author(s):  
Muhammad Adnan Khan ◽  
Jürgen Stamm ◽  
Sajjad Haider

A key goal of sediment management is the quantification of suspended sediment load (SSL) in rivers. This research compared different means of estimating suspended sediment in rivers: the sediment rating curve (SRC) and soft computing techniques, namely local linear regression (LLR), artificial neural networks (ANN) and the wavelet-cum-ANN (WANN) method. These techniques were applied to predict daily SSL at the Pirna and Magdeburg stations on the Elbe River in Germany. Comparing the best models of each type shows that the soft computing techniques (LLR, ANN and WANN) predicted the SSL better than the SRC method, because they reconstruct the data series with non-linear techniques. The WANN models were the overall best performers: in the testing phase they achieved a mean R² of 0.92 and a PBIAS of −0.59%, and they captured the suspended sediment peaks with greater accuracy. They were more successful because they capture the dynamic features of the non-linear, time-variant suspended sediment load, whereas the other methods work on the raw data alone. WANN models could therefore be an efficient technique for simulating SSL time series, since they extract key features embedded in the SSL signal.
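The SRC baseline mentioned above is a power-law relation between discharge and SSL, conventionally fitted in log-log space. Below is a minimal sketch of that baseline together with the two reported scores (R² and PBIAS); the discharge and sediment series are invented placeholders, and the LLR/ANN/WANN models themselves are not reproduced here.

```python
# Hedged sketch: sediment rating curve SSL = a * Q^b on synthetic data,
# scored with R^2 and PBIAS as in the comparison above.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical daily discharge Q (m^3/s) and observed SSL (t/day),
# roughly following a power law with multiplicative noise.
Q = rng.uniform(50, 500, 365)
ssl_obs = 0.05 * Q ** 1.6 * rng.lognormal(0, 0.2, 365)

# SRC: fit SSL = a * Q^b as a straight line in log-log space.
b, log_a = np.polyfit(np.log(Q), np.log(ssl_obs), 1)
ssl_sim = np.exp(log_a) * Q ** b

# Goodness-of-fit measures used above.
r2 = np.corrcoef(ssl_obs, ssl_sim)[0, 1] ** 2
pbias = 100 * (ssl_obs - ssl_sim).sum() / ssl_obs.sum()
print(f"R^2 = {r2:.2f}, PBIAS = {pbias:.2f}%")
```

In the WANN approach, the input series would first be decomposed into wavelet sub-signals (e.g., via a discrete wavelet transform) that are then fed to the ANN, which is what lets it capture the time-variant features noted in the abstract.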


Author(s):  
Aishwarya Priyadarshini ◽  
Sanhita Mishra ◽  
Debani Prasad Mishra ◽  
Surender Reddy Salkuti ◽  
Ramakanta Mohanty

Nowadays, fraudulent activities associated with financial transactions, predominantly credit card transactions, are increasing at an alarming rate and are among the most prevalent problems facing the finance industry, corporations, and government organizations. It is therefore essential to deploy a fraud detection system, built on intelligent fraud detection techniques, that protects consumers and clients alike. Numerous fraud detection procedures, techniques, and systems in the literature employ a myriad of intelligent algorithms and frameworks to detect fraudulent and deceitful transactions. This paper first analyses the data through exploratory data analysis and then proposes classification models, implemented with intelligent soft computing techniques, that predictively classify fraudulent credit card transactions. Classification algorithms such as k-nearest neighbours (K-NN), decision tree, random forest (RF), and logistic regression (LR) are implemented and their performances critically evaluated. The proposed model is computationally efficient and lightweight, and it can detect fraudulent credit card transactions with better accuracy.
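As a hedged sketch of the classifier comparison described above, the snippet below trains the four named algorithms on a synthetic, heavily imbalanced dataset standing in for real credit card data; the class ratio, features, and hyperparameters are illustrative assumptions, not the authors' setup.

```python
# Hedged sketch: comparing K-NN, decision tree, random forest and
# logistic regression on a synthetic imbalanced fraud-like dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

# Synthetic stand-in: roughly 1% of transactions are fraudulent.
X, y = make_classification(n_samples=10_000, n_features=20,
                           weights=[0.99, 0.01], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "K-NN": make_pipeline(StandardScaler(), KNeighborsClassifier()),
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "Random forest": RandomForestClassifier(random_state=0),
    "Logistic regression": make_pipeline(StandardScaler(),
                                         LogisticRegression(max_iter=1000)),
}
for name, clf in models.items():
    clf.fit(X_tr, y_tr)
    print(name)
    print(classification_report(y_te, clf.predict(X_te), digits=3))
```

With imbalance this severe, per-class precision and recall (as printed by classification_report) are more informative than raw accuracy, which is why the report is shown rather than a single score.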


2021 ◽  
Vol 11 (17) ◽  
pp. 8007
Author(s):  
Marina Alonso-Parra ◽  
Cristina Puente ◽  
Ana Laguna ◽  
Rafael Palacios

This research analyzes textual descriptions of harassment situations collected anonymously by the Hollaback! project. Hollaback! is an international movement created to end harassment in all of its forms; its goal is to collect stories of harassment around the world, through the web and a free app, to elevate victims' individual voices and work toward a societal solution. Hollaback! aims to analyze the impact of bystanders during harassment incidents in order to launch a public awareness campaign that equips everyday people with tools to undo harassment. The analysis presented in this paper is thus a first step towards Hollaback!'s purpose: automatically detecting witness intervention from the victim's own report. In the first part, natural language processing techniques were used to analyze the victims' free-text descriptions, using the whole dataset across all countries and locations. In the second part of the study, classification models based on machine learning and soft computing techniques were developed to separate descriptions that report bystander presence from those that do not. For this machine learning part, the city of Madrid was selected as an example, in order to establish a criterion for characterizing witness behavior.
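The second, machine learning part could look roughly like the following minimal sketch: a TF-IDF text representation feeding a logistic regression classifier that separates reports with bystander presence from those without. The example reports and labels are invented for illustration, not Hollaback! data, and the paper's actual models may differ.

```python
# Hedged sketch: text classification of harassment reports into
# bystander-present (1) vs. bystander-absent (0), on invented examples.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

reports = [
    "A man followed me but a passerby stepped in and walked me home",
    "He shouted at me on the platform and nobody did anything",
    "Someone on the bus told him to stop and stayed with me",
    "I was alone in the street when it happened",
]
bystander = [1, 0, 1, 0]  # hypothetical labels

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression())
clf.fit(reports, bystander)

print(clf.predict(["A stranger intervened and asked if I was okay"]))
```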

