Fuzzy Prediction of Insolvent Customers in Mobile Telecommunication

Author(s):  
Walid Moudani ◽  
Grace Zaarour ◽  
Félix Mora-Camino

This paper presents a predictive model that anticipates customer insolvency for large mobile telecommunication companies in order to minimize their losses. A further goal of the highest interest to such companies is to maintain overall customer satisfaction, which has important consequences for the quality and the return of the operations. In this paper, a new mathematical formulation taking into account a set of business rules and customer satisfaction is proposed. Customer insolvency is treated as a classification problem, since the main purpose is to assign each customer to one of two classes: potentially insolvent or potentially solvent. A model delivering precise business predictions is therefore built with knowledge discovery and Data Mining techniques applied to enormous, heterogeneous, and noisy data. Moreover, a fuzzy approach to evaluate and analyze customer behavior is developed, segmenting customers into groups that provide a better understanding of their profiles. These groups, together with many other significant variables, feed a classification algorithm based on the Rough Set technique. A real case study is then considered, followed by analysis and comparison of the results in order to select the classification model that maximizes accuracy for insolvent customers while minimizing the misclassification rate for solvent customers.
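As a rough illustration of the segmentation-then-classification idea (not the authors' actual model), the Python sketch below grades a hypothetical payment-delay feature with triangular fuzzy membership functions and feeds the resulting group memberships, together with other toy variables, into a decision tree standing in for the Rough Set classifier; all feature names, cut points, and data are invented.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def triangular(x, a, b, c):
    """Triangular fuzzy membership function on [a, c] with peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9),
                                 (c - x) / (c - b + 1e-9)), 0.0)

def fuzzy_payment_groups(delay_days):
    """Grade average payment delay into fuzzy behavior groups
    (hypothetical cut points, for illustration only)."""
    memberships = np.column_stack([
        triangular(delay_days, -1, 0, 15),    # "prompt payer"
        triangular(delay_days, 5, 30, 60),    # "occasional late payer"
        triangular(delay_days, 40, 90, 180),  # "chronic late payer"
    ])
    return memberships, memberships.argmax(axis=1)  # degrees + crisp group

# Toy data: average payment delay (days), monthly bill, tenure (months)
rng = np.random.default_rng(0)
delay = rng.gamma(shape=2.0, scale=15.0, size=500)
bill = rng.normal(60, 20, size=500)
tenure = rng.integers(1, 120, size=500)
insolvent = (delay > 45).astype(int)          # toy label, not real data

degrees, group = fuzzy_payment_groups(delay)
X = np.column_stack([degrees, group, bill, tenure])

# A decision tree stands in for the paper's Rough Set classifier
clf = DecisionTreeClassifier(max_depth=4).fit(X, insolvent)
print("training accuracy:", clf.score(X, insolvent))
```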


Entropy ◽  
2021 ◽  
Vol 23 (6) ◽  
pp. 727
Author(s):  
Eric J. Ma ◽  
Arkadij Kummer

We present a case study applying hierarchical Bayesian estimation on high-throughput protein melting-point data measured across the tree of life. We show that the model is able to impute reasonable melting temperatures even in the face of unreasonably noisy data. Additionally, we demonstrate how to use the variance in melting-temperature posterior-distribution estimates to enable principled decision-making in common high-throughput measurement tasks, and contrast the decision-making workflow against simple maximum-likelihood curve-fitting. We conclude with a discussion of the relative merits of each workflow.
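A minimal sketch of this kind of hierarchical model, written with PyMC on invented toy data: per-protein melting temperatures are partially pooled toward a shared population mean, and the spread of each posterior can then drive triage decisions. The curve shape, priors, and variable names are assumptions, not the authors' code.

```python
import numpy as np
import pymc as pm

# Toy data: per-protein fraction-folded measurements across temperatures
rng = np.random.default_rng(1)
n_proteins, temps = 5, np.linspace(30, 90, 13)
true_tm = rng.normal(55, 8, n_proteins)
protein_idx = np.repeat(np.arange(n_proteins), len(temps))
T = np.tile(temps, n_proteins)
frac = 1 / (1 + np.exp((T - true_tm[protein_idx]) / 2.5))
frac += rng.normal(0, 0.1, frac.size)               # noisy measurements

with pm.Model() as model:
    # Population-level hyperpriors shared across all proteins
    mu_tm = pm.Normal("mu_tm", mu=55.0, sigma=15.0)
    sd_tm = pm.HalfNormal("sd_tm", sigma=10.0)
    # Per-protein melting temperatures, partially pooled toward mu_tm
    tm = pm.Normal("tm", mu=mu_tm, sigma=sd_tm, shape=n_proteins)
    slope = pm.HalfNormal("slope", sigma=5.0)
    sigma = pm.HalfNormal("sigma", sigma=0.2)
    # Two-state sigmoid melting curve
    mean = pm.math.sigmoid((tm[protein_idx] - T) / slope)
    pm.Normal("obs", mu=mean, sigma=sigma, observed=frac)
    idata = pm.sample(1000, tune=1000, chains=2, target_accept=0.9)

# Posterior spread per protein supports principled follow-up decisions
print(idata.posterior["tm"].std(dim=("chain", "draw")).values)
```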


2021 ◽  
Vol 11 (11) ◽  
pp. 5123
Author(s):  
Maiada M. Mahmoud ◽  
Nahla A. Belal ◽  
Aliaa Youssif

Transcription factors (TFs) are proteins that control the transcription of a gene from DNA to messenger RNA (mRNA). TFs bind to a specific DNA sequence called a binding site. Transcription factor binding sites have not yet been completely identified, and finding them is a challenge that can be approached computationally as a classification problem in machine learning. In this paper, the prediction of transcription factor binding sites of SP1 on human chromosome 1 is presented using different classification techniques, and a model based on voting is proposed. The highest Area Under the Curve (AUC) achieved is 0.97 using K-Nearest Neighbors (KNN) and 0.95 using the proposed voting technique; however, the voting technique handles noisy data more effectively. This study demonstrates the applicability of voting for the prediction of binding sites and the strong performance of KNN on this type of data.
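A hedged sketch of a soft-voting ensemble in scikit-learn with KNN among the base learners; the data here are random stand-ins for encoded DNA windows, and the feature encoding, estimators, and hyperparameters are illustrative assumptions rather than the study's setup.

```python
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Toy stand-in for encoded DNA windows (e.g. one-hot k-mer features);
# the real study uses SP1 binding-site data from human chromosome 1.
rng = np.random.default_rng(2)
X = rng.integers(0, 2, size=(1000, 40)).astype(float)
y = rng.integers(0, 2, size=1000)

voter = VotingClassifier(
    estimators=[
        ("knn", KNeighborsClassifier(n_neighbors=5)),
        ("tree", DecisionTreeClassifier(max_depth=6)),
        ("logreg", LogisticRegression(max_iter=1000)),
    ],
    voting="soft",  # average predicted probabilities, as needed for AUC
)
print(cross_val_score(voter, X, y, cv=5, scoring="roc_auc").mean())
```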


Electronics ◽  
2021 ◽  
Vol 10 (14) ◽  
pp. 1714
Author(s):  
Mohamed Marey ◽  
Hala Mostafa

In this work, we propose a general framework for designing a signal classification algorithm over time-selective channels for wireless communications applications. We derive an upper bound on the maximum number of observation samples over which the channel response is essentially invariant. The proposed framework divides the received signal into blocks, each shorter than this bound, which are then fed to a number of classifiers in parallel. A final decision is made through a well-designed combiner and detector. As a case study, we apply the proposed framework to a space-time block-code classification problem by developing two combiners and detectors. Monte Carlo simulations show that the proposed framework achieves excellent classification performance over time-selective channels compared with conventional algorithms.
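The block-splitting-and-combining step might look roughly like the following sketch, where the block length is kept below the derived invariance bound and per-block scores are merged by a simple additive combiner; the scoring function, class templates, and signal model are invented for illustration and are not the paper's detectors.

```python
import numpy as np

def blockwise_classify(received, block_len, score_block, n_classes):
    """Split the received signal into blocks no longer than block_len
    (the bound below which the channel is roughly invariant), score each
    block against every candidate class, and combine the per-block
    log-likelihoods with a simple additive combiner."""
    n_blocks = len(received) // block_len
    blocks = received[: n_blocks * block_len].reshape(n_blocks, block_len)
    scores = np.zeros(n_classes)
    for blk in blocks:
        scores += score_block(blk)            # per-block log-likelihoods
    return int(np.argmax(scores)), scores

# Toy per-block scorer: compares block power against class templates
def toy_score(block, templates=(1.0, 2.0, 4.0)):
    p = np.mean(np.abs(block) ** 2)
    return -np.array([(p - t) ** 2 for t in templates])

rng = np.random.default_rng(3)
noise = (rng.standard_normal(4096) + 1j * rng.standard_normal(4096)) / np.sqrt(2)
rx = np.sqrt(2.0) * noise                      # received signal of power ~2
label, combined = blockwise_classify(rx, block_len=256,
                                     score_block=toy_score, n_classes=3)
print(label, combined)
```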


Author(s):  
Ritam Guha ◽  
Manosij Ghosh ◽  
Pawan Kumar Singh ◽  
Ram Sarkar ◽  
Mita Nasipuri

In any multi-script environment, handwritten script classification is an unavoidable pre-requisite before the document images are fed to their respective Optical Character Recognition (OCR) engines. Over the years, this complex pattern classification problem has been addressed by researchers proposing various feature vectors, mostly of large dimension, thereby increasing the computational complexity of the whole classification model. Feature Selection (FS) can serve as an intermediate step that reduces the size of the feature vectors by restricting them to the essential and relevant features. In the present work, we address this issue by introducing a new FS algorithm, called Hybrid Swarm and Gravitation-based FS (HSGFS). This algorithm is applied to three feature vectors recently introduced in the literature: the Distance-Hough Transform (DHT), the Histogram of Oriented Gradients (HOG), and the Modified log-Gabor (MLG) filter transform. Three state-of-the-art classifiers, namely Multi-Layer Perceptron (MLP), K-Nearest Neighbour (KNN), and Support Vector Machine (SVM), are used to evaluate the optimal subset of features generated by the proposed FS model. Handwritten datasets at the block, text-line, and word level, covering 12 officially recognized Indic scripts, are prepared for experimentation. An average improvement of 2-5% in classification accuracy is achieved while utilizing only about 75-80% of the original feature vectors on all three datasets. The proposed method also performs better than some popularly used FS models. The code used for implementing HSGFS can be found at the following GitHub link: https://github.com/Ritam-Guha/HSGFS.
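HSGFS itself is not reproduced here, but the wrapper-style fitness evaluation that such an FS algorithm relies on can be sketched as below: a candidate feature mask is scored by KNN cross-validation accuracy plus a small reward for dropping features. The random-search loop, weighting, and dataset are placeholders for the swarm/gravitation update rules and the DHT/HOG/MLG features.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def mask_fitness(mask, X, y, alpha=0.99):
    """Fitness of a binary feature mask: weighted mix of KNN accuracy and
    the fraction of features dropped (alpha is a hypothetical weight)."""
    if not mask.any():
        return 0.0
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=3),
                          X[:, mask], y, cv=3).mean()
    return alpha * acc + (1 - alpha) * (1 - mask.mean())

X, y = load_digits(return_X_y=True)   # stand-in for DHT/HOG/MLG features
rng = np.random.default_rng(4)

# Random-search baseline over masks; HSGFS would instead update candidate
# masks with swarm- and gravitation-inspired rules rather than sampling blindly.
best_mask, best_fit = None, -1.0
for _ in range(30):
    mask = rng.random(X.shape[1]) < 0.8        # keep ~80% of features
    fit = mask_fitness(mask, X, y)
    if fit > best_fit:
        best_mask, best_fit = mask, fit
print(best_mask.sum(), "features kept, fitness", round(best_fit, 4))
```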


Author(s):  
P A Bracewell ◽  
U R Klement

Piping design for ‘revamp’ projects in the process industry requires the retrieval of large amounts of ‘as-built’ data from existing process plant installations. Positional data with a high degree of accuracy are required. Photogrammetry, the science of measurement from photographs, was identified in Imperial Chemical Industries plc (ICI) as a suitable tool for information retrieval. The mathematical formulation enabling the definition of three-dimensional positions from photographic information is described. The process of using ICI's photogrammetric system for the definition of complete objects such as structures and pipes is illustrated, and the need for specialized photogrammetric software for design purposes is explained. A case study of how the photogrammetric system has been applied is presented, and graphical outputs from this exercise are shown. It is concluded that this particular photogrammetric system has proved to be a cost-effective and accurate tool for the retrieval of ‘as-built’ information.
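The underlying geometric principle, recovering a three-dimensional point from its positions in two photographs, can be sketched with a linear triangulation of the collinearity equations; the camera matrices and point below are hypothetical and unrelated to ICI's system.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3-D point from its image coordinates in two photographs
    using the linear (DLT) form of the collinearity equations."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Hypothetical calibrated cameras one metre apart along the x axis
K = np.array([[1000.0, 0, 320], [0, 1000.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

point = np.array([0.3, 0.1, 5.0, 1.0])                 # ground-truth point
x1 = (P1 @ point)[:2] / (P1 @ point)[2]                # projection in photo 1
x2 = (P2 @ point)[:2] / (P2 @ point)[2]                # projection in photo 2
print(triangulate(P1, P2, x1, x2))                     # ~ [0.3, 0.1, 5.0]
```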


Author(s):  
Steven Tebby ◽  
Ebrahim Esmailzadeh ◽  
Ahmad Barari

The torsion stiffness of an automotive chassis can be determined using an analytical approach based purely on geometry, using an experimental method, or alternatively by employing a Finite Element Analysis (FEA) process. These three methods are suitable at different design stages and, combined, offer a practical means of determining the torsion stiffness of a chassis. This paper describes and compares two distinct FEA processes to determine the torsion stiffness of an automotive chassis during the detailed design stage. The first process iteratively applies forces to the model and records displacements, while the second gradually applies vertical displacements in place of forces to determine the torsional stiffness threshold. Each method is explained and supported with a case study to provide a basis for comparing the results.
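Either FEA process ultimately reduces the applied loads and measured deflections to a single stiffness figure; a minimal sketch of that post-processing step, with invented numbers, is shown below.

```python
import math

def torsional_stiffness(force_n, track_m, dz_left_m, dz_right_m):
    """Torsional stiffness (N·m/deg) from a force couple applied across the
    track and the resulting vertical displacements at the load points; the
    same reduction applies whether forces or displacements are imposed."""
    torque = force_n * track_m                                   # N·m
    twist_rad = math.atan((abs(dz_left_m) + abs(dz_right_m)) / track_m)
    return torque / math.degrees(twist_rad)

# Hypothetical load case purely for illustration
print(round(torsional_stiffness(500.0, 1.5, 0.004, 0.0038), 1), "N·m/deg")
```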

