Advances in Data Mining and Database Management - Analyzing Data Through Probabilistic Modeling in Statistics
Latest Publications


TOTAL DOCUMENTS: 12 (FIVE YEARS: 12)

H-INDEX: 0 (FIVE YEARS: 0)

Published By: IGI Global

ISBN: 9781799847069, 9781799847076

Author(s):  
Shaival Hemant Nagarsheth ◽  
Shambhu Nath Sharma

The white noise process, the Ornstein-Uhlenbeck (OU) process, and the coloured noise process are salient noise processes for modelling the effect of random perturbations. In this chapter, the statistical properties and the master equations for the Brownian noise process, the coloured noise process, and the OU process are summarized. The results associated with the white noise process are derived as special cases of the Brownian and OU noise processes. This chapter also formalizes stochastic differential rules for Brownian motion- and OU process-driven vector stochastic differential systems in detail. Moreover, the master equations, especially for the coloured noise-driven and OU noise process-driven stochastic differential systems, are recast in operator form involving the drift operator and a modified diffusion operator that adds a correction term to the standard diffusion operator. The results summarized in this chapter will be useful for modelling random walks in stochastic systems.
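
To make the setting concrete, the following is a minimal sketch, not taken from the chapter, of simulating an Ornstein-Uhlenbeck process with the Euler-Maruyama scheme; the drift rate, long-run mean, noise strength, and step size are illustrative assumptions.

```python
import numpy as np

# Minimal Euler-Maruyama simulation of an Ornstein-Uhlenbeck process
#   dX_t = theta * (mu - X_t) dt + sigma dW_t
# Parameter values below are illustrative only.
theta, mu, sigma = 0.7, 0.0, 0.3   # mean-reversion rate, long-run mean, noise strength
dt, n_steps, x0 = 1e-2, 5000, 1.0

rng = np.random.default_rng(0)
x = np.empty(n_steps + 1)
x[0] = x0
for k in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt))          # Brownian increment
    x[k + 1] = x[k] + theta * (mu - x[k]) * dt + sigma * dW

# Stationary statistics of the OU process: mean mu, variance sigma^2 / (2*theta)
print("sample mean:", x.mean(), " sample var:", x.var())
print("theoretical stationary var:", sigma**2 / (2 * theta))
```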


Author(s):  
Kuppulakshmi V. ◽  
Sugapriya C. ◽  
Jeganathan Kathirvel ◽  
Nagarajan Deivanayagampillai

This research compares inventory management planning under Verhult's demand and exponentially increasing demand. The working process differs in the two cases: coupling the parameters, the analysis points out the constraints on the optimal total cost in each case. The rate of deterioration and the percentage of reworkable items are treated as decision variables under both (1) exponentially increasing demand and (2) Verhult's demand. The comparison shows that the Verhult's demand pattern yields the more profitable production process in terms of total cost. A substantial numerical example is considered to investigate the effect of changes in total cost under both demand functions, and a sensitivity analysis is developed to study the effect of changes in total cost.
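
The chapter's exact cost model is not reproduced here; the sketch below only illustrates the comparison it describes, using a simplified stand-in total cost (setup, holding, deterioration, and rework components) evaluated under an exponential demand curve and a logistic (Verhulst-type) demand curve. All parameter values are assumptions.

```python
import numpy as np

# Hedged sketch: compare a simplified total cost under (1) exponentially increasing
# demand and (2) logistic (Verhulst-type) demand over one planning horizon. The cost
# structure below (setup + holding + deterioration + rework) is a generic stand-in,
# NOT the chapter's exact model; every parameter value is illustrative.

T = 1.0                              # planning horizon
t = np.linspace(0.0, T, 1001)
dt = t[1] - t[0]

def demand_exponential(t, d0=100.0, r=1.5):
    return d0 * np.exp(r * t)

def demand_verhulst(t, K=400.0, A=3.0, r=1.5):
    return K / (1.0 + A * np.exp(-r * t))

def total_cost(demand, setup=50.0, h=0.4, theta=0.05, rework_frac=0.1, c_rework=2.0):
    """Simplified total cost: setup + holding + deterioration + rework of a
    fixed fraction of the items produced to meet demand."""
    d = demand(t)
    total_demand = d.sum() * dt                      # units over the horizon
    served_so_far = np.cumsum(d) * dt                # cumulative units at each instant
    holding = h * served_so_far.sum() * dt           # time-integrated holding proxy
    deterioration = theta * total_demand
    rework = c_rework * rework_frac * total_demand
    return setup + holding + deterioration + rework

print("exponential demand cost:", round(total_cost(demand_exponential), 2))
print("Verhulst demand cost:   ", round(total_cost(demand_verhulst), 2))
```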


Author(s):  
Ahan Chatterjee ◽  
Swagatam Roy

Cancer, the most talked about disease of our era, has taken many lives, most of them due to late prognosis. Statistical data show that around 10 million people lose their lives to cancer globally every year. With every passing year, malignant cancer cells evolve at a rapid pace; they mutate with time and become much more dangerous than before. In this chapter, the authors propose a DCGAN-based neural net architecture that generates synthetic blood cancer cell images from the fed data. The generated images do not exist yet but could arise in the near future through the constant mutation of the cells. Afterwards, each synthetic image is passed through a CNN architecture that predicts its output class. The novelty of this chapter is that it generates cancer cell images that could appear after mutation and predicts, through the proposed CNN architecture, whether each image is malignant or benign.
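
As a rough illustration of the proposed pipeline, the sketch below pairs a DCGAN-style generator with a small CNN classifier in PyTorch. The layer sizes, the 64x64 image resolution, and the 100-dimensional noise vector are assumptions for the example and do not reproduce the chapter's architecture.

```python
import torch
from torch import nn

# Hedged sketch of the two components described above, written in PyTorch as an
# illustrative framework; layers and hyperparameters are assumptions only.

class Generator(nn.Module):
    """DCGAN-style generator: 100-dim noise -> synthetic 3x64x64 image."""
    def __init__(self, latent_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),  # 4x4
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),          # 8x8
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),            # 16x16
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(True),             # 32x32
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),                                      # 64x64
        )
    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))

class Classifier(nn.Module):
    """Small CNN that scores an image as malignant (1) vs. benign (0)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(True), nn.MaxPool2d(2),   # 32x32
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(True), nn.MaxPool2d(2),  # 16x16
            nn.Flatten(), nn.Linear(64 * 16 * 16, 64), nn.ReLU(True),
            nn.Linear(64, 1), nn.Sigmoid(),
        )
    def forward(self, x):
        return self.net(x)

generator, classifier = Generator(), Classifier()
noise = torch.randn(4, 100)
synthetic = generator(noise)          # 4 synthetic cell images, shape (4, 3, 64, 64)
scores = classifier(synthetic)        # probability of the malignant class, shape (4, 1)
print(synthetic.shape, scores.shape)
```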


Author(s):  
Ahan Chatterjee ◽  
Aniruddha Mandal ◽  
Swagatam Roy ◽  
Shruti Sinha ◽  
Aditi Priya ◽  
...  

In this chapter, the authors take a walkthrough of BCI technology. At first, they take a closer look at the kinds of waves generated by our brain (i.e., EEG and ECoG waves). In the next section, they discuss patients affected by CLIS and ALS-CLIS and how they can be treated or benefit from BCI technology. Visually evoked potential-based BCI technology is also thoroughly discussed in this chapter. The application of machine learning and deep learning in this field is discussed as well, along with the need for feature engineering in this paradigm. In the final section, they present a thorough literature survey of research related to this field, together with proposed methodologies and results.
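
The following is a minimal sketch of the kind of feature engineering referred to above: computing band-power features from a single EEG channel with a Welch periodogram. The sampling rate, band edges, and the synthetic test signal are assumptions for illustration.

```python
import numpy as np
from scipy.signal import welch

# Hedged sketch: band-power features from one EEG channel via a Welch power spectrum.
# Sampling rate, band edges, and the mock signal are illustrative assumptions.

fs = 256                                     # sampling rate in Hz (assumed)
t = np.arange(0, 4, 1 / fs)                  # a 4-second window
rng = np.random.default_rng(1)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)  # mock alpha-dominated signal

BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(signal, fs, bands=BANDS):
    """Average spectral power in each EEG band, from a Welch power spectrum."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    df = freqs[1] - freqs[0]
    return {name: float(psd[(freqs >= lo) & (freqs < hi)].sum() * df)
            for name, (lo, hi) in bands.items()}

features = band_powers(eeg, fs)
print(features)   # a feature vector that a downstream classifier could consume
```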


Author(s):  
Dariusz Jacek Jakóbczak

The proposed method, called probabilistic nodes combination (PNC), is a method of 2D curve modeling and handwriting identification that uses a set of key points. Nodes are treated as the characteristic points of a signature or handwriting for modeling and writer recognition. Identification of handwritten letters or symbols requires modeling, and the model of each individual symbol or character is built by choosing a probability distribution function and a nodes combination. PNC modeling via nodes combination and the parameter γ as a probability distribution function enables curve parameterization and interpolation for each specific letter or symbol. A two-dimensional curve is modeled and interpolated via nodes combination and various functions used as continuous probability distribution functions: polynomial, sine, cosine, tangent, cotangent, logarithm, exponent, arc sin, arc cos, arc tan, arc cot, or power function.
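
A much-simplified sketch of the idea is given below: between two successive nodes the x-coordinate is blended with a parameter alpha in [0, 1] and the y-coordinate with gamma = F(alpha), for a chosen distribution-like function F on [0, 1]. The nodes-combination term of the full PNC method is omitted, and the nodes, blending direction, and choices of F are illustrative assumptions.

```python
import numpy as np

# Hedged, simplified sketch of PNC-style interpolation between successive nodes:
# x is blended with alpha, y with gamma = F(alpha). The nodes-combination term of
# the full method is omitted; nodes and F choices are illustrative assumptions.

def pnc_segment(p1, p2, F, n=20):
    """Interpolate points between nodes p1 and p2 using gamma = F(alpha)."""
    pts = []
    for a in np.linspace(0.0, 1.0, n):
        gamma = F(a)
        x = (1 - a) * p1[0] + a * p2[0]          # runs from p1 (a=0) to p2 (a=1)
        y = (1 - gamma) * p1[1] + gamma * p2[1]
        pts.append((x, y))
    return pts

# Two example distribution-function choices from the family the chapter lists.
F_power = lambda a: a ** 2                       # power function
F_sine = lambda a: np.sin(a * np.pi / 2)         # sine

nodes = [(0.0, 0.0), (1.0, 2.0), (2.0, 1.0)]     # illustrative key points of a symbol
curve = []
for p1, p2 in zip(nodes[:-1], nodes[1:]):
    curve.extend(pnc_segment(p1, p2, F_sine))
print(len(curve), "interpolated curve points")
```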


Author(s):  
Dariusz Jacek Jakóbczak

Object recognition is one of the topics of artificial intelligence, computer vision, image processing, and machine vision. A classical problem in these areas of computer science is that of identifying an object via its characteristic features. An important feature of an object is its contour. Accurate reconstruction of contour points makes it possible to compare an unknown object with models of specified objects. The key information about the object is the set of contour points, which are treated as interpolation nodes. Classical interpolations (Lagrange or Newton polynomials) are useless for precise reconstruction of the contour. This chapter deals with a proposed method of contour reconstruction via curve interpolation. The first stage consists of computing the contour points of the object to be recognized; then models of known objects, given by sets of contour points, are compared with the coordinates of the interpolated points of the unknown object. Contour point reconstruction and curve interpolation are made possible by a new method based on Hurwitz-Radon matrices.
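
The sketch below covers only the generic first and last steps of such a pipeline: extracting contour points from a binary image and comparing the unknown object's point set with stored models via a symmetric nearest-neighbour distance. The Hurwitz-Radon (MHR) interpolation step itself is not reproduced, and the toy image and models are assumptions.

```python
import numpy as np

# Hedged sketch of contour-point extraction plus model comparison. The MHR
# interpolation stage of the chapter is NOT reproduced; data are illustrative.

def contour_points(binary_img):
    """Pixels that are foreground but have at least one background 4-neighbour."""
    pts = []
    h, w = binary_img.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            if binary_img[i, j] and not (binary_img[i - 1, j] and binary_img[i + 1, j]
                                         and binary_img[i, j - 1] and binary_img[i, j + 1]):
                pts.append((i, j))
    return np.array(pts, dtype=float)

def set_distance(a, b):
    """Average symmetric nearest-neighbour distance between two point sets."""
    d_ab = np.mean([np.min(np.linalg.norm(b - p, axis=1)) for p in a])
    d_ba = np.mean([np.min(np.linalg.norm(a - p, axis=1)) for p in b])
    return 0.5 * (d_ab + d_ba)

# Toy example: a filled square as the "unknown" object and two stored models.
img = np.zeros((12, 12), dtype=bool)
img[3:9, 3:9] = True
unknown = contour_points(img)
models = {"square": unknown.copy(), "shifted": unknown + 2.0}
best = min(models, key=lambda name: set_distance(unknown, models[name]))
print("closest model:", best)
```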


Author(s):  
Franco Caron

The capability to produce a reliable estimate at completion for a project from the early stages of project execution is a prerequisite for effective project control. The non-repetitive and uncertain nature of projects and the involvement of multiple stakeholders increase project complexity and raise the need to exploit all available knowledge sources in order to improve the forecasting process. Therefore, drawing on a set of case studies, this chapter proposes a Bayesian approach to support the elaboration of the estimate at completion in industrial fields where projects are characterized by a high level of uncertainty and complexity. The Bayesian approach allows the authors to integrate experts' opinions, data records related to past projects, and data related to the current performance of the ongoing project; data from past projects are selected through a similarity analysis. The proposed approach shows higher accuracy in comparison with the traditional formulas typical of the earned value management (EVM) methodology.
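
A minimal sketch of the Bayesian idea, not the chapter's model, is shown below: a conjugate normal-normal update of the cost performance index (CPI), where the prior stands in for similarity-selected past projects and expert opinion and the observations come from the ongoing project, followed by the standard EVM estimate-at-completion formula. All figures are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of a Bayesian estimate at completion (EAC): conjugate normal-normal
# update of the CPI, then the standard EVM formula. All numbers are illustrative.

# Prior on CPI (stand-in for similarity-selected past projects and expert opinion)
mu0, tau0 = 0.95, 0.08            # prior mean and standard deviation

# Observations from the ongoing project (periodic CPI measurements)
cpi_obs = np.array([0.88, 0.91, 0.90, 0.87])
sigma = 0.05                      # assumed measurement noise standard deviation

# Conjugate normal-normal posterior for the mean CPI
n = cpi_obs.size
post_var = 1.0 / (1.0 / tau0**2 + n / sigma**2)
post_mean = post_var * (mu0 / tau0**2 + cpi_obs.sum() / sigma**2)

# EVM quantities for the ongoing project (illustrative)
BAC, AC, EV = 10_000_000.0, 4_500_000.0, 4_000_000.0

eac_bayes = AC + (BAC - EV) / post_mean          # Bayesian EAC
eac_evm = AC + (BAC - EV) / (EV / AC)            # traditional EVM formula (CPI = EV/AC)
print(f"posterior CPI mean: {post_mean:.3f} (sd {post_var**0.5:.3f})")
print(f"Bayesian EAC: {eac_bayes:,.0f}   traditional EVM EAC: {eac_evm:,.0f}")
```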


Author(s):  
Ahan Chatterjee ◽  
Swagatam Roy ◽  
Trisha Sinha

The main objective of this chapter is to take a deeper look into the infrastructural condition of hospitals across the districts of West Bengal, India. There is a liaison between various variables and the infrastructural growth of the public healthcare centres. The authors form a panel dataset for the years 2004–2017, covering 17 districts of West Bengal, and assess a random effects model on the data to choose their respective hypothesis. A Bayesian risk analysis is also carried out to examine which factors the patient mortality rate depends on. Next, a Poisson distribution model is fitted to gain insights from the data. Afterwards, they predict the number of patients who will arrive in 2020 and project the corresponding shortfall of hospitals, with remedies suggested in that section. Finally, they carry out an econometric analysis of the healthcare domain and take a closer look at how healthcare expenditure affects the performance of their focus variables.
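
As an illustration of the Poisson-model step, the sketch below fits yearly patient counts to a linear time trend with a Poisson GLM (statsmodels) and projects the 2020 count, converting it into a hospital shortfall under an assumed per-hospital capacity. The counts, capacity, and hospital numbers are synthetic assumptions, not the chapter's panel data.

```python
import numpy as np
import statsmodels.api as sm

# Hedged sketch: Poisson GLM of yearly patient counts on a time trend, projected to
# 2020, then a shortfall estimate. All data below are synthetic illustrations.

years = np.arange(2004, 2018)
patients = np.array([52, 55, 58, 61, 66, 70, 73, 79, 83, 88, 94, 99, 105, 112]) * 1000

X = sm.add_constant(years - 2004)               # intercept + linear time trend
model = sm.GLM(patients, X, family=sm.families.Poisson()).fit()

x_2020 = sm.add_constant(np.array([2020 - 2004]), has_constant="add")
predicted_2020 = float(model.predict(x_2020)[0])

capacity_per_hospital = 6000                    # assumed annual patient capacity
hospitals_available = 14                        # assumed current count for the example
shortfall = max(0, int(np.ceil(predicted_2020 / capacity_per_hospital)) - hospitals_available)
print(f"projected patients in 2020: {predicted_2020:,.0f}, hospital shortfall: {shortfall}")
```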


Author(s):  
Dariusz Jacek Jakobczak ◽  
Ahan Chatterjee

The burst of data that came with economical access to the internet led to the rise of the cloud computing market, which stores this data, and the need to obtain results from these data led to the growth of the “big data” industry, which analyses this humongous amount of data and retrieves conclusions using various algorithms. Hadoop, as a big data platform, uses the MapReduce framework to produce analysis reports on big data. The term “big data” can be defined as a modern technique to store, capture, and manage data at the scale of petabytes or larger, with high velocity and varied structure. Addressing this massive growth of data requires huge computing capacity to ensure fruitful results from data processing, and cloud computing is the technology that can perform computation that is huge in scale and very complex in nature. Cloud analytics enables organizations to perform better business intelligence, data warehouse operations, and online analytical processing (OLAP).
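
A minimal, single-machine sketch of the MapReduce model that Hadoop implements is given below: a map phase emitting key-value pairs, a shuffle phase grouping them by key, and a reduce phase aggregating each group. The input lines are illustrative; a real Hadoop job would distribute these phases across a cluster.

```python
from collections import defaultdict
from itertools import chain

# Hedged, single-machine simulation of the map -> shuffle -> reduce phases of the
# MapReduce model (word count). Input lines are illustrative only.

lines = [
    "cloud computing stores big data",
    "big data needs big computing",
    "cloud analytics enables OLAP",
]

def map_phase(line):
    """Mapper: emit (word, 1) for every word in a line."""
    return [(word.lower(), 1) for word in line.split()]

def shuffle_phase(mapped):
    """Shuffle: group all emitted values by key, as the framework does between phases."""
    groups = defaultdict(list)
    for key, value in mapped:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reducer: sum the counts for each word."""
    return {key: sum(values) for key, values in groups.items()}

mapped = list(chain.from_iterable(map_phase(line) for line in lines))
counts = reduce_phase(shuffle_phase(mapped))
print(counts)    # e.g. {'big': 3, 'data': 2, 'cloud': 2, ...}
```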


Author(s):  
Swagatam Roy ◽  
Ahan Chatterjee ◽  
Trisha Sinha

In this chapter, the authors take a closer look at the economic relationship with cybercrime and at an analytics method to combat it. At first, they examine whether the increase in the unemployment rate among youths is the prime cause of the growth of cybercrime. They propose a model based on the Phillips curve and Okun's law to test these assumptions. A brief discussion of the impact of cybercrime on economic growth is also presented, along with crime pattern detection and the impact of bitcoin on the current digital currency market. They then propose an analytic method to combat the crime using the concept of game theory: the vulnerability of a cloud datacenter is tested in a two-player, non-cooperative game at the Nash equilibrium state. Through the rational decisions of the players and the implementation of the MSWA algorithm, they simulate results from which the dysfunctionality probabilities of the datacenters can be checked.
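
The MSWA algorithm itself is not reproduced here; the sketch below only illustrates the game-theoretic setup with a two-datacenter, zero-sum attack-versus-defend game solved for its mixed-strategy Nash equilibrium via the indifference conditions, from which per-datacenter dysfunction probabilities follow. The payoff values are assumptions.

```python
import numpy as np

# Hedged sketch: a two-player, non-cooperative, zero-sum "attack vs. defend" game over
# two datacenters, solved for its mixed-strategy Nash equilibrium via the indifference
# conditions. The chapter's MSWA algorithm is not reproduced; payoffs are illustrative.

# Attacker payoff matrix: rows = attack DC1 / attack DC2, cols = defend DC1 / defend DC2.
# A successful attack on an undefended datacenter yields that datacenter's value.
A = np.array([[0.0, 4.0],
              [2.0, 0.0]])
D = -A                                   # zero-sum: defender's loss is attacker's gain

# Defender mixes so the attacker is indifferent between its two actions.
q = (A[1, 1] - A[0, 1]) / (A[0, 0] - A[0, 1] - A[1, 0] + A[1, 1])   # P(defend DC1)
# Attacker mixes so the defender is indifferent between its two actions.
p = (D[1, 1] - D[1, 0]) / (D[0, 0] - D[1, 0] - D[0, 1] + D[1, 1])   # P(attack DC1)

# A datacenter becomes dysfunctional when it is attacked while left undefended.
p_dysfunction_dc1 = p * (1 - q)
p_dysfunction_dc2 = (1 - p) * q
print(f"equilibrium: P(attack DC1)={p:.2f}, P(defend DC1)={q:.2f}")
print(f"dysfunction probabilities: DC1={p_dysfunction_dc1:.2f}, DC2={p_dysfunction_dc2:.2f}")
```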

