Methodologies and Applications of Computational Statistics for Machine Intelligence - Advances in Systems Analysis, Software Engineering, and High Performance Computing
Latest Publications

Total documents: 12 (five years: 12)
H-index: 0 (five years: 0)
Published by IGI Global
ISBN: 9781799877011, 9781799877035

Author(s):  
L. Nirmala Devi ◽  
A. Nageswar Rao

Human action recognition (HAR) is one of the most significant research topics and has attracted the attention of many researchers. Automatic HAR systems are applied in several fields such as visual surveillance, data retrieval, and healthcare. Based on this inspiration, in this chapter the authors propose a new HAR model that takes an image as input, analyzes it, and reveals the action present in it. In the analysis phase, they implement two different feature extraction methods, based on a rotation-invariant Gabor filter and an edge-adaptive wavelet filter. For every action image, a new vector called the composite feature vector is formulated and then subjected to dimensionality reduction through principal component analysis (PCA). Finally, the authors employ a popular supervised machine learning algorithm, the support vector machine (SVM), for classification. Simulation is done over two standard datasets, KTH and Weizmann, and the performance is measured through an accuracy metric.
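The PCA step of the pipeline can be sketched as follows. This is a minimal illustration only: the feature dimensions and data are hypothetical stand-ins for the chapter's Gabor/wavelet composite feature vectors, and the SVM classification stage is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical composite feature vectors: one 128-dim row per action image.
# In the chapter these come from Gabor and wavelet filters; here, random data.
X = rng.normal(size=(60, 128))

def pca_reduce(X, n_components):
    """Project rows of X onto the top principal components via SVD."""
    Xc = X - X.mean(axis=0)                         # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T, S              # scores, singular values

Z, S = pca_reduce(X, 10)
print(Z.shape)  # (60, 10): each image now has a 10-dim reduced feature vector
```

The reduced matrix `Z` would then be fed to the SVM classifier in place of the raw composite vectors.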


Author(s):  
Nagadevi Darapureddy ◽  
Muralidhar Kurni ◽  
Saritha K.

Artificial intelligence (AI) refers to the science of building devices with functions such as reasoning, thinking, learning, and planning. A robot is an intelligent artificial machine capable of sensing and interacting with its environment using integrated sensors or computer vision. Today, AI has become a familiar presence in robotic solutions, introducing flexibility and learning capabilities. A robot with AI offers industries new opportunities to make work safer, save valuable time, and increase productivity. Economic impact assessment and awareness of the social, legal, and ethical problems of robotics and AI are essential to maximize the advantages of these innovations while minimizing adverse effects. The impact of AI and robots is felt in healthcare, manufacturing, transport, and jobs in logistics, security, retail, agri-food, and construction. The chapter outlines the vision of AI and the robotics timeline, highlights robots' limitations, and shows how embedding AI in real-world robotic applications yields optimized solutions.


Author(s):  
Yakup Ari

Financial time series are observed at high frequency, and the spacing between observations is often irregular. Continuous-time models can therefore be used instead of discrete-time series models. The purpose of this chapter is to define Lévy-driven continuous autoregressive moving average (CARMA) models and their applications. The CARMA model is an explicit solution to a stochastic differential equation and is the continuous-time analogue of the discrete ARMA models. To lay a foundation for CARMA processes, the structures of discrete-time process models are examined first. Then stochastic differential equations, Lévy processes, compound Poisson processes, and variance gamma processes are defined. Finally, the parameter estimation of CARMA(2,1) is discussed as an example. The most common estimation method for the CARMA process is pseudo maximum likelihood estimation (PMLE), which maps the ARMA coefficient estimates to the corresponding CARMA coefficients. A simulation study and a real-data application are given as examples.
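For readers unfamiliar with the model, the CARMA(2,1) process mentioned above is conventionally written in state-space form (following the standard Brockwell-style formulation); a sketch of the defining equations, with symbols chosen for illustration:

```latex
% Levy-driven CARMA(2,1): autoregressive polynomial a(z) = z^2 + a_1 z + a_2,
% moving-average polynomial b(z) = b_0 + b_1 z, driven by a Levy process L.
\[
  Y(t) = \mathbf{b}^{\top}\mathbf{X}(t), \qquad
  d\mathbf{X}(t) = A\,\mathbf{X}(t)\,dt + \mathbf{e}\,dL(t),
\]
\[
  A = \begin{pmatrix} 0 & 1 \\ -a_2 & -a_1 \end{pmatrix}, \qquad
  \mathbf{e} = \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \qquad
  \mathbf{b} = \begin{pmatrix} b_0 \\ b_1 \end{pmatrix}.
\]
```

The observed process Y(t) is a linear functional of the latent state X(t), which is driven by the Lévy increments dL(t); PMLE recovers the coefficients a₁, a₂, b₀, b₁ from a fitted discrete ARMA model.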


Author(s):  
Venkat Narayana Rao T. ◽  
Manogna Thumukunta ◽  
Muralidhar Kurni ◽  
Saritha K.

Artificial intelligence and automation are believed by many to herald a new industrial revolution. Machine learning is a branch of artificial intelligence that recognizes patterns in vast amounts of data and extracts useful information. Prediction, as an application of machine learning, has been sought after by all kinds of industries. Highly efficient predictive models have proven effective in reducing market risks, predicting natural disasters, indicating health risks, and forecasting stock values. The quality of decision making enabled by these algorithms has left a lasting impression on several businesses and is bound to alter how the world looks at analytics. This chapter includes an introduction to machine learning and to prediction using machine learning, and it sheds light on the approach and its applications.
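Prediction with machine learning can be illustrated with a minimal example: fit a linear model to observed data, then use the learned parameters on an unseen input. The data here are synthetic stand-ins, not values from the chapter.

```python
import numpy as np

# Toy supervised prediction: learn y = w*x + b by least squares.
x = np.arange(10, dtype=float)
y = 3.0 * x + 2.0                            # synthetic training targets

A = np.column_stack([x, np.ones_like(x)])    # design matrix [x, 1]
w, b = np.linalg.lstsq(A, y, rcond=None)[0]  # learn parameters from data

print(round(w, 2), round(b, 2))  # ≈ 3.0 2.0
pred = w * 12 + b                # predict for an unseen input x = 12
print(round(pred, 2))            # ≈ 38.0
```

Real predictive models differ only in scale: more features, nonlinear model families, and held-out data to estimate the error of predictions like `pred`.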


Author(s):  
Raghavendra Rao Althar ◽  
Debabrata Samanta

This chapter explores the work done in applying data science to software engineering, focusing on the development of secure software systems. Since requirements management is the first stage of the life cycle, approaches that can instill a security mindset right at the beginning are explored. By surveying the work done in this area, key themes of security and its data sources are identified, laying the groundwork for deeper exploration of better approaches to making software systems mature. Based on assessments of some of the work in this area, possible prospects are explored. This exploration also highlights the key challenges troubling the software development community, and it examines possible collaboration across machine learning, deep learning, and natural language processing approaches. The work sheds light on critical dimensions of software development in which security plays a key role.


Author(s):  
Ankita Mandal ◽  
Soumi Dutta ◽  
Sabyasachi Pramanik

In the present research work, geometrical figures are used to estimate the value of pi. Instead of a circle and a square, an ellipse and a rectangle are used. An ellipse can be considered an extension of a circle, stretched unequally in two dimensions, giving rise to the concepts of major and minor axes. These axes are taken as the length and breadth of the enclosing rectangle. The ellipse is placed within the rectangle, and random points are generated to determine where each point falls. If a point lies within the ellipse, a specific counter is incremented; otherwise, the counter for the rectangle is incremented. Because the ratio of the ellipse's area (πab) to the rectangle's area (4ab) is π/4, four times the fraction of generated points that fall inside the ellipse approximates π.
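The method described above can be sketched directly; the semi-axis lengths and sample count below are illustrative choices, not values from the chapter.

```python
import random

def estimate_pi(trials, a=2.0, b=1.0, seed=42):
    """Monte Carlo estimate of pi using an ellipse inscribed in a rectangle.

    The ellipse has semi-axes a and b; the bounding rectangle is 2a by 2b.
    Area ratio = (pi*a*b) / (4*a*b) = pi/4, so pi ~= 4 * inside / trials.
    """
    rng = random.Random(seed)
    inside = 0
    for _ in range(trials):
        x = rng.uniform(-a, a)
        y = rng.uniform(-b, b)
        if (x / a) ** 2 + (y / b) ** 2 <= 1.0:  # point lies within the ellipse
            inside += 1
    return 4.0 * inside / trials

print(estimate_pi(100_000))  # ≈ 3.14
```

Note the estimate is independent of a and b, since the axes cancel in the area ratio; accuracy improves at the usual Monte Carlo rate of about 1/√trials.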


Author(s):  
Raghavendra Rao Althar ◽  
Debabrata Samanta

This chapter focuses on the application of knowledge graphs in software engineering. It starts with a general exploration of artificial intelligence for software engineering and then narrows to the areas where knowledge graphs are a good fit. The focus is to bring together work done in this area and call out key lessons and future aspirations. The architecture of a knowledge management system and specific applications of knowledge graphs in software engineering, such as automating test case creation and building a continuous learning system, are explored. Understanding the semantics of the knowledge, developing an intelligent development environment, defect prediction with network analysis, and clustering of graph data are further exciting directions.
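A knowledge graph in this setting is essentially a set of (subject, predicate, object) triples linking software-engineering entities. A minimal sketch, with entirely hypothetical entity names, shows how pattern queries over such triples support questions like defect analysis:

```python
# Minimal knowledge-graph sketch: triples linking hypothetical
# software-engineering entities (modules, tests, defects, requirements).
triples = {
    ("ModuleA", "tested_by", "TestSuite1"),
    ("ModuleA", "depends_on", "ModuleB"),
    ("Defect42", "found_in", "ModuleB"),
    ("TestSuite1", "covers", "Requirement7"),
}

def query(s=None, p=None, o=None):
    """Return triples matching a pattern; None acts as a wildcard."""
    return [(a, b, c) for (a, b, c) in triples
            if s in (None, a) and p in (None, b) and o in (None, c)]

# Which dependencies of ModuleA contain known defects?
deps = {o for _, _, o in query("ModuleA", "depends_on")}
buggy = {o for _, _, o in query(None, "found_in")}
print(sorted(deps & buggy))  # ['ModuleB']
```

Production systems replace the set with a graph database and add semantics (typed nodes, ontologies), but the query-by-pattern idea is the same.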


Author(s):  
Yibeltal Meslie ◽  
Wegayehu Enbeyle ◽  
Binay Kumar Pandey ◽  
Sabyasachi Pramanik ◽  
Digvijay Pandey ◽  
...  

COVID-19 is likely to pose a significant threat to healthcare, especially for disadvantaged populations, due to the inadequate state of public health services and people's lack of financial means to obtain healthcare. The primary aim of this research was to investigate trend analysis of total daily confirmed cases of the novel coronavirus (COVID-19) in countries of Africa and Asia. The study utilized daily recorded time series observed for two weeks (52 observations), with data obtained from the World Health Organization (WHO) and the Worldometer website. Univariate ARIMA models were employed. STATA 14.2 and Minitab 14 statistical software were used for the analysis, with hypotheses tested at the 5% significance level. Over the time frame studied, all four series were non-stationary at level and became stationary after the first difference. The results revealed that the appropriate time series (ARIMA) models for Ethiopia, Pakistan, India, and Nigeria were moving average of order 2, ARIMA(1,1,1), ARIMA(2,1,1), and ARIMA(1,1,2), respectively.
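The differencing step reported above (non-stationary at level, stationary after the first difference) can be demonstrated on a synthetic trending series; the numbers are illustrative only, not the study's data, and real ARIMA fitting would use a statistics package.

```python
# First differencing removes a deterministic trend: a series with a linear
# upward trend becomes constant (hence stationary) after one difference.
level = [10 + 5 * t for t in range(14)]              # trending daily counts
diff1 = [level[t] - level[t - 1] for t in range(1, len(level))]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

print(variance(level) > variance(diff1))  # True: the trend is gone
print(diff1[:3])                          # [5, 5, 5]
```

In ARIMA(p, d, q) notation, this is exactly what d = 1 encodes: the model is fit to the once-differenced series.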


Author(s):  
Abhishek Bhattacharya ◽  
Arijit Ghosal ◽  
Ahmed J. Obaid ◽  
Salahddine Krit ◽  
Vinod Kumar Shukla ◽  
...  

Microblogging, where millions of users exchange messages to share their opinions on trending and non-trending topics, is one of the most popular communication media of recent times. Many researchers focus on these data because online social media are a huge source of information exchange. On platforms such as Twitter, the generated datasets lack coherence, and manually extracting meaning or knowledge from them is painstakingly difficult. This opens up the challenge of knowledge extraction driven by a summarization approach. Automated summary generation tools that produce a meaningful summary of a given topic therefore become crucial in the age of big data. In this work, an unsupervised, extractive summarization model is proposed. The k-means algorithm is used to categorize the data, and the summarization model is designed based on a score for each document in the corpus. The proposed methodology achieves improved outcomes over existing methods such as LexRank, SumBasic, and LSA, as evaluated by the ROUGE tool.
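The cluster-then-score idea can be sketched on toy data. The posts, the bag-of-words representation, and the scoring function (word count) below are hypothetical stand-ins, since the chapter's exact scoring scheme is not reproduced here.

```python
import math

# Tiny extractive-summarization sketch: cluster short posts with k-means on
# bag-of-words vectors, then keep the top-scoring post from each cluster.
posts = [
    "rain floods city streets",
    "heavy rain closes city schools",
    "team wins final match",
    "fans celebrate match victory",
]
vocab = sorted({w for p in posts for w in p.split()})
vecs = [[p.split().count(w) for w in vocab] for p in posts]

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def kmeans(vecs, k, iters=10):
    centers = [list(vecs[0]), list(vecs[-1])]  # deterministic init (k = 2)
    labels = [0] * len(vecs)
    for _ in range(iters):
        labels = [min(range(k), key=lambda c: dist(v, centers[c]))
                  for v in vecs]
        for c in range(k):                     # recompute cluster centroids
            members = [v for v, l in zip(vecs, labels) if l == c]
            if members:
                centers[c] = [sum(col) / len(members) for col in zip(*members)]
    return labels

labels = kmeans(vecs, k=2)
# Stand-in score: word count; keep the highest-scoring post per cluster.
summary = [max((p for p, l in zip(posts, labels) if l == c),
               key=lambda p: len(p.split()))
           for c in sorted(set(labels))]
print(labels)   # [0, 0, 1, 1]
print(summary)
```

A real system would use TF-IDF vectors, a tuned k, and a richer score, but the structure (vectorize, cluster, select representatives) matches the approach described.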


Author(s):  
M. R. Sundara Kumar ◽  
S. Sankar ◽  
Vinay Kumar Nassa ◽  
Digvijay Pandey ◽  
Binay Kumar Pandey ◽  
...  

In this digital world, information about real-world entities is collected and stored in a common place for extraction. Raw data that carries no meaning on its own is converted into meaningful information by applying a set of rules, and the data must be transformed from one form to another based on the attributes under which it was generated. Storing such huge volumes of data in one place and retrieving them from the repository introduces complications. To overcome the extraction problem, standards bodies and researchers have framed sets of rules and algorithms. Mining data from a repository according to certain principles is called data mining, and many algorithms and rules exist for extraction from data warehouses. But when the data is stored under a common structure in the repository, deriving values from that huge volume is complicated. Computing statistics via data mining provides precise information about real-world applications such as population, weather reports, and probabilities of occurrence.
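The kind of statistical summary mentioned above can be illustrated with the standard library alone; the temperature records are hypothetical stand-ins for values extracted from a weather-report repository.

```python
import statistics

# Hypothetical repository records: daily temperatures (degrees C) extracted
# from a weather data store, summarized with simple statistics.
temperatures = [21.5, 23.0, 22.4, 24.1, 23.7, 22.9, 21.8]

mean_t = statistics.mean(temperatures)
median_t = statistics.median(temperatures)

print(round(mean_t, 2))   # 22.77
print(median_t)           # 22.9

# Probability of occurrence: fraction of days warmer than 23 degrees.
p_hot = sum(t > 23.0 for t in temperatures) / len(temperatures)
print(round(p_hot, 3))    # 0.286
```

Data mining systems apply the same idea at scale: aggregate functions run over the warehouse instead of a seven-element list.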

