Advances in Computer and Electrical Engineering - Challenges and Applications for Implementing Machine Learning in Computer Vision
Latest Publications

TOTAL DOCUMENTS: 10 (five years: 10)
H-INDEX: 1 (five years: 1)

Published By IGI Global
ISBN: 9781799801825, 9781799801849

Author(s):  
Surendra Rahamatkar

This chapter presents the relevance of image processing to identifying different types of damage. For areal-type damage, (1) edge extraction, (2) unsupervised classification, (3) texture analysis, and (4) edge enhancement are suitable for delineating the damaged zone. For linear-type damage, it is difficult to improve the visibility of the damaged portion by image processing alone. In addition, the effect of overlaying facility data to help staff find damage is described.
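As an illustration of the first technique listed above, here is a minimal edge-extraction sketch using Sobel gradients in plain NumPy; the synthetic step-edge image and kernel choice are illustrative assumptions, not the chapter's data:

```python
import numpy as np

def sobel_edges(img):
    """Approximate the gradient magnitude with 3x3 Sobel kernels.

    img: 2-D grayscale array; returns an array of the same shape
    (zero borders) whose large values mark candidate damage edges.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    out = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx = np.sum(kx * patch)  # horizontal gradient
            gy = np.sum(ky * patch)  # vertical gradient
            out[i, j] = np.hypot(gx, gy)
    return out

# Synthetic image with a vertical step edge at column 4.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
edges = sobel_edges(img)
```

The edge map responds only along the step, which is the behaviour exploited when delineating an areal damage boundary.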


Author(s):  
Ramgopal Kashyap

In medical imaging, automatic segmentation is a challenging task, and it is still an unsolved problem for most medical applications due to the wide variety of image modalities, encoding parameters, and organic variability. In this chapter, a review and critique of medical image segmentation using clustering-, compression-, histogram-, edge detection-, parametric-, variational model-, and level set-based methods is presented. Modes of segmentation such as manual, semi-automatic, interactive, and automatic are also discussed. To present the current challenges and the aim of and motivation for fast, interactive, and correct segmentation, the medical image modalities X-ray, CT, MRI, and PET are discussed in this chapter.


Author(s):  
Vinayak Majhi ◽  
Sudip Paul

Content-based image retrieval (CBIR) is a promising technique for accessing visual data. With the huge development of computer storage, networking, and transmission technology, it is now possible to retrieve image data in addition to text. Traditionally, the content of an image was found through images tagged with indexed text. With the development of machine learning techniques in the domain of artificial intelligence, feature extraction for CBIR has become easier. Medical images are continuously increasing day by day, and each image holds specific and unique information about some disease. The objective of using CBIR in medical diagnosis is to provide correct and effective information to the specialist for quality and efficient diagnosis of the disease. Medical image content requires different types of CBIR techniques for different medical image acquisition modalities such as MRI, CT, PET, USG, and MRS. Accordingly, each CBIR technique has its own feature extraction algorithm for each acquisition technique.
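A toy CBIR sketch, assuming the simplest possible feature (a normalised intensity histogram) and an L1 distance for ranking; as the chapter notes, real medical CBIR would swap in modality-specific feature extractors:

```python
import numpy as np

def histogram_feature(img, bins=16):
    """A very simple CBIR feature: the normalised intensity histogram."""
    h, _ = np.histogram(img.ravel(), bins=bins, range=(0.0, 1.0))
    return h / h.sum()

def retrieve(query, database):
    """Rank database images by L1 distance between histogram features."""
    q = histogram_feature(query)
    dists = [np.abs(histogram_feature(img) - q).sum() for img in database]
    return np.argsort(dists)   # best match first

rng = np.random.default_rng(1)
dark = rng.uniform(0.0, 0.4, (32, 32))     # placeholder "image" 0
bright = rng.uniform(0.6, 1.0, (32, 32))   # placeholder "image" 1
database = [dark, bright]
ranking = retrieve(rng.uniform(0.6, 1.0, (32, 32)), database)
```

A bright query ranks the bright database image first; in practice the feature vector, not the ranking logic, is what changes per modality.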


Author(s):  
Muralikrishna Iyyanki ◽  
Prisilla Jayanthi ◽  
Valli Manickam

At present, public health and population health are key areas of major concern, and the current study highlights the significant challenges through a few case studies of applying machine learning to health data, with a focus on regression. Four types of machine learning methods found to be significant are supervised learning, unsupervised learning, semi-supervised learning, and reinforcement learning. In light of the case studies reported as part of the literature survey and the specific exercises carried out for this chapter, it is possible to say that machine learning provides new opportunities for automatic learning with expressive models. Regression models, including multiple and multivariate regression, are suitable for modeling air pollution and heart disease prediction. The applicability of STATA and R packages for multiple linear regression and predictive modelling of the crude birth rate and crude mortality rate is established in the study using data from data.gov.in. Decision trees, a class of very powerful machine learning models, are applied to brain tumors. In simple terms, machine learning and data mining techniques go hand in hand for prediction, data modelling, and decision making. Health analytics and the unpredictable growth of health databases require conventional data analysis to be paired with methods for efficient computer-assisted analysis. In the second case study, a confidence interval is evaluated: the statistical parameter CI indicates the true range of the mean of the crude birth rate and crude mortality rate computed from the observed data.
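The regression-plus-confidence-interval workflow described above can be sketched in a few lines of NumPy. The rate values below are synthetic stand-ins, not the data.gov.in figures, and the 1.96 factor is a normal approximation to the t-quantile:

```python
import numpy as np

# Synthetic stand-in for a tabulated crude birth rate (per 1,000), by year.
year = np.arange(2000, 2010, dtype=float)
cbr = np.array([25.8, 25.4, 25.0, 24.8, 24.2,
                23.8, 23.5, 23.1, 22.8, 22.5])

# Multiple linear regression reduces here to one predictor:
# fit cbr = b0 + b1 * year by ordinary least squares.
X = np.column_stack([np.ones_like(year), year])
beta, *_ = np.linalg.lstsq(X, cbr, rcond=None)
pred_2010 = beta[0] + beta[1] * 2010          # out-of-sample prediction

# 95% confidence interval for the mean rate (normal approximation).
n = len(cbr)
mean = cbr.mean()
sem = cbr.std(ddof=1) / np.sqrt(n)            # standard error of the mean
ci = (mean - 1.96 * sem, mean + 1.96 * sem)
```

The same fit in STATA or R (`regress` / `lm`) would additionally report per-coefficient standard errors; the CI here is the interval for the mean rate, as in the chapter's second case study.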


Author(s):  
Amit Kumar Tyagi ◽  
Poonam Chahal

With recent developments in technology and the integration of millions of internet of things (IoT) devices, a lot of data is being generated every day (known as big data). This data is required to improve the growth of several organizations and of applications such as e-healthcare. We are also entering an era of a smart world, where robotics will take a place in most applications to solve the world's problems. Implementing robotics in applications like medicine and automobiles is a goal of computer vision. Computer vision (CV) is realized through several components: artificial intelligence (AI), machine learning (ML), and deep learning (DL). Machine learning and deep learning techniques/algorithms are used to analyze big data. Today, organizations like Google and Facebook use ML techniques to search for particular data or recommend posts. Hence, the requirements of computer vision are fulfilled through these three terms: AI, ML, and DL.


Author(s):  
Hiral R. Patel ◽  
Ajay M Patel ◽  
Satyen M. Parikh

This chapter introduces machine learning and why it is important. Machine learning is generally used to extract knowledge from unknown data. There are many approaches and algorithms available for performing machine learning, and different kinds of algorithms find different patterns in the data. This chapter focuses on the different approaches and their usage.


Author(s):  
Pauline Ong ◽  
Tze Wei Chong ◽  
Woon Kiow Lee

The traditional approach to student attendance monitoring in Universiti Tun Hussein Onn Malaysia is slow and disruptive. As a solution, biometric verification based on face recognition for student attendance monitoring was presented. The face recognition system consisted of five main stages. First, face images under various conditions were acquired. Next, face detection was performed using the Viola-Jones algorithm to detect the face in the original image. The original image was downscaled and converted to grayscale for faster computation. The histogram of oriented gradients (HOG) technique was applied to extract features from the grayscale images, followed by principal component analysis (PCA) in the dimension reduction stage. Face recognition, the last stage of the system, used a support vector machine (SVM) as the classifier. The development of a graphical user interface for student attendance monitoring was also involved. The highest face recognition accuracy achieved was 62%. The obtained results are less promising, which warrants further analysis and improvement.


Author(s):  
Ramgopal Kashyap

The Boltzmann distribution is derived in this chapter. The Boltzmann equation is explained next, along with the main difficulty of this equation, the integral of the collision operator, which is addressed by the BGK approximation, where a simpler substitute for the collision term is essential. The discretization of the Boltzmann equation with the BGK approximation is introduced, along with the lattice and the different lattice configurations that define the framework in which the method is applied. Within this framework, the algorithm of the method is described. The boundary conditions are summarised, where one can see that they represent macroscopic conditions acting locally at every node.
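A minimal sketch of the BGK collision step on the common D2Q9 lattice, in lattice units with sound speed c_s^2 = 1/3; the relaxation time tau = 0.8 and the perturbed initial populations are arbitrary illustrative choices:

```python
import numpy as np

# D2Q9 lattice: discrete velocities and their quadrature weights.
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, u):
    """Discretized Maxwell-Boltzmann equilibrium f_i^eq for density rho
    and velocity u (second-order expansion, c_s^2 = 1/3)."""
    cu = c @ u
    usq = u @ u
    return rho * w * (1 + 3 * cu + 4.5 * cu**2 - 1.5 * usq)

def bgk_collide(f, tau=0.8):
    """BGK approximation: relax the populations toward the local
    equilibrium with a single relaxation time tau."""
    rho = f.sum()          # macroscopic density (0th moment)
    u = (f @ c) / rho      # macroscopic velocity (1st moment)
    return f - (f - equilibrium(rho, u)) / tau

# One node: equilibrium populations perturbed away from equilibrium.
f = equilibrium(1.0, np.array([0.05, 0.0])) + 0.01
f_new = bgk_collide(f)
```

Because the equilibrium is built from the node's own density and velocity, the collision conserves mass and momentum locally, which is exactly the "macroscopic conditions acting locally in every node" picture above.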


Author(s):  
Sanjay Saxena ◽  
Sudip Paul ◽  
Adhesh Garg ◽  
Angana Saikia ◽  
Amitava Datta

Computational neuroscience is inspired by the mechanisms of the human brain. Neural networks have reformed machine learning and artificial intelligence. Deep learning is a type of machine learning that teaches computers to do what comes naturally to individuals: learning by example. It is inspired by biological brains and has become the essential class of models in the field of machine learning. Deep learning involves several layers of computation. In the current scenario, researchers and scientists around the world are focusing on implementing different deep models and architectures. This chapter covers the major architectures of deep networks, including the convolutional neural network, recurrent neural network, multilayer perceptron, and many more. Further, it discusses the convolutional neural network (CNN) and its different pretrained models, owing to their importance in visual imagery. The chapter also considers the similarity of deep models and architectures to the human brain.
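The layered computation a CNN performs can be sketched as a single convolution-plus-activation step in plain NumPy (valid padding, one channel, one filter; like most frameworks, this actually computes cross-correlation). The input image and filter values are illustrative:

```python
import numpy as np

def conv2d(x, kernel, stride=1):
    """Valid-mode 2-D convolution (cross-correlation, as in most CNN
    frameworks) of a single-channel input with one filter."""
    kh, kw = kernel.shape
    oh = (x.shape[0] - kh) // stride + 1
    ow = (x.shape[1] - kw) // stride + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = x[i * stride:i * stride + kh, j * stride:j * stride + kw]
            out[i, j] = np.sum(patch * kernel)   # one filter response
    return out

def relu(x):
    """Standard nonlinearity applied after each convolutional layer."""
    return np.maximum(x, 0.0)

# 5x5 input whose values increase left to right within each row.
img = np.arange(25, dtype=float).reshape(5, 5)
# A 2x2 vertical-edge filter: responds to left-to-right increases.
feature_map = relu(conv2d(img, np.array([[-1., 1.], [-1., 1.]])))
```

Stacking many such filter banks, with learned kernels, is what the CNN architectures discussed in the chapter do layer after layer.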


Author(s):  
Amit Kumar Tyagi ◽  
G. Rekha

Due to developments in technology, millions of devices (the internet of things, IoT) are generating a large amount of data (called big data). This data is required for analysis processes and analytics tools and techniques. Over the past several decades, a lot of research has used data mining, machine learning, and deep learning techniques. Machine learning is a subset of artificial intelligence, and deep learning is a subset of machine learning. Deep learning can provide more accurate results than classical machine learning because it uses perceptrons, neurons, and the backpropagation method; that is, these techniques solve a problem by learning by themselves, rather than being explicitly programmed by a human being. Deep learning is used in several applications, such as healthcare and retail, and in other real-world problems. However, using deep learning techniques in such applications creates several problems and raises critical issues and challenges, which need to be overcome to obtain accurate results.
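The "learning by itself" idea mentioned above can be sketched with the classic perceptron update rule on toy AND-gate data; real deep networks extend this to many layers via backpropagation:

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=20):
    """Classic perceptron rule: nudge the weights whenever a sample is
    misclassified (learning from errors, not explicit programming)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            err = yi - pred           # 0 when correct, +/-1 when wrong
            w += lr * err * xi        # move the decision boundary
            b += lr * err
    return w, b

# Linearly separable toy data: the AND gate.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
preds = [1 if xi @ w + b > 0 else 0 for xi in X]
```

No rule for AND was ever written down: the weights were adjusted from errors alone, which is the core contrast with hand-programmed feature engineering that the chapter draws.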

