International Journal of Image Processing and Vision Science
Latest Publications


TOTAL DOCUMENTS: 75 (five years: 0)

H-INDEX: 0 (five years: 0)

Published by: Institute For Project Management Pvt. Ltd

ISSN: 2278-1110

Author(s):  
Snehal B. Ranit,
Dr. Nileshsingh V. Thakur

This paper addresses the problem of image segmentation. Segmentation is a fundamental process in many image processing domains, for example content-based image retrieval, pattern recognition, object recognition, face recognition, medical image processing, and fault detection in manufacturing. Scope for improvement exists in several areas: image partitioning, color-based features, texture-based features, similarity search mechanisms, cluster formation logic, pixel connectivity criteria, intelligent decision making for clustering, and processing time. This paper presents a segmentation mechanism that addresses a few of these areas. The work is divided into three parts: the first focuses on color-based image segmentation using k-means clustering, the second on region-properties-based segmentation, and the third on boundary-based segmentation. In all three approaches, a Steiner tree is finally constructed to identify the class of each region, using the Euclidean distance for this purpose. Experimental results justify the application of the developed mechanism for image segmentation.
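As a hedged illustration of the color-based k-means step described above (a sketch, not the authors' implementation), the code below clusters pixel colors and reshapes the labels into a segment map; the image path and cluster count are assumptions.

```python
# Illustrative sketch: color-based segmentation via k-means clustering of pixel colors.
# Assumes scikit-image and scikit-learn; "input.png" and k=4 are arbitrary choices.
import numpy as np
from skimage import io
from sklearn.cluster import KMeans

image = io.imread("input.png")[:, :, :3]          # assumes an RGB(A) image; keep RGB only
pixels = image.reshape(-1, 3).astype(np.float64)  # one row per pixel, RGB features

k = 4                                             # assumed number of segments
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(pixels)
segment_map = labels.reshape(image.shape[:2])     # per-pixel cluster index

print("segment sizes:", np.bincount(labels))
```

The Steiner-tree and Euclidean-distance classification stages described in the abstract would then operate on the regions produced by this segment map.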


Author(s):  
Akansha Saxena,
Santosh Kumar

Mathematical Morphology (MM) is a mathematical theory for describing shapes using sets; in morphology, images are represented as sets. Analysis proceeds through the interaction between an image and a chosen structuring element using the basic operations of erosion and dilation. Applications of morphology include skeletonization, pruning, optical character recognition, image analysis, artifact removal, and boundary extraction. Mathematical morphology also provides better-quality image data for analysis and diagnostic purposes. The process is made efficient through MATLAB algorithms, which help secure meaningful information against degradations such as speckle noise and salt-and-pepper noise.
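A minimal sketch of the erosion and dilation primitives mentioned above, using SciPy on a small binary image; the 3x3 structuring element is an assumption.

```python
# Illustrative sketch: binary erosion, dilation, and boundary extraction.
import numpy as np
from scipy.ndimage import binary_erosion, binary_dilation

image = np.zeros((7, 7), dtype=bool)
image[2:5, 2:5] = True                       # a 3x3 foreground square

selem = np.ones((3, 3), dtype=bool)          # assumed structuring element
eroded = binary_erosion(image, structure=selem)
dilated = binary_dilation(image, structure=selem)

# Boundary extraction, one of the applications listed above: image minus its erosion.
boundary = image & ~eroded
print(image.sum(), eroded.sum(), dilated.sum(), boundary.sum())
```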


Author(s):  
ANUJA DAS,
SHREETAM BEHERA,
TUSAR KANTA PANDA

Cardiac disorders have become a very common problem in recent years, and the ECG is the most important test for interpreting cardiac abnormalities. The ECG records the electrical activity of the human heart, and conclusions can be drawn by analyzing deviations in that activity. The study is divided into two parts. The first part deals with the detection of real-time ECG waveforms from the MIT-BIH Arrhythmia database; these signals are then processed with the Wavelet Transform for R-peak detection. The second part calculates the heart rate from the detected R-peaks, from which cardiac arrhythmia can be analyzed. The study is motivated by the need for an ECG signal analysis method that is simple, accurate, and computationally inexpensive.
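As a hedged sketch of the R-peak and heart-rate steps (not the authors' exact pipeline), the code below emphasizes QRS energy with a wavelet decomposition, picks peaks, and converts R-R intervals into beats per minute; the sampling rate, retained wavelet levels, and thresholds are assumptions.

```python
# Illustrative sketch: wavelet-assisted R-peak detection and heart-rate estimation.
# Assumes PyWavelets and SciPy; fs=360 Hz matches MIT-BIH recordings but is an assumption here.
import numpy as np
import pywt
from scipy.signal import find_peaks

def heart_rate_from_ecg(ecg, fs=360):
    # Keep mid-level detail coefficients where QRS energy concentrates (assumed levels).
    coeffs = pywt.wavedec(ecg, "db4", level=5)
    for i in (0, 1, 5):                              # drop approximation and extreme details
        coeffs[i] = np.zeros_like(coeffs[i])
    qrs = pywt.waverec(coeffs, "db4")[:len(ecg)]

    envelope = qrs ** 2                              # emphasize large deflections
    peaks, _ = find_peaks(envelope,
                          height=0.3 * envelope.max(),   # assumed threshold
                          distance=int(0.25 * fs))       # ~250 ms refractory period
    rr = np.diff(peaks) / fs                         # R-R intervals in seconds
    return 60.0 / rr.mean() if len(rr) else float("nan")
```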


Author(s):  
DR. P.V. RAMA RAJU,
DR. V. MALLESWARA RAO

Late potentials in the ECG occur in the terminal portion of the QRS complex and are characterized by small amplitudes and higher frequencies. Their occurrence may reflect the underlying distribution of electrical activity of the cells in the heart and provides a substrate for the generation of arrhythmias. The conventional Fourier transform does not readily localize these features in time and frequency. The Short-Time Fourier Transform (STFT) is more useful because the concentration of signal energy at various times in the cardiac cycle is more readily identified, but it suffers from the problem of selecting a proper window function, since the window width determines whether high temporal or high spectral resolution is achieved. The Wigner-Ville distribution produces a composite time-frequency distribution but suffers from interference due to cross-terms, which in the presence of late potentials cause high levels of signal power to appear at frequencies not present in the original signal. The present work applies the Wavelet Transform to provide a more accurate picture of the localized time-scale features indicative of late potentials. In the first step, mathematical equations are generated for various cases by a program developed in MATLAB, the signal under consideration is compared with all signals in the database using an identification code developed in MATLAB, and the late potentials in the signal are analyzed to identify the case. In the second step, the same procedure is applied to various specimen cases of the same category, and the late potentials are analyzed to identify the case along with the specimen case.
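A minimal sketch of the wavelet time-scale analysis described above (the MATLAB programs and case database are not reproduced): a continuous wavelet transform of a beat, with the energy summed over an assumed terminal-QRS window. The sampling rate, wavelet, scales, and window boundaries are assumptions.

```python
# Illustrative sketch: localizing late-potential energy in the terminal QRS via the CWT.
# Assumes PyWavelets; fs, the scale range, and the analysis window are arbitrary choices.
import numpy as np
import pywt

def terminal_qrs_energy(ecg_beat, fs=1000, window_ms=(80, 130)):
    scales = np.arange(1, 64)
    coeffs, _ = pywt.cwt(ecg_beat, scales, "morl", sampling_period=1.0 / fs)
    power = np.abs(coeffs) ** 2                  # scalogram: scales x time

    start = int(window_ms[0] * fs / 1000)        # assumed terminal-QRS window
    stop = int(window_ms[1] * fs / 1000)
    return power[:, start:stop].sum()            # energy localized in that window
```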


Author(s):  
Shanmukhappa Angadi,
Vilas Naik

Shot Boundary Detection (SBD) is an early step in most video applications involving the understanding, indexing, characterization, or categorization of video. SBD is temporal video segmentation and has been an active research topic in content-based video analysis, and research efforts have produced a variety of algorithms. The major methods used for shot boundary detection are pixel-intensity-based, histogram-based, edge-based, and motion-vector-based techniques. Recently, researchers have attempted graph-theoretic methods for shot boundary detection. The proposed algorithm is one such graph-based model and employs a graph partition mechanism to detect shot boundaries. The graph partition model is a graph-theoretic segmentation algorithm that performs data clustering using a graph model: pair-wise similarities between all data objects are used to construct a weighted graph, represented as an adjacency matrix (weighted similarity matrix) that contains all the information necessary for clustering. Representing the data set as an edge-weighted graph converts the data clustering problem into a graph partitioning problem. The algorithm is evaluated on sports and movie videos, and the results indicate promising performance.
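A hedged sketch of the graph-partition idea above (not the paper's exact algorithm): frames are described by color histograms, a weighted similarity matrix is built, and the graph is split using the Fiedler vector of the normalized Laplacian; the histogram size and similarity kernel are assumptions.

```python
# Illustrative sketch: two-way partition of a frame-similarity graph via the Fiedler vector.
# frames: list of HxWx3 uint8 arrays; histogram bins and kernel width are assumptions.
import numpy as np

def frame_histogram(frame, bins=8):
    hist, _ = np.histogramdd(frame.reshape(-1, 3),
                             bins=(bins, bins, bins), range=[(0, 256)] * 3)
    return hist.ravel() / hist.sum()

def partition_frames(frames, sigma=0.1):
    feats = np.array([frame_histogram(f) for f in frames])
    dists = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    W = np.exp(-(dists ** 2) / (2 * sigma ** 2))   # weighted adjacency (similarity) matrix

    d = W.sum(axis=1)
    L = np.diag(d) - W                             # graph Laplacian
    L_norm = np.diag(1 / np.sqrt(d)) @ L @ np.diag(1 / np.sqrt(d))
    _, vecs = np.linalg.eigh(L_norm)
    fiedler = vecs[:, 1]                           # eigenvector of the second-smallest eigenvalue
    return fiedler > 0                             # sign change marks a candidate shot boundary
```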


Author(s):  
Madhusmita Panda,
Rajiv Ku. Dash

The increasing popularity of wireless broadband services indicates that future wireless systems will witness rapid growth of high-data-rate applications with very diverse quality of service (QoS) requirements. To support such applications under limited radio resources and harsh wireless channel conditions, dynamic resource allocation, which achieves both higher system spectral efficiency and better QoS, has been identified as one of the most promising techniques. In particular, jointly optimizing resource allocation across adjacent and even non-adjacent layers of the protocol stack leads to dramatic improvements in overall system performance. This article provides an overview of recent research on dynamic resource allocation, especially for OFDM systems. Recent work and open issues in cross-layer resource allocation and adaptation are also discussed.
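As a hedged illustration of per-subcarrier allocation in OFDM (the article is a survey and does not prescribe this particular scheme), the sketch below performs classic water-filling power allocation across subcarrier channel gains; the gains and power budget are assumptions.

```python
# Illustrative sketch: water-filling power allocation over OFDM subcarriers.
# channel_gains: per-subcarrier |h|^2 / noise; total_power is the power budget.
import numpy as np

def water_filling(channel_gains, total_power):
    gains = np.asarray(channel_gains, dtype=float)
    order = np.argsort(gains)[::-1]               # strongest subcarriers first
    g = gains[order]

    power = np.zeros_like(g)
    for n in range(len(g), 0, -1):
        mu = (total_power + np.sum(1.0 / g[:n])) / n   # water level with n active subcarriers
        p = mu - 1.0 / g[:n]
        if p[-1] >= 0:                            # weakest active subcarrier still gets power
            power[:n] = p
            break

    allocation = np.zeros_like(power)
    allocation[order] = power                     # undo the sorting
    return allocation

print(water_filling([2.0, 0.5, 1.0, 0.1], total_power=1.0))
```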


Author(s):  
PRADIP KUMAR TALAPATRA,
SHAKTI PRASAD DASH,
P. RAKESH

Humans are the most advanced creatures in nature; accordingly, humanoid robots can be regarded as the most advanced creations of human beings. Among man-made systems such as automobiles, hand-phones, and multimedia devices, the robots of the future will hopefully be the most capable assistants to human beings. Over several decades of research, development projects aimed at building bipedal and humanoid robots have been increasing at a rapid rate. This paper provides a brief review of current activities in the development of bipedal humanoid robotics and describes current trends in the dynamic modelling of biped robotic systems. The main objectives for using bipedal robots are introduced, and bipedal locomotion and its dynamic behaviour in different fields are considered. The use of the dynamics of different kinds of mechanical systems in the field of humanoid robotics is also emphasized. Finally, a list of a few projects in this field is provided.
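As a hedged illustration of biped dynamic modelling (the review surveys several models rather than prescribing one), the sketch below simulates the widely used linear inverted pendulum model of the centre of mass relative to a fixed foot placement; the pendulum height, foot position, and time step are assumptions.

```python
# Illustrative sketch: linear inverted pendulum model (LIPM) of biped balance.
# Dynamics: x'' = (g / z_c) * (x - p), with CoM height z_c and foot (ZMP) position p.
import numpy as np

def simulate_lipm(x0=0.05, v0=0.0, p=0.0, z_c=0.8, g=9.81, dt=0.005, steps=200):
    x, v = x0, v0
    trajectory = []
    for _ in range(steps):
        a = (g / z_c) * (x - p)      # horizontal CoM acceleration
        v += a * dt                  # simple Euler integration (assumed adequate here)
        x += v * dt
        trajectory.append(x)
    return np.array(trajectory)

# The CoM diverges from the fixed foot, illustrating why steps must be planned.
print(simulate_lipm()[-1])
```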


Author(s):  
SAVITHA SIVAN,
THUSNAVIS BELLA MARY. I

Content-based image retrieval (CBIR) is an active research area, driven by the development of multimedia technologies, and has become a means of exact and fast retrieval. The aim of CBIR is to search a large database and retrieve the images that best match a given query. Achieving accuracy and efficiency on high-dimensional datasets with an enormous number of samples is a challenging task. In this paper, content-based image retrieval using several features, namely color, shape, and texture, is carried out and the features are compared. The performance of the retrieval system depends on the features extracted from an image and was evaluated using precision and recall rates. Haralick texture features were analyzed at 0°, 45°, 90°, and 180° using the gray-level co-occurrence matrix, and color features were extracted using color moments. Structured features and multiple-feature fusion are two main technologies for ensuring retrieval accuracy, and GIST is considered one of the main structured features. It was experimentally observed that a combination of these techniques yielded superior performance compared with individual features. Results for the most efficient combination of techniques are also presented and optimized for each class of query.
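A minimal sketch of the texture and color descriptors named above (not the paper's full fusion pipeline): Haralick-style GLCM properties at the stated angles plus first-order color moments. The distance, property list, and combination are assumptions.

```python
# Illustrative sketch: GLCM texture properties and color moments as a CBIR descriptor.
# Assumes scikit-image >= 0.19 (graycomatrix/graycoprops) and an RGB uint8 image.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from skimage.color import rgb2gray

def cbir_descriptor(image_rgb):
    gray = (rgb2gray(image_rgb) * 255).astype(np.uint8)
    angles = [0, np.pi / 4, np.pi / 2, np.pi]              # 0, 45, 90, 180 degrees
    glcm = graycomatrix(gray, distances=[1], angles=angles,
                        levels=256, symmetric=True, normed=True)
    texture = np.hstack([graycoprops(glcm, prop).ravel()
                         for prop in ("contrast", "homogeneity", "energy", "correlation")])

    channels = image_rgb.reshape(-1, 3).astype(np.float64)
    mean = channels.mean(axis=0)
    std = channels.std(axis=0)
    skew = ((channels - mean) ** 3).mean(axis=0) / np.maximum(std ** 3, 1e-12)
    color = np.hstack([mean, std, skew])                   # first three color moments

    return np.hstack([texture, color])
```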


Author(s):  
GAMIL R.S. QAID,
SANJAY N. TALBAR

Data communication is the transmission of data from one point to another, and its main issue nowadays is security, for which encryption can provide a sound solution. An encryption algorithm is the mathematical process for performing encryption on data. The proposed algorithm supports a user-desired security level and processing level: it provides security levels and their corresponding processing levels by generating random keys for the encryption/decryption process, a facility achieved by using fuzzy logic. The results of the proposed encryption algorithm are analyzed by comparison with other existing encryption algorithms. The aim of the research is to build a new algorithm based on fuzzy set requirements that is more advanced than existing encryption algorithms.
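As a purely hypothetical sketch of mapping a user-desired security level to key material through fuzzy logic (the abstract does not specify the membership functions or cipher actually used), the code below defuzzifies triangular memberships for low, medium, and high security into a key length and draws a random key; all membership shapes and key sizes are assumptions.

```python
# Hypothetical sketch: fuzzy mapping from a user security level (0..1) to a random key.
# Membership functions and key sizes are assumptions, not the paper's actual design.
import secrets

def triangular(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_key_bits(level):
    low = triangular(level, -0.5, 0.0, 0.5)     # degrees of membership (assumed shapes)
    med = triangular(level, 0.0, 0.5, 1.0)
    high = triangular(level, 0.5, 1.0, 1.5)
    # Weighted-average defuzzification onto assumed key sizes in bits.
    return int(round((low * 128 + med * 192 + high * 256) / (low + med + high)))

level = 0.7                                     # user-desired security level (assumed input)
key = secrets.token_bytes(fuzzy_key_bits(level) // 8)
print(len(key) * 8, "bit key generated")
```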


Author(s):  
J. KRISHNA CHAITHANYA,
DR. T. RAMA SHRI

Satellite images present a great variety of features, which makes their processing rather delicate. The automated extraction of linear features from remotely sensed imagery has been the subject of extensive research over several decades, and recent studies show promise for extracting feature information for applications such as updating geographic information systems (GIS). Research has been stimulated by the increase in available imagery in recent years following the launch of several airborne and satellite sensors. The satellite images used in the present work are processed with computer vision techniques, where existing research analyzes synthetic images by feature extraction. These images contain many types of features: 1-D features such as steps and roofs, and 2-D features such as corners, edges, and blocks. In this paper we present a method for edge segmentation of satellite images based on the 2-D Phase Congruency (PC) model. The proposed approach consists of two steps: a contextual nonlinear smoothing algorithm (CNLS) is used to smooth the input images, and then a 2-D stretched Gabor filter (S-G filter) based on the proposed angular variation is developed in order to avoid multiple responses.
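As a hedged sketch of the Gabor-filtering ingredient of the approach above (the full phase congruency model and the stretched Gabor filter are not reproduced), the code below applies standard Gabor filters at several orientations and keeps the maximum magnitude response as a simple edge map; the frequency and orientation count are assumptions.

```python
# Illustrative sketch: oriented Gabor filter bank producing a simple edge-response map.
# Uses skimage's standard Gabor filter, not the paper's stretched Gabor (S-G) filter.
import numpy as np
from skimage.filters import gabor

def gabor_edge_map(gray_image, frequency=0.2, n_orientations=6):
    response = np.zeros_like(gray_image, dtype=float)
    for theta in np.linspace(0, np.pi, n_orientations, endpoint=False):
        real, imag = gabor(gray_image, frequency=frequency, theta=theta)
        magnitude = np.hypot(real, imag)       # orientation-specific filter response
        response = np.maximum(response, magnitude)
    return response                            # larger values ~ stronger edge evidence
```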

