Journal of Innovative Image Processing - December 2019
Latest Publications

Total documents: 74 (five years: 74)
H-index: 10 (five years: 10)
Published by: Inventive Research Organization
ISSN: 2582-4252

2021 ◽  
Vol 3 (4) ◽  
pp. 357-366
Author(s):  
Haoxiang Wang

The Industrial Internet of Things has grown quite popular in recent years and involves a large number of intelligent devices linked together to build a system that can investigate, communicate, gather and observe information. This creates demand for compression techniques that compress data, leading to lower resource usage and low complexity. Convolutional Neural Networks (CNNs) play a large role in the field of computer vision, especially in high-level applications such as interpretation coupled with detection. However, low-level applications such as image compression have conventionally not been addressed with this methodology. In this paper, a compression technique for remote sensing images using a CNN is proposed. The methodology incorporates the CNN in a compact learning environment, wherein the actual image, which contains the structural data, is coded using the Lempel-Ziv-Markov chain algorithm (LZMA). This process is followed by image reconstruction in order to recover the actual image in high quality. The method was compared against optimized truncation, JPEG2000, JPEG and binary tree coding in a large number of experiments in terms of space saving, reconstructed image quality and efficiency. The results indicate that the proposed methodology achieves an effective improvement, attaining a 50 dB signal-to-noise ratio and a space saving of 90%.
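The abstract pairs a CNN with LZMA entropy coding of the structural data and reports space saving as the headline metric. The paper gives no implementation details, so the following is a minimal stdlib sketch of just the LZMA coding step and the space-saving computation; the function names and the toy byte sequence standing in for a quantized feature map are illustrative, not from the paper:

```python
import lzma

def compress_features(feature_bytes: bytes) -> bytes:
    """Entropy-code a quantized feature map with LZMA."""
    return lzma.compress(feature_bytes, preset=9)

def space_saving(original: bytes, compressed: bytes) -> float:
    """Space saving = 1 - compressed/original, as a percentage."""
    return 100.0 * (1.0 - len(compressed) / len(original))

# Toy stand-in for a quantized CNN feature map (highly redundant data).
raw = bytes([i % 8 for i in range(10_000)])
packed = compress_features(raw)
saving = space_saving(raw, packed)
```

On redundant data like this, LZMA easily exceeds the 90% space-saving figure quoted in the abstract; real feature maps compress less, which is why the learned transform matters.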


2021 ◽  
Vol 3 (4) ◽  
pp. 367-376
Author(s):  
Yasir Babiker Hamdan ◽  
A. Sathesh

Due to the complex and irregular shapes of handwritten text, it is challenging to spot and recognize handwritten words. In low-resource scripts, word retrieval is a difficult and laborious task. Deep learning and neural network models create the need to increase the number of samples and to introduce variations into the extended training datasets. Existing preprocessing strategies and theories cannot efficiently cover all possible variations and occurrences. This paper presents a scalable and elastic methodology for warping the extracted features by introducing an adversarial feature deformation and regularization module. The module is inserted between the intermediate layers of the original deep learning framework and trained in an alternating manner. Compared to conventional models, this setup learns highly informative features efficiently. The proposed model, built on popular frameworks for word recognition and spotting and enhanced with the proposed module, is tested on extensive word datasets. Results are recorded while varying the training data size and compared with conventional models, showing improvements in mAP scores and word error rates, particularly in the low-data regime.


2021 ◽  
Vol 3 (4) ◽  
pp. 347-356
Author(s):  
K. Geetha

This work addresses the real-time issue of reliably segmenting root structures in X-ray Computed Tomography (CT) images. A deep learning approach is proposed using a novel framework involving encoders and decoders. The encoder-decoder framework handles multiple resolutions by means of upsampling and downsampling of images. The methodology is enhanced by incorporating network branches with individual tasks, using low-resolution context information and high-resolution segmentation. In large volumetric images, small root details can be resolved by implementing a memory-efficient system, resulting in a complete network. The proposed work, a recent image analysis tool developed for root CT segmentation, is compared with several previously existing methodologies and is found to be more efficient. Quantitatively and qualitatively, the multiresolution approach provides higher accuracy than either a shallower network with a large receptive field or a deep network with a small receptive field. An incremental learning approach is also embedded to enhance the performance of the system. Moreover, the method is capable of detecting both fine and large root material in the entire volume. The proposed work is fully automated and does not require user interaction.


2021 ◽  
Vol 3 (4) ◽  
pp. 322-335
Author(s):  
R. Rajesh Sharma

Information extraction from graphics and video summarization using keyframes have recently benefited from visual content-based methods. Keyframes in a movie may be analyzed by extracting visual elements from the video clips; these visible components are utilized to accurately anticipate the path of an object in real time. The fast and reliable approach is based on frame variations in low-level properties such as color and structure. This research work contains three phases: preprocessing, two-stage extraction, and a video prediction module. The framework estimates object tracks using a probabilistic deterministic process. Keyframes for the whole video sequence are extracted using a proposed two-stage feature extraction approach based on CNN features. An alternate sequence is first constructed by comparing the color characteristics of neighboring frames in the original series with those of the generated one. When the alternate arrangement is compared to the final keyframe sequence, substantial structural changes are found between consecutive frames. Three keyframe extraction techniques based on temporal behavior are employed in this study. A keyframe extraction optimization phase using the Adam optimizer, dependent on the number of final keyframes, is then introduced. The proposed technique outperforms prior methods in computational cost and resilience across a wide range of video formats, resolutions, and other parameters. Finally, this research compares the SSIM, MAE, and RMSE performance metrics against the traditional approach.
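The neighboring-frame color comparison at the heart of the keyframe selection can be sketched in plain Python. Frames are modeled as flat lists of grayscale pixel values, and a frame is kept as a keyframe when its color histogram differs enough from the last kept one; the histogram size and threshold are illustrative, not values from the paper:

```python
def color_histogram(frame, bins=8):
    """Coarse normalized histogram of pixel intensities (0-255)."""
    hist = [0] * bins
    for px in frame:
        hist[px * bins // 256] += 1
    total = len(frame)
    return [h / total for h in hist]

def hist_distance(h1, h2):
    """L1 distance between two normalized histograms."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

def select_keyframes(frames, threshold=0.5):
    """Keep frame 0, then every frame whose color histogram differs
    from the last kept keyframe by more than the threshold."""
    keyframes = [0]
    last = color_histogram(frames[0])
    for i in range(1, len(frames)):
        h = color_histogram(frames[i])
        if hist_distance(last, h) > threshold:
            keyframes.append(i)
            last = h
    return keyframes
```

For example, two near-identical dark frames followed by a bright one yield keyframes at indices 0 and 2: the small intensity change is absorbed by the coarse bins, the large one is not.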


2021 ◽  
Vol 3 (4) ◽  
pp. 336-346
Author(s):  
Judy Simon

A Human-Computer Interface (HCI) requires proper coordination and definition of the features that serve as input to the system. The parameters of saccadic and smooth eye movement tracking are observed and compared for HCI. The methodology is implemented with Pupil, OpenCV and Microsoft Visual Studio for image processing, to identify the position of the pupil and observe the direction of pupil movement in real time. Once the direction is identified, the accurate cursor position moving towards the target can be determined. To quantify the differences between the step-change tracking of saccadic eye movement and the incremental tracking of smooth eye movement, tests were conducted on two users. With incremental tracking of smooth eye movement, an accuracy of 90% is achieved. Incremental tracking requires an average time of 7.21 s, while step-change tracking takes just 2.82 s. Based on the observations, smooth eye movement tracking is over four times more accurate than saccadic eye movement tracking. Therefore, smooth eye tracking was found to be more accurate, precise, reliable, and predictable for use with the mouse cursor than saccadic eye movement tracking.
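The pupil-localization and direction steps can be illustrated without OpenCV: the pupil is the darkest region of an eye image, so its position can be estimated as the centroid of below-threshold pixels, and the cursor direction as the offset of that centroid from the frame center. This is a hedged stdlib sketch of the idea only; the threshold, dead zone, and function names are illustrative, not the abstract's Pupil/OpenCV pipeline:

```python
def pupil_center(gray, width, threshold=60):
    """Estimate pupil position as the centroid of dark pixels
    in a flat grayscale frame of the given width."""
    xs, ys, n = 0, 0, 0
    for idx, px in enumerate(gray):
        if px < threshold:
            xs += idx % width
            ys += idx // width
            n += 1
    if n == 0:
        return None
    return (xs / n, ys / n)

def gaze_direction(center, frame_w, frame_h, dead_zone=0.15):
    """Map the pupil's offset from the frame center to a coarse
    cursor direction, ignoring jitter inside the dead zone."""
    dx = center[0] / frame_w - 0.5
    dy = center[1] / frame_h - 0.5
    h = "left" if dx < -dead_zone else "right" if dx > dead_zone else ""
    v = "up" if dy < -dead_zone else "down" if dy > dead_zone else ""
    return (h + " " + v).strip() or "center"
```

Smooth (incremental) tracking would feed these directions to the cursor every frame, while saccadic (step-change) tracking would jump the cursor once per detected fixation.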


2021 ◽  
Vol 3 (4) ◽  
pp. 311-321
Author(s):  
S. Kavitha ◽  
J. Manikandan

Automation of systems has been evolving since the beginning of the 20th century. In the early days, automation systems were developed with a fixed algorithm to perform a specific task repeatedly. Such fixed automation systems have recently been revolutionized by artificial intelligence programs that take decisions on their own. The motive of the proposed work is to train a textile industry system to automatically detect the presence of defects in the produced fabrics. The work utilizes the OverFeat network algorithm for this training process and compares its performance with the earlier AlexNet and VGG networks. The experimental work is conducted on a fabric defect dataset consisting of three image classes, categorised as horizontal, vertical and hole defects.


2021 ◽  
Vol 3 (4) ◽  
pp. 298-310
Author(s):  
S. Iwin Thanakumar Joseph

Agricultural field identification remains a difficult issue because of the poor resolution of satellite imagery. Monitoring remote harvests and determining the condition of farmland rely on digital agricultural applications. High-resolution photographs have therefore received much more attention, since they are more efficient at detecting land cover components. Nevertheless, low-resolution images remain essential because of the low-resolution repositories of past satellite images used for time series analysis, wavelet decomposition filter-based analysis, free availability, and economic concerns. Using low-resolution Synthetic Aperture Radar (SAR) satellite photos, this study proposes a GAN strategy for locating agricultural regions and determining the crop's cultivation state, linked to the initial or harvesting time. An object detector is used in the preprocessing step of training, followed by a transformation technique for extracting feature information, and then the GAN strategy for classifying the segmented crop picture. After testing, the suggested algorithm is applied to the database's SAR images, which are further processed and categorized based on the training results. Using this information, the density between the crops is calculated. After zooming in on the SAR photos, the crop condition may be categorized based on crop density and crop distance, where the distance is calculated with the Euclidean formula. Finally, the findings are compared with other existing approaches to determine the proposed technique's performance using reliable measures.
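The Euclidean-distance step that turns detected crops into a density estimate is simple enough to sketch directly. Assuming the object detector yields crop centroids as (x, y) coordinates, mean pairwise spacing can stand in for density; the threshold and labels are illustrative, not from the paper:

```python
import math

def euclidean(p, q):
    """Straight-line distance between two crop centroids."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def mean_crop_distance(centroids):
    """Average pairwise distance between detected crop centroids."""
    dists = [euclidean(p, q)
             for i, p in enumerate(centroids)
             for q in centroids[i + 1:]]
    return sum(dists) / len(dists)

def density_label(mean_dist, dense_below=5.0):
    """Coarse crop-condition label from mean spacing
    (threshold is illustrative, in pixels of the zoomed SAR image)."""
    return "dense" if mean_dist < dense_below else "sparse"
```

For instance, centroids at (0, 0), (3, 4) and (0, 8) have pairwise distances 5, 8 and 5, so the mean spacing is 6 and the patch would be labeled sparse under this threshold.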


2021 ◽  
Vol 3 (4) ◽  
pp. 284-297
Author(s):  
B. Vivekanandam

Thermal noise is the most common type of contamination in digital image acquisition and is caused by the temperature of the industrial sensor devices used in the process. Removing noise from the image is one of the most crucial steps in picture improvement; however, it is even more critical to retain the characteristics of the original picture while eliminating the noise. Thermal noise removal is therefore a challenging problem in image denoising. This article provides a strategy based on a Hybrid Adaptive Median (HAM) filtering approach for removing thermal noise from the image output of an industrial sensor. The proposed approach is demonstrated to successfully detect and reduce thermal noise. In addition, this study examines a hybrid adaptive median filtering approach that has significant computational advantages, making it highly practical. Finally, the reported experiments show high-quality industrial sensor imaging systems successfully implemented in the real world.
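The adaptive half of such a filter typically follows the textbook adaptive median scheme: the window grows while the local median is itself an impulse, and a pixel is replaced only when it is an outlier, which preserves the original picture's characteristics. A pure-Python sketch of that standard scheme follows; it is not necessarily the paper's exact hybrid, and the window sizes are illustrative:

```python
def adaptive_median(img, y, x, max_window=7):
    """Adaptive median at pixel (y, x): grow the window while the
    local median equals the local min or max (i.e. is an impulse)."""
    h, w = len(img), len(img[0])
    win = 3
    while win <= max_window:
        r = win // 2
        patch = [img[j][i]
                 for j in range(max(0, y - r), min(h, y + r + 1))
                 for i in range(max(0, x - r), min(w, x + r + 1))]
        patch.sort()
        lo, med, hi = patch[0], patch[len(patch) // 2], patch[-1]
        if lo < med < hi:
            # Median is reliable: keep the pixel unless it is an impulse.
            return img[y][x] if lo < img[y][x] < hi else med
        win += 2  # median was an extreme value; enlarge window and retry
    return med

def denoise(img):
    """Apply the adaptive median filter to every pixel of a 2D image."""
    return [[adaptive_median(img, y, x) for x in range(len(img[0]))]
            for y in range(len(img))]
```

A single hot pixel in a flat region is replaced by the local median, while uncorrupted pixels pass through unchanged; this detail preservation is what distinguishes adaptive from plain median filtering.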


2021 ◽  
Vol 3 (3) ◽  
pp. 269-283
Author(s):  
R. Kanthavel

To solve the challenges of traffic object identification, fuzzification, and simplification in a real traffic environment, an automatic detection and classification technique for roads, automobiles, and pedestrians that handles multiple traffic objects within the same framework is highly required. The proposed method has been evaluated on a database with complicated poses, motions, backgrounds, and lighting conditions for an urban scenario in which pedestrians are not obstructed. The suggested CNN classifier has a lower false positive rate (FPR) than the SVM classifier, while the SVM classifier's accuracy equals that of the CNN, confirming the significance of automatically optimized features. The proposed framework is integrated with an additional adaptive segmentation method to identify pedestrians more precisely than conventional techniques. Additionally, the proposed lightweight feature mapping leads to faster calculation times, as verified and tabulated in the results and discussion section.

