input image
Recently Published Documents


TOTAL DOCUMENTS: 1106 (FIVE YEARS: 717)

H-INDEX: 17 (FIVE YEARS: 7)

2022, Vol. 22 (3), pp. 1-14
Author(s): K. Shankar, Eswaran Perumal, Mohamed Elhoseny, Fatma Taher, B. B. Gupta, ...

The COVID-19 pandemic has caused a significant loss of life and economic damage worldwide. To prevent and control COVID-19, a range of smart, complex, spatially heterogeneous control solutions and strategies have been deployed. Early classification of the 2019 novel coronavirus disease (COVID-19) is needed to treat and control the disease. This creates a requirement for secondary diagnosis models, since no precise automated toolkits exist. Recent findings obtained with radiological imaging techniques show that such images hold noticeable details regarding the COVID-19 virus. Applying recent artificial intelligence (AI) and deep learning (DL) approaches to radiological images proves useful for accurately detecting the disease. This article introduces a new synergic deep learning (SDL)-based smart health diagnosis of COVID-19 using chest X-ray images. The SDL makes use of dual deep convolutional neural networks (DCNNs) that learn mutually from one another. In particular, the image representations learned by both DCNNs are provided as input to a synergic network, which has a fully connected structure and predicts whether the pair of input images belongs to the same class. In addition, the proposed SDL model employs a fuzzy bilateral filtering (FBF) model to pre-process the input image. The integration of FBF and SDL results in effective classification of COVID-19. To investigate the classification performance of the SDL model, a detailed set of simulations was carried out, confirming the effective performance of the FBF-SDL model over the compared methods.
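A minimal sketch, assuming PyTorch and illustrative backbones (not the authors' implementation), of how a synergic pair classifier of this kind can be wired: two DCNNs each embed one image, and a fully connected synergic head predicts whether the pair belongs to the same class.

```python
# Sketch of a synergic deep learning (SDL) pairing head (assumption: PyTorch).
# Two DCNN backbones embed a pair of images; a fully connected "synergic"
# network predicts whether the two images belong to the same class.
import torch
import torch.nn as nn
import torchvision.models as models

class SynergicPair(nn.Module):
    def __init__(self, embed_dim=512):
        super().__init__()
        # Two independent backbones (ResNet-18 used here purely for illustration).
        self.dcnn_a = models.resnet18(weights=None)
        self.dcnn_b = models.resnet18(weights=None)
        self.dcnn_a.fc = nn.Identity()   # expose the 512-d embeddings
        self.dcnn_b.fc = nn.Identity()
        # Fully connected synergic network on the concatenated embeddings.
        self.synergic = nn.Sequential(
            nn.Linear(2 * embed_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 1),           # logit: same class vs. different class
        )

    def forward(self, img_a, img_b):
        za = self.dcnn_a(img_a)
        zb = self.dcnn_b(img_b)
        return self.synergic(torch.cat([za, zb], dim=1))

# Usage: logit = SynergicPair()(x1, x2) with x1, x2 of shape (N, 3, 224, 224);
# train with nn.BCEWithLogitsLoss against a same-class indicator.
```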


Author(s): Jayati Mukherjee, Swapan K. Parui, Utpal Roy

Segmentation of text lines and words in an unconstrained handwritten or machine-printed degraded document is a challenging document analysis problem due to the heterogeneity of document structure. Often there is uneven skew between the lines, and words may be broken. The contribution of this article lies in segmenting a document page image into lines and words. We propose an unsupervised, robust, and simple statistical method to segment a document image that is either handwritten or machine-printed (degraded or otherwise). In the proposed method, segmentation is treated as a two-class classification problem. The classification is done by considering the distribution of gap sizes (between lines and between words) in a binary page image. Our method is simple and easy to implement. Other than binarization of the input image, no pre-processing is necessary, and no high computational resources are needed. The proposed method is unsupervised in the sense that no annotated document page images are necessary, so the issue of a training database does not arise. In fact, given a document page image, the parameters needed for segmentation of text lines and words are learned in an unsupervised manner. We have applied the proposed method to several popular publicly available handwritten and machine-printed datasets (ISIDDI, IAM-Hist, IAM, PBOK) in different Indian and other languages and containing different fonts. Several experimental results are presented to show the effectiveness and robustness of our method. We have also experimented on the ICDAR 2013 handwriting segmentation contest dataset, where our method outperforms the winning method. In addition, we suggest a quantitative measure to compute the level of degradation of a document page image.
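A rough sketch, assuming a binarized page stored as a NumPy array (text pixels = 1), of the gap-statistics idea: collect the sizes of blank horizontal runs in the row projection and split them into two classes (within-line vs. between-line gaps) with 2-means clustering. The clustering step and cut rule here are illustrative, not the authors' exact statistical method.

```python
# Sketch: classify projection-profile gaps into two classes to find line breaks.
# Assumes `page` is a binarized image as a NumPy array with text pixels set to 1.
import numpy as np
from sklearn.cluster import KMeans

def line_boundaries(page):
    row_ink = page.sum(axis=1)                 # horizontal projection profile
    blank = row_ink == 0                       # rows with no text pixels

    # Collect runs of consecutive blank rows as (start, length) gaps.
    gaps, start = [], None
    for i, is_blank in enumerate(blank):
        if is_blank and start is None:
            start = i
        elif not is_blank and start is not None:
            gaps.append((start, i - start))
            start = None

    if len(gaps) < 2:
        return []

    # Two-class split of gap sizes: small gaps stay inside a line,
    # large gaps separate lines.
    sizes = np.array([[length] for _, length in gaps], dtype=float)
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(sizes)
    big_label = labels[np.argmax(sizes)]       # cluster containing the largest gap

    # Return the midpoints of the "between-line" gaps as cut positions.
    return [s + length // 2
            for (s, length), lab in zip(gaps, labels) if lab == big_label]
```

The same gap-size classification applies to vertical gaps within a line to separate words.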


2022, Vol. 24 (2), pp. 1-14
Author(s): Saravanan S., Sujitha Juliet

Medical imaging plays a significant role in modern clinical practice, and storing and transferring the huge volume of images becomes complicated without an efficient image compression technique. This paper proposes a compression algorithm that uses a Haar-based wavelet transform called the Tetrolet transform, which reduces noise in the input images and decomposes them over 4 x 4 blocks covered by shapes of four equal squares called tetrominoes. An optimal covering scheme is selected for each block so that the input image is turned into a sparse representation that captures texture and edge information better than the standard wavelet transform. Set Partitioning in Hierarchical Trees (SPIHT) is used to encode the significant coefficients and achieve efficient image compression. The approach has been investigated with various metaheuristic algorithms. Experimental results show that the proposed method outperforms other transform-based compression schemes in terms of PSNR, compression ratio (CR), and complexity, and compares favorably with the state of the art.
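A small NumPy sketch of the blockwise idea, restricted to the trivial square-tetromino covering of each 4 x 4 block (the actual Tetrolet transform searches all admissible tetromino coverings per block and keeps the sparsest; SPIHT coding is not shown). The thresholding below is illustrative only.

```python
# Sketch: Tetrolet-style analysis using only the square-tetromino covering.
# Each 4x4 block is covered by four 2x2 squares; the 4 pixels of each square
# map to 1 low-pass and 3 high-pass coefficients via the 4-point Haar matrix.
import numpy as np

# 4-point Haar analysis matrix acting on the 4 pixels of one tetromino.
W = 0.5 * np.array([[ 1,  1,  1,  1],
                    [ 1,  1, -1, -1],
                    [ 1, -1,  1, -1],
                    [ 1, -1, -1,  1]], dtype=float)

def tetrolet_block(block):
    """Transform a 4x4 block using the trivial (square) tetromino covering."""
    coeffs = np.zeros((4, 4))
    for i in range(2):
        for j in range(2):
            pixels = block[2*i:2*i+2, 2*j:2*j+2].reshape(4)
            c = W @ pixels
            coeffs[i, j] = c[0]             # low-pass coefficient (top-left quadrant)
            coeffs[2 + i, 2 + j] = c[1]     # high-pass coefficients in the
            coeffs[i, 2 + j] = c[2]         # remaining three quadrants
            coeffs[2 + i, j] = c[3]
    return coeffs

def transform_image(img, threshold=8.0):
    """Blockwise transform with hard thresholding of small detail coefficients."""
    h, w = img.shape
    out = np.zeros((h, w))
    for y in range(0, h - h % 4, 4):
        for x in range(0, w - w % 4, 4):
            c = tetrolet_block(img[y:y+4, x:x+4].astype(float))
            small = np.abs(c) < threshold
            small[:2, :2] = False           # never discard low-pass values
            c[small] = 0.0
            out[y:y+4, x:x+4] = c
    return out
```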


Author(s): Krithika Vaidyanathan, Nandhini Murugan, Subramani Chinnamuthu, Sivashanmugam Shivasubramanian, Surya Raghavendran, ...

Extracting text from an image and reproducing it can often be a laborious task, and our work aims to solve this problem by designing a robot that can perceive an image shown to it and reproduce the text on any given area as directed. The robot first takes an input image and performs image processing operations to improve its readability; the text in the image is then recognized by the program. Points are generated for each letter, inverse kinematics is computed for each point in MATLAB/Simulink, and the resulting servo motor angles are stored on the Arduino. Using these angles, the control algorithm runs on the Arduino and the letters are drawn.
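Although the authors use MATLAB/Simulink, the core inverse-kinematics step can be illustrated with a short Python sketch for a hypothetical two-link planar arm: given a pen position (x, y), solve for the two joint angles to send to the servos. Link lengths and angle conventions here are assumptions, not the paper's hardware.

```python
# Sketch: geometric inverse kinematics for a 2-link planar drawing arm.
# Given a target pen position (x, y), compute shoulder/elbow angles in degrees.
# Link lengths L1, L2 are hypothetical values, not the paper's robot.
import math

L1, L2 = 10.0, 8.0   # link lengths in cm (assumed)

def inverse_kinematics(x, y):
    r2 = x * x + y * y
    # Law of cosines for the elbow angle.
    cos_elbow = (r2 - L1 * L1 - L2 * L2) / (2.0 * L1 * L2)
    if abs(cos_elbow) > 1.0:
        raise ValueError("target point is out of reach")
    elbow = math.acos(cos_elbow)                      # elbow-down solution
    # Shoulder angle = angle to the target minus the offset from the elbow bend.
    shoulder = math.atan2(y, x) - math.atan2(L2 * math.sin(elbow),
                                             L1 + L2 * math.cos(elbow))
    return math.degrees(shoulder), math.degrees(elbow)

# Example: convert a stroke (points along one letter) into servo angles.
stroke = [(12.0, 3.0), (12.5, 3.5), (13.0, 4.0)]
angles = [inverse_kinematics(px, py) for px, py in stroke]
```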


Sensors, 2022, Vol. 22 (2), 680
Author(s): Sehyeon Kim, Dae Youp Shin, Taekyung Kim, Sangsook Lee, Jung Keun Hyun, ...

Motion classification for the control of prosthetic arms can be performed using biometric signals recorded by electroencephalography (EEG) or electromyography (EMG) with noninvasive surface electrodes. However, current single-modal EEG- and EMG-based motion classification techniques are limited by the complexity and noise of EEG signals and by electrode placement bias and the low resolution of EMG signals. We herein propose a novel system of two-dimensional (2D) input image feature multimodal fusion based on an EEG/EMG-signal transfer learning (TL) paradigm for detecting hand movements in transforearm amputees. A feature extraction method in the frequency domain of the EEG and EMG signals was adopted to establish the 2D images. The input images were used to train a model based on a convolutional neural network algorithm and TL, which requires 2D images as input data. For data acquisition, five transforearm amputees and nine healthy controls were recruited. Compared with conventional models trained on single-modal EEG signals, the proposed multimodal fusion method significantly improved classification accuracy in both the control and patient groups. When the two signals were combined and used in the pretrained model for EEG TL, the classification accuracy increased by 4.18–4.35% in the control group and by 2.51–3.00% in the patient group.
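A sketch, assuming SciPy and NumPy, of the kind of frequency-domain 2D image that could feed such a fusion model: spectrograms of one EEG channel and one EMG channel are computed and stacked into a two-channel image for a CNN. The sampling rate, STFT settings, and stacking scheme are illustrative assumptions, not the paper's exact pipeline.

```python
# Sketch: build a 2D frequency-domain "image" from one EEG and one EMG channel.
# Sampling rate and STFT settings are assumptions for illustration.
import numpy as np
from scipy.signal import spectrogram

FS = 1000  # Hz (assumed sampling rate)

def to_image(eeg_channel, emg_channel):
    """Return a 2-channel array (2, freq_bins, time_bins) of log spectrograms."""
    _, _, s_eeg = spectrogram(eeg_channel, fs=FS, nperseg=256, noverlap=128)
    _, _, s_emg = spectrogram(emg_channel, fs=FS, nperseg=256, noverlap=128)
    img = np.stack([np.log1p(s_eeg), np.log1p(s_emg)])
    # Per-channel min-max normalization so both modalities share the same scale.
    img -= img.min(axis=(1, 2), keepdims=True)
    img /= img.max(axis=(1, 2), keepdims=True) + 1e-8
    return img

# Example with synthetic 2-second signals.
t = np.arange(0, 2, 1 / FS)
eeg = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(t.size)
emg = np.random.randn(t.size) * (1 + np.sin(2 * np.pi * 2 * t))
image = to_image(eeg, emg)   # resize to the pretrained CNN's input size before training
```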


2022, Vol. 2022, pp. 1-18
Author(s): Muhammad Arif, F. Ajesh, Shermin Shamsudheen, Oana Geman, Diana Izdrui, ...

Radiology is a broad field that requires extensive knowledge of medical science to identify tumors accurately; a tumor detection program therefore helps compensate for the shortage of qualified radiologists. Using magnetic resonance imaging, biomedical image processing makes it easier to detect and locate brain tumors. In this study, a segmentation and detection method for brain tumors was developed using MRI sequence images as input to identify the tumor area. This process is difficult due to the wide variety of tumor tissues across patients and, in most cases, their similarity to normal tissue. The main goal is to classify the brain as containing a tumor or being healthy. The proposed system is based on the Berkeley wavelet transform (BWT) and a deep learning classifier to improve performance and simplify the process of medical image segmentation. Significant features are extracted from each segmented tissue using the gray-level co-occurrence matrix (GLCM) method, followed by feature optimization with a genetic algorithm. The final result of the implemented approach was assessed in terms of accuracy, sensitivity, specificity, Dice coefficient, Jaccard coefficient, spatial overlap, AVME, and FoM.
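One concrete piece of this pipeline, GLCM feature extraction from a segmented tissue region, can be sketched with scikit-image; the distances, angles, and chosen properties below are assumptions, and the BWT segmentation, deep learning classifier, and genetic-algorithm feature selection are not shown.

```python
# Sketch: GLCM texture features from a segmented (grayscale, uint8) tissue region.
# Distances, angles, and the property list are illustrative choices.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(region):
    """region: 2D uint8 array of the segmented tissue."""
    glcm = graycomatrix(region,
                        distances=[1, 2],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    # Average each property over all distance/angle combinations.
    return np.array([graycoprops(glcm, p).mean() for p in props])

# Example: features = glcm_features(segmented_slice) yields a 4-element vector
# that can then feed the feature-selection and classification stages.
```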


2022, Vol. 2022, pp. 1-9
Author(s): R. Dinesh Kumar, E. Golden Julie, Y. Harold Robinson, S. Vimal, Gaurav Dhiman, ...

Humans have mastered the skill of creativity over many decades. Recent attempts to replicate this mechanism use neural networks modeled on the functioning of the human brain, where each unit represents a neuron that transmits messages to other neurons to perform subconscious tasks. Methods exist to render an input image in the style of famous artworks; this problem of generating art is normally called non-photorealistic rendering. Previous approaches rely on directly manipulating the pixel representation of the image, whereas this paper, using deep neural networks built for image recognition, works in a feature space representing the higher-level content of the image. Deep neural networks have previously been used for object recognition and style recognition to categorize artworks according to their time of creation. This paper uses the Visual Geometry Group (VGG16) neural network to replicate this task performed by humans. The inputs are a content image, which contains the features to be retained in the output, a style reference image, which contains the patterns of famous paintings, and the input image to be stylized; these are blended to produce a new image in which the input image is transformed to look like the content image but is "sketched" to look like the style image.
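A compressed sketch, assuming PyTorch/torchvision, of the VGG16 feature extraction and Gram-matrix style loss that this kind of approach relies on; the layer indices, loss weights, and optimizer choice are illustrative and not the paper's exact configuration.

```python
# Sketch: VGG16 feature extraction with a content loss and a Gram-matrix style loss.
# Layer choices and loss weights are illustrative assumptions.
import torch
import torch.nn.functional as F
from torchvision.models import vgg16, VGG16_Weights

vgg = vgg16(weights=VGG16_Weights.DEFAULT).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

STYLE_LAYERS = [3, 8, 15, 22]   # conv/ReLU activations used for the style loss
CONTENT_LAYER = 15              # activation used for the content loss

def features(x):
    feats, out = {}, x
    for i, layer in enumerate(vgg):
        out = layer(out)
        if i in STYLE_LAYERS or i == CONTENT_LAYER:
            feats[i] = out
    return feats

def gram(f):
    n, c, h, w = f.shape
    f = f.reshape(n, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def style_transfer_loss(gen, content, style, style_weight=1e5):
    fg, fc, fs = features(gen), features(content), features(style)
    loss = F.mse_loss(fg[CONTENT_LAYER], fc[CONTENT_LAYER])
    for i in STYLE_LAYERS:
        loss = loss + style_weight * F.mse_loss(gram(fg[i]), gram(fs[i]))
    return loss

# Typical usage: start from a copy of the content image with requires_grad=True
# and minimize style_transfer_loss with an optimizer such as torch.optim.LBFGS.
```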


2022
Author(s): Nitin Kumar

Abstract: In order to solve the problems of poor region delineation and boundary artifacts in Indian style migration of images, an improved Variational Autoencoder (VAE) method for dress style migration is proposed. First, the Yolo v3 model is used to quickly locate the dress in the input image; then the classical semantic segmentation algorithm (FCN) is applied to finely delineate the desired dress style migration region a second time; finally, the trained VAE model generates the migrated Indian style image with the aid of a decision support system. The results show that, compared with the traditional style migration model, the improved VAE style migration model obtains finer synthetic images for dress style migration and can adapt to different Indian traditional styles, meeting the application requirements of dress style migration scenarios. We evaluated several deep learning based models and achieved an average BLEU value of 0.6; the transformer-based model outperformed the other models, achieving a BLEU value of up to 0.72.
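A minimal sketch of the VAE component alone, in PyTorch, with made-up layer sizes and latent dimension; the Yolo v3 localization, FCN segmentation, and decision-support parts of the pipeline are not reproduced here.

```python
# Sketch: a small convolutional VAE of the kind used for style generation.
# Layer sizes and latent dimension are illustrative assumptions.
import torch
import torch.nn as nn

class ConvVAE(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(64 * 16 * 16, latent_dim)
        self.fc_logvar = nn.Linear(64 * 16 * 16, latent_dim)
        self.fc_dec = nn.Linear(latent_dim, 64 * 16 * 16)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):                      # x: (N, 3, 64, 64)
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization
        out = self.decoder(self.fc_dec(z).view(-1, 64, 16, 16))
        return out, mu, logvar

def vae_loss(recon, x, mu, logvar):
    rec = nn.functional.mse_loss(recon, x, reduction="sum")       # reconstruction term
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) # KL regularizer
    return rec + kld
```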


Electronics, 2022, Vol. 11 (1), 150
Author(s): Meicheng Zheng, Weilin Luo

Due to refraction, absorption, and scattering of light by suspended particles in water, underwater images are characterized by low contrast, blurred details, and color distortion. In this paper, a fusion algorithm to restore and enhance underwater images is proposed. It consists of a color restoration module, an end-to-end defogging module, and a brightness equalization module. In the color restoration module, a color balance algorithm based on the CIE Lab color model is proposed to alleviate the effect of color deviation in underwater images. In the end-to-end defogging module, one end is the input image and the other end is the output image, and a CNN is proposed to connect the two ends and improve the contrast of the underwater images. Within the CNN, a sub-network is used to reduce the network depth required to obtain the same features, and several depthwise separable convolutions are used to reduce the number of parameters and the amount of computation required during training. A basic attention module is introduced to highlight important areas in the image. To improve the defogging network's ability to extract overall information, a cross-layer connection and a pooling pyramid module are added. In the brightness equalization module, a contrast-limited adaptive histogram equalization method is used to coordinate the overall brightness. The proposed fusion algorithm for underwater image restoration and enhancement is verified by experiments and by comparison with previous deep learning models and traditional methods. The comparison results show that the color correction and detail enhancement achieved by the proposed method are superior.
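The classical stages of this pipeline, the Lab-space color balance and the CLAHE brightness equalization, can be sketched with OpenCV; the learned defogging CNN is omitted, and the specific gain rule and CLAHE settings are assumptions rather than the paper's exact formulas.

```python
# Sketch: Lab-space color balance plus CLAHE, the classical stages of the pipeline.
# The learned defogging CNN between these two steps is not shown; the gain rule
# and CLAHE parameters below are illustrative assumptions.
import cv2
import numpy as np

def color_balance_lab(img_bgr):
    """Shift the a/b chroma channels toward neutral to reduce color cast."""
    lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    l, a, b = cv2.split(lab)
    a -= (a.mean() - 128.0) * (l.mean() / 255.0)   # pull green/magenta cast toward neutral
    b -= (b.mean() - 128.0) * (l.mean() / 255.0)   # pull blue/yellow cast toward neutral
    lab = cv2.merge([l, a, b])
    return cv2.cvtColor(np.clip(lab, 0, 255).astype(np.uint8), cv2.COLOR_LAB2BGR)

def equalize_brightness(img_bgr):
    """CLAHE on the luminance channel to coordinate overall brightness."""
    lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return cv2.cvtColor(cv2.merge([clahe.apply(l), a, b]), cv2.COLOR_LAB2BGR)

# Usage: out = equalize_brightness(color_balance_lab(cv2.imread("underwater.jpg")))
```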

