Image Resizing
Recently Published Documents


TOTAL DOCUMENTS: 229 (FIVE YEARS: 36)
H-INDEX: 21 (FIVE YEARS: 2)

Author(s): A. Semma, S. Lazrak, Y. Hannad, M. Boukhani, Y. El Kettani

Abstract. Introducing deep learning has been successful in improving the performance of automated writer identification systems. However, using very large patch sizes as CNN input consumes substantial machine resources and requires long training times. To overcome this problem, many researchers use resized images. In this paper, we present a comparative study of several patch sizes that are subsequently resized to a normalized size of 32 × 32. Our aim is to derive recommendations for choosing the image resizing so as to increase CNN performance. We carry out our tests on three databases: CVL, a Latin dataset with 310 writers; CERUG-CH, a Chinese dataset with 105 writers; and KHATT, which contains the Arabic writings of 1000 writers. To determine whether the type of CNN model affects the results obtained on resized images, we deploy two models: ResNet-18 and MobileNet. The main finding is that the best results correspond to resizing values that bring the average line height of the writing closer to the height of the CNN patches.
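As an illustration of the pre-processing described above, the following is a minimal sketch, assuming OpenCV and NumPy, of cutting fixed-size handwriting patches and resizing them to the normalized 32 × 32 CNN input; the source patch size, blank-patch threshold, and interpolation mode are illustrative assumptions, not the authors' settings.

```python
import cv2
import numpy as np

def extract_and_resize_patches(page_img, patch_size=64, target=32):
    """Cut square patches from a grayscale handwriting page and resize
    each to target x target pixels for CNN input (illustrative sketch)."""
    h, w = page_img.shape[:2]
    patches = []
    for y in range(0, h - patch_size + 1, patch_size):
        for x in range(0, w - patch_size + 1, patch_size):
            patch = page_img[y:y + patch_size, x:x + patch_size]
            if patch.mean() < 250:  # skip nearly blank (white) patches
                patches.append(cv2.resize(patch, (target, target),
                                          interpolation=cv2.INTER_AREA))
    return np.asarray(patches)
```

The patch_size argument is the quantity the study varies: per the abstract, choosing it so that the average line height of the writing roughly matches the CNN patch height after resizing gives the best results.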


2021
Author(s): Bibhash Pran Das, Mrutyunjay Biswal, Abhranta Panigrahi, Manish Okade

2021
Author(s): Yigong Hu, Shengzhong Liu, Tarek Abdelzaher, Maggie Wigness, Philip David

2021, pp. 221-227
Author(s): Bambang Krismono Triwijoyo, Boy Subirosa Sabarguna, Widodo Budiharto, Edi Abdurachman

Medical research indicates that narrowing of the retinal blood vessels may be an early indicator of cardiovascular diseases, one of which is hypertensive retinopathy. This paper proposes a new staging method for hypertensive retinopathy based on measuring the artery-to-vein diameter ratio (AVR). The dataset used in this research is the public Messidor color fundus image dataset. The proposed method consists of image resizing using bicubic interpolation, optic disk detection, region-of-interest computation, vessel diameter measurement, AVR calculation, and grading into new categories of hypertensive retinopathy derived from the Keith-Wagener-Barker categories. The experiments show that the proposed method can assign the stage of hypertensive retinopathy to the new categories.
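A minimal sketch of two steps from the pipeline above, bicubic resizing of the fundus image and the AVR computation from already-measured vessel diameters, assuming OpenCV and NumPy; the target size and function names are illustrative, not the authors' implementation.

```python
import cv2
import numpy as np

def resize_fundus(img, size=(576, 576)):
    """Resize a color fundus image with bicubic interpolation
    (the target size here is an illustrative choice)."""
    return cv2.resize(img, size, interpolation=cv2.INTER_CUBIC)

def artery_vein_ratio(artery_diameters, vein_diameters):
    """AVR: mean artery diameter divided by mean vein diameter, both
    measured inside the region of interest around the optic disk."""
    return float(np.mean(artery_diameters) / np.mean(vein_diameters))
```

The resulting AVR value would then be mapped to the Keith-Wagener-Barker-derived categories; the grading thresholds are defined in the paper and are not reproduced here.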


Author(s): Dov Danon, Moab Arar, Daniel Cohen-Or, Ariel Shamir

Abstract. Traditional image resizing methods usually work in pixel space and use various saliency measures. The challenge is to adjust the image shape while trying to preserve important content. In this paper we perform image resizing in feature space, using the deep layers of a neural network that contain rich and important semantic information. We directly adjust the image feature maps, extracted from a pre-trained classification network, and reconstruct the resized image using neural-network-based optimization. This novel approach leverages the hierarchical encoding of the network and, in particular, the high-level discriminative power of its deeper layers, which can recognize semantic regions and objects, thereby allowing their aspect ratios to be maintained. Reconstructing from deep features produces less noticeable artifacts than image-space resizing operators. We evaluate our method on benchmarks, compare it to alternative approaches, and demonstrate its strengths on challenging images.
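The following PyTorch sketch illustrates the general idea of feature-space resizing as described in the abstract; the choice of VGG-16, the layer cut-off, the MSE objective, and the optimizer settings are assumptions for illustration and do not reproduce the authors' method.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Deep layers of a pre-trained classifier provide the semantic feature maps.
vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:23].eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def feature_space_resize(img, new_hw, steps=200, lr=0.05):
    """Resize `img` (1x3xHxW, ImageNet-normalized) to new_hw by adjusting its
    deep feature maps and reconstructing the image via optimization."""
    # Start from a naive pixel-space resize and refine it by gradient descent.
    out = F.interpolate(img, size=new_hw, mode='bilinear',
                        align_corners=False).clone().requires_grad_(True)
    with torch.no_grad():
        # Target: the source feature maps, stretched to the spatial size
        # that the resized image's feature maps will have.
        tgt_size = vgg(out).shape[-2:]
        target_feats = F.interpolate(vgg(img), size=tgt_size,
                                     mode='bilinear', align_corners=False)
    opt = torch.optim.Adam([out], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        F.mse_loss(vgg(out), target_feats).backward()
        opt.step()
    return out.detach()
```

The key design point mirrored here is that the optimization target lives in feature space rather than pixel space, so semantically important regions constrain the reconstruction more strongly than a plain pixel-space resize would.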


Author(s): Janarthanan A, Pandiyarajan C, Sabarinathan M, Sudhan M, Kala R

Optical character recognition (OCR) is the process of recognizing text in images, here at the level of single words. The input images are taken from the dataset and passed through pre-processing, where an image resizing step is applied. Image resizing is necessary when the total number of pixels must be increased or decreased; remapping occurs when zooming, where increasing the number of pixels keeps the content clear as the image is enlarged. After pre-processing, a segmentation process separates each character within a word, and feature values (the test features) are extracted from the character images. In the classification stage, image classification is performed to identify which images contain text, and a classifier makes this decision. The experimental results show the accuracy of the proposed approach.
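A minimal sketch of the pre-processing and segmentation steps described above, assuming OpenCV, NumPy, grayscale word images with dark ink on a light background, and a simple vertical ink-projection heuristic for splitting characters; none of this is the authors' code.

```python
import cv2
import numpy as np

def resize_word_image(word_img, height=32):
    """Resize a word image to a fixed height while preserving aspect ratio."""
    h, w = word_img.shape[:2]
    new_w = max(1, round(w * height / h))
    return cv2.resize(word_img, (new_w, height), interpolation=cv2.INTER_AREA)

def segment_characters(word_img):
    """Split a grayscale word image into character images using the
    vertical ink-projection profile (a simple illustrative heuristic)."""
    profile = (word_img < 128).sum(axis=0)   # ink pixels per column
    chars, start = [], None
    for x, count in enumerate(profile):
        if count > 0 and start is None:
            start = x                        # a character begins
        elif count == 0 and start is not None:
            chars.append(word_img[:, start:x])
            start = None
    if start is not None:
        chars.append(word_img[:, start:])
    return chars
```

The segmented character images would then feed the feature-extraction and classification stages described in the abstract.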


2021, pp. 799-805
Author(s): Hsi-Chin Hsin, Cheng-Ying Yang, Chien-Kun Su
