binarization algorithm
Recently Published Documents

TOTAL DOCUMENTS: 62 (five years: 13)
H-INDEX: 6 (five years: 1)

2021 ◽  
pp. 1-15
Author(s):  
Milan Ćurković ◽  
Andrijana Ćurković ◽  
Damir Vučina

Image binarization is one of the fundamental methods in image processing, mainly used as a preprocessing step for other methods. We present an image binarization method whose primary purpose is to find markers such as those used in mobile 3D scanning systems. Operating a mobile 3D scanning system often involves adverse conditions such as light reflection and non-uniform illumination. As a basic part of the scanning process, the proposed binarization method successfully overcomes these problems. Given the trend toward larger images and real-time image processing, we kept the algorithmic complexity low. The paper includes a comparison with several other methods, focusing on objects with markers, including the calibration plane of the 3D scanning system. Although no binarization algorithm is best for all types of images, we also give the results of the proposed method applied to historical documents.
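The abstract does not spell out the algorithm, but the standard way to make binarization robust to the non-uniform illumination it mentions is a local-mean adaptive threshold, in which each pixel is compared to the mean of its neighborhood rather than a single global value. A minimal pure-Python sketch of that general idea (the window size `win` and offset `c` are illustrative choices, not parameters from the paper):

```python
def adaptive_binarize(img, win=3, c=15):
    """Local-mean adaptive threshold on a 2D grayscale list.

    Each pixel is compared to the mean of its (win x win) neighborhood
    minus an offset c, so a smooth illumination gradient shifts the
    threshold along with the background.
    """
    h, w = len(img), len(img[0])
    r = win // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[yy][xx]
                    for yy in range(max(0, y - r), min(h, y + r + 1))
                    for xx in range(max(0, x - r), min(w, x + r + 1))]
            thresh = sum(vals) / len(vals) - c
            out[y][x] = 1 if img[y][x] > thresh else 0
    return out

# A vertical illumination gradient with one dark marker dot at (2, 2);
# the dot stays foreground even though bright rows exceed dark rows:
img = [[100 + 20 * y] * 5 for y in range(5)]
img[2][2] = 10
binary = adaptive_binarize(img)
```

A global threshold would have to sit between 100 and 180 somewhere and would misclassify whole rows of this gradient; the local mean adapts row by row.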


Webology ◽  
2021 ◽  
Vol 18 (Special Issue 04) ◽  
pp. 813-831
Author(s):  
B.J. Bipin Nair ◽  
Gopikrishna Ashok ◽  
N.R. Sreekumar

Although several studies exist on denoising degraded documents, it remains a difficult task in document image processing because ancient documents may contain several degradations that form a barrier for the reader. Here we use old Malayalam Grantha scripts that contain valuable content, such as the poem titled 'Njana Stuthi' and other ancient literature. These historical documents are losing content to heavy degradations such as ink bleed, fungal damage, brittleness, and show-through. To remove these degradations, this study proposes a novel binarization algorithm that removes noise from Grantha scripts as well as notebook images and makes the documents readable. We use a dataset of 500 Grantha script images for experimentation. In the proposed method, binarization is performed through a channel-based approach: the image is converted to RGB, weights are added to darken or brighten it, a morphological opening is applied, and the result is passed through the RGB and HSV channels for a clearer separation of black text from the white background; any remaining noise is removed with an adaptive thresholding technique. The proposed method achieves good accuracy.
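One step of the pipeline above, morphological opening (erosion followed by dilation), is what removes small speckle noise while preserving larger text strokes. A self-contained sketch of that single step on a binary image, assuming a 3×3 structuring element (the abstract does not specify one):

```python
def erode(img):
    """3x3 erosion: a pixel survives only if its whole window is 1."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = int(all(img[y + dy][x + dx]
                                for dy in (-1, 0, 1) for dx in (-1, 0, 1)))
    return out

def dilate(img):
    """3x3 dilation: a pixel is set if any neighbor in its window is 1."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = int(any(img[y + dy][x + dx]
                                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                                if 0 <= y + dy < h and 0 <= x + dx < w))
    return out

def opening(img):
    """Opening = erosion then dilation: drops specks, keeps solid blobs."""
    return dilate(erode(img))

# A solid 3x3 text blob plus one isolated noise pixel at (0, 6):
noisy = [[0] * 7 for _ in range(7)]
for y in range(2, 5):
    for x in range(2, 5):
        noisy[y][x] = 1
noisy[0][6] = 1
cleaned = opening(noisy)
```

The isolated pixel is erased by the erosion and never comes back, while the 3×3 blob is first eroded to its center pixel and then restored by the dilation.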


PLoS ONE ◽  
2021 ◽  
Vol 16 (4) ◽  
pp. e0251014
Author(s):  
JianWu Long ◽  
ZeRan Yan ◽  
HongFa Chen ◽  
XinLei Song

Although most images in industrial applications contain few targets and simple backgrounds, binarization remains a challenging task, and the results are often unsatisfactory because of uneven illumination. To threshold images with nonuniform illumination efficiently, this paper proposes a global binarization algorithm that estimates the inhomogeneous background surface of the original image, constructed from the first k leading principal components in Gaussian scale space (GSS). A difference operator then extracts the distinct foreground of the original image, effectively eliminating the interference of uneven illumination. Finally, the image can be effortlessly binarized by an existing global thresholding algorithm such as the Otsu method. To verify the segmentation performance qualitatively and quantitatively, experiments were performed on a dataset collected under nonuniform illumination. Compared with classical binarization methods, the experimental results demonstrate that the introduced algorithm provides promising binarization outcomes on several metrics at low computational cost.
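The final stage named above, Otsu's method, picks the global threshold that maximizes between-class variance of the histogram. A minimal sketch of that search (the background-estimation and difference stages of the paper are not reproduced here):

```python
def otsu_threshold(pixels):
    """Return the threshold t maximizing between-class variance
    over an 8-bit intensity histogram (Otsu's method)."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * hist[i] for i in range(256))
    w_bg = sum_bg = 0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_bg += hist[t]          # background weight: pixels <= t
        if w_bg == 0:
            continue
        w_fg = total - w_bg      # foreground weight: pixels > t
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        m_bg = sum_bg / w_bg
        m_fg = (sum_all - sum_bg) / w_fg
        var = w_bg * w_fg * (m_bg - m_fg) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Clearly bimodal data: the threshold lands on the dark mode,
# so every bright pixel ends up above it.
pixels = [10] * 50 + [200] * 50
t = otsu_threshold(pixels)
```

After background subtraction the residual image is close to bimodal, which is exactly the regime where a single global Otsu threshold works well.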


IEEE Access ◽  
2021 ◽  
pp. 1-1
Author(s):  
Dohun Kim ◽  
Guhyun Kim ◽  
Cheol Seong Hwang ◽  
Doo Seok Jeong

Author(s):  
Shyamali Mitra ◽  
K. C. Santosh ◽  
Mrinal Kanti Naskar

Binarization plays a crucial role in Optical Character Recognition (OCR) and its ancillary domains, such as the recovery of degraded document images. In Document Image Analysis (DIA), selecting a threshold is not trivial since it differs from one problem (dataset) to another. Instead of trying several thresholds for each dataset, our proposed binarization scheme accounts for the noise inherent in document images. The proposed stochastic architecture implements a local thresholding technique: Niblack’s binarization algorithm. We introduce a stochastic comparator circuit that works on unipolar stochastic numbers; unlike conventional stochastic circuits, it is simple and easy to deploy. We implemented it on the Xilinx Virtex6 XC6VLX760-2FF1760 FPGA platform and obtained encouraging experimental results; the complete set of results is available upon request. Moreover, compared to conventional designs, the proposed stochastic implementation is better in terms of both time complexity and fault tolerance.
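Niblack's rule, which the circuit above implements in hardware, thresholds each pixel at T = m + k·s, where m and s are the mean and standard deviation of the local window and k is typically around −0.2 for dark text on a light background. A software reference sketch (the clamped border windows and the 3×3 default are implementation choices, not from the paper):

```python
import math

def niblack_binarize(img, win=3, k=-0.2):
    """Niblack local thresholding: per pixel, T = m + k * s,
    where m and s are the local window's mean and std-dev."""
    h, w = len(img), len(img[0])
    r = win // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[yy][xx]
                    for yy in range(max(0, y - r), min(h, y + r + 1))
                    for xx in range(max(0, x - r), min(w, x + r + 1))]
            m = sum(vals) / len(vals)
            s = math.sqrt(sum((v - m) ** 2 for v in vals) / len(vals))
            out[y][x] = 1 if img[y][x] > m + k * s else 0
    return out

# A single bright pixel on a dark background: the local statistics
# pull the threshold below 255 at the center, above 0 at the corners.
img = [[0, 0, 0], [0, 255, 0], [0, 0, 0]]
out = niblack_binarize(img)
```

Because both m and s vary with position, the rule adapts to local contrast, which is what makes it suitable for unevenly degraded documents.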


2020 ◽  
Vol 2020 ◽  
pp. 1-15
Author(s):  
Pengliang Wei ◽  
Ting Jiang ◽  
Huaiyue Peng ◽  
Hongwei Jin ◽  
Han Sun ◽  
...  

Crop-type identification is one of the most significant applications of agricultural remote sensing, and it is important for yield prediction and field management. At present, crop identification using datasets from unmanned aerial vehicle (UAV) and satellite platforms has achieved state-of-the-art performance. However, accurate monitoring of small plants, such as coffee flowers, cannot be achieved with datasets from these platforms. With the development of time-lapse image acquisition based on ground-based remote sensing, large numbers of small-scale plantation datasets with high spatio-temporal resolution are being generated, offering great opportunities for monitoring small targets in a specific region. The main contribution of this paper is to combine an OTSU-based binarization algorithm with a convolutional neural network (CNN) model to improve coffee flower identification accuracy in time-lapse (digital) images. A number of positive and negative samples are selected from the original digital images for network training. The network is then initialized from a pretrained VGGNet and trained on the constructed datasets. The well-trained CNN model produces an initial extraction of the coffee flowers, whose boundary information is further refined using the extraction result of the binarization algorithm. Using digital images with different depression angles and illumination conditions, the performance of the proposed method is compared against a support vector machine (SVM) and a plain CNN model. The experimental results show that the proposed method improves coffee flower classification accuracy.
The results for the image with a 52.5° depression angle under soft lighting conditions are the best, with Dice (F1) and intersection over union (IoU) reaching 0.80 and 0.67, respectively.
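The two scores quoted are related deterministically for binary masks: IoU = Dice / (2 − Dice), so Dice 0.80 implies IoU 0.80 / 1.20 ≈ 0.67, consistent with the reported pair. A minimal sketch of both metrics on flattened 0/1 masks:

```python
def dice_iou(pred, truth):
    """Dice (F1) and IoU for flat binary masks (lists of 0/1)."""
    tp = sum(1 for p, t in zip(pred, truth) if p and t)
    p_sum, t_sum = sum(pred), sum(truth)
    dice = 2 * tp / (p_sum + t_sum)
    iou = tp / (p_sum + t_sum - tp)   # union = |pred| + |truth| - |overlap|
    return dice, iou

# 2 true positives, 1 false positive, 0 false negatives:
dice, iou = dice_iou([1, 1, 1, 0], [1, 1, 0, 0])
```

Here Dice is 4/5 = 0.80 and IoU is 2/3 ≈ 0.67, the same pair of values as in the result above.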


2019 ◽  
Author(s):  
Leroy Cronin ◽  
Edward Lee ◽  
Juan Manuel Parrilla Gutiérrez ◽  
Alon Henson ◽  
Euan K. Brechin

<p>Random number generators are important in fields that require non-deterministic input, such as cryptography. One example of a non-deterministic system is found in chemistry, in the crystallization of chemical compounds, which occurs through stochastic processes. Herein, we present an automated platform capable of generating random numbers by observing crystallizations resulting from multiple parallel one-pot chemical reactions. From the resulting images, crystals were identified using computer vision, and binary sequences were obtained by applying a binarization algorithm to these regions. The randomness of these sequences was assessed with the battery of tests for randomness described by the National Institute of Standards and Technology (NIST). We find that numbers generated by this method pass each of the three levels of every NIST test. We then compare the encryption strength of the random numbers generated from each crystallizing system to that of a pseudo-random number generation algorithm (the Mersenne Twister), and find that messages encrypted with chemically derived random numbers take significantly longer to decrypt than those encrypted with the algorithmically generated numbers.</p>
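The simplest member of the NIST SP 800-22 battery mentioned above is the frequency (monobit) test, which checks whether 0s and 1s are equally likely in the binarized sequence. A sketch of that one test (the suite contains many more):

```python
import math

def monobit_p_value(bits):
    """NIST SP 800-22 frequency (monobit) test.

    Maps bits to +/-1, sums them, normalizes by sqrt(n), and returns
    the p-value erfc(|S|/sqrt(2n)); p < 0.01 rejects randomness."""
    n = len(bits)
    s_obs = abs(sum(2 * b - 1 for b in bits)) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))

balanced = [0, 1] * 500   # equal counts of 0s and 1s
biased = [1] * 100        # maximally biased sequence
```

A perfectly balanced sequence gives p = 1.0, while the all-ones sequence gives a p-value far below the usual 0.01 significance level and is rejected.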

