VEHICLE LICENSE PLATE DETECTION USING THE YOU ONLY LOOK ONCE V3 ALGORITHM AND TESSERACT

2021 ◽  
Vol 8 (1) ◽  
pp. 57-62
Author(s):  
Muhamad Rizky Fauzan ◽  
Ari Purno Wahyu Wibowo

Technology is currently developing very rapidly. One technology undergoing large-scale development is Artificial Intelligence. Artificial Intelligence, or AI, has a wide range of functions and purposes depending on the system to be built. One of them is the detection of objects and text in images or video. An example of this technology is the detection of objects and text on vehicle license plates. In this study, a system is designed using the You Only Look Once V3 algorithm as the object detector and Tesseract Optical Character Recognition as the detector of text in images. The design is supported by the OpenCV library in the Python programming language and uses an existing image dataset. This study aims to determine the accuracy of the You Only Look Once V3 algorithm combined with Tesseract Optical Character Recognition.
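
As a rough sketch of the pipeline this abstract describes, the code below loads a YOLOv3 model with OpenCV's DNN module and passes the most confident detection to Tesseract through pytesseract. The file names, the 0.5 confidence threshold and the single-line page segmentation mode are assumptions for illustration, not details taken from the paper.

```python
# Hypothetical sketch: YOLOv3 plate detection + Tesseract OCR with OpenCV.
# File names and thresholds are assumptions, not taken from the paper.
import cv2
import numpy as np
import pytesseract

net = cv2.dnn.readNetFromDarknet("yolov3-plate.cfg", "yolov3-plate.weights")
image = cv2.imread("plate.jpg")
h, w = image.shape[:2]

blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())

boxes, scores = [], []
for output in outputs:
    for det in output:
        confidence = float(det[5:].max())          # best class score
        if confidence > 0.5:                       # assumed threshold
            cx, cy, bw, bh = det[:4] * np.array([w, h, w, h])
            boxes.append((int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)))
            scores.append(confidence)

if boxes:
    x, y, bw, bh = boxes[int(np.argmax(scores))]   # most confident plate box
    plate = image[max(y, 0):y + bh, max(x, 0):x + bw]
    gray = cv2.cvtColor(plate, cv2.COLOR_BGR2GRAY)
    # --psm 7 treats the crop as a single text line, a common choice for plates.
    print("Plate text:", pytesseract.image_to_string(gray, config="--psm 7").strip())
```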

2021 ◽  
Author(s):  
Michael Schwartz ◽  

Many companies have tried to automate data collection for handheld Digital Multimeters (DMMs) using Optical Character Recognition (OCR). Only recently have companies tried to perform this task using Artificial Intelligence (AI) technology, Cal Lab Solutions being one of them in 2020. But when we developed our first prototype application, we discovered the difficulty of getting a good value for every measurement and test point. A year later, with lessons learned and equipped with better software, this paper is a continuation of that AI project. In Beta-1, we learned the difficulties of AI reading segmented displays. There are no pre-trained models for this type of display, so we needed to train a model. This required the testing of thousands of images, so we changed the scope of the project to a continual-learning AI project. This paper will cover how we built our continual-learning AI model to show how any lab with a webcam can start automating those handheld DMMs with software that gets smarter over time.
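
A minimal sketch of the kind of webcam loop such a continual-learning setup implies is shown below, assuming OpenCV for capture; the read_display() function, the confidence threshold and the to_label/ folder are placeholders for illustration, not Cal Lab Solutions' actual software.

```python
# Hypothetical continual-learning capture loop: frames the model is unsure
# about are saved so they can be labeled and added to the training set.
import time
import cv2

def read_display(frame):
    """Placeholder for a trained segmented-display reader.
    Returns (reading, confidence)."""
    return None, 0.0

cap = cv2.VideoCapture(0)            # webcam pointed at the DMM display
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        value, confidence = read_display(frame)
        if confidence < 0.9:         # assumed "uncertain" threshold
            # Save the frame for later labeling; to_label/ is assumed to exist.
            cv2.imwrite(f"to_label/{int(time.time() * 1000)}.png", frame)
        else:
            print("DMM reading:", value)
        time.sleep(1.0)              # one sample per second
finally:
    cap.release()
```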


Optical Character Recognition or Optical Character Reader (OCR) is a pattern-recognition based method that performs the electronic conversion of images of handwritten or printed text into machine-encoded text. The equipment typically used for this purpose is a camera or a flatbed scanner. Handwritten text is scanned using a scanner, and the image of the scanned document is then processed by the program. Recognition of such manuscripts is difficult compared with texts in Western languages. In our proposed work we take up the challenge of identifying these characters and work to achieve the same. Image preprocessing techniques can effectively improve the accuracy of an OCR engine. The goal is to design and implement, using machine learning and Python, an OCR engine that works more accurately than pre-built OCR engines developed with technologies such as MATLAB, Artificial Intelligence, neural networks, etc.
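
As an illustration of how preprocessing can feed an OCR engine in Python, the sketch below applies a few common steps (grayscale conversion, median denoising, Otsu binarization) before calling Tesseract via pytesseract; the specific steps and the input file name are assumptions rather than the paper's exact pipeline.

```python
# A minimal preprocessing sketch, assuming OpenCV and pytesseract are installed.
import cv2
import pytesseract

image = cv2.imread("scanned_page.png")              # assumed input file
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)      # drop color information
gray = cv2.medianBlur(gray, 3)                      # suppress salt-and-pepper noise
_, binary = cv2.threshold(gray, 0, 255,
                          cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # global Otsu binarization

text = pytesseract.image_to_string(binary)
print(text)
```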


2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Zainab Akhtar ◽  
Jong Weon Lee ◽  
Muhammad Attique Khan ◽  
Muhammad Sharif ◽  
Sajid Ali Khan ◽  
...  

Purpose: In artificial intelligence, optical character recognition (OCR) is an active research area, driven by well-known applications such as the automated transformation of printed documents into machine-readable text. The major purpose of OCR in academia and banking is to achieve significant performance while saving storage space.
Design/methodology/approach: A novel technique is proposed for automated OCR based on multi-property feature fusion and selection. The features are fused using a serial formulation and the output is passed to a partial least squares (PLS) based selection method. The selection is performed using an entropy fitness function, and the final features are classified by an ensemble classifier.
Findings: The presented method was extensively tested on two datasets, the authors' own and the Chars74k benchmark, achieving accuracies of 91.2% and 99.9%, respectively. Comparing the results with existing techniques, it is found that the proposed method gives improved performance.
Originality/value: The technique presented in this work will help with license plate recognition and the conversion of printed documents into machine-readable text.
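
A simplified sketch of the general workflow (serial feature fusion, PLS-based reduction and an ensemble classifier) is given below using scikit-learn and random placeholder features; the paper's entropy fitness selection is not reproduced, and the PLS projection merely stands in for it.

```python
# Hedged sketch of serial fusion + PLS reduction + ensemble classification.
# Feature sets, labels and dimensions are placeholders, not the paper's data.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

N = 500
features_a = np.random.rand(N, 64)     # e.g. texture features (placeholder)
features_b = np.random.rand(N, 128)    # e.g. shape features (placeholder)
labels = np.random.randint(0, 36, N)   # 0-9 and A-Z classes (placeholder)

# Serial fusion: concatenate the feature vectors sample by sample.
fused = np.hstack([features_a, features_b])

# PLS projects the fused features onto a small number of latent components.
pls = PLSRegression(n_components=20)
selected, _ = pls.fit_transform(fused, labels)

X_tr, X_te, y_tr, y_te = train_test_split(selected, labels, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```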


2013 ◽  
Vol 8 (1) ◽  
pp. 686-691
Author(s):  
Vneeta Rani ◽  
Dr.Vijay Laxmi

OCR (optical character recognition) is a technology commonly used for pattern recognition in artificial intelligence and machine vision. With the help of OCR we can convert scanned documents into editable documents, which can then be used in various research areas. In this paper, we present a character segmentation technique that can segment simple characters, skewed characters as well as broken characters. Character segmentation is a very important phase in any OCR process because the output of this phase serves as the input to other phases such as character recognition. If there is a problem in the character segmentation phase, then recognition of the corresponding character is very difficult or nearly impossible.
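
The sketch below illustrates a basic connected-component segmentation step in OpenCV; it handles only simple characters and is not the paper's technique for skewed or broken characters. The input file name and the minimum-area filter are assumptions.

```python
# Illustrative character segmentation via connected components.
import cv2

image = cv2.imread("word.png", cv2.IMREAD_GRAYSCALE)   # assumed input image
_, binary = cv2.threshold(image, 0, 255,
                          cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

num, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
boxes = []
for i in range(1, num):                                # label 0 is the background
    x, y, w, h, area = stats[i]
    if area > 20:                                      # assumed minimum-size filter
        boxes.append((x, y, w, h))

# Sort left-to-right so the crops follow reading order before recognition.
for x, y, w, h in sorted(boxes):
    char_img = binary[y:y + h, x:x + w]
    # char_img would now be passed to the character recognition phase.
```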


2018 ◽  
Vol 179 (31) ◽  
pp. 14-20 ◽  
Author(s):  
Shreshtha Garg ◽  
Kapil Kumar ◽  
Nikhil Prabhakar ◽  
Amulya Ratan ◽  
Aayush Trivedi

1997 ◽  
Vol 9 (1-3) ◽  
pp. 58-77
Author(s):  
Vitaly Kliatskine ◽  
Eugene Shchepin ◽  
Gunnar Thorvaldsen ◽  
Konstantin Zingerman ◽  
Valery Lazarev

In principle, printed source material should be made machine-readable with systems for Optical Character Recognition, rather than being typed once more. Off-the-shelf commercial OCR programs tend, however, to be inadequate for lists with a complex layout. The tax assessment lists covering most nineteenth-century farms in Norway constitute one example among a series of valuable sources which can only be interpreted successfully with specially designed OCR software. This paper considers the problems involved in the recognition of material with a complex table structure, outlining a new algorithmic model based on ‘linked hierarchies’. Within the scope of this model, a variety of tables and layouts can be described and recognized. The ‘linked hierarchies’ model has been implemented in the ‘CRIPT’ OCR software system, which successfully reads tables with a complex structure from several different historical sources.
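
Purely as an illustration of what a ‘linked hierarchies’ style description could look like in code, the sketch below pairs a physical layout hierarchy with a logical content hierarchy and cross-links corresponding nodes; the actual CRIPT model is only outlined in the paper and is certainly richer than this.

```python
# Illustrative data structure only: two hierarchies with cross-links.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    name: str
    children: List["Node"] = field(default_factory=list)
    link: Optional["Node"] = None          # cross-link into the other hierarchy

# Physical hierarchy of a scanned tax-list page.
cell = Node("cell[row=1,col=2]")
row = Node("row 1", [cell])
table = Node("assessment table", [row])

# Logical hierarchy of the source's content.
field_node = Node("field: farm name")
record = Node("record: farm #1", [field_node])
register = Node("tax assessment list", [record])

# Link the recognized cell to the logical field it fills.
cell.link = field_node
field_node.link = cell
```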


2020 ◽  
Vol 2020 (1) ◽  
pp. 78-81
Author(s):  
Simone Zini ◽  
Simone Bianco ◽  
Raimondo Schettini

Rain removal from pictures taken under bad weather conditions is a challenging task that aims to improve the overall quality and visibility of a scene. The enhanced images usually constitute the input for subsequent Computer Vision tasks such as detection and classification. In this paper, we present a Convolutional Neural Network, based on the Pix2Pix model, for rain streak removal from images, with specific interest in evaluating the results of the processing operation with respect to the Optical Character Recognition (OCR) task. In particular, we present a way to generate a rainy version of the Street View Text Dataset (R-SVTD) for "text detection and recognition" evaluation in bad weather conditions. Experimental results on this dataset show that our model is able to outperform the state of the art in terms of two commonly used image quality metrics, and that it is capable of improving the performance of an OCR model for detecting and recognising text in the wild.
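
A hedged sketch of the evaluation idea, comparing image-quality metrics and OCR output before and after deraining, is shown below; the derain() placeholder, the file names and the choice of PSNR/SSIM from scikit-image are assumptions, not the paper's exact protocol.

```python
# Sketch of evaluating a deraining model both by image quality and by OCR.
import cv2
import pytesseract
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def derain(image):
    """Placeholder for a trained Pix2Pix-style generator."""
    return image

rainy = cv2.imread("rainy_sign.png")        # assumed rainy test image
clean = cv2.imread("clean_sign.png")        # assumed ground-truth image
restored = derain(rainy)

# Image-quality metrics against the ground truth.
print("PSNR:", peak_signal_noise_ratio(clean, restored))
print("SSIM:", structural_similarity(clean, restored, channel_axis=2))

# OCR before and after deraining shows the downstream effect on text reading.
print("OCR on rainy image:   ", pytesseract.image_to_string(rainy).strip())
print("OCR on restored image:", pytesseract.image_to_string(restored).strip())
```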

