Digital Images Preprocessing for Optical Character Recognition in Video Frames Reconstructed from Compromising Electromagnetic Emanations from Video Cables

Author(s):  
Santiago Morales-Aguilar ◽  
Chaouki Kasmi ◽  
Milosch Meriac ◽  
Felix Vega ◽  
Fahad Alyafei
Author(s):  
Srinivasa Rao Dhanikonda, et al.

Images play an essential role in electronic media for sharing information, and nowadays almost every event is recorded in the form of digital images. Text embedded in an image file, however, is not in a machine-readable format. OCR (Optical Character Recognition) for the English language is well developed, but OCR for Indian languages is still needed: to preserve historical documents composed mainly in Indian languages, to organize publications in libraries, and for application-form processing. OCR for the Telugu language is particularly challenging because consonants and vowels combine with vattus and gunithas to form words, and a mixture of vowels and consonants may form a single compound character. This paper presents a survey of the methods used for Telugu OCR to date.
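As an illustration of the compound-character issue (not taken from the paper): a Telugu conjunct such as క్ష is encoded in Unicode as a base consonant, a virama (the vowel-suppressing mark underlying vattus), and a second consonant, so several code points correspond to one rendered glyph that an OCR system must segment and label as a unit.

```python
# Illustration (not from the paper): a Telugu compound character is
# built from multiple Unicode code points but renders as one glyph.
ka = "\u0C15"      # క  (consonant KA)
virama = "\u0C4D"  # ్  (virama, suppresses the inherent vowel)
ssa = "\u0C37"     # ష  (consonant SSA)

# The conjunct క్ష is three code points, so glyph segmentation and
# character labels do not align one-to-one for Telugu OCR.
conjunct = ka + virama + ssa
print(conjunct, len(conjunct))  # క్ష 3
```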


2020 ◽  
Author(s):  
Juan Galvis ◽  
Chaouki Kasmi ◽  
Felix Vega ◽  
Santiago Morales Aguilar

The present work shows the application of deep learning models to the denoising of video frames retrieved from electromagnetic emanations of remote video interfaces. It has been demonstrated that the cables of video interfaces such as VGA or HDMI produce unintended emanations, and that these emanations can be received and processed to reconstruct the video frames displayed on the external monitor. The reconstructed frames, however, are noisy, making it difficult to recover any useful information. By applying deep learning models to denoise and deblur the images, the displayed information can be recovered and interpreted.
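The learned models used in the paper are not reproduced here; as a classical point of comparison (a sketch, with all names and parameters chosen for illustration), even a plain 3x3 mean filter reduces the mean squared error of a frame corrupted by additive Gaussian noise:

```python
import numpy as np

def mean_filter(frame: np.ndarray) -> np.ndarray:
    """3x3 mean filter: a classical denoising baseline, not the
    paper's learned model."""
    h, w = frame.shape
    padded = np.pad(frame, 1, mode="edge")
    out = np.zeros((h, w), dtype=float)
    for dy in (0, 1, 2):          # sum the 9 shifted copies of the frame
        for dx in (0, 1, 2):
            out += padded[dy:dy + h, dx:dx + w]
    return out / 9.0

# Toy "frame": a bright square on a dark background, plus Gaussian noise.
rng = np.random.default_rng(0)
clean = np.zeros((32, 32))
clean[8:24, 8:24] = 1.0
noisy = clean + rng.normal(0.0, 0.3, clean.shape)

denoised = mean_filter(noisy)
mse_noisy = float(np.mean((noisy - clean) ** 2))
mse_denoised = float(np.mean((denoised - clean) ** 2))
print(mse_denoised < mse_noisy)  # True: the filter reduces MSE
```

A learned denoiser replaces the fixed averaging kernel with convolutional weights fitted to pairs of noisy and clean frames, which is what lets it also sharpen edges instead of blurring them.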


1997 ◽  
Vol 9 (1-3) ◽  
pp. 58-77
Author(s):  
Vitaly Kliatskine ◽  
Eugene Shchepin ◽  
Gunnar Thorvaldsen ◽  
Konstantin Zingerman ◽  
Valery Lazarev

In principle, printed source material should be made machine-readable with systems for Optical Character Recognition, rather than being typed once more. Off-the-shelf commercial OCR programs tend, however, to be inadequate for lists with a complex layout. The tax assessment lists that assess most nineteenth-century farms in Norway constitute one example among a series of valuable sources which can only be interpreted successfully with specially designed OCR software. This paper considers the problems involved in the recognition of material with a complex table structure, outlining a new algorithmic model based on ‘linked hierarchies’. Within the scope of this model, a variety of tables and layouts can be described and recognized. The ‘linked hierarchies’ model has been implemented in the ‘CRIPT’ OCR software system, which successfully reads tables with a complex structure from several different historical sources.
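The abstract does not spell out the ‘linked hierarchies’ data structures; as a rough, hypothetical sketch of the idea (all names invented for illustration), a table layout can be described by two hierarchies, one over rows and one over columns, with each cell linking a leaf of each:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One level of a layout hierarchy (e.g. list -> entry -> line)."""
    label: str
    children: list = field(default_factory=list)

    def leaves(self):
        if not self.children:
            return [self]
        return [leaf for child in self.children for leaf in child.leaves()]

# Hypothetical sketch: a tax list as a row hierarchy plus a column
# hierarchy; the column hierarchy has nested (spanning) headers.
rows = Node("list", [Node("farm 1"), Node("farm 2")])
cols = Node("header", [Node("owner"),
                       Node("assessment", [Node("old"), Node("new")])])

# Each cell links one row leaf with one column leaf; recognition then
# fills the cells instead of guessing the layout per page.
cells = {(r.label, c.label): None
         for r in rows.leaves() for c in cols.leaves()}
print(len(cells))  # 6 cells: 2 row leaves x 3 column leaves
```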


2020 ◽  
Vol 2020 (1) ◽  
pp. 78-81
Author(s):  
Simone Zini ◽  
Simone Bianco ◽  
Raimondo Schettini

Rain removal from pictures taken under bad weather conditions is a challenging task that aims to improve the overall quality and visibility of a scene. The enhanced images usually constitute the input for subsequent Computer Vision tasks such as detection and classification. In this paper, we present a Convolutional Neural Network, based on the Pix2Pix model, for rain streak removal from images, with specific interest in evaluating the results of the processing operation with respect to the Optical Character Recognition (OCR) task. In particular, we present a way to generate a rainy version of the Street View Text Dataset (R-SVTD) for "text detection and recognition" evaluation in bad weather conditions. Experimental results on this dataset show that our model is able to outperform the state of the art in terms of two commonly used image quality metrics, and that it is capable of improving the performance of an OCR model in detecting and recognising text in the wild.
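The abstract does not name the two image quality metrics; PSNR (peak signal-to-noise ratio) is one metric commonly used in rain-removal work, and a minimal implementation looks like this (an illustration, not the paper's evaluation code):

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy check: a uniform error of 10 gray levels gives MSE = 100.
ref = np.full((4, 4), 128.0)
degraded = ref + 10.0
print(round(psnr(ref, degraded), 2))  # 28.13 dB
```

A derained image scoring a higher PSNR against the clean ground truth than the rainy input indicates the streaks were removed without destroying the underlying scene.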

