Analytics for Noisy Unstructured Text Data II

Author(s):  
L. Venkata Subramaniam ◽  
Shourya Roy

The importance of text mining applications is growing in proportion to the exponential growth of electronic text. With the increasing penetration of the internet, many forms of communication and interaction such as email, chat, newsgroups, blogs, discussion groups, and scraps have become popular, and they generate huge amounts of noisy text data every day. Other big contributors to the pool of electronic text documents are call centres and customer relationship management organizations, in the form of call logs, call transcriptions, problem tickets, and complaint emails; electronic text generated by Optical Character Recognition (OCR) from handwritten and printed documents; and mobile text such as Short Message Service (SMS). Though the nature of each of these documents is different, there is a common thread running through all of them: the presence of noise. An example of information extraction is the extraction of instances of corporate mergers, more formally MergerBetween(company1,company2,date), from an online news sentence such as: “Yesterday, New-York based Foo Inc. announced their acquisition of Bar Corp.” Another is the extraction of opinions, more formally Opinion(product1,good), from a blog post such as: “I absolutely liked the texture of SheetK quilts.” At a superficial level, there are two ways to perform information extraction from noisy text. The first is to clean the text by removing noise and then apply existing state-of-the-art information extraction techniques; therein lies the importance of techniques for automatically correcting noisy text. In this chapter, we first review some work in the area of noisy text correction. The second approach is to devise extraction techniques that are robust to noise. Later in the chapter, we will see how the task of information extraction is affected by noise.
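A toy sketch makes the noise problem concrete. The pattern, company names, and sentences below are illustrative only, not the chapter's actual extractor: a hand-written surface pattern recovers the merger relation from clean news text but fails once misspellings and lost casing creep in.

import re

# A toy, hand-written pattern for the MergerBetween(company1, company2) relation.
# Real systems use statistical or learned extractors; this sketch only shows
# how a surface pattern breaks when noise alters the text.
MERGER_PATTERN = re.compile(
    r"(?P<acquirer>[A-Z][\w-]*(?:\s[A-Z][\w.]*)*)\s+"
    r"announced\s+(?:their|its)\s+acquisition\s+of\s+"
    r"(?P<target>[A-Z][\w-]*(?:\s[A-Z][\w.]*)*)"
)

def extract_merger(sentence: str):
    match = MERGER_PATTERN.search(sentence)
    if match:
        return match.group("acquirer"), match.group("target")
    return None

clean = "Yesterday, New-York based Foo Inc. announced their acquisition of Bar Corp."
noisy = "yesterday foo inc anounced their aquisition of bar corp"

print(extract_merger(clean))   # ('Foo Inc.', 'Bar Corp.')
print(extract_merger(noisy))   # None: misspellings and lost casing defeat the pattern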

Author(s):  
Shourya Roy ◽  
L. Venkata Subramaniam

Accdrnig to rscheearch at Cmabrigde Uinervtisy, it deosn’t mttaer in what oredr the ltteers in a wrod are, the olny iprmoetnt tihng is that the frist and lsat ltteer be at the rghit pclae. Tihs is bcuseae the human mnid deos not raed ervey lteter by istlef, but the wrod as a wlohe. Unfortunately, computing systems are not yet as smart as the human mind. Over the last couple of years a significant number of researchers have been focusing on noisy text analytics. Noisy text data is found in informal settings (online chat, SMS, e-mails, message boards, among others) and in text produced by automatic speech recognition or optical character recognition systems. Noise can degrade the performance of downstream information processing algorithms such as classification, clustering, summarization, and information extraction. We identify some of the key research areas for noisy text and give a brief overview of the state of the art. These areas are (i) classification of noisy text, (ii) correcting noisy text, and (iii) information extraction from noisy text. We cover the first in this chapter and the latter two in the next chapter.

We define noise in text as any kind of difference in the surface form of an electronic text from the intended, correct, or original text. We see such noisy text every day in various forms. Each has unique characteristics and hence requires special handling. We introduce some such forms of noisy textual data in this section.

Online Noisy Documents: E-mails, chat logs, scrapbook entries, newsgroup postings, threads in discussion fora, blogs, etc., fall under this category. People are typically less careful about the sanity of written content in such informal modes of communication. These documents are characterized by frequent misspellings, commonly and not so commonly used abbreviations, incomplete sentences, missing punctuation, and so on. Noisy documents are almost always human interpretable, if not by everyone, at least by the intended readers.

SMS: Short Message Services are becoming more and more common. Language usage in SMS text differs significantly from the standard form of the language. An urge towards shorter message length, facilitating faster typing, and the need for semantic clarity shape the structure of this non-standard form known as the texting language (Choudhury et al., 2007).

Text Generated by ASR Devices: ASR is the process of converting a speech signal into a sequence of words. An ASR system takes a speech signal, such as a monologue, a discussion between people, or a telephonic conversation, as input and produces a string of words, typically not demarcated by punctuation, as a transcript. An ASR system consists of an acoustic model, a language model, and a decoding algorithm. The acoustic model is trained on speech data and the corresponding manual transcripts; the language model is trained on a large monolingual corpus. The system converts audio into text by searching the acoustic and language model space using the decoding algorithm. Most conversations between agents and customers at contact centers today are recorded; to extract any customer intelligence from this data, it is necessary to convert the audio into text.

Text Generated by OCR Devices: Optical character recognition, or OCR, is a technology that converts digital images of typed or handwritten text into an editable text document. It takes a picture of text and translates the text into Unicode or ASCII.
For handwritten optical character recognition, the recognition rate is 80% to 90% for clean handwriting.

Call Logs in Contact Centers: Today’s contact centers (also known as call centers, BPOs, KPOs) produce huge amounts of unstructured data in the form of call logs, apart from emails, call transcriptions, SMS, chat transcripts, etc. Agents are expected to summarize an interaction as soon as they are done with it and before picking up the next one. Because agents work under immense time pressure, the summary logs are very poorly written and sometimes difficult even for humans to interpret. Analysis of such call logs is nonetheless important for identifying problem areas, agent performance, evolving problems, etc.

In this chapter we focus on automatic classification of noisy text. Automatic text classification refers to segregating documents into different topics depending on their content, for example, categorizing customer emails according to topics such as billing problem, address change, or product enquiry. It has important applications in email categorization, building and maintaining web directories (e.g., DMoz), spam filtering, automatic call and email routing in contact centers, filtering of pornographic material, and so on.
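As an illustration of the classification task, here is a minimal sketch with invented example e-mails and topic labels (not the systems surveyed in this chapter). It uses character n-gram features, which tolerate misspellings such as "addres"/"adress" better than word features do, a common tactic for noisy text.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Illustrative training data (invented for this sketch): noisy customer
# e-mails labelled with contact-center topics.
emails = [
    "pls chk my bill its 2 high this mnth",
    "i was charged twice for the same order",
    "need to chnge my shipping addres asap",
    "moved house, update my adress please",
    "does the new phone suport dual sim??",
    "wat colors does this product come in",
]
labels = [
    "billing", "billing",
    "address_change", "address_change",
    "product_enquiry", "product_enquiry",
]

# Character n-grams inside word boundaries are robust to spelling noise.
clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    MultinomialNB(),
)
clf.fit(emails, labels)

print(clf.predict(["plz fix my adress on the acct"]))  # likely 'address_change'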


Author(s):  
M A Mikheev ◽  
P Y Yakimov

The article is devoted to solving the problem of comparing document versions in electronic document management systems. Analogous systems were reviewed and the process of comparing text documents was studied. To recognize the text in a scanned image, optical character recognition technology and its implementation, the Tesseract library, were chosen. The Myers algorithm is applied to compare the recognized texts. The text document comparison module was implemented in software using the solutions described above.
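A minimal sketch of this pipeline might look as follows. The file names are hypothetical, and Python's difflib (a Ratcliff/Obershelp-style matcher) stands in for the Myers algorithm used in the article; both produce a comparable line-level diff.

import difflib
import pytesseract
from PIL import Image

def ocr_text(path: str) -> list[str]:
    """Recognize the text on a scanned page image with Tesseract."""
    return pytesseract.image_to_string(Image.open(path)).splitlines()

old_lines = ocr_text("contract_v1.png")   # hypothetical scanned versions
new_lines = ocr_text("contract_v2.png")

# difflib is a stand-in for the Myers diff named in the abstract.
for line in difflib.unified_diff(old_lines, new_lines,
                                 fromfile="v1", tofile="v2", lineterm=""):
    print(line)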


Author(s):  
Dr. T. Kameswara Rao ◽  
K. Yashwanth Chowdary ◽  
I. Koushik Chowdary ◽  
K. Prasanna Kumar ◽  
Ch. Ramesh

In recent years, text extraction from document images has been one of the most widely studied topics in Image Analysis and Optical Character Recognition. Such extraction can be used for document analysis, content analysis, document retrieval, and more. Many complex text extraction processes, such as Maximum Likelihood (ML) estimation, edge point detection, and corner point detection, are used to extract text from document images. In this article, the corner point approach is used. To extract text from document images we use a very simple approach based on the FAST algorithm. First, we divide the image into blocks and check the corner-point density of each block. The denser blocks are labeled as text blocks and the less dense ones as image region or noise. We then check the connectivity of the blocks and group them so that the text part can be isolated from the image. This method is very fast and versatile; it can be used to detect various languages, handwriting, and even images with a lot of noise and blur. Even though it is a very simple approach, its precision is close to or higher than 90%. In conclusion, this method enables more accurate and less complex detection of text in document images.
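A rough sketch of the block-density idea, assuming OpenCV's FAST detector; the block size, corner threshold, and input path are illustrative assumptions, not the paper's parameters.

import cv2
import numpy as np

# Blocks with many FAST corners are labelled text; sparse blocks are
# treated as image region or noise.
BLOCK = 32          # assumed block size in pixels
MIN_CORNERS = 15    # assumed density threshold

img = cv2.imread("document.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
fast = cv2.FastFeatureDetector_create(threshold=20)
keypoints = fast.detect(img, None)

h, w = img.shape
density = np.zeros((h // BLOCK + 1, w // BLOCK + 1), dtype=int)
for kp in keypoints:
    x, y = kp.pt
    density[int(y) // BLOCK, int(x) // BLOCK] += 1

text_mask = (density >= MIN_CORNERS).astype(np.uint8)

# Group adjacent text blocks with connected-component labelling, so the
# text part can be isolated from images and background noise.
n_groups, groups = cv2.connectedComponents(text_mask)
print(f"found {n_groups - 1} candidate text regions")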


2011 ◽  
Vol 341-342 ◽  
pp. 565-569 ◽  
Author(s):  
Ahmed Dena Rafaa ◽  
Jan Nordin

One of the most important applications of Pattern Recognition (PR) these days is Optical Character Recognition (OCR), a system used to convert scanned printed or handwritten image files into a machine-readable and editable format such as text documents. The main motivation behind this study is to build an OCR system for offline machine-printed Turkish characters that converts any image file into a readable and editable format. The OCR system starts with a preprocessing step that converts the image file into a binary format with reduced noise, ready for recognition. The preprocessing step includes digitization, binarization, thresholding, and noise removal. Next, the horizontal projection method is used for line detection and word allocation, and an 8-connected neighbours scheme is used to extract characters as sets of connected components. Then, template matching is used to match the segmented characters against the template set stored in the OCR database in order to recognize the text. Unlike other approaches, template matching is fast and does not require sample training, but it cannot distinguish some letters with similar or combined shapes; for this reason, the OCR system combines template matching with the size feature of the segmented characters to achieve accurate results. Finally, upon successful recognition, the recognized patterns are displayed in Notepad as readable and editable text. The Turkish machine-printed database consists of a list of 630 names of cities in Turkey written in the Arial font at different sizes in uppercase, lowercase, and with the first character of each word capitalized. The results show that the accuracy of the proposed OCR system is between 96% and 100%.
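The line-detection step via horizontal projection can be sketched as follows; the input path and the zero-ink line-boundary criterion are illustrative assumptions, not the paper's exact procedure.

import cv2
import numpy as np

# After Otsu binarization, rows containing foreground (ink) pixels belong
# to text lines; runs of empty rows separate consecutive lines.
img = cv2.imread("page.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

profile = binary.sum(axis=1) // 255        # horizontal projection: ink per row
in_line, lines, start = False, [], 0
for row, ink in enumerate(profile):
    if ink > 0 and not in_line:            # a text line begins
        in_line, start = True, row
    elif ink == 0 and in_line:             # the text line ends
        in_line = False
        lines.append((start, row))
if in_line:                                # page ends inside a line
    lines.append((start, len(profile)))

print(f"{len(lines)} text lines found at row ranges: {lines}")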


2015 ◽  
Vol 9 (13) ◽  
pp. 148 ◽  
Author(s):  
Karthick K ◽  
Chitra S

Optical character recognition has been used in many applications such as dictionary generation, customer billing systems, banking and postal automation, and library automation. A bilingual OCR system that produces a uni-lingual script reduces the requirement for two different OCR systems to a single system for recognizing two different languages. This kind of globalization lets users of any language read text documents in their own language once bilingual documents are converted into uni-lingual documents. In this paper, an image containing printed Tamil and European numerals is recognized using a common OCR system, and the Tamil numerals are converted into European numerals to turn the document from a bilingual script into a uni-lingual document. The main objective of the work is to produce a text document with a single numeral system (European numerals) from an input image containing two different numeral systems (Tamil and European numerals). A recognition system based on Kohonen’s self-organizing map (SOM) is used to recognize the numerals, and the recognized characters in bilingual form (Tamil and European numerals) are converted into uni-lingual form (European numerals). This paper also discusses various other approaches used for OCR.
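A minimal sketch of SOM-based numeral recognition, using the MiniSom library and sklearn's digit images as stand-ins for the Tamil and European numeral data used in the paper; the map size and training parameters are assumptions.

from collections import Counter, defaultdict

from minisom import MiniSom
from sklearn.datasets import load_digits

digits = load_digits()
data = digits.data / 16.0                       # 8x8 glyphs scaled to [0, 1]

som = MiniSom(10, 10, 64, sigma=1.5, learning_rate=0.5, random_seed=0)
som.train_random(data, 5000)                    # unsupervised map training

# Label each map node with the majority class of the samples it attracts.
votes = defaultdict(Counter)
for vec, label in zip(data, digits.target):
    votes[som.winner(vec)][label] += 1
node_label = {node: c.most_common(1)[0][0] for node, c in votes.items()}

# Recognition: map a glyph to its best-matching node, read off that label.
sample = data[42]
print("predicted:", node_label[som.winner(sample)], "actual:", digits.target[42])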


2019 ◽  
Vol 18 (6) ◽  
pp. 1381-1406 ◽  
Author(s):  
Lukáš Bureš ◽  
Ivan Gruber ◽  
Petr Neduchal ◽  
Miroslav Hlaváč ◽  
Marek Hrúz

An algorithm (divided into multiple modules) for generating images of full-text documents is presented. These images can be used to train, test, and evaluate models for Optical Character Recognition (OCR). The algorithm is modular; individual parts can be changed and tweaked to generate the desired images. A method for obtaining background images of paper from already digitized documents is described. For this, a novel approach based on a Variational AutoEncoder (VAE) was used to train a generative model. These backgrounds enable the generation, on the fly, of background images similar to the training ones. The module for printing the text uses large text corpora, a font, and suitable positional and brightness character noise to obtain believable results (for natural-looking aged documents). A few types of page layout are supported. The system generates a detailed, structured annotation of the synthesized image. Tesseract OCR is used to compare real-world images to the generated images. The recognition rates are very similar, indicating the proper appearance of the synthetic images. Moreover, the errors made by the OCR system in the two cases are very similar. From the generated images, a fully convolutional encoder-decoder neural network architecture was trained for semantic segmentation of individual characters. With this architecture, a recognition accuracy of 99.28% is reached on a test set of synthetic documents.
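The character-noise idea in the printing module can be sketched with Pillow as follows; the font, jitter magnitudes, and plain background are illustrative assumptions (the paper uses VAE-generated backgrounds and large corpora).

import random
from PIL import Image, ImageDraw, ImageFont

random.seed(0)
page = Image.new("L", (600, 120), color=235)          # plain stand-in background
draw = ImageDraw.Draw(page)
font = ImageFont.truetype("DejaVuSans.ttf", 24)       # any available font file

x, y = 20, 40
for ch in "Synthetic OCR training text":
    jx = x + random.randint(-1, 1)                    # positional character noise
    jy = y + random.randint(-1, 1)
    ink = random.randint(0, 60)                       # brightness character noise
    draw.text((jx, jy), ch, fill=ink, font=font)
    x += draw.textlength(ch, font=font)               # advance by glyph width

page.save("synthetic_line.png")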


2017 ◽  
Author(s):  
Geoff Boeing

In 1974, Robert Caro published The Power Broker, a critical biography of Robert Moses’s dictatorial tenure as the “master builder” of mid-century New York. Moses transformed the urban fabric and transportation system of New York in a profound way, producing the Brooklyn Battery Tunnel, the Verrazano Narrows Bridge, the Westside Highway, the Cross-Bronx Expressway, the Lincoln Center, the UN headquarters, Shea Stadium, Jones Beach State Park and many other projects. However, The Power Broker did lasting damage to his public image and today he remains one of the most polarizing figures in city planning history. On August 26, 1974, Moses issued a 23-page typed statement denouncing Caro’s work as “full of mistakes, unsupported charges, nasty baseless personalities, and random haymakers.” The statement went on to gainsay several of Caro’s assertions one at a time. Robert Moses’s typewritten response survives today as a grainy photocopy from the New York City Parks Department archive. To better preserve and disseminate it, I have extracted and transcribed its text using optical character recognition, and edited the result to correct transcription errors.


Author(s):  
D. Karthikeyan ◽  
Arumbu V. P. ◽  
K. Surendhirababu ◽  
K. Selvakumar ◽  
P. Divya ◽  
...  

The internet of things (IoT) is a pervasive technology whose impact on everyday human life continues to grow. This research presents a library control system that operates on the basis of IoT and an optical character recognition (OCR) algorithm and its training. A closed-circuit television (CCTV) monitoring mechanism is created to control book issuing and returning via a tag-reading system in the library. In the proposed work, the extracted text file is converted into an audio file; this audio file is played and the contents of the book can be heard via a headset. This function of OCR helps blind people. Nowadays OCR is widely used in machine processes such as machine translation, text-to-speech conversion, and text data mining, and it is utilized in various areas of research in artificial intelligence, computer vision, and pattern recognition. By scanning damaged library books with OCR and converting them into PDF format, the books get new life and their contents can be shared with multiple readers. This paper aims to implement an IoT-based library management system for maintaining books in digital format.
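The OCR-to-audio path described above might be sketched as follows, assuming the pytesseract wrapper and the offline pyttsx3 text-to-speech engine; the image path is a hypothetical example, not the paper's system.

import pytesseract
import pyttsx3
from PIL import Image

# OCR a scanned book page, keep a text copy, then read the text aloud.
text = pytesseract.image_to_string(Image.open("book_page.png"))

with open("book_page.txt", "w", encoding="utf-8") as f:
    f.write(text)                      # text copy for the digital archive

engine = pyttsx3.init()                # offline text-to-speech engine
engine.say(text)                       # contents can be heard via the headset
engine.runAndWait()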

