character extraction
Recently Published Documents


TOTAL DOCUMENTS

92
(FIVE YEARS 15)

H-INDEX

10
(FIVE YEARS 1)

Author(s):  
Bhavyasri Maddineni

Handwritten Text Recognition (HTR), also known as Handwriting Recognition (HWR), is the detection and interpretation of handwritten text images by a computer. Handwritten text from sources such as notebooks, documents, forms, and photographs can be given to the computer, which predicts and converts it into digital text. Humans find it easier to write on paper than to type, yet nowadays almost everything is being digitized, so HTR/HWR sees increasing use. Various techniques are used to recognize handwriting: traditional ones include character extraction, character recognition, and feature extraction, while modern approaches segment lines for recognition and apply machine learning, convolutional neural networks, and recurrent neural networks. Applications of HTR/HWR include online recognition, offline recognition, signature verification, postal-address interpretation, bank-cheque processing, and writer recognition, all of which are active areas of research. An effective HTR/HWR system is therefore needed for these applications. The objective of this project is to find and develop various models for this purpose.
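The character-extraction step named above can be illustrated with a minimal sketch: classical pre-neural HTR pipelines often split a binarized text line into character candidates using a vertical projection profile. The toy 0/1 grid and the function name here are illustrative, not taken from the paper; real systems operate on scanned page images.

```python
# Hypothetical sketch: segment characters from a binarized line image
# using vertical projection profiles, one classical HTR step.

def segment_characters(binary_image):
    """Split a binary line image (list of rows of 0/1) into character spans.

    Returns (start_col, end_col) pairs, one per maximal run of columns
    that contain at least one ink pixel (1).
    """
    if not binary_image:
        return []
    n_cols = len(binary_image[0])
    col_has_ink = [any(row[c] for row in binary_image) for c in range(n_cols)]
    spans, start = [], None
    for c, ink in enumerate(col_has_ink):
        if ink and start is None:
            start = c                      # a character run begins
        elif not ink and start is not None:
            spans.append((start, c - 1))   # run ended at previous column
            start = None
    if start is not None:
        spans.append((start, n_cols - 1))
    return spans

# Two "characters" separated by one blank column:
toy = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
]
print(segment_characters(toy))  # [(0, 1), (3, 3)]
```

Each span would then be cropped and passed to a classifier; touching characters, which defeat pure projection methods, are one reason the abstract's "modern techniques" favour segmentation-free CNN/RNN models.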


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Yifeng Zhao ◽  
Deyun Chen

Due to the complexity of human emotions, different emotion features share some similarities. Existing emotion recognition methods suffer from difficult feature extraction and low accuracy, so a multimodal expression-EEG emotion recognition method based on bidirectional LSTM and an attention mechanism is proposed. First, facial expression features are extracted with a bilinear convolutional network (BCN), EEG signals are transformed into three groups of frequency-band image sequences, and the BCN fuses the image features to obtain multimodal expression-EEG emotion features. Then, an LSTM with an attention mechanism extracts the important data during temporal modeling, which effectively avoids the randomness or blindness of sampling methods. Finally, a feature-fusion network with a three-layer bidirectional LSTM structure is designed to fuse the expression and EEG features, which helps improve the accuracy of emotion recognition. The proposed method is tested on the MAHNOB-HCI and DEAP datasets using the MATLAB simulation platform. Experimental results show that the attention mechanism can enhance the visual effect of the image and that, compared with other methods, the proposed method extracts emotion features from expressions and EEG signals more effectively and achieves higher emotion-recognition accuracy.
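The temporal attention step described above can be sketched as follows: each LSTM hidden state is scored, the scores are softmax-normalized into attention weights, and the weighted sum serves as the sequence representation. This is a generic attention-pooling sketch, not the authors' implementation; the scoring vector and feature values are toy assumptions.

```python
import math

# Hypothetical sketch of attention pooling over LSTM timesteps:
# score each hidden state, softmax the scores, return the weighted sum.

def attention_pool(hidden_states, score_weights):
    """hidden_states: list of per-timestep feature vectors.
    score_weights: vector dotted with each state to produce its score."""
    scores = [sum(w * h for w, h in zip(score_weights, state))
              for state in hidden_states]
    max_s = max(scores)                          # stabilize the softmax
    exps = [math.exp(s - max_s) for s in scores]
    total = sum(exps)
    alphas = [e / total for e in exps]           # attention weights, sum to 1
    dim = len(hidden_states[0])
    pooled = [sum(a * state[d] for a, state in zip(alphas, hidden_states))
              for d in range(dim)]
    return alphas, pooled

states = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]    # toy hidden states
alphas, pooled = attention_pool(states, [1.0, 1.0])
print(round(sum(alphas), 6))  # 1.0
```

The highest-scoring timestep receives the largest weight, which is the sense in which attention "extracts important data" rather than sampling timesteps blindly.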


Author(s):  
Kedar R ◽  
Kaviraj A ◽  
Manish R ◽  
Niteesh B ◽  
Suthir S

Technology is growing in our day-to-day lives to satisfy human needs, and the system we propose makes this job easier. The YOLO algorithm, a deep-learning object-detection architecture, is used to detect the number plate of the vehicle. After detecting the number plate, the system converts the vehicle number to text and checks it against a database to see whether the vehicle is authorized to enter the premises. This system can be implemented in highly restricted areas such as military zones, government organizations, and Parliament. The proposed pipeline has six stages: image capture, search for black pixels, image filtering, plate-region extraction, character extraction, and OCR-based character recognition. The alphanumeric characters are identified using the OCR algorithm, and the result obtained from the YOLO detection is compared with the database to decide whether the vehicle is allowed to enter. The proposed system was simulated and implemented in Python and was also tested on real-time images for performance evaluation.
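The final authorization stage of the pipeline can be sketched minimally: the plate string produced by detection and OCR is normalized and looked up in a whitelist. The plate numbers, whitelist contents, and function names below are illustrative assumptions; detection and OCR themselves are stood in for by a raw string.

```python
# Hypothetical sketch of the database-check stage: normalize the OCR'd
# plate text and test membership in an authorized-vehicle whitelist.

AUTHORIZED_PLATES = {"KA01AB1234", "TN07XY9876"}  # assumed database contents

def normalize_plate(raw_text):
    """Uppercase and drop spaces/punctuation, tolerating OCR spacing noise."""
    return "".join(ch for ch in raw_text.upper() if ch.isalnum())

def is_authorized(raw_text, whitelist=AUTHORIZED_PLATES):
    return normalize_plate(raw_text) in whitelist

print(is_authorized("ka 01 ab 1234"))  # True
print(is_authorized("MH12CD5678"))     # False
```

A production system would query a real database and log the decision; normalization is the key detail, since OCR output rarely matches the stored format byte for byte.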


2020 ◽  
Vol 2 (4) ◽  
pp. 4-10
Author(s):  
Santhosh Kumar S ◽  
Vishnu Vardhan S ◽  
Wasim Jaffar M ◽  
Sultan Saleem A ◽  
Sharmasth Vali Y

The identification of social-media communities has recently been of major concern, since users participating in such communities can contribute to viral marketing campaigns. Here we focus on users' communication, considering personality as a key trait for identifying communicative networks, i.e., networks with high information flow. We describe the Twitter Personality based Communicative Communities Extraction (T-PCCE) system, which identifies the most communicative communities in a Twitter network graph by taking users' personality into account. We then extend existing approaches to user-personality extraction by aggregating data that represent several aspects of user behaviour, using machine-learning techniques. We use an existing modularity-based community detection algorithm and extend it by inserting a post-processing step that eliminates graph edges based on users' personality. The effectiveness of our approach is demonstrated by sampling the Twitter graph and comparing the communication strength of the extracted communities with and without the personality factor. We define several metrics to quantify the strength of communication within each community. Our algorithmic framework and the subsequent implementation use cloud infrastructure and the MapReduce programming environment. Our results show that the T-PCCE system extracts the most communicative communities.
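The personality-based post-processing step can be sketched simply: after a standard modularity-based detection pass, edges whose endpoints fall below a personality-derived communication score are dropped before communities are re-evaluated. The scores, threshold, and pruning rule below are illustrative assumptions, not the paper's actual criterion.

```python
# Hypothetical sketch of the T-PCCE post-processing step: prune graph
# edges using per-user personality scores before re-running community
# detection. Scores and threshold are toy values.

def prune_edges(edges, user_score, threshold):
    """Keep an edge only if both endpoints' scores reach the threshold."""
    return [(u, v) for u, v in edges
            if user_score[u] >= threshold and user_score[v] >= threshold]

edges = [("a", "b"), ("b", "c"), ("c", "d")]
scores = {"a": 0.9, "b": 0.8, "c": 0.3, "d": 0.7}  # assumed personality scores
print(prune_edges(edges, scores, 0.5))  # [('a', 'b')]
```

Removing low-score edges concentrates the remaining graph on high-information-flow users, which is why the extracted communities score higher on the communication metrics the authors define.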


In today's world, managing attendance records for staff, students, employees, or buses is a tedious task. This project focuses on automating the bus-attendance process through vehicle license-plate recognition. As the license plate is unique to every vehicle, it helps mark bus attendance efficiently. The RFID-based bus-attendance process is time consuming, so we developed a system that efficiently marks attendance using number-plate recognition and OCR. The system was trained using a Faster R-CNN model on a bus-image dataset. In the proposed system, the number plate is captured through a surveillance camera, the captured image is passed as input to the neural network, and the number plate is detected. Character extraction is done using OCR, the extracted characters are checked against the database, and attendance for the corresponding bus is marked.
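The last stage of this pipeline, matching the OCR'd plate against registered buses and marking attendance, can be sketched as below. The bus registry, plate strings, and log structure are toy assumptions; detection (Faster R-CNN) and OCR are represented only by their output string.

```python
# Hypothetical sketch of the attendance-marking stage: match the OCR'd
# plate text to a registered bus and record it as present.

REGISTERED_BUSES = {"TN45Z1234": "Bus-7", "TN45Z5678": "Bus-12"}  # assumed

def mark_attendance(ocr_text, log):
    """Normalize the OCR output, look up the bus, and mark it present.

    Returns the bus id on a match, or None if the plate is unregistered.
    """
    plate = "".join(ch for ch in ocr_text.upper() if ch.isalnum())
    bus = REGISTERED_BUSES.get(plate)
    if bus is not None:
        log[bus] = "present"
    return bus

log = {}
print(mark_attendance("tn 45 z 1234", log))  # Bus-7
print(log)                                   # {'Bus-7': 'present'}
```

A real deployment would timestamp each entry and persist the log; the dictionary stands in for that database table.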


The world is moving at a rate too fast for us to fathom, yet it is not functionally accessible enough for individuals with disabilities to be completely independent. For mankind to move forward, a holistic approach to development can create an environment in which no individual feels overlooked. It is estimated that approximately 1.3 billion people live with some form of vision impairment, of whom 36 million are blind [1]. Without proper integration of such people into our society, every discovery and every piece of news widens the divide from people whose abilities are otherwise unimpaired. With this paper, our aim is to convert Braille books written in a regional language (in this case, Hindi) into the corresponding text format, so that these readers can be part of the world that every scientist, engineer, visionary, and child has dreamed of. In this project, a picture of Braille text is fed as input, and a series of techniques (preprocessing, segmentation, character extraction, character recognition, and text conversion) is applied to it, producing as output the corresponding Hindi text of the Braille image. The main applications are helping families of the visually impaired understand what is inscribed in Braille, correcting answer scripts of Braille exams, cross-checking for errors in the manufacture of Braille documents, and increasing opportunities for the visually impaired to participate in social life.
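The character-recognition step in this pipeline amounts to decoding a segmented 2x3 Braille cell: the dot pattern is encoded as bits and looked up in a table. As a minimal sketch, the mapping below covers only a few Grade-1 English letters; the paper targets Hindi (Bharati Braille), whose table is larger, and the cell representation here is an assumption.

```python
# Hypothetical sketch of Braille character recognition: encode a
# segmented 2x3 dot cell as a bit tuple and look it up in a table.

def cell_to_bits(cell):
    """cell: 3 rows x 2 cols of 0/1. Dots numbered 1-3 down the left
    column, 4-6 down the right, per standard Braille numbering."""
    order = [(0, 0), (1, 0), (2, 0), (0, 1), (1, 1), (2, 1)]
    return tuple(cell[r][c] for r, c in order)

# Tiny illustrative table (dots 1-6): a = dot 1, b = dots 1,2, c = dots 1,4.
BRAILLE_TABLE = {
    (1, 0, 0, 0, 0, 0): "a",
    (1, 1, 0, 0, 0, 0): "b",
    (1, 0, 0, 1, 0, 0): "c",
}

def decode_cell(cell):
    return BRAILLE_TABLE.get(cell_to_bits(cell), "?")

cell_b = [[1, 0],
          [1, 0],
          [0, 0]]
print(decode_cell(cell_b))  # b
```

The earlier stages (preprocessing and segmentation) exist to deliver clean cells to this lookup; unknown patterns fall through to "?" so errors stay visible in the output text.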


2020 ◽  
Vol 7 (2) ◽  
pp. 1
Author(s):  
A. KANKHAR MADHAV ◽  
C. NAMRATA MAHENDER ◽  
