Enhanced Gesture Recognition Text to Speech Browser for Visually Challenged

The web has brought an astonishing change to human access to knowledge and information, creating the need to design an enhanced browser for the visually challenged. The present system helps visually challenged people use information on the web effectively by converting the content of a web page to voice. The user can retrieve content and essential information from the web simply by typing the URL. The web page content is extracted by the JSOUP HTML parser, and the extracted content is read out by a Text to Speech (TTS) engine. The drawbacks of this system are that the visually challenged user must type the required URL into the address box, there are no options to control the TTS, and there are no options for navigating through web pages. The proposed system addresses these by integrating a speech recognition engine.
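The abstract's extract-then-speak idea can be sketched in a few lines. The paper uses JSOUP (a Java library); the sketch below instead uses Python's standard-library `html.parser` to pull the visible text of a page, and the class and function names are our own. A real system would hand the resulting string to a TTS engine (for example pyttsx3) rather than return it.

```python
from html.parser import HTMLParser

class VisibleTextExtractor(HTMLParser):
    """Collects the visible text of a page, skipping script/style content."""
    SKIP = {"script", "style"}

    def __init__(self):
        super().__init__()
        self._skip_depth = 0
        self._chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        # Keep only non-empty text outside script/style blocks.
        if not self._skip_depth and data.strip():
            self._chunks.append(data.strip())

    def text(self):
        return " ".join(self._chunks)

def page_to_text(html):
    parser = VisibleTextExtractor()
    parser.feed(html)
    return parser.text()
```

The extracted string is what the TTS engine would read aloud; navigation and TTS control, noted as missing in the abstract, would sit on top of this step.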

2019 ◽  
Vol 1 (01) ◽  
pp. 31-38 ◽  
Author(s):  
Samuel Manoharan

This paper proposes a smart algorithm for image processing by means of recognition of text, extraction of information and vocalization for the visually challenged. The system uses a LattePanda Alpha system-on-board that processes the scanned images. The image is categorized into its equivalent alphanumeric characters following pre-processing, segmentation, extraction of features and post-processing of the scanned or image-based information. Further, a text to speech synthesizer is used for vocalization of the processed content. In converting handwritten scripts, the system offers an accuracy of 97%; this also depends on the legibility of the data. The time delay for the entire conversion process is also analysed and the efficiency of the system is estimated.
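The pipeline named in the abstract (pre-processing, segmentation, feature extraction, post-processing) can be illustrated with a toy sketch of its first two stages. The thresholding and blank-column segmentation below are generic textbook steps, not the paper's actual algorithm, and the image is represented as plain lists of pixel values.

```python
def binarize(gray, threshold=128):
    """Pre-processing: threshold a grayscale image (rows of 0-255 values)
    so that dark ink becomes 1 and light background becomes 0."""
    return [[1 if px < threshold else 0 for px in row] for row in gray]

def segment_columns(binary):
    """Segmentation: split the binarized image into character blocks,
    cutting at columns that contain no ink."""
    width = len(binary[0])
    ink = [any(row[c] for row in binary) for c in range(width)]
    segments, start = [], None
    for c, has_ink in enumerate(ink):
        if has_ink and start is None:
            start = c                      # a character block begins
        elif not has_ink and start is not None:
            segments.append((start, c))    # the block ends at a blank column
            start = None
    if start is not None:
        segments.append((start, width))
    return segments
```

Each `(start, end)` column span would then go to the feature-extraction and classification stages that map it to an alphanumeric character.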


2018 ◽  
Vol 18 (02) ◽  
pp. 1850010 ◽  
Author(s):  
T. Shreekanth ◽  
M. R. Deeksha ◽  
Karthikeya R. Kaushik

In society, there exists a gap in communication between the sighted community and the visually challenged people due to the different scripts followed to read and write. To bridge this gap there is a need for a system that supports automatic conversion of Braille script to text and speech in the corresponding language. An Optical Braille Recognition (OBR) system converts the hand-punched Braille characters into their equivalent natural language characters. The Text-to-Speech (TTS) system converts the recognized characters into audible speech using speech synthesis techniques. Existing literature reveals that OBR and TTS systems have been well established independently for English. There is scope for development of OBR and TTS systems for regional languages. In spite of Kannada being one of the most widely spoken regional languages in India, minimal work has been done towards Kannada OBR and TTS. There is no system that directly converts Braille script to speech; therefore, this development of a Kannada Braille to text and speech system is one of a kind. The acquired image is processed and feature extraction is performed using the k-means algorithm and heuristics to convert the Braille characters to Kannada script. The concatenation-based speech synthesis technique employing the phoneme as the basic unit is used for Kannada TTS within the Festival TTS framework. Performance evaluation of the proposed system is done using an independently developed Kannada Braille database, and the results obtained are found to be satisfactory when compared to existing methods in the literature.
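Once the dot positions in a Braille cell have been located (the clustering step in the abstract), each cell reduces to a 6-bit code that indexes a character table. The sketch below shows only that encoding step; the demo table uses a few English Braille letters purely for illustration, since the paper's Kannada mapping is not reproduced here.

```python
def cell_to_code(dots):
    """Encode a Braille cell as a 6-bit code. `dots` is the set of raised
    dot positions numbered 1-6 (1-2-3 down the left column, 4-5-6 down
    the right), so each cell maps to an integer 0-63."""
    return sum(1 << (d - 1) for d in dots)

# Tiny demo table (English letters; the paper targets Kannada script).
BRAILLE = {
    cell_to_code({1}): "a",
    cell_to_code({1, 2}): "b",
    cell_to_code({1, 4}): "c",
}

def decode(cells):
    """Map a sequence of cells to text, '?' for unknown patterns."""
    return "".join(BRAILLE.get(cell_to_code(c), "?") for c in cells)
```

The decoded string is then what a concatenative TTS back end, such as the Festival framework named in the abstract, would synthesize.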


2018 ◽  
Vol 7 (2.7) ◽  
pp. 289
Author(s):  
Rishi Korrapolu ◽  
Manoj Sai. N ◽  
Kameshwara Rao.M

CAPTCHAs are techniques to automatically distinguish human users from computer programs. CAPTCHAs protect various kinds of online services from brute-force and denial-of-service attacks by automated computer programs. Most CAPTCHAs consist of images with distorted text. Unfortunately, visual CAPTCHAs limit access for the many millions of visually impaired people using the web. Audio CAPTCHAs were created to solve this accessibility problem; however, the currently available audio CAPTCHAs have been broken with varying success by exploiting weaknesses in the techniques used. Our system presents the user with an interface that plays a song of instrumental (non-vocal) music randomly selected from a language of the user's choice. The user is then asked to name the music's composer, and the system estimates whether the respondent is human by analyzing the response. A user study was conducted to investigate the overall performance of our proposed mechanism.
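The challenge-and-check loop the abstract describes can be sketched as follows. The song bank, its entries, and the normalization rule for comparing composer names are all our assumptions for illustration; the paper does not specify how answers are matched.

```python
import random
import re

# Hypothetical song bank: (audio file, composer) pairs per language.
SONGS = {
    "kannada": [("raga_01.wav", "Mysore Vasudevachar")],
    "hindi": [("instr_07.wav", "R. D. Burman")],
}

def pick_challenge(language, rng=random):
    """Randomly select an instrumental track for the chosen language."""
    return rng.choice(SONGS[language])

def normalize(name):
    """Fold case and strip punctuation/spacing before comparison."""
    return re.sub(r"[^a-z]", "", name.lower())

def is_human(answer, composer):
    """Accept the response if the composer names match after normalization."""
    return normalize(answer) == normalize(composer)
```

A deployed version would need fuzzier matching (typos, alternative spellings) and rate limiting, which is where the security-versus-usability trade-off the abstract alludes to actually lives.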


Author(s):  
U. Mamatha

Sign language is used by the deaf and mute, but non-signers cannot understand it; to overcome this problem we propose this system using Python. First, hand gestures are captured using a web camera. The image is pre-processed, features are extracted from the captured image, and the extracted features are compared with the reference image. If a match is found, the decision is displayed as text. This helps non-signers recognize the gestures easily, using a Convolutional Neural Network (CNN) built with TensorFlow.
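The abstract's recognizer is a CNN in TensorFlow; the toy sketch below replaces it with a nearest-reference comparison in pure Python purely to show the "compare extracted features against reference images" step. The row-density feature and the reference labels are our own illustration, not the paper's model.

```python
def row_density(binary):
    """Crude feature vector: fraction of 'on' pixels in each row of a
    binarized gesture image (lists of 0/1 values)."""
    return [sum(row) / len(row) for row in binary]

def classify(image, references):
    """Compare the captured image's features against each labelled
    reference image and return the label of the closest one."""
    feats = row_density(image)
    best_label, best_dist = None, float("inf")
    for label, ref in references.items():
        ref_feats = row_density(ref)
        dist = sum((a - b) ** 2 for a, b in zip(feats, ref_feats))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label
```

In the actual system a trained CNN produces far more discriminative features than row densities, but the match-then-display-text flow is the same.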


Author(s):  
Sohag Sundar Nanda ◽  
Basanta Kumar Swain ◽  
Sanghamitra Mohanty

This paper discusses the present status of availability of e-services for the visually challenged in India. An analysis of five major Government to Citizen (G2C) e-service initiatives is done to determine the level of assistance for visually challenged users. E-services are judged on a 3-point scale covering Text to Speech (TTS) support, adherence to W3C accessibility guidelines, and provision for voice-based e-services. Finally, we discuss the architecture and working of E-Prakash, a voice-based e-service delivery system.
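The 3-point scale can be read as one point per criterion met; that scoring rule is our assumption, since the abstract names the three criteria but not how they combine.

```python
# The three criteria from the paper's 3-point scale.
CRITERIA = ("tts_support", "w3c_accessibility", "voice_services")

def score_service(service):
    """Score one e-service: one point per criterion met, 0-3 total.
    `service` maps each criterion name to True/False."""
    return sum(1 for c in CRITERIA if service.get(c, False))
```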


2020 ◽  
Vol 17 (8) ◽  
pp. 3782-3785
Author(s):  
R. Mahesh Udvag ◽  
T. Hari Kumar ◽  
R. S. Kavin Raj ◽  
M. P. Karthikeyan

Gesture recognition is a type of perceptual computing user interface that allows computers to capture and interpret human gestures as commands. Gestures are generally movements of the hands, a form of non-verbal communication. We use gestures as input to control devices, applications or websites. By using gestures, we directly open or manipulate the website for the users. When the user makes a gesture, the system captures it and compares it with the stored gesture data. If the gesture matches the data, then the required website is opened. These gestures are widely helpful for people who are blind and unable to type. The main objective of the project is to deliver user-friendly access to web content. The vision-based technology of hand gesture recognition is an important part of human-computer interaction (HCI), and our project is one stepping stone on that path to the future. Our project particularly concerns people who are not able to search for content on the web through text search. We provide an extensible solution that paves the way for further research and development.
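The capture-compare-open flow the abstract describes reduces to two lookup tables and a tolerance check. The stored feature vectors, the gesture names, the example URLs and the tolerance value below are all hypothetical placeholders for illustration.

```python
# Hypothetical stored gesture data: gesture name -> feature vector.
STORED = {
    "open_palm": (0.9, 0.1, 0.1),
    "fist": (0.1, 0.9, 0.1),
}

# Hypothetical gesture -> website command table.
COMMANDS = {
    "open_palm": "https://news.example.org",
    "fist": "https://mail.example.org",
}

def match_gesture(features, tolerance=0.2):
    """Compare a captured gesture against the stored data; return the
    name of a stored gesture within tolerance, or None if none match."""
    for name, ref in STORED.items():
        if all(abs(a - b) <= tolerance for a, b in zip(features, ref)):
            return name
    return None

def website_for(features):
    """Map a captured gesture to the website it should open."""
    return COMMANDS.get(match_gesture(features))
```

Returning `None` on no match lets the interface prompt the user to repeat the gesture instead of opening a wrong page.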


2020 ◽  
pp. 205-208
Author(s):  
Sowmya R ◽  
Sushma S Jagtap ◽  
Gnanamoorthy Kasthuri

Assistive technology uses assistive, adaptive and rehabilitative devices for people with disabilities. It is estimated that there are about 36 million people with visual impairment in the world, and a further 216 million who live with moderate to severe visual impairments. Leveraging technology has helped the visually challenged carry out tasks on par with sighted people, particularly in the activities of reading and writing. In the proposed work, an image scanning device attached to a microcontroller is designed. The device takes the form of a hand glove for ease of use. The glove, with a camera at the fingertip, scans the information as it is rolled over lines of text and converts it into digital text with Optical Character Recognition (OCR). The converted digital text is finally read aloud using text-to-speech synthesis. The results obtained were accurate and met the standards of operability.
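As the glove rolls over a line, successive camera frames yield overlapping OCR fragments that must be merged into one text stream. The overlap-stitching sketch below is our illustration of that assembly step, not the paper's exact method.

```python
def stitch(fragments):
    """Merge successive OCR fragments from a rolling scan: for each new
    fragment, drop its longest prefix that the text so far already ends
    with, then append the remainder."""
    text = ""
    for frag in fragments:
        overlap = 0
        for k in range(min(len(text), len(frag)), 0, -1):
            if text.endswith(frag[:k]):
                overlap = k
                break
        text += frag[overlap:]
    return text
```

The stitched string is what the TTS stage reads aloud; fragment boundaries never reach the listener.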


Author(s):  
Anitha D B ◽  
Jyothi T M ◽  
Pooja R ◽  
Sahana N

The objective of this paper is to present a new design of assistive smart glasses for the visually impaired, assisting in multiple daily tasks through the advantages of a wearable design format. The proposed method is camera-based assistive text reading to help blind persons read the text present on labels, printed notes and products in their own respective languages. It combines Optical Character Recognition (OCR), a Text to Speech synthesizer (TTS) and a translator on a Raspberry Pi. Optical character recognition (OCR) is the identification of printed characters using photoelectric devices and computer software; it converts images of typed, handwritten or printed text into machine-encoded text, whether from a scanned document or from subtitle text superimposed on an image. The text-to-speech conversion scans and reads letters and numbers in the image in any language using the OCR technique, translates the result into any desired language, and finally gives audio output of the translated text. The audio output is heard through the Raspberry Pi's audio jack using speakers or earphones.
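The OCR-translate-speak chain is a straight pipeline, which can be sketched generically. The stage stubs below are placeholders: in practice OCR might be pytesseract and speech output espeak or pyttsx3 on the Pi, but none of those bindings are shown here, and the uppercase "translator" is purely illustrative.

```python
def make_pipeline(*stages):
    """Chain processing stages: the output of each feeds the next."""
    def run(data):
        for stage in stages:
            data = stage(data)
        return data
    return run

# Stub stages standing in for OCR, translation, and TTS on the Pi.
ocr = lambda image: image["text"]        # real system: OCR on the camera frame
translate = lambda text: text.upper()    # placeholder "translation"
speak = lambda text: f"[audio] {text}"   # real system: send to TTS engine

read_aloud = make_pipeline(ocr, translate, speak)
```

Keeping the stages separate this way means the translator can be swapped per target language without touching OCR or TTS, which matches the multi-language goal in the abstract.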


2012 ◽  
Vol 21 (2) ◽  
pp. 60-71 ◽  
Author(s):  
Ashley Alliano ◽  
Kimberly Herriger ◽  
Anthony D. Koutsoftas ◽  
Theresa E. Bartolotta

Abstract Using the iPad tablet for Augmentative and Alternative Communication (AAC) purposes can facilitate many communicative needs, is cost-effective, and is socially acceptable. Many individuals with communication difficulties can use iPad applications (apps) to augment communication, provide an alternative form of communication, or target receptive and expressive language goals. In this paper, we will review a collection of iPad apps that can be used to address a variety of receptive and expressive communication needs. Based on recommendations from Gosnell, Costello, and Shane (2011), we describe the features of 21 apps that can serve as a reference guide for speech-language pathologists. We systematically identified 21 apps that use symbols only, symbols and text-to-speech, and text-to-speech only. We provide descriptions of the purpose of each app, along with the following feature descriptions: speech settings, representation, display, feedback features, rate enhancement, access, motor competencies, and cost. In this review, we describe these apps and how individuals with complex communication needs can use them for a variety of communication purposes and to target a variety of treatment goals. We present information in a user-friendly table format that clinicians can use as a reference guide.

