Adaptive Language Processing Unit for Malaysian Sign Language Synthesizer

Author(s):  
Haris Al Qodri Maarif

The Language Processing Unit (LPU) is a system built to process text-based data to comply with the rules of sign language grammar, and it was developed as an important part of a sign language synthesizer system. Sign language (SL) uses grammatical rules different from those of spoken/verbal language, involving only the important words that hearing- and speech-impaired people can understand. Therefore, word classification by the LPU is needed to produce grammatically processed sentences for the sign language synthesizer. However, existing language processing units in SL synthesizers suffer from time lag and complexity problems, resulting in high processing times. Computational time and success rate are a trade-off: processing time grows as a higher success rate is pursued. This paper proposes an adaptive Language Processing Unit (LPU) that converts spoken words into Malaysian SL grammatical rules with relatively fast processing time and a good success rate. It involves n-grams, NLP, and Hidden Markov Models (HMM)/Bayesian networks as the classifier to process the text-based input. As a result, the proposed LPU system provides efficient (fast) processing time and a good success rate compared to LPUs based on other edit distances (Mahalanobis, Levenshtein, and Soundex). The system has been tested on 130 text-input sentences with lengths ranging from 3 to 10 words. Results showed that the proposed LPU achieves a processing time of around 1.497 ms with an average success rate of 84.23% for sentences of up to ten words.
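
For illustration, the word-matching step this abstract describes can be sketched with the Levenshtein edit distance, one of the distances the paper compares. The minimal Python sketch below matches a (possibly misspelled) input word to a toy Malay vocabulary; the function names and vocabulary are assumptions for demonstration, not the authors' implementation.

```python
# Hypothetical sketch: match an input word to the closest sign-vocabulary
# entry by Levenshtein edit distance. Vocabulary and names are illustrative.

def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def match_sign_word(word: str, vocabulary: list[str]) -> str:
    """Return the vocabulary entry with the smallest edit distance."""
    return min(vocabulary, key=lambda v: levenshtein(word, v))

print(match_sign_word("makann", ["makan", "minum", "tidur"]))  # -> "makan"
```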

Author(s):  
Soumya Ranjan Nayak ◽  
S Sivakumar ◽  
Akash Kumar Bhoi ◽  
Gyoo-Soo Chae ◽  
Pradeep Kumar Mallick

Graphical processing unit (GPU) computing has gained popularity among researchers in the fields of decision making and knowledge discovery systems. However, most earlier studies have limitations in GPU memory utilization, computational time, and accuracy. The main contribution of this paper is a novel algorithm called the Mixed Mode Database Miner (MMDBM) classifier, which implements multithreading concepts over a large number of attributes. The proposed method uses the quick sort algorithm in GPU parallel computing to overcome these state-of-the-art limitations. It applies a dynamic rule generation approach for constructing the decision tree based on the predicted rules. The implementation results are compared with both SLIQ and MMDBM using Java and GPU, with the acceleration ratio computed on the BP dataset. The primary objective of this work is to improve performance with less processing time. The results are also analyzed with various thread counts in GPU mining using eight datasets from the UCI Machine Learning repository. The proposed MMDBM algorithm has been validated on these eight datasets with accuracies of 91.3% (diabetes), 89.1% (breast cancer), 96.6% (iris), 89.9% (labor), 95.4% (vote), 89.5% (credit card), 78.7% (supermarket) and 78.7% (BP), while also taking less computational time on the given datasets. The outcome of this work will help the research community develop more effective multithread-based GPU solutions for handling large datasets in minimal processing time. Therefore, this can be considered a reliable and precise method for GPU computing.
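
As a language-level illustration of the multithreading idea described above (sorting many attribute columns in parallel before rule generation), here is a minimal Python sketch using a thread pool. The paper's actual implementation uses a GPU quick sort, so this conveys only the parallel structure, not the CUDA code.

```python
# Sketch: sort each attribute column in parallel. A thread pool stands in
# for the GPU threads purely for illustration.
from concurrent.futures import ThreadPoolExecutor

def sort_attribute(column):
    # Stand-in for the per-attribute GPU quick sort kernel.
    return sorted(column)

def parallel_sort(columns, workers=8):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(sort_attribute, columns))

dataset = [[5, 1, 4], [9, 2, 7], [3, 8, 6]]   # one list per attribute
print(parallel_sort(dataset))                  # [[1, 4, 5], [2, 7, 9], [3, 6, 8]]
```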


2021 ◽  
Vol 11 (8) ◽  
pp. 3439
Author(s):  
Debashis Das Chakladar ◽  
Pradeep Kumar ◽  
Shubham Mandal ◽  
Partha Pratim Roy ◽  
Masakazu Iwamura ◽  
...  

Sign language is a visual language used for communication by hearing-impaired people with the help of hand and finger movements. Indian Sign Language (ISL) is a well-developed and standard way of communication for hearing-impaired people living in India. However, people who use spoken language often face difficulty while communicating with a hearing-impaired person due to a lack of sign language knowledge. In this study, we have developed a 3D avatar-based sign language learning system that converts input speech/text into the corresponding sign movements for ISL. The system consists of three modules. First, the input speech is converted into an English sentence. Then, that English sentence is converted into the corresponding ISL sentence using Natural Language Processing (NLP) techniques. Finally, the motion of the 3D avatar is defined based on the ISL sentence. The translation module achieves a 10.50 SER (Sign Error Rate) score.
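
As a rough sketch of the second module (English sentence to ISL sentence): ISL is commonly described as dropping articles and be-verbs and following subject-object-verb order. The rules and gloss output below are illustrative assumptions, not the authors' NLP module.

```python
# Rough sketch of English-to-ISL conversion: drop function words and
# reorder naively from SVO to SOV. The rules are invented for illustration.
STOPWORDS = {"a", "an", "the", "is", "am", "are", "was", "were", "to"}

def english_to_isl(sentence: str) -> list[str]:
    words = [w for w in sentence.lower().split() if w not in STOPWORDS]
    # Naive SVO -> SOV reordering: move the (assumed) verb to the end.
    if len(words) >= 3:
        subject, verb, *rest = words
        words = [subject, *rest, verb]
    return [w.upper() for w in words]          # glosses are written in caps

print(english_to_isl("I am going to school"))  # ['I', 'SCHOOL', 'GOING']
```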


Water ◽  
2021 ◽  
Vol 13 (6) ◽  
pp. 750
Author(s):  
Antonio Pasculli ◽  
Jacopo Cinosi ◽  
Laura Turconi ◽  
Nicola Sciarra

Current climate change could lead to an intensification of extreme weather events, such as sudden floods and fast-flowing debris flows. Accordingly, the availability of an early-warning system, based on hydrological data and on accurate yet very fast-running mathematical-numerical models, would be not only desirable but necessary in areas of particular hazard. To this purpose, the 2D Riemann–Godunov shallow-water approach, solved in parallel on a Graphical Processing Unit (GPU), which drastically reduces calculation time, and implemented in the RiverFlow2D code (version 2017), was selected as a possible tool to be applied within Alpine contexts. It was also necessary to identify a prototype of an actual rainfall monitoring network and an actual debris-flow event, besides acquiring an accurate numerical description of the topography. The Marderello basin (Alps, Turin, Italy), described by a 5 × 5 m Digital Terrain Model (DTM) and equipped with five rain gauges and one hydrometer, together with the muddy debris-flow event monitored on 22 July 2016, was identified as a typical test case, well representative of mountain contexts and of the phenomena under study. Several parametric analyses, also including selected infiltration modelling, were carried out in order to identify the numerical values best fitting the measured data. Different rheological options, such as Coulomb-Turbulent-Yield and others, were tested, and some useful general suggestions regarding improvement of the adopted mathematical modelling were acquired. The rapidity of the computation due to the GPU, and the comparison between experimental data and numerical results regarding both the arrival time and the height of the debris wave, clearly show that the selected approaches and methodology are suitable and accurate tools to be included in an early-warning system, based at least on simple acoustic and/or light alarms that allow rapid evacuation ahead of fast-flowing debris flows.
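
For reference, Riemann–Godunov shallow-water solvers such as the one above discretize the 2D shallow-water equations in conservative form. A standard statement of that system is (the specific rheological and infiltration source terms used in RiverFlow2D extend the basic source vector S):

```latex
% Standard conservative form of the 2D shallow-water equations.
\frac{\partial \mathbf{U}}{\partial t}
  + \frac{\partial \mathbf{F}(\mathbf{U})}{\partial x}
  + \frac{\partial \mathbf{G}(\mathbf{U})}{\partial y}
  = \mathbf{S}(\mathbf{U}),
\qquad
\mathbf{U} = \begin{pmatrix} h \\ hu \\ hv \end{pmatrix},\quad
\mathbf{F} = \begin{pmatrix} hu \\ hu^{2} + \tfrac{1}{2} g h^{2} \\ huv \end{pmatrix},\quad
\mathbf{G} = \begin{pmatrix} hv \\ huv \\ hv^{2} + \tfrac{1}{2} g h^{2} \end{pmatrix}
```

where h is the flow depth, (u, v) are the depth-averaged velocity components, g is gravity, and S(U) collects bed-slope, friction/rheology, and infiltration source terms. A Godunov scheme advances cell averages of U by solving a Riemann problem at each cell interface; since those interface fluxes can be evaluated independently for every cell, the method maps naturally onto GPU parallelism.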


2021 ◽  
Author(s):  
R. D. Rusiru Sewwantha ◽  
T. N. D. S. Ginige

Sign language is the use of gestures and symbols for communication. It is mainly used by people with communication difficulties due to speech or hearing impairments. Due to a lack of knowledge of sign language, spoken-language users are often unable to communicate with sign language users, creating a communication gap between the two groups. It should also be noted that sign language differs from country to country: while American Sign Language is the most commonly used, Sri Lanka uses Sri Lankan/Sinhala sign language. In this research, the authors propose a mobile solution that uses a Region-Based Convolutional Neural Network (R-CNN) for object detection to reduce the communication gap between sign language users and spoken-language speakers by identifying Sinhala sign language and interpreting it into Sinhala text using Natural Language Processing (NLP). The system is able to identify and interpret still gesture signs in real time using the trained model. The proposed solution uses object detection for the identification of the signs.
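
A minimal sketch of region-based detection inference of the kind described, assuming a Faster R-CNN from torchvision fine-tuned on Sinhala sign classes; the checkpoint path, class names, and threshold are hypothetical placeholders, not the authors' model.

```python
# Sketch: load a (hypothetical) fine-tuned Faster R-CNN and run inference.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

SIGN_CLASSES = ["<background>", "ayubowan", "sthuthi"]  # placeholder labels

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    num_classes=len(SIGN_CLASSES))
model.load_state_dict(torch.load("sinhala_sign_rcnn.pt"))  # hypothetical weights
model.eval()

def detect_signs(image, threshold=0.7):
    """Return (label, score, box) triples above the confidence threshold."""
    with torch.no_grad():
        out = model([to_tensor(image)])[0]
    return [(SIGN_CLASSES[int(label)], float(score), box.tolist())
            for label, score, box in zip(out["labels"], out["scores"], out["boxes"])
            if score >= threshold]
```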


2014 ◽  
Vol 15 (1) ◽  
Author(s):  
Barbara Hänel-Faulhaber ◽  
Nils Skotara ◽  
Monique Kügow ◽  
Uta Salden ◽  
Davide Bottari ◽  
...  

2017 ◽  
Vol 11 (3) ◽  
Author(s):  
Günther Retscher ◽  
Hannes Hofer

Abstract: For Wi-Fi positioning, location fingerprinting is very common but has the disadvantage that establishing a database (DB) of received signal strength (RSS) scans, measured at a large number of known reference points (RPs), is very labour-intensive. To overcome this drawback, a novel approach is developed which uses a logical sequence of intelligent checkpoints (iCPs) instead of RPs distributed in a regular grid. The iCPs are selected RPs which have to be passed along the way when navigating from a start point A to a destination B. They are intelligent in a twofold sense: they are meaningfully selected, and they follow a logical sequence in their correct order. Thus, the following iCP is always known from a vector graph allocation in the DB, and only a small, limited number of iCPs needs to be tested when matching the current RSS scans. This reduces the required processing time significantly. It is shown that the iCP approach achieves a higher success rate than conventional approaches: on average, correct matching results of 90.0% were achieved using a joint DB including RSS scans from all employed smartphones. An even higher success rate is achieved if the same mobile device is used in both the training and positioning phases.
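
A minimal sketch of the candidate-limited matching idea, assuming Euclidean distance between RSS vectors; the fingerprints, access-point names, and successor graph below are invented for illustration, not the paper's DB schema.

```python
# Sketch: match a new RSS scan only against the next expected iCPs from
# the route graph, rather than against every reference point.
import math

fingerprints = {                      # iCP -> mean RSS per access point (dBm)
    "iCP3": {"ap1": -48, "ap2": -71, "ap3": -60},
    "iCP4": {"ap1": -55, "ap2": -62, "ap3": -66},
}
next_icps = {"iCP2": ["iCP3", "iCP4"]}   # vector graph: candidate successors

def rss_distance(scan, ref):
    shared = set(scan) & set(ref)
    return math.sqrt(sum((scan[ap] - ref[ap]) ** 2 for ap in shared))

def match_next_icp(current_icp, scan):
    candidates = next_icps[current_icp]   # small candidate set, not all RPs
    return min(candidates, key=lambda c: rss_distance(scan, fingerprints[c]))

print(match_next_icp("iCP2", {"ap1": -50, "ap2": -70, "ap3": -61}))  # iCP3
```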


2017 ◽  
Vol 13 (4) ◽  
Author(s):  
J. Manimaran ◽  
T. Velmurugan

Abstract: Background: Clinical Text Analysis and Knowledge Extraction System (cTAKES) is an open-source natural language processing (NLP) system. In recent cTAKES development modules, a negation detection (ND) algorithm is used to improve annotation capabilities and simplify the automatic identification of negative contexts in large clinical documents. In this research, two types of ND algorithms, lexicon-based and syntax-based, are analyzed using a database made openly available by the National Center for Biomedical Computing. The aim of this analysis is to find the pros and cons of these algorithms. Methods: Patient medical reports were collected from three institutions included in the 2010 i2b2/VA Clinical NLP Challenge; these are the input data for this analysis. The database includes patient discharge summaries and progress notes. The patient data is fed into five ND algorithms: NegEx, ConText, pyConTextNLP, DEEPEN and Negation Resolution (NR). NegEx, ConText and pyConTextNLP are lexicon-based, whereas DEEPEN and NR are syntax-based. The results from these five ND algorithms are post-processed and compared with the annotated data. Finally, the performance of the ND algorithms is evaluated by computing standard measures including F-measure, kappa statistics and ROC, among others, as well as the execution time of each algorithm. Results: Each algorithm is assessed through practical implementation, based on the accuracy of its results and its computational time, in order to find a robust and reliable ND algorithm. Conclusions: The performance of the chosen ND algorithms is analyzed based on the results produced by this research approach. The time and accuracy of each algorithm are calculated and compared to suggest the best method.
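
As background, the lexicon-based family (NegEx, ConText, pyConTextNLP) works by scanning for trigger phrases that negate concepts within a limited scope window. The following is a deliberately simplified Python sketch of that mechanism, not any of the five algorithms' actual implementations; the trigger list and window size are illustrative.

```python
# Simplified lexicon-based negation check in the spirit of NegEx:
# a trigger phrase negates a concept within a fixed word window.
NEGATION_TRIGGERS = ["no", "denies", "without", "ruled out"]

def is_negated(sentence: str, concept: str, window: int = 5) -> bool:
    tokens = sentence.lower().split()
    concept_pos = tokens.index(concept.lower())
    for trigger in NEGATION_TRIGGERS:
        t = trigger.split()
        for i in range(len(tokens) - len(t) + 1):
            # Trigger must precede the concept within the scope window.
            if tokens[i:i + len(t)] == t and \
                    0 < concept_pos - (i + len(t) - 1) <= window:
                return True
    return False

print(is_negated("patient denies chest pain", "pain"))   # True
print(is_negated("patient reports chest pain", "pain"))  # False
```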


Webology ◽  
2021 ◽  
Vol 18 (Special Issue 01) ◽  
pp. 196-210
Author(s):  
Dr.P. Golda Jeyasheeli ◽  
N. Indumathi

Nowadays, interaction between deaf and mute people and hearing people is difficult, because hearing people struggle to understand the meaning of the gestures, while deaf and mute people have problems with sentence formation and grammatical correctness. To alleviate these issues, an automatic sign language sentence generation approach is proposed. In this project, Natural Language Processing (NLP) based methods are used. NLP is a powerful tool for translation into human language and is responsible for forming meaningful sentences from sign language symbols that can also be understood by hearing people. In this system, both conventional NLP methods and deep learning NLP methods are used for sentence generation, and the efficiency of the two is compared. The generated sentence is displayed in an Android application as the output. The system aims to bridge the gap in interaction between deaf and mute people and hearing people.
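
A minimal sketch of the conventional rule-based direction, assuming a subject-object-verb gloss order and an invented sentence template; the deep-learning NLP path compared in the paper is not shown.

```python
# Sketch: turn a sequence of sign glosses into a grammatical sentence by
# re-inserting articles and inflection. Template and gloss order are
# invented for demonstration.
def glosses_to_sentence(glosses: list[str]) -> str:
    words = [g.lower() for g in glosses]
    if len(words) == 3:                      # assume S-O-V gloss order
        subject, obj, verb = words
        return f"The {subject} {verb}s the {obj}."
    return " ".join(words).capitalize() + "."

print(glosses_to_sentence(["BOY", "BALL", "KICK"]))  # "The boy kicks the ball."
```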


2019 ◽  
Vol 1 (2) ◽  
pp. 28-33
Author(s):  
Haves Ashan

Retinal detachment is a serious, vision-threatening condition that can lead to the complication of blindness. The only treatment for retinal detachment is surgery, and not all surgical procedures in retinal detachment cases have a good success rate. Therefore, it is very important to find tears in the retina before they develop further into detachments. Laser retinopexy, performed with the aim of causing adhesion around the retinal tear, has been recommended as a measure to prevent the complication of retinal detachment. Laser retinopexy has an important role in patients with a retinal tear who have symptoms of floaters and flashes and persistent traction on the retina, especially in the area around the tear, because a symptomatic retinal tear is likely to develop into retinal detachment. However, although laser retinopexy is often recommended, its effectiveness is in fact still controversial.


2015 ◽  
Vol 14 (10) ◽  
pp. 6135-6141
Author(s):  
Pooja Handa ◽  
Meenu Kalra ◽  
Rajesh Sachdeva

Green computing is the practice of reducing the power consumed by a computer and thereby reducing carbon emissions. The total power consumed by the computer (excluding the monitor) at full computational load equals the sum of the power consumed by the GPU in its idle state and by the CPU at full load. Recently, there has been tremendous interest in accelerating general computing applications using a Graphics Processing Unit (GPU). The GPU now provides computing power not only for fast processing of graphics applications but also for general, computationally complex, data-intensive applications. At the same time, power and energy consumption are becoming important design criteria, so software designs have to consider power/energy consumption together with performance. The GPU can therefore take over 100% of the CPU's work while the CPU stays near its idle state, keeping the power consumed low. When the GPU is doing all the work, the CPU remains at a load less than its idle load, so the total power consumed equals the power consumed by the CPU at a load below idle plus the power consumed by the GPU.
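
A toy calculation of the stated power relation; the wattages are invented for demonstration, not measurements from the paper.

```python
# Invented wattages illustrating the power relation stated above.
P_CPU_FULL = 95.0        # CPU at full computational load (W)
P_CPU_BELOW_IDLE = 15.0  # CPU when the GPU has taken over its work (W)
P_GPU_IDLE = 30.0        # GPU idle (W)
P_GPU_ACTIVE = 90.0      # GPU doing all the work (W)

# Baseline: CPU does the work, GPU idles.
baseline = P_CPU_FULL + P_GPU_IDLE           # 125.0 W
# GPU offload: CPU drops below idle, GPU is active.
offloaded = P_CPU_BELOW_IDLE + P_GPU_ACTIVE  # 105.0 W

print(f"baseline {baseline} W, offloaded {offloaded} W, "
      f"saving {baseline - offloaded} W under these assumptions")
```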

