Development of A Computer Aided Real-Time Interpretation System for Indigenous Sign Language in Nigeria Using Convolutional Neural Network

Author(s):  
Ayodele Olawale Olabanji ◽  
Akinlolu Adediran Ponnle

Sign language is the primary method of communication adopted by deaf and hearing-impaired individuals. Indigenous sign language in Nigeria is an area of growing interest, and the major challenge is communication between signers and non-signers. Recent advancements in computer vision and deep learning neural networks (DLNN) have led to the exploration of technological approaches to tackling this challenge. One area where DLNN has had extensive impact is the interpretation of hand signs. This study presents an interpretation system for the indigenous sign language in Nigeria. The methodology comprises three key phases: dataset creation, computer vision techniques, and deep learning model development. A multi-class Convolutional Neural Network (CNN) is designed to train on and interpret the indigenous signs. The model is evaluated on a custom-built dataset of selected indigenous words comprising 15,000 image samples. The experimental results show excellent performance from the interpretation system, with accuracy reaching 95.67%.
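As a concrete illustration of the multi-class CNN component, the sketch below shows a minimal Keras classifier of the kind described; the abstract does not publish the actual architecture, so the image size, number of sign classes, and layer widths are illustrative assumptions rather than the authors' design.

```python
# Minimal sketch of a multi-class CNN for static hand-sign images.
# Image size, class count, and layer widths are assumptions, not the
# published architecture.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 20          # hypothetical number of indigenous signs
IMG_SIZE = (64, 64)       # hypothetical input resolution

model = models.Sequential([
    layers.Input(shape=(*IMG_SIZE, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

Such a model would be trained on the 15,000-image dataset split into training and test sets, with the softmax output giving one probability per sign.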

2020 ◽  
Vol 17 (9) ◽  
pp. 4660-4665
Author(s):  
L. Megalan Leo ◽  
T. Kalpalatha Reddy

Dental caries is one of the most prevalent dental diseases worldwide; almost 90% of people are affected by cavities. Dental caries is cavitation of the tooth caused by remnant food and bacteria, and it is curable and preventable when identified at an early stage. Dentists use radiographic examination in addition to visual-tactile inspection to identify caries, but occlusal, pit, and fissure caries are difficult to detect, and a cavity left untreated and unidentified at an early stage can lead to severe problems. Machine learning can be applied to this problem using a dataset labelled by experienced dentists. In this paper, a convolution-based deep learning method is applied to identify the presence of cavities in an image. 480 bitewing radiography images were collected from the Elsevier standard database. All input images are resized to 128 × 128 matrices. In pre-processing, a selective median filter is used to reduce noise in the image. The pre-processed inputs are given to a deep learning model in which a convolutional neural network with the GoogleNet Inception v3 architecture is implemented. The ReLU activation function is used with GoogleNet to identify caries, providing dentists with precise, optimized results about caries and the affected area. The proposed technique achieves 86.7% accuracy on the testing dataset.
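A hedged sketch of how the described Inception v3 transfer-learning setup might look in Keras is given below; the inputs are assumed to be the median-filtered, 128 × 128 bitewing images, and the classification head (layer sizes, binary cavity/no-cavity output) is an assumption, since the paper does not specify it.

```python
# Illustrative caries classifier built on an Inception v3 backbone.
# Inputs are assumed to be pre-processed (selective median filter) and
# resized to 128 x 128; the head and training settings are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionV3

base = InceptionV3(include_top=False, weights="imagenet",
                   input_shape=(128, 128, 3), pooling="avg")

model = models.Sequential([
    base,
    layers.Dense(64, activation="relu"),    # ReLU head, width assumed
    layers.Dense(1, activation="sigmoid"),  # cavity present / absent
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
```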


2020 ◽  
Author(s):  
Zicheng Hu ◽  
Alice Tang ◽  
Jaiveer Singh ◽  
Sanchita Bhattacharya ◽  
Atul J. Butte

Cytometry technologies are essential tools for immunology research, providing high-throughput measurements of immune cells at the single-cell level. Traditional approaches to interpreting and using cytometry measurements include manual or automated gating to identify cell subsets from the cytometry data; gating provides highly intuitive results but may lead to significant information loss, as additional details in measured or correlated cell signals might be missed. In this study, we propose and test a deep convolutional neural network for analyzing cytometry data in an end-to-end fashion, allowing a direct association between raw cytometry data and the clinical outcome of interest. Using nine large CyTOF studies from the open-access ImmPort database, we demonstrated that the deep convolutional neural network model can accurately diagnose latent cytomegalovirus (CMV) infection in healthy individuals, even when using highly heterogeneous data from different studies. In addition, we developed a permutation-based method for interpreting the deep convolutional neural network model and identified a CD27-CD94+ CD8+ T cell population significantly associated with latent CMV infection. Finally, we provide a tutorial for creating, training and interpreting the tailored deep learning model for cytometry data using Keras and TensorFlow (github.com/hzc363/DeepLearningCyTOF).
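The permutation-based interpretation idea can be sketched roughly as follows; this is not the authors' released code (see the linked GitHub tutorial for that), and it assumes a trained binary model that takes samples shaped (n_samples, n_cells, n_markers) and outputs a CMV probability.

```python
# Rough sketch of permutation-based marker importance for a trained
# cytometry model; assumes `model.predict` returns a CMV probability per
# sample and that samples are arrays of shape (n_samples, n_cells, n_markers).
import numpy as np

def marker_importance(model, samples, labels, n_repeats=10, rng=None):
    """Accuracy drop when one marker channel is shuffled across cells."""
    if rng is None:
        rng = np.random.default_rng(0)
    base_acc = np.mean((model.predict(samples).ravel() > 0.5) == labels)
    importances = []
    for m in range(samples.shape[-1]):
        drops = []
        for _ in range(n_repeats):
            shuffled = samples.copy()
            perm = rng.permutation(shuffled.shape[1])
            shuffled[:, :, m] = shuffled[:, perm, m]  # break marker m's structure
            acc = np.mean((model.predict(shuffled).ravel() > 0.5) == labels)
            drops.append(base_acc - acc)
        importances.append(np.mean(drops))
    return np.array(importances)  # larger drop = more important marker
```

Markers whose shuffling degrades accuracy the most are the ones the model relies on, which is how a population such as the CD27-CD94+ CD8+ T cells can be surfaced.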


2021 ◽  
Author(s):  
P. Golda Jeyasheeli ◽  
N. Indumathi

About 1 percent of the Indian population is deaf and mute. Deaf and mute people use gestures to interact with each other, but ordinary people fail to grasp the significance of these gestures, which makes interaction between deaf or mute people and the rest of society hard. To help ordinary citizens understand the signs, an automated sign language identification system is proposed. A smart wearable hand device is designed by attaching different sensors to gloves to capture the gestures. Each gesture has unique sensor values, and those values are collected as Excel data. The characteristics of the movements are extracted and categorized with the aid of a convolutional neural network (CNN). Data from the test set is identified by the CNN according to this classification. The objective of this system is to bridge the interaction gap between people who are deaf or hard of hearing and the rest of society.
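A plausible shape for the CNN stage is a small 1D network over a fixed window of glove-sensor readings; the channel count, window length, and number of gesture classes below are assumptions, since the paper does not list them.

```python
# Illustrative 1D CNN for glove-gesture classification. Sensor readings are
# assumed to be exported from the Excel sheets as windows of shape
# (WINDOW, N_CHANNELS); all sizes here are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

N_CHANNELS = 8     # hypothetical number of glove sensors (flex, IMU, ...)
WINDOW = 50        # hypothetical readings captured per gesture
N_GESTURES = 10    # hypothetical number of gesture classes

model = models.Sequential([
    layers.Input(shape=(WINDOW, N_CHANNELS)),
    layers.Conv1D(32, 5, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Conv1D(64, 3, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(N_GESTURES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```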


Author(s):  
Kannuru Padmaja

Abstract: In this paper, we present the implementation of Devanagari handwritten character recognition using deep learning. Handwritten character recognition is gaining importance due to its major contribution to automation systems. Devanagari is one of the scripts used for several languages in India; it consists of 12 vowels and 36 consonants. Here we implement a deep learning model to recognize the characters. Character recognition involves mainly five steps: pre-processing, segmentation, feature extraction, prediction, and post-processing. The model uses a convolutional neural network trained on character images together with image processing techniques, and the recognition accuracy is reported. Keywords: convolutional neural network, character recognition, Devanagari script, deep learning.
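The pre-processing, segmentation, and prediction steps can be sketched with OpenCV as below; `model` is assumed to be a trained Keras classifier over 32 × 32 grayscale crops, and the thresholds and crop handling are illustrative, not the paper's exact pipeline.

```python
# Sketch of the pipeline steps: pre-processing (denoise + binarize),
# segmentation (connected components), and prediction with a trained CNN.
# The model input size and filtering thresholds are assumptions.
import cv2
import numpy as np

def recognize_characters(page_path, model, class_names):
    img = cv2.imread(page_path, cv2.IMREAD_GRAYSCALE)
    img = cv2.medianBlur(img, 3)                       # pre-processing
    _, binary = cv2.threshold(img, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)   # segmentation
    results = []
    for c in sorted(contours, key=lambda c: cv2.boundingRect(c)[0]):
        x, y, w, h = cv2.boundingRect(c)
        if w * h < 50:                                 # skip tiny specks
            continue
        crop = cv2.resize(binary[y:y + h, x:x + w], (32, 32)) / 255.0
        probs = model.predict(crop[np.newaxis, ..., np.newaxis], verbose=0)
        results.append(class_names[int(np.argmax(probs))])    # prediction
    return results
```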


2020 ◽  
Vol 9 (05) ◽  
pp. 25052-25056
Author(s):  
Abhi Kadam ◽  
Anupama Mhatre ◽  
Sayali Redasani ◽  
Amit Nerurkar

Current lighting technologies extend the options for changing the appearance of rooms and closed spaces, creating ambiences with an affective meaning. Using intelligence, these ambiences may instantly be adapted to the needs of the room's occupant(s), possibly improving their well-being. In this paper, we actuate the lighting in our surroundings using mood detection. We analyze the mood of a person through Facial Emotion Recognition using a deep learning model, namely a Convolutional Neural Network (CNN). Once the emotion is recognized, we actuate the surrounding lighting in accordance with the mood. Based on the implementation results, the system needs to be developed further by adding more specific data classes and training data.
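The glue between emotion recognition and lighting can be as simple as mapping the predicted class to a colour preset, as in the hedged sketch below; `emotion_model` is assumed to be a trained Keras FER classifier over 48 × 48 grayscale faces, and `set_room_lights` is a hypothetical stand-in for whatever smart-lighting API is used.

```python
# Illustrative mood-to-lighting glue code. The emotion classes, colour
# presets, and lighting API are assumptions, not the paper's exact setup.
import cv2
import numpy as np

EMOTIONS = ["angry", "happy", "neutral", "sad", "surprised"]  # assumed classes
MOOD_TO_RGB = {
    "angry": (80, 120, 255),      # cool, calming blue
    "happy": (255, 220, 150),     # warm white
    "neutral": (255, 255, 255),
    "sad": (255, 180, 80),        # warm amber
    "surprised": (200, 255, 200),
}

def adjust_lighting(face_bgr, emotion_model, set_room_lights):
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)
    face = cv2.resize(gray, (48, 48)) / 255.0          # assumes a cropped face
    probs = emotion_model.predict(face[np.newaxis, ..., np.newaxis], verbose=0)
    mood = EMOTIONS[int(np.argmax(probs))]
    set_room_lights(*MOOD_TO_RGB[mood])                # actuate the ambience
    return mood
```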


Author(s):  
S Gopi Naik

Abstract: The aim is to establish an integrated system that can manage high-quality visual information and also detect weapons quickly and efficiently. This is achieved by integrating ARM-based computer vision and optimization algorithms with deep neural networks capable of detecting the presence of a threat. The whole system is connected to a Raspberry Pi module, which captures a live video stream and evaluates it using a deep convolutional neural network. Because object identification is tightly coupled with real-time video and image analysis, approaches that build sophisticated ensembles combining various low-level image features with high-level information from object detectors and scene classifiers tend to plateau quickly in performance. Deep learning models, which can learn semantic, high-level, deeper features, have been developed to overcome the issues present in such optimization-based approaches. The paper reviews deep learning based object detection frameworks that use Convolutional Neural Network layers to give a better understanding of object detection. The Mobile-Net SSD model differs from these frameworks in network design, training methods, and optimization functions, among other things. Weapon detection has helped reduce the crime rate in areas under surveillance, yet security remains a major concern in human life. Raspberry Pi modules and computer vision have been widely used in the detection and monitoring of weapons, and with the growing need for personal safety, privacy, and live broadcasting systems that can detect and analyse images, such systems are becoming indispensable for monitoring suspicious areas. This work uses a Mobile-Net SSD algorithm to achieve automatic weapon and object detection. Keywords: Computer Vision, Weapon and Object Detection, Raspberry Pi Camera, RTSP, SMTP, Mobile-Net SSD, CNN, Artificial Intelligence.
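A minimal sketch of running MobileNet-SSD on a Raspberry Pi camera stream with OpenCV's DNN module is shown below; the model files are the commonly distributed Caffe MobileNet-SSD weights, and the stream URL, confidence threshold, and alert hook are assumptions rather than the authors' exact configuration.

```python
# Hedged sketch: MobileNet-SSD inference over an RTSP stream with OpenCV.
# Weapon-class mapping and the SMTP alert are left as comments because the
# paper does not publish those details.
import cv2

net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                               "MobileNetSSD_deploy.caffemodel")
cap = cv2.VideoCapture("rtsp://<camera-address>/stream")  # Pi camera feed

while True:
    ok, frame = cap.read()
    if not ok:
        break
    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
                                 0.007843, (300, 300), 127.5)
    net.setInput(blob)
    detections = net.forward()                 # shape: (1, 1, N, 7)
    for i in range(detections.shape[2]):
        confidence = float(detections[0, 0, i, 2])
        if confidence > 0.5:
            class_id = int(detections[0, 0, i, 1])
            # If class_id corresponds to a weapon class, an SMTP alert with
            # the offending frame could be sent here.
            print(f"class {class_id} detected, confidence {confidence:.2f}")
```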


Symmetry ◽  
2021 ◽  
Vol 13 (11) ◽  
pp. 2012
Author(s):  
Jiameng Gao ◽  
Chengzhong Liu ◽  
Junying Han ◽  
Qinglin Lu ◽  
Hengxing Wang ◽  
...  

Wheat is a very important food crop for mankind, and many new varieties are bred every year. Accurate identification of wheat varieties can promote the development of the wheat industry and the protection of breeding property rights. Although gene analysis technology can be used to determine wheat varieties accurately, it is costly, time-consuming, and inconvenient. Traditional machine learning methods can significantly reduce the cost and time of wheat cultivar identification, but their accuracy is not high. In recent years, the relatively popular deep learning methods have further improved accuracy over traditional machine learning, although it is quite difficult to continue improving identification accuracy after a deep learning model has converged. Based on the ResNet and SENet models, this paper draws on the idea of the bagging-based ensemble estimator algorithm and proposes a deep learning model for wheat classification, CMPNet, which couples images from the tillering period, the flowering period, and the seed. This convolutional neural network (CNN) model has a symmetrical structure along the direction of the tensor flow. The model uses collected images of different types of wheat in multiple growth periods. First, it applies the transfer learning method to the ResNet-50, SE-ResNet, and SE-ResNeXt models and trains them on the collected images of 30 kinds of wheat in different growth periods. It then uses a concat layer to connect the output layers of the three models, and finally obtains the wheat classification results through the softmax function. The accuracy of wheat variety identification increased to 99.51%, compared with 92.07% at the seed stage, 95.16% at the tillering stage, and 97.38% at the flowering stage when the growth periods were used individually. The model's single inference time was only 0.0212 s. The model not only significantly improves the classification accuracy of wheat varieties, but also achieves low cost and high efficiency, making it a novel and important technical reference for wheat producers, managers, and law enforcement supervisors in wheat production practice.
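The three-branch fusion can be sketched in Keras as below; SE-ResNet and SE-ResNeXt are not available in keras.applications, so ResNet50 stands in for all three backbones here, and the input resolution is assumed. Only the 30-class output follows the abstract.

```python
# Sketch of the concat-then-softmax fusion over three growth-stage branches.
# ResNet50 is a stand-in for the ResNet-50 / SE-ResNet / SE-ResNeXt backbones;
# image size and other settings are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50

def branch(name):
    inp = layers.Input(shape=(224, 224, 3), name=f"{name}_image")
    backbone = ResNet50(include_top=False, weights="imagenet", pooling="avg")
    return inp, backbone(inp)

seed_in, seed_feat = branch("seed")
tiller_in, tiller_feat = branch("tillering")
flower_in, flower_feat = branch("flowering")

fused = layers.Concatenate()([seed_feat, tiller_feat, flower_feat])
out = layers.Dense(30, activation="softmax")(fused)   # 30 wheat cultivars

model = models.Model([seed_in, tiller_in, flower_in], out)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```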


Sebatik ◽  
2020 ◽  
Vol 24 (2) ◽  
pp. 300-306
Author(s):  
Muhamad Jaelani Akbar ◽  
Mochamad Wisuda Sardjono ◽  
Margi Cahyanti ◽  
Ericks Rachmat Swedia

Vegetables are plant-based foodstuffs that usually have a high water content and are consumed fresh or after minimal processing. The diversity of vegetables found in the world leads to an equally diverse classification of vegetables, so a digital approach is needed to recognize vegetable types quickly and easily. In this study, seven types of vegetables were used: broccoli, corn, long beans, bitter gourd, purple eggplant, tomato, and cabbage. The dataset consists of 941 vegetable images of the 7 types, plus 131 vegetable images of types not present in the dataset, and 291 non-vegetable images. To classify the vegetable types, the Convolutional Neural Network (CNN) algorithm is used, part of a relatively new and rapidly developing field within Machine Learning. CNN is one of the algorithms of the Deep Learning approach and performs well in Computer Vision tasks, one of which is image classification. Testing was carried out on five Android-based mobile devices. Python was used as the programming language for building this mobile application, with the TensorFlow module used for training and testing the data. The method used for this image classification is the Convolutional Neural Network (CNN). The final test accuracy obtained in recognizing vegetable types was 98.1%, with one of the test results being the classification of corn with an accuracy of 99.98049%.
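Since the classifier runs on Android devices, one plausible deployment path is exporting the trained TensorFlow model with the TensorFlow Lite converter, as in the sketch below; the saved-model file name is hypothetical and the abstract does not state which export mechanism was actually used.

```python
# Hedged sketch: export a trained Keras vegetable classifier for mobile use.
# The model path and the use of TFLite are assumptions for illustration.
import tensorflow as tf

model = tf.keras.models.load_model("vegetable_cnn.h5")   # hypothetical path
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]      # optional quantization
tflite_model = converter.convert()

with open("vegetables.tflite", "wb") as f:
    f.write(tflite_model)
```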


10.2196/24762 ◽  
2021 ◽  
Vol 9 (8) ◽  
pp. e24762
Author(s):  
Hyun-Lim Yang ◽  
Chul-Woo Jung ◽  
Seong Mi Yang ◽  
Min-Soo Kim ◽  
Sungho Shim ◽  
...  

Background: Arterial pressure-based cardiac output (APCO) is a less invasive method for estimating cardiac output without concerns about complications from the pulmonary artery catheter (PAC). However, inaccuracies of currently available APCO devices have been reported, and improvements to the algorithm by researchers are impossible because only a subset of the algorithm has been released. Objective: In this study, an open-source algorithm was developed and validated using a convolutional neural network and a transfer learning technique. Methods: A retrospective study was performed using data from a prospective cohort registry of intraoperative bio-signal data from a university hospital. The convolutional neural network model was trained using the arterial pressure waveform as input and the stroke volume (SV) value as the output. The model parameters were pretrained using SV values from a commercial APCO device (Vigileo or EV1000 with the FloTrac algorithm) and then adjusted with a transfer learning technique using SV values from the PAC. The performance of the model was evaluated using the absolute error against PAC measurements on a testing dataset from separate periods. Finally, we compared the performance of the deep learning model and FloTrac against the SV values from the PAC. Results: A total of 2057 surgical cases (1958 training and 99 testing cases) from the registry were used. For the deep learning model, the absolute error of SV was 14.5 (SD 13.4) mL overall (10.2 [SD 8.4] mL in cardiac surgery and 17.4 [SD 15.3] mL in liver transplantation). Compared with FloTrac (absolute errors of 16.5 [SD 15.4] mL and 18.3 [SD 15.1] mL), the absolute errors of the deep learning model were significantly smaller (P<.001). Conclusions: The deep learning-based APCO algorithm showed better performance than the commercial APCO device. Further improvement of the algorithm developed in this study may be helpful for estimating cardiac output accurately in clinical practice and optimizing high-risk patient care.
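Conceptually, the two-stage training described above (pre-training against FloTrac SV, then fine-tuning against PAC SV) can be sketched with a 1D CNN regressor as below; the waveform window length, layer sizes, and learning rates are assumptions, since the abstract does not include them.

```python
# Conceptual sketch of the pre-train / fine-tune scheme for an APCO regressor.
# Window length, architecture, and learning rates are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

WINDOW = 2000   # hypothetical number of arterial-pressure samples per window

model = models.Sequential([
    layers.Input(shape=(WINDOW, 1)),
    layers.Conv1D(32, 9, strides=2, activation="relu"),
    layers.Conv1D(64, 9, strides=2, activation="relu"),
    layers.Conv1D(128, 9, strides=2, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1),                      # stroke volume in mL
])

# Stage 1: pre-train against the commercial device's SV estimates.
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="mae")
# model.fit(waves_flotrac, sv_flotrac, epochs=20)

# Stage 2: fine-tune on PAC-measured SV at a lower learning rate.
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss="mae")
# model.fit(waves_pac, sv_pac, epochs=10)
```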

