A Convolutional Neural Network-based Mobile Application to Bedside Neonatal Pain Assessment

Author(s):  
Lucas P. Carlini ◽  
Leonardo A. Ferreira ◽  
Gabriel A. S. Coutrin ◽  
Victor V. Varoto ◽  
Tatiany M. Heiderich ◽  
...  
2021 ◽  
Vol 3 (1) ◽  
pp. 8-14
Author(s):  
D. V. Fedasyuk ◽  
T. V. Demianets ◽  

Melanoma is the deadliest form of skin cancer, so early diagnosis offers the best prognosis for treatment. Modern methods for early detection of melanoma from an image of the tumor are reviewed, and their advantages and disadvantages are analyzed. The article presents a prototype of a mobile application for the Android operating system that detects melanoma in an image of a mole using a convolutional neural network. The application provides melanoma detection, a history of previous examinations, and a gallery of images from previous examinations grouped by lesion location. The HAM10000-based training dataset was supplemented with melanoma images from the archive of The International Skin Imaging Collaboration to eliminate class imbalance and improve network accuracy. A search for existing neural networks that provide high accuracy was conducted, and the VGG16, MobileNet, and NASNetMobile architectures were selected for study. Transfer learning and fine-tuning were applied to adapt these networks to the task of skin lesion classification; it was established that these techniques yield high accuracy on this task. The process of converting a convolutional neural network to the optimized FlatBuffer format with TensorFlow Lite for deployment on a mobile device is described. The performance of the selected networks on the mobile device was evaluated by classification time on the CPU and GPU, and the memory occupied by each network's file was compared before and after conversion. The TensorFlow Lite converter was shown to significantly reduce the file size of a neural network without affecting its accuracy, thanks to the optimized format.
The results indicate fast inference and compact network files on the device, and graphics acceleration significantly decreases the classification time for a tumor image. Based on the analyzed parameters, NASNetMobile was selected as the optimal neural network for the melanoma detection mobile application.
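The class-imbalance step mentioned above (supplementing the under-represented melanoma class before training) can be illustrated with a minimal sketch. The abstract describes adding real ISIC images rather than duplicating existing ones; the oversampling below, and the function name `oversample_minority`, are hypothetical stand-ins to show the balancing idea only.

```python
import random


def oversample_minority(samples, labels, seed=0):
    """Balance a labelled dataset by duplicating entries of smaller classes
    until every class matches the largest one. A hypothetical sketch of the
    balancing idea; the cited work adds real ISIC images instead of copies."""
    rng = random.Random(seed)
    by_class = {}
    for sample, label in zip(samples, labels):
        by_class.setdefault(label, []).append(sample)
    target = max(len(items) for items in by_class.values())
    balanced = []
    for label, items in by_class.items():
        # Duplicate random members of the class until it reaches the target size.
        extra = [rng.choice(items) for _ in range(target - len(items))]
        balanced.extend((sample, label) for sample in items + extra)
    rng.shuffle(balanced)
    return balanced
```

With two melanoma and three nevus samples, the result contains three of each, so class weights in training become uniform.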


2019 ◽  
Vol 11 (10-SPECIAL ISSUE) ◽  
pp. 1127-1135
Author(s):  
James Arnold Nogra ◽  
Cherry Lyn Sta Romana ◽  
Elmer Maravillas

2020 ◽  
Author(s):  
Vigno Moura ◽  
Vilson Almeida ◽  
Domingos Bruno Sousa Santos ◽  
Nator Costa ◽  
Luciano Lopes Sousa ◽  
...  

Abstract In this paper, a novel method for classifying electrocardiogram signals on mobile devices is proposed, which classifies different arrhythmias according to the Association for the Advancement of Medical Instrumentation standard EC57. A convolutional neural network was constructed, trained, and validated on the MIT-BIH Arrhythmia Dataset, which has 5 classes: normal beat, supraventricular premature beat, premature ventricular contraction, fusion of ventricular and normal beat, and unclassifiable beat. Once trained and validated, the model is subjected to a post-training quantization stage using the TensorFlow Lite conversion method. The results were satisfactory: both before and after quantization, the convolutional neural network achieved an accuracy of 98.5%. The quantization technique yielded a significant reduction in model size, approximately 90% relative to the original model, thus enabling the development of the mobile application.
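The size reduction from post-training quantization comes from storing each weight in 8 bits instead of 32. The sketch below shows the per-tensor affine scheme (w ≈ scale · (q − zero_point)) that underlies int8 quantization; it is a conceptual illustration, not the TensorFlow Lite converter itself, and int8 storage alone accounts for a 75% reduction (the ~90% reported above includes further optimizations of the serialized model).

```python
import numpy as np


def quantize_int8(w):
    """Per-tensor affine quantization of float32 weights to int8,
    following the mapping w ~ scale * (q - zero_point)."""
    w_min, w_max = float(w.min()), float(w.max())
    scale = (w_max - w_min) / 255.0 or 1.0  # avoid zero scale for constant tensors
    zero_point = int(round(-128 - w_min / scale))
    q = np.clip(np.round(w / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point


def dequantize_int8(q, scale, zero_point):
    """Recover approximate float32 weights from the quantized tensor."""
    return (q.astype(np.float32) - zero_point) * scale
```

Round-tripping a tensor of weights in [−1, 1] through this mapping keeps the per-weight error below one quantization step (≈0.008 here) while shrinking storage fourfold, which is why accuracy can survive quantization essentially unchanged.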


10.2196/23920 ◽  
2021 ◽  
Vol 23 (2) ◽  
pp. e23920
Author(s):  
Byung-Moon Choi ◽  
Ji Yeon Yim ◽  
Hangsik Shin ◽  
Gyujeong Noh

Background Although commercially available analgesic indices based on biosignal processing have been used to quantify nociception during general anesthesia, their performance is low in conscious patients. Therefore, there is a need to develop a new analgesic index with improved performance to quantify postoperative pain in conscious patients. Objective This study aimed to develop a new analgesic index using photoplethysmogram (PPG) spectrograms and a convolutional neural network (CNN) to objectively assess pain in conscious patients. Methods PPGs were obtained from a group of surgical patients for 6 minutes both in the absence (preoperatively) and in the presence (postoperatively) of pain. Then, the PPG data of the latter 5 minutes were used for analysis. Based on the PPGs and a CNN, we developed a spectrogram–CNN index for pain assessment. The area under the curve (AUC) of the receiver-operating characteristic curve was measured to evaluate the performance of the 2 indices. Results PPGs from 100 patients were used to develop the spectrogram–CNN index. When there was pain, the mean (95% CI) spectrogram–CNN index value increased significantly—baseline: 28.5 (24.2-30.7) versus recovery area: 65.7 (60.5-68.3); P<.01. The AUC and balanced accuracy were 0.76 and 71.4%, respectively. The spectrogram–CNN index cutoff value for detecting pain was 48, with a sensitivity of 68.3% and specificity of 73.8%. Conclusions Although there were limitations to the study design, we confirmed that the spectrogram–CNN index can efficiently detect postoperative pain in conscious patients. Further studies are required to assess the feasibility of the spectrogram–CNN index and prevent overfitting in various populations, including patients under general anesthesia. Trial Registration Clinical Research Information Service KCT0002080; https://cris.nih.go.kr/cris/search/search_result_st01.jsp?seq=6638


Sign language is a language expressed through hand gestures. It is a medium for hearing-impaired people (deaf or mute) to communicate with others. However, to communicate with a hearing-impaired person, the communicator must have knowledge of sign language; this ensures that the message delivered by the hearing-impaired person is understood. This project proposes real-time Malaysian sign language detection based on the Convolutional Neural Network (CNN) technique, utilizing the You Only Look Once version 3 (YOLOv3) algorithm. Sign language images from web sources and frames from recorded sign language videos were collected, and each image was labelled as either an alphabet letter or a movement. Once the preprocessing phase was completed, the system was trained and tested on the Darknet framework. The system achieved 63 percent accuracy, with learning saturation (overfitting) at 7000 iterations. In the future, this model will be integrated with other platforms, such as a mobile application.
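Detectors trained in the YOLOv3 style are evaluated by how well predicted bounding boxes overlap the labelled ones, measured by intersection-over-union (IoU); the same measure drives non-max suppression at inference time. A minimal sketch of that computation, with boxes as (x1, y1, x2, y2) corners:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as
    (x1, y1, x2, y2) corners, the overlap measure YOLO-style detectors
    use for matching predictions to labels and for non-max suppression."""
    # Corners of the intersection rectangle (may be empty).
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

Identical boxes score 1.0, disjoint boxes 0.0, and a detection is conventionally counted as correct when its IoU with the label exceeds a threshold such as 0.5.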

