Real-time Detection of Aortic Valve in Echocardiography using Convolutional Neural Networks

Author(s):  
Muhammad Hanif Ahmad Nizar ◽  
Chow Khuen Chan ◽  
Azira Khalil ◽  
Ahmad Khairuddin Mohamed Yusof ◽  
Khin Wee Lai

Background: Valvular heart disease is a serious condition that leads to mortality and increases medical care costs. The aortic valve is the valve most commonly affected by this disease. Doctors rely on echocardiography for diagnosing and evaluating valvular heart disease. However, echocardiographic images are of poorer quality than Computed Tomography and Magnetic Resonance Imaging scans. This study proposes the development of Convolutional Neural Networks (CNN) that can function optimally during a live echocardiographic examination to detect the aortic valve. An automated detection system for echocardiography will improve the accuracy of medical diagnosis and can support further medical analysis based on the resulting detections. Methods: Two detection architectures, Single Shot Multibox Detector (SSD) and Faster Region-based Convolutional Neural Network (Faster R-CNN), with various feature extractors were trained on echocardiography images from 33 patients. Thereafter, the models were tested on 10 echocardiography videos. Results: Faster R-CNN Inception v2 showed the highest accuracy (98.6%), followed closely by SSD MobileNet v2. In terms of speed, SSD MobileNet v2 suffered a 46.81% loss in frames per second (fps) during real-time detection but still performed better than the other neural network models. Additionally, SSD MobileNet v2 used the least Graphics Processing Unit (GPU) resources, while Central Processing Unit (CPU) usage was relatively similar across all models. Conclusion: Our findings provide a foundation for implementing a convolutional detection system in echocardiography for medical purposes.
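
As an illustration of how such a detector could be run frame by frame on an echocardiography feed, the following is a minimal sketch assuming an exported TensorFlow Object Detection API model (the model zoo family that includes SSD MobileNet v2 and Faster R-CNN Inception v2); the file paths, video source, and 0.5 confidence threshold are assumptions, not the authors' settings.

```python
import cv2
import numpy as np
import tensorflow as tf

# Load an exported Object Detection API SavedModel (placeholder path).
detect_fn = tf.saved_model.load("exported_model/saved_model")

cap = cv2.VideoCapture("echo_clip.mp4")  # or a capture-card index for live scans
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    # The Object Detection API expects a uint8 batch of shape [1, H, W, 3].
    out = detect_fn(tf.convert_to_tensor(rgb[np.newaxis, ...], dtype=tf.uint8))
    boxes = out["detection_boxes"][0].numpy()   # normalised [ymin, xmin, ymax, xmax]
    scores = out["detection_scores"][0].numpy()
    h, w = frame.shape[:2]
    for (y1, x1, y2, x2), score in zip(boxes, scores):
        if score < 0.5:                         # confidence threshold (assumed)
            continue
        cv2.rectangle(frame, (int(x1 * w), int(y1 * h)),
                      (int(x2 * w), int(y2 * h)), (0, 255, 0), 2)
    cv2.imshow("aortic valve detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```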

2019 ◽  
Vol 9 (14) ◽  
pp. 2865 ◽  
Author(s):  
Kyungmin Jo ◽  
Yuna Choi ◽  
Jaesoon Choi ◽  
Jong Woo Chung

More than half of post-operative complications can be prevented, and operative performance can be improved, through feedback gathered from operations or real-time notification of risks during surgery. However, existing surgical analysis methods are limited because they involve time-consuming processes and subjective opinions. Therefore, detection of surgical instruments is necessary for (a) conducting objective analyses or (b) providing risk notifications associated with a surgical procedure in real time. We propose a new real-time algorithm for the detection of surgical instruments using convolutional neural networks (CNNs). The algorithm is based on the YOLO9000 object detection system and ensures continuity of detection of the surgical tools across successive imaging frames through motion vector prediction. The method exhibits consistent performance irrespective of the surgical instrument class, with a mean average precision (mAP) of 84.7 over all tools and a speed of 38 frames per second (FPS).
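
The continuity idea can be illustrated with a small sketch: when the detector misses an instrument in the current frame, its box is extrapolated from the motion vector of the two preceding frames. The data structures and function names below are illustrative, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class Box:
    x: float  # centre x
    y: float  # centre y
    w: float  # width
    h: float  # height

def predict_from_motion(prev: Box, curr: Box) -> Box:
    """Linearly extrapolate the box centre; size is carried over."""
    dx, dy = curr.x - prev.x, curr.y - prev.y
    return Box(curr.x + dx, curr.y + dy, curr.w, curr.h)

def fill_gaps(detections_per_frame):
    """Fill missed frames for a single tool in a per-frame detection stream."""
    history, output = [], []
    for det in detections_per_frame:          # det is a Box or None (missed)
        if det is None and len(history) >= 2:
            det = predict_from_motion(history[-2], history[-1])
        if det is not None:
            history.append(det)
        output.append(det)
    return output
```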


2017 ◽  
Vol 10 (27) ◽  
pp. 1329-1342 ◽  
Author(s):  
Javier O. Pinzon Arenas ◽  
Robinson Jimenez Moreno ◽  
Paula C. Useche Murillo

This paper presents the implementation of a Region-based Convolutional Neural Network focused on the recognition and localization of hand gestures, in this case two types of gestures, open and closed hand, with the aim of recognizing such gestures against dynamic backgrounds. The neural network is trained and validated, achieving a 99.4% validation accuracy in gesture recognition and a 25% average accuracy in RoI localization. It is then tested in real time, where its operation is verified through the time taken for recognition, its behaviour on trained and untrained gestures, and its performance against complex backgrounds.
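
A minimal sketch of a region-based detector configured for these two gesture classes is shown below, using torchvision's Faster R-CNN as a stand-in for the paper's network; the dataset, training loop, and class count (background plus open and closed hand) are assumptions.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def build_gesture_detector(num_classes: int = 3):
    """Faster R-CNN with its box head replaced for background + 2 gesture classes."""
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

model = build_gesture_detector()
model.eval()
with torch.no_grad():
    dummy = [torch.rand(3, 480, 640)]          # one RGB frame in [0, 1]
    preds = model(dummy)                       # per-image boxes, labels, scores
print(preds[0]["boxes"].shape, preds[0]["scores"].shape)
```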


IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 59069-59080 ◽  
Author(s):  
Peng Jiang ◽  
Yuehan Chen ◽  
Bin Liu ◽  
Dongjian He ◽  
Chunquan Liang

Author(s):  
Olav A. Norgard Rongved ◽  
Steven A. Hicks ◽  
Vajira Thambawita ◽  
Hakon K. Stensland ◽  
Evi Zouganeli ◽  
...  

2021 ◽  
Vol 11 (21) ◽  
pp. 10043
Author(s):  
Claudia Álvarez-Aparicio ◽  
Ángel Manuel Guerrero-Higueras ◽  
Luis V. Calderita ◽  
Francisco J. Rodríguez-Lera ◽  
Vicente Matellán ◽  
...  

Convolutional Neural Networks are usually fitted with manually labelled data. The labelling process is very time-consuming, since large datasets are required. The use of external hardware may help in some cases, but it also introduces noise into the labelled data. In this paper, we propose a new data-labelling approach that uses bootstrapping to increase the accuracy of the PeTra tool. PeTra allows a mobile robot to estimate people's locations in its environment by using a LIDAR sensor and a Convolutional Neural Network. PeTra has some limitations in specific situations, such as scenarios where no people are present. We propose to use the current PeTra release to label the LIDAR data used to fit the Convolutional Neural Network. We have evaluated the resulting system by comparing it with the previous one, in which LIDAR data were labelled with a Real Time Location System. The new release increases the MCC score by 65.97%.
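
The bootstrapping loop can be sketched as follows: the current model pseudo-labels new, unlabelled LIDAR scans, and the network is refitted on the union of original and pseudo-labelled data. The names below are placeholders (a Keras-style CNN mapping LIDAR occupancy grids to person-occupancy maps is assumed), not the actual PeTra code.

```python
import numpy as np

def pseudo_label(model, scans, threshold=0.5):
    """Binarise the model's predicted person-occupancy maps for unlabelled scans."""
    probs = model.predict(scans)               # scans: (N, H, W, 1) LIDAR grids
    return (probs > threshold).astype(np.float32)

def bootstrap_round(model, x_lab, y_lab, x_unlab, epochs=10):
    """One round of self-training: relabel with the current model, then refit."""
    y_pseudo = pseudo_label(model, x_unlab)
    x = np.concatenate([x_lab, x_unlab], axis=0)
    y = np.concatenate([y_lab, y_pseudo], axis=0)
    model.fit(x, y, epochs=epochs)
    return model
```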


Managing attendance can be a significant burden on instructors when it is done in paper registers. To address this issue, smart, automated attendance-marking management systems are being used. However, verification is a significant problem in such systems. Smart attendance systems are commonly implemented with the help of biometrics, and face recognition is one of the biometric techniques through which these systems have been enhanced. As a principal method of biometric verification, facial recognition has become widely used in several applications, such as video monitoring and CCTV-based surveillance, human-computer interaction, and access control within buildings and in network security. With this framework, the problem of proxy attendance, where students are marked present even though they are not physically in class, can easily be resolved. The main implementation steps of this type of system are face detection and recognition of the detected faces of individuals. This paper proposes a model for implementing an automated attendance management system for students in a class using face detection and recognition, by means of a Convolutional Neural Network (CNN) with max pooling.
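
The kind of CNN with max pooling referred to above can be sketched as a small classifier mapping a cropped, grayscale face image to a student identity. The input size, filter counts, and NUM_STUDENTS value are assumptions for illustration only.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_STUDENTS = 50          # number of enrolled students (assumed)

def build_face_classifier(input_shape=(64, 64, 1)):
    return models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(2),                # max pooling halves each spatial dim
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(2),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_STUDENTS, activation="softmax"),
    ])

model = build_face_classifier()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(face_crops, student_ids, epochs=20)   # training data not shown
```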


2021 ◽  
Vol 09 (05) ◽  
pp. E741-E748
Author(s):  
Jeremi Podlasek ◽  
Mateusz Heesch ◽  
Robert Podlasek ◽  
Wojciech Kilisiński ◽  
Rafał Filip

Abstract Background and study aims Several computer-assisted polyp detection systems have been proposed, but they have various limitations, from utilizing outdated neural network architectures to requiring multi-graphics-processing-unit (GPU) processing, to being validated on small or non-robust datasets. To address these problems, we developed a system based on a state-of-the-art convolutional neural network architecture able to detect polyps in real time on a single GPU, and tested it on both public datasets and full clinical examination recordings. Methods The study comprised 165 colonoscopy procedure recordings and 2678 still photos gathered retrospectively. The system was trained on 81,962 polyp frames in total and then tested on footage from 42 colonoscopies and the CVC-ClinicDB, CVC-ColonDB, Hyper-Kvasir, and ETIS-Larib public datasets. Clinical videos were evaluated for polyp detection and false-positive rates, whereas the public datasets were assessed for F1 score. The system was tested for runtime performance on a wide array of hardware. Results The performance on public datasets varied from an F1 score of 0.727 to 0.942. On full examination videos, it detected 94% of the polyps found by the endoscopist with a 3% false-positive rate and identified additional polyps that were missed during initial video assessment. The system's runtime fits within the real-time constraints on all but one of the hardware configurations. Conclusions We have created a polyp detection system with a post-processing pipeline that works in real time on a wide array of hardware. The system does not require extensive computational power, which could help broaden the adoption of new commercially available systems.
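
One common post-processing step of the kind such pipelines use is temporal filtering: a polyp alert is raised only when detections persist across several recent frames, which suppresses single-frame false positives. The sketch below illustrates this idea only; the window size, hit threshold, and data layout are assumptions, not the authors' pipeline.

```python
from collections import deque

class TemporalFilter:
    """Raise an alert only when detections persist across recent frames."""
    def __init__(self, window=8, min_hits=5):
        self.history = deque(maxlen=window)
        self.min_hits = min_hits

    def update(self, frame_has_detection: bool) -> bool:
        self.history.append(frame_has_detection)
        return sum(self.history) >= self.min_hits

# Per-frame box lists would come from the CNN detector; placeholder data here.
detections_per_frame = [[], [(10, 20, 60, 80)], [(12, 22, 62, 82)], []]
filt = TemporalFilter(window=4, min_hits=2)
for boxes in detections_per_frame:
    show_alert = filt.update(len(boxes) > 0)   # drive an on-screen alert
```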


Author(s):  
Ruchi Gajjar ◽  
Nagendra Gajjar ◽  
Vaibhavkumar Jigneshkumar Thakor ◽  
Nikhilkumar Pareshbhai Patel ◽  
Stavan Ruparelia

Pedestrians in the path of a vehicle are at risk of being hit, which can cause severe harm to both pedestrians and vehicle occupants. Hence, real-time pedestrian detection was carried out on a set of recorded videos, and the system detects the pedestrians in the given input videos. In this work, a real-time scheme based on Aggregated Channel Features (ACF) running on the CPU was proposed. The proposed technique does not need to resize the input image or alter the video quality. We also use SVM with HOG features and SVM with Haar features to detect pedestrians. In addition, Convolutional Neural Networks (CNN) were trained on a pedestrian image dataset and later evaluated on a held-out test set of pedestrian images. The analyses demonstrated that the proposed technique can be used to detect pedestrians in video with acceptable error rates and high prediction accuracy. It can therefore be applied in real time to live video streams as well as to pedestrian detection in pre-recorded videos.
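
For the HOG + SVM baseline mentioned above, a minimal sketch using OpenCV's built-in people detector is shown below; the video path and detection parameters are assumptions, and this is not the authors' exact configuration.

```python
import cv2

# HOG descriptor with OpenCV's pretrained linear-SVM people detector.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture("pedestrians.mp4")      # placeholder video path
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    rects, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
    for (x, y, w, h) in rects:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("pedestrian detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```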

