Multi-modal infusion pump real-time monitoring technique for improvement in safety of intravenous-administration patients

Author(s):  
Young Jun Hwang ◽  
Gun Ho Kim ◽  
Eui Suk Sung ◽  
Kyoung Won Nam

Intravenous (IV) medication administration is considered a high-risk process, because accidents during IV administration can lead to serious adverse effects that diminish the therapeutic effect or threaten the patient’s life. In this study, we propose a multi-modal infusion pump (IP) monitoring technique that detects, in real time, mismatches between the IP setting and the actual infusion state and between the IP setting and the doctor’s prescription, using a thin membrane potentiometer and a convolutional-neural-network-based deep learning technique. During performance evaluation, the percentage errors between the reference infusion rate (IR) and the average estimated IR were in the range of 0.50–2.55%, while those between the average actual IR and the average estimated IR were in the range of 0.22–2.90%. In addition, the training, validation, and test accuracies of the implemented deep learning model after training were 98.3%, 97.7%, and 98.5%, respectively. The training and validation losses were 0.33 and 0.36, respectively. These experimental results indicate that the proposed technique could provide improved protection for IV-administration patients.
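As a rough illustration of the sensing-plus-CNN idea described above, the sketch below maps a short window of membrane-potentiometer readings to an infusion-rate class with a small 1D convolutional network in PyTorch. The layer sizes, the 256-sample window, and the eight rate classes are assumptions chosen for the example, not details reported by the authors.

    # Hypothetical sketch: a small 1D CNN that maps a window of membrane-potentiometer
    # readings to an infusion-rate class; layer sizes and window length are assumptions.
    import torch
    import torch.nn as nn

    class InfusionRateCNN(nn.Module):
        def __init__(self, n_classes: int = 8, window: int = 256):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
                nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            )
            self.classifier = nn.Linear(32 * (window // 4), n_classes)

        def forward(self, x):                  # x: (batch, 1, window)
            z = self.features(x)
            return self.classifier(z.flatten(1))

    model = InfusionRateCNN()
    signal = torch.randn(4, 1, 256)            # simulated potentiometer windows
    logits = model(signal)                     # (4, n_classes) infusion-rate scores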

Author(s):  
Masurah Mohamad ◽  
Ali Selamat

Deep learning has recently gained the attention of many researchers in various fields. It is a new and emerging machine learning technique derived from neural network algorithms and capable of analysing unstructured datasets without supervision. This study compared the effectiveness of a deep learning (DL) model versus a hybrid deep learning (HDL) model integrated with a hybrid parameterisation model in handling complex and missing medical datasets, as well as their performance in improving classification accuracy. The results showed that 1) the DL model performed better on its own, 2) DL was able to analyse complex medical datasets even with missing data values, and 3) the HDL model also performed well and had faster processing times because it was integrated with a hybrid parameterisation model.


10.2196/24762 ◽  
2021 ◽  
Vol 9 (8) ◽  
pp. e24762
Author(s):  
Hyun-Lim Yang ◽  
Chul-Woo Jung ◽  
Seong Mi Yang ◽  
Min-Soo Kim ◽  
Sungho Shim ◽  
...  

Background Arterial pressure-based cardiac output (APCO) is a less invasive method for estimating cardiac output without concerns about complications from the pulmonary artery catheter (PAC). However, inaccuracies of currently available APCO devices have been reported. Improvements to the algorithm by researchers are impossible, as only a subset of the algorithm has been released. Objective In this study, an open-source algorithm was developed and validated using a convolutional neural network and a transfer learning technique. Methods A retrospective study was performed using data from a prospective cohort registry of intraoperative bio-signal data from a university hospital. The convolutional neural network model was trained using the arterial pressure waveform as input and the stroke volume (SV) value as the output. The model parameters were pretrained using the SV values from a commercial APCO device (Vigileo or EV1000 with the FloTrac algorithm) and adjusted with a transfer learning technique using SV values from the PAC. The performance of the model was evaluated using the absolute error against the PAC on a testing dataset from separate periods. Finally, we compared the performance of the deep learning model and FloTrac against the SV values from the PAC. Results A total of 2057 surgical cases (1958 training and 99 testing cases) from the registry were used. In the deep learning model, the absolute errors of SV were 14.5 (SD 13.4) mL (10.2 [SD 8.4] mL in cardiac surgery and 17.4 [SD 15.3] mL in liver transplantation). The absolute errors of the deep learning model were significantly smaller than those of FloTrac (16.5 [SD 15.4] mL and 18.3 [SD 15.1] mL; P<.001). Conclusions The deep learning–based APCO algorithm showed better performance than the commercial APCO device. Further improvement of the algorithm developed in this study may be helpful for estimating cardiac output accurately in clinical practice and optimizing high-risk patient care.
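A minimal sketch of the pretraining-then-transfer workflow described in the Methods is given below, assuming a PyTorch implementation. The network shape, window length, and learning rates are illustrative assumptions; only the two-stage idea (fit to FloTrac-derived SV, then freeze the convolutional layers and re-fit the head to PAC-derived SV) follows the abstract.

    # Minimal sketch of the transfer-learning idea (not the authors' released code):
    # a 1D CNN regresses stroke volume from an arterial-pressure waveform window.
    import torch
    import torch.nn as nn

    def make_model(window: int = 2000):        # assumed 20 s of waveform at 100 Hz
        return nn.Sequential(
            nn.Conv1d(1, 32, 9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 64, 9, padding=4), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
            nn.Linear(64, 1),                  # stroke volume in mL
        )

    model = make_model()
    loss_fn = nn.L1Loss()                      # matches the absolute-error evaluation

    # Stage 1: pretrain on FloTrac-derived SV labels (x: waveforms, y: SV)
    x, y = torch.randn(8, 1, 2000), torch.randn(8, 1)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn(model(x), y).backward(); opt.step(); opt.zero_grad()

    # Stage 2: transfer to PAC-derived SV labels, updating only the final linear head
    for p in model[:-1].parameters():
        p.requires_grad = False
    opt = torch.optim.Adam(model[-1].parameters(), lr=1e-4)
    loss_fn(model(x), y).backward(); opt.step()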


Face recognition is a biometric authentication method that analyses face images and extracts from each photograph a set of features, usually called a feature vector, which is used to differentiate biological characteristics. In this paper, a suspect is detected by extracting facial features from an image captured by CCTV and matching them against the pictures stored in a database, with the aim of achieving a 100% accuracy rate and negligible loss using a deep learning technique. For extracting the facial features, we use a deep learning model known as a Convolutional Neural Network (CNN), one of the best models for extracting features with a high accuracy rate.
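A minimal sketch of the matching step is given below, assuming a PyTorch/torchvision setup: a CNN backbone converts each face crop into a feature vector, and the suspect is matched to the database entry with the highest cosine similarity. The ResNet-18 backbone, input size, and database size are placeholders, not the paper's actual model.

    # Illustrative sketch of CNN-based face matching (assumed details, not the paper's code).
    import torch
    import torch.nn.functional as F
    import torchvision.models as models

    backbone = models.resnet18()               # stand-in for a face-specific CNN
    backbone.fc = torch.nn.Identity()          # keep the 512-d feature vector
    backbone.eval()

    @torch.no_grad()
    def embed(face_batch):                     # face_batch: (N, 3, 224, 224) face crops
        return F.normalize(backbone(face_batch), dim=1)

    database = embed(torch.randn(10, 3, 224, 224))    # enrolled images
    suspect  = embed(torch.randn(1, 3, 224, 224))     # CCTV face crop
    scores = suspect @ database.T                     # cosine similarities
    best = scores.argmax(dim=1)                       # index of most similar identity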


2020 ◽  
Vol 13 (4) ◽  
pp. 627-640 ◽  
Author(s):  
Avinash Chandra Pandey ◽  
Dharmveer Singh Rajpoot

Background: Sentiment analysis is a contextual mining of text that determines the viewpoint of users with respect to sentiment-bearing topics commonly discussed on social networking websites. Twitter is one of the social sites where people express their opinions about any topic in the form of tweets. These tweets can be examined using various sentiment classification methods to find the opinion of users. Traditional sentiment analysis methods use manually extracted features for opinion classification. The manual feature extraction process is a complicated task because it requires predefined sentiment lexicons. Deep learning methods, on the other hand, automatically extract relevant features from data; hence, they provide better performance and richer representation capacity than traditional methods. Objective: The main aim of this paper is to enhance sentiment classification accuracy and to reduce the computational cost. Method: To achieve this objective, a hybrid deep learning model based on a convolutional neural network and a bidirectional long short-term memory neural network has been introduced. Results: The proposed sentiment classification method achieves the highest accuracy for most of the datasets. Further, the statistical analysis validates the efficacy of the proposed method. Conclusion: Sentiment classification accuracy can be improved by creating well-crafted hybrid models. Moreover, performance can also be enhanced by tuning the hyperparameters of deep learning models.
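The hybrid architecture named in the Method can be sketched as follows in PyTorch: an embedding layer feeds a 1D convolution that captures local n-gram features, whose output is passed to a bidirectional LSTM and then a classification layer. The vocabulary size, embedding width, and other hyperparameters are assumptions for illustration, not the paper's reported settings.

    # Rough sketch of a CNN + bidirectional-LSTM sentiment classifier (hyperparameters assumed).
    import torch
    import torch.nn as nn

    class CNNBiLSTM(nn.Module):
        def __init__(self, vocab=20000, emb=128, n_classes=2):
            super().__init__()
            self.emb = nn.Embedding(vocab, emb, padding_idx=0)
            self.conv = nn.Conv1d(emb, 64, kernel_size=3, padding=1)
            self.lstm = nn.LSTM(64, 64, batch_first=True, bidirectional=True)
            self.out = nn.Linear(2 * 64, n_classes)

        def forward(self, tokens):                    # tokens: (batch, seq_len)
            x = self.emb(tokens).transpose(1, 2)      # (batch, emb, seq_len)
            x = torch.relu(self.conv(x)).transpose(1, 2)
            _, (h, _) = self.lstm(x)                  # h: (2, batch, 64)
            h = torch.cat([h[0], h[1]], dim=1)        # concatenate both directions
            return self.out(h)

    logits = CNNBiLSTM()(torch.randint(1, 20000, (4, 50)))   # 4 tweets, 50 tokens each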


2021 ◽  
Vol 11 (11) ◽  
pp. 4758
Author(s):  
Ana Malta ◽  
Mateus Mendes ◽  
Torres Farinha

Maintenance professionals and other technical staff regularly need to learn to identify new parts in car engines and other equipment. The present work proposes a task-assistant model based on a deep learning neural network. A YOLOv5 network is used to recognize some of the constituent parts of an automobile. A dataset of car engine images was created, and eight car parts were annotated in the images. The neural network was then trained to detect each part. The results show that YOLOv5s can detect the parts in real-time video streams with high accuracy, making it useful as an aid for training professionals to work with new equipment using augmented reality. The architecture of an object recognition system using augmented reality glasses is also designed.
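For readers who want to reproduce the detection step, a minimal usage sketch with the standard Ultralytics YOLOv5 hub interface is shown below. The weight file name and image path are hypothetical placeholders; the dataset of annotated engine parts described above would be needed to train such weights.

    # Minimal YOLOv5 inference sketch via torch.hub (weight/image paths are placeholders).
    import torch

    model = torch.hub.load('ultralytics/yolov5', 'custom', path='engine_parts.pt')
    model.conf = 0.5                           # confidence threshold for detections

    results = model('engine_photo.jpg')        # accepts an image path, array, or PIL image
    results.print()                            # summary of detected parts
    boxes = results.xyxy[0]                    # tensor rows: [x1, y1, x2, y2, conf, class]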


2021 ◽  
pp. 1-12
Author(s):  
Gaurav Sarraf ◽  
Anirudh Ramesh Srivatsa ◽  
MS Swetha

With the ever-rising threat to security, multiple industries are always in search of safer communication techniques, both at rest and in transit. Multiple security institutions agree that any system’s security can be modeled around three major concepts: Confidentiality, Availability, and Integrity. We try to reduce the gaps in these concepts by developing a deep-learning-based steganography technique. In our study, we have seen that data compression has to be at the heart of any sound steganography system. In this paper, we show that it is possible to compress and encode data efficiently to solve critical problems of steganography. The deep learning technique, which comprises an auto-encoder with a Convolutional Neural Network as its building block, not only compresses the secret file but also learns how to hide the compressed data in the cover file efficiently. The proposed technique can encode secret files of the same size as the cover or, in some cases, even larger files. We have also shown that the same model architecture can, in theory, be applied to any file type. Finally, we show that our proposed technique surreptitiously evades all popular steganalysis techniques.
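One common way to realize this hide-and-recover idea is sketched below in PyTorch: a convolutional hiding network embeds the secret into the cover image while a reveal network learns to recover it, trained with a joint reconstruction loss. The network depths, image size, and loss weighting are assumptions and do not reproduce the authors' exact design (including their compression stage).

    # Sketch of a convolutional hide/reveal pair for image steganography (details assumed).
    import torch
    import torch.nn as nn

    def conv_block(cin, cout):
        return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU())

    hide = nn.Sequential(conv_block(6, 32), conv_block(32, 32), nn.Conv2d(32, 3, 3, padding=1))
    reveal = nn.Sequential(conv_block(3, 32), conv_block(32, 32), nn.Conv2d(32, 3, 3, padding=1))

    cover = torch.rand(1, 3, 64, 64)
    secret = torch.rand(1, 3, 64, 64)

    stego = hide(torch.cat([cover, secret], dim=1))   # stego image should resemble the cover
    recovered = reveal(stego)                         # while still allowing secret recovery

    # Joint loss: keep the stego close to the cover and the recovered secret accurate
    loss = nn.functional.mse_loss(stego, cover) + nn.functional.mse_loss(recovered, secret)
    loss.backward()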


Water ◽  
2021 ◽  
Vol 13 (11) ◽  
pp. 1547
Author(s):  
Jian Sha ◽  
Xue Li ◽  
Man Zhang ◽  
Zhong-Liang Wang

Accurate real-time water quality prediction is of great significance for local environmental managers to deal with upcoming events and emergencies and to develop best management practices. In this study, the real-time water quality forecasting performances of different deep learning (DL) models with different input data pre-processing methods were compared. Three popular DL models were considered: the convolutional neural network (CNN), the long short-term memory neural network (LSTM), and a hybrid CNN–LSTM. Two types of input data were applied: the original one-dimensional time series and a two-dimensional grey image based on decomposition with the complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) algorithm. Each type of input data was used in each DL model to forecast the real-time monitored water quality parameters dissolved oxygen (DO) and total nitrogen (TN). The results showed that (1) the performance of CNN–LSTM was superior to that of the standalone CNN and LSTM models; (2) the models that used CEEMDAN-based input data performed much better than those that used the original input data, and the improvements for the non-periodic parameter TN were much greater than those for the periodic parameter DO; and (3) model accuracy gradually decreased as the number of prediction steps increased, with the original input data decaying faster than the CEEMDAN-based input data and the non-periodic parameter TN decaying faster than the periodic parameter DO. Overall, input data pre-processed by the CEEMDAN method could effectively improve the forecasting performance of the deep learning models, and this improvement was especially significant for the non-periodic parameter TN.
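The CEEMDAN pre-processing step can be sketched as below, assuming the PyEMD package (installed as EMD-signal): the one-dimensional water-quality series is decomposed into intrinsic mode functions (IMFs), which can then be stacked over a time window to form the two-dimensional input described above. The synthetic series and window length are placeholders for the monitored DO or TN data.

    # CEEMDAN decomposition sketch using PyEMD (pip install EMD-signal).
    import numpy as np
    from PyEMD import CEEMDAN

    t = np.linspace(0, 10, 500)
    series = np.sin(2 * np.pi * t) + 0.3 * np.random.randn(500)   # stand-in for DO or TN

    ceemdan = CEEMDAN()
    imfs = ceemdan(series)          # intrinsic mode functions, shape (n_imfs, 500)

    # Each forecasting sample becomes a 2-D "image": IMFs x time window
    window = 24
    sample = imfs[:, -window:]      # most recent window of every IMF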

