Research on Multiple Spectral Ranges with Deep Learning for SpO2 Measurement

Sensors ◽  
2022 ◽  
Vol 22 (1) ◽  
pp. 328
Author(s):  
Chih-Hsiung Shen ◽  
Wei-Lun Chen ◽  
Jung-Jie Wu

Oxyhemoglobin saturation by pulse oximetry (SpO2) has always played an important role in the diagnosis of symptoms. Considering that traditional SpO2 measurement carries a certain error due to the limited number of wavelengths and the algorithm used, and given the widening application of machine learning to spectral data, we propose a 12-wavelength spectral absorption measurement to improve the accuracy of SpO2 measurement. To investigate multiple spectral regions for deep-learning-based SpO2 measurement, three datasets for training and validation were built, constructed over the spectra of the first region, the second region, and the full region and their sub-regions, respectively. For each region, a thorough investigation of hyperparameters was carried out during model optimization. Additionally, data augmentation was performed to expand the dataset by randomly adding noise, increasing the diversity of the data and improving the generalization of the neural network. The established dataset was then input to a one-dimensional convolutional neural network (1D-CNN) to obtain a measurement model of SpO2. To enhance model accuracy, GridSearchCV and Bayesian optimization were applied to optimize the hyperparameters. The optimal accuracies of the proposed model optimized by GridSearchCV and Bayesian optimization are 89.3% and 99.4%, respectively, when trained with the dataset covering the spectral region of six wavelengths: 650 nm, 680 nm, 730 nm, 760 nm, 810 nm, and 860 nm. The total relative error of the best model, optimized by Bayesian optimization, is only 0.46%. Although spectral measurement with more features can improve the resolution ability of the neural network, the results reveal that training with wavelengths beyond this six-wavelength subset is redundant. This analysis shows that it is very important to construct an effective 1D-CNN model for spectral measurement using appropriate spectral ranges and numbers of wavelengths. Our proposed 1D-CNN model thus offers a new and feasible approach to measuring SpO2 based on multi-wavelength spectra.
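The noise-based augmentation step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' code: the function name, noise level `sigma`, and number of copies are assumptions.

```python
import random

def augment_spectra(spectra, copies=5, sigma=0.01, seed=0):
    """Expand a spectral dataset by adding Gaussian noise to each sample.

    spectra: list of absorbance vectors (one value per wavelength).
    Returns the originals plus `copies` noisy variants of each sample.
    """
    rng = random.Random(seed)
    augmented = [list(s) for s in spectra]
    for sample in spectra:
        for _ in range(copies):
            augmented.append([x + rng.gauss(0.0, sigma) for x in sample])
    return augmented

# A single 12-wavelength sample (values are illustrative only).
dataset = augment_spectra([[0.5] * 12], copies=5)
print(len(dataset))  # 1 original + 5 noisy copies = 6
```

Each noisy copy preserves the overall spectral shape while perturbing individual absorbances, which is what increases the diversity of the training data.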

Author(s):  
Surenthiran Krishnan ◽  
Pritheega Magalingam ◽  
Roslina Ibrahim

This paper proposes a new hybrid deep learning model for heart disease prediction using a recurrent neural network (RNN) combining multiple gated recurrent units (GRU), long short-term memory (LSTM) units, and the Adam optimizer. The proposed model achieved an outstanding accuracy of 98.6876%, the highest among existing RNN models. The model was developed in Python 3.7 by integrating an RNN with multiple GRUs running on Keras with TensorFlow as the backend for the deep learning process, supported by various Python libraries. Recent existing models using RNNs have reached an accuracy of 98.23%, and a deep neural network (DNN) has reached 98.5%. The common drawbacks of the existing models are low accuracy due to the complex build-up of the neural network, a high number of neurons with redundancy in the neural network model, and the imbalanced Cleveland dataset. Experiments were conducted with various customized models, and the results showed that the proposed model using an RNN and multiple GRUs with the synthetic minority oversampling technique (SMOTE) reached the best performance level. This is the highest accuracy reported for an RNN on the Cleveland dataset and is very promising for making early heart disease predictions for patients.
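The SMOTE step mentioned above synthesizes minority-class samples by interpolating between real ones. A minimal pure-Python sketch of that interpolation idea (the real technique is usually taken from the imbalanced-learn library; the feature values here are hypothetical):

```python
import random

def smote_oversample(minority, n_new, k=3, seed=0):
    """Minimal SMOTE sketch: synthesize new minority-class samples by
    interpolating between a sample and one of its k nearest neighbours."""
    rng = random.Random(seed)

    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    synthetic = []
    for _ in range(n_new):
        base = rng.choice(minority)
        neighbours = sorted((s for s in minority if s is not base),
                            key=lambda s: dist(base, s))[:k]
        nb = rng.choice(neighbours)
        u = rng.random()  # interpolation factor in [0, 1)
        synthetic.append([x + u * (y - x) for x, y in zip(base, nb)])
    return synthetic

minority = [[1.0, 2.0], [1.5, 2.5], [0.8, 1.9]]
new_samples = smote_oversample(minority, n_new=4)
print(len(new_samples))  # 4
```

Because every synthetic point is a convex combination of two real minority samples, the oversampled class stays within the region the minority class already occupies.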


2021 ◽  
Author(s):  
Luca L. Weishaupt ◽  
Jose Torres ◽  
Sophie Camilleri-Broët ◽  
Roni F. Rayes ◽  
Jonathan D. Spicer ◽  
...  

Abstract The goal of this study was (i) to use artificial intelligence to automate the traditionally labor-intensive process of manual segmentation of tumor regions in pathology slides performed by a pathologist, and (ii) to validate the use of a deep learning architecture. Automation will reduce the human error involved in the manual process, increase efficiency, and result in more accurate and reproducible segmentation. This advancement will alleviate the bottleneck in clinical and research workflows caused by a lack of pathologist time. Our application is patient-specific microdosimetry and radiobiological modeling, which builds on the contoured pathology slides. A deep neural network named UNet was used to segment tumor regions in pathology core biopsies of lung tissue with adenocarcinoma, stained using hematoxylin and eosin. A pathologist manually contoured the tumor regions in 56 images with binary masks for training. To overcome memory limitations, overlapping and non-overlapping patch extraction with various patch sizes, as well as image downsampling, were investigated individually. Data augmentation was used to reduce overfitting and artificially create more data for training. Using this deep learning approach, the UNet achieved an accuracy of 0.91±0.06, specificity of 0.90±0.08, sensitivity of 0.92±0.07, and precision of 0.8±0.1. The F1/DICE score was 0.85±0.07, with a segmentation time of 3.24±0.03 seconds per image, a 370±3-fold efficiency gain over manual segmentation, which took 20 minutes per image on average. In some cases, the neural network correctly delineated the tumor's stroma from its epithelial component within regions that the pathologist had classified as tumor. The UNet architecture can segment images with a level of efficiency and accuracy that makes it suitable for tumor segmentation of histopathological images in fields such as radiotherapy dosimetry, specifically in the subfield of microdosimetry.
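The overlapping patch extraction used to fit large slides into memory can be sketched as below. Patch size and stride here are illustrative, not the study's actual settings:

```python
def patch_starts(length, patch, stride):
    """Start indices for (possibly overlapping) patches covering an axis.
    A final patch is appended, shifted back so it ends exactly at `length`,
    guaranteeing full coverage of the image."""
    starts = list(range(0, length - patch + 1, stride))
    if starts[-1] + patch < length:
        starts.append(length - patch)
    return starts

def extract_patches(image, patch, stride):
    """Extract square patches from a 2D image given as a list of rows."""
    h, w = len(image), len(image[0])
    return [[row[x:x + patch] for row in image[y:y + patch]]
            for y in patch_starts(h, patch, stride)
            for x in patch_starts(w, patch, stride)]

# A 100x100 image with 32-pixel patches and 16-pixel stride (overlapping).
patches = extract_patches([[0] * 100 for _ in range(100)], patch=32, stride=16)
print(len(patches))  # 36
```

With stride equal to the patch size the same code produces non-overlapping tiles, so both regimes mentioned in the abstract fall out of one parameter.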


2021 ◽  
Vol 2021 (3) ◽  
Author(s):  
Yuki Fujimoto ◽  
Kenji Fukushima ◽  
Koichi Murase

Abstract We discuss deep learning inference for the neutron star equation of state (EoS) using real observational data on masses and radii. We make a quantitative comparison between conventional polynomial regression and the neural network approach for the EoS parametrization. For our deep learning method to incorporate uncertainties in observation, we augment the training data with noise fluctuations corresponding to the observational uncertainties. The deduced EoSs can accommodate a weak first-order phase transition, and we make a histogram of likely first-order regions. We also find that our observational data augmentation has the byproduct of taming the overfitting behavior. To check the performance improvement from data augmentation, we set up a toy model as the simplest inference problem, recovering a double-peaked function, and monitor the validation loss. We conclude that data augmentation can be a useful technique to evade overfitting without tuning the neural network architecture, such as by inserting dropout layers.
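The uncertainty-aware augmentation described here differs from generic noise injection in that each observation is jittered within its own quoted error bars. A minimal sketch, with a hypothetical mass–radius point and uncertainties chosen only for illustration:

```python
import random

def augment_observations(points, copies=10, seed=0):
    """Jitter each (mass, radius) observation within its own quoted
    uncertainty (dm, dr) to build an augmented training set that reflects
    the observational error distribution."""
    rng = random.Random(seed)
    out = []
    for (m, r, dm, dr) in points:
        for _ in range(copies):
            out.append((m + rng.gauss(0.0, dm), r + rng.gauss(0.0, dr)))
    return out

# One hypothetical star: M = 1.4 Msun ± 0.1, R = 12 km ± 0.5.
samples = augment_observations([(1.4, 12.0, 0.1, 0.5)], copies=10)
print(len(samples))  # 10
```

Training on many such jittered copies exposes the network to the spread of plausible inputs, which is the mechanism behind the overfitting-taming byproduct the authors report.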


2021 ◽  
Vol 11 (11) ◽  
pp. 4758
Author(s):  
Ana Malta ◽  
Mateus Mendes ◽  
Torres Farinha

Maintenance professionals and other technical staff regularly need to learn to identify new parts in car engines and other equipment. The present work proposes a model of a task assistant based on a deep learning neural network. A YOLOv5 network is used to recognize some of the constituent parts of an automobile. A dataset of car engine images was created, and eight car parts were marked in the images. The neural network was then trained to detect each part. The results show that YOLOv5s is able to successfully detect the parts in real-time video streams with high accuracy, making it useful as an aid for training professionals learning to deal with new equipment using augmented reality. The architecture of an object recognition system using augmented reality glasses is also designed.
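In a pipeline like this, raw detector output is typically filtered by confidence before anything is overlaid in the assistant's display. A small sketch of that post-processing step; the tuple layout and the part names are hypothetical, not the paper's eight classes:

```python
def filter_detections(dets, conf_thresh=0.5):
    """Keep YOLO-style detections (x, y, w, h, confidence, class_name)
    that are confident enough to be overlaid for the user."""
    return [d for d in dets if d[4] >= conf_thresh]

dets = [
    (120, 80, 60, 40, 0.91, "alternator"),
    (300, 150, 80, 50, 0.32, "radiator"),   # too uncertain to display
    (210, 60, 40, 40, 0.77, "dipstick"),
]
visible = filter_detections(dets, conf_thresh=0.5)
print([d[5] for d in visible])  # ['alternator', 'dipstick']
```

Raising the threshold trades recall for fewer distracting false labels in the augmented-reality view.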


2021 ◽  
Vol 11 (15) ◽  
pp. 7148
Author(s):  
Bedada Endale ◽  
Abera Tullu ◽  
Hayoung Shi ◽  
Beom-Soo Kang

Unmanned aerial vehicles (UAVs) are widely utilized for various missions in both civilian and military sectors. Many of these missions require UAVs to build artificial intelligence about the environments they navigate. This perception can be realized by training a computing machine to classify objects in the environment. One well-known training approach is supervised deep learning, which enables a machine to classify objects. However, supervised deep learning comes at a large cost in time and computational resources. Collecting large amounts of input data, pre-training steps such as labeling the training data, and the need for a high-performance computer for training are some of the challenges that supervised deep learning poses. To address these setbacks, this study proposes mission-specific input data augmentation techniques and the design of a lightweight deep neural network architecture capable of real-time object classification. Semi-direct visual odometry (SVO) data of augmented images are used to train the network for object classification. Ten classes of 10,000 images each were used as input data, with 80% used for training the network and the remaining 20% for validation. For optimization of the designed deep neural network, a sequential gradient descent algorithm was implemented. This algorithm has the advantage of handling redundancy in the data more efficiently than other algorithms.
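Sequential gradient descent updates the weights after every sample rather than after a full pass, which is why it copes well with redundant data: repeated similar samples each nudge the model immediately. A toy sketch on a one-parameter linear model (learning rate, epoch count, and the synthetic data are assumptions for illustration):

```python
import random

def sequential_gd(data, lr=0.05, epochs=50, seed=0):
    """Sequential (per-sample) gradient descent fitting y ≈ w*x + b
    under squared loss, with weights updated after every sample."""
    rng = random.Random(seed)
    w, b = 0.0, 0.0
    for _ in range(epochs):
        rng.shuffle(data)
        for x, y in data:
            err = (w * x + b) - y
            w -= lr * err * x   # gradient of 0.5*err^2 w.r.t. w
            b -= lr * err       # gradient of 0.5*err^2 w.r.t. b
    return w, b

# Toy data generated from y = 2x + 1 (noiseless, so the fit is exact).
data = [(x / 10.0, 2 * (x / 10.0) + 1) for x in range(20)]
w, b = sequential_gd(data)
print(round(w, 2), round(b, 2))  # w ≈ 2, b ≈ 1
```

A batch method would compute one averaged gradient over all 20 (highly redundant) points per step; the sequential variant extracts 20 updates from the same pass.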


2020 ◽  
Vol 13 (1) ◽  
pp. 34
Author(s):  
Rong Yang ◽  
Robert Wang ◽  
Yunkai Deng ◽  
Xiaoxue Jia ◽  
Heng Zhang

The random cropping data augmentation method is widely used to train convolutional neural network (CNN)-based target detectors to detect targets in optical images (e.g., the COCO dataset). It can expand the scale of the dataset dozens of times while requiring only a small amount of extra computation when training the neural network detector. In addition, random cropping can greatly enhance the spatial robustness of the model, because it makes the same target appear in different positions of the sample image. Nowadays, random cropping and random flipping have become the standard configuration for tasks with limited training data, which makes it natural to introduce them into the training of CNN-based synthetic aperture radar (SAR) image ship detectors. However, in this paper, we show that directly introducing traditional random cropping methods into the training of a CNN-based SAR image ship detector may generate a lot of noise in the gradient during backpropagation, which hurts detection performance. To eliminate this noise in the training gradient, a simple and effective training method based on a feature map mask is proposed. Experiments prove that the proposed method can effectively eliminate the gradient noise introduced by random cropping and significantly improve detection performance under a variety of evaluation indicators without increasing inference cost.
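One source of the gradient noise is targets that the crop border cuts in half: they no longer look like ships, yet still act as positive targets. The paper's remedy operates on feature maps; the simplified sketch below only illustrates the underlying idea of flagging border-cut boxes so they can be excluded from the loss (box format and sizes are hypothetical):

```python
import random

def random_crop_with_mask(img_size, boxes, crop, seed=0):
    """Random-crop sketch for detector training: boxes fully inside the
    crop are kept as shifted targets; boxes cut by the crop border are
    flagged for masking out of the loss instead of becoming noisy
    gradient contributions."""
    rng = random.Random(seed)
    x0 = rng.randrange(0, img_size - crop + 1)
    y0 = rng.randrange(0, img_size - crop + 1)
    kept, masked = [], []
    for (x, y, w, h) in boxes:
        inside = (x >= x0 and y >= y0 and
                  x + w <= x0 + crop and y + h <= y0 + crop)
        overlaps = (x < x0 + crop and x + w > x0 and
                    y < y0 + crop and y + h > y0)
        if inside:
            kept.append((x - x0, y - y0, w, h))
        elif overlaps:
            masked.append((x, y, w, h))  # partially cut: exclude from loss
    return (x0, y0), kept, masked

origin, kept, masked = random_crop_with_mask(100, [(10, 10, 20, 20)], crop=100)
print(origin, kept, masked)  # (0, 0) [(10, 10, 20, 20)] []
```

Masking at the feature-map level, as the paper proposes, achieves this exclusion inside the network rather than in the label preprocessing.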


2020 ◽  
pp. 74-80
Author(s):  
Philippe Schweizer ◽  

We would like to show that neutrosophy applications in the sciences and the humanities are closely related, as both ultimately serve a human end user. The pace of data production continues to grow, leading to increased needs for efficient storage and transmission. Indeed, this information is preferably consumed on mobile terminals, over connections billed to the user and with only limited storage capacity. Deep learning neural networks have recently exceeded the compression rates of algorithmic techniques for text. We believe that they can also significantly challenge classical methods for both audio and visual data (images and videos). To obtain the best physiological compression, i.e., the highest compression ratio that comes closest to the specificity of human perception, we propose using a neutrosophical representation of the information for the entire compression-decompression cycle. In such a representation, each elementary piece of information is annotated with a simple neutrosophical number that informs the neural network about its characteristics relative to compression during this treatment. Such a neutrosophical number is a triplet (t,i,f) representing the membership of the element in the three constituent components of information in compression: (1) t, the true significant part to be preserved; (2) i, the indeterminate redundant part, or noise, to be eliminated in compression; and (3) f, the false artifacts produced in the compression process (to be compensated). The complexity of human perception, and the subtle niches of its defects that one seeks to exploit, requires a detailed and complex mapping that a neural network can produce better than any other algorithmic solution, and deep learning networks have proven their ability to produce a detailed boundary surface in classifiers.
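The (t,i,f) annotation can be pictured as a tag carried by each elementary piece of information. The toy code below is only an illustration of the data structure and of a trivial keep-or-drop decision; the abstract's actual proposal feeds these triplets to a neural network, and all names and thresholds here are assumptions:

```python
from dataclasses import dataclass

@dataclass
class NeutrosophicElement:
    """An elementary piece of information annotated with a (t, i, f)
    triplet: t = significant part to preserve, i = indeterminate or
    redundant part to eliminate, f = expected false artifacts to
    compensate for after compression."""
    value: float
    t: float
    i: float
    f: float

def compress(elements, keep_threshold=0.5):
    """Toy lossy step: keep only elements whose significant part t
    reaches the threshold; the rest are treated as droppable noise."""
    return [e for e in elements if e.t >= keep_threshold]

signal = [
    NeutrosophicElement(0.9, t=0.8, i=0.1, f=0.1),
    NeutrosophicElement(0.2, t=0.2, i=0.7, f=0.1),  # mostly redundant
]
print(len(compress(signal)))  # 1
```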


Author(s):  
Lifu Wang ◽  
Bo Shen ◽  
Ning Zhao ◽  
Zhiyuan Zhang

The residual network is now one of the most effective structures in deep learning; it utilizes skip connections to "guarantee" that the performance will not get worse. However, the non-convexity of the neural network makes it unclear whether skip connections provably improve the learning ability, since the nonlinearity may create many local minima. Some previous works [Freeman and Bruna, 2016] showed that, despite the non-convexity, the loss landscape of the two-layer ReLU network has good properties when the number m of hidden nodes is very large. In this paper, we follow this line to study the topology (sub-level sets) of the loss landscape of deep ReLU neural networks with a skip connection. We theoretically prove that the skip-connection network inherits the good properties of the two-layer network, and that skip connections help to control the connectedness of the sub-level sets, such that any local minimum worse than the global minimum of some two-layer ReLU network is very "shallow". The "depth" of these local minima is at most O(m^(η-1)/n), where n is the input dimension and η<1. This provides a theoretical explanation for the effectiveness of skip connections in deep learning.
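The informal sense in which a skip connection "guarantees" no degradation is that the block contains the identity map: zeroing the learned weights leaves the input untouched. A minimal sketch of one ReLU residual block (a toy illustration, not the paper's construction):

```python
def relu(v):
    return [max(0.0, x) for x in v]

def residual_block(x, W):
    """Residual block with a skip connection: y = x + ReLU(W x).
    Setting W = 0 makes the block the identity, so a deeper network can
    always represent its shallower counterpart."""
    Wx = [sum(w * xi for w, xi in zip(row, x)) for row in W]
    return [xi + hi for xi, hi in zip(x, relu(Wx))]

x = [1.0, -2.0]
zero_W = [[0.0, 0.0], [0.0, 0.0]]
print(residual_block(x, zero_W))  # [1.0, -2.0] -- identity mapping
```

The paper's contribution is the much stronger landscape statement: not just representability, but a bound on how deep any bad local minimum can be.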


2021 ◽  
Vol 14 (6) ◽  
pp. 3421-3435
Author(s):  
Zhenjiao Jiang ◽  
Dirk Mallants ◽  
Lei Gao ◽  
Tim Munday ◽  
Gregoire Mariethoz ◽  
...  

Abstract. This study introduces an efficient deep-learning model based on convolutional neural networks with joint autoencoder and adversarial structures for 3D subsurface mapping from 2D surface observations. The method was applied to delineate paleovalleys in an Australian desert landscape. The neural network was trained on a 6400 km2 domain by using land surface topography as 2D input and an airborne electromagnetic (AEM)-derived probability map of paleovalley presence as 3D output. The trained neural network has a squared error <0.10 across 99% of the training domain and produces a squared error <0.10 across 93% of the validation domain, demonstrating that it is reliable in reconstructing 3D paleovalley patterns beyond the training area. Due to its generic structure, the neural network designed in this study and the training algorithm have broad application potential for constructing 3D geological features (e.g., ore bodies, aquifers) from 2D land surface observations.
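The reliability figures quoted above are coverage statistics: the fraction of the domain whose per-cell squared error falls below a threshold. A small sketch of that metric (the error values are illustrative only):

```python
def coverage_below(errors, threshold=0.10):
    """Fraction of grid cells whose squared error is below `threshold`,
    the style of metric behind statements like 'squared error < 0.10
    across 93% of the validation domain'."""
    return sum(1 for e in errors if e < threshold) / len(errors)

errors = [0.01, 0.05, 0.2, 0.08]  # illustrative per-cell squared errors
print(coverage_below(errors))  # 0.75
```

Reporting coverage rather than mean error makes the claim robust to a few badly reconstructed cells dominating an average.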


Author(s):  
Xi Li ◽  
Ting Wang ◽  
Shexiong Wang

How to make effective use of log data without paying heavily to store it has drawn researchers' attention. In this paper, we propose a pattern-based deep learning method to extract features from log datasets and to facilitate their further use at a reasonable cost in storage. By taking advantage of neural networks and combining statistical features with experts' knowledge, we obtain satisfactory results in experiments on several specified datasets and on the routine systems that our group maintains. On the test datasets, the model outperforms its competitors by at least 5% in accuracy. More importantly, its schema unveils a new way to mingle experts' experience with a statistical log parser.
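The storage saving behind pattern-based log processing comes from collapsing variable fields so that many raw lines share one stored template. A minimal sketch of that idea (the regex rules and placeholder names are assumptions, not the paper's parser):

```python
import re

def log_template(line):
    """Pattern-based sketch: collapse variable fields (IPs, hex ids,
    numbers) so log lines sharing a structure map to one template."""
    line = re.sub(r"\b\d+\.\d+\.\d+\.\d+\b", "<IP>", line)
    line = re.sub(r"\b0x[0-9a-fA-F]+\b", "<HEX>", line)
    line = re.sub(r"\b\d+\b", "<NUM>", line)
    return line

logs = [
    "connection from 10.0.0.1 port 5050",
    "connection from 10.0.0.7 port 6060",
]
templates = {log_template(l) for l in logs}
print(templates)  # {'connection from <IP> port <NUM>'}
```

Only the template plus the extracted field values need to be kept, which is where the storage economy over raw logs comes from.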

