Investigating Recurrent Neural Networks for OCT A-scan Based Tissue Analysis

2014 ◽  
Vol 53 (04) ◽  
pp. 245-249 ◽  
Author(s):  
S. Otte ◽  
L. Wittig ◽  
G. Hüttmann ◽  
C. Kugler ◽  
D. Drömann ◽  
...  

Summary. Objectives: Optical Coherence Tomography (OCT) has been proposed as a high-resolution imaging modality to guide transbronchial biopsies. In this study we address the question of whether individual A-scans obtained in the needle direction can contribute to the identification of pulmonary nodules. Methods: OCT A-scans from freshly resected human lung tissue specimens were recorded through a customized needle with an embedded optical fiber. Bidirectional Long Short-Term Memory networks (BLSTMs) were trained on randomly distributed training and test sets of the acquired A-scans. Patient-specific training and different pre-processing steps were evaluated. Results: Classification rates from 67.5% up to 76% were achieved for different training scenarios. Sensitivity and specificity were highest for patient-specific training, at 0.87 and 0.85. Low-pass filtering decreased the accuracy from 73.2% on a reference distribution to 62.2% for higher cutoff frequencies and to 56% for lower cutoff frequencies. Conclusion: The results indicate that a grey-value based classification is feasible and may provide additional information for diagnosis and navigation. Furthermore, the experiments show patient-specific signal properties and indicate that both the lower and upper parts of the frequency spectrum contribute to the classification.
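A minimal sketch of the kind of model described above: a bidirectional LSTM that reads one A-scan as a sequence of grey values and outputs a tissue class. The sketch uses PyTorch for illustration; the scan length, hidden size and two-class setup are assumptions, not parameters reported in the paper.

    # Bidirectional LSTM over a single 1-D A-scan (grey-value sequence); sizes are illustrative.
    import torch
    import torch.nn as nn

    class AScanBLSTM(nn.Module):
        def __init__(self, hidden_size=64, num_classes=2):
            super().__init__()
            # Each depth sample of the A-scan is treated as a one-feature time step.
            self.blstm = nn.LSTM(input_size=1, hidden_size=hidden_size,
                                 batch_first=True, bidirectional=True)
            self.fc = nn.Linear(2 * hidden_size, num_classes)

        def forward(self, x):                 # x: (batch, scan_length)
            x = x.unsqueeze(-1)               # -> (batch, scan_length, 1)
            out, _ = self.blstm(x)
            return self.fc(out[:, -1, :])     # classify from the final time step

    model = AScanBLSTM()
    dummy = torch.randn(8, 512)               # 8 A-scans of 512 depth samples (assumed length)
    logits = model(dummy)                      # (8, 2) class scores, e.g., nodule vs. normal tissue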

2017 ◽  
Vol 32 (2) ◽  
pp. 179-190 ◽  
Author(s):  
Gretchen B Salsich ◽  
Barbara Yemm ◽  
Karen Steger-May ◽  
Catherine E Lang ◽  
Linda R Van Dillen

Objective: To investigate whether a novel, task-specific training intervention that focused on correcting pain-producing movement patterns was feasible and whether it would improve hip and knee kinematics, pain, and function in women with patellofemoral pain. Design: Prospective, non-randomized, within-group, double baseline, feasibility intervention study. Subjects: A total of 25 women with patellofemoral pain were enrolled. Intervention: The intervention, delivered 2×/week for six weeks, consisted of supervised, high-repetition practice of daily weight-bearing and recreational activities. Activities were selected and progressed based on participants’ interest and ability to maintain optimal alignment without increasing pain. Main measures: Primary feasibility outcomes were recruitment, retention, adherence, and treatment credibility (Credibility/Expectancy Questionnaire). Secondary outcomes assessing intervention effects were hip and knee kinematics, pain (visual analog scale: current, average in past week, maximum in past week), and function (Patient-Specific Functional Scale). Results: A total of 25 participants were recruited and 23 were retained (92% retention). Self-reported average daily adherence was 79% and participants were able to perform their prescribed home program correctly (reduced hip and knee frontal plane angles) by the second intervention visit. On average, treatment credibility was rated 25 (out of 27) and expectancy was rated 22 (out of 27). Hip and knee kinematics, pain, and function improved following the intervention when compared to the control phase. Conclusion: Based on the feasibility outcomes and preliminary intervention effects, this task-specific training intervention warrants further investigation and should be evaluated in a larger, randomized clinical trial.


2022 ◽  
Vol 9 (1) ◽  
Author(s):  
Marcos Fabietti ◽  
Mufti Mahmud ◽  
Ahmad Lotfi

Abstract: Acquisition of neuronal signals involves a wide range of devices with specific electrical properties. Combined with other physiological sources within the body, the signals sensed by the devices are often distorted. Sometimes these distortions are visually identifiable; at other times they overlap with the signal characteristics, making them very difficult to detect. To remove these distortions, the recordings are visually inspected and manually processed. However, this manual annotation process is time-consuming, and automatic computational methods are needed to identify and remove these artefacts. Most of the existing artefact removal approaches rely on additional information from other recorded channels and fail when global artefacts are present or the affected channels constitute the majority of the recording system. Addressing this issue, this paper reports a novel channel-independent machine learning model to accurately identify and replace the artefactual segments present in the signals. Discarding these artefactual segments, as the existing approaches do, causes discontinuities in the reproduced signals, which may introduce errors in subsequent analyses. To avoid this, the proposed method predicts multiple values of the artefactual region using a long short-term memory network to recreate the temporal and spectral properties of the recorded signal. The method has been tested on two open-access data sets and incorporated into the open-access SANTIA (SigMate Advanced: a Novel Tool for Identification of Artefacts in Neuronal Signals) toolbox for community use.
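A minimal sketch of the prediction step described above, assuming the artefactual segment is replaced by values predicted from the preceding clean window. It is an illustrative PyTorch fragment, not the SANTIA implementation; window length, horizon and layer sizes are assumptions.

    # LSTM that predicts the next 'horizon' samples from a clean context window,
    # so the predicted values can be spliced over an artefactual segment.
    import torch
    import torch.nn as nn

    class SegmentPredictor(nn.Module):
        def __init__(self, hidden=128, horizon=32):
            super().__init__()
            self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
            self.head = nn.Linear(hidden, horizon)    # emit all future samples at once

        def forward(self, context):                   # context: (batch, window_length)
            out, _ = self.lstm(context.unsqueeze(-1))
            return self.head(out[:, -1, :])           # (batch, horizon)

    predictor = SegmentPredictor()
    clean_context = torch.randn(4, 256)               # preceding clean signal (assumed shape)
    replacement = predictor(clean_context)             # candidate values for the artefactual region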


2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
Juhong Namgung ◽  
Siwoon Son ◽  
Yang-Sae Moon

In recent years, cyberattacks using command and control (C&C) servers have significantly increased. To hide their C&C servers, attackers often use a domain generation algorithm (DGA), which automatically generates domain names for the C&C servers. Accordingly, extensive research on DGA domain detection has been conducted. However, existing methods cannot accurately detect continuously generated DGA domains and can easily be evaded by an attacker. Recently, long short-term memory (LSTM)-based deep learning models have been introduced to detect DGA domains in real time using only domain names, without feature extraction or additional information. In this paper, we propose an efficient DGA domain detection method based on bidirectional LSTM (BiLSTM), which learns bidirectional information as opposed to the unidirectional information learned by LSTM. We further maximize the detection performance with a convolutional neural network (CNN) + BiLSTM ensemble model using an attention mechanism, which allows the model to learn both local and global information in a domain sequence. Experimental results show that existing CNN and LSTM models achieved F1-scores of 0.9384 and 0.9597, respectively, while the proposed BiLSTM and ensemble models achieved higher F1-scores of 0.9618 and 0.9666, respectively. In addition, the ensemble model achieved the best performance for most DGA domain classes, enabling more accurate DGA domain detection than existing models.
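A minimal sketch of a character-level CNN + BiLSTM with a simple additive attention, in the spirit of the ensemble described above. Vocabulary size, embedding width, filter counts and the attention form are illustrative assumptions, not the authors' exact architecture.

    # Character-level DGA detector: embedding -> 1-D convolution (local n-grams)
    # -> BiLSTM (global context) -> attention pooling -> class scores.
    import torch
    import torch.nn as nn

    class DGADetector(nn.Module):
        def __init__(self, vocab=40, emb=32, conv_ch=64, hidden=64, num_classes=2):
            super().__init__()
            self.emb = nn.Embedding(vocab, emb, padding_idx=0)
            self.conv = nn.Conv1d(emb, conv_ch, kernel_size=3, padding=1)
            self.bilstm = nn.LSTM(conv_ch, hidden, batch_first=True, bidirectional=True)
            self.attn = nn.Linear(2 * hidden, 1)       # one attention score per position
            self.out = nn.Linear(2 * hidden, num_classes)

        def forward(self, tokens):                      # tokens: (batch, domain_length) character ids
            x = self.emb(tokens).transpose(1, 2)        # (batch, emb, length) for Conv1d
            x = torch.relu(self.conv(x)).transpose(1, 2)
            h, _ = self.bilstm(x)                       # (batch, length, 2*hidden)
            w = torch.softmax(self.attn(h), dim=1)      # attention weights over positions
            ctx = (w * h).sum(dim=1)                    # weighted summary of the domain
            return self.out(ctx)

    model = DGADetector()
    ids = torch.randint(1, 40, (16, 30))                 # 16 domains, 30 characters each (assumed)
    scores = model(ids)                                   # benign vs. DGA logits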


1999 ◽  
Vol 15 (3) ◽  
pp. 318-329 ◽  
Author(s):  
Bing Yu ◽  
David Gabriel ◽  
Larry Noble ◽  
Kai-Nan An

The purposes of this study were (a) to develop a procedure for objectively determining the optimum cutoff frequency for the Butterworth low-pass digital filter, and (b) to evaluate the cutoff frequencies derived from the residual analysis. A set of knee flexion-extension angle data in normal gait was used as the standard data set. The standard data were sampled at different sampling frequencies. Random errors with different magnitudes were added to the standard data to create different sets of raw data with a given sampling frequency. Each raw data set was filtered through a Butterworth low-pass digital filter at different cutoff frequencies. The cutoff frequency corresponding to the minimum error in the second time derivatives for a given set of raw data was considered the optimum for that set of raw data. A procedure for estimating the optimum cutoff frequency from the sampling frequency and the estimated relative mean error in the raw data set was developed. The estimated optimum cutoff frequency was significantly correlated with the true optimum cutoff frequency, with a coefficient of determination of 0.96. This procedure was applied to estimate the optimum cutoff frequency for another set of kinematic data. The calculated accelerations of the filtered data essentially matched the measured acceleration curve. There was no correlation between the cutoff frequency derived from the residual analysis and the true optimum cutoff frequency. The cutoff frequencies derived from the residual analysis were significantly lower than the optimum, especially when the sampling frequency was high.
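A minimal sketch of the filtering step this study builds on: a zero-lag Butterworth low-pass filter applied at several candidate cutoff frequencies, with the second time derivative computed from the filtered signal. The sampling rate, test signal and cutoff grid are assumptions for illustration only.

    # Zero-lag (forward-backward) Butterworth low-pass filtering of a noisy angle signal,
    # followed by numerical double differentiation to obtain accelerations.
    import numpy as np
    from scipy.signal import butter, filtfilt

    fs = 100.0                                    # sampling frequency in Hz (assumed)
    t = np.arange(0, 2, 1 / fs)
    raw = np.sin(2 * np.pi * 1.2 * t) + 0.02 * np.random.randn(t.size)   # noisy "knee angle"

    def lowpass(signal, cutoff, fs, order=2):
        b, a = butter(order, cutoff / (fs / 2))   # cutoff normalized to the Nyquist frequency
        return filtfilt(b, a, signal)             # forward-backward pass removes phase lag

    for cutoff in (4.0, 6.0, 8.0):                # candidate cutoff frequencies in Hz
        smoothed = lowpass(raw, cutoff, fs)
        accel = np.gradient(np.gradient(smoothed, 1 / fs), 1 / fs)  # second time derivative
        print(cutoff, np.abs(accel).max())

In the study, the cutoff that minimizes the error in these second derivatives relative to the noise-free standard data defines the true optimum.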


2022 ◽  
Author(s):  
Ebrahim Balouji

In this research work, deep machine learning-based methods, together with a novel data augmentation, are developed for predicting flicker, voltage dip, harmonics and interharmonics originating from highly time-varying electric arc furnace (EAF) currents and voltages. The aim of the prediction is to counteract both the response and reaction time delays of active power filters (APFs) specifically designed for electric arc furnaces. Multiple synchronous reference frame (MSRF) analysis is used to decompose the frequency components of the EAF current and voltage waveforms into dqo components. Then, using low-pass filters and prediction of the future values of these dqo components, reference signals for the APFs are generated. Three different methods have been developed. In two of them, a low-pass Butterworth filter is used together with either a linear FIR-based predictor or a long short-term memory (LSTM) network for prediction. In the third method, a deep convolutional neural network (CNN) combined with an LSTM network is used to filter and predict at the same time. For a 40 ms prediction horizon, the proposed methods provide prediction errors of 2.06%, 0.31% and 0.99% in the dqo components for the Butterworth filter with linear prediction, the Butterworth filter with LSTM, and the CNN with LSTM, respectively. Reconstructing the predicted waveforms of flicker, harmonics and interharmonics resulted in errors of 8.5%, 1.90% and 3.2% for the above-mentioned methods. Finally, a Simulink- and GPU-based implementation of the predictive APF using the Butterworth filter + LSTM, compared with a trivial APF, resulted in 96% and 60% efficiency, respectively, in compensating EAF current interharmonics.
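A minimal sketch of one building block of the reference-signal generation described above: a synchronous-reference-frame (dq) decomposition of a three-phase current followed by Butterworth low-pass filtering. The amplitudes, frequencies, filter order and cutoff are assumptions, and the prediction stage is omitted.

    # dq decomposition of a three-phase current in a frame rotating at the fundamental,
    # then low-pass filtering of the slowly varying d and q components.
    import numpy as np
    from scipy.signal import butter, filtfilt

    fs, f1 = 10_000, 50                            # sampling rate and fundamental frequency (assumed)
    t = np.arange(0, 0.2, 1 / fs)
    ia = np.cos(2 * np.pi * f1 * t) + 0.1 * np.cos(2 * np.pi * 3.5 * f1 * t)   # with interharmonic content
    ib = np.cos(2 * np.pi * f1 * t - 2 * np.pi / 3)
    ic = np.cos(2 * np.pi * f1 * t + 2 * np.pi / 3)

    theta = 2 * np.pi * f1 * t                     # reference frame aligned with the fundamental
    d = 2 / 3 * (ia * np.cos(theta) + ib * np.cos(theta - 2 * np.pi / 3) + ic * np.cos(theta + 2 * np.pi / 3))
    q = -2 / 3 * (ia * np.sin(theta) + ib * np.sin(theta - 2 * np.pi / 3) + ic * np.sin(theta + 2 * np.pi / 3))

    b, a = butter(2, 20 / (fs / 2))                # 20 Hz low-pass (assumed order and cutoff)
    d_dc, q_dc = filtfilt(b, a, d), filtfilt(b, a, q)   # components a predictor would be trained to forecast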


2021 ◽  
Vol 25 (3) ◽  
pp. 229-235
Author(s):  
Sung-Jong Eun ◽  
Jun Young Lee ◽  
Han Jung ◽  
Khae-Hawn Kim

Purpose: In this study, a urinary management system was established to collect and analyze urination time and interval data detected through patient-worn smart bands, and the results of the analysis were presented through a web-based visualization to enable monitoring and appropriate feedback for urological patients. Methods: We designed a device that can recognize urination time and spacing based on patient-specific posture and consistent posture changes, and we built a urination patient management system based on this device. The order of body movements during urination was consistent in terms of time characteristics; therefore, the sequential data were analyzed and urinary activity was recognized using recurrent neural networks and long short-term memory (LSTM) networks. The results were implemented as a web (HTML5) service program, enabling visual support for clinical diagnostic assistance. Results: Experiments were conducted to evaluate the performance of the proposed recognition techniques. The effectiveness of smart-band monitoring of urination was evaluated in 30 men (average age, 28.73 years; range, 26–34 years) without urination problems. The entire experiment lasted a total of 3 days. The final accuracy of the algorithm was calculated based on urological clinical guidelines. This experiment showed a high average accuracy of 95.8%, demonstrating the soundness of the proposed algorithm. Conclusions: This urinary activity management system showed high accuracy and was applied in a clinical environment to characterize patients' urinary patterns. As wearable devices are developed and generalized, algorithms capable of detecting certain sequential body motor patterns that reflect certain physiological behaviors can become a new methodology for studying human physiological behaviors. It is also thought that these systems will have a significant impact on diagnostic assistance for clinicians.
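A minimal sketch of the recognition step, assuming the smart band delivers fixed-length windows of tri-axial motion features and an LSTM labels each window as voiding or non-voiding. Feature count, window length and layer sizes are illustrative assumptions, not the system's actual configuration.

    # LSTM classifier over short windows of smart-band motion features.
    import torch
    import torch.nn as nn

    class VoidingDetector(nn.Module):
        def __init__(self, n_features=3, hidden=32):
            super().__init__()
            self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
            self.cls = nn.Linear(hidden, 2)              # voiding vs. non-voiding

        def forward(self, window):                       # window: (batch, time_steps, n_features)
            out, _ = self.lstm(window)
            return self.cls(out[:, -1, :])

    detector = VoidingDetector()
    sample = torch.randn(4, 200, 3)                      # 4 windows, 200 samples, 3 axes (assumed)
    logits = detector(sample)                            # per-window voiding scores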


Electrogastrograms (EGGs) are recorded by placing electrocardiogram (ECG)-type electrodes on the surface of the epigastrium. The EGG is one of several biological signals that can be recorded from electrodes on the epigastrium. Some of these signals, such as the ECG, are much stronger than the EGG. The EGG signal is relatively low amplitude, ranging from approximately 100 to 500 μV. Thus, the EGG signal must be properly amplified and filtered for quality recordings. To reduce baseline drift and to remove unwanted cardiac and respiratory signals, a 0.016-Hz high-pass filter and a 0.25-Hz low-pass filter are used. These filters create a bandpass, or window, from approximately 1 cycle per minute (cpm) to 15 cpm through which the desired gastric myoelectrical signals pass during the EGG recording. In this chapter, the equipment needed to record the EGG, the EGG recording procedure, and how to identify and reduce artifacts in EGG recordings are discussed. For additional information on the acquisition and analysis of EGG data, the reader is referred to several reviews and texts. High-quality, fresh, disposable electrodes such as those used for electrocardiogram (ECG) recording are recommended. To minimize artifacts in the EGG recording caused by electrode movement on the skin, it is best to use electrodes that adhere very well to the skin (e.g., Cleartrace; ConMed Corp., Utica, NY; or BioTac; Graphic Controls, Inc., Buffalo, NY). Reusable silver/silver chloride electrodes are available (e.g., 1081 Biode; UFI, Morro Bay, CA). The size of the electrode surface is not important, but the electrical stability of the electrode is important. The electrodes should show little bias or offset potential because the EGG signal is relatively low in amplitude and low in frequency. A high-quality recording system is needed to amplify and process the 100- to 500-μV EGG signal that ranges from 1.0 to 15.0 cpm. Some older physiological polygraphs have appropriate amplifiers and filters that can be used to record the EGG. Several medical device companies produce complete EGG recording and analysis systems that include appropriate amplifiers and filters with analog-to-digital boards that digitize the EGG signal for analysis with software (e.g., 3CPM Company, Crystal Bay, NV; Medtronic, Shoreview, MN).
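A minimal sketch of the band-pass described above (0.016-0.25 Hz, roughly 1-15 cpm) applied to a digitized EGG trace with SciPy. The sampling rate, filter order and synthetic signal are assumptions for illustration.

    # Band-pass filtering of a synthetic EGG trace to keep roughly 1-15 cpm components.
    import numpy as np
    from scipy.signal import butter, filtfilt

    fs = 4.0                                      # Hz; assumed EGG sampling rate
    t = np.arange(0, 600, 1 / fs)                 # 10 minutes of signal
    egg = 200e-6 * np.sin(2 * np.pi * 0.05 * t)   # ~3 cpm gastric rhythm, in volts
    egg += 50e-6 * np.random.randn(t.size)        # broadband noise and cardiac/respiratory residue

    b, a = butter(2, [0.016 / (fs / 2), 0.25 / (fs / 2)], btype="bandpass")
    egg_filtered = filtfilt(b, a, egg)            # 0.016-0.25 Hz window, i.e., about 1-15 cpm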


Author(s):  
Rawan AlSaad ◽  
Qutaibah Malluhi ◽  
Ibrahim Janahi ◽  
Sabri Boughorbel

Abstract. Background: Predictive modeling with longitudinal electronic health record (EHR) data offers great promise for accelerating personalized medicine and better informing clinical decision-making. Recently, deep learning models have achieved state-of-the-art performance for many healthcare prediction tasks. However, deep models lack interpretability, which is integral to successful decision-making and can lead to better patient care. In this paper, we build upon the contextual decomposition (CD) method, an algorithm for producing importance scores from long short-term memory networks (LSTMs). We extend the method to bidirectional LSTMs (BiLSTMs) and use it in the context of predicting future clinical outcomes from patients' historical EHR visits. Methods: We use a real EHR dataset comprising 11,071 patients to evaluate and compare CD interpretations from LSTM and BiLSTM models. First, we train LSTM and BiLSTM models for the task of predicting which pre-school children with respiratory system-related complications will have asthma at school age. After that, we conduct quantitative and qualitative analyses to evaluate the CD interpretations produced from the trained models. In addition, we develop an interactive visualization to demonstrate the utility of CD scores in explaining predicted outcomes. Results: Our experimental evaluation demonstrates that whenever a clear visit-level pattern exists, the models learn that pattern and the contextual decomposition can appropriately attribute the prediction to the correct pattern. In addition, the results confirm that the CD scores agree to a large extent with the importance scores generated using logistic regression coefficients. Our main insight was that rather than interpreting the attribution of individual visits to the predicted outcome, we could instead attribute a model's prediction to a group of visits. Conclusion: We presented quantitative and qualitative evidence that CD interpretations can explain patient-specific predictions using CD attributions of individual visits or a group of visits.
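A minimal sketch of the kind of model being interpreted above: a bidirectional LSTM over a sequence of per-visit feature vectors with a single risk output. The contextual decomposition scoring itself is not reproduced here; the feature dimension, visit count and layer sizes are illustrative assumptions.

    # BiLSTM over per-visit multi-hot code vectors, producing one outcome probability.
    import torch
    import torch.nn as nn

    class VisitBiLSTM(nn.Module):
        def __init__(self, n_codes=100, hidden=64):
            super().__init__()
            self.bilstm = nn.LSTM(n_codes, hidden, batch_first=True, bidirectional=True)
            self.risk = nn.Linear(2 * hidden, 1)

        def forward(self, visits):                        # visits: (batch, n_visits, n_codes)
            h, _ = self.bilstm(visits)
            return torch.sigmoid(self.risk(h[:, -1, :]))  # e.g., probability of school-age asthma

    model = VisitBiLSTM()
    patient = torch.randint(0, 2, (1, 12, 100)).float()   # 12 visits, 100 diagnosis codes (assumed)
    probability = model(patient)                           # the prediction CD would attribute to visits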

