Expert-level Automated Biomarker Identification in Optical Coherence Tomography Scans

2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Thomas Kurmann ◽  
Siqing Yu ◽  
Pablo Márquez-Neila ◽  
Andreas Ebneter ◽  
Martin Zinkernagel ◽  
...  

Abstract: In ophthalmology, retinal biological markers, or biomarkers, play a critical role in the management of chronic eye conditions and in the development of new therapeutics. While many imaging technologies in use today can visualize these biomarkers, Optical Coherence Tomography (OCT) is often the tool of choice owing to its ability to image retinal structures in three dimensions at micrometer resolution. However, with widespread use in clinical routine and the growing prevalence of chronic retinal conditions, the quantity of scans acquired worldwide is surpassing the capacity of retinal specialists to inspect them in meaningful ways. Automated analysis of scans using machine learning algorithms provides a cost-effective and reliable alternative that can assist ophthalmologists in clinical routine and research. We present a machine learning method capable of consistently identifying a wide range of common retinal biomarkers from OCT scans. Our approach avoids the need for costly segmentation annotations and allows scans to be characterized by biomarker distributions, which can then be used to classify scans based on their underlying pathology in a device-independent way.
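
A minimal sketch of the kind of pipeline the abstract describes: a multi-label classifier that maps an OCT B-scan to per-biomarker probabilities, so a volume can be summarized by a biomarker distribution without segmentation annotations. This is an assumption for illustration, not the authors' published model; the network shape and the biomarker count are placeholders.

```python
import torch
import torch.nn as nn

N_BIOMARKERS = 11  # hypothetical number of common retinal biomarkers

class BiomarkerNet(nn.Module):
    """Illustrative multi-label classifier for single-channel B-scans."""
    def __init__(self, n_biomarkers: int = N_BIOMARKERS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_biomarkers)

    def forward(self, x):
        z = self.features(x).flatten(1)
        return self.head(z)  # logits; sigmoid yields per-biomarker probabilities

model = BiomarkerNet()
loss_fn = nn.BCEWithLogitsLoss()  # biomarkers treated as independent presence/absence labels

scans = torch.randn(4, 1, 224, 224)                        # toy batch of B-scans
labels = torch.randint(0, 2, (4, N_BIOMARKERS)).float()    # toy multi-label ground truth
loss = loss_fn(model(scans), labels)
probs = torch.sigmoid(model(scans))  # per-scan biomarker distribution
```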

Author(s):  
Anil Kumar ◽  
Pinhas Ben-Tzvi

This paper presents a new cost-effective wireless telemetry system capable of estimating ambient air turbulence using RC helicopters. The proposed telemetry system correlates the RC helicopter's flight dynamics with the ship air wake patterns generated by cruising naval vessels. The telemetry system consists of two instrumentation units, each equipped with aviation-grade INS/IMU sensors, to measure the dynamics of the helicopter with respect to the naval vessel of interest. The system extracts ship air wake patterns by removing the helicopter's own dynamic effects from the actual measurements. This paper presents a comprehensive comparison of popular machine learning algorithms for eliminating the effects of pilot inputs from the helicopter's dynamics measurements. The system was tested on data collected in a wide range of wind conditions generated by a modified YP676 naval training vessel in the Chesapeake Bay area over a period of more than a year.
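
One plausible way to frame the pilot-input removal, sketched under assumption (the paper's actual pipeline is not specified here): regress the measured dynamics on the pilot control channels with several off-the-shelf regressors, compare them by cross-validated fit, and treat the residual of the best model as the air-wake-induced component. The variable names and synthetic data are purely illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
pilot_inputs = rng.normal(size=(5000, 4))  # e.g., cyclic, collective, pedal, throttle
imu_response = pilot_inputs @ rng.normal(size=4) + 0.1 * rng.normal(size=5000)

# Compare candidate regressors by cross-validated R^2.
for name, reg in [("ridge", Ridge()),
                  ("forest", RandomForestRegressor(n_estimators=50))]:
    score = cross_val_score(reg, pilot_inputs, imu_response, cv=5).mean()
    print(f"{name}: R^2 = {score:.3f}")

# Residual after subtracting the predicted pilot-induced dynamics
# approximates the air-wake contribution.
best = Ridge().fit(pilot_inputs, imu_response)
wake_estimate = imu_response - best.predict(pilot_inputs)
```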


2019 ◽  
Vol 14 (5) ◽  
pp. 406-421 ◽  
Author(s):  
Ting-He Zhang ◽  
Shao-Wu Zhang

Background: Revealing the subcellular location of a newly discovered protein can bring insight into its function and guide research at the cellular level. The experimental methods currently used to identify protein subcellular locations are both time-consuming and expensive; it is therefore highly desirable to develop computational methods for identifying protein subcellular locations efficiently and effectively. In particular, the rapidly increasing number of protein sequences entering genome databases has called for the development of automated analysis methods. Methods: In this review, we describe recent advances in predicting protein subcellular locations with machine learning from the following aspects: i) construction of protein subcellular location benchmark datasets, ii) protein feature representation and feature descriptors, iii) common machine learning algorithms, iv) cross-validation test methods and assessment metrics, and v) web servers. Result & Conclusion: Concomitant with the large number of protein sequences generated by high-throughput technologies, four future directions for predicting protein subcellular locations with machine learning deserve attention. The first is the selection of novel and effective features (e.g., statistical, physicochemical, and evolutionary) from protein sequences and structures. The second is the feature fusion strategy. The third is the design of a powerful predictor, and the fourth is the prediction of multiple protein location sites.
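
A minimal, assumption-laden sketch of the generic pipeline the review covers (sequence, feature vector, classifier, cross-validated assessment). Amino acid composition stands in for the richer descriptors surveyed; the sequences and labels below are toy data, not a real benchmark.

```python
from collections import Counter
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def aa_composition(seq: str) -> np.ndarray:
    """20-dimensional amino acid composition feature vector."""
    counts = Counter(seq)
    return np.array([counts[a] / len(seq) for a in AMINO_ACIDS])

# Toy benchmark: sequences paired with hypothetical location labels.
sequences = ["MKTAYIAKQR", "GAVLIPFMW", "MDEKRNSTQH", "CCGPAVLIWF"] * 25
labels = np.array([0, 1] * 50)  # e.g., 0 = cytoplasm, 1 = membrane (illustrative)

X = np.stack([aa_composition(s) for s in sequences])
print(cross_val_score(SVC(kernel="rbf"), X, labels, cv=5).mean())
```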


2021 ◽  
Vol 10 (2) ◽  
pp. 231
Author(s):  
Giacinto Triolo ◽  
Piero Barboni ◽  
Giacomo Savini ◽  
Francesco De Gaetano ◽  
Gaspare Monaco ◽  
...  

The introduction of anterior segment optical coherence tomography (AS-OCT) has led to improved assessment of the anatomy of the iridocorneal angle and to the diagnosis of several mechanisms of angle closure, which often result in raised intraocular pressure (IOP). Continuous advancements in AS-OCT technology and software, along with extensive research in the field, have yielded a wide range of parameters that may be used to diagnose and follow up patients with this spectrum of diseases. However, the clinical relevance of such variables still needs to be explored thoroughly. The aim of the present review is to summarize the current evidence supporting the use of AS-OCT for the diagnosis and follow-up of several iridocorneal angle and anterior chamber alterations, focusing on the advantages and downsides of this technology.


2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Peter M. Maloca ◽  
Philipp L. Müller ◽  
Aaron Y. Lee ◽  
Adnan Tufail ◽  
Konstantinos Balaskas ◽  
...  

Abstract: Machine learning has greatly facilitated the analysis of medical data, yet its internal operations usually remain opaque. To better understand these opaque procedures, a convolutional neural network for optical coherence tomography image segmentation was enhanced with a Traceable Relevance Explainability (T-REX) technique. The proposed application was based on three components: ground truth generation by multiple graders, calculation of Hamming distances among graders and the machine learning algorithm, and a smart data visualization ('neural recording'). An overall average variability of 1.75% between the human graders and the algorithm was found, slightly lower than the 2.02% variability among the human graders. The ambiguity in the ground truth had a noteworthy impact on the machine learning results, which could be visualized. The convolutional neural network struck a balance between the graders and allowed for modifiable predictions depending on the compartment. Using the proposed T-REX setup, machine learning processes could be rendered more transparent and understandable, possibly leading to optimized applications.
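
The Hamming-distance component lends itself to a short sketch. The following is an illustrative reconstruction, not the published T-REX code: pixel-wise disagreement is computed between every pair of grader segmentations and between each grader and the network output, which yields the kind of inter-grader versus grader-algorithm variability figures quoted above. Mask sizes and label counts are placeholders.

```python
import itertools
import numpy as np

def hamming(a: np.ndarray, b: np.ndarray) -> float:
    """Fraction of pixels on which two label maps disagree."""
    return float(np.mean(a != b))

rng = np.random.default_rng(1)
graders = {f"grader_{i}": rng.integers(0, 3, size=(256, 256)) for i in range(3)}
algorithm = rng.integers(0, 3, size=(256, 256))  # stand-in for the CNN output

# Variability among human graders.
for (na, a), (nb, b) in itertools.combinations(graders.items(), 2):
    print(f"{na} vs {nb}: {hamming(a, b):.4f}")

# Variability between each grader and the algorithm.
for name, mask in graders.items():
    print(f"{name} vs algorithm: {hamming(mask, algorithm):.4f}")
```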


2021 ◽  
pp. 000370282110345
Author(s):  
Tatu Rojalin ◽  
Dexter Antonio ◽  
Ambarish Kulkarni ◽  
Randy P. Carney

Surface-enhanced Raman scattering (SERS) is a powerful technique for sensitive label-free analysis of chemical and biological samples. While much recent work has established sophisticated automation routines using machine learning and related artificial intelligence methods, these efforts have largely focused on downstream processing (e.g., classification tasks) of previously collected data. While fully automated analysis pipelines are desirable, current progress is limited by cumbersome and manually intensive sample preparation and data collection steps. Specifically, a typical lab-scale SERS experiment requires the user to evaluate the quality and reliability of the measurement (i.e., the spectra) as the data are being collected. This need for expert user intuition is a major bottleneck that limits the applicability of SERS-based diagnostics for point-of-care clinical applications, where trained spectroscopists are likely unavailable. While application-agnostic numerical approaches (e.g., signal-to-noise thresholding) are useful, there is an urgent need to develop algorithms that leverage expert user intuition and domain knowledge to simplify and accelerate data collection steps. To address this challenge, in this work we introduce a machine learning-assisted method at the acquisition stage. We tested six common algorithms to determine which performs best at spectral quality judgment. For adoption into future automation platforms, we developed an open-source Python package tailored for rapid expert user annotation to train machine learning algorithms. We expect that this approach of using machine learning to assist in data acquisition can serve as a useful building block for point-of-care SERS diagnostic platforms.
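
A hedged sketch of the acquisition-stage idea: train classifiers on expert-labeled spectra (acceptable versus reject) and compare them by cross-validation, so low-quality SERS measurements can be flagged as they are collected. The synthetic data and the three models shown are illustrative; they are not the paper's dataset or its six algorithms verbatim.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n_spectra, n_wavenumbers = 400, 600
spectra = rng.normal(size=(n_spectra, n_wavenumbers))     # stand-in SERS spectra
quality = rng.integers(0, 2, size=n_spectra)              # 1 = accept, 0 = reject (expert label)

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "svm": SVC(),
    "forest": RandomForestClassifier(n_estimators=100),
}
for name, model in models.items():
    acc = cross_val_score(model, spectra, quality, cv=5).mean()
    print(f"{name}: cross-validated accuracy = {acc:.3f}")
```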


Author(s):  
Pratyush Kaware

In this paper, a cost-effective sensor has been implemented to read finger bend signals by attaching the sensor to a finger, so as to classify them based on the degree of bend as well as the joint about which the finger is bent. This was done by testing various machine learning algorithms to find the most accurate and consistent classifier. We found that the Support Vector Machine was the algorithm best suited to classifying our data; using it, we were able to predict the live state of a finger, i.e., the degree of bend and the joints involved. The live voltage values from the sensor were transmitted using a NodeMCU microcontroller, converted to digital form, and uploaded to a database for analysis.
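
A minimal sketch of the classification step, with assumed features and labels: windows of flex-sensor voltage readings, labeled by bend degree and joint, fed to a Support Vector Machine as the paper's reported best performer. The window length and the four-class labeling are placeholders.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(3)
windows = rng.normal(size=(300, 20))        # 20-sample voltage windows from the sensor
bend_class = rng.integers(0, 4, size=300)   # e.g., {joint} x {slight bend, full bend}

X_train, X_test, y_train, y_test = train_test_split(
    windows, bend_class, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```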


2017 ◽  
Vol 6 (4) ◽  
pp. 10 ◽  
Author(s):  
Maximilian W.M. Wintergerst ◽  
Thomas Schultz ◽  
Johannes Birtel ◽  
Alexander K. Schuster ◽  
Norbert Pfeiffer ◽  
...  

2005 ◽  
Vol 46 (11) ◽  
pp. 4147 ◽  
Author(s):  
Zvia Burgansky-Eliash ◽  
Gadi Wollstein ◽  
Tianjiao Chu ◽  
Joseph D. Ramsey ◽  
Clark Glymour ◽  
...  
