Segmentation of Tau-Stained Alzheimer's Brain Tissue Using Convolutional Neural Networks

Author(s): Alexander Wurts, Derek H. Oakley, Bradley T. Hyman, Siddharth Samsi
Author(s): Thulasi Bikku, Jayavarapu Karthik, Ganga Rama Koteswara Rao, K P N V Satya Sree, P V V S Srinivas, ...

IEEE Access, 2019, Vol 7, pp. 51557-51569
Author(s): Sil C. Van De Leemput, Midas Meijs, Ajay Patel, Frederick J. A. Meijer, Bram Van Ginneken, ...

2021
Author(s): Kayla L. Stanke, Ryan Larsen, Laurie Rund, Brian J Leyshon, Allison Louie, ...

Magnetic resonance imaging is an important tool for characterizing volumetric changes in the piglet brain during development. These analyses have been aided by the development of piglet brain atlases based on averages drawn from multiple piglets. Because these atlases typically contain only brain tissue, their use is limited to "brain-extracted" images from which the surrounding tissues have been removed. Brain extractions, or segmentations, are typically performed manually; this approach is time-intensive and can lead to variation between segmentations when multiple raters are used. Automated segmentation processes are therefore important for reducing the time required for analyses and for improving the uniformity of the segmentations. Here we demonstrate the use of region-based convolutional neural networks (RCNNs) on a dataset consisting of 32 piglet brains. The RCNNs are trained on manual segmentations of sets of 27 piglets and then applied to sets of the remaining 5 piglets. The volumes of the machine-generated brain masks are highly correlated with those of the manually generated masks, and visual inspection of the segmentations shows acceptable accuracy. These results demonstrate that neural networks provide a viable tool for the segmentation of piglet brains.
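The volume comparison described above can be sketched in a few lines: count the voxels in each binary brain mask, scale by the per-voxel volume, and correlate machine- against manually-generated volumes. This is a minimal illustration only; the masks, voxel size, and noise model below are hypothetical stand-ins, not the paper's data.

```python
import numpy as np

def mask_volume(mask, voxel_volume_mm3=1.0):
    """Volume of a binary brain mask: voxel count times per-voxel volume."""
    return float(np.count_nonzero(mask)) * voxel_volume_mm3

# Hypothetical manual masks for five held-out subjects.
rng = np.random.default_rng(0)
manual_masks = [rng.random((16, 16, 16)) > 0.5 for _ in range(5)]

# "Machine" masks: copies of the manual masks with ~1% of voxels flipped,
# mimicking small segmentation disagreements.
machine_masks = []
for m in manual_masks:
    noisy = m.copy()
    flip = rng.random(m.shape) < 0.01
    noisy[flip] = ~noisy[flip]
    machine_masks.append(noisy)

manual_vols = np.array([mask_volume(m) for m in manual_masks])
machine_vols = np.array([mask_volume(m) for m in machine_masks])

# Pearson correlation between the two sets of volume estimates.
r = np.corrcoef(manual_vols, machine_vols)[0, 1]
print(f"correlation between volume estimates: r = {r:.3f}")
```

With small, unbiased disagreements between the masks, the volume estimates track each other closely, which is the pattern the abstract reports for the trained RCNNs.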


2019, Vol 64, pp. 77-89
Author(s): N. Khalili, N. Lessmann, E. Turk, N. Claessens, R. de Heus, ...

2020, Vol 2020 (10), pp. 28-1-28-7

Author(s): Kazuki Endo, Masayuki Tanaka, Masatoshi Okutomi

Classification of degraded images is important in practice because images are often degraded by compression, noise, blurring, and other distortions; nevertheless, most research on image classification focuses on clean images without any degradation. Some prior work classifies degraded images with deep convolutional neural networks that pair an image-restoration network with a classification network. This paper proposes an alternative approach that uses the degraded image together with an additional degradation parameter for classification. The proposed classification network therefore has two inputs: the degraded image and the degradation parameter. When the degradation parameters of the degraded images are unknown, an estimation network for the degradation parameters is also incorporated. Experimental results show that the proposed method outperforms a straightforward approach in which the classification network is trained on degraded images alone.
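The two-input idea above can be sketched as a toy forward pass: one branch embeds the degraded image, a second branch embeds the scalar degradation parameter (e.g. a noise level), and the fused features feed a classification head. Layer sizes, weights, and names here are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def classify(image, degradation_param, w_img, w_param, w_out):
    """Toy two-input classifier: fuse image features with a
    degradation-parameter feature, then apply a linear head."""
    img_feat = np.tanh(image.ravel() @ w_img)                      # image branch
    param_feat = np.tanh(np.array([degradation_param]) @ w_param)  # parameter branch
    fused = np.concatenate([img_feat, param_feat])                 # feature fusion
    return softmax(fused @ w_out)

# Random weights for an 8x8 image, 10 classes (hypothetical sizes).
rng = np.random.default_rng(0)
n_pixels, n_classes = 8 * 8, 10
w_img = rng.normal(size=(n_pixels, 16)) * 0.1
w_param = rng.normal(size=(1, 4)) * 0.1
w_out = rng.normal(size=(16 + 4, n_classes)) * 0.1

degraded_image = rng.normal(size=(8, 8))
noise_sigma = 0.25  # known degradation parameter (e.g. noise level)
probs = classify(degraded_image, noise_sigma, w_img, w_param, w_out)
print(probs.shape, probs.sum())
```

When the degradation parameter is unknown, the paper's estimation network would supply `noise_sigma`; in this sketch that would be another small regressor mapping the degraded image to a parameter estimate.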

