target modality
Recently Published Documents

Total documents: 17 (five years: 4)
H-index: 6 (five years: 1)

2020 · Vol 34 (07) · pp. 10486-10493
Author(s): Bing Cao, Han Zhang, Nannan Wang, Xinbo Gao, Dinggang Shen

In various clinical scenarios, medical images are crucial for disease diagnosis and treatment. Different modalities of medical images provide complementary information and jointly help doctors make accurate clinical decisions. However, due to clinical and practical restrictions, certain imaging modalities may be unavailable or incomplete. To impute missing data with adequate clinical accuracy, we propose a framework called self-supervised collaborative learning to synthesize missing modalities for medical images. The proposed method comprehensively utilizes all available information correlated with the target modality from multi-source-modality images to generate any missing modality within a single model. Different from existing methods, we introduce an auto-encoder network as a novel, self-supervised constraint that provides target-modality-specific information to guide generator training. In addition, we design a modality mask vector as the target-modality label. With experiments on multiple medical image databases, we demonstrate the strong generalization ability of our method as well as its advantages over other state-of-the-art approaches.
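The abstract describes a generator conditioned on the available modalities plus a modality mask vector, with a target-modality auto-encoder supplying a self-supervised feature constraint. The following is a minimal sketch of that idea only; the layer sizes, the names `Generator` and `TargetAutoEncoder`, and the L1 feature loss are illustrative assumptions, not the authors' implementation.

```python
# Sketch (PyTorch): multi-modality inputs plus a modality mask vector condition
# a single generator; an auto-encoder trained on the target modality adds a
# feature-space constraint. All shapes and loss weights are assumptions.
import torch
import torch.nn as nn

N_MODALITIES = 4  # e.g. T1, T2, FLAIR, CT (assumed)

class Generator(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        # input: available modalities (missing ones zero-filled) + mask as extra channels
        self.net = nn.Sequential(
            nn.Conv2d(N_MODALITIES * 2, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1),
        )

    def forward(self, images, mask):
        # images: (B, N_MODALITIES, H, W); mask: (B, N_MODALITIES), 1 = available
        b, _, h, w = images.shape
        mask_map = mask.view(b, N_MODALITIES, 1, 1).expand(-1, -1, h, w)
        return self.net(torch.cat([images * mask_map, mask_map], dim=1))

class TargetAutoEncoder(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU())
        self.dec = nn.Conv2d(ch, 1, 3, padding=1)

    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

# One toy training step: pixel loss plus auto-encoder feature constraint.
gen, ae = Generator(), TargetAutoEncoder()
images = torch.randn(2, N_MODALITIES, 64, 64)               # toy batch
mask = torch.tensor([[1., 1., 0., 1.], [1., 0., 1., 1.]])   # 0 marks the missing modality
target = torch.randn(2, 1, 64, 64)                          # ground-truth target modality

fake = gen(images, mask)
_, z_fake = ae(fake)
_, z_real = ae(target)
loss = nn.functional.l1_loss(fake, target) + 0.1 * nn.functional.l1_loss(z_fake, z_real)
loss.backward()
```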


2020 · Vol 34 (01) · pp. 775-783
Author(s): Kang Li, Lequan Yu, Shujun Wang, Pheng-Ann Heng

The success of deep convolutional neural networks is partially attributed to the massive amount of annotated training data. In practice, however, medical data annotations are usually expensive and time-consuming to obtain. Considering that multi-modality data with the same anatomic structures are widely available in clinical routine, in this paper we aim to exploit the prior knowledge (e.g., shape priors) learned from one modality (a.k.a. the assistant modality) to improve segmentation performance on another modality (a.k.a. the target modality) and make up for the annotation scarcity. To alleviate the learning difficulties caused by modality-specific appearance discrepancies, we first present an Image Alignment Module (IAM) to narrow the appearance gap between assistant- and target-modality data. We then propose a novel Mutual Knowledge Distillation (MKD) scheme to thoroughly exploit the modality-shared knowledge and facilitate target-modality segmentation. Specifically, we formulate our framework as an integration of two individual segmentors. Each segmentor not only explicitly extracts one modality's knowledge from the corresponding annotations, but also implicitly explores the other modality's knowledge from its counterpart in a mutually guided manner. The ensemble of the two segmentors further integrates the knowledge from both modalities and generates reliable segmentation results on the target modality. Experimental results on the public multi-class cardiac segmentation dataset MM-WHS 2017 show that our method achieves large improvements on CT segmentation by utilizing additional MRI data and outperforms other state-of-the-art multi-modality learning methods.
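The core of the described scheme is two segmentors that learn from their own annotations while mimicking each other's soft predictions. Below is a hedged sketch of that mutual-distillation idea; the tiny stand-in networks, the KL formulation, the temperature, and the loss weights are assumptions for illustration, not the paper's actual MKD implementation.

```python
# Sketch: mutual knowledge distillation between an assistant-modality and a
# target-modality segmentor. Each network is trained on its own labels and
# additionally mimics the other's softened prediction on the same image.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 4  # e.g. cardiac substructures (assumed)

def make_segmentor():
    # stand-in for a full segmentation network (e.g. a U-Net)
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, NUM_CLASSES, 3, padding=1),
    )

seg_assist, seg_target = make_segmentor(), make_segmentor()

def mutual_kd_step(x_assist, y_assist, x_target, y_target, temperature=2.0, alpha=0.5):
    """One mutual-distillation step (sketch): supervised loss on each modality's
    own annotations plus KL terms where each segmentor mimics its counterpart."""
    logits_a = seg_assist(x_assist)
    logits_t = seg_target(x_target)

    # supervised terms on each modality's own annotations
    sup = F.cross_entropy(logits_a, y_assist) + F.cross_entropy(logits_t, y_target)

    # mutual terms: each network also serves as a (frozen) teacher on the other's image
    with torch.no_grad():
        soft_a_on_t = F.softmax(seg_assist(x_target) / temperature, dim=1)
        soft_t_on_a = F.softmax(seg_target(x_assist) / temperature, dim=1)
    kd = (
        F.kl_div(F.log_softmax(logits_t / temperature, dim=1), soft_a_on_t, reduction="batchmean")
        + F.kl_div(F.log_softmax(logits_a / temperature, dim=1), soft_t_on_a, reduction="batchmean")
    )
    return sup + alpha * kd

# toy usage: assistant (e.g. MRI) and target (e.g. CT) slices with label maps
x_a, x_t = torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64)
y_a = torch.randint(0, NUM_CLASSES, (2, 64, 64))
y_t = torch.randint(0, NUM_CLASSES, (2, 64, 64))
loss = mutual_kd_step(x_a, y_a, x_t, y_t)
loss.backward()
```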


2019 · Vol 19 (10) · pp. 117b
Author(s): Douglas A Addleman, Yingzi Xiong, Gordon E Legge

2019 · Vol 38 (4) · pp. 1016-1025
Author(s): Yuankai Huo, Zhoubing Xu, Hyeonsoo Moon, Shunxing Bao, Albert Assad, ...

Author(s): Derek Brown

Sensory substitution devices (SSDs) are most familiar from their use with subjects who are deficient in a target modality (e.g. congenitally blind subjects), but there is no doubt that the use and potential value of SSDs extend to persons without such deficits. Recent work by Amedi and his team (in particular Levy-Tzedek et al. 2012) has begun to explore this. Their idea is that SSDs may facilitate behavioural transference (BT) across sense modalities. In this case, a motor skill learned through visual perception might be subsequently employed in response to auditory perception, using an SSD as a mediator. They infer from the existence of such BT that the learned skill is amodally represented. After a brief overview I identify ways to more fully test for BT within this experimental paradigm and argue that their conclusion about amodal representation is premature. Additionally, I argue that their preferred SSD (EyeMusic) is of limited value for the project. While my remarks are critical, my intention is to be constructive, particularly in light of the fact that Levy-Tzedek et al. (2012) is, I believe, the first output from Amedi's lab concerning this line of research.


2018
Author(s): Rémy Masson, Aurélie Bidet-Caulet

The P3a observed after novel events is an event-related potential comprising an early fronto-central phase and a late fronto-parietal phase. It has classically been considered to reflect the attentional processing of distracting stimuli. However, novel sounds can lead to behavioral facilitation as much as to behavioral distraction. This illustrates the duality of the orienting response, which includes both an attentional and an arousal component. Using a paradigm with visual or auditory targets to detect and irrelevant, unexpected distracting sounds to ignore, we showed that the facilitation effect of distracting sounds is independent of the target modality and persists for more than 1500 ms. These results confirm that the behavioral facilitation observed after distracting sounds is related to an increase in unspecific phasic arousal on top of the attentional capture. Moreover, the amplitude of the early phase of the P3a to distracting sounds positively correlated with subjective arousal ratings, contrary to other event-related potentials. We propose that the fronto-central early phase of the P3a indexes the arousing properties of distracting sounds and is linked to the arousal component of the orienting response. Finally, we discuss the relevance of the P3a as a marker of distraction.


2018 · Vol 119 (5) · pp. 1879-1888
Author(s): Yang Liu, Brandon M. Sexton, Hannah J. Block

When people match an unseen hand to a visual or proprioceptive target, they make both variable and systematic (bias) errors. Variance is a well-established factor in behavior, but the origin and implications of bias, and its connection to variance, are poorly understood. Eighty healthy adults matched their unseen right index finger to proprioceptive (left index finger) and visual targets with no performance feedback. We asked whether matching bias was related to target modality and to the magnitude or spatial properties of matching variance. Bias errors were affected by target modality, with subjects estimating visual and proprioceptive targets 20 mm apart. We found three pieces of evidence to suggest a connection between bias and variable errors: 1) for most subjects, the target modality that yielded greater spatial bias was also estimated with greater variance; 2) magnitudes of matching bias and variance were somewhat correlated for each target modality (R = 0.24 and 0.29); and 3) bias direction was closely related to the angle of the major axis of the confidence ellipse (R = 0.60 and 0.63). However, whereas variance was significantly correlated with visuo-proprioceptive weighting as predicted by multisensory integration theory (R = −0.29 and 0.27 for visual and proprioceptive variance, respectively), bias was not. In a second session, subjects improved their matching variance, but not bias, for both target modalities, indicating a difference in stability. Taken together, these results suggest bias and variance are related only in some respects, which should be considered in the study of multisensory behavior.

NEW & NOTEWORTHY People matching visual or proprioceptive targets make both variable and systematic (bias) errors. Multisensory integration is thought to minimize variance, but if the less variable modality has more bias, behavioral accuracy will decrease. Our data set suggests this is unusual. However, although bias and variable errors were spatially related, they differed in both stability and correlation with multisensory weighting. This suggests the bias-variance relationship is not straightforward, and both should be considered in multisensory behavior.
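The quantities this abstract relates, systematic error (bias), variable error, and the orientation of the confidence ellipse's major axis, can all be computed directly from 2-D matching endpoints. A small sketch of those computations follows; the simulated endpoints, covariance values, and variable names are assumptions for illustration and are not the study's data or analysis code.

```python
# Sketch: from 2-D matching endpoints, compute the bias vector, the total
# variable error, and the angle of the confidence ellipse's major axis.
import numpy as np

rng = np.random.default_rng(0)
target = np.array([0.0, 0.0])                      # target location (mm)
endpoints = rng.multivariate_normal(               # simulated matches with a built-in bias
    mean=target + np.array([15.0, 5.0]),
    cov=[[40.0, 25.0], [25.0, 30.0]],
    size=200,
)

bias_vec = endpoints.mean(axis=0) - target         # systematic error vector
bias_mag = np.linalg.norm(bias_vec)                # bias magnitude (mm)
bias_dir = np.degrees(np.arctan2(bias_vec[1], bias_vec[0]))

cov = np.cov(endpoints.T)                          # spatial covariance of the matches
eigvals, eigvecs = np.linalg.eigh(cov)
major = eigvecs[:, np.argmax(eigvals)]             # major axis of the confidence ellipse
major_angle = np.degrees(np.arctan2(major[1], major[0]))

print(f"bias: {bias_mag:.1f} mm at {bias_dir:.0f} deg")
print(f"variable error (total variance): {eigvals.sum():.1f} mm^2")
print(f"ellipse major-axis angle: {major_angle:.0f} deg")
```

Comparing `bias_dir` with `major_angle` across subjects is one way to check the kind of spatial relationship between bias direction and ellipse orientation that the abstract reports.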


Author(s): Yuankai Huo, Zhoubing Xu, Shunxing Bao, Albert Assad, Richard G. Abramson, ...
