Learning with Noise: Mask-Guided Attention Model for Weakly Supervised Nuclei Segmentation

2021, pp. 461-470
Author(s): Ruoyu Guo, Maurice Pagnucco, Yang Song

2021
Author(s): Arghavan Rezvani, Mahtab Bigverdi, Mohammad Hossein Rohban

Abstract: With the advent of high-throughput assays, a large number of biological experiments can be carried out. Image-based assays are among the most accessible and inexpensive technologies for this purpose. Indeed, these assays have proved effective in characterizing unknown functions of genes and small molecules. Image analysis pipelines play a pivotal role in translating the raw images captured in such assays into useful and compact representations, also known as measurements. CellProfiler is a popular and commonly used tool for this purpose, providing readily available modules for cell/nucleus segmentation and for computing various measurements, or features, for each cell/nucleus. Single-cell features are then aggregated for each treatment replicate to form treatment “profiles.” However, there may be several sources of error in the CellProfiler quantification pipeline that affect the downstream analysis performed on the profiles. In this work, we examined various preprocessing approaches to improve the profiles. We consider identification of drug mechanisms of action as the downstream task for evaluating such preprocessing approaches. Our enhancement steps mainly consist of data cleaning, cell-level outlier detection, toxic drug detection, and regressing out the cell area from all other features, as many of them are strongly affected by the cell area. We also examined unsupervised and weakly supervised deep learning-based methods to reduce the feature dimensionality, and finally we suggest possible avenues for future research.
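For illustration, the sketch below shows one way the area-regression step could be implemented on exported per-cell features; the feature matrix, area column, and the `regress_out_area` helper are hypothetical and not taken from the authors' pipeline.

```python
# Minimal sketch (not the authors' exact pipeline): remove the part of every
# per-cell feature that is linearly explained by the cell area, keeping the
# residuals as "area-corrected" features. `features` and `area` are assumed
# to be per-cell measurements exported from CellProfiler, as an
# (n_cells, n_features) matrix and a per-cell area vector.
import numpy as np
from sklearn.linear_model import LinearRegression

def regress_out_area(features: np.ndarray, area: np.ndarray) -> np.ndarray:
    """Return per-feature residuals after a linear fit on cell area."""
    area = area.reshape(-1, 1)                      # single regressor, shape (n_cells, 1)
    model = LinearRegression().fit(area, features)  # one linear fit per feature column
    return features - model.predict(area)           # subtract the area-explained part

# Hypothetical usage with synthetic per-cell measurements.
rng = np.random.default_rng(0)
area = rng.uniform(50.0, 500.0, size=200)
features = np.column_stack([
    2.0 * area + rng.normal(0.0, 5.0, 200),  # strongly area-driven feature
    rng.normal(0.0, 1.0, 200),               # area-independent feature
])
corrected = regress_out_area(features, area)
```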


2020, Vol 39 (11), pp. 3655-3666
Author(s): Hui Qu, Pengxiang Wu, Qiaoying Huang, Jingru Yi, Zhennan Yan, ...

Author(s): Noa Cahan, Edith M. Marom, Shelly Soffer, Yiftach Barash, Eli Konen, ...

2020, Vol 34 (07), pp. 11077-11084
Author(s): Yung-Han Huang, Kuang-Jui Hsu, Shyh-Kang Jeng, Yen-Yu Lin

Video re-localization aims to localize a sub-sequence, called the target segment, in an untrimmed reference video that is similar to a given query video. In this work, we propose an attention-based model to accomplish this task in a weakly supervised setting. Namely, we derive our CNN-based model without using the annotated locations of the target segments in reference videos. Our model contains three modules. First, it employs a pre-trained C3D network for feature extraction. Second, we design an attention mechanism to extract multiscale temporal features, which are then used to estimate the similarity between the query video and a reference video. Third, a localization layer detects where the target segment is in the reference video by determining whether each frame in the reference video is consistent with the query video. The resulting CNN model is derived based on the proposed co-attention loss, which discriminatively separates the target segment from the rest of the reference video. This loss maximizes the similarity between the query video and the target segment while minimizing the similarity between the target segment and the rest of the reference video. Our model can also be adapted to fully supervised re-localization. Our method is evaluated on a public dataset and achieves state-of-the-art performance under both weakly supervised and fully supervised settings.
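As a rough illustration only, the sketch below expresses one plausible loss with the stated behavior (high query-target similarity, low similarity on the rest of the reference video); the paper's actual co-attention loss, the per-frame `sim` scores, and the soft `mask` prediction used here are assumptions, not the authors' formulation.

```python
# Rough sketch of a contrastive-style loss in the spirit of the co-attention
# loss described above; the exact formulation in the paper may differ.
# `sim` holds assumed per-frame similarities between reference frames and the
# query video, and `mask` is a soft prediction of target-segment membership.
import torch

def co_attention_style_loss(sim: torch.Tensor, mask: torch.Tensor,
                            eps: float = 1e-8) -> torch.Tensor:
    inside = (sim * mask).sum() / (mask.sum() + eps)                   # similarity on predicted target
    outside = (sim * (1.0 - mask)).sum() / ((1.0 - mask).sum() + eps)  # similarity on the rest
    return outside - inside  # minimizing pushes target similarity up, rest down

# Hypothetical usage for a 120-frame reference video.
sim = torch.rand(120, requires_grad=True)
mask = torch.sigmoid(torch.randn(120))
loss = co_attention_style_loss(sim, mask)
loss.backward()
```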

