Recognizing Realistic Action Using Contextual Feature Group

2012 ◽  
pp. 459-469 ◽  
Author(s):  
Yituo Ye ◽  
Lei Qin ◽  
Zhongwei Cheng ◽  
Qingming Huang
Cancers ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 617
Author(s):  
Guoqing Bao ◽  
Xiuying Wang ◽  
Ran Xu ◽  
Christina Loh ◽  
Oreoluwa Daniel Adeyinka ◽  
...  

We have developed a platform, termed PathoFusion, which is an integrated system for marking, training, and recognition of pathological features in whole-slide tissue sections. The platform uses a bifocal convolutional neural network (BCNN), which is designed to simultaneously capture both index and contextual feature information from shorter and longer image tiles, respectively. This is analogous to how a microscopist in pathology works, identifying a cancerous morphological feature in its tissue context using first a narrow and then a wider focus, hence bifocal. Adjacent tissue sections obtained from glioblastoma cases were processed for hematoxylin and eosin (H&E) and immunohistochemical (CD276) staining. Image tiles cropped from the digitized images, based on markings made by a consultant neuropathologist, were used to train the BCNN. PathoFusion demonstrated its ability to autonomously recognize malignant neuropathological features and simultaneously map immunohistochemical data. Our experiments show that PathoFusion achieved areas under the curve (AUCs) of 0.985 ± 0.011 and 0.988 ± 0.001 in patch-level recognition of six typical pathomorphological features and in detection of associated immunoreactivity, respectively. On this basis, the system further correlated CD276 immunoreactivity with abnormal tumor vasculature. The corresponding feature distributions and overlaps were visualized as heatmaps, permitting high-resolution qualitative as well as quantitative morphological analyses of entire histological slides. Recognition of additional user-defined pathomorphological features can be added to the system and included in future tissue analyses. Integration of PathoFusion into the day-to-day service workflow of a (neuro)pathology department is a goal. The software code for PathoFusion is publicly available.
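
A minimal PyTorch sketch of the two-branch, bifocal idea described above: one branch sees a narrow "index" tile and the other a wider "contextual" tile centred on the same location, and their features are fused before classification. Tile sizes, layer widths and the six-class output are illustrative assumptions, not the published PathoFusion/BCNN configuration.

import torch
import torch.nn as nn

class BifocalCNN(nn.Module):
    """Two CNN branches: a narrow 'index' tile and a wider 'contextual' tile."""
    def __init__(self, num_classes: int = 6):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
        self.index_branch = branch()      # narrow focus: shorter image tile
        self.context_branch = branch()    # wide focus: longer image tile
        self.classifier = nn.Linear(64 * 2, num_classes)

    def forward(self, index_tile, context_tile):
        # Fuse index and contextual features before classifying the patch.
        fused = torch.cat([self.index_branch(index_tile),
                           self.context_branch(context_tile)], dim=1)
        return self.classifier(fused)

# Usage with hypothetical 64x64 index tiles and 128x128 contextual tiles:
logits = BifocalCNN()(torch.randn(1, 3, 64, 64), torch.randn(1, 3, 128, 128))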


2021 ◽  
Vol 336 ◽  
pp. 05008
Author(s):  
Cheng Wang ◽  
Sirui Huang ◽  
Ya Zhou

The accurate exploration of the sentiment information in comments on Massive Open Online Courses (MOOCs) plays an important role in improving course quality and promoting the sustainable development of MOOC platforms. At present, most sentiment analyses of MOOC course comments are coarse-grained studies, while relatively little attention is paid to fine-grained issues such as polysemous words and familiar words that have acquired new meanings, which lowers the accuracy of sentiment analysis models in identifying the genuine sentiment tendency of course comments. For this reason, this paper proposes an ALBERT-BiLSTM model for sentiment analysis of MOOC course comments. Firstly, ALBERT is used to dynamically generate word vectors. Secondly, contextual feature vectors are obtained from the forward and backward passes of a BiLSTM, combined with an attention mechanism that weights the different words in a sentence. Finally, the BiLSTM output vectors are fed into a Softmax layer to classify sentiment and predict sentiment tendency. Experiments on a real-world dataset of MOOC course comments show that the proposed model achieves higher accuracy than existing models.
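
A minimal sketch of the ALBERT → BiLSTM → attention → Softmax pipeline described above, assuming the Hugging Face transformers library and the albert-base-v2 checkpoint; the hidden size and the three-way sentiment output are illustrative assumptions rather than the paper's settings.

import torch
import torch.nn as nn
from transformers import AlbertModel, AlbertTokenizerFast

class AlbertBiLSTM(nn.Module):
    def __init__(self, hidden: int = 128, num_classes: int = 3):
        super().__init__()
        self.albert = AlbertModel.from_pretrained("albert-base-v2")  # dynamic word vectors
        self.bilstm = nn.LSTM(self.albert.config.hidden_size, hidden,
                              batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)       # per-token attention scores
        self.classifier = nn.Linear(2 * hidden, num_classes)

    def forward(self, input_ids, attention_mask):
        emb = self.albert(input_ids=input_ids,
                          attention_mask=attention_mask).last_hidden_state
        out, _ = self.bilstm(emb)                  # forward + backward context
        scores = self.attn(out).masked_fill(attention_mask.unsqueeze(-1) == 0, -1e9)
        weights = torch.softmax(scores, dim=1)     # attention weights over tokens
        pooled = (weights * out).sum(dim=1)        # weighted sentence vector
        return self.classifier(pooled)             # logits for Softmax sentiment

# Usage on a single (hypothetical) course comment:
tok = AlbertTokenizerFast.from_pretrained("albert-base-v2")
batch = tok(["This course is clear and well paced."], return_tensors="pt")
logits = AlbertBiLSTM()(batch["input_ids"], batch["attention_mask"])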


2012 ◽  
Vol 45 (1) ◽  
pp. 434-446 ◽  
Author(s):  
Xiaojun Chen ◽  
Yunming Ye ◽  
Xiaofei Xu ◽  
Joshua Zhexue Huang

Author(s):  
Emanuele Frontoni ◽  
Adriano Mancini ◽  
Primo Zingaretti

Finding correct correspondences between two images is a central problem in tasks such as appearance-based robot localization and content-based image retrieval. Local feature matching has become a commonly used method to compare images, even though it is highly probable that at least some of the detected matchings/correspondences are incorrect. In this paper, we describe a novel approach to local feature matching, named Feature Group Matching (FGM), which selects stable features and obtains a more reliable similarity value between two images. The proposed technique is demonstrated to be invariant to translation, rotation and scaling. Experimental evaluation was performed on large and heterogeneous image datasets using SIFT and SURF, the current state-of-the-art feature extractors. Results show that FGM avoids almost 95% of incorrect matchings, reduces visual aliasing (the number of images considered similar) and increases both robot localization and image retrieval accuracy by an average of 13%.
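
The FGM grouping step itself is not detailed in the abstract, so the sketch below shows only conventional local feature matching with OpenCV's SIFT, using Lowe's ratio test plus a RANSAC geometric check as a simple stand-in filter for retaining stable matches; function and variable names are illustrative.

import cv2
import numpy as np

def match_features(img1_path: str, img2_path: str):
    img1 = cv2.imread(img1_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(img2_path, cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()                       # SIFT keypoints + descriptors
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Two nearest neighbours per descriptor, then Lowe's ratio test.
    knn = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
    good = [p[0] for p in knn if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]

    # Keep only matches consistent with a single homography (RANSAC).
    if len(good) >= 4:
        src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        _, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        if mask is not None:
            good = [m for m, keep in zip(good, mask.ravel()) if keep]

    return good  # the number of surviving matches can serve as an image-similarity score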

