feature mapping
Recently Published Documents


TOTAL DOCUMENTS: 297 (FIVE YEARS: 97)
H-INDEX: 18 (FIVE YEARS: 4)

Author(s):  
Tianlei Ma ◽  
Zhen Yang ◽  
Jiaqi Wang ◽  
Siyuan Sun ◽  
Xiangyang Ren ◽  
...  

2021 ◽  
Vol 57 (4) ◽  
pp. 573-617
Author(s):  
Rafał Jurczyk

Abstract Old English se-demonstratives (which usually trace less salient referents) and personal pronouns (which usually continue previous topics) have frequently been taken to share a common pronominal property (e.g. Breban 2012; Epstein 2011; van Gelderen 2013, 2011; Kiparsky 2002; Howe 1996). This assumption holds despite their non-overlapping distribution, which remains a puzzle (cf. van Gelderen 2013; Los and van Kemenade 2018). In this paper, we argue that this distributional discrepancy stems from the lack of syntactic and formal affinities between the two forms. Se-demonstratives are either dependent (introducing full DPs) or independent (usually labeled "pronominal"), but both are instances of the same lexical item. As a D-category, they necessarily license their NP complements, whether lexical or empty, thereby entering into tight formal and semantic relations with their nominal antecedents. In doing so, they rely on their gender- and case-features, both of which carry semantic import and map onto the specific-reference [+ref/spec] property in the semantic module(s). Being bundles of case- and/or φ-features, pronominals lack the complex syntactic structure of se-demonstratives; their formal and semantic relations with nominal antecedents are thus less intimate, holding only by virtue of interpretable person- and number-features.


Sensors ◽  
2021 ◽  
Vol 21 (23) ◽  
pp. 7941
Author(s):  
Seemab Khan ◽  
Muhammad Attique Khan ◽  
Majed Alhaisoni ◽  
Usman Tariq ◽  
Hwan-Seung Yong ◽  
...  

Human action recognition (HAR) has gained significant attention recently, as it can be adopted for smart surveillance systems in multimedia. However, HAR is a challenging task because of the variety of human actions in daily life. Various solutions based on computer vision (CV) have been proposed in the literature, but they have not proved successful for the large video sequences that must be processed in surveillance systems. The problem is exacerbated in the presence of multi-view cameras. Recently, deep learning (DL)-based systems have shown significant success for HAR, even with multi-view camera systems. In this research work, a DL-based design is proposed for HAR. The proposed design consists of multiple steps, including feature mapping, feature fusion, and feature selection. For the initial feature mapping step, two pre-trained models, DenseNet201 and InceptionV3, are considered. The extracted deep features are then fused using the Serial based Extended (SbE) approach, and the best features are selected using Kurtosis-controlled Weighted KNN. The selected features are classified using several supervised learning algorithms. To show the efficacy of the proposed design, we used several datasets: KTH, IXMAS, WVU, and Hollywood. Experimental results showed that the proposed design achieved accuracies of 99.3%, 97.4%, 99.8%, and 99.9%, respectively, on these datasets. Furthermore, the feature selection step performed better in terms of computational time than the state of the art.
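The paper itself does not include code; the Python sketch below only illustrates the kind of fusion-and-selection pipeline the abstract describes. The plain concatenation standing in for SbE fusion, the kurtosis ranking standing in for Kurtosis-controlled Weighted KNN selection, the random placeholder feature matrices, and the function names are all assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.stats import kurtosis
from sklearn.neighbors import KNeighborsClassifier

def serial_fuse(feat_a, feat_b):
    """Fuse two deep feature matrices (n_samples x d1, n_samples x d2)
    by concatenating them along the feature axis (stand-in for SbE fusion)."""
    return np.concatenate([feat_a, feat_b], axis=1)

def kurtosis_select(features, keep_ratio=0.25):
    """Rank feature columns by kurtosis and keep the highest-scoring fraction
    (a simplified proxy for kurtosis-controlled selection)."""
    scores = kurtosis(features, axis=0)
    keep = np.argsort(scores)[::-1][: int(features.shape[1] * keep_ratio)]
    return features[:, keep], keep

# Hypothetical usage with pre-extracted DenseNet201 / InceptionV3 features.
rng = np.random.default_rng(0)
densenet_feats = rng.normal(size=(200, 1920))   # placeholder feature matrices
inception_feats = rng.normal(size=(200, 2048))
labels = rng.integers(0, 6, size=200)           # placeholder action labels

fused = serial_fuse(densenet_feats, inception_feats)
selected, idx = kurtosis_select(fused)

clf = KNeighborsClassifier(n_neighbors=5, weights="distance")
clf.fit(selected, labels)
print("train accuracy:", clf.score(selected, labels))
```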


BMC Genomics ◽  
2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Qi Cheng ◽  
Bo He ◽  
Chengkui Zhao ◽  
Hongyuan Bi ◽  
Duojiao Chen ◽  
...  

Abstract
Background: Microexons are a particular kind of exon, less than 30 nucleotides in length. More than 60% of annotated human microexons show high levels of sequence conservation, suggesting potential functions. There is thus a need for a method that predicts functional microexons.
Results: Given the lack of publicly available functional labels for microexons, we employed a transfer learning technique called Transfer Component Analysis (TCA) to transfer the knowledge obtained from feature mapping to the prediction of functional microexons. Microindels were chosen to provide the reference knowledge because of their similarities to microexons. A Support Vector Machine (SVM) was then trained in the newly built feature space as a classification model for functional microindels, and the trained model was used to predict functional microexons. We also built a tool based on this model to predict other functional microexons and applied it to 19 functional microexons reported in the literature; it correctly predicted 16 of the 19, an accuracy greater than 80%.
Conclusions: In this study, we proposed a method for predicting functional microexons and applied it, with the predictive results being largely consistent with records in the literature.
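As an illustration of the TCA-plus-SVM idea described above, the following is a minimal sketch using a linear kernel, random placeholder feature matrices, and arbitrary parameter choices; it does not reproduce the authors' actual feature mapping or tool.

```python
import numpy as np
from scipy.linalg import eig
from sklearn.svm import SVC

def tca_transform(X_src, X_tgt, n_components=10, mu=1.0):
    """Minimal linear-kernel Transfer Component Analysis (TCA).

    Learns a shared feature space in which the source domain (here: microindel
    features) and target domain (microexon features) are brought closer,
    following the standard TCA eigen-problem."""
    X = np.vstack([X_src, X_tgt])
    ns, nt = len(X_src), len(X_tgt)
    n = ns + nt

    K = X @ X.T                                   # linear kernel matrix
    e = np.vstack([np.full((ns, 1), 1.0 / ns),
                   np.full((nt, 1), -1.0 / nt)])
    L = e @ e.T                                   # MMD coefficient matrix
    H = np.eye(n) - np.ones((n, n)) / n           # centering matrix

    # Leading eigenvectors of (K L K + mu*I)^-1 K H K give the transfer components.
    A = np.linalg.solve(K @ L @ K + mu * np.eye(n), K @ H @ K)
    vals, vecs = eig(A)
    order = np.argsort(-vals.real)[:n_components]
    W = vecs[:, order].real

    Z = K @ W                                     # embedded samples
    return Z[:ns], Z[ns:]

# Hypothetical usage: source = labelled microindels, target = unlabelled microexons.
rng = np.random.default_rng(1)
X_indel = rng.normal(size=(120, 40))              # placeholder microindel features
y_indel = rng.integers(0, 2, size=120)            # 1 = functional, 0 = not
X_exon = rng.normal(size=(30, 40))                # placeholder microexon features

Z_indel, Z_exon = tca_transform(X_indel, X_exon, n_components=8)
clf = SVC(kernel="rbf").fit(Z_indel, y_indel)
print(clf.predict(Z_exon))                        # predicted functional microexons
```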


Author(s):  
Chuchen Li ◽  
Huafeng Liu

Abstract Recent medical image segmentation methods rely heavily on large-scale training data and high-quality annotations. However, these resources are hard to obtain because of the limitations of medical images and the scarcity of professional annotators. How to exploit limited annotations while maintaining performance is an essential yet challenging problem. In this paper, we tackle this problem in a self-learning manner by proposing a Generative Adversarial Semi-supervised Network (GASNet). We use the limited annotated images as the main supervision signal and the unlabeled images as extra auxiliary information to improve performance. More specifically, we modulate a segmentation network as a generator to produce pseudo labels for unlabeled images. To make the generator robust, we train an uncertainty discriminator with generative adversarial learning to determine the reliability of the pseudo labels. To further ensure dependability, we apply a feature mapping loss to enforce statistical distribution consistency between the generated labels and the real labels. The verified pseudo labels are then used to optimize the generator in a self-learning manner. We validate the effectiveness of the proposed method on the right ventricle dataset, the Sunnybrook dataset, STACOM, the ISIC dataset, and the Kaggle lung dataset, obtaining Dice coefficients of 0.8402 to 0.9121, 0.8103 to 0.9094, 0.9435 to 0.9724, 0.8635 to 0.886, and 0.9697 to 0.9885, respectively, with 1/8 to 1/2 of the labels densely annotated. The results are up to 28.6 points higher than the corresponding fully supervised baseline.
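The abstract does not spell out the feature mapping loss; the PyTorch sketch below shows one common feature-matching formulation (matching the mean intermediate discriminator activations of real and generated labels). The tiny discriminator trunk, tensor shapes, and placeholder masks are assumptions rather than GASNet's actual architecture.

```python
import torch
import torch.nn as nn

class FeatureMappingLoss(nn.Module):
    """Sketch of a feature-matching style loss: penalise the distance between
    the mean intermediate discriminator features of real labels and of
    generated pseudo labels, so their statistics stay consistent."""
    def forward(self, feat_real, feat_fake):
        # feat_*: (batch, channels, H, W) activations from the discriminator.
        return torch.mean((feat_real.mean(dim=0) - feat_fake.mean(dim=0)) ** 2)

# Hypothetical usage with a small discriminator trunk exposing intermediate features.
discriminator_trunk = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
)
fm_loss = FeatureMappingLoss()

real_labels = torch.rand(4, 1, 64, 64)     # placeholder ground-truth masks
pseudo_labels = torch.rand(4, 1, 64, 64)   # placeholder generator outputs

loss = fm_loss(discriminator_trunk(real_labels),
               discriminator_trunk(pseudo_labels))
loss.backward()   # in training this gradient would update generator/discriminator
```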


Author(s):  
Yuan Zhang

Abstract In this research, we explored a multi-scale feature mapping method to pre-screen radiographs quickly and accurately for the aided diagnosis of pneumoconiosis staging. We used an open dataset and a self-collected dataset as research datasets. We proposed a multi-scale feature mapping model, based on deep learning feature extraction, for detecting pulmonary fibrosis, together with a discrimination method for pneumoconiosis staging. Diagnostic accuracy was evaluated using the area under the receiver operating characteristic (ROC) curve (AUC). The AUC value of our model was 0.84, the best performance compared with previous work on these datasets. The diagnostic results indicated that our method was highly consistent with the judgments of clinical experts on real patients. Furthermore, the AUC values obtained for categories I–IV on the testing dataset showed that categories I (AUC = 0.86) and IV (AUC = 0.82) achieved the best performance, reaching the level of clinician categorization. Our research could be applied to the pre-screening and diagnosis of pneumoconiosis in the clinic.
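As a rough illustration of multi-scale feature mapping, the sketch below pools feature maps from several depths of a pre-trained backbone into a single descriptor per radiograph. The ResNet-18 backbone, the chosen stages, and the pooling are assumptions, not the model proposed in the paper.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

def multiscale_features(image_batch):
    """Pool feature maps taken from several depths of a pre-trained backbone
    and concatenate them into one multi-scale descriptor per radiograph."""
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    backbone.eval()
    pooled = []
    with torch.no_grad():
        x = backbone.conv1(image_batch)
        x = backbone.maxpool(backbone.relu(backbone.bn1(x)))
        for stage in (backbone.layer1, backbone.layer2, backbone.layer3, backbone.layer4):
            x = stage(x)
            # Global-average-pool each scale's feature map to a fixed-length vector.
            pooled.append(F.adaptive_avg_pool2d(x, 1).flatten(1))
    return torch.cat(pooled, dim=1)

# Hypothetical usage on a batch of chest radiographs resized to 224x224 RGB.
images = torch.rand(2, 3, 224, 224)
features = multiscale_features(images)
print(features.shape)   # (2, 64 + 128 + 256 + 512) = (2, 960)
```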


2021 ◽  
Author(s):  
Kuldeep Chaurasia ◽  
Mayank Dixit ◽  
Ayush Goyal ◽  
Uthej K. ◽  
Adhithyaram S. ◽  
...  
