feature space
Recently Published Documents


TOTAL DOCUMENTS: 2479 (FIVE YEARS: 933)
H-INDEX: 52 (FIVE YEARS: 11)

Author(s):  
Aibo Guo ◽  
Xinyi Li ◽  
Ning Pang ◽  
Xiang Zhao

A community Q&A forum is a special type of social media that provides a platform where participants can both raise questions and answer them, facilitating online information sharing. Currently, community Q&A forums in professional domains have attracted a large number of users by offering professional knowledge. To support information access and save users the effort of raising new questions, these forums usually provide a question retrieval function, which retrieves existing questions (and their answers) similar to a user’s query. However, it can be difficult for community Q&A forums to cover all domains, especially those that have emerged recently, which offer little labeled data yet differ greatly from existing domains. We refer to this scenario as cross-domain question retrieval. To handle its unique challenges, we design a model based on adversarial training, namely X-QR, which consists of two modules: a domain discriminator and a sentence matcher. The domain discriminator aims at aligning the source and target data distributions and unifying the feature space through domain-adversarial training. With the assistance of the domain discriminator, the sentence matcher is able to learn domain-consistent knowledge for the final matching prediction. To the best of our knowledge, this work is among the first to investigate the domain adaptation problem of sentence matching for community Q&A forum question retrieval. The experiment results suggest that the proposed X-QR model offers better performance than conventional sentence matching methods in accomplishing cross-domain community Q&A tasks.
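The core mechanism here is domain-adversarial training: a gradient reversal layer feeds the shared encoder's features to a domain discriminator so that the encoder learns domain-invariant representations while the sentence matcher is trained on labeled source pairs. Below is a minimal PyTorch sketch of that pattern; the encoder, matcher, and discriminator are illustrative stand-ins, not the authors' X-QR architecture.

```python
# Minimal domain-adversarial training sketch (not the X-QR architecture itself).
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; reverses (and scales) gradients on backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class DomainAdversarialMatcher(nn.Module):
    def __init__(self, vocab_size=30000, dim=128):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, dim)          # shared sentence encoder
        self.matcher = nn.Sequential(                          # predicts question similarity
            nn.Linear(4 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))
        self.discriminator = nn.Sequential(                    # predicts source vs. target domain
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 2))

    def forward(self, query_ids, cand_ids, lambd=1.0):
        q = self.embed(query_ids)
        c = self.embed(cand_ids)
        pair = torch.cat([q, c, q - c, q * c], dim=-1)         # standard matching features
        match_logit = self.matcher(pair).squeeze(-1)
        domain_logit = self.discriminator(GradReverse.apply(q, lambd))
        return match_logit, domain_logit

# Training would minimise the matching loss on labeled source pairs while the
# reversed gradient pushes the shared encoder toward domain-invariant features.
```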


2022 ◽  
Vol 16 (4) ◽  
pp. 1-18
Author(s):  
Min-Ling Zhang ◽  
Jing-Han Wu ◽  
Wei-Xuan Bao

As an emerging weakly supervised learning framework, partial label learning considers inaccurate supervision where each training example is associated with multiple candidate labels, among which only one is valid. In this article, a first attempt toward employing dimensionality reduction to improve the generalization performance of partial label learning systems is investigated. Specifically, the popular linear discriminant analysis (LDA) technique is endowed with the ability to deal with partial label training examples. To tackle the challenge of unknown ground-truth labeling information, a novel learning approach named Delin is proposed, which alternates between LDA dimensionality reduction and candidate label disambiguation based on estimated labeling confidences over candidate labels. On one hand, the (kernelized) projection matrix of LDA is optimized by utilizing disambiguation-guided labeling confidences. On the other hand, the labeling confidences are disambiguated by resorting to kNN aggregation in the LDA-induced feature space. Extensive experiments over a broad range of partial label datasets clearly validate the effectiveness of Delin in improving the generalization performance of well-established partial label learning algorithms.
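The alternating procedure described above can be sketched as follows: fit LDA on the currently disambiguated labels, then re-estimate labeling confidences by kNN aggregation in the LDA-induced space, restricted to each example's candidate set. This is a simplified sketch (no confidence-weighted LDA, kernelization, or convergence test), not the authors' exact Delin algorithm.

```python
# Rough sketch of alternating LDA projection and kNN-based label disambiguation.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import NearestNeighbors

def delin_sketch(X, candidate_masks, n_iters=5, k=10):
    """X: (n, d) features; candidate_masks: (n, q) 0/1 candidate-label matrix."""
    conf = candidate_masks / candidate_masks.sum(axis=1, keepdims=True)  # uniform start
    for _ in range(n_iters):
        labels = conf.argmax(axis=1)                       # current disambiguation
        Z = LinearDiscriminantAnalysis().fit_transform(X, labels)  # LDA-induced feature space
        nbrs = NearestNeighbors(n_neighbors=k + 1).fit(Z)
        _, idx = nbrs.kneighbors(Z)
        votes = conf[idx[:, 1:]].sum(axis=1)               # aggregate neighbors' confidences
        votes *= candidate_masks                           # only candidate labels are valid
        conf = votes / np.maximum(votes.sum(axis=1, keepdims=True), 1e-12)
    return conf.argmax(axis=1), Z
```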


Author(s):  
Raheem Sarwar ◽  
Saeed-Ul Hassan

The authorship identification task aims at identifying the original author of an anonymous text sample from a set of candidate authors. It has several application domains, such as digital text forensics and information retrieval, and these domains are not limited to a specific language. However, most authorship identification studies have focused on English, and limited attention has been paid to Urdu. Moreover, existing Urdu authorship identification solutions lose accuracy as the number of training samples per candidate author decreases and as the number of candidate authors increases; consequently, these solutions are inapplicable to real-world cases. In addition, due to the unavailability of reliable POS taggers and sentence segmenters, all existing authorship identification studies on Urdu text are limited to word n-gram features only. To overcome these limitations, we formulate a stylometric feature space that is not limited to word n-gram features. Based on this feature space, we use an authorship identification solution that transforms each text sample into a point set, retrieves candidate text samples, and relies on a nearest neighbors classifier to predict the original author of the anonymous text sample. To evaluate our solution, we create a significantly larger corpus than those used in existing studies and conduct several experimental studies, which show that our solution overcomes the limitations of existing studies and achieves an accuracy of 94.03%, higher than all previous authorship identification works.
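As a rough illustration of stylometry-plus-nearest-neighbor attribution, the sketch below computes a handful of surface features per document and classifies with k nearest neighbors. The feature set and the collapsing of each text into a single vector are simplifications of the paper's point-set formulation, chosen only to show the overall pipeline.

```python
# Hedged sketch: simple stylometric features + nearest-neighbor attribution.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def stylometric_vector(text):
    words = text.split()
    sentences = [s for s in text.replace('!', '.').replace('?', '.').split('.') if s.strip()]
    word_lens = np.array([len(w) for w in words]) if words else np.array([0.0])
    return np.array([
        word_lens.mean(),                                        # average word length
        word_lens.std(),                                         # word-length variability
        len(words) / max(len(sentences), 1),                     # average sentence length
        len(set(words)) / max(len(words), 1),                    # type-token ratio
        sum(text.count(p) for p in ',;:') / max(len(words), 1),  # punctuation rate
    ])

def fit_attributor(train_texts, train_authors, k=3):
    X = np.vstack([stylometric_vector(t) for t in train_texts])
    return KNeighborsClassifier(n_neighbors=k).fit(X, train_authors)

# clf = fit_attributor(corpus_texts, corpus_authors)        # hypothetical corpus
# clf.predict([stylometric_vector(anonymous_text)])         # predicted author
```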


2022 ◽  
Vol 15 (04) ◽  
Author(s):  
Shaoqi Yu ◽  
Xiaorun Li ◽  
Shuhan Chen ◽  
Liaoying Zhao

2022 ◽  
Vol 40 (1) ◽  
pp. 1-22
Author(s):  
Lianghao Xia ◽  
Chao Huang ◽  
Yong Xu ◽  
Huance Xu ◽  
Xiang Li ◽  
...  

As deep learning techniques have expanded to real-world recommendation tasks, many deep neural network based Collaborative Filtering (CF) models have been developed to project user-item interactions into a latent feature space, based on various neural architectures such as multi-layer perceptrons, autoencoders, and graph neural networks. However, the majority of existing collaborative filtering systems are not well designed to handle missing data. In particular, in order to inject negative signals in the training phase, these solutions largely rely on sampling unobserved user-item interactions and simply treating them as negative instances, which degrades recommendation performance. To address these issues, we develop a Collaborative Reflection-Augmented Autoencoder Network (CRANet), which is capable of exploring transferable knowledge from both observed and unobserved user-item interactions. The network architecture of CRANet is formed of an integrative structure with a reflective receptor network and an information fusion autoencoder module, which endows our recommendation framework with the ability to encode users' implicit pairwise preferences on both interacted and non-interacted items. Additionally, a parametric regularization-based tied-weight scheme is designed to perform robust joint training of the two-stage CRANet model. We experimentally validate CRANet on four diverse benchmark datasets corresponding to two recommendation tasks, showing that debiasing the negative signals of user-item interactions improves performance compared to various state-of-the-art recommendation techniques. Our source code is available at https://github.com/akaxlh/CRANet.
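For orientation, the snippet below shows the generic autoencoder-based collaborative filtering pattern the paper builds on: a user's full interaction vector (observed and unobserved items) is reconstructed, so no explicit negative sampling is required. The reflective receptor network and tied-weight regularization specific to CRANet are not reproduced here; the authors' implementation is at the linked repository.

```python
# Minimal autoencoder-style collaborative filtering sketch (not CRANet itself).
import torch
import torch.nn as nn

class InteractionAutoencoder(nn.Module):
    def __init__(self, n_items, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_items, hidden), nn.Tanh())
        self.decoder = nn.Linear(hidden, n_items)

    def forward(self, interactions):              # interactions: (batch, n_items) in {0, 1}
        return self.decoder(self.encoder(interactions))

model = InteractionAutoencoder(n_items=1000)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = (torch.rand(32, 1000) < 0.05).float()         # toy implicit-feedback batch
loss = nn.functional.binary_cross_entropy_with_logits(model(x), x)
loss.backward()
opt.step()
```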


2022 ◽  
Vol 3 (1) ◽  
pp. 1-15
Author(s):  
Divya Jyothi Gaddipati ◽  
Jayanthi Sivaswamy

Early detection and treatment of glaucoma are of interest, as it is a chronic eye disease that leads to irreversible loss of vision. Existing automated systems rely largely on fundus images for glaucoma assessment due to their fast acquisition and cost-effectiveness. Optical Coherence Tomography (OCT) images provide vital and unambiguous information about nerve fiber loss and optic cup morphology, which are essential for disease assessment. However, the high cost of OCT is a deterrent to deployment in large-scale screening. In this article, we present a novel CAD solution wherein both OCT and fundus modality images are leveraged to learn a model that maps fundus features to the OCT feature space. We show how this model can subsequently be used to detect glaucoma given an image from only one modality (fundus). The proposed model has been validated extensively on four public and two private datasets. It attained an AUC/sensitivity of 0.9429/0.9044 on a diverse set of 568 images, which is superior to the figures obtained by a model trained only on fundus features. Cross-validation was also done on nearly 1,600 images drawn from a private (OD-centric) and a public (macula-centric) dataset, and the proposed model was found to outperform the state-of-the-art method by 8% (public) to 18% (private). Thus, we conclude that fundus-to-OCT feature space mapping is an attractive option for glaucoma detection.
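The mapping idea can be sketched as follows: a fundus feature encoder is trained so that its output approximates features extracted from paired OCT scans, and a classifier on the mapped features predicts glaucoma; at test time only the fundus image is needed. The layer sizes, the 2048-dimensional fundus feature input, and the combined loss below are placeholder assumptions, not the authors' configuration.

```python
# Schematic of fundus-to-OCT feature mapping plus classification (placeholder sizes).
import torch
import torch.nn as nn

fundus_encoder = nn.Sequential(nn.Linear(2048, 512), nn.ReLU(), nn.Linear(512, 128))
classifier = nn.Linear(128, 1)

def training_losses(fundus_feats, oct_feats, labels):
    mapped = fundus_encoder(fundus_feats)
    map_loss = nn.functional.mse_loss(mapped, oct_feats)           # align with the OCT feature space
    cls_loss = nn.functional.binary_cross_entropy_with_logits(
        classifier(mapped).squeeze(-1), labels)                     # glaucoma vs. normal
    return map_loss + cls_loss

# At test time only a fundus image is required: its features are mapped into the
# OCT-like space and passed to the classifier.
```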


2022 ◽  
Vol 14 (2) ◽  
pp. 396
Author(s):  
Yue Shi ◽  
Liangxiu Han ◽  
Anthony Kleerekoper ◽  
Sheng Chang ◽  
Tongle Hu

The accurate and automated diagnosis of potato late blight disease, one of the most destructive potato diseases, is critical for precision agricultural control and management. Recent advances in remote sensing and deep learning offer the opportunity to address this challenge. This study proposes a novel end-to-end deep learning model (CropdocNet) for accurate and automated late blight disease diagnosis from UAV-based hyperspectral imagery. The proposed method considers the potential disease-specific reflectance radiation variance caused by the canopy’s structural diversity and introduces multiple capsule layers to model the part-to-whole relationship between spectral–spatial features and the target classes, so as to represent the rotation invariance of the target classes in the feature space. We evaluate the proposed method with real UAV-based hyperspectral image (HSI) data under controlled and natural field conditions. The effectiveness of the hierarchical features is quantitatively assessed and compared with existing representative machine learning and deep learning methods on both testing and independent datasets. The experimental results show that the proposed model significantly improves accuracy when considering the hierarchical structure of spectral–spatial features, with average accuracies of 98.09% for the testing dataset and 95.75% for the independent dataset.
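The capsule mechanism referred to above can be illustrated with generic dynamic routing: lower-level spectral-spatial "part" capsules vote for higher-level class capsules, and routing weights the votes by agreement. This is textbook capsule routing, shown only for intuition; it is not the CropdocNet architecture or its spectral-spatial encoder.

```python
# Generic dynamic-routing capsule sketch (not the CropdocNet model).
import torch

def squash(v, dim=-1, eps=1e-8):
    norm2 = (v ** 2).sum(dim=dim, keepdim=True)
    return norm2 / (1.0 + norm2) * v / (norm2.sqrt() + eps)

def dynamic_routing(votes, n_iters=3):
    """votes: (batch, n_in, n_out, d) predictions from input capsules to output capsules."""
    b = torch.zeros(votes.shape[:3], device=votes.device)      # routing logits
    for _ in range(n_iters):
        c = torch.softmax(b, dim=2).unsqueeze(-1)               # coupling coefficients
        s = (c * votes).sum(dim=1)                              # weighted sum over input capsules
        out = squash(s)                                         # (batch, n_out, d)
        b = b + (votes * out.unsqueeze(1)).sum(dim=-1)          # agreement update
    return out

votes = torch.randn(2, 32, 6, 16)      # 32 part capsules voting for 6 class capsules
class_caps = dynamic_routing(votes)    # capsule length ~ class presence probability
```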


2022 ◽  
Vol 14 (2) ◽  
pp. 363
Author(s):  
Nuerbiye Muhetaer ◽  
Ilyas Nurmemet ◽  
Adilai Abulaiti ◽  
Sentian Xiao ◽  
Jing Zhao

In arid and semi-arid areas, timely and effective monitoring and mapping of salt-affected areas is essential to prevent land degradation and to achieve sustainable soil management. The main objective of this study is to make full use of synthetic aperture radar (SAR) polarization technology to improve soil salinity mapping in the Keriya Oasis, Xinjiang, China. In this study, 25 polarization features are extracted from ALOS PALSAR-2 images, of which four are selected. In addition, three soil salinity inversion models, named RSDI1, RSDI2, and RSDI3, are proposed. Analysis and comparison of inversion accuracy show that the overall correlation values of the RSDI1, RSDI2, and RSDI3 models are 0.63, 0.61, and 0.62, respectively. This result indicates that radar feature space models have the potential to extract information on soil salinization in the Keriya Oasis.
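As a hedged illustration of how such an inversion model is fitted and scored by overall correlation, the sketch below regresses stand-in field salinity values on four selected polarization features and reports the correlation between predicted and measured values. The feature values, the linear form, and the data are placeholders; the actual RSDI1, RSDI2, and RSDI3 formulas are defined in the paper.

```python
# Illustrative evaluation of a salinity inversion model by overall correlation
# (stand-in data; not the RSDI formulas from the paper).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
pol_features = rng.random((120, 4))        # 4 selected polarization features per sample (stand-in)
salinity = rng.random(120)                 # field-measured soil salinity (stand-in)

model = LinearRegression().fit(pol_features, salinity)
predicted = model.predict(pol_features)
r = np.corrcoef(predicted, salinity)[0, 1]  # "overall correlation" of the inversion
print(f"correlation: {r:.2f}")
```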


Sensors ◽  
2022 ◽  
Vol 22 (2) ◽  
pp. 537
Author(s):  
Caiyue Zhou ◽  
Yanfen Kong ◽  
Chuanyong Zhang ◽  
Lin Sun ◽  
Dongmei Wu ◽  
...  

Group-based sparse representation (GSR) uses the image nonlocal self-similarity (NSS) prior to group similar image patches and then performs sparse representation on each group. However, the traditional GSR model restores an image by learning from the degraded image itself, which inevitably over-fits the degraded data and yields poor restoration results. In this paper, we propose a new hybrid sparse representation model (HSR) for image restoration. The proposed HSR model is improved in two aspects. On the one hand, it exploits the NSS priors of both the degraded image and external image datasets, making the two priors complementary in the feature space and the image plane. On the other hand, we introduce a joint sparse representation model to make better use of the local sparsity and NSS characteristics of images. This joint model integrates the patch-based sparse representation (PSR) model and the GSR model into a unified sparse representation model, retaining the advantages of both. Extensive experimental results show that the proposed hybrid model outperforms several existing image recovery algorithms in both objective and subjective evaluations.
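The group-based sparse coding step can be sketched compactly: similar patches are stacked into a group matrix, and sparse coding over the group's adaptive SVD dictionary reduces to soft-thresholding of singular values. Patch grouping, the external-image prior, and the joint PSR/GSR formulation are omitted; this is a generic GSR building block, not the full HSR model.

```python
# Generic group-based sparse coding step via singular-value soft-thresholding.
import numpy as np

def gsr_denoise_group(group, tau):
    """group: (patch_dim, n_similar_patches) matrix of vectorised similar patches."""
    U, s, Vt = np.linalg.svd(group, full_matrices=False)    # adaptive dictionary from the group
    s_thresh = np.maximum(s - tau, 0.0)                     # sparse coefficients via soft threshold
    return (U * s_thresh) @ Vt                               # reconstructed group

noisy_group = np.random.randn(64, 40)          # e.g. forty similar 8x8 patches, vectorised
clean_group = gsr_denoise_group(noisy_group, tau=2.0)
```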


2022 ◽  
Vol 2022 ◽  
pp. 1-9
Author(s):  
R. Dinesh Kumar ◽  
E. Golden Julie ◽  
Y. Harold Robinson ◽  
S. Vimal ◽  
Gaurav Dhiman ◽  
...  

Humans have mastered the skill of creativity for many decades. Recently, this mechanism has been replicated using neural networks, which mimic the functioning of the human brain: each unit in the network represents a neuron that transmits messages to other neurons to perform subconscious tasks. There are established methods to render an input image in the style of famous artworks; this problem of generating art is normally called non-photorealistic rendering. Previous approaches rely on directly manipulating the pixel representation of the image. In contrast, this paper uses deep neural networks built for image recognition and carries out the rendering in a feature space that represents the higher-level content of the image. Deep neural networks have previously been used for object recognition and style recognition, for example to categorize artworks by their creation time. This paper uses the Visual Geometry Group (VGG16) neural network to replicate this creative task performed by humans. Three images are involved: a content image containing the features to retain in the output, a style reference image containing the patterns of a famous painting, and an input image to be stylized. These are blended to produce a new image in which the input is transformed to match the content image but is "sketched" to look like the style image.
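The procedure described reads as standard neural style transfer on VGG16 features: a content loss compares feature maps at one layer, a style loss compares Gram matrices of feature maps at several layers, and the output image is optimized directly. The sketch below uses common default layer choices and assumes a recent torchvision with pretrained weights; it is illustrative, not necessarily the paper's exact setup.

```python
# Compact VGG16 style-transfer sketch (common defaults, assumes recent torchvision).
import torch
import torchvision.models as models

vgg = models.vgg16(weights="DEFAULT").features.eval()     # pretrained feature extractor
for p in vgg.parameters():
    p.requires_grad_(False)

STYLE_LAYERS, CONTENT_LAYER = {3, 8, 15, 22}, 15           # indices into vgg feature layers

def extract(x):
    feats = {}
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in STYLE_LAYERS or i == CONTENT_LAYER:
            feats[i] = x
    return feats

def gram(f):
    b, c, h, w = f.shape
    f = f.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)             # normalised Gram matrix

def style_transfer(content, style, steps=300, style_weight=1e6):
    target = content.clone().requires_grad_(True)          # optimise the output image directly
    opt = torch.optim.Adam([target], lr=0.02)
    c_feats, s_feats = extract(content), extract(style)
    for _ in range(steps):
        opt.zero_grad()
        t_feats = extract(target)
        content_loss = torch.nn.functional.mse_loss(
            t_feats[CONTENT_LAYER], c_feats[CONTENT_LAYER])
        style_loss = sum(
            torch.nn.functional.mse_loss(gram(t_feats[i]), gram(s_feats[i]))
            for i in STYLE_LAYERS)
        (content_loss + style_weight * style_loss).backward()
        opt.step()
    return target.detach()
```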

