semantic gap — Recently Published Documents

Total documents: 271 (five years: 58)
H-index: 21 (five years: 3)

2021 · Vol 11 (24) · pp. 12116
Author(s): Shanza Abbas, Muhammad Umair Khan, Scott Uk-Jin Lee, Asad Abbas

Natural language interfaces to databases (NLIDB) have been a research topic for over a decade. Significant data collections are available in the form of databases, and a system that can translate a natural language query into a structured one makes them far more accessible for research purposes. Efforts toward such systems have long relied on pipelining methods, in which natural language processing techniques are integrated with data science methods. With significant advances in machine learning and natural language processing, NLIDB with deep learning has emerged as a new research trend, and deep learning has shown strong potential for rapid progress on text-to-SQL tasks. In deep learning NLIDB, closing the semantic gap when predicting the user's intended columns has arisen as one of the critical and fundamental problems in this field. Prior contributions to this issue have preprocessed input features and encoded schema elements before they reach the target model. Despite this significant body of work, column prediction remains one of the critical issues in developing NLIDB systems. Working towards closing the semantic gap between user intention and predicted columns, we present an approach for deep learning text-to-SQL tasks that includes columns' occurrence scores from previous queries as an additional input feature. Because overall exact-match accuracy depends heavily on column prediction, improving column prediction accuracy also improves overall accuracy. For this purpose, we extract query fragments from previous queries and compute column occurrence and co-occurrence scores, which are then processed as input features for an encoder–decoder-based text-to-SQL model.
These scores factor in the probability that columns and tables have already been used together in the query history. We evaluated our approach on Spider, the currently most popular text-to-SQL dataset. Spider is a complex dataset containing multiple databases, with query–question pairs and schema information. We compared our exact-match accuracy with a base model using its test and training data splits. Our approach outperformed the base model's accuracy, and accuracy was further boosted in experiments with the pretrained language model BERT.
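The paper's exact feature pipeline is not given in the abstract, but the core idea of scoring column occurrences and co-occurrences over a query history can be sketched in a few lines (a minimal, hypothetical illustration, not the authors' implementation):

```python
from collections import Counter
from itertools import combinations

def cooccurrence_scores(query_columns):
    """Count how often each column, and each pair of columns, appears
    across a history of queries, then normalise to [0, 1] scores."""
    occ = Counter()
    co_occ = Counter()
    for cols in query_columns:
        occ.update(set(cols))
        co_occ.update(combinations(sorted(set(cols)), 2))
    n = len(query_columns)
    occ_scores = {c: v / n for c, v in occ.items()}
    co_scores = {p: v / n for p, v in co_occ.items()}
    return occ_scores, co_scores

# Toy query history: the columns referenced by three past SQL queries.
history = [
    ["name", "age"],
    ["name", "salary"],
    ["name", "age", "salary"],
]
occ, co = cooccurrence_scores(history)
```

In a full system these scores would be attached to each schema element as an extra input feature for the encoder, alongside the usual column-name embeddings.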


2021
Author(s): Xiaochun Zhang, Timothy M. Jones, Simone Campanoni

Sensors · 2021 · Vol 21 (21) · pp. 7136
Author(s): Zhiqiang Zhang, Xin Qiu, Yongzhou Li

Feature Pyramid Network (FPN) is widely used as the neck of current popular object detection networks. Research has shown that the structure of FPN has some defects. Besides the loss of information caused by reducing the number of channels, features at different levels have different scales and correspond to different abstraction levels, resulting in a semantic gap between levels, which we call level imbalance. Correlation convolution is one way to alleviate the imbalance between adjacent layers, but alleviating the imbalance between all levels remains an open problem. In this article, we propose a simple but effective network structure called Scale-Equalizing Feature Pyramid Network (SEFPN), which generates multiple features of different scales by iteratively fusing the features of each level. SEFPN improves the overall performance of the network by balancing the semantic representation of each feature level. Experimental results on the MS-COCO2017 dataset show that integrating SEFPN as a standalone module into a one-stage network improves detector performance by ∼1 AP, and that it also improves the detection performance of Faster R-CNN, a typical two-stage network, especially for large objects (APL improves by ∼2 AP).
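The "iteratively fuse all levels" idea can be illustrated with a toy 1-D sketch: each round, every pyramid level is replaced by the average of all levels resized to its own scale. This is a hypothetical simplification of SEFPN (real pyramid features are multi-channel 2-D tensors and the fusion is learned), shown only to make the scale-equalizing intuition concrete:

```python
def resize_1d(feat, new_len):
    """Nearest-neighbour resize of a 1-D feature list."""
    return [feat[int(i * len(feat) / new_len)] for i in range(new_len)]

def scale_equalize(pyramid, iterations=2):
    """Each iteration, replace every level by the average of all
    levels resized to that level's own length."""
    for _ in range(iterations):
        pyramid = [
            [sum(vals) / len(pyramid) for vals in
             zip(*(resize_1d(f, len(target)) for f in pyramid))]
            for target in pyramid
        ]
    return pyramid

# Two toy levels at different scales.
levels = [[1.0, 3.0], [2.0, 2.0, 2.0, 2.0]]
fused = scale_equalize(levels, iterations=1)
```

After fusion every level carries a mixture of information from all scales, which is the sense in which the semantic representation of the levels is "balanced".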


Author(s): Tengda Zhou, Shaoyang Men, Jingxian Liang, Baoxian Yu, Han Zhang, ...

Heart rate measurement from the Ballistocardiogram (BCG) signal is an efficient method for long-term, real-time cardiac activity monitoring, especially for patients with cardiovascular and cerebrovascular disease. In this study, we propose a one-dimensional (1D) U-net++ to automatically identify the position of the J-peak in BCG signals. The proposed 1D U-net++ is based on a 1D convolutional neural network that passes data features backward through dense skip connections. Low-level and high-level features of the BCG signal are combined with the last-layer features of the 1D U-net++ to shorten the semantic gap at the skip connections between encoder and decoder features. BCG signals from eight healthy subjects were collected for experimental verification, and the accuracy and precision of J-peak detection reached 99.4% and 99.3%, respectively. The experimental results demonstrate that the proposed method can effectively identify the J-peak in BCG signals.
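The abstract reports precision of J-peak detection but not the matching rule behind it. A common (hypothetical here) way to score peak detectors is tolerance-based matching of predicted peak positions against annotated ones:

```python
def match_peaks(true_peaks, pred_peaks, tol=5):
    """Greedily match predicted peak positions (sample indices) to
    ground-truth ones within a tolerance; return precision and recall."""
    unmatched = list(true_peaks)
    tp = 0
    for p in pred_peaks:
        hit = next((t for t in unmatched if abs(t - p) <= tol), None)
        if hit is not None:
            unmatched.remove(hit)
            tp += 1
    precision = tp / len(pred_peaks) if pred_peaks else 0.0
    recall = tp / len(true_peaks) if true_peaks else 0.0
    return precision, recall

# Two of three predicted J-peaks fall within 5 samples of a true peak.
prec, rec = match_peaks([100, 200, 300], [102, 198, 350], tol=5)
```

The tolerance value and the greedy matching are illustrative assumptions; the paper's exact evaluation protocol may differ.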


Information · 2021 · Vol 12 (10) · pp. 402
Author(s): Stefano Marchesin, Giorgio Maria Di Nunzio, Maristella Agosti

In Information Retrieval (IR), the semantic gap is the mismatch between users' queries and how retrieval models answer them. In this paper, we explore how to use external knowledge resources to enhance bag-of-words representations and reduce the effect of the semantic gap between queries and documents. To this end, we propose several simple but effective knowledge-based query expansion and reduction techniques and evaluate them in the medical domain. The proposed query reformulations increase the probability of retrieving relevant documents by adding highly specific terms to, or removing them from, the original query. Experimental analyses on different test collections for Precision Medicine IR show the effectiveness of the developed techniques. In particular, a specific subset of query reformulations allows retrieval models to achieve top-performing results on all the considered test collections.
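A minimal sketch of knowledge-based expansion and reduction, assuming a toy knowledge resource that maps broad medical terms to highly specific related terms (the entries and term lists below are hypothetical, not from the paper):

```python
# Toy knowledge resource: broad term -> highly specific related terms.
KNOWLEDGE = {
    "melanoma": ["BRAF"],
    "lung cancer": ["EGFR"],
}

def expand_query(terms):
    """Expansion: append highly specific terms related to query terms."""
    extra = [s for t in terms for s in KNOWLEDGE.get(t, [])]
    return terms + [s for s in extra if s not in terms]

def reduce_query(terms):
    """Reduction: drop highly specific terms found in the resource."""
    specific = {s for related in KNOWLEDGE.values() for s in related}
    return [t for t in terms if t not in specific]

expanded = expand_query(["melanoma", "treatment"])
reduced = reduce_query(["melanoma", "BRAF", "treatment"])
```

Both reformulations are then issued to an ordinary bag-of-words retrieval model; which of the two helps depends on whether the specific terms discriminate relevant documents or over-constrain the query.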


2021 · Vol 13 (18) · pp. 3755
Author(s): Yongjie Guo, Feng Wang, Yuming Xiang, Hongjian You

Deep convolutional neural networks (DCNNs) have achieved state-of-the-art performance in land cover classification thanks to their outstanding nonlinear feature extraction ability. For land cover classification in very high-resolution (VHR) remote sensing images, DCNNs are usually designed as an encoder–decoder architecture. The encoder captures semantic representations by stacking convolution layers and shrinking the spatial resolution, while the decoder restores spatial information through upsampling and combines it with features at different levels via summation or skip connections. However, a semantic gap remains between features at different levels, and a simple summation or skip connection reduces land cover classification performance. To overcome this problem, we propose a novel end-to-end network named Dual Gate Fusion Network (DGFNet) to restrain the impact of the semantic gap. DGFNet consists of two main components: a Feature Enhancement Module (FEM) and a Dual Gate Fusion Module (DGFM). The FEM combines local information with global context and strengthens the feature representation in the encoder. The DGFM reduces the semantic gap between different-level features, effectively fusing low-level spatial information and high-level semantic information in the decoder. Extensive experiments on the LandCover dataset and the ISPRS Potsdam dataset prove the effectiveness of the proposed network: DGFNet achieves state-of-the-art performance of 88.87% mIoU on the LandCover dataset and 72.25% mIoU on the ISPRS Potsdam dataset.
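The general idea of gated fusion, as opposed to plain summation, can be sketched as a sigmoid gate that decides elementwise how much low-level spatial detail versus high-level semantics to keep. The gating equation below is a common pattern and an assumption on our part, not the paper's exact DGFM:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gated_fusion(low, high, gate_logits):
    """Blend low-level and high-level features with a learned gate:
    out[i] = g * low[i] + (1 - g) * high[i], g = sigmoid(gate_logits[i]).
    In a real network the gate logits come from a learned conv layer."""
    out = []
    for l, h, w in zip(low, high, gate_logits):
        g = sigmoid(w)
        out.append(g * l + (1.0 - g) * h)
    return out

# With zero logits the gate is 0.5: an even blend of the two features.
fused = gated_fusion([1.0, 0.0], [0.0, 1.0], [0.0, 0.0])
```

Unlike a fixed summation, the gate lets the network suppress whichever feature is less informative at each position, which is what "restraining the impact of the semantic gap" amounts to in practice.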


2021 · Vol 2 (3) · pp. 1-24
Author(s): Subhadip Maji, Smarajit Bose

In a Content-Based Image Retrieval (CBIR) system, the task is to retrieve images similar to a query image from a large database. The usual procedure is to extract useful features from the query image and retrieve images that have a similar set of features, using a suitable similarity measure to rank the database images and return those with the highest scores. Naturally, the choice of these features plays a very important role in the success of such a system, and high-level features are required to reduce the “semantic gap.” In this article, we propose to use features derived from a deep convolutional network pre-trained on a large image classification problem. This approach produces vastly superior results on a variety of databases and outperforms many contemporary CBIR systems. We analyse the retrieval time of the method and also propose a pre-clustering of the database based on these features, which yields comparable results in a much shorter time in most cases.
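The ranking step of such a system reduces to a nearest-neighbour search over feature vectors. A minimal sketch with cosine similarity and tiny made-up 2-D "features" (real deep features would have hundreds of dimensions and come from a pre-trained network):

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_feat, db_feats, top_k=2):
    """Rank database images by similarity to the query features."""
    scored = sorted(db_feats.items(),
                    key=lambda kv: cosine(query_feat, kv[1]),
                    reverse=True)
    return [name for name, _ in scored[:top_k]]

db = {
    "cat1.jpg": [1.0, 0.1],
    "cat2.jpg": [0.9, 0.2],
    "car1.jpg": [0.1, 1.0],
}
top = retrieve([1.0, 0.0], db, top_k=2)
```

The proposed pre-clustering speeds this up by comparing the query only against images in the nearest cluster instead of the whole database, trading a small amount of recall for retrieval time.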


Author(s): Qianli Xu, Ana Garcia Del Molino, Jie Lin, Fen Fang, Vigneshwaran Subbaraju, ...

Lifelog analytics is an emerging research area whose technologies embrace the latest advances in machine learning, wearable computing, and data analytics. However, state-of-the-art technologies are still inadequate to distill voluminous multimodal lifelog data into high-quality insights. In this article, we propose a novel semantic relevance mapping (SRM) method to tackle the problem of lifelog information access. We formulate lifelog image retrieval as a series of mapping processes in which a semantic gap exists between basic semantic attributes and high-level query topics. The SRM serves both as a formalism for constructing a trainable model that bridges the semantic gap and as an algorithm for implementing the training process on real-world lifelog data. Based on the SRM, we propose a computational framework for lifelog analytics that supports various lifelog information access applications, such as image retrieval, summarization, and insight visualization. Systematic evaluations on three challenging benchmarking tasks show the effectiveness of our method.
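The abstract describes mapping low-level attribute detections to high-level query topics through a trainable relevance model. One simple (hypothetical) reading of that mapping is a weighted sum: each topic scores the detected attributes by learned relevance weights. The topic names and weights below are invented for illustration:

```python
def map_relevance(attr_scores, relevance):
    """Map basic semantic-attribute scores to query-topic scores via a
    relevance matrix (fixed here; trainable in the actual SRM)."""
    return {
        topic: sum(attr_scores.get(a, 0.0) * w for a, w in weights.items())
        for topic, weights in relevance.items()
    }

# Hypothetical learned weights: how strongly each detected attribute
# supports each high-level query topic.
relevance = {
    "eating at a restaurant": {"food": 0.7, "table": 0.3},
    "driving": {"car": 0.8, "road": 0.2},
}

# Attribute detections for one lifelog image (e.g. from a classifier).
attrs = {"food": 0.9, "table": 0.5, "car": 0.1}
topic_scores = map_relevance(attrs, relevance)
```

Retrieval then amounts to ranking lifelog images by their score for the queried topic; training the relevance weights from labelled data is what bridges the semantic gap.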

