feature search
Recently Published Documents

TOTAL DOCUMENTS: 86 (FIVE YEARS: 12)
H-INDEX: 16 (FIVE YEARS: 1)

2021 ◽  
Author(s):  
Harrisson Li ◽  
Evan Gunnell ◽  
Yu Sun

When reading, many people come across words they struggle with and turn to an online dictionary to define the word and better comprehend it. However, this conventional way of looking up unknown vocabulary is often inefficient and ineffective, particularly for individuals who are easily distracted. We therefore asked ourselves: "How could we develop an application that simultaneously helps define difficult words and improves users' vocabulary while minimizing distraction?" In response to that question, this paper describes in depth an application we created that uses an eye-tracking device to help users define words and strengthen their vocabulary skills. It also includes supplemental features, such as an image display, a "search" button, and a generated report, to further support users' vocabulary learning.
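
The abstract does not describe the implementation, but the core loop it implies, detecting a sufficiently long gaze dwell on a word and then fetching its definition, can be sketched as follows. The gaze-sample format, the 0.8 s dwell threshold, and the dictionary callback are assumptions made for illustration, not the authors' code.

    DWELL_THRESHOLD_S = 0.8  # assumed dwell time that signals a word the reader is stuck on

    def word_at(gaze_x, gaze_y, layout):
        """Map a gaze coordinate to the word whose bounding box contains it.
        layout: dict of word -> (x0, y0, x1, y1)."""
        for word, (x0, y0, x1, y1) in layout.items():
            if x0 <= gaze_x <= x1 and y0 <= gaze_y <= y1:
                return word
        return None

    def watch_gaze(samples, layout, define):
        """samples: iterable of (timestamp_s, x, y) from the eye tracker;
        define: callback that looks up and displays a definition."""
        dwell = 0.0
        prev_word, prev_t = None, None
        for t, x, y in samples:
            word = word_at(x, y, layout)
            if word is not None and word == prev_word:
                dwell += t - prev_t          # accumulate time spent on the same word
                if dwell >= DWELL_THRESHOLD_S:
                    define(word)             # e.g. query an online dictionary API
                    dwell = 0.0
            else:
                dwell = 0.0                  # gaze moved to a new word or off the text
            prev_word, prev_t = word, t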


2021 ◽  
pp. 174702182110502
Author(s):  
Azuwan Musa ◽  
Alison R Lane ◽  
Amanda Ellison

Visual search is a task often used in the rehabilitation of patients with cortical and non-cortical visual pathologies such as visual field loss. Reduced visual acuity is often comorbid with these disorders, and it remains poorly defined how low visual acuity may affect a patient's ability to recover visual function through visual search training. The two experiments reported here investigated whether induced blurring of vision (from 6/15 to 6/60) in a neurotypical population differentially affected various types of feature search tasks, whether there is a minimal acceptable level of visual acuity required for normal search performance, and whether these factors affected the degree to which participants could improve with training. The results showed that reducing visual acuity slowed search, but only for tasks in which the target was defined by shape or size (not colour), and only when acuity was worse than 6/15. Furthermore, search behaviour improved with training in all three feature search tasks, irrespective of the degree of blurring induced. The improvement also generalised to a non-trained search task, indicating that an enhanced search strategy had been developed. These findings have important implications for the use of visual search as a rehabilitation aid for partial visual loss, indicating that individuals with even severe comorbid blurring should still be able to benefit from such training.


2021 ◽  
Vol 21 (9) ◽  
pp. 2499
Author(s):  
Safaa Abassi Abu Rukab ◽  
Noam Khayat ◽  
Shaul Hochstein

2021 ◽  
Author(s):  
Yehansen Chen ◽  
Lin Wan ◽  
Zhihang Li ◽  
Qianyan Jing ◽  
Zongyuan Sun

2021 ◽  
Vol 10 (4) ◽  
pp. 245
Author(s):  
Cheng Ding ◽  
Liguo Weng ◽  
Min Xia ◽  
Haifeng Lin

Building and road extraction from remote sensing images is of great significance to urban planning. At present, most building and road extraction models adopt deep learning semantic segmentation methods. However, existing semantic segmentation methods do not pay enough attention to the feature information between hidden layers, so the categories of context pixels are neglected during pixel classification, which leads to two problems: large-scale misjudgment of buildings and disconnection in extracted roads. To solve these problems, this paper proposes a Non-Local Feature Search Network (NFSNet) that improves the segmentation accuracy of buildings and roads in remote sensing images and thereby helps achieve accurate urban planning. By strengthening the exploration of hidden-layer feature information, it effectively reduces large-area misclassification of buildings and road disconnection during segmentation. First, a Self-Attention Feature Transfer (SAFT) module is proposed that searches the importance of the hidden layers along the channel dimension and thus obtains the correlations between channels. Second, a Global Feature Refinement (GFR) module is introduced to integrate the features extracted by the backbone network and the SAFT module; it enhances the semantic information of the feature map and yields more detailed segmentation output. Comparative experiments demonstrate that the proposed method outperforms state-of-the-art methods while having the lowest model complexity.
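
The abstract does not spell out the internals of SAFT or GFR, but the channel-importance search it describes resembles squeeze-and-excitation-style channel attention. The following PyTorch sketch is written under that assumption; the layer sizes, reduction ratio, and fusion step are illustrative only, not the paper's implementation.

    import torch
    import torch.nn as nn

    class ChannelAttention(nn.Module):
        """Weights each channel by its estimated importance (SAFT-like idea)."""
        def __init__(self, channels, reduction=16):
            super().__init__()
            self.pool = nn.AdaptiveAvgPool2d(1)            # squeeze spatial dimensions
            self.fc = nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),                              # per-channel weights in (0, 1)
            )

        def forward(self, x):
            b, c, _, _ = x.shape
            w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
            return x * w                                   # re-weight channels by importance

    class FusionHead(nn.Module):
        """Fuses backbone features with attention-refined features (GFR-like idea)."""
        def __init__(self, channels, num_classes=2):
            super().__init__()
            self.attn = ChannelAttention(channels)
            self.classify = nn.Conv2d(channels, num_classes, kernel_size=1)

        def forward(self, backbone_feat):
            refined = self.attn(backbone_feat)
            return self.classify(backbone_feat + refined)  # per-pixel class logits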


2021 ◽  
pp. 197-217
Author(s):  
Tripti Agarwal ◽  
Amit Chattopadhyay ◽  
Vijay Natarajan

2020 ◽  
Vol 73 (6) ◽  
pp. 908-919
Author(s):  
Tobias Schoeberl ◽  
Florian Goller ◽  
Ulrich Ansorge

In spatial cueing, presenting a peripheral cue at the same position as a to-be-searched-for target (valid condition) facilitates search relative to a cue presented away from the target (invalid condition). It is assumed that this cueing effect reflects spatial attentional capture to the cued position that facilitates search in valid relative to invalid conditions. However, the effect is typically stronger for top-down matching cues that resemble the targets than for non-matching cues that are different from targets. One factor which could contribute to this effect is that in valid non-matching conditions, a cue-to-target colour difference could prompt an object-updating cost of the target that counteracts facilitative influences of attention capture by the valid cues (this has been shown especially in known-singleton search). We tested this prediction by introducing colour changes at target locations in valid and invalid conditions in feature search. This should compensate for selective updating costs in valid conditions and unmask the true capture effect of non-matching cues. In addition, in top-down matching conditions, colour changes at target positions in invalid conditions should increase the cueing effect, now by selective updating costs in addition to capture away from the targets in invalid conditions. Both predictions were borne out by the results, supporting a contribution of object-file updating to net cueing effects. However, we found little evidence for attentional capture by non-matching cues in feature search even when the selective cost by object-file updating in only valid conditions was compensated for.
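
The net cueing effect these predictions concern is typically quantified as the difference between mean response times on invalid-cue and valid-cue trials, with positive values indicating capture by the cue. A minimal sketch with illustrative values (not data from this study):

    from statistics import mean

    def cueing_effect(trials):
        """trials: list of dicts with 'validity' ('valid'/'invalid') and 'rt' in ms."""
        valid = [t["rt"] for t in trials if t["validity"] == "valid"]
        invalid = [t["rt"] for t in trials if t["validity"] == "invalid"]
        return mean(invalid) - mean(valid)

    example = [
        {"validity": "valid", "rt": 520},
        {"validity": "valid", "rt": 540},
        {"validity": "invalid", "rt": 585},
        {"validity": "invalid", "rt": 575},
    ]
    print(cueing_effect(example))  # 50.0 ms: faster responses at the validly cued location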


2020 ◽  
Author(s):  
Jesse Sherwood ◽  
Jesse Lowe ◽  
Reza Derakhshani

Finding suitable common feature sets for use in multiclass subject-independent brain-computer interface (BCI) classifiers is problematic due to the characteristically large inter-subject variation of electroencephalographic signatures. We propose a wrapper search method using a one-versus-the-rest discrete-output classifier. Obtaining and evaluating the quality of feature sets requires the development of appropriate classifier metrics. A one-versus-the-rest classifier must be evaluated by a scalar performance metric that provides feedback for the feature search algorithm. However, the one-versus-the-rest discrete classifier is prone to settling into degenerate states for difficult discrimination problems. The chance of degeneracy increases with the number of classes, the number of subjects, and the imbalance between the number of samples in the majority and minority classes. This paper proposes a scalar Quality (Q)-factor to compensate for classifier degeneracy and to improve the convergence of the wrapper search. The Q-factor, calculated from the ratio of sensitivity to specificity of the confusion matrix, is applied as a penalty to the accuracy (1 − error rate). This method is successfully applied to a multiclass subject-independent BCI using 10 untrained subjects performing 4 motor tasks, in conjunction with the Sequential Floating Forward Selection feature search algorithm and Support Vector Machine classifiers.
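
The abstract does not give the exact form of the penalty, so the combination below is an assumption for illustration: accuracy is scaled by the balance between sensitivity and specificity, which drives the score of a degenerate one-versus-the-rest classifier toward zero and gives the wrapper search useful feedback.

    def q_penalized_score(tp, fp, tn, fn):
        """Scalar fitness for one class vs. the rest, computed from its confusion matrix."""
        sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
        specificity = tn / (tn + fp) if (tn + fp) else 0.0
        accuracy = (tp + tn) / (tp + fp + tn + fn)     # 1 - error rate
        if max(sensitivity, specificity) == 0.0:
            return 0.0
        q = min(sensitivity, specificity) / max(sensitivity, specificity)
        return accuracy * q                            # penalized score fed back to the search

    # A degenerate classifier that labels everything "rest" looks accurate on an
    # imbalanced class, but the Q-factor drives its score to zero.
    print(q_penalized_score(tp=0, fp=0, tn=90, fn=10))  # 0.0 rather than 0.9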


Author(s):  
Xiangning Chen ◽  
Bo Qiao ◽  
Weiyi Zhang ◽  
Wei Wu ◽  
Murali Chintalapati ◽  
...  
