parallel feature
Recently Published Documents

TOTAL DOCUMENTS: 82 (FIVE YEARS: 36)
H-INDEX: 10 (FIVE YEARS: 2)

Author(s):  
Lumin Liu

Removing undesired reflection from a single image is in demand for computational photography. Reflection removal methods have become increasingly effective thanks to the rapid development of deep neural networks. However, current reflection removal methods usually leave salient reflection residues due to the challenge of recognizing diverse reflection patterns. In this paper, we present a one-stage reflection removal framework, trained in an end-to-end manner, that considers both low-level information correlation and efficient feature separation. Our approach employs the criss-cross attention mechanism to extract low-level features and to efficiently enhance contextual correlation. To thoroughly remove reflection residues from the background image, we penalize similar texture features by contrasting the parallel feature separation networks, so that unrelated textures in the background image can be progressively separated during model training. Experiments on both real-world and synthetic datasets demonstrate that our approach achieves state-of-the-art performance both quantitatively and qualitatively.
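A minimal sketch of the idea of contrasting two parallel separation branches, assuming PyTorch. The branch architectures and the cosine-similarity penalty below are placeholders standing in for the paper's parallel feature separation networks and its contrastive penalty, not the authors' implementation.

```python
# Sketch: two parallel branches separate background and reflection features;
# a similarity penalty pushes reflection textures out of the background branch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ParallelSeparation(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        # Placeholder branch architectures (one per layer to be separated).
        self.bg_branch = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.refl_branch = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, feats: torch.Tensor):
        return self.bg_branch(feats), self.refl_branch(feats)

def similarity_penalty(bg: torch.Tensor, refl: torch.Tensor) -> torch.Tensor:
    """Penalize cosine similarity between the two branches' features."""
    bg_flat = F.normalize(bg.flatten(1), dim=1)
    refl_flat = F.normalize(refl.flatten(1), dim=1)
    return (bg_flat * refl_flat).sum(dim=1).abs().mean()

# Usage during training (lambda_sep is a hypothetical weight):
# total_loss = reconstruction_loss + lambda_sep * similarity_penalty(bg, refl)
```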


Neuroforum ◽  
2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Klaudia P. Szatko ◽  
Katrin Franke

Abstract To provide a compact and efficient input to the brain, sensory systems separate the incoming information into parallel feature channels. In the visual system, parallel processing starts in the retina. Here, the image is decomposed into multiple retinal output channels, each selective for a specific set of visual features like motion, contrast, or edges. In this article, we will summarize recent findings on the functional organization of the retinal output, the neural mechanisms underlying its diversity, and how single visual features, like color, are extracted by the retinal network. Unraveling how the retina – as the first stage of the visual system – filters the visual input is an important step toward understanding how visual information processing guides behavior.


2021 ◽  
Author(s):  
Archana Shivdas Sumant ◽  
Dipak V. Patil

High-dimensional data analytics is an emerging research field in the digital world. Gene expression microarray data, remote sensor data, medical data, and image and video data are examples of high-dimensional data. Feature subset selection is a challenging task for such data, and achieving both diversity and accuracy is an important aspect of this research. To reduce time complexity, a parallel stepwise feature subset selection approach is adopted in this paper. Our aim is to reduce time complexity and enhance classification accuracy with a minimal number of selected features. With this approach, an average accuracy of 88.18% is achieved.
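A hedged sketch of one way parallel stepwise (greedy forward) feature subset selection can be organized, not the paper's exact algorithm: at each round, every candidate extension of the current subset is scored in parallel and the best one is kept. The classifier and scoring scheme below are illustrative choices.

```python
# Parallel greedy forward feature selection using joblib and scikit-learn.
import numpy as np
from joblib import Parallel, delayed
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def score_subset(X, y, subset):
    # Cross-validated accuracy of a simple classifier on the candidate subset.
    clf = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(clf, X[:, subset], y, cv=5).mean()

def parallel_forward_selection(X, y, max_features=20, n_jobs=-1):
    selected, remaining = [], list(range(X.shape[1]))
    best_score = 0.0
    while remaining and len(selected) < max_features:
        # Score every candidate extension of the current subset in parallel.
        scores = Parallel(n_jobs=n_jobs)(
            delayed(score_subset)(X, y, selected + [f]) for f in remaining
        )
        best_idx = int(np.argmax(scores))
        if scores[best_idx] <= best_score:   # stop when no candidate improves
            break
        best_score = scores[best_idx]
        selected.append(remaining.pop(best_idx))
    return selected, best_score
```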


Sensors ◽  
2021 ◽  
Vol 21 (21) ◽  
pp. 7286
Author(s):  
Muhammad Attique Khan ◽  
Majed Alhaisoni ◽  
Usman Tariq ◽  
Nazar Hussain ◽  
Abdul Majid ◽  
...  

In healthcare, a multitude of data is collected from medical sensors and devices, such as X-ray machines, magnetic resonance imaging, and computed tomography (CT) scanners, which can be analyzed by artificial intelligence methods for the early diagnosis of diseases. Recently, the outbreak of COVID-19 caused many deaths. Computer vision researchers support medical doctors by employing deep learning techniques on medical images to diagnose COVID-19 patients, and various methods have been proposed for COVID-19 case classification. Here, a new automated technique is proposed using parallel fusion and optimization of deep learning models. The proposed technique starts with contrast enhancement using a combination of top-hat and Wiener filters. Two pre-trained deep learning models (AlexNet and VGG16) are employed and fine-tuned according to the target classes (COVID-19 and healthy). Features are extracted and fused using a parallel fusion approach (parallel positive correlation). Optimal features are selected using the entropy-controlled firefly optimization method. The selected features are classified using machine learning classifiers such as the multiclass support vector machine (MC-SVM). Experiments were carried out on the Radiopaedia database and achieved an accuracy of 98%. Moreover, a detailed analysis shows the improved performance of the proposed scheme.
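A hedged sketch of the overall shape of such a pipeline (dual-network feature extraction, fusion, selection, SVM classification), assuming PyTorch/torchvision and scikit-learn. The paper's parallel positive-correlation fusion and entropy-controlled firefly selection are approximated here by simple concatenation and a histogram-entropy ranking; `train_imgs` and `train_labels` are hypothetical.

```python
# Extract features from two pretrained CNNs, fuse them, rank by entropy,
# and classify with an SVM (an approximation of the described pipeline).
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.svm import SVC

device = "cuda" if torch.cuda.is_available() else "cpu"
alexnet = models.alexnet(weights="DEFAULT").to(device).eval()
vgg16 = models.vgg16(weights="DEFAULT").to(device).eval()

preprocess = T.Compose([
    T.Resize((224, 224)), T.ToTensor(),
    T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract(img):  # img: a PIL image (e.g., a chest CT slice)
    x = preprocess(img).unsqueeze(0).to(device)
    f1 = alexnet.features(x).flatten(1)   # AlexNet convolutional features
    f2 = vgg16.features(x).flatten(1)     # VGG16 convolutional features
    return torch.cat([f1, f2], dim=1).cpu().numpy().ravel()  # parallel fusion

def entropy_rank(feats, k=500):
    """Keep the k feature columns with the highest histogram entropy."""
    ent = []
    for col in feats.T:
        hist, _ = np.histogram(col, bins=32)
        p = hist / (hist.sum() + 1e-12)
        p = p[p > 0]
        ent.append(-(p * np.log(p)).sum())
    return np.argsort(ent)[::-1][:k]

# Usage (assuming lists train_imgs, train_labels):
# feats = np.stack([extract(im) for im in train_imgs])
# idx = entropy_rank(feats)
# clf = SVC(kernel="rbf").fit(feats[:, idx], train_labels)
```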


Author(s):  
Kottilingam Kottursamy

The role of facial expression recognition in social science and human-computer interaction has received a lot of attention, and advances in deep learning have pushed performance in this field beyond human-level accuracy. This article discusses common deep learning algorithms for emotion recognition, utilising the eXnet library to achieve improved accuracy. Memory and computation constraints, however, remain to be overcome, and overfitting is an issue with large models; one solution is to reduce the generalization error. We employ a novel Convolutional Neural Network (CNN) named eXnet to construct a new CNN model utilising parallel feature extraction. The most recent eXnet (Expression Net) model reduces the previous model's error while having far fewer parameters. Data augmentation techniques that have been in use for decades are applied to the generalized eXnet, which employs effective ways to reduce overfitting while keeping the overall model size under control.
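A minimal sketch of a parallel feature-extraction block of the kind described, not the published eXnet architecture: several convolutional branches with different receptive fields run in parallel and their feature maps are concatenated. Written in PyTorch; input size and channel counts are illustrative.

```python
# A parallel (multi-branch) feature-extraction block.
import torch
import torch.nn as nn

class ParallelBlock(nn.Module):
    def __init__(self, in_ch: int, branch_ch: int = 32):
        super().__init__()
        # Three parallel branches with 1x1, 3x3, and 5x5 receptive fields.
        self.b1 = nn.Sequential(nn.Conv2d(in_ch, branch_ch, 1), nn.ReLU())
        self.b3 = nn.Sequential(nn.Conv2d(in_ch, branch_ch, 3, padding=1), nn.ReLU())
        self.b5 = nn.Sequential(nn.Conv2d(in_ch, branch_ch, 5, padding=2), nn.ReLU())

    def forward(self, x):
        # Concatenate the parallel branches along the channel dimension.
        return torch.cat([self.b1(x), self.b3(x), self.b5(x)], dim=1)

# Example: a 48x48 grayscale face crop through one parallel block.
block = ParallelBlock(in_ch=1)
out = block(torch.randn(1, 1, 48, 48))   # -> shape (1, 96, 48, 48)
```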

