Multi-level feature extraction network for person re-identification

2021 ◽  
pp. 1-15
Author(s):  
Yang Ge ◽  
Ding Xin

In the task of person re-identification (reID), pedestrians typically move across multiple camera areas, their direction of motion and behavior cannot be constrained, and unrelated people or objects in different scenes interfere with the extraction of target pedestrian information. At the same time, surveillance systems have characteristics such as a fixed shooting angle for each camera, differing angles between cameras, and low image resolution. These characteristics make person re-identification difficult. This paper proposes a Multi-level Feature Extraction Network (MFEN) based on SEResNet-50. Extracting richer and more diverse pedestrian features from poor-quality images effectively improves the re-identification ability of the network, and MFEN obtains multi-stage key features from the image through the Feature Re-extraction Method (FRM) proposed in this paper. Experiments show that, compared with AANet-50, MFEN improves mAP/Rank-1 by 3.85%/0.71% on the Market1501 dataset and by 2.74%/1.28% on the DukeMTMC-reID dataset.
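The abstract does not detail FRM itself, but the general idea of fusing multi-level backbone features into a single pedestrian descriptor, then ranking a gallery by similarity (as in Rank-1 evaluation), can be sketched as follows. This is a generic illustration with made-up shapes, not the paper's method; `descriptor` and `rank_gallery` are hypothetical names.

```python
import numpy as np

def descriptor(stage_maps):
    """Fuse multi-level backbone features into one pedestrian descriptor
    by global-average-pooling each stage map (C, H, W) and concatenating.
    A generic multi-level fusion sketch, not the paper's FRM."""
    return np.concatenate([m.mean(axis=(1, 2)) for m in stage_maps])

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_gallery(query, gallery):
    """Return gallery indices sorted by cosine similarity to the query,
    as done when computing Rank-1 / mAP."""
    return sorted(range(len(gallery)), key=lambda i: -cosine(query, gallery[i]))

rng = np.random.default_rng(0)
person_a = [rng.random((8, 4, 4)), rng.random((16, 2, 2))]   # two backbone stages
person_b = [rng.random((8, 4, 4)), rng.random((16, 2, 2))]

q = descriptor(person_a)                        # 8 + 16 = 24-dim descriptor
gallery = [descriptor(person_a), descriptor(person_b)]
print(rank_gallery(q, gallery)[0])              # the matching identity ranks first
```

Pooling each stage separately keeps both the low-level spatial detail of early stages and the semantics of late stages in the final descriptor.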

2021 ◽  
Vol 13 (8) ◽  
pp. 1602
Author(s):  
Qiaoqiao Sun ◽  
Xuefeng Liu ◽  
Salah Bourennane

Deep learning models have strong abilities in learning features and they have been successfully applied to hyperspectral images (HSIs). However, the training of most deep learning models requires labeled samples, and the collection of labeled samples is labor-intensive for HSI. In addition, single-level features from a single layer are usually considered, which may result in the loss of some important information. Using multiple networks to obtain multi-level features is a solution, but at the cost of longer training time and higher computational complexity. To solve these problems, a novel unsupervised multi-level feature extraction framework based on a three-dimensional convolutional autoencoder (3D-CAE) is proposed in this paper. The designed 3D-CAE is stacked from fully 3D convolutional layers and 3D deconvolutional layers, which allows the spectral-spatial information of targets to be mined simultaneously. In addition, the 3D-CAE can be trained in an unsupervised way without involving labeled samples. Moreover, the multi-level features are directly obtained from the encoded layers with different scales and resolutions, which is more efficient than using multiple networks to obtain them. The effectiveness of the proposed multi-level features is verified on two hyperspectral data sets. The results demonstrate that the proposed method has great promise in unsupervised feature learning and can help to further improve hyperspectral classification when compared with single-level features.
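The key efficiency argument, that one encoder yields features at every scale in a single pass, can be sketched with a toy stand-in. The pooling below is a crude, parameter-free substitute for a learned strided 3D convolution, and all shapes are illustrative; it is not the paper's 3D-CAE.

```python
import numpy as np

def pool3d(x, k=2):
    """Non-overlapping 3D average pooling over a (bands, h, w) cube:
    a crude stand-in for one strided 3D convolutional encoder stage
    (illustrative only, no learned weights)."""
    d, h, w = (s - s % k for s in x.shape)
    x = x[:d, :h, :w]
    return x.reshape(d // k, k, h // k, k, w // k, k).mean(axis=(1, 3, 5))

def encode_multilevel(cube, n_levels=3):
    """Run the cube through stacked 'encoder stages' and keep every
    intermediate map: multi-level features from a single network,
    rather than one network per feature level."""
    feats, x = [], cube
    for _ in range(n_levels):
        x = pool3d(x)
        feats.append(x)
    return feats

cube = np.random.rand(8, 16, 16)   # (spectral bands, height, width)
feats = encode_multilevel(cube)
print([f.shape for f in feats])    # [(4, 8, 8), (2, 4, 4), (1, 2, 2)]
```

Because each stage pools all three axes jointly, spectral and spatial structure shrink together, mirroring the spectral-spatial mining the abstract describes.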


2021 ◽  
Vol 11 (3) ◽  
pp. 968
Author(s):  
Yingchun Sun ◽  
Wang Gao ◽  
Shuguo Pan ◽  
Tao Zhao ◽  
Yahui Peng

Recently, multi-level feature networks have been extensively used in instance segmentation. However, because not all features are beneficial to instance segmentation tasks, the performance of networks cannot be adequately improved by synthesizing multi-level convolutional features indiscriminately. To solve this problem, an attention-based feature pyramid module (AFPM) is proposed, which integrates the attention mechanism on the basis of a multi-level feature pyramid network to efficiently and pertinently extract the high-level semantic features and low-level spatial structure features for instance segmentation. Firstly, we adopt a convolutional block attention module (CBAM) in feature extraction, sequentially generating attention maps that focus on instance-related features along the channel and spatial dimensions. Secondly, we build inter-dimensional dependencies through a convolutional triplet attention module (CTAM) in lateral attention connections, which is used to propagate a helpful semantic feature map and filter out redundant features irrelevant to instance objects. Finally, we construct branches for feature enhancement to strengthen detailed information and boost the entire feature hierarchy of the network. The experimental results on the Cityscapes dataset show that the proposed module outperforms other excellent methods under different evaluation metrics and effectively improves the performance of the instance segmentation method.
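The sequential channel-then-spatial gating that CBAM applies can be sketched in a stripped-down form. The sigmoid-of-mean weights below are a parameter-free stand-in for CBAM's learned MLP and convolution; shapes and names are illustrative, not the paper's implementation.

```python
import numpy as np

def _sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(x):
    """Gate each channel of a (C, H, W) map by the sigmoid of its
    global average (parameter-free stand-in for CBAM's channel MLP)."""
    w = _sigmoid(x.mean(axis=(1, 2)))        # (C,)
    return x * w[:, None, None]

def spatial_attention(x):
    """Gate each spatial location by the sigmoid of its channel mean
    (stand-in for CBAM's spatial convolution)."""
    w = _sigmoid(x.mean(axis=0))             # (H, W)
    return x * w[None, :, :]

feat = np.random.rand(4, 8, 8)
out = spatial_attention(channel_attention(feat))   # channel first, then spatial
print(out.shape)                                   # (4, 8, 8)
```

Applying the two gates sequentially, as the abstract describes, lets the channel map decide *what* is instance-related before the spatial map decides *where*.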


Author(s):  
Shrugal Varde* ◽  
Dr. M.S. Panse
This paper introduces a novel travel aid for blind users that can assist them in detecting the location of doors in corridors and also give information about the location of stairs. The developed system uses a camera to capture images in front of the user. A feature extraction algorithm extracts key features that distinguish doors and stairs from other structures observed in indoor environments. This information is then conveyed to the user through simple auditory feedback. The mobility aid was validated on 50 visually impaired users. The subjects walked in a controlled test environment, and the accuracy of the device in helping the user detect doors and stairs was determined. The results obtained were satisfactory, and the device has the potential for standalone use in indoor navigation.


Author(s):  
William McMahan ◽  
Bryan Jones ◽  
Ian Walker ◽  
Vilas Chitrakaran ◽  
Arjun Seshadri ◽  
...  

This paper connects the investigation of the biomechanics and behavior of the octopus in performing a wide range of dexterous manipulations to the creation of octopus-arm-like robots. This is achieved via the development of a series of octopus arm models that aid both in explaining the underlying octopus biomechanics and in developing a specification for the design of robotic manipulators. Robotic manipulators that match the key features of these models are then introduced, followed by the development of inverse kinematics for the circular (constant-curvature) model.
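The constant-curvature model the abstract refers to has simple closed-form planar kinematics, which can be sketched as follows. This is the standard constant-curvature geometry for a single planar segment, under our own variable names, not the paper's exact multi-section formulation.

```python
import math

def cc_forward(kappa, s):
    """Tip position (x, y) of a planar constant-curvature segment with
    curvature kappa (1/radius) and arc length s: the segment is an arc
    starting at the origin, initially pointing along +y."""
    if abs(kappa) < 1e-12:                      # straight-segment limit
        return (0.0, s)
    return ((1.0 - math.cos(kappa * s)) / kappa,
            math.sin(kappa * s) / kappa)

def cc_inverse(x, y):
    """Inverse kinematics: recover (kappa, s) from a reachable tip.
    The arc lies on a circle through the origin centered at (1/kappa, 0),
    so kappa = 2x / (x^2 + y^2)."""
    if abs(x) < 1e-12:
        return (0.0, y)                         # straight segment
    kappa = 2.0 * x / (x * x + y * y)
    s = math.atan2(y, 1.0 / kappa - x) / kappa  # arc angle / curvature
    return (kappa, s)

# Round trip: quarter-circle of radius 1 ends at (1, 1).
x, y = cc_forward(1.0, math.pi / 2)
print(cc_inverse(x, y))
```

The closed-form inverse is what makes the constant-curvature model attractive as a design specification: tip position alone determines the segment's curvature and length.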


Animals ◽  
2019 ◽  
Vol 9 (4) ◽  
pp. 172 ◽  
Author(s):  
Sharma ◽  
Kennedy ◽  
Schuetze ◽  
Phillips

Cow shelters (gaushalas) are unique traditional institutions in India, where aged, infertile, diseased, rescued, and abandoned cows are sheltered for the rest of their life, until they die of natural causes. These institutions owe their existence to the reverence for the cow as a holy mother goddess for Hindus, the majority religion in India. There is a religious and legal prohibition on cow slaughter in most Indian states. A cross-sectional study was conducted to assess the welfare of cows in these shelters, which included the development of a welfare assessment protocol based on direct animal-based measurements, indirect resource-based assessments, and description of the herd characteristics by the manager. A total of 54 cow shelters in 6 states of India were studied and 1620 animals were clinically examined, based on 37 health, welfare, and behavior parameters. Thirty resources provided to the animals, including housing, flooring, feeding, watering, ease of movement, cleanliness of facilities, lighting, temperature, humidity, and noise levels in the sheds, were measured. The study showed that the shelters contained mostly non-lactating cows, with a mean age of 11 years. The primary welfare problems appeared to be different from those in Western countries, as the major issues found in the shelters were facility-related: the low space allowance per cow, poor quality of the floors, little freedom of movement, and a lack of pasture grazing. Very few cows were recorded as lame, but about one half had carpal joint hair loss and swelling, and slightly fewer had lesions from interacting with shelter furniture. Some shelters also had compromised biosecurity and risks of zoonosis. These issues need to be addressed to help ensure the acceptability of these institutions to the public. This welfare assessment protocol aims to address the welfare issues and problems in the shelters by providing feedback for improvement to the stakeholders.


2017 ◽  
Vol 17 (22) ◽  
pp. 7497-7501 ◽  
Author(s):  
Sanket Goyal ◽  
Pranali Desai ◽  
Vasanth Swaminathan
