background feature
Recently Published Documents


TOTAL DOCUMENTS: 20 (FIVE YEARS: 8)
H-INDEX: 5 (FIVE YEARS: 1)

Sensors, 2021, Vol 21 (13), pp. 4315
Author(s): Pei-Yun Tsai, Chiu-Hua Huang, Jia-Wei Guo, Yu-Chuan Li, An-Yeu Andy Wu, ...

Background: Feature extraction from photoplethysmography (PPG) signals is an essential step in analyzing vascular and hemodynamic information. PPG waveforms measured at different sites exhibit different morphologies, and missing or ambiguous features frequently occur, which limits subsequent signal processing. Methods: The causes of missing or ambiguous features in finger and wrist PPG pulses are analyzed based on the concept of component waves obtained from pulse decomposition. A systematic approach for missing-feature imputation and ambiguous-feature resolution is then proposed. Results: With the imputation and ambiguity-resolution technique, features were successfully identified in 35,036 (98.7%) of 35,502 finger PPG cycles and 36,307 (99.1%) of 36,652 wrist PPG cycles. The extracted features became more stable, and the standard deviations of their distributions were reduced. Furthermore, significant correlations of up to 0.92 were found between the finger and wrist PPG waveforms with respect to the positions and widths of the third to fifth component waves. Conclusion: The proposed missing-feature imputation and ambiguous-feature resolution solve problems encountered during PPG feature extraction and expand feature availability for further processing. More intrinsic properties of finger and wrist PPG are revealed, and the coherence between the finger and wrist PPG waveforms enhances the applicability of wrist PPG.
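The abstract does not spell out the decomposition, but pulse decomposition analysis commonly models a PPG cycle as a sum of a few Gaussian component waves, and the amplitude, position, and width of each fitted component can stand in for fiducial features that are missing or ambiguous. The Python sketch below illustrates that idea under those assumptions; the Gaussian kernel choice, initial guesses, and bounds are illustrative, not the authors' exact procedure.

```python
# Hypothetical sketch: fit five Gaussian component waves to one normalized PPG cycle.
# The Gaussian model and all starting values are assumptions for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def gaussian_sum(t, *params):
    """Sum of Gaussians; params = [a1, mu1, sigma1, a2, mu2, sigma2, ...]."""
    y = np.zeros_like(t)
    for a, mu, sigma in zip(params[0::3], params[1::3], params[2::3]):
        y += a * np.exp(-((t - mu) ** 2) / (2.0 * sigma ** 2))
    return y

def decompose_ppg_cycle(cycle, n_waves=5):
    """Return (amplitude, position, width) for each fitted component wave."""
    t = np.linspace(0.0, 1.0, len(cycle))
    p0, lower, upper = [], [], []
    for k in range(n_waves):
        mu0 = (k + 0.5) / n_waves                  # spread initial centers over the cycle
        p0 += [cycle[int(mu0 * (len(cycle) - 1))], mu0, 0.08]
        lower += [0.0, 0.0, 0.01]
        upper += [np.inf, 1.0, 0.5]
    params, _ = curve_fit(gaussian_sum, t, cycle, p0=p0, bounds=(lower, upper))
    return np.array(params).reshape(n_waves, 3)
```

Once every cycle is described by component-wave parameters, a landmark that is missing from one cycle can be imputed from the fitted components rather than discarded, which is consistent with the high recovery rates reported above.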


2021, Vol 29, pp. 519-529
Author(s): Sang-Hong Lee

BACKGROUND: Feature selection is a technique that improves performance by eliminating overlapping or unrelated features. OBJECTIVE: To improve performance, this study proposes a new feature selection method based on the distance between centers. METHODS: The method uses the distance between the centers of gravity (DBCG) of the bounded sums of weighted fuzzy memberships (BSWFMs) provided by a neural network with weighted fuzzy membership functions (NEWFM). RESULTS: Using the distance-based feature selection, a minimal set of 22 high-performing features is obtained by individually removing the features with the shortest DBCG of BSWFMs from the initial 24 features. With these 22 features as inputs, the NEWFM achieves a sensitivity, accuracy, and specificity of 99.3%, 99.5%, and 99.7%, respectively. CONCLUSIONS: Only the mean DBCG is used to select features in this study; future work should incorporate statistical measures such as the standard deviation, maximum, and normal distribution.
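As a rough illustration of the distance-driven elimination loop only (not the NEWFM/BSWFM machinery, which requires a trained fuzzy-membership model), the sketch below greedily drops the feature whose class centers are closest, using the per-feature distance between class means as a stand-in for the DBCG.

```python
# Illustrative sketch: greedy removal of the least separating feature.
# The per-feature distance between class means is only a stand-in for the DBCG
# of BSWFMs; the actual method recomputes DBCG from a trained NEWFM at each step.
import numpy as np

def greedy_distance_selection(X, y, n_keep=22):
    """X: (n_samples, n_features) array, y: binary labels. Returns kept feature indices."""
    kept = list(range(X.shape[1]))
    while len(kept) > n_keep:
        center_0 = X[y == 0][:, kept].mean(axis=0)
        center_1 = X[y == 1][:, kept].mean(axis=0)
        dbcg = np.abs(center_0 - center_1)      # proxy for DBCG per remaining feature
        kept.pop(int(np.argmin(dbcg)))          # drop the feature with the shortest distance
    return kept
```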


2021, Vol 97 (2), pp. 267-285
Author(s): Andrew Hom, Ryan Beasley

Abstract: Temporal considerations play a role in many models of foreign policy analysis, particularly those focused on decision-making processes. While time features prominently as a background feature against which sequence, cadence, and psychological consequence are measured, little attention has been paid to how foreign policy agents actively construct their temporal environments. We propose that different foreign policy-making actors develop distinct relationships with time, and that variations in these relationships can help account for the ways in which 'events' are transformed into routine practices, opportunities for change, or full-blown foreign policy crises. We advance a novel conception of time in foreign policy-making by developing timing theory and examining the linguistic constructions of 'time' by foreign policy actors. We propose a typology of timing agency, highlighting the impact of these orientations on decision-making processes as well as on the characteristics of foreign policy behaviours. Using the case of Brexit, we elaborate differences in actors' temporal orientations and show how such differences shape the making of foreign policy.


2020, Vol 34 (07), pp. 12967-12974
Author(s): Shizhen Zhao, Changxin Gao, Yuanjie Shao, Lerenhan Li, Changqian Yu, ...

We propose a Generative Transfer Network (GTNet) for zero-shot object detection (ZSD). GTNet consists of an Object Detection Module and a Knowledge Transfer Module. The Object Detection Module can learn large-scale seen domain knowledge. The Knowledge Transfer Module leverages a feature synthesizer to generate unseen class features, which are applied to train a new classification layer for the Object Detection Module. In order to synthesize features for each unseen class with both the intra-class variance and the IoU variance, we design an IoU-Aware Generative Adversarial Network (IoUGAN) as the feature synthesizer, which can be easily integrated into GTNet. Specifically, IoUGAN consists of three unit models: Class Feature Generating Unit (CFU), Foreground Feature Generating Unit (FFU), and Background Feature Generating Unit (BFU). CFU generates unseen features with the intra-class variance conditioned on the class semantic embeddings. FFU and BFU add the IoU variance to the results of CFU, yielding class-specific foreground and background features, respectively. We evaluate our method on three public datasets and the results demonstrate that our method performs favorably against the state-of-the-art ZSD approaches.
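The exact IoUGAN architecture is not reproduced here, but the core idea of the CFU, conditioning a generator on class semantic embeddings plus noise to synthesize visual features for unseen classes, can be sketched as follows. The layer sizes, dimensions, and activations are assumptions, and the adversarial training, FFU/BFU stacking, and IoU conditioning are omitted.

```python
# Hypothetical PyTorch sketch of a CFU-style conditional feature generator.
# Dimensions are illustrative; losses and the FFU/BFU units are omitted.
import torch
import torch.nn as nn

class ClassFeatureGenerator(nn.Module):
    def __init__(self, semantic_dim=300, noise_dim=128, feature_dim=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(semantic_dim + noise_dim, 2048),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Linear(2048, feature_dim),
            nn.ReLU(inplace=True),   # synthesized features mimic post-ReLU visual features
        )

    def forward(self, class_embedding, noise):
        return self.net(torch.cat([class_embedding, noise], dim=1))

# Usage: synthesize 64 features for one unseen class; such features would then be
# used to train the new classification layer of the detector.
generator = ClassFeatureGenerator()
embedding = torch.randn(64, 300)              # the unseen class's semantic embedding, repeated
noise = torch.randn(64, 128)
fake_features = generator(embedding, noise)   # shape: (64, 1024)
```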


Sensors, 2019, Vol 19 (7), pp. 1728
Author(s): Bo Yang, Sheng Zhang, Yan Tian, Bijun Li

Assisted driving and unmanned driving have been areas of focus for both industry and academia. Front-vehicle detection technology, a key component of both types of driving, has also attracted great interest from researchers. In this paper, an efficient, fast, vision-based front-vehicle detection method based on the spatial and temporal characteristics of the front vehicle is proposed for unmanned or assisted driving. First, a method for extracting the motion vectors of the front vehicle is presented based on Oriented FAST and Rotated BRIEF (ORB) and a spatial position constraint. Then, by analyzing the differences between the motion vectors of the vehicle and those of the background, feature points of the vehicle are extracted. Finally, a feature-point clustering method based on a combination of temporal and spatial characteristics is applied to realize front-vehicle detection. The effectiveness of the proposed algorithm is verified on a large number of videos.
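The first step, extracting motion vectors from ORB correspondences between consecutive frames, can be sketched with OpenCV as shown below; the spatial position constraint and the spatio-temporal clustering stage are not reproduced, and the parameter values are assumptions.

```python
# Illustrative sketch: match ORB keypoints between two consecutive grayscale
# frames and compute per-point motion vectors. The paper's spatial constraint
# and spatio-temporal clustering are omitted; parameter values are assumptions.
import cv2
import numpy as np

def orb_motion_vectors(prev_gray, curr_gray, max_matches=200):
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return np.empty((0, 4))
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:max_matches]
    vectors = []
    for m in matches:
        x1, y1 = kp1[m.queryIdx].pt        # keypoint in the previous frame
        x2, y2 = kp2[m.trainIdx].pt        # matched keypoint in the current frame
        vectors.append((x2, y2, x2 - x1, y2 - y1))
    return np.array(vectors)               # rows: (x, y, dx, dy)
```

Points whose motion vectors differ markedly from the dominant background motion are candidates for the vehicle and would be passed to the clustering stage described above.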

