deep feature
Recently Published Documents

2022 ◽  
Vol 18 (2) ◽  
pp. 1-20
Yantao Li ◽  
Peng Tao ◽  
Shaojiang Deng ◽  
Gang Zhou

Smartphones have become crucial in our daily lives, but security and privacy issues are major concerns for smartphone users. In this article, we present DeFFusion, a CNN-based continuous authentication system using Deep Feature Fusion for smartphone users, leveraging the accelerometer and gyroscope ubiquitously built into smartphones. With the collected data, DeFFusion first converts the time-domain data into frequency-domain data using the fast Fourier transform and then inputs both into a designed CNN. With the CNN-extracted features, DeFFusion performs feature selection using factor analysis and exploits balanced feature concatenation to fuse these deep features. Based on a one-class SVM classifier, DeFFusion authenticates the current user as either legitimate or an impostor. We evaluate the authentication performance of DeFFusion in terms of the impact of training data size and time window size, accuracy across different features over different classifiers and across different classifiers with the same CNN-extracted features, accuracy on unseen users, time efficiency, and comparison with representative authentication methods. The experimental results demonstrate that DeFFusion achieves the best accuracy, with a mean equal error rate of 1.00% using a 5-second time window.
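The time-to-frequency conversion step described above can be sketched with numpy's real FFT. This is a minimal illustration of the preprocessing idea only; the function name, sampling rate, and window shape are our assumptions, not taken from the paper.

```python
import numpy as np

def to_frequency_domain(window):
    """Convert a time-domain sensor window to its magnitude spectrum.

    `window` is an (n_samples, n_axes) array of accelerometer or
    gyroscope readings; the real FFT is taken along the time axis,
    mirroring the kind of preprocessing the abstract describes
    (function name and shapes are illustrative).
    """
    return np.abs(np.fft.rfft(window, axis=0))

# a 5-second window at an assumed 100 Hz rate, 3 accelerometer axes
time_window = np.random.randn(500, 3)
freq_window = to_frequency_domain(time_window)
print(freq_window.shape)  # rfft keeps n//2 + 1 frequency bins per axis
```

Both the time-domain window and its frequency-domain counterpart would then be fed to the CNN branches, per the abstract.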

2022 ◽  
Vol 40 (4) ◽  
pp. 1-27
Zhongwei Xie ◽  
Ling Liu ◽  
Yanzhao Wu ◽  
Luo Zhong ◽  
Lin Li

This article introduces a two-phase deep feature engineering framework for efficient learning of semantics-enhanced joint embedding, which clearly separates the deep feature engineering in data preprocessing from training the text-image joint embedding model. We use the Recipe1M dataset for the technical description and empirical validation. In preprocessing, we perform deep feature engineering by combining deep features with semantic context features derived from the raw text-image input data. We leverage LSTM to identify key terms; deep NLP models from the BERT family, TextRank, or TF-IDF to produce ranking scores for key terms; and Word2vec to generate the vector representation for each key term. We leverage Wide ResNet50 and Word2vec to extract and encode the image category semantics of food images, which helps align the learned recipe and image embeddings semantically in the joint latent space. In joint embedding learning, we perform deep feature engineering by optimizing the batch-hard triplet loss function with soft margin and double negative sampling, while also taking into account the category-based alignment loss and the discriminator-based alignment loss. Extensive experiments demonstrate that our SEJE approach with deep feature engineering significantly outperforms the state-of-the-art approaches.
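The batch-hard triplet loss with soft margin mentioned above can be sketched in numpy: for each anchor, take the farthest same-label embedding as the hard positive and the closest different-label embedding as the hard negative, then replace the fixed margin with the softplus log(1 + exp(d_ap − d_an)). This is a generic sketch of that loss family under our own assumptions; it omits SEJE's double negative sampling and alignment losses.

```python
import numpy as np

def batch_hard_soft_margin_loss(embeddings, labels):
    """Batch-hard triplet loss with a soft margin (numpy sketch)."""
    # pairwise Euclidean distances between all embeddings in the batch
    diff = embeddings[:, None, :] - embeddings[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1) + 1e-12)
    same = labels[:, None] == labels[None, :]
    losses = []
    for i in range(len(labels)):
        d_ap = dist[i][same[i]].max()    # hardest (farthest) positive
        d_an = dist[i][~same[i]].min()   # hardest (closest) negative
        losses.append(np.log1p(np.exp(d_ap - d_an)))  # soft margin
    return float(np.mean(losses))

emb = np.array([[0.0, 0.0], [0.1, 0.0], [3.0, 3.0], [3.1, 3.0]])
lbl = np.array([0, 0, 1, 1])
loss = batch_hard_soft_margin_loss(emb, lbl)
print(loss)  # small, since classes are well separated
```

In SEJE the two labels would correspond to matching and non-matching recipe-image pairs in the joint latent space.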

Chetan Gedam

Cancer is a heterogeneous disorder comprising various types and sub-types. Early detection, screening, and diagnosis of cancer types are necessary for facilitating cancer research in early diagnosis, management, and the development of successful therapies. Existing methodologies can only classify and diagnose a single variety of cancer from a homogeneous dataset, and are more focused on predicting patient survivability than cure. This research defines a machine learning-based methodology to develop a universal approach to the diagnosis, detection, symptom-based prediction, and screening of histopathology cancers, their types, and sub-types using a heterogeneous dataset of images and scans. In this architecture, we use a VGG-19-based 3D convolutional neural network for deep feature extraction and then perform regression using a random forest algorithm. We create a heterogeneous dataset consisting of results from laboratory tests, imaging tests, and biopsy reports, rather than relying on clinical images alone. Initially, we categorize tumors and lesions as benign or malignant, and then classify the malignant lesions into their sub-types, detecting their severity and growth rate. Our system is designed to predict risk at multiple time-points, leverage optional risk factors when they are available, and produce predictions that are consistent across mammography machines. We found the classification accuracy for categorizing tumors as cancerous to be 95%, whereas the accuracy for classifying malignant lesions into their sub-types was 94%.
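The two-stage structure described above (deep feature extraction followed by a random forest) can be sketched as below. The VGG-19 3D-CNN is replaced with a stub (flatten plus random projection) so the sketch runs without a GPU, and all data here is synthetic; everything except the overall two-stage shape is our assumption.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def extract_deep_features(scans):
    """Stand-in for a VGG-19-based 3D-CNN feature extractor.

    A real pipeline would take an intermediate layer's activations for
    each volumetric scan; here we flatten and randomly project, just to
    make the two-stage structure runnable.
    """
    flat = scans.reshape(len(scans), -1)
    proj = rng.standard_normal((flat.shape[1], 64))
    return flat @ proj

# synthetic "scans": malignant volumes get a higher mean intensity
benign = rng.normal(0.0, 1.0, (40, 8, 8, 8))
malignant = rng.normal(2.0, 1.0, (40, 8, 8, 8))
scans = np.concatenate([benign, malignant])
labels = np.array([0] * 40 + [1] * 40)  # 0 = benign, 1 = malignant

features = extract_deep_features(scans)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(features[::2], labels[::2])            # train on even indices
acc = clf.score(features[1::2], labels[1::2])  # evaluate on odd indices
print(acc)
```

The same second stage could be swapped for random forest regression when predicting a continuous target such as growth rate, as the abstract describes.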

D. Minola Davids ◽  
C. Seldev Christopher

The visual data obtained from single-camera or multi-view surveillance camera networks is growing exponentially every day. Identifying the important shots in a given video that faithfully represent the original video is the major task in video summarization. To perform efficient video summarization of surveillance systems, an optimization algorithm, LFOB-COA, is proposed in this paper. The proposed method comprises five steps: data collection, pre-processing, deep feature extraction (FE), shot segmentation using JSFCM, and classification using a Rectified Linear Unit-activated BLSTM with LFOB-COA. Finally, a post-processing step is applied. To demonstrate the proposed method's effectiveness, the results are compared with existing methods.
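To illustrate where the shot-segmentation step fits in such a pipeline, here is a generic histogram-difference heuristic in numpy. This is not the paper's JSFCM clustering; the function, threshold, and synthetic frames are all our own stand-ins.

```python
import numpy as np

def shot_boundaries(frames, threshold=0.5):
    """Flag likely shot boundaries by inter-frame histogram distance.

    `frames` is an (n, h, w) grayscale array with values in [0, 1].
    A large L1 distance between consecutive normalized intensity
    histograms marks the start of a new shot (simple stand-in for the
    JSFCM segmentation step described in the abstract).
    """
    hists = np.stack([np.histogram(f, bins=16, range=(0, 1))[0]
                      for f in frames]).astype(float)
    hists /= hists.sum(axis=1, keepdims=True)
    d = np.abs(np.diff(hists, axis=0)).sum(axis=1)
    return np.flatnonzero(d > threshold) + 1  # indices starting new shots

rng = np.random.default_rng(1)
dark = rng.uniform(0.0, 0.3, (10, 32, 32))    # one "shot"
bright = rng.uniform(0.7, 1.0, (10, 32, 32))  # a visually distinct shot
boundaries = shot_boundaries(np.concatenate([dark, bright]))
print(boundaries)  # a single boundary where the scene changes
```

Deep features from each detected shot would then be classified by the BLSTM to decide which shots enter the summary.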

2022 ◽  
Vol 70 (2) ◽  
pp. 2261-2276
Farrukh Zia ◽  
Isma Irum ◽  
Nadia Nawaz Qadri ◽  
Yunyoung Nam ◽  
Kiran Khurshid
