A survey on compact features for visual content analysis

Author(s): Luca Baroffio, Alessandro E. C. Redondi, Marco Tagliasacchi, Stefano Tubaro

Visual features constitute compact yet effective representations of visual content, and are being exploited in a large number of heterogeneous applications, including augmented reality, image registration, content-based retrieval, and classification. Several visual content analysis applications are distributed over a network and require the transmission of visual data, either in the pixel or in the feature domain, to a central unit that performs the task at hand. Furthermore, large-scale applications need to store a database composed of up to billions of features and perform matching with low latency. In this context, several different implementations of feature extraction algorithms have been proposed over the last few years, with the aim of reducing computational complexity and memory footprint, while maintaining an adequate level of accuracy. Besides extraction, a large body of research addressed the problem of ad-hoc feature encoding methods, and a number of networking and transmission protocols enabling distributed visual content analysis have been proposed. In this survey, we present an overview of state-of-the-art methods for the extraction, encoding, and transmission of compact features for visual content analysis, thoroughly addressing each step of the pipeline and highlighting the peculiarities of the proposed methods.
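As a concrete illustration of the kind of compact local features the survey covers, the snippet below is a minimal sketch (not taken from the survey itself) of extracting and matching binary ORB descriptors with OpenCV; the image file names and parameter choices are placeholders.

```python
# Minimal sketch: compact binary local features (ORB) with OpenCV.
# "query.jpg" and "reference.jpg" are placeholder file names.
import cv2

img1 = cv2.imread("query.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("reference.jpg", cv2.IMREAD_GRAYSCALE)

# Detect keypoints and compute 256-bit binary descriptors (32 bytes each).
orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Binary descriptors are compared with the Hamming distance, which keeps
# matching cheap in low-complexity, bandwidth-constrained pipelines.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} matches; best Hamming distance {matches[0].distance:.0f}")
```

Descriptors of this kind, at a few tens of bytes per keypoint, are exactly what the feature encoding and transmission methods reviewed in the survey aim to compress further before sending them to a central analysis unit.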

Author(s): Xiao Liang, Fuyi Li, Jinxiang Chen, Junlong Li, Hao Wu, ...

Abstract: Anti-cancer peptides (ACPs) are known as potential therapeutics for cancer. Owing to their unique ability to target cancer cells without directly affecting healthy cells, they have been studied extensively, and many peptide-based drugs are currently being evaluated in preclinical and clinical trials. Accurate identification of ACPs has received considerable attention in recent years; as such, a number of machine learning-based methods for in silico identification of ACPs have been developed. To some extent, these methods have advanced research on the mechanisms by which ACPs act against cancer. However, they differ widely in their training/testing datasets, machine learning algorithms, feature encoding schemes, feature selection methods and evaluation strategies. It is therefore desirable to summarize the advantages and disadvantages of the existing methods and to provide useful insights and suggestions for the development and improvement of novel computational tools to characterize and identify ACPs. With this in mind, we first comprehensively investigate 16 state-of-the-art ACP predictors in terms of their core algorithms, feature encoding schemes, performance evaluation metrics and webserver/software usability. Then, a comprehensive performance assessment is conducted to evaluate the robustness and scalability of the existing predictors using a well-prepared benchmark dataset, and we suggest potential strategies for improving model performance. Moreover, we propose a novel ensemble learning framework, termed ACPredStackL, for the accurate identification of ACPs. ACPredStackL is based on a stacking ensemble strategy that combines SVM, Naïve Bayes, LightGBM and KNN classifiers. Empirical benchmarking against the state-of-the-art methods demonstrates that ACPredStackL achieves competitive performance for predicting ACPs. The webserver and source code of ACPredStackL are freely available at http://bigdata.biocie.cn/ACPredStackL/ and https://github.com/liangxiaoq/ACPredStackL, respectively.
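For orientation, the following is a minimal sketch of a stacking ensemble in the spirit of ACPredStackL, built with scikit-learn and LightGBM on synthetic data; the feature matrix, labels and the choice of logistic regression as the meta-learner are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch of a stacking ensemble (SVM, Naive Bayes, LightGBM, KNN base learners).
# X and y are random placeholders for encoded peptide features and ACP labels.
import numpy as np
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from lightgbm import LGBMClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))        # placeholder peptide feature encodings
y = rng.integers(0, 2, size=200)      # placeholder ACP / non-ACP labels

stack = StackingClassifier(
    estimators=[
        ("svm", SVC(probability=True)),
        ("nb", GaussianNB()),
        ("lgbm", LGBMClassifier(n_estimators=100)),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),  # assumed meta-learner
    cv=5,
)
print("mean CV accuracy:", cross_val_score(stack, X, y, cv=5).mean())
```

In a stacking ensemble the out-of-fold predictions of the base learners become the input features of the meta-learner, which is what allows heterogeneous classifiers such as an SVM and LightGBM to be combined into a single predictor.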


2021, Vol 2021, pp. 1-14
Author(s): Ping Yu, Wei Ni, Guangsheng Yu, Hua Zhang, Ren Ping Liu, ...

Vehicular ad hoc networks (VANETs) face a critical challenge: efficiently and securely authenticating massive volumes of on-road data while preserving the anonymity and traceability of vehicles. This paper designs a new anonymous authentication approach based on an attribute-based signature. Each vehicle is defined by a set of attributes, and each message is signed with multiple attributes, ensuring the anonymity of vehicles. First, a batch verification algorithm is developed to accelerate the verification of massive volumes of messages in large-scale VANETs. Second, duplicate messages captured by different vehicles and signed under different sets of attributes can be deduplicated while preserving the traceability of all signers. Third, malicious vehicles forging data can be traced from their signatures and revoked from their attribute groups. The security of the proposed approach is analyzed by proving the anonymity of vehicles and the unforgeability of signatures. Its efficiency is numerically verified and compared with the state of the art.


2020, Vol 27
Author(s): Zaheer Ullah Khan, Dechang Pi

Background: S-sulfenylation (S-sulphenylation, or sulfenic acid formation) is a type of post-translational modification of proteins that plays an important role in various physiological and pathological processes such as cytokine signaling, transcriptional regulation, and apoptosis. Given this significance, and to complement existing wet-lab methods, several computational models have been developed for the prediction of sulfenylation cysteine (SC) sites. However, the performance of these models has not been satisfactory, owing to inefficient feature schemes, severe class imbalance, and the lack of an effective learning engine. Objective: In this study, our aim is to establish a strong, novel computational predictor for discriminating sulfenylation from non-sulfenylation sites. Methods: We report an innovative predictor, named DeepSSPred, in which features are encoded via an n-segmented hybrid feature scheme, and the synthetic minority oversampling technique (SMOTE) is employed to cope with the severe imbalance between SC sites (minority class) and non-SC sites (majority class). A state-of-the-art 2D convolutional neural network (2D-CNN) is then trained and assessed under a rigorous 10-fold jackknife cross-validation protocol. Results: The proposed framework, combining a strong discriminative representation of the feature space, an effective learning engine, and an unbiased presentation of the underlying training data, yields an excellent model that outperforms all existing studies. Its MCC is 6% higher than that of the previous best method, which did not report sufficient details on an independent dataset. Compared with the second-best method, the model achieves gains of 7.5% in accuracy, 1.22% in Sn, 12.91% in Sp and 13.12% in MCC on the training data, and 12.13% in ACC, 27.25% in Sn, 2.25% in Sp and 30.37% in MCC on an independent dataset. These empirical analyses demonstrate the superior performance of the proposed model on both the training and independent datasets compared with existing studies. Conclusion: We have developed a novel sequence-based automated predictor for SC sites, called DeepSSPred. Empirical results on the training and independent validation datasets reveal the efficacy of the proposed model. The good performance of DeepSSPred stems from several factors, including its novel discriminative feature encoding scheme, the SMOTE technique, and the careful construction of the prediction model through a tuned 2D-CNN classifier. We believe this work provides insight into the further prediction of S-sulfenylation characteristics and functions, and we hope the developed predictor will be helpful for large-scale discrimination of unknown SC sites in particular and the design of new pharmaceutical drugs in general.
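To make the two ingredients that the abstract emphasizes more concrete, the sketch below combines SMOTE oversampling with a small 2D CNN on synthetic data; the window size, channel layout and network depth are illustrative assumptions and do not reproduce the DeepSSPred architecture.

```python
# Sketch: SMOTE resampling of the minority (SC-site) class, then a small 2D CNN.
# The 31x20 "image" per residue window is an assumed layout, not DeepSSPred's.
import numpy as np
from imblearn.over_sampling import SMOTE
from tensorflow import keras

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 31 * 20))        # placeholder encoded residue windows
y = np.r_[np.ones(30), np.zeros(270)]      # heavily imbalanced SC / non-SC labels

# Oversample the minority class in the flattened feature space.
X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
X_res = X_res.reshape(-1, 31, 20, 1)       # reshape each sample into a 2D map

model = keras.Sequential([
    keras.Input(shape=(31, 20, 1)),
    keras.layers.Conv2D(16, (3, 3), activation="relu"),
    keras.layers.MaxPooling2D((2, 2)),
    keras.layers.Flatten(),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_res, y_res, epochs=3, batch_size=32, verbose=0)
```

Applying SMOTE before reshaping keeps the synthetic samples in the same feature space the encoder produced; in a real setting the resampling would be applied only to the training folds of the cross-validation to avoid leakage.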


2018, Vol 14 (12), pp. 1915-1960
Author(s): Rudolf Brázdil, Andrea Kiss, Jürg Luterbacher, David J. Nash, Ladislava Řezníčková

Abstract. The use of documentary evidence to investigate past climatic trends and events has become a recognised approach in recent decades. This contribution presents the state of the art in its application to droughts. The range of documentary evidence is very wide, including general annals, chronicles, memoirs and diaries kept by missionaries, travellers and those specifically interested in the weather; records kept by administrators tasked with keeping accounts and other financial and economic records; legal-administrative evidence; religious sources; letters; songs; newspapers and journals; pictographic evidence; chronograms; epigraphic evidence; early instrumental observations; society commentaries; and compilations and books. These are available from many parts of the world. This variety of documentary information is evaluated with respect to the reconstruction of hydroclimatic conditions (precipitation, drought frequency and drought indices). Documentary-based drought reconstructions are then addressed in terms of long-term spatio-temporal fluctuations, major drought events, relationships with external forcing and large-scale climate drivers, socio-economic impacts and human responses. Documentary-based drought series are also considered from the viewpoint of spatio-temporal variability for certain continents, and their employment together with hydroclimate reconstructions from other proxies (in particular tree rings) is discussed. Finally, conclusions are drawn, and challenges for the future use of documentary evidence in the study of droughts are presented.


2020, Vol 4, pp. 239784732097975
Author(s): Stéphanie Boué, Didier Goedertier, Julia Hoeng, Anita Iskandar, Arkadiusz K Kuczaj, ...

E-vapor products (EVP) have become popular alternatives for cigarette smokers who would otherwise continue to smoke. EVP research is challenging and complex, mostly because of the numerous and rapidly evolving technologies and designs as well as the multiplicity of e-liquid flavors and solvents available on the market. There is an urgent need to standardize all stages of EVP assessment, from the production of a reference product to e-vapor generation methods and from physicochemical characterization methods to nonclinical and clinical exposure studies. The objective of this review is to provide a detailed description of selected experimental setups and methods for EVP aerosol generation and collection and exposure systems for their in vitro and in vivo assessment. The focus is on the specificities of the product that constitute challenges and require development of ad hoc assessment frameworks, equipment, and methods. In so doing, this review aims to support further studies, objective evaluation, comparison, and verification of existing evidence, and, ultimately, formulation of standardized methods for testing EVPs.


2021, pp. 001083672198936
Author(s): Lene Hansen, Rebecca Adler-Nissen, Katrine Emilie Andersen

The European refugee crisis has been communicated visually through images such as those of Alan Kurdi lying dead on the beach, of body bags on the harbor front of Lampedusa, of people walking through Europe, and of border guards and fences. This article examines the broader visual environment within which EU policy-making took place from October 2013 to October 2015. It identifies ‘tragedy’ as the key term used by the EU to explain its actions and decisions and points out that discourses of humanitarianism and border control were both in place. The article provides a theoretical account of how humanitarianism and border control might be visualized by news photography. Adopting a multi-method design and analyzing a dataset of more than 1000 photos, the article presents a visual discourse analysis of five generic iconic motifs and a quantitative visual content analysis of shifts and continuity across four moments in time. It connects these visual analyses to the policies and discourses of the EU, arguing that the ambiguity of the EU’s discourse was mirrored by the wider visual environment.


2021, Vol 7 (3), pp. 50
Author(s): Anselmo Ferreira, Ehsan Nowroozi, Mauro Barni

The possibility of carrying out meaningful forensic analysis on printed and scanned images plays a major role in many applications. First of all, printed documents are often associated with criminal activities, such as terrorist plans, child pornography, and even fake packages. Additionally, printing and scanning can be used to hide the traces of image manipulation or the synthetic nature of images, since the artifacts commonly found in manipulated and synthetic images are gone after the images are printed and scanned. A problem hindering research in this area is the lack of large-scale reference datasets for algorithm development and benchmarking. Motivated by this issue, we present a new dataset composed of a large number of synthetic and natural printed face images. To highlight the difficulties associated with analyzing the images in the dataset, we carried out an extensive set of experiments comparing several printer attribution methods. We also verified that state-of-the-art methods for distinguishing natural from synthetic face images fail when applied to printed and scanned images. We envision that the availability of the new dataset and the preliminary experiments we carried out will motivate and facilitate further research in this area.

