Video Based Person Re-Identification Through Selective Knowledge Distillation

Author(s):  
Gudavalli Sai Abhilash ◽  
Kantheti Rajesh ◽  
Jangam Dileep Shaleem ◽  
Grandi Sai Sarath ◽  
Palli R Krishna Prasad

The deployment of face recognition models often requires identifying low-resolution faces at extremely low computational cost. A feasible solution to this problem is to compress a complex face model, achieving higher speed and a lower memory footprint at the cost of a minimal performance drop. Inspired by this, this paper proposes a learning approach that recognizes low-resolution faces in live video via selective knowledge distillation. In this approach, a two-stream convolutional neural network (CNN) is first initialized to recognize high-resolution faces and resolution-degraded faces with a teacher stream and a student stream, respectively. The teacher stream is a complex CNN for high-accuracy recognition, and the student stream is a much simpler CNN for low-complexity recognition. To avoid a significant performance drop in the student stream, we then selectively distil the most informative facial features from the teacher stream by solving a sparse graph optimization problem; these features are then used to regularize the fine-tuning of the student stream. In this way, the student stream is trained to handle two tasks simultaneously with limited computational resources: approximating the most informative facial cues via feature regression, and recovering the missing facial cues via low-resolution face classification.
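To make the teacher/student arrangement concrete, here is a minimal PyTorch sketch of a two-stream setup in which the student is trained with a classification loss plus a feature-regression term towards selected teacher features. The network sizes, the loss weight, and the boolean `selected` mask (standing in for the sparse-graph selection step) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of teacher/student feature distillation (illustrative only;
# network sizes, loss weights, and the feature-selection step are placeholders).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TeacherStream(nn.Module):          # complex CNN, high-resolution input
    def __init__(self, feat_dim=512):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, feat_dim))

    def forward(self, x):
        return self.backbone(x)

class StudentStream(nn.Module):          # much simpler CNN, low-resolution input
    def __init__(self, feat_dim=512, n_classes=1000):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat_dim))
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, x):
        f = self.backbone(x)
        return f, self.classifier(f)

def distillation_loss(student_feat, teacher_feat, logits, labels, selected, alpha=0.5):
    """Low-resolution classification loss plus feature regression towards the
    teacher features marked as most informative ('selected' is a boolean mask
    standing in for the sparse-graph selection step)."""
    cls = F.cross_entropy(logits, labels)
    reg = F.mse_loss(student_feat[selected], teacher_feat[selected].detach())
    return cls + alpha * reg
```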

2019 ◽  
Author(s):  
Mohsen Sadeghi ◽  
Frank Noé

Biomembranes are two-dimensional assemblies of phospholipids that are only a few nanometres thick but form micrometre-sized structures vital to cellular function. Explicit modelling of biologically relevant membrane systems is computationally expensive, especially when the large number of solvent particles and slow membrane kinetics are taken into account. While highly coarse-grained solvent-free models are available to study the equilibrium behaviour of membranes, their efficiency comes at the cost of sacrificing realistic kinetics, and thereby the ability to predict pathways and mechanisms of membrane processes. Here, we present a framework for integrating coarse-grained membrane models with anisotropic stochastic dynamics and continuum-based hydrodynamics, allowing us to simulate large biomembrane systems with realistic kinetics at low computational cost. This paves the way for whole-cell simulations that still retain nanometre/nanosecond spatiotemporal resolution. As a demonstration, we obtain and verify the fluctuation spectrum of a full-sized human red blood cell from a single 150-millisecond trajectory. We show how the kinetic effects of different cytoplasmic viscosities can be studied with such a simulation, with predictions that agree with single-cell experimental observations.
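As an illustration of the kind of observable mentioned above, the following numpy sketch computes a radially averaged height-fluctuation spectrum from a membrane height field on a periodic grid; the grid size, box length, and the synthetic test field are placeholders, not the paper's model or data.

```python
# Illustrative sketch: radially averaged height-fluctuation spectrum <|h_q|^2>
# from a membrane height field h(x, y) sampled on a periodic grid.
import numpy as np

def fluctuation_spectrum(h, box_length):
    """Return binned wavenumbers q and the power spectrum of the height field."""
    n = h.shape[0]
    h_q = np.fft.fft2(h) / n**2                     # discrete Fourier modes
    power = np.abs(h_q)**2
    q = 2.0 * np.pi * np.fft.fftfreq(n, d=box_length / n)
    qx, qy = np.meshgrid(q, q, indexing="ij")
    q_mag = np.sqrt(qx**2 + qy**2).ravel()
    p = power.ravel()
    bins = np.linspace(q_mag[q_mag > 0].min(), q_mag.max(), 40)
    idx = np.digitize(q_mag, bins)
    q_avg = np.array([q_mag[idx == i].mean() for i in range(1, len(bins)) if np.any(idx == i)])
    p_avg = np.array([p[idx == i].mean() for i in range(1, len(bins)) if np.any(idx == i)])
    return q_avg, p_avg

# Example with a synthetic height field (stand-in for simulation output)
rng = np.random.default_rng(0)
h = rng.normal(scale=0.5, size=(128, 128))
q, s = fluctuation_spectrum(h, box_length=1000.0)   # length in nm, illustrative
```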


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Moritz Ebeling-Rump ◽  
Dietmar Hömberg ◽  
Robert Lasarzik ◽  
Thomas Petzold

Abstract In topology optimization the goal is to find the ideal material distribution in a domain subject to external forces. The structure is optimal if it has the highest possible stiffness. A volume constraint ensures filigree structures, which are regulated via a Ginzburg–Landau term. During 3D printing, overhangs lead to instabilities. As a remedy, an additive manufacturing constraint is added to the cost functional. First-order optimality conditions are derived using a formal Lagrangian approach. The optimization problem is solved iteratively with an Allen-Cahn interface propagation. At low computational cost, the additive manufacturing constraint brings about support structures, which can be fine-tuned according to demands and increase stability during the printing process.
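For orientation, a minimal explicit Allen-Cahn update for a phase field on a periodic grid is sketched below; the time step, interface parameter, and the placeholder sensitivity term are illustrative and do not reproduce the paper's scheme or its additive manufacturing constraint.

```python
# Minimal explicit Allen-Cahn gradient-flow step for a phase field phi
# (illustrative; not the paper's discretization or sensitivities).
import numpy as np

def laplacian(phi, dx):
    return (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
            np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4.0 * phi) / dx**2

def allen_cahn_step(phi, sensitivity, dt=1e-4, dx=1.0, eps=0.02):
    """One step: Ginzburg-Landau interface energy plus an external driving
    term (here a placeholder compliance sensitivity)."""
    # derivative of the double-well W = phi^2 (1 - phi)^2, up to a constant factor
    double_well = phi * (1.0 - phi) * (1.0 - 2.0 * phi)
    dphi = eps * laplacian(phi, dx) - double_well / eps - sensitivity
    return np.clip(phi + dt * dphi, 0.0, 1.0)
```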


2020 ◽  
Vol 20 (5) ◽  
pp. 799-814
Author(s):  
RICHARD TAUPE ◽  
ANTONIUS WEINZIERL ◽  
GERHARD FRIEDRICH

Abstract Generalising and re-using knowledge learned while solving one problem instance has been neglected by state-of-the-art answer set solvers. We suggest a new approach that generalises learned nogoods for re-use, to speed up the solving of future problem instances. Our solution combines well-known ASP solving techniques with deductive logic-based machine learning. Solving performance can be improved by adding learned non-ground constraints to the original program. We demonstrate the effects of our method on realistic examples, showing that our approach requires low computational cost to learn constraints that yield significant performance benefits in our test cases. These benefits can be seen with ground-and-solve systems as well as lazy-grounding systems; however, in some cases ground-and-solve systems suffer from additional grounding overhead induced by the added constraints. By means of conflict minimization, non-minimal learned constraints can be reduced, which can result in significant reductions of grounding and solving effort, as our experiments show.
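As a toy illustration of adding a learned non-ground constraint to a program before grounding, the sketch below uses the clingo Python API (assumed available); both the base program and the constraint are invented placeholders, not nogoods produced by the paper's learning method.

```python
# Toy sketch: a learned non-ground constraint is appended to the original
# program before grounding and solving (clingo Python API assumed installed).
import clingo

base_program = """
{ assign(T, M) : machine(M) } = 1 :- task(T).
task(1..3). machine(a;b).
"""

# A hypothetical generalized constraint learned from earlier instances:
# "never assign two different tasks to machine a".
learned_constraint = ":- assign(T1, a), assign(T2, a), T1 < T2."

ctl = clingo.Control(["0"])                     # enumerate all answer sets
ctl.add("base", [], base_program + learned_constraint)
ctl.ground([("base", [])])
ctl.solve(on_model=lambda m: print("Answer set:", m))
```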


2021 ◽  
Author(s):  
Adyn Miles ◽  
Mahdi S. Hosseini ◽  
Sheyang Tang ◽  
Zhou Wang ◽  
Savvas Damaskinos ◽  
...  

Abstract Out-of-focus sections of whole slide images are a significant source of false positives and other systematic errors in clinical diagnoses. As a result, focus quality assessment (FQA) methods must be able to quickly and accurately differentiate between focus levels in a scan. Recently, deep learning methods using convolutional neural networks (CNNs) have been adopted for FQA. However, the biggest obstacles impeding their wide use in clinical workflows are their generalizability across different test conditions and their potentially high computational cost. In this study, we focus on the transferability and scalability of CNN-based FQA approaches. We carry out an investigation of ten architecturally diverse networks using five datasets with stain and tissue diversity. We evaluate the computational complexity of each network and scale this to realistic applications involving hundreds of whole slide images. We assess how well each full model transfers to a separate, unseen dataset without fine-tuning. We show that shallower networks transfer well when used on small input patch sizes, while deeper networks work more effectively on larger inputs. Furthermore, we introduce neural architecture search (NAS) to the field and, using differentiable architecture search, learn an automatically designed low-complexity CNN architecture that achieves competitive performance relative to established CNNs.
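The scalability question can be approximated with a rough timing sketch: measure a CNN's throughput on patches of two sizes and extrapolate to a slide. The backbone, patch sizes, and patch counts below are illustrative assumptions, not the study's benchmark protocol.

```python
# Back-of-the-envelope scalability sketch: time a CNN on focus-assessment
# patches and extrapolate to a whole slide (illustrative stand-ins only).
import time
import torch
import torchvision.models as models

model = models.resnet18(weights=None).eval()     # stand-in FQA backbone

def patches_per_second(patch_size, batch=32, iters=10):
    x = torch.randn(batch, 3, patch_size, patch_size)
    with torch.no_grad():
        model(x)                                 # warm-up
        t0 = time.perf_counter()
        for _ in range(iters):
            model(x)
    return batch * iters / (time.perf_counter() - t0)

for size, patches_per_slide in [(64, 40000), (256, 2500)]:
    rate = patches_per_second(size)
    print(f"{size}px patches: ~{patches_per_slide / rate:.1f} s per slide (CPU, illustrative)")
```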


2016 ◽  
Vol 5 (1) ◽  
Author(s):  
Paulo Somaini ◽  
Frank A. Wolak

Abstract We present an algorithm to estimate the two-way fixed effect linear model. The algorithm relies on the Frisch-Waugh-Lovell theorem and applies to ordinary least squares (OLS), two-stage least squares (TSLS), and generalized method of moments (GMM) estimators. The coefficients of interest are computed using the residuals from the projection of all variables on the two sets of fixed effects. Our algorithm has three desirable features. First, it manages memory and computational resources efficiently, which speeds up the computation of the estimates. Second, it allows the researcher to estimate multiple specifications using the same set of fixed effects at very low computational cost. Third, the asymptotic variance of the parameters of interest can be consistently estimated using standard routines on the residualized data.
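A minimal numpy sketch of the underlying Frisch-Waugh-Lovell idea is given below: the outcome and the regressor are residualized on the two sets of fixed effects by alternating demeaning, and OLS is then run on the residualized data. The synthetic data, iteration cap, and tolerance are illustrative; this is not the authors' algorithm or code.

```python
# Two-way fixed effects via residualization (Frisch-Waugh-Lovell), sketch only.
import numpy as np

def residualize(v, ids1, ids2, tol=1e-10, max_iter=500):
    """Remove group means along two fixed-effect dimensions by alternating projections."""
    r = v.astype(float).copy()
    for _ in range(max_iter):
        r_old = r.copy()
        for ids in (ids1, ids2):
            means = np.bincount(ids, weights=r) / np.bincount(ids)
            r -= means[ids]
        if np.max(np.abs(r - r_old)) < tol:
            break
    return r

# Synthetic panel: firm and year fixed effects plus one regressor of interest.
rng = np.random.default_rng(0)
n, n_firms, n_years = 5000, 200, 20
firm = rng.integers(0, n_firms, n)
year = rng.integers(0, n_years, n)
x = rng.normal(size=n) + 0.3 * firm / n_firms
y = 2.0 * x + 0.5 * firm / n_firms - 0.2 * year / n_years + rng.normal(size=n)

x_r = residualize(x, firm, year)
y_r = residualize(y, firm, year)
beta = (x_r @ y_r) / (x_r @ x_r)                 # OLS on residualized data
print(f"estimated coefficient: {beta:.3f}")      # should be close to 2.0
```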


2019 ◽  
Vol 11 (24) ◽  
pp. 2908 ◽  
Author(s):  
Yakoub Bazi ◽  
Mohamad M. Al Rahhal ◽  
Haikel Alhichri ◽  
Naif Alajlan

The current literature on remote sensing (RS) scene classification shows that state-of-the-art results are achieved with feature extraction methods, in which convolutional neural networks (CNNs) (mostly VGG16, with 138.36 M parameters) are used as feature extractors and simple to complex handcrafted modules are then added for additional feature learning and classification, thus returning to feature engineering. In this paper, we revisit the fine-tuning approach for deeper networks (GoogLeNet and beyond) and show that it has not been well exploited because of the negative effect of the vanishing gradient problem encountered when transferring knowledge to small datasets. The aim of this work is two-fold. First, we provide best practices for fine-tuning pre-trained CNNs using the root-mean-square propagation (RMSprop) method. Second, we propose a simple yet effective solution for tackling the vanishing gradient problem by injecting gradients at an earlier layer of the network through an auxiliary classification loss function; we then fine-tune the resulting regularized network by optimizing both the primary and auxiliary losses. As pre-trained CNNs, we consider Inception-based networks and EfficientNets with small weights, GoogLeNet (7 M) and EfficientNet-B0 (5.3 M), and their deeper versions Inception-v3 (23.83 M) and EfficientNet-B3 (12 M), respectively. The former networks have previously been used in the context of RS and yielded low accuracies compared to VGG16, while the latter are new state-of-the-art models. Extensive experimental results on several benchmark datasets clearly show that, if fine-tuning is done appropriately, it can set new state-of-the-art results at low computational cost.
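The auxiliary-loss idea can be sketched in PyTorch as follows: a small classifier is attached to an earlier stage of a pretrained backbone, and the primary and auxiliary cross-entropy losses are optimized together with RMSprop. The backbone layout, tap point, channel count, and loss weight are assumptions for illustration, not the paper's exact configuration.

```python
# Sketch: auxiliary classification loss injected at an earlier layer of a
# pretrained backbone (illustrative tap point and loss weight).
import torch
import torch.nn as nn
import torchvision.models as models

backbone = models.efficientnet_b0(weights="IMAGENET1K_V1")
n_classes = 21                                    # e.g. an RS scene dataset (assumed)

# Primary head on the final features, auxiliary head on an earlier stage.
backbone.classifier[1] = nn.Linear(backbone.classifier[1].in_features, n_classes)
aux_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                         nn.Linear(40, n_classes))   # 40 channels after stage 3 of B0

backbone.train()
criterion = nn.CrossEntropyLoss()
params = list(backbone.parameters()) + list(aux_head.parameters())
optimizer = torch.optim.RMSprop(params, lr=1e-4)

def training_step(images, labels, aux_weight=0.3):
    feats = images
    aux_logits = None
    for i, stage in enumerate(backbone.features):
        feats = stage(feats)
        if i == 3:                                # inject gradients at an earlier layer
            aux_logits = aux_head(feats)
    pooled = backbone.avgpool(feats).flatten(1)
    logits = backbone.classifier(pooled)
    loss = criterion(logits, labels) + aux_weight * criterion(aux_logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```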


Author(s):  
Janet Pomares Betancourt ◽  
◽  
Chastine Fatichah ◽  
Martin Leonard Tangel ◽  
Fei Yan ◽  
...  

A method for classifying ECG and capnogram signals is proposed based on fuzzy similarity evaluation, combining a shape exchange algorithm with fuzzy inference. It is aimed at quasi-periodic biomedical signals and has low computational cost. In experiments on atrial fibrillation (AF) classification using two databases, MIT-BIH AF and MIT-BIH Normal Sinus Rhythm, values of 100%, 94.4%, and 97.6% for sensitivity, specificity, and accuracy, respectively, and an execution time of 0.6 s are obtained. The proposal can be extended to classify other conditions from ECG and capnogram signals, such as Brugada syndrome, AV block, hypoventilation, and asthma, and can be implemented on devices with low computational resources.
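A generic sketch of similarity-based classification of quasi-periodic segments is shown below: a test segment is aligned to class templates, the similarity is mapped to a fuzzy membership, and the best-matching class is returned. The resampling alignment and the piecewise-linear membership are simple placeholders, not the paper's shape exchange algorithm or its fuzzy inference system.

```python
# Generic similarity-plus-fuzzy-membership classification sketch (placeholders
# for the paper's shape exchange algorithm and fuzzy inference system).
import numpy as np

def resample(signal, length):
    x_old = np.linspace(0.0, 1.0, len(signal))
    x_new = np.linspace(0.0, 1.0, length)
    return np.interp(x_new, x_old, signal)

def fuzzy_membership(similarity, low=0.6, high=0.95):
    """Piecewise-linear membership: 0 below `low`, 1 above `high`."""
    return float(np.clip((similarity - low) / (high - low), 0.0, 1.0))

def classify(segment, templates):
    """templates: dict mapping class name -> 1-D template array."""
    scores = {}
    for name, tpl in templates.items():
        s = resample(segment, len(tpl))
        sim = np.corrcoef(s - s.mean(), tpl - tpl.mean())[0, 1]
        scores[name] = fuzzy_membership(sim)
    return max(scores, key=scores.get), scores
```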


GigaScience ◽  
2021 ◽  
Vol 10 (2) ◽  
Author(s):  
Fan Zhang ◽  
Hyun Min Kang

Abstract Background Rapid and thorough quality assessment of sequenced genomes on an ultra-high-throughput scale is crucial for successful large-scale genomic studies. Comprehensive quality assessment typically requires full genome alignment, which costs a substantial amount of computational resources and turnaround time. Existing tools are either computationally expensive owing to full alignment or lacking essential quality metrics by skipping read alignment. Findings We developed a set of rapid and accurate methods to produce comprehensive quality metrics directly from a subset of raw sequence reads (from whole-genome or whole-exome sequencing) without full alignment. Our methods offer orders of magnitude faster turnaround time than existing full alignment–based methods while providing comprehensive and sophisticated quality metrics, including estimates of genetic ancestry and cross-sample contamination. Conclusions By rapidly and comprehensively performing the quality assessment, our tool will help investigators detect potential issues in ultra-high-throughput sequence reads in real time, at low computational cost, in the early stages of the analyses, ensuring high-quality downstream results and preventing unexpected loss in time, money, and invaluable specimens.
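To illustrate the flavour of alignment-free quality assessment, the sketch below streams the first records of a FASTQ file and reports base composition and mean base quality; these are simple summary statistics, not the tool's ancestry or contamination estimators.

```python
# Alignment-free QC sketch on a subset of raw reads (simple summary metrics only).
import gzip
from collections import Counter

def quick_fastq_qc(path, max_reads=100_000, phred_offset=33):
    opener = gzip.open if path.endswith(".gz") else open
    base_counts, qual_sum, qual_n, n_reads = Counter(), 0, 0, 0
    with opener(path, "rt") as fh:
        while n_reads < max_reads:
            header = fh.readline()
            if not header:
                break
            seq = fh.readline().strip()
            fh.readline()                       # '+' separator line
            qual = fh.readline().strip()
            base_counts.update(seq)
            qual_sum += sum(ord(c) - phred_offset for c in qual)
            qual_n += len(qual)
            n_reads += 1
    total = sum(base_counts.values())
    return {
        "reads_sampled": n_reads,
        "gc_fraction": (base_counts["G"] + base_counts["C"]) / total if total else 0.0,
        "mean_base_quality": qual_sum / qual_n if qual_n else 0.0,
    }
```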


Author(s):  
Sarat Chandra Nayak ◽  
Subhranginee Das ◽  
Mohammad Dilsad Ansari

Background and Objective: Stock closing price prediction is enormously complicated. Artificial Neural Networks (ANN) are excellent approximation algorithms applied to this area. Several nature-inspired evolutionary optimization techniques have been proposed and used in the literature to search for the optimum parameters of ANN-based forecasting models. However, most of them require fine-tuning of several control parameters as well as algorithm-specific parameters to achieve optimal performance, and improper tuning of such parameters leads either to additional computational cost or to local optima. Methods: Teaching Learning Based Optimization (TLBO) is a recently proposed algorithm that does not require any algorithm-specific parameters. The intrinsic capability of the Functional Link Artificial Neural Network (FLANN) to capture the multifaceted nonlinear relationships present in historical stock data has made it popular and widely applied to stock market prediction. This article presents a hybrid model, termed Teaching Learning Based Optimization of Functional Neural Networks (TLBO-FLN), that combines the advantages of both TLBO and FLANN. Results and Conclusion: The model is evaluated by predicting the short-, medium-, and long-term closing prices of four emerging stock markets. The performance of the TLBO-FLN model is measured through Mean Absolute Percentage of Error (MAPE), Average Relative Variance (ARV), and the coefficient of determination (R2), compared with that of a few other similarly trained state-of-the-art models, and found to be superior.
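A compact sketch of the two ingredients follows: a trigonometric functional expansion (the FLANN part) and a TLBO teacher-phase update of a population of weight vectors. The expansion order, fitness function, synthetic data, and update details are illustrative assumptions, not the authors' exact formulation.

```python
# FLANN functional expansion + TLBO teacher-phase update (illustrative sketch).
import numpy as np

def flann_expand(x, order=2):
    """Trigonometric functional expansion of an input vector."""
    feats = [x]
    for k in range(1, order + 1):
        feats.append(np.sin(k * np.pi * x))
        feats.append(np.cos(k * np.pi * x))
    return np.concatenate(feats)

def predict(weights, X):
    return np.array([np.tanh(weights @ flann_expand(x)) for x in X])

def mape(y_true, y_pred):
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100

def tlbo_teacher_phase(population, X, y, rng):
    """Move each candidate weight vector towards the best (teacher) solution."""
    fitness = np.array([mape(y, predict(w, X)) for w in population])
    teacher = population[fitness.argmin()]
    mean = population.mean(axis=0)
    tf = rng.integers(1, 3)                          # teaching factor in {1, 2}
    new_pop = population + rng.random(population.shape) * (teacher - tf * mean)
    new_fit = np.array([mape(y, predict(w, X)) for w in new_pop])
    keep = new_fit < fitness                         # keep improved learners only
    population[keep] = new_pop[keep]
    return population

# Illustrative usage with synthetic normalized price data
rng = np.random.default_rng(0)
X = rng.uniform(0.1, 0.9, size=(50, 1))
y = 0.5 * X[:, 0] + 0.1                              # stand-in target in (0, 1)
population = rng.normal(size=(20, len(flann_expand(X[0]))))
for _ in range(30):
    population = tlbo_teacher_phase(population, X, y, rng)
```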


2021 ◽  
Vol 2 (3) ◽  
pp. 1-24
Author(s):  
Chih-Kai Huang ◽  
Shan-Hsiang Shen

The next-generation 5G cellular networks are designed to support internet of things (IoT) networks; network components and services are virtualized and run either in virtual machines (VMs) or containers. Moreover, edge clouds (which are closer to end users) are leveraged to reduce end-to-end latency, especially for IoT applications that require short response times. However, computational resources are limited in edge clouds. To minimize overall service latency, it is crucial to determine carefully which services should be provided in edge clouds so that more mobile or IoT devices are served locally. In this article, we propose a novel service cache framework called S-Cache, which automatically caches popular services in edge clouds. In addition, we design a new cache replacement policy to maximize the cache hit rate. Our evaluations use real log files from Google to form two datasets and compare the proposed cache replacement policy with policies such as greedy-dual-size-frequency (GDSF) and least-frequently-used (LFU). The experimental results show that the cache hit rate improves by 39% on average, and the average latency of our cache replacement policy decreases by 41% and 38% in these two datasets, respectively. This indicates that our approach is superior to existing cache policies and is more suitable for multi-access edge computing environments. In the implementation, S-Cache relies on OpenStack to clone services to edge clouds and direct the network traffic. We also evaluate the cost of cloning a service to an edge cloud: the cloning cost of various real applications is studied by experiments under the presented framework and different environments.
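For reference, the sketch below implements a minimal LFU cache of the kind used as a comparison baseline; the paper's own replacement policy is not reproduced here, and the service names are invented.

```python
# Minimal LFU cache sketch, a baseline policy of the kind S-Cache is compared
# against (not the paper's replacement policy).
class LFUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}                  # service_id -> payload
        self.freq = {}                   # service_id -> access count

    def get(self, key):
        if key in self.store:
            self.freq[key] += 1
            return self.store[key]       # cache hit
        return None                      # cache miss: fetch from the central cloud

    def put(self, key, value):
        if key not in self.store and len(self.store) >= self.capacity:
            victim = min(self.freq, key=self.freq.get)   # evict least-frequently-used
            self.store.pop(victim)
            self.freq.pop(victim)
        self.store[key] = value
        self.freq[key] = self.freq.get(key, 0) + 1

# Usage: cache popular services at an edge cloud and count hits.
cache = LFUCache(capacity=2)
hits = 0
for service in ["auth", "video", "auth", "iot", "auth", "video"]:
    if cache.get(service) is not None:
        hits += 1
    else:
        cache.put(service, f"container:{service}")
print(f"hit rate: {hits / 6:.2f}")
```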

