Filter pruning-based two-step feature map reconstruction

Author(s):  
Yongsheng Liang ◽  
Wei Liu ◽  
Shuangyan Yi ◽  
Huoxiang Yang ◽  
Zhenyu He

Abstract. In deep neural network compression, channel/filter pruning is widely used to compress a pre-trained network by identifying redundant channels/filters. In this paper, we propose a two-step filter pruning method that judges the redundant channels/filters layer by layer. The first step is to design a filter selection scheme based on the $\ell_{2,1}$-norm by reconstructing the feature map of the current layer. More specifically, the filter selection scheme solves a joint $\ell_{2,1}$-norm minimization problem, i.e., both the regularization term and the feature map reconstruction error term are constrained by the $\ell_{2,1}$-norm. The $\ell_{2,1}$-norm regularization drives the channel/filter selection, while the $\ell_{2,1}$-norm feature map reconstruction error term provides robust reconstruction. In this way, the proposed filter selection scheme learns a column-sparse coefficient representation matrix that indicates the redundancy of filters. Since pruning the redundant filters in the current layer may dramatically change the output feature map of the following layer, the second step updates the filters of the following layer so that its output feature map approximates that of the baseline. Experimental results demonstrate the effectiveness of the proposed method. For example, our pruned VGG-16 on ImageNet achieves a $4\times$ speedup with a 0.95% top-5 accuracy drop. Our pruned ResNet-50 on ImageNet achieves a $2\times$ speedup with a 1.56% top-5 accuracy drop. Our pruned MobileNet on ImageNet achieves a $2\times$ speedup with a 1.20% top-5 accuracy drop.
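A joint $\ell_{2,1}$ problem of this shape is commonly solved with iteratively reweighted least squares (IRLS); the sketch below assumes that solver and an illustrative interface (the function name and parameters are hypothetical, and the paper's exact formulation may differ). Rows of the coefficient matrix W that shrink toward zero mark redundant filters.

```python
import numpy as np

def l21_filter_selection(X, Y, lam=0.1, n_iter=50, eps=1e-8):
    """IRLS sketch for min_W ||X W - Y||_{2,1} + lam * ||W||_{2,1}.

    X: (n, c) unfolded feature map of the current layer (c filters/channels).
    Y: (n, m) target feature map to reconstruct.
    Returns W and a per-filter importance score (row norms of W).
    """
    W = np.linalg.lstsq(X, Y, rcond=None)[0]                 # warm start
    for _ in range(n_iter):
        R = X @ W - Y
        d1 = 1.0 / (2.0 * np.linalg.norm(R, axis=1) + eps)   # residual row weights
        d2 = 1.0 / (2.0 * np.linalg.norm(W, axis=1) + eps)   # coefficient row weights
        XtD1 = X.T * d1                                      # X^T D1
        W = np.linalg.solve(XtD1 @ X + lam * np.diag(d2), XtD1 @ Y)
    scores = np.linalg.norm(W, axis=1)                       # filters with small scores are pruned
    return W, scores
```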

Sensors ◽  
2021 ◽  
Vol 21 (10) ◽  
pp. 3430
Author(s):  
Jean Mário Moreira de Lima ◽  
Fábio Meneghetti Ugulino de Araújo

Soft sensors based on deep learning have been growing in industrial process applications, inferring hard-to-measure but crucial quality-related variables. However, applications may present strong non-linearity, dynamicity, and a lack of labeled data. To deal with the above-cited problems, the extraction of relevant features is becoming a field of interest in soft sensing. A novel deep representative learning soft-sensor modeling approach is proposed based on a stacked autoencoder (SAE), mutual information (MI), and long short-term memory (LSTM). The SAE is trained layer by layer, with MI evaluation performed between the extracted features and the targeted output to assess the relevance of the learned representation in each layer. This approach highlights relevant information and eliminates irrelevant information from the current layer; thus, deep output-related representative features are retrieved. In the supervised fine-tuning stage, an LSTM is coupled to the tail of the SAE to address the system's inherent dynamic behavior. Also, a k-fold cross-validation ensemble strategy is applied to enhance the soft sensor's reliability. Two real-world industrial non-linear processes are employed to evaluate the performance of the proposed method. The obtained results show improved prediction performance in comparison to other traditional and state-of-the-art methods. Compared to the other methods, the proposed model achieves RMSE improvements of more than 38.6% and 39.4% for the two analyzed industrial cases.
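As an illustration of the layer-wise MI evaluation, the sketch below scores the hidden features of one SAE layer against the target variable with a generic MI estimator; the function name, the keep_ratio parameter, and the top-k selection rule are illustrative assumptions, not the paper's exact criterion.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def select_relevant_features(H, y, keep_ratio=0.5):
    """Score each hidden feature of one SAE layer by its mutual information
    with the quality variable y and keep the most relevant ones.

    H: (n_samples, n_features) activations of the current encoder layer.
    y: (n_samples,) hard-to-measure target variable.
    """
    mi = mutual_info_regression(H, y)            # MI estimate per feature
    k = max(1, int(keep_ratio * H.shape[1]))
    keep = np.argsort(mi)[::-1][:k]              # indices of the top-k features
    return H[:, keep], keep
```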


2020 ◽  
Vol 8 (8) ◽  
pp. 4244-4251 ◽  
Author(s):  
Yan Zhao ◽  
Yanling Liu ◽  
Chao Wang ◽  
Emily Ortega ◽  
Xiaomao Wang ◽  
...  

Target-ion-control membrane surface multilayers containing target ion channels and target ion exchange sites were created, based on the ionic control principle and alternating-current layer-by-layer assembly technology, for the extraction of target ions.


2021 ◽  
Vol 13 (14) ◽  
pp. 2812
Author(s):  
Changyu Hu ◽  
Ling Wang ◽  
Daiyin Zhu ◽  
Otmar Loffeld

Sparse imaging relies on sparse representations of the target scenes to be imaged. Predefined dictionaries have long been used to transform radar target scenes into sparse domains, but the performance is limited by the artificially designed or existing transforms, e.g., the Fourier transform and the wavelet transform, which are not optimal for the target scenes to be sparsified. The dictionary learning (DL) technique has been exploited to obtain sparse transforms optimized jointly with the radar imaging problem. Nevertheless, DL is usually implemented in a patch-wise manner, which ignores the relationship between patches and thus omits some feature information during the learning of the sparse transforms. To capture the feature information of the target scenes more accurately, we adopt image patch groups (IPGs) instead of patches in DL. An IPG is constructed from patches with similar structures. DL is performed with respect to each IPG, which is termed group dictionary learning (GDL). The group-oriented sparse representation (GOSR) and the target image reconstruction are then jointly optimized by solving an $\ell_1$-norm minimization problem exploiting GOSR, during which a generalized Gaussian distribution hypothesis of the radar image reconstruction error is introduced to make the imaging problem tractable. The imaging results using real ISAR data show that the GDL-based imaging method outperforms the original DL-based imaging method in both imaging quality and computational speed.
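For the sparse-coding step, an $\ell_1$-norm minimization of this shape can be solved with ISTA; the sketch below is a simplified real-valued version (the paper works with complex radar data and a generalized Gaussian error model, so its actual solver differs), with an illustrative function name and parameters.

```python
import numpy as np

def ista_l1(D, y, lam=0.05, n_iter=200):
    """ISTA sketch for the sparse-coding step min_a ||y - D a||_2^2 + lam * ||a||_1.

    D: dictionary learned for one image patch group (atoms in columns).
    y: vectorized patch group (or measurement) to be represented.
    """
    L = np.linalg.norm(D, 2) ** 2                # Lipschitz constant of the data-term gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ a - y)                    # gradient of the quadratic data term
        z = a - g / L                            # gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-thresholding
    return a
```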


Author(s):  
J. Kang ◽  
L. Chen ◽  
F. Deng ◽  
C. Heipke

Abstract. Recently, great progress has been made in formulating dense disparity estimation as a pixel-wise learning task to be solved by deep convolutional neural networks. However, most resulting pixel-wise disparity maps show little detail for small structures. In this paper, we propose a two-stage architecture: we first learn initial disparities using an initial network, and then employ a disparity refinement network, guided by the initial results, which directly learns disparity corrections. Based on the initial disparities, we construct a residual cost volume between shared left and right feature maps in a potential disparity residual interval, which can capture more detailed context information. Then, the right feature map is warped with the initial disparity and a reconstruction error volume is constructed between the warped right feature map and the original left feature map, which provides a measure of the correctness of the initial disparities. The main contribution of this paper is to combine the residual cost volume and the reconstruction error volume to guide the training of the refinement network. We use a shallow encoder-decoder module in the refinement network and learn from coarse to fine, which simplifies the learning problem. We evaluate our method on several challenging stereo datasets. Experimental results demonstrate that our refinement network can significantly improve the overall accuracy, reducing the estimation error by 30% compared with our initial network. Moreover, our network also achieves competitive performance compared with other CNN-based methods.
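The warping step can be sketched as follows in PyTorch: the right feature map is sampled at columns shifted by the initial disparity, and its difference from the left feature map yields the reconstruction error volume. The absolute-difference error measure and the function interface are illustrative assumptions, not the authors' exact design.

```python
import torch
import torch.nn.functional as F

def reconstruction_error_volume(feat_left, feat_right, disparity):
    """Warp the right feature map with the initial disparity and measure the
    per-pixel reconstruction error against the left feature map.

    feat_left, feat_right: (B, C, H, W) feature maps.
    disparity: (B, 1, H, W) initial left-view disparity (in pixels).
    """
    B, C, H, W = feat_left.shape
    # base sampling grid in pixel coordinates
    xs = torch.arange(W, device=feat_left.device).view(1, 1, 1, W).expand(B, 1, H, W).float()
    ys = torch.arange(H, device=feat_left.device).view(1, 1, H, 1).expand(B, 1, H, W).float()
    x_warped = xs - disparity                         # shift right-image columns by disparity
    grid_x = 2.0 * x_warped / (W - 1) - 1.0           # normalize to [-1, 1] for grid_sample
    grid_y = 2.0 * ys / (H - 1) - 1.0
    grid = torch.cat([grid_x, grid_y], dim=1).permute(0, 2, 3, 1)  # (B, H, W, 2)
    warped_right = F.grid_sample(feat_right, grid, align_corners=True)
    return torch.abs(feat_left - warped_right)        # (B, C, H, W) error volume
```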


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Miao Tian ◽  
Ying Cui ◽  
Haixia Long ◽  
Junxia Li

Purpose: In novelty detection, the autoencoder-based image reconstruction strategy is one of the mainstream solutions. The basic idea is that once the autoencoder has been trained on normal data, it has a low reconstruction error on normal data. However, when faced with complex natural images, conventional pixel-level reconstruction becomes poor and does not yield promising results. This paper aims to provide a new method for improving the performance of autoencoder-based novelty detection. Design/methodology/approach: To solve the problem that conventional pixel-level reconstruction cannot effectively extract the global semantic information of the image, a novel model combining an attention mechanism with a self-supervised learning method is proposed. First, an auxiliary task, reconstructing rotated images, is set up to enable the network to learn global semantic feature information. Then, a channel attention mechanism is introduced to perform adaptive feature refinement on the intermediate feature map, optimizing the feature map that is passed on. Findings: Experimental results on three public data sets show that the proposed method achieves promising performance for novelty detection. Originality/value: This study explores the ability of self-supervised learning methods and attention mechanisms to extract features from a single class of images. In this way, the performance of novelty detection can be improved.
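A squeeze-and-excitation block is a common realization of such channel attention; the sketch below assumes this standard design, which may differ in detail from the authors' module.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """SE-style channel attention: squeeze each feature map into a channel
    descriptor, then rescale the channels by learned gates. A minimal sketch."""

    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # global average pooling (squeeze)
        self.gate = nn.Sequential(                   # two-layer bottleneck (excitation)
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                            # x: (B, C, H, W)
        b, c, _, _ = x.shape
        w = self.gate(self.pool(x).view(b, c))       # per-channel weights in (0, 1)
        return x * w.view(b, c, 1, 1)                # refined feature map
```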


Nanoscale ◽  
2019 ◽  
Vol 11 (5) ◽  
pp. 2264-2274 ◽  
Author(s):  
Yan Zhao ◽  
Congjie Gao ◽  
Bart Van der Bruggen

Durable multilayers with the selective separation of monovalent anions and antifouling properties of an anion exchange membrane were constructed via an alternating current layer-by-layer assembly.


Sensors ◽  
2021 ◽  
Vol 21 (19) ◽  
pp. 6601
Author(s):  
Linsong Shao ◽  
Haorui Zuo ◽  
Jianlin Zhang ◽  
Zhiyong Xu ◽  
Jinzhen Yao ◽  
...  

Neural network pruning, an important method for reducing the computational complexity of deep models, is well suited to devices with limited resources. However, most current methods focus on some kind of information about the filter itself to prune the network, rarely exploring the relationship between the feature maps and the filters. In this paper, two novel pruning methods are proposed. First, a new pruning method is proposed that reflects the importance of filters by exploring the information in the feature maps. Based on the premise that the more information a feature map carries, the more important it is, the information entropy of the feature maps is used to measure information and thus to evaluate the importance of each filter in the current layer. Further, normalization is used to enable cross-layer comparison. As a result, the network structure is efficiently pruned while its performance is well preserved. Second, we propose a parallel pruning method that combines our entropy-based method with the slimming pruning method, which gives better results in terms of computational cost. Our methods perform better in terms of accuracy, parameters, and FLOPs than most advanced methods. On ImageNet, ResNet50 achieves 72.02% top-1 accuracy with merely 11.41 M parameters and 1.12 B FLOPs. For DenseNet40, 94.04% accuracy is obtained with only 0.38 M parameters and 110.72 M FLOPs on CIFAR10, and our parallel pruning method reduces the parameters and FLOPs to just 0.37 M and 100.12 M, respectively, with little loss of accuracy.
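A minimal sketch of the entropy-based importance score: each feature map is histogrammed, its information entropy is computed, per-filter scores are averaged over a batch, and the scores are normalized for cross-layer comparison. The bin count and the normalization choice are illustrative assumptions.

```python
import numpy as np

def feature_map_entropy(fmap, n_bins=256):
    """Information entropy of one feature map, used as a proxy for the
    importance of the filter that produced it.

    fmap: (H, W) activation map from the current layer.
    """
    hist, _ = np.histogram(fmap, bins=n_bins)
    p = hist / hist.sum()
    p = p[p > 0]                                     # drop empty bins
    return -np.sum(p * np.log2(p))

def rank_filters(feature_maps):
    """Score every filter by the mean entropy of its feature maps over a
    batch, then normalize so scores are comparable across layers.

    feature_maps: (N, C, H, W) activations for N images and C filters.
    """
    scores = np.array([
        np.mean([feature_map_entropy(feature_maps[n, c])
                 for n in range(feature_maps.shape[0])])
        for c in range(feature_maps.shape[1])
    ])
    return scores / (np.linalg.norm(scores) + 1e-12)  # cross-layer normalization
```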


Author(s):  
M.A. Parker ◽  
K.E. Johnson ◽  
C. Hwang ◽  
A. Bermea

We have reported the dependence of the magnetic and recording properties of CoPtCr recording media on the thickness of the Cr underlayer. It was inferred from XRD data that grain-to-grain epitaxy of the Cr with the CoPtCr was responsible for the interaction observed between these layers. However, no cross-sectional TEM (XTEM) work was performed to confirm this inference. In this paper, we report the application of new techniques for preparing XTEM specimens from actual magnetic recording disks, and for layer-by-layer micro-diffraction with an electron probe elongated parallel to the surface of the deposited structure, which elucidate the effect of the crystallographic structure of the Cr on that of the CoPtCr. XTEM specimens were prepared from magnetic recording disks by modifying a technique used to prepare semiconductor specimens. After 3 mm disks were prepared per the standard XTEM procedure, they were lapped using a tripod polishing device. A grid with a single 1 mm × 2 mm hole was then glued with M-Bond 610 to the polished side of the disk.


Author(s):  
Yoshichika Bando ◽  
Takahito Terashima ◽  
Kenji Iijima ◽  
Kazunuki Yamamoto ◽  
Kazuto Hirata ◽  
...  

High-quality thin films of high-Tc superconducting oxide are necessary for elucidating the superconducting mechanism and for device applications. The recent trend in the preparation of high-Tc films has been toward "in-situ" growth of the superconducting phase at relatively low temperatures. The purpose of "in-situ" growth is not only to attain surface smoothness suitable for fabricating film devices but also to obtain high-quality films. We present an investigation of the initial growth manner of YBCO by the in-situ reflection high-energy electron diffraction (RHEED) technique and of the structural and superconducting properties of the resulting ultrathin films below 100 Å. The epitaxial films were grown on the (100) planes of MgO and SrTiO3, heated below 650°C, by activated reactive evaporation. The in-situ RHEED observation and intensity measurement were carried out during deposition of YBCO on the substrate at 650°C. The deposition rate was 0.8 Å/s. Fig. 1 shows the RHEED patterns at every stage of deposition of YBCO on MgO(100). All the patterns exhibit sharp streaks, indicating that the film surface is atomically smooth and the growth manner is layer-by-layer.

