current layer
Recently Published Documents


TOTAL DOCUMENTS: 157 (five years: 29)
H-INDEX: 24 (five years: 3)

Author(s): K-J. Hwang, K. Dokgo, E. Choi, J. L. Burch, D. G. Sibeck, et al.

On May 5, 2017, MMS observed a bifurcated current sheet at the boundary of Kelvin-Helmholtz vortices (KHVs) developed on the dawnside tailward magnetopause. We use this event to improve our understanding of the formation and structure of asymmetric current sheets in the presence of density asymmetry, flow shear, and guide field, which have rarely been studied. The entire current layer comprises three separate current sheets, corresponding to the magnetosphere-side sunward separatrix region, the central near-X-line region, and the magnetosheath-side tailward separatrix region, respectively. The two off-center structures are identified as slow-mode discontinuities. All three current sheets have a thickness of ∼0.2 ion inertial lengths, demonstrating a sub-ion-scale current layer in which electrons mainly carry the current. We find that both the diamagnetic and electron anisotropy currents substantially support the bifurcated currents in the presence of density asymmetry and weak velocity shear. The combined effects of a strong guide field, low density asymmetry, and weak flow shear appear to produce asymmetries in the streamlines and in the current-layer structure of the quadrupolar reconnection geometry. We also investigate intense electrostatic waves observed on the magnetosheath side of the KHV boundary. These waves may pre-heat the magnetosheath population that later participates in the reconnection process, leading to a two-step energization of the magnetosheath plasma entering the magnetosphere via KHV-driven reconnection.
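For reference, the diamagnetic and pressure-anisotropy contributions to the perpendicular current mentioned above take the standard gyrotropic form (the notation below is ours, not quoted from the paper):

$$\mathbf{J}_\perp \;=\; \frac{\mathbf{B}\times\nabla p_\perp}{B^2} \;+\; \left(p_\parallel - p_\perp\right)\frac{\mathbf{B}\times\boldsymbol{\kappa}}{B^2}, \qquad \boldsymbol{\kappa} = \left(\hat{\mathbf{b}}\cdot\nabla\right)\hat{\mathbf{b}},$$

where the first term is the diamagnetic current driven by the perpendicular pressure gradient and the second is the current associated with pressure anisotropy along the curved field lines.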


2021, Vol. 13 (21), pp. 4379
Author(s): Cuiping Shi, Xinlei Zhang, Jingwei Sun, Liguo Wang

For remote sensing scene image classification, many convolutional neural networks improve classification accuracy at the cost of increased time and space complexity. This slows the model down and prevents a good trade-off between accuracy and running speed. As the network deepens, it becomes difficult to extract the key features with a simple double-branched structure, and shallow features are lost, which is unfavorable for the classification of remote sensing scene images. To solve this problem, we propose a dual-branch multi-level feature dense fusion-based lightweight convolutional neural network (BMDF-LCNN). The network extracts the information of the current layer through 3 × 3 depthwise separable convolution, 1 × 1 standard convolution, and identity branches, and fuses it with features extracted from the previous layer by 1 × 1 standard convolution, thus avoiding the loss of shallow information as the network deepens. In addition, we propose a downsampling structure that is better suited to extracting the shallow features of the network, using a pooling branch to downsample and a convolution branch to compensate for the information lost by pooling. Experiments were carried out on four open and challenging remote sensing scene datasets. The results show that the proposed method achieves higher classification accuracy and lower model complexity than several state-of-the-art classification methods and realizes a trade-off between model accuracy and running speed.
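As an illustration of the dual-branch fusion idea described above, the following is a minimal PyTorch-style sketch; the module name, channel counts, and fusion order are assumptions made here for illustration rather than the published BMDF-LCNN architecture.

```python
import torch
import torch.nn as nn

class DualBranchFusionBlock(nn.Module):
    """Illustrative block: a 3x3 depthwise-separable branch, a 1x1 standard
    convolution branch, and an identity branch, fused with features carried
    over from the previous layer through a 1x1 convolution."""

    def __init__(self, channels):
        super().__init__()
        # 3x3 depthwise separable convolution branch
        self.dw = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, groups=channels, bias=False),
            nn.Conv2d(channels, channels, 1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        # 1x1 standard convolution branch
        self.pw = nn.Sequential(
            nn.Conv2d(channels, channels, 1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        # 1x1 convolution applied to the previous layer's features before fusion
        self.prev_proj = nn.Conv2d(channels, channels, 1, bias=False)

    def forward(self, x, prev):
        # identity branch (x) + two convolution branches + projected shallow features
        return x + self.dw(x) + self.pw(x) + self.prev_proj(prev)

# Usage: fuse the current feature map with the previous layer's feature map.
block = DualBranchFusionBlock(channels=64)
feat_prev = torch.randn(1, 64, 32, 32)
feat_cur = torch.randn(1, 64, 32, 32)
out = block(feat_cur, feat_prev)   # shape: (1, 64, 32, 32)
```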


Sensors, 2021, Vol. 21 (19), pp. 6601
Author(s): Linsong Shao, Haorui Zuo, Jianlin Zhang, Zhiyong Xu, Jinzhen Yao, et al.

Neural network pruning, an important method for reducing the computational complexity of deep models, makes them deployable on devices with limited resources. However, most current methods rely on some property of the filter itself to prune the network and rarely explore the relationship between the feature maps and the filters. In this paper, two novel pruning methods are proposed. First, a pruning method is proposed that reflects the importance of filters by exploring the information in the feature maps. Based on the premise that a feature map carrying more information is more important, the information entropy of the feature maps is used to measure that information and to evaluate the importance of each filter in the current layer. Normalization is then applied so that scores can be compared across layers. As a result, the network structure is efficiently pruned while its performance is well preserved. Second, we propose a parallel pruning method that combines the entropy-based method with the slimming pruning method, which gives better results in terms of computational cost. Our methods perform better in terms of accuracy, parameters, and FLOPs than most advanced methods. On ImageNet, the pruned ResNet50 achieves 72.02% top-1 accuracy with merely 11.41 M parameters and 1.12 B FLOPs. On CIFAR10, the pruned DenseNet40 obtains 94.04% accuracy with only 0.38 M parameters and 110.72 M FLOPs, and our parallel pruning method reduces the parameters and FLOPs to 0.37 M and 100.12 M, respectively, with little loss of accuracy.
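The entropy-based importance score and its cross-layer normalization can be sketched as follows. This is a minimal illustration of the idea, assuming Shannon entropy over histogrammed activations; the bin count and threshold are arbitrary choices rather than values from the paper.

```python
import torch

def filter_entropy_scores(feature_maps, num_bins=64):
    """Importance score per filter: the Shannon entropy of its feature-map
    activations. feature_maps has shape (N, C, H, W); returns a (C,) tensor."""
    n, c, h, w = feature_maps.shape
    scores = torch.zeros(c)
    for ch in range(c):
        values = feature_maps[:, ch].flatten()
        hist = torch.histc(values, bins=num_bins)   # histogram of activations
        p = hist / hist.sum()
        p = p[p > 0]                                # drop empty bins to avoid log(0)
        scores[ch] = -(p * p.log()).sum()           # Shannon entropy
    return scores

def normalize_per_layer(scores):
    """Min-max normalization so scores from different layers are comparable."""
    return (scores - scores.min()) / (scores.max() - scores.min() + 1e-12)

# Usage: keep only filters whose normalized entropy exceeds a global threshold.
fmap = torch.randn(8, 16, 14, 14)                   # example activations from one layer
norm_scores = normalize_per_layer(filter_entropy_scores(fmap))
keep_mask = norm_scores >= 0.2                      # True for filters to keep
```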


Author(s): Diyar Qader Zeebaree, Adnan Mohsin Abdulazeez, Lozan M. Abdullrhman, Dathar Abas Hasan, Omar Sedqi Kareem

Prediction is vital in our daily lives, where it supports learning, adapting, forecasting, and classifying. The predictive capacity of recurrent neural networks (RNNs) is very high; they provide more accurate results than conventional statistical methods for prediction. This paper studies the impact of a hierarchy of recurrent neural networks on the prediction process. In such a hierarchy, a recurrent layer takes the hidden state of the previous layer as input and generates as output the hidden state of the current layer. These deep learning algorithms can be utilized as prediction tools in video analysis, music information retrieval, and time-series applications. Recurrent networks process examples sequentially, maintaining a state or memory that captures an arbitrarily long context window. Long Short-Term Memory (LSTM) and Bidirectional RNN (BRNN) networks are examples of recurrent networks. This paper aims to give a comprehensive assessment of RNN-based prediction. For each reviewed paper, the relevant facts are presented, such as the dataset, method, architecture, and the accuracy of the predictions delivered.
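The layer hierarchy described above, where each recurrent layer consumes the hidden-state sequence of the layer below it, corresponds to a stacked RNN. A minimal sketch in PyTorch follows; the dimensions and the one-step-ahead prediction head are illustrative assumptions.

```python
import torch
import torch.nn as nn

# A two-layer (stacked) LSTM: the hidden-state sequence of layer 1 is the
# input sequence of layer 2, i.e. each layer feeds its hidden state upward.
model = nn.LSTM(input_size=8, hidden_size=32, num_layers=2, batch_first=True)
head = nn.Linear(32, 1)                    # one-step-ahead prediction head

x = torch.randn(4, 20, 8)                  # (batch, time steps, features)
outputs, (h_n, c_n) = model(x)             # outputs: hidden states of the top layer
prediction = head(outputs[:, -1])          # predict from the last time step
```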


2021
Author(s): Alexander Kolomytsev, Yulia Pronyaeva

Most conventional log interpretation techniques use the radial model, which was developed for vertical wells and works well in them. Applying this model to horizontal wells, however, can lead to false conclusions. The reasons are property changes in the vertical direction and the different depths of investigation (DOI) of logging tools: the DOI volume can include responses from several layers with different properties. All of this complicates petrophysical modeling. The 3D approach to high-angle well evaluation (HAWE) is forward modeling in 3D. For this modeling, it is necessary to identify the geological concept near the horizontal well section using multiscale data. The accuracy of the modeling depends on the level of detail of the accepted geological model, which is based on borehole images, logs, geosteering inversion, and seismic data. 3D modeling can be applied to improve the accuracy of reservoir characterization, well placement, and completion. The radial model is often useless for HAWE because LWD tools have different DOI and an invasion zone has not formed. The difference between volumetric and azimuthal measurements, however, is important for comprehensive interpretation because formations have different properties in the vertical direction. Resistivity tools have the largest DOI. It is important to understand and be able to determine the reason for a change in log response: a change in the properties of the current layer, or the approach of layers with other properties. This requires knowing the distance to the boundaries of formations with different properties and, therefore, understanding the geological structure of the discovered deposits; such information on the scale of well logs can be obtained either by modeling or by using extra-deep resistivity inversion (mapping). The largest amount of multidisciplinary information is needed for modeling purposes, from images and logs to mapping and seismic data. The case studies include successful examples from Western Siberia clastic formations. Within these cases, different tasks were solved: developing the geological concept, updating petrophysical properties for STOIIP and completion, and providing solutions during geosteering. Multiscale modeling, which includes seismic data, geosteering mapping data, LWD, and imagers, was used for all cases.


Sensors, 2021, Vol. 21 (10), pp. 3430
Author(s): Jean Mário Moreira de Lima, Fábio Meneghetti Ugulino de Araújo

Soft sensors based on deep learning have been growing in industrial process applications, inferring hard-to-measure but crucial quality-related variables. However, such applications may present strong non-linearity, dynamic behavior, and a lack of labeled data. To deal with these problems, the extraction of relevant features is becoming a field of interest in soft-sensing. A novel deep representative learning soft-sensor modeling approach is proposed based on a stacked autoencoder (SAE), mutual information (MI), and long short-term memory (LSTM). The SAE is trained layer by layer, with MI evaluated between the extracted features and the target output to assess the relevance of the representation learned in each layer. This approach highlights relevant information and eliminates irrelevant information from the current layer, so that deep output-related representative features are retrieved. In the supervised fine-tuning stage, an LSTM is coupled to the tail of the SAE to address the system's inherent dynamic behavior. In addition, a k-fold cross-validation ensemble strategy is applied to enhance the soft-sensor's reliability. Two real-world industrial non-linear processes are employed to evaluate the performance of the proposed method. The results show improved prediction performance in comparison to other traditional and state-of-the-art methods: the proposed model achieves RMSE improvements of more than 38.6% and 39.4% for the two analyzed industrial cases.
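A compact sketch of the layer-wise SAE training with MI screening is given below. It illustrates the general idea under our own assumptions (plain linear autoencoder layers, scikit-learn's mutual_info_regression as the MI estimator, an arbitrary keep ratio); it is not the authors' implementation, and the LSTM fine-tuning stage is omitted.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.feature_selection import mutual_info_regression

def train_ae_layer(x, hidden_dim, epochs=50, lr=1e-3):
    """Greedy layer-wise training of one autoencoder layer.
    x: numpy array of shape (N, D); returns the encoded features (N, hidden_dim)."""
    x_t = torch.tensor(x, dtype=torch.float32)
    enc = nn.Linear(x.shape[1], hidden_dim)
    dec = nn.Linear(hidden_dim, x.shape[1])
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        code = torch.sigmoid(enc(x_t))
        loss = nn.functional.mse_loss(dec(code), x_t)  # reconstruction error
        loss.backward()
        opt.step()
    return torch.sigmoid(enc(x_t)).detach().numpy()

def mi_screen(features, y, keep_ratio=0.7):
    """Keep only the features most related to the target output, as measured
    by mutual information, before passing them to the next layer."""
    mi = mutual_info_regression(features, y)
    k = max(1, int(keep_ratio * features.shape[1]))
    keep = np.argsort(mi)[-k:]
    return features[:, keep]

# Usage: stack two MI-screened autoencoder layers; an LSTM would then be
# trained on the final representation in the supervised fine-tuning stage.
X, y = np.random.rand(200, 10), np.random.rand(200)
h1 = mi_screen(train_ae_layer(X, 16), y)
h2 = mi_screen(train_ae_layer(h1, 8), y)
```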


Author(s): Yongsheng Liang, Wei Liu, Shuangyan Yi, Huoxiang Yang, Zhenyu He

In deep neural network compression, channel/filter pruning is widely used to compress a pre-trained network by identifying redundant channels/filters. In this paper, we propose a two-step filter pruning method that judges the redundant channels/filters layer by layer. The first step is a filter selection scheme based on the $\ell_{2,1}$-norm that reconstructs the feature map of the current layer. More specifically, the filter selection scheme solves a joint $\ell_{2,1}$-norm minimization problem, i.e., both the regularization term and the feature map reconstruction error term are constrained by the $\ell_{2,1}$-norm. The $\ell_{2,1}$-norm regularization drives the channel/filter selection, while the $\ell_{2,1}$-norm feature map reconstruction error term provides robust reconstruction. In this way, the proposed filter selection scheme learns a column-sparse coefficient representation matrix that indicates the redundancy of the filters. Since pruning the redundant filters in the current layer can dramatically influence the output feature map of the following layer, the second step updates the filters of the following layer to ensure that its output feature map approximates that of the baseline. Experimental results demonstrate the effectiveness of the proposed method. For example, our pruned VGG-16 on ImageNet achieves a 4× speedup with a 0.95% top-5 accuracy drop, our pruned ResNet-50 on ImageNet achieves a 2× speedup with a 1.56% top-5 accuracy drop, and our pruned MobileNet on ImageNet achieves a 2× speedup with a 1.20% top-5 accuracy drop.
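A schematic form of the joint $\ell_{2,1}$ problem described above is given below; the symbols are chosen here for illustration and are not quoted from the paper:

$$\min_{\mathbf{A}} \;\bigl\|\mathbf{Y}-\mathbf{X}\mathbf{A}\bigr\|_{2,1} \;+\; \lambda\,\bigl\|\mathbf{A}\bigr\|_{2,1}, \qquad \bigl\|\mathbf{A}\bigr\|_{2,1}=\sum_{j}\Bigl(\sum_{i}A_{ij}^{2}\Bigr)^{1/2},$$

where $\mathbf{X}$ collects the responses of the candidate filters, $\mathbf{Y}$ is the feature map of the current layer to be reconstructed, $\lambda$ balances the two terms, and the coefficient groups of $\mathbf{A}$ that shrink to zero mark the corresponding filters as redundant.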


2021
Author(s): Elena Grigorenko, Makar Leonenko, Lev Zelenyi, Helmi Malova, Victor Popov

Current sheets (CSs) play a crucial role in the storage and conversion of magnetic energy in planetary magnetotails. Spacecraft observations in the terrestrial magnetotail have shown that CS thinning and intensification can result in the formation of a multiscale current structure in which a very thin and intense current layer at the center of the CS is embedded in a thicker sheet. Describing such CSs requires a fully kinetic treatment that takes into account the peculiarities of non-adiabatic particle dynamics, and this kinetic description brings kinetic scales into CS models. The ion scales are controlled by the thermal ion Larmor radius, while the scales of the sub-ion embedded CS are controlled by the topology of the magnetic field lines until the electron motion is magnetized by the small component of the magnetic field existing at the very center of the CS. High-time-resolution MMS observations in the Earth's magnetotail and MAVEN observations in the Martian magnetotail reveal the formation of a similar multiscale structure of the cross-tail CS in spite of very different local plasma characteristics. We find that the typical half-thickness of the embedded super thin current sheets (STCSs) observed at the center of the CS in the magnetotails of both planets is much less than the gyroradius of thermal protons. The formation of an STCS does not depend on ion composition, density, or temperature, but is controlled by the small value of the normal component of the magnetic field at the neutral plane. Our analysis shows good agreement between the spatial scaling of the multiscale CSs observed in both magnetotails and the scaling predicted by the quasi-adiabatic model of a thin anisotropic CS that takes into account the coupling between ion and electron currents. Thus, in spite of the significant differences in CS formation, ion composition, and plasma characteristics in the Earth's and Martian magnetotails, similar kinetic features are observed in the CS structures of both planets. This can be explained by a universal principle: once a CS has formed, it must be self-consistently supported by the internal coupling between the total current carried by its particles and its magnetic configuration, and as soon as the system reaches a quasi-equilibrium state it "forgets" the mechanisms of its formation, its subsequent existence being governed by the general principles of plasma kinetics described by the Vlasov–Maxwell equations.

This work is supported by the Russian Science Foundation, grant No. 20-42-04418.

