network layers
Recently Published Documents

TOTAL DOCUMENTS: 219 (FIVE YEARS: 124)
H-INDEX: 12 (FIVE YEARS: 5)

Author(s):  
Andriy Dudnik ◽  
Ivan Bakhov ◽  
Oleksandr Makhovych ◽  
Yulia Ryabokin ◽  
...  

The paper discusses models and methods for improving the performance of wireless computer networks, built on a decomposition of the lower layers of the OSI reference model. A method for improving network performance is suggested that functionally combines the physical and network layers, nearly doubling efficiency in marginal reception areas. Two block-diagram models are developed: a device that improves data transmission quality in marginal reception areas, or in areas with insufficient noise immunity, based on so-called communication quality status monitoring; and a wireless router with adaptive capacity reallocation, based on dynamic reallocation of channel capacity, which allows information system (IS) resources to be reallocated adequately according to traffic and user priority. Keywords: Bluetooth, FIFO discipline, IEEE 802.11, OSI/ISO reference model, wireless computer networks.
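The abstract gives no algorithmic detail, but the traffic- and priority-aware reallocation it mentions can be sketched minimally in Python (the function name, data shapes, and weighting scheme are all assumptions, not the paper's method):

```python
# Hypothetical sketch: dynamic channel-capacity reallocation weighted by
# per-user traffic demand and priority, as described in the abstract.
def reallocate_capacity(total_kbps, users):
    """users: list of dicts with 'demand_kbps' and 'priority' (higher = more important)."""
    # Weight each user by demand scaled by priority.
    weights = [u["demand_kbps"] * u["priority"] for u in users]
    total_weight = sum(weights) or 1.0
    # Each user receives a share proportional to its weight,
    # capped at its actual demand so spare capacity is not wasted.
    return [min(total_kbps * w / total_weight, u["demand_kbps"])
            for u, w in zip(users, weights)]

if __name__ == "__main__":
    users = [
        {"demand_kbps": 800, "priority": 3},   # high-priority user
        {"demand_kbps": 1200, "priority": 1},  # bulk traffic
    ]
    print(reallocate_capacity(1000, users))
```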


Author(s):  
Jiang Chang ◽  
Shengqi Guan

To address dataset expansion in deep learning tasks such as image classification, this paper proposes an image generation model called Class Highlight Generative Adversarial Networks (CH-GAN). To highlight image categories, accelerate model convergence, and generate true-to-life images with clear categories, first, the image category labels are deconvolved and integrated into the generator through convolution. Second, a novel discriminator is designed that can judge not only the authenticity of an image but also its category. Finally, to classify strip steel defects quickly and accurately, the lightweight image classification network GhostNet is improved by modifying the number of network layers and channels, adding SE modules, etc., and is trained on the dataset expanded by CH-GAN. In the comparative experiments, the average FID of CH-GAN is 7.59, and the accuracy of the improved GhostNet is 95.67% with 0.19 M parameters. The experimental results demonstrate the effectiveness and superiority of the proposed methods for generating and classifying strip steel defect images.
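The SE (squeeze-and-excitation) modules added to GhostNet follow a standard pattern; a minimal PyTorch sketch, assuming a typical reduction ratio:

```python
# A minimal squeeze-and-excitation (SE) module of the kind the authors add
# to GhostNet; the reduction ratio and layer names are assumptions.
import torch
import torch.nn as nn

class SEModule(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: global context per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                    # per-channel gate in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        gate = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * gate                      # excitation: reweight channels

x = torch.randn(2, 16, 32, 32)
print(SEModule(16)(x).shape)  # torch.Size([2, 16, 32, 32])
```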


Electronics ◽  
2022 ◽  
Vol 11 (2) ◽  
pp. 222
Author(s):  
Tomasz Wichary ◽  
Jordi Mongay Batalla ◽  
Constandinos X. Mavromoustakis ◽  
Jerzy Żurek ◽  
George Mastorakis

This paper focuses on the security challenges of network slice implementation in 5G networks. We propose that network slice controllers support security by enabling security controls at different network layers. The slice controller orchestrates multilevel domains with resources at a very high level but needs to understand how to define the resources at lower levels. In this context, the main outstanding security challenge is the compromise of several resources in the presence of an attack due to weak resource isolation at different levels. We analysed the current standards and trends aimed at mitigating the vulnerabilities mentioned above, and we propose security controls and classify them by efficiency and applicability (ease of development). Security controls are a common way to secure networks, but they enforce security policies only in their respective areas. Therefore, security domains allow the orchestration principles to be structured by considering the necessary security controls to be applied. This approach is common to both vendor-neutral and vendor-dependent security solutions. In our classification, we considered controls in the following fields: (i) fair resource allocation with dynamic security assurance, (ii) isolation in a multilayer architecture, and (iii) response to DDoS attacks without service and security degradation.
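The classification the authors describe lends itself to a simple data structure; a hypothetical sketch (the field names, scores, and ranking rule are illustrative assumptions, not the paper's data):

```python
# Hypothetical sketch: security controls scored by efficiency and
# applicability (ease of development), then ranked. Illustrative only.
from dataclasses import dataclass

@dataclass
class SecurityControl:
    name: str
    field: str          # e.g. "resource allocation", "isolation", "DDoS response"
    efficiency: int     # 1 (low) .. 5 (high)
    applicability: int  # 1 (hard to develop) .. 5 (easy to develop)

controls = [
    SecurityControl("dynamic security assurance", "resource allocation", 4, 2),
    SecurityControl("inter-slice isolation", "isolation", 5, 3),
    SecurityControl("DDoS rate limiting", "DDoS response", 3, 5),
]

# Rank controls by combined efficiency and applicability.
for c in sorted(controls, key=lambda c: c.efficiency + c.applicability, reverse=True):
    print(f"{c.name:30s} field={c.field:20s} score={c.efficiency + c.applicability}")
```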


2022 ◽  
Author(s):  
Amogh Palasamudram

This research introduces and evaluates the Neural Layer Bypassing Network (NLBN), a new neural network architecture that improves the speed and effectiveness of forward propagation in deep learning. The architecture adds one fully connected layer after every layer in the main network; this added layer determines whether the rest of the forward pass is required to predict the output for the given input. To test the effectiveness of the NLBN, I implemented the architecture with three image classification models trained on three datasets: the MNIST Handwritten Digits Dataset, the Horses or Humans Dataset, and the Colorectal Histology Dataset. After training one standard convolutional neural network (CNN) and one NLBN of equivalent architecture per dataset, I performed five trials per dataset to compare the two architectures. For the NLBN, I also collected data on accuracy, inference time, and speed with respect to the percentage of the model that inputs pass through. This architecture increases the speed of forward propagation by 6-25% while accuracy tends to decrease by 0-4%; the results vary with the dataset and model structure, but the speedup was normally at least twice the accuracy drop. The NLBN also takes roughly 40% longer to train and requires more memory due to its complexity, although the architecture could be made more efficient if integrated into TensorFlow libraries. Overall, by autonomously skipping neural network layers, this architecture can potentially be a foundation for neural networks that teach themselves to become more efficient for applications requiring fast, accurate, and less computationally intensive predictions.
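A minimal early-exit sketch in the spirit of the NLBN (the gate design, layer sizes, and confidence threshold are assumptions; the abstract mentions TensorFlow, while this illustration uses PyTorch):

```python
# Sketch: after each main layer, a small fully connected "gate" head decides
# whether to stop and predict now, bypassing the remaining layers.
import torch
import torch.nn as nn

class EarlyExitNet(nn.Module):
    def __init__(self, dims=(784, 256, 128, 64), n_classes=10, threshold=0.9):
        super().__init__()
        self.threshold = threshold
        self.blocks = nn.ModuleList(
            nn.Sequential(nn.Linear(i, o), nn.ReLU()) for i, o in zip(dims, dims[1:])
        )
        # One exit head per block: predicts class probabilities from that depth.
        self.exits = nn.ModuleList(nn.Linear(o, n_classes) for o in dims[1:])

    def forward(self, x):
        for block, exit_head in zip(self.blocks, self.exits):
            x = block(x)
            probs = exit_head(x).softmax(dim=-1)
            # Skip the rest of the forward pass when this head is confident.
            if probs.max() >= self.threshold:
                return probs
        return probs  # final exit if no earlier head was confident

net = EarlyExitNet()
print(net(torch.randn(1, 784)).shape)  # torch.Size([1, 10])
```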


Entropy ◽  
2021 ◽  
Vol 24 (1) ◽  
pp. 59
Author(s):  
Baihan Lin

Inspired by the adaptation phenomenon of neuronal firing, we propose regularity normalization (RN) as an unsupervised attention mechanism (UAM) that computes the statistical regularity in the implicit space of neural networks under the Minimum Description Length (MDL) principle. Treating the neural network optimization process as a partially observable model selection problem, regularity normalization constrains the implicit space by a normalization factor, the universal code length. We compute this universal code incrementally across neural network layers and demonstrate the flexibility to include data priors such as top-down attention and other oracle information. Empirically, our approach outperforms existing normalization methods in tackling limited, imbalanced, and non-stationary input distributions in image classification, classic control, procedurally generated reinforcement learning, generative modeling, handwriting generation, and question answering tasks with various neural network architectures. Lastly, the unsupervised attention mechanism is a useful probing tool for neural networks, tracking the dependency and critical learning stages across layers and recurrent time steps of deep networks.
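A loose, hedged illustration of the idea: scale activations by a running code-length estimate accumulated during training. This is not the paper's exact normalized-maximum-likelihood formulation; the Gaussian code-length proxy and the log scaling below are assumptions:

```python
# Hedged sketch of regularity normalization: activations are divided by a
# running "universal code length" accumulated incrementally across steps.
import torch
import torch.nn as nn

class RegularityNorm(nn.Module):
    def __init__(self):
        super().__init__()
        # Running universal code length (a scalar normalization factor).
        self.register_buffer("code_length", torch.tensor(1.0))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.training:
            # Proxy: negative log-likelihood of activations under a unit
            # Gaussian serves as this step's code length (an assumption).
            nll = 0.5 * (x ** 2).mean()
            self.code_length = self.code_length + nll.detach()
        # Constrain the implicit space by the accumulated code length.
        return x / torch.log1p(self.code_length)

layer = RegularityNorm()
layer.train()
print(layer(torch.randn(8, 32)).shape)  # torch.Size([8, 32])
```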


2021 ◽  
Author(s):  
Bahrad A Sokhansanj ◽  
Zhengqiao Zhao ◽  
Gail L Rosen

As the COVID-19 pandemic continues, the SARS-CoV-2 virus continues to rapidly mutate and change in ways that impact virulence, transmissibility, and immune evasion. Genome sequencing is a critical tool, as other biological techniques can be more costly, time-consuming, and difficult. However, the rapid and complex evolution of SARS-CoV-2 challenges conventional sequence analysis methods like phylogenetic analysis: the virus picks up and loses mutations independently in multiple subclades, often in novel or unexpected combinations and, as for the newly emerged Omicron variant, sometimes on long unexplained branches. We propose interpretable deep sequence models trained by machine learning to complement conventional methods. We apply Transformer-based neural network models developed for natural language processing to analyze protein sequences, adding network layers to generate sample embeddings and sequence-wide attention in order to interpret models and visualize multiscale patterns. We demonstrate and validate our framework by modeling SARS-CoV-2 and coronavirus taxonomy. We then develop an interpretable predictive model of disease severity that integrates SARS-CoV-2 spike protein sequence and patient demographic variables, using publicly available data from the GISAID database, and we also apply our model to Omicron. Based on knowledge prior to the availability of empirical data for Omicron, we predict: (1) a reduction in neutralizing antibody activity (15- to 50-fold) greater than for any previously characterized variant, varying between Omicron sublineages, and (2) a reduced risk of severe disease (by 35-40%) relative to Delta. Both predictions accord with recent epidemiological and experimental data.
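A hedged sketch of this kind of model: a Transformer encoder over amino-acid tokens with an added attention-pooling layer that yields a sample embedding plus sequence-wide attention weights for interpretation (the vocabulary size, dimensions, and pooling design are assumptions):

```python
# Sketch: Transformer encoder for protein sequences with attention pooling.
import torch
import torch.nn as nn

class ProteinEncoder(nn.Module):
    def __init__(self, vocab=25, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_model)  # amino-acid tokens
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.attn_score = nn.Linear(d_model, 1)    # sequence-wide attention

    def forward(self, tokens):
        h = self.encoder(self.embed(tokens))          # (B, L, d_model)
        weights = self.attn_score(h).softmax(dim=1)   # (B, L, 1)
        embedding = (weights * h).sum(dim=1)          # (B, d_model) sample embedding
        return embedding, weights.squeeze(-1)         # embedding + attention map

tokens = torch.randint(0, 25, (2, 100))  # batch of 2 sequences, length 100
emb, attn = ProteinEncoder()(tokens)
print(emb.shape, attn.shape)             # torch.Size([2, 64]) torch.Size([2, 100])
```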


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Lubna Farhi ◽  
Hira Abbasi ◽  
Rija Rehman

Identity management in most academic and office environments is presently handled by a manual method in which users enter their attendance into the system; this sometimes results in human error and makes the process less efficient and more time-consuming. The proposed system presents the design and implementation of a smart face identification-based management system that accounts for both background luminosity and distance. The system detects and recognizes a person and marks their attendance with a timestamp. In this methodology, the face is first resized to three sizes (256, 384, and 512 pixels) for multiscale testing, and the overall descriptor is the mean of the resulting characteristic vectors; the deep convolutional neural network computes 22 facial features in 128 distinct embeddings across 22 network layers. For 2D face poses from −15° to +15°, the system identifies with 98% accuracy at low computation time. It can also identify with 99.92% accuracy from a distance of 5 m under optimal light conditions, with accuracy varying from 96% to 99% as light intensity ranges from 100 to 1,000 lumen/m². The presented model not only improves accuracy and identification under realistic conditions but also reduces computation time.
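The multiscale step can be sketched as follows: the same face is resized to 256, 384, and 512 pixels, each scale is embedded, and the descriptor is the mean of the per-scale vectors. The embedder below is a stand-in, not the paper's 22-layer network:

```python
# Sketch: multiscale face embedding with a mean descriptor over scales.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleEmbedder(nn.Module):
    def __init__(self, embed_dim=128, scales=(256, 384, 512)):
        super().__init__()
        self.scales = scales
        # Stand-in embedder: conv + global pool + projection to 128-D.
        self.conv = nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1)
        self.proj = nn.Linear(32, embed_dim)

    def embed(self, x):
        h = F.relu(self.conv(x))
        h = F.adaptive_avg_pool2d(h, 1).flatten(1)
        return F.normalize(self.proj(h), dim=-1)   # unit-length embedding

    def forward(self, face):
        # Resize the same face to each scale, embed, and average the vectors.
        embs = [self.embed(F.interpolate(face, size=(s, s), mode="bilinear",
                                         align_corners=False))
                for s in self.scales]
        return torch.stack(embs).mean(dim=0)

face = torch.randn(1, 3, 300, 300)
print(MultiScaleEmbedder()(face).shape)  # torch.Size([1, 128])
```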


2021 ◽  
Vol 13 (24) ◽  
pp. 5143
Author(s):  
Bo Huang ◽  
Zhiming Guo ◽  
Liaoni Wu ◽  
Boyong He ◽  
Xianjiang Li ◽  
...  

Image super-resolution (SR) technology aims to recover high-resolution images from low-resolution originals and is of great significance for the high-quality interpretation of remote sensing images. However, most current SR-reconstruction approaches suffer from network training difficulties and computational complexity that grows with the number of network layers, making them unsuitable for application scenarios with limited computing resources. Furthermore, the complex spatial distributions and rich details of remote sensing images increase the difficulty of their reconstruction. In this paper, we propose the pyramid information distillation attention network (PIDAN) to address these issues. Specifically, we propose the pyramid information distillation attention block (PIDAB), which serves as the building block of the PIDAN. The key components of the PIDAB are the pyramid information distillation (PID) module and the hybrid attention mechanism (HAM) module. First, the PID module uses feature distillation with parallel multi-receptive-field convolutions to extract short- and long-path feature information, which allows the network to obtain more non-redundant image features. Then, the HAM module enhances the sensitivity of the network to high-frequency image information. Extensive validation experiments show that, compared with other advanced CNN-based approaches, the PIDAN achieves a better balance between image SR performance and model size.
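A hedged sketch of the feature-distillation idea behind the PID module: keep a "short-path" slice of the features and push the rest through parallel convolutions with different receptive fields before fusing (the split ratio and branch widths are assumptions, not the PIDAB's exact design):

```python
# Sketch: feature distillation with parallel multi-receptive-field convolutions.
import torch
import torch.nn as nn

class MultiReceptiveFieldDistill(nn.Module):
    def __init__(self, channels=64, distill=16):
        super().__init__()
        self.distill = distill
        branch_out = (channels - distill) // 3
        # Parallel branches with growing receptive fields (1x1, 3x3, 5x5).
        self.branches = nn.ModuleList([
            nn.Conv2d(channels - distill, branch_out, k, padding=k // 2)
            for k in (1, 3, 5)
        ])
        self.fuse = nn.Conv2d(distill + 3 * branch_out, channels, 1)

    def forward(self, x):
        # "Distilled" short-path features are kept as-is; the rest take the
        # long path through the parallel convolutions.
        short, long = x[:, :self.distill], x[:, self.distill:]
        long = torch.cat([b(long) for b in self.branches], dim=1)
        return self.fuse(torch.cat([short, long], dim=1))

x = torch.randn(1, 64, 48, 48)
print(MultiReceptiveFieldDistill()(x).shape)  # torch.Size([1, 64, 48, 48])
```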


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Ling Guo

This paper explores the value of SonoVue contrast-enhanced ultrasonography based on deep unsupervised learning (DNS) in the diagnosis of nipple discharge. A new model (ODNS) is proposed based on the unsupervised learning model and a stacked self-encoding network. Ultrasound images of 1,725 patients with breast lesions from a shared database are used as test data for the model. Accuracy (Acc), recall (RE), sensitivity (Sen), and running time are compared between the two models before and after optimization and against other algorithms. A total of 48 female patients with nipple discharge were enrolled, and the sensitivity (Sen), specificity (SP), positive predictive value (PPV), and negative predictive value (NPV) of conventional ultrasound and contrast-enhanced ultrasonography were analyzed against pathological examination results. The results show that with five network layers, the classification accuracies of the DNS and ODNS models reach their highest values: 91.45% and 98.64%, respectively.
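A hedged sketch of a stacked self-encoding network: each layer is pretrained to reconstruct the previous layer's codes, then the stack is topped with a classifier (the layer widths, two-class head, and training loop are assumptions, not the ODNS design):

```python
# Sketch: greedy layer-wise pretraining of a stacked autoencoder classifier.
import torch
import torch.nn as nn

def pretrain_layer(data, in_dim, hidden_dim, epochs=5, lr=1e-3):
    """Train one encode/decode pair to reconstruct its input; return the encoder."""
    enc = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
    dec = nn.Linear(hidden_dim, in_dim)
    opt = torch.optim.Adam([*enc.parameters(), *dec.parameters()], lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(dec(enc(data)), data)
        loss.backward()
        opt.step()
    return enc

# Stack encoders greedily: each layer learns to reconstruct the previous
# layer's codes; the stack is then topped with a classification head.
data = torch.randn(128, 512)    # stand-in for flattened image features
dims = [512, 256, 128, 64, 32]  # five network layers, as in the abstract
encoders, h = [], data
for i, o in zip(dims, dims[1:]):
    enc = pretrain_layer(h.detach(), i, o)
    encoders.append(enc)
    h = enc(h.detach())

model = nn.Sequential(*encoders, nn.Linear(dims[-1], 2))  # assumed 2-class head
print(model(data).shape)  # torch.Size([128, 2])
```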


2021 ◽  
pp. 1-14
Author(s):  
Yan Zhang ◽  
Gongping Yang ◽  
Yikun Liu ◽  
Chong Wang ◽  
Yilong Yin

Detection of cotton bolls in field environments is a crucial technique for many precision agriculture applications, including yield estimation, disease and pest recognition, and automatic harvesting. Because of complex conditions, such as different growth periods and occlusion among leaves and bolls, detection in the field is a considerably challenging task. Nevertheless, deep learning technologies have shown great potential for solving it. In this work, we propose an Improved YOLOv5 network, combining DenseNet, an attention mechanism, and Bi-FPN, to detect unopened cotton bolls in the field accurately and at lower cost. Because cotton bolls are generally small, we also modify the network architecture to obtain larger feature maps from shallower network layers, enhancing its ability to detect them. We collected image data of cotton at Aodu Farm in Xinjiang Province, China, and established a dataset containing 616 high-resolution images. The experimental results show that the proposed method is superior to the original YOLOv5 model and to other methods such as YOLOv3, SSD, and Faster R-CNN when detection accuracy, computational cost, model size, and speed are considered together. Cotton boll detection can further serve purposes such as yield prediction and earlier identification of diseases and pests, helping farmers take timely measures, reduce crop losses, and thereby increase production.
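A hedged sketch of the "larger feature maps from shallower layers" idea: attach a detection head to an earlier, higher-resolution backbone stage so small objects keep enough spatial detail (the backbone and head below are stand-ins, not the modified YOLOv5 architecture):

```python
# Sketch: detection heads on both a shallow (high-resolution) and a deep
# backbone stage, favoring small objects such as unopened cotton bolls.
import torch
import torch.nn as nn

class SmallObjectDetector(nn.Module):
    def __init__(self, n_outputs=6):  # e.g. (x, y, w, h, objectness, class)
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU())
        self.stage2 = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.stage3 = nn.Sequential(nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU())
        # Head on the *shallow* stage-2 map (stride 4) keeps more spatial
        # detail than the deep stage-3 map (stride 8).
        self.shallow_head = nn.Conv2d(64, n_outputs, 1)
        self.deep_head = nn.Conv2d(128, n_outputs, 1)

    def forward(self, x):
        f1 = self.stage1(x)
        f2 = self.stage2(f1)
        f3 = self.stage3(f2)
        return self.shallow_head(f2), self.deep_head(f3)

img = torch.randn(1, 3, 256, 256)
shallow, deep = SmallObjectDetector()(img)
print(shallow.shape, deep.shape)  # torch.Size([1, 6, 64, 64]) torch.Size([1, 6, 32, 32])
```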

