complexity reduction
Recently Published Documents


TOTAL DOCUMENTS: 769 (FIVE YEARS: 117)
H-INDEX: 27 (FIVE YEARS: 4)

2022 ◽  
Author(s):  
Xiaoning Huang ◽  
Yongping Xin ◽  
Ting Lu

One defining goal of microbiome research is to uncover the mechanistic causation that dictates the emergence of structural and functional traits of microbiomes. However, the extraordinary degree of ecosystem complexity has hampered the realization of this goal. Here we developed a systematic, complexity-reducing strategy to mechanistically elucidate the compositional and metabolic characteristics of a microbiome, using the kombucha tea microbiome as an example. The strategy centered on a two-species core that was abstracted from, but recapitulated, its native counterpart. The core was convergent in its composition, coordinated in its temporal metabolic patterns, and capable of pellicle formation. Controlled fermentations uncovered the drivers of these characteristics, which were also shown to translate, providing insights into the properties of communities with increased complexity and under altered conditions. This work unravels the patterns and processes underlying the kombucha tea microbiome, providing a potential conceptual framework for the mechanistic investigation of microbiome behaviors.


2022 ◽  
Author(s):  
Diego Argüello Ron ◽  
Pedro Jorge Freire De Carvalho Souza ◽  
Jaroslaw E. Prilepsky ◽  
Morteza Kamalian-Kopae ◽  
Antonio Napoli ◽  
...  

Abstract The deployment of optical channel equalizers based on artificial neural networks (NNs) on edge-computing devices is critically important for the next generation of optical communication systems. This is, however, a highly challenging problem, mainly due to the computational complexity of the NNs required for efficient equalization of nonlinear optical channels with large memory. To implement an NN-based optical channel equalizer in hardware, a substantial complexity reduction is needed while keeping an acceptable performance level. In this work, we address this problem by applying pruning and quantization techniques to an NN-based optical channel equalizer. We use an exemplary NN architecture, the multi-layer perceptron (MLP), and reduce its complexity for 30 GBd transmission over 1000 km of standard single-mode fiber. We demonstrate that it is feasible to reduce the equalizer’s memory by up to 87.12%, and its complexity by up to 91.5%, without noticeable performance degradation. In addition, we precisely define the computational complexity of a compressed NN-based equalizer in the digital signal processing (DSP) sense and examine the impact of different CPU and GPU settings on the power consumption and latency of the compressed equalizer. We also verify the developed technique experimentally, using two standard edge-computing hardware units: the Raspberry Pi 4 and the Nvidia Jetson Nano.
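
The abstract gives no code, but the two compression steps it names, pruning and quantization, can be sketched with standard PyTorch utilities. In the minimal sketch below, the MLP layer sizes, the 80% sparsity target, and the use of dynamic int8 quantization are illustrative assumptions, not the authors' actual configuration:

```python
# Minimal sketch of MLP compression via pruning + dynamic quantization.
# Layer sizes and the 80% sparsity target are illustrative assumptions,
# not the configuration used in the paper.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# A stand-in MLP equalizer: received-signal taps in, symbol estimate out.
mlp = nn.Sequential(
    nn.Linear(20, 100), nn.ReLU(),
    nn.Linear(100, 100), nn.ReLU(),
    nn.Linear(100, 2),  # real/imaginary parts of the equalized symbol
)

# 1) Magnitude (L1) pruning: zero out the smallest 80% of weights per layer.
for module in mlp.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.8)
        prune.remove(module, "weight")  # make the sparsity permanent

# 2) Dynamic quantization: store Linear weights as int8 instead of float32.
quantized = torch.quantization.quantize_dynamic(
    mlp, {nn.Linear}, dtype=torch.qint8
)

# The compressed model is used exactly like the original one.
x = torch.randn(1, 20)  # one window of received-signal taps
print(quantized(x))
```

The memory saving comes from the int8 weight storage, while the zeroed weights are what a hardware implementation would exploit to skip multiplications.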


Electronics ◽  
2021 ◽  
Vol 10 (24) ◽  
pp. 3112
Author(s):  
Jinchao Zhao ◽  
Pu Dai ◽  
Qiuwen Zhang

Compared with High Efficiency Video Coding (HEVC), the latest video coding standard, Versatile Video Coding (VVC), greatly improves coding quality through many novel technologies, including the Quad-tree with nested Multi-type Tree (QTMT) partitioning scheme for block division. Because of the QTMT scheme, however, the encoder must perform rate–distortion optimization for every division mode during Coding Unit (CU) partitioning in order to select the best one, which increases coding time and coding complexity. We therefore propose a VVC intra-prediction complexity-reduction algorithm based on statistical theory and a Size-adaptive Convolutional Neural Network (SAE-CNN). The algorithm combines a pre-decision dictionary built from statistical theory with a Convolutional Neural Network (CNN) model that adaptively adjusts the size of its pooling layer, forming a size-adaptive CU division decision process. It can decide whether to split CUs of different sizes, thereby avoiding unnecessary Rate–distortion Optimization (RDO) and reducing coding time. Experimental results show that, compared with the original algorithm, the proposed algorithm saves 35.60% of coding time while increasing the Bjøntegaard Delta Bit Rate (BD-BR) by only 0.91%.
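
The core idea of one CNN serving variable CU sizes via an adaptive pooling layer can be sketched as follows. This is a hypothetical illustration: the network depth, channel counts, and the 4x4 pooled-output size are assumptions, not the authors' SAE-CNN design.

```python
# Sketch of a size-adaptive split/no-split CU classifier.
# Channel counts and the adaptive-pooling output size are assumptions;
# the paper's actual SAE-CNN design may differ.
import torch
import torch.nn as nn

class SizeAdaptiveCUClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Adaptive pooling maps any CU size (e.g. 64x64, 32x16, 8x8)
        # to a fixed 4x4 grid, so one classifier head serves all sizes.
        self.pool = nn.AdaptiveAvgPool2d((4, 4))
        self.head = nn.Linear(32 * 4 * 4, 2)  # logits: [no-split, split]

    def forward(self, cu_luma):
        x = self.features(cu_luma)
        x = self.pool(x).flatten(1)
        return self.head(x)

model = SizeAdaptiveCUClassifier()
for h, w in [(64, 64), (32, 16), (8, 8)]:    # different CU shapes
    logits = model(torch.randn(1, 1, h, w))  # batch of one luma block
    print((h, w), "->", logits.argmax(dim=1).item())
```

The reported time saving would come from skipping the full RDO search whenever the pre-decision dictionary or the classifier settles the split decision directly.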


2021 ◽  
Author(s):  
S. Sivasaravanababu ◽  
T.R. Dineshkumar ◽  
G. Saravana Kumar

The Multiply-Accumulate (MAC) unit is the core computational block in many DSP and wireless applications, but it comes with a complicated architecture. The MAC block also largely determines the energy consumption and performance of the overall design, because it lies on the critical path with the maximal propagation delay. Developing a high-performance, energy-optimized MAC core is therefore essential to an optimized DSP core. In this work, a high-speed, low-power signed Booth radix-4 MAC unit is proposed, built on a highly configurable assertion-driven modified Booth encoding algorithm (AD-MBE). The proposed Booth core is based on an optimized radix-4 encoder with a hierarchical partial-product accumulation design and associated path-delay optimization and computational-complexity reduction. All Booth-generated partial products are added in a post-summation adder network consisting of a carry select adder (CSA) followed by a carry look-ahead adder (CLA), which reduces both the energy and the computational complexity. The operating frequency is increased by accumulating the encoded bits of each input operand in an assertion unit before generating the final result, instead of passing through the entire partial-product accumulation. An FPGA implementation of the proposed assertion-driven signed Booth radix-4 MAC shows a significant complexity reduction and improved system performance compared with a conventional Booth unit and a conventional array multiplier.
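
For readers unfamiliar with the encoding this MAC core builds on, here is a minimal software reference model of radix-4 (modified) Booth multiplication, not the paper's hardware design: the multiplier is recoded into digits in {-2, -1, 0, 1, 2}, roughly halving the number of partial products relative to a bit-by-bit scheme.

```python
# Reference model of radix-4 (modified) Booth multiplication.
# This illustrates the encoding the MAC core builds on; it is not
# the paper's assertion-driven hardware architecture.

def booth_radix4_multiply(x: int, y: int, n: int = 16) -> int:
    """Multiply x by an n-bit two's-complement multiplier y
    using radix-4 Booth recoding (n must be even)."""
    assert n % 2 == 0
    u = y & ((1 << n) - 1)                         # n-bit pattern of y
    bit = lambda k: (u >> k) & 1 if k >= 0 else 0  # b[-1] = 0 by convention

    acc = 0
    for i in range(n // 2):
        # Overlapping triplet (b[2i+1], b[2i], b[2i-1]) -> digit in {-2..2}.
        digit = bit(2 * i - 1) + bit(2 * i) - 2 * bit(2 * i + 1)
        acc += (digit * x) << (2 * i)  # each partial product weighted by 4^i
    return acc

# Cross-check against ordinary multiplication.
for x, y in [(123, 45), (-7, 19), (250, -33), (-5, -6)]:
    assert booth_radix4_multiply(x, y) == x * y
print("radix-4 Booth recoding matches x * y")
```

In hardware, each digit selects 0, ±x, or ±2x (shifts and negations only), which is why the recoding reduces the partial-product accumulation the abstract describes.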


2021 ◽  
pp. 85-99
Author(s):  
Mads Christiansen

This article gives an introduction to linguistic complexity and investigates the complexity of sentences in Danish from a diachronic perspective. A recursion-based approach to the phenomenon shows that sentences in the old part of the corpus (eighteenth/nineteenth centuries) are more complex than those in the new part (twentieth/twenty-first centuries). For instance, the older texts are found to contain more clauses per sentence, more clause complexes, and more subordinate clauses of a higher degree of dependency than the contemporary texts. The observation that a similar development occurs in Swedish and German should be considered when trying to explain this process of complexity reduction.


2021 ◽  
Vol 149 ◽  
pp. 111090
Author(s):  
Ana Elisa D. Barioni ◽  
Marcus A.M. de Aguiar
