extensive computation
Recently Published Documents


TOTAL DOCUMENTS: 38 (FIVE YEARS: 7)
H-INDEX: 8 (FIVE YEARS: 0)

2021 ◽  
Author(s):  
Eerik Aunin ◽  
Matthew Berriman ◽  
Adam James Reid

Abstract
Genome architecture describes how genes and other features are arranged in genomes. These arrangements reflect the evolutionary pressures on genomes and underlie biological processes such as chromosomal segregation and the regulation of gene expression. We present a new tool called Genome Decomposition Analysis (GDA) that characterises genome architectures and acts as an accessible approach for discovering hidden features of a genome assembly. With the imminent deluge of high-quality genome assemblies from projects such as the Darwin Tree of Life and the Earth BioGenome Project, GDA has been designed to facilitate their exploration and the discovery of novel genome biology. We highlight the effectiveness of our approach in characterising the genome architectures of single-celled eukaryotic parasites from the phylum Apicomplexa and show that it scales well to large genomes.

Significance
Genome sequencing has revealed that there are functionally important arrangements of genes, repetitive elements and regulatory sequences within chromosomes. Identifying these arrangements requires extensive computation and analysis. Furthermore, improvements in genome sequencing technology and the establishment of consortia aiming to sequence all species of eukaryotes mean that there is a need for high-throughput methods for discovering new genome biology. Here we present a software pipeline, named GDA, which determines the patterns of genomic features across chromosomes and uses these to characterise genome architecture. We show that it recapitulates the known genome architecture of several Apicomplexan parasites and use it to identify features in a recently sequenced, less well-characterised genome. GDA scales well to large genomes and is freely available.
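The abstract does not spell out how GDA computes feature patterns; as a rough illustration of the general idea of windowed feature-density profiling followed by clustering (not GDA's actual algorithm; the window size, feature tracks and naive k-means step below are assumptions), a minimal sketch might look like this:

```python
# Hypothetical sketch: per-window feature densities along a chromosome,
# then naive k-means to group windows with similar "architecture".
# Window size, feature tracks and clustering are illustrative assumptions,
# not the actual GDA implementation.
import numpy as np

rng = np.random.default_rng(0)
chrom_len = 1_000_000
window = 50_000
n_windows = chrom_len // window

# Toy feature tracks: start positions of genes and of repeat elements.
genes = rng.integers(0, chrom_len, 400)
repeats = rng.integers(0, chrom_len, 800)

def density(positions, n_windows, window):
    """Count features falling in each fixed-size window, per bp."""
    counts = np.bincount(positions // window, minlength=n_windows)
    return counts / window

profile = np.column_stack([density(genes, n_windows, window),
                           density(repeats, n_windows, window)])

# Naive k-means (k=2) on the per-window feature profile.
k = 2
centres = profile[rng.choice(n_windows, k, replace=False)]
for _ in range(20):
    labels = np.argmin(((profile[:, None] - centres) ** 2).sum(-1), axis=1)
    centres = np.array([profile[labels == j].mean(0) if np.any(labels == j)
                        else centres[j] for j in range(k)])

print("window cluster labels:", labels)
```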


2021 ◽  
Vol 26 (6) ◽  
pp. 1-18
Author(s):  
Anni Lu ◽  
Xiaochen Peng ◽  
Yandong Luo ◽  
Shanshi Huang ◽  
Shimeng Yu

Compute-in-memory (CIM) is an attractive solution to the “memory wall” challenge posed by the extensive computation in deep learning hardware accelerators. For a custom ASIC design, a specific chip instance is restricted to a specific network at runtime, yet the development cycle of the hardware normally lags far behind the emergence of new algorithms. Although some of the reported CIM-based architectures can adapt to different deep neural network (DNN) models, few details about the dataflow or control have been disclosed to support such a claim. An instruction set architecture (ISA) could support high flexibility, but its complexity would be an obstacle to efficiency. In this article, a runtime reconfigurable design methodology for CIM-based accelerators is proposed to support a class of convolutional neural networks running on one prefabricated chip instance with ASIC-like efficiency. First, several design aspects are investigated: (1) the reconfigurable weight mapping method; (2) the input side of data transmission, mainly the weight reloading; and (3) the output side of data processing, mainly the reconfigurable accumulation. Then, a system-level performance benchmark is performed for the inference of different DNN models, such as VGG-8 on the CIFAR-10 dataset and AlexNet, GoogLeNet, ResNet-18, and DenseNet-121 on the ImageNet dataset, to measure the trade-offs between runtime reconfigurability, chip area, memory utilization, throughput, and energy efficiency.
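As a rough illustration of why weight mapping has to be reconfigurable across networks, the following sketch estimates how many fixed-size CIM subarrays an unrolled convolutional layer would occupy; the subarray dimensions and the unrolling scheme are assumptions, not the paper's mapping method:

```python
# Hypothetical sketch: tiling a conv layer's unrolled weight matrix onto
# fixed-size compute-in-memory subarrays. Subarray size and the unrolling
# scheme are illustrative assumptions, not the paper's mapping method.
import math

SUBARRAY_ROWS, SUBARRAY_COLS = 128, 128  # assumed crossbar dimensions

def subarrays_needed(in_ch, out_ch, k):
    """Unroll a k x k conv: rows = k*k*in_ch inputs, cols = out_ch filters."""
    rows, cols = k * k * in_ch, out_ch
    return math.ceil(rows / SUBARRAY_ROWS) * math.ceil(cols / SUBARRAY_COLS)

# Layers from different networks need different numbers of tiles,
# which is what a runtime-reconfigurable mapping has to absorb.
for name, (in_ch, out_ch, k) in {
    "VGG-8 conv3-256": (128, 256, 3),
    "ResNet-18 conv3-64": (64, 64, 3),
    "AlexNet conv11-96": (3, 96, 11),
}.items():
    print(f"{name}: {subarrays_needed(in_ch, out_ch, k)} subarrays")
```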


2021 ◽  
Vol 16 ◽  
pp. 166-177
Author(s):  
P. Kanirajan ◽  
M. Joly ◽  
T. Eswaran

This paper presents a new approach to detect and classify power quality disturbances in the power system using Fuzzy C-means clustering, Fuzzy Logic (FL) and Radial Basis Function Neural Networks (RBFNN). Features extracted through wavelets are used for training; after training, the obtained weights are used to classify the power quality problems in the RBFNN, but this suffers from extensive computation and low convergence speed. To detect and classify the events, FL is then proposed: the extracted features are used to derive membership functions, with fuzzy rules determined from the power quality inherence. Five types of disturbance are taken into account for the classification, and the classification performance of FL is compared with RBFNN. Clustering analysis with the Fuzzy C-means algorithm is used to group the data into clusters and identify the class of each data point. The classification accuracy of FL and Fuzzy C-means clustering is improved using Particle Swarm Optimization (PSO), which exploits the cognitive and social behaviour of particles along with a fitness value to determine the ranges of the membership function features for each rule, so that each disturbance is identified specifically. The simulation results using Fuzzy C-means clustering show significant improvements and give classification results in less than a cycle compared with the other considered approaches.
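For readers unfamiliar with the clustering step, a minimal Fuzzy C-means sketch on toy feature vectors is shown below; the feature values, cluster count and fuzzifier are illustrative assumptions, not the paper's wavelet features:

```python
# Hypothetical sketch: Fuzzy C-means on toy wavelet-style feature vectors.
# Feature values, cluster count and fuzzifier m are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.2, 0.05, (30, 3)),   # e.g. "sag"-like features
               rng.normal(0.8, 0.05, (30, 3))])  # e.g. "swell"-like features
c, m, n = 2, 2.0, len(X)

U = rng.random((n, c))
U /= U.sum(axis=1, keepdims=True)          # fuzzy memberships sum to 1

for _ in range(100):
    um = U ** m
    centres = (um.T @ X) / um.sum(axis=0)[:, None]
    dist = np.linalg.norm(X[:, None] - centres[None], axis=2) + 1e-12
    U_new = 1.0 / (dist ** (2 / (m - 1)))
    U_new /= U_new.sum(axis=1, keepdims=True)
    if np.abs(U_new - U).max() < 1e-6:
        break
    U = U_new

print("hard labels:", U.argmax(axis=1))
```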


2020 ◽  
Vol 2020 ◽  
pp. 1-12
Author(s):  
Wenjing Wang ◽  
Yihong Wang ◽  
Gonçalo Homem de Almeida Correia ◽  
Yusen Chen

In a multimodal public transport network, transfers are inevitable. Planning and managing efficient transfer connections is thus important and requires an understanding of the factors that influence those transfers. Existing studies on predicting passenger transfer flows have mainly used transit assignment models based on route choice, which need extensive computation and underlying behavioral assumptions. Inspired by studies that use network properties to estimate public transport (PT) demand, this paper proposes to use the network properties of a multimodal PT system to explain transfer flows. A statistical model is estimated to identify the relationship between transfer flow and the network properties in a joint bus and metro network. Apart from transfer time and the number of stops and bus lines, the most important network property we propose in this study is transfer accessibility. Transfer accessibility is a newly defined indicator for the geographic factors contributing to the possibility of transferring at a station, given its position in a multimodal PT network, based on an adapted gravity-based measure. It assumes that transfer accessibility at each station is proportional to the number of reachable points of interest within the network and dependent on a cost function describing the effect of distance. The R-squared of the regression model we propose is 0.69, based on smart card data, PT network data, and Points of Interest (POI) data from the city of Beijing, China. This suggests that the model could offer decision support for PT planners, especially when complex network assignment models are too computationally intensive to calibrate and use.
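As a sketch of the gravity-based idea (the actual cost function and POI weighting used in the paper may differ; the decay parameter and toy POI list here are assumptions), a station's transfer accessibility could be computed as:

```python
# Hypothetical sketch: gravity-based transfer accessibility of a station,
# summing reachable POIs weighted by an exponential distance-decay function.
# The decay parameter beta and the toy POI list are illustrative assumptions.
import math

def transfer_accessibility(station, pois, beta=0.5):
    """A_i = sum_j w_j * exp(-beta * d_ij), distances in km."""
    sx, sy = station
    total = 0.0
    for (px, py), weight in pois:
        d = math.hypot(px - sx, py - sy)
        total += weight * math.exp(-beta * d)
    return total

pois = [((1.0, 0.5), 10), ((2.5, 1.0), 25), ((0.2, 0.1), 5)]  # (location, POI count)
print(transfer_accessibility((0.0, 0.0), pois))
```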


2020 ◽  
Author(s):  
Anna Nowakowska ◽  
Alasdair D F Clarke ◽  
Amelia R. Hunt ◽  
Jacqueline von Seth

When searching for an object, do we minimize the number of eye movements we need to make? Under most circumstances, the cost of saccadic parsimony likely outweighs the benefit, given that the cost is extensive computation and the benefit is a few hundred milliseconds of time saved. Previous research has measured the proportion of eye movements directed to locations where the target would have been visible in the periphery, as a way of quantifying the proportion of superfluous fixations. A surprisingly large range of individual differences has emerged from these studies, suggesting that some people are highly efficient and others much less so. Our question in the current study is whether these individual differences can be explained by differences in motivation. In two experiments, we demonstrate that neither time pressure nor financial incentive led to improvements in visual search strategies; the majority of participants continued to make many superfluous fixations in both experiments. The wide range of individual differences in efficiency observed previously was replicated here. We observed small but consistent improvements in strategy over the course of the experiment (regardless of reward or time pressure), suggesting that practice, not motivation, makes participants more efficient.


2020 ◽  
Vol 11 ◽  
pp. 60-71
Author(s):  
P. Kanirajan ◽  
M. Joly

This paper presents a new approach to detect and classify power quality disturbances in the power system using Fuzzy C-means clustering, Fuzzy Logic (FL) and Radial Basis Function Neural Networks (RBFNN). Features extracted through wavelets are used for training; after training, the obtained weights are used to classify the power quality problems in the RBFNN, but this suffers from extensive computation and low convergence speed. To detect and classify the events, FL is then proposed: the extracted features are used to derive membership functions, with fuzzy rules determined from the power quality inherence. Five types of disturbance are taken into account for the classification, and the classification performance of FL is compared with RBFNN. Clustering analysis with the Fuzzy C-means algorithm is used to group the data into clusters and identify the class of each data point. The classification accuracy of FL and Fuzzy C-means clustering is improved using Particle Swarm Optimization (PSO), which exploits the cognitive and social behaviour of particles along with a fitness value to determine the ranges of the membership function features for each rule, so that each disturbance is identified specifically. The simulation results using Fuzzy C-means clustering show significant improvements and give classification results in less than a cycle compared with the other considered approaches.
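Complementing the clustering sketch above, the following toy Particle Swarm Optimization loop tunes the bounds of a single membership-function range against a stand-in fitness function; the fitness definition, constants and data are assumptions, not the paper's formulation:

```python
# Hypothetical sketch: Particle Swarm Optimization tuning the lower/upper
# bounds of one membership-function range. The toy fitness (classification
# accuracy on labelled feature values) and all constants are assumptions.
import random

random.seed(2)
samples = [(0.30, 1), (0.35, 1), (0.40, 1), (0.10, 0), (0.70, 0), (0.85, 0)]

def fitness(lo, hi):
    """Fraction of samples whose label matches 'inside [lo, hi]'."""
    if lo >= hi:
        return 0.0
    return sum((lo <= x <= hi) == bool(y) for x, y in samples) / len(samples)

n, dims, w, c1, c2 = 20, 2, 0.7, 1.5, 1.5
pos = [[random.random(), random.random()] for _ in range(n)]
vel = [[0.0, 0.0] for _ in range(n)]
pbest = [p[:] for p in pos]
gbest = max(pbest, key=lambda p: fitness(*p))

for _ in range(50):
    for i in range(n):
        for d in range(dims):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (w * vel[i][d]
                         + c1 * r1 * (pbest[i][d] - pos[i][d])
                         + c2 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if fitness(*pos[i]) > fitness(*pbest[i]):
            pbest[i] = pos[i][:]
    gbest = max(pbest, key=lambda p: fitness(*p))

print("tuned range:", gbest, "accuracy:", fitness(*gbest))
```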


2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Z. Xiao ◽  
Q. C. Zhao ◽  
Z. J. Wen ◽  
M. F. Cao

In practical engineering problems, the distribution parameters of random variables cannot be determined precisely due to limited experimental data. A hybrid model of interval and probabilistic uncertainty can deal with this problem, but it requires extensive computation and struggles to meet the requirements of complex engineering analyses. This paper therefore presents a vertex method for the uncertainty analysis of the hybrid model. Combined with the traditional finite element method, it can be applied to structural uncertainty analysis. The key of this method is to demonstrate the monotonicity of the expectation and variance of the response function with respect to the distribution parameters of the random variables. Based on this monotonicity analysis, interval bounds of the expectation and variance are directly calculated at the vertices of the distribution parameter intervals. Two numerical examples are used to evaluate the effectiveness and accuracy of the proposed method. The results show that the vertex method is computationally more efficient than the common interval Monte Carlo method at the same accuracy. Two practical engineering examples show that the vertex method makes the engineering application of the hybrid uncertain model easy.
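A minimal sketch of the vertex idea, assuming a toy linear response whose expectation and variance are monotone in the distribution parameters (the parameter intervals and response function below are assumptions, and the paper's engineering examples are far more involved):

```python
# Hypothetical sketch: vertex method for a toy linear response
# g(X) = 3*X1 + 2*X2 with X_i ~ Normal(mu_i, sigma_i^2), where the
# distribution parameters are only known as intervals. For this toy case
# E[g] = 3*mu1 + 2*mu2 and Var[g] = 9*sigma1^2 + 4*sigma2^2 are monotone
# in the parameters, so their bounds are attained at interval vertices.
from itertools import product

mu1_iv, mu2_iv = (1.0, 1.2), (2.0, 2.5)        # assumed parameter intervals
s1_iv, s2_iv = (0.10, 0.15), (0.20, 0.25)

exp_vals, var_vals = [], []
for mu1, mu2, s1, s2 in product(mu1_iv, mu2_iv, s1_iv, s2_iv):  # 2^4 vertices
    exp_vals.append(3 * mu1 + 2 * mu2)
    var_vals.append(9 * s1 ** 2 + 4 * s2 ** 2)

print("E[g] interval:  ", (min(exp_vals), max(exp_vals)))
print("Var[g] interval:", (min(var_vals), max(var_vals)))
```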


2018 ◽  
Vol 7 (4.10) ◽  
pp. 928
Author(s):  
Prayline Rajabai C ◽  
Sivanantham S

Various video coding standards, such as H.264 and H.265, are used for video compression and decompression. These coding standards use multiple modules to perform video compression. Motion Estimation (ME) is one of the critical blocks in a video codec and requires extensive computation. Because it is computationally complex, it consumes a massive amount of time when processing video data. Motion Estimation improves the compression efficiency of these coding standards by determining the minimum distortion between the current frame and the reference frame. Over the past two decades, various Motion Estimation algorithms have been implemented in hardware, and research is still ongoing to realize an optimized hardware solution for this critical module. Efficient implementation of ME in hardware is essential for high-resolution video applications such as HDTV, both to increase decoding throughput and to achieve a high compression ratio. A review and analysis of various hardware architectures of ME used in the H.264 and H.265 coding standards is presented in this paper.
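As a reference for the kind of computation an ME block accelerates, a toy full-search block-matching sketch with a Sum of Absolute Differences (SAD) distortion measure might look as follows; the block size, search range and synthetic frames are assumptions:

```python
# Hypothetical sketch: full-search block matching with a Sum of Absolute
# Differences (SAD) distortion measure, the kind of computation an ME
# hardware block accelerates. Block size and search range are assumptions.
import numpy as np

rng = np.random.default_rng(3)
ref = rng.integers(0, 256, (64, 64)).astype(np.int32)   # reference frame
cur = np.roll(ref, shift=(2, -3), axis=(0, 1))           # current frame, shifted

def best_motion_vector(cur, ref, y, x, block=8, search=4):
    """Exhaustively find the displacement minimizing SAD for one block."""
    blk = cur[y:y + block, x:x + block]
    best = (0, 0, np.inf)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ry, rx = y + dy, x + dx
            if 0 <= ry <= ref.shape[0] - block and 0 <= rx <= ref.shape[1] - block:
                sad = np.abs(blk - ref[ry:ry + block, rx:rx + block]).sum()
                if sad < best[2]:
                    best = (dy, dx, sad)
    return best

print(best_motion_vector(cur, ref, 16, 16))   # expect (-2, 3, 0) for this shift
```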


Author(s):  
Jeffrey J. Gory ◽  
Radu Herbei ◽  
Laura S. Kubatko

Abstract
The increasing availability of population-level allele frequency data across one or more related populations necessitates the development of methods that can efficiently estimate population genetics parameters, such as the strength of selection acting on the population(s), from such data. Existing methods for this problem in the setting of the Wright-Fisher diffusion model are primarily likelihood-based, and rely on numerical approximation for likelihood computation and on bootstrapping for assessment of variability in the resulting estimates, requiring extensive computation. Recent work has provided a method for obtaining exact samples from general Wright-Fisher diffusion processes, enabling the development of methods for Bayesian estimation in this setting. We develop and implement a Bayesian method for estimating the strength of selection based on the Wright-Fisher diffusion for data sampled at a single time point. The method utilizes the latest algorithms for exact sampling to devise a Markov chain Monte Carlo procedure that draws samples from the joint posterior distribution of the selection coefficient and the allele frequencies. We demonstrate that, when assumptions about the initial allele frequencies are accurate, the method performs well for both simulated data and an empirical data set on hypoxia in flies, where we find evidence for strong positive selection in a previously identified region of chromosome 2L. We discuss possible extensions of our method to the more general settings commonly encountered in practice, highlighting the advantages of Bayesian approaches to inference in this setting.
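The exact-sampling machinery is beyond a short example, but a heavily simplified random-walk Metropolis sketch for a selection coefficient, using Wright's stationary density of the diffusion as a stand-in likelihood, conveys the flavour of the posterior sampling; the mutation parameter, prior and toy data are assumptions, and this is not the paper's method:

```python
# Hypothetical sketch: random-walk Metropolis sampling of a (diffusion-scaled)
# selection coefficient sigma, using Wright's stationary density of the
# Wright-Fisher diffusion with mutation as a stand-in likelihood. Toy
# illustration only, not the exact-sampling MCMC developed in the paper;
# theta, the prior and the data are all assumptions.
import numpy as np

rng = np.random.default_rng(4)
theta = 0.5                                        # assumed scaled mutation rate
freqs = np.array([0.62, 0.70, 0.58, 0.66, 0.73])   # toy observed allele frequencies
grid = np.linspace(1e-4, 1 - 1e-4, 2001)           # grid for numerical normalization

def log_lik(sigma):
    """log prod_i pi(x_i | sigma), pi(x) ∝ exp(sigma*x) x^(theta-1) (1-x)^(theta-1)."""
    log_pi = lambda x: sigma * x + (theta - 1) * (np.log(x) + np.log1p(-x))
    log_norm = np.log(np.sum(np.exp(log_pi(grid))) * (grid[1] - grid[0]))
    return np.sum(log_pi(freqs)) - len(freqs) * log_norm

def log_prior(sigma):
    return -0.5 * (sigma / 10.0) ** 2              # Normal(0, 10^2) prior, assumed

sigma, samples = 0.0, []
log_post = log_lik(sigma) + log_prior(sigma)
for _ in range(5000):
    prop = sigma + rng.normal(0, 1.0)              # random-walk proposal
    lp = log_lik(prop) + log_prior(prop)
    if np.log(rng.random()) < lp - log_post:       # Metropolis accept/reject
        sigma, log_post = prop, lp
    samples.append(sigma)

print("posterior mean of sigma:", np.mean(samples[1000:]))
```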


10.29007/rckt ◽  
2018 ◽  
Author(s):  
Chintan Patel ◽  
Ronak Vashi ◽  
Anish Vahora

Image processing requires extensive computation and is usually done on a personal computer's CPU. Because a CPU processes tasks sequentially, image processing takes a long time to produce the desired output. FPGAs are one option for speeding up image processing without increasing the clock speed. One of the main features of an FPGA is that it allows parallel processing, which speeds up image processing and delivers the desired output within a limited time-bound. In this paper, a Digilent Nexys 4 XC7A100T-1CSG324C FPGA is used to implement an edge detection operation on an image. The Sobel algorithm is used to detect edges in the image and is implemented on the FPGA using a Hardware Description Language (VHDL).
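For reference, a software model of the Sobel operator (the per-pixel 3x3 convolution that the FPGA design parallelizes) can be sketched as follows; the toy image is an assumption and this is not the VHDL implementation:

```python
# Hypothetical sketch: a software reference model of the Sobel edge detector
# (the per-pixel 3x3 convolution that the FPGA design parallelizes).
# The toy input image is an assumption; this is not the VHDL implementation.
import numpy as np

GX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
GY = GX.T

def sobel(img):
    """Gradient magnitude |Gx| + |Gy| for each interior pixel."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.int32)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = img[y - 1:y + 2, x - 1:x + 2]
            out[y, x] = abs((GX * patch).sum()) + abs((GY * patch).sum())
    return out

img = np.zeros((8, 8), dtype=np.int32)
img[:, 4:] = 255                      # vertical edge in the middle
print(sobel(img))
```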

