computing time
Recently Published Documents


TOTAL DOCUMENTS: 1371 (FIVE YEARS: 508)

H-INDEX: 42 (FIVE YEARS: 8)

2022 ◽  
Vol 13 (2) ◽  
pp. 1-22
Author(s):  
Sarab Almuhaideb ◽  
Najwa Altwaijry ◽  
Shahad AlMansour ◽  
Ashwaq AlMklafi ◽  
AlBandery Khalid AlMojel ◽  
...  

The Maximum Clique Problem (MCP) is a classical NP-hard problem that has gained considerable attention due to its numerous real-world applications and theoretical complexity. Because it is inherently computationally complex, exact methods may require prohibitive computing time. Nature-inspired meta-heuristics have proven their utility in solving many NP-hard problems. In this research, we propose a simulated annealing-based algorithm, the Clique Finder algorithm, to solve the MCP. Our algorithm uses a logarithmic cooling schedule and two moves that are selected adaptively. The objective (error) function, which is minimized, is the total number of missing links in the candidate clique. The proposed algorithm was evaluated on benchmark graphs from the open-source DIMACS library, and the results show that it achieved a high success rate.
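As a rough illustration only (not the authors' implementation), the sketch below shows how such a simulated-annealing search for a clique of size k might look in Python, assuming the graph is given as a dict mapping each vertex to its set of neighbours; it uses the logarithmic cooling schedule and missing-links error described above, but shows only a single swap move (the paper adaptively selects between two moves), and the step count and cooling constant are illustrative.

```python
import math
import random

def missing_links(vertices, adj):
    """Objective (error): number of absent edges among the candidate clique's vertices."""
    vs = list(vertices)
    return sum(1 for i in range(len(vs)) for j in range(i + 1, len(vs))
               if vs[j] not in adj[vs[i]])

def clique_finder_sketch(adj, k, steps=20_000, c=1.0):
    """Look for a clique of size k by minimizing missing links with simulated annealing."""
    nodes = list(adj)
    current = set(random.sample(nodes, k))
    err = missing_links(current, adj)
    best, best_err = set(current), err
    for t in range(1, steps + 1):
        temp = c / math.log(t + 1)                       # logarithmic cooling schedule
        out_node = random.choice([v for v in nodes if v not in current])
        in_node = random.choice(list(current))
        candidate = (current - {in_node}) | {out_node}   # swap move (one of the possible moves)
        cand_err = missing_links(candidate, adj)
        # accept improving moves always, worse ones with Boltzmann probability
        if cand_err <= err or random.random() < math.exp((err - cand_err) / temp):
            current, err = candidate, cand_err
            if err < best_err:
                best, best_err = set(current), err
            if best_err == 0:                            # current set is a clique of size k
                break
    return best, best_err
```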


2022 ◽  
Vol 14 (2) ◽  
pp. 302
Author(s):  
Chunchao Li ◽  
Xuebin Tang ◽  
Lulu Shi ◽  
Yuanxi Peng ◽  
Yuhua Tang

Effective feature extraction (FE) has always been a focus of hyperspectral image (HSI) research. For aerial remote-sensing HSI processing and land-cover classification, this article proposes an efficient two-stage hyperspectral FE method based on total variation (TV). In the first stage, an average fusion method is used to reduce the spectral dimension. An anisotropic TV model with different regularization parameters is then applied to obtain feature blocks of different smoothness, each containing multi-scale structure information, and these blocks are stacked as the input to the second stage. In the second stage, a singular value transformation reduces the dimension again, followed by an isotropic TV model based on the split Bregman algorithm for further detail smoothing. Finally, the feature-extracted block is fed to a support vector machine for classification experiments. Results on three hyperspectral datasets demonstrate that the proposed method outperforms state-of-the-art methods in terms of classification accuracy and computing time. A comprehensive parameter analysis also shows that the proposed method is robust and stable.
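As a rough sketch of the pipeline described above, under the assumption that scikit-image's denoise_tv_bregman can stand in for the paper's anisotropic and isotropic TV models, the following Python fragment outlines the two stages; the group count, TV weights, and component numbers are illustrative, not the paper's settings.

```python
import numpy as np
from skimage.restoration import denoise_tv_bregman
from sklearn.decomposition import TruncatedSVD
from sklearn.svm import SVC

def two_stage_tv_features(cube, n_groups=20, weights=(2.0, 5.0, 10.0), n_components=15):
    """Illustrative two-stage TV feature extraction for an HSI cube shaped (H, W, bands)."""
    cube = np.asarray(cube, dtype=float)
    h, w, b = cube.shape
    # Stage 1a: average fusion over groups of adjacent bands to reduce the spectral dimension
    groups = np.array_split(np.arange(b), n_groups)
    fused = np.stack([cube[:, :, g].mean(axis=2) for g in groups], axis=2)
    # Stage 1b: anisotropic TV smoothing with several regularization weights, stacked as blocks
    blocks = [denoise_tv_bregman(fused[:, :, i], weight=wgt, isotropic=False)
              for wgt in weights for i in range(fused.shape[2])]
    stacked = np.stack(blocks, axis=2)
    # Stage 2a: reduce the dimension again with a singular-value transform
    reduced = TruncatedSVD(n_components=n_components).fit_transform(stacked.reshape(h * w, -1))
    reduced = reduced.reshape(h, w, n_components)
    reduced = (reduced - reduced.min()) / (np.ptp(reduced) + 1e-12)  # rescale before smoothing
    # Stage 2b: isotropic TV (split Bregman) for further detail smoothing
    smoothed = np.stack([denoise_tv_bregman(reduced[:, :, i], weight=5.0, isotropic=True)
                         for i in range(n_components)], axis=2)
    return smoothed.reshape(h * w, n_components)

# Usage sketch: feats = two_stage_tv_features(hsi_cube); SVC().fit(feats[train_idx], labels[train_idx])
```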


2022 ◽  
Author(s):  
Guillaume Pirot ◽  
Ranee Joshi ◽  
Jérémie Giraud ◽  
Mark Douglas Lindsay ◽  
Mark Walter Jessell

Abstract. To support the needs of practitioners regarding 3D geological modelling and uncertainty quantification in the field, in particular in the mining industry, we propose a Python package called loopUI-0.1 that provides a set of local and global indicators to measure uncertainty and feature dissimilarities among an ensemble of voxet models. We present the results of a survey of practitioners in the mineral industry about their modelling and uncertainty quantification practices and needs. The survey reveals that practitioners acknowledge the importance of uncertainty quantification even if they do not perform it. Four main factors preventing practitioners from performing uncertainty quantification were identified: the lack of data uncertainty quantification, the (computing) time required to generate one model, poor tracking of assumptions and interpretations, and the relative complexity of uncertainty quantification. The paper reviews these issues and proposes solutions to alleviate them. Elements of an answer to these problems are already provided in the special issue hosting this paper, and more are expected to come.
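As a concrete but hypothetical example of the kind of local uncertainty indicator such a package can compute, independent of loopUI's actual API, the short Python function below measures per-voxel disagreement across an ensemble of categorical voxet models as a normalized Shannon entropy.

```python
import numpy as np

def cellwise_entropy(ensemble):
    """Per-voxel Shannon entropy of the geological unit across an ensemble of voxet models.
    `ensemble` has shape (n_models, nx, ny, nz) with categorical unit codes; the result is
    normalized to [0, 1] (assuming at least two distinct units appear in the ensemble)."""
    ensemble = np.asarray(ensemble)
    units = np.unique(ensemble)
    probs = np.stack([(ensemble == u).mean(axis=0) for u in units])  # P(unit u) per voxel
    safe = np.where(probs > 0, probs, 1.0)                           # avoid log(0)
    entropy = -np.sum(probs * np.log(safe), axis=0)
    return entropy / np.log(len(units))

# High values flag voxels where the ensemble of models disagrees on the geological unit.
```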


2022 ◽  
Vol 15 ◽  
Author(s):  
Zhaobo Li ◽  
Xinzui Wang ◽  
Weidong Shen ◽  
Shiming Yang ◽  
David Y. Zhao ◽  
...  

Purpose: Tinnitus is a common but poorly understood auditory disease. This study determines whether connectivity features of electroencephalography (EEG) signals can serve as biomarkers for an efficient and fast diagnosis of chronic tinnitus.
Methods: Resting-state EEG signals of tinnitus patients with different tinnitus locations were recorded. Four connectivity features [the phase-locking value (PLV), phase lag index (PLI), Pearson correlation coefficient (PCC), and transfer entropy (TE)] and two time-frequency domain features were extracted from the EEG signals, and four machine learning algorithms, including two support vector machine (SVM) models, a multi-layer perceptron (MLP) network, and a convolutional neural network (CNN), were applied to the selected features to classify the possible tinnitus sources.
Results: Classification accuracy was highest when the SVM or MLP algorithm was applied to the PCC feature sets, achieving final average classification accuracies of 99.42% and 99.1%, respectively. Classification based on the PLV feature also performed particularly well. The MLP ran the fastest, with an average computing time of only 4.2 s, making it more suitable than the other methods when a real-time diagnosis is required.
Conclusion: Connectivity features of resting-state EEG signals can characterize the differentiation of tinnitus location. The connectivity features PCC and PLV are the more suitable biomarkers for the objective diagnosis of tinnitus, and the results are helpful for clinicians in the initial diagnosis of tinnitus.
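As an illustrative sketch of the PCC-plus-SVM combination highlighted in the results (not the authors' code, and with all EEG preprocessing omitted), one could build a connectivity feature vector from the channel-by-channel Pearson correlations of each epoch and feed it to an SVM classifier:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def pcc_connectivity_features(epochs):
    """PCC connectivity features from EEG epochs shaped (n_epochs, n_channels, n_samples)."""
    feats = []
    for epoch in epochs:
        corr = np.corrcoef(epoch)                 # channel-by-channel Pearson correlations
        iu = np.triu_indices_from(corr, k=1)      # keep each channel pair once
        feats.append(corr[iu])
    return np.asarray(feats)

# Hypothetical usage, with `epochs` as preprocessed resting-state segments and
# `labels` as tinnitus-location classes:
# X = pcc_connectivity_features(epochs)
# scores = cross_val_score(SVC(kernel="rbf"), X, labels, cv=5)
```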


2022 ◽  
pp. 002199832110635
Author(s):  
Junhong Zhu ◽  
Tim Frerich ◽  
Adli Dimassi ◽  
Michael Koerdt ◽  
Axel S. Herrmann

Structural aerospace composite parts are commonly cured through autoclave processing. To optimize the autoclave process, manufacturing process simulations have increasingly been used to investigate the thermal behavior of the cure assembly. In such a simulation, a computational fluid dynamics (CFD) model coupled with a finite element method (FEM) model can be used to handle the conjugate heat transfer problem between the airflow and the solid regions inside the autoclave. A fully transient CFD simulation requires intensive computing resources; to avoid long computing times, a quasi-transient coupling approach is adopted that significantly accelerates the simulation process. This approach was validated for a simple geometry in a previous study. This paper provides an experimental and numerical study of heat transfer in a medium-sized autoclave for a more complicated loading condition and a composite structure, a curved shell with three stringers, that mimics the fuselage structure of an aircraft. Two lumped mass calorimeters are used to measure the heat transfer coefficients (HTCs) during the predefined curing cycle. Owing to some uncertainty in the inlet flow velocity, a correction parameter and a calibration method are proposed to reduce the numerical error. The simulation results are compared with the experimental results, which consist of thermal measurements and temperature distributions of the composite shell, to validate the simulation model. This study shows the capability and potential of the quasi-transient coupling approach for modeling heat transfer in autoclave processing, with reduced computational cost and high correlation between experimental and numerical results.
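For context on the calorimeter measurement, the lumped-capacitance relation commonly used for such estimates gives the HTC from the calorimeter's mass, specific heat, exposed area, and the air-to-calorimeter temperature difference; a minimal sketch (illustrative variable names, not the authors' data processing) is:

```python
import numpy as np

def lumped_htc(time_s, temp_cal, temp_air, mass_kg, cp_j_per_kgk, area_m2):
    """Estimate the heat transfer coefficient from lumped mass calorimeter data:
    h(t) = m * cp * (dT/dt) / (A * (T_air - T)), valid while the Biot number remains small
    and the air-to-calorimeter temperature difference is not close to zero."""
    dT_dt = np.gradient(np.asarray(temp_cal, dtype=float), np.asarray(time_s, dtype=float))
    delta_T = np.asarray(temp_air, dtype=float) - np.asarray(temp_cal, dtype=float)
    return mass_kg * cp_j_per_kgk * dT_dt / (area_m2 * delta_T)
```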


Author(s):  
Abdelhamid Amar ◽  
Bouchaïb Radi ◽  
Abdelkhalak El Hami

An electro-thermomechanical modeling study of the high electron mobility transistor (HEMT) is presented, in which all the necessary equations are detailed and coupled. The proposed finite element model, implemented in the COMSOL Multiphysics software, allows us to study the multiphysics behaviour of the transistor and to observe the different degradations in the structure of the component. An optimization study is then necessary to avoid failures in the transistor. In this work, we used the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) to solve the optimization problem, but it requires very significant computing time. We therefore propose the kriging-assisted CMA-ES method (KA-CMA-ES), which integrates a kriging metamodel into CMA-ES and allows us to solve the optimization problem while overcoming the constraint of computing time. All these methods are detailed in this paper. Coupling the finite element model developed in COMSOL Multiphysics with the KA-CMA-ES method in Matlab allowed us to optimize the multiphysics behaviour of the transistor. We compare the results of the numerical simulations for the initial and optimal states of the component, and find that the proposed KA-CMA-ES method is efficient in solving optimization problems.
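One common way to realize such kriging assistance, shown below as a hedged Python sketch using the `cma` and scikit-learn packages rather than the authors' Matlab implementation, is to rank each CMA-ES population with a Gaussian process (kriging) surrogate and spend the expensive simulation only on the most promising candidates; the archive threshold and number of true evaluations per generation are illustrative choices, not the paper's exact KA-CMA-ES scheme.

```python
import numpy as np
import cma                                                   # pip install cma
from sklearn.gaussian_process import GaussianProcessRegressor

def ka_cmaes_sketch(expensive_sim, x0, sigma0, n_iter=50, n_true_evals=3):
    """Surrogate-assisted CMA-ES: a kriging (GP) model pre-ranks candidates, and only the
    best-ranked ones are evaluated with the expensive multiphysics simulation."""
    es = cma.CMAEvolutionStrategy(x0, sigma0)
    archive_x, archive_y = [], []
    for _ in range(n_iter):
        candidates = es.ask()
        if len(archive_x) >= 5:                              # enough data to fit the surrogate
            gp = GaussianProcessRegressor(normalize_y=True).fit(archive_x, archive_y)
            pred = gp.predict(np.asarray(candidates))
        else:
            pred = np.zeros(len(candidates))
        fitness = list(pred)
        for idx in np.argsort(pred)[:n_true_evals]:          # true (expensive) evaluations
            y = expensive_sim(candidates[idx])
            fitness[idx] = y
            archive_x.append(candidates[idx])
            archive_y.append(y)
        es.tell(candidates, fitness)                         # mix of true and surrogate values
        if es.stop():
            break
    return es.result.xbest, es.result.fbest
```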


GigaScience ◽  
2022 ◽  
Vol 11 (1) ◽  
Author(s):  
Dries Decap ◽  
Louise de Schaetzen van Brienen ◽  
Maarten Larmuseau ◽  
Pascal Costanza ◽  
Charlotte Herzeel ◽  
...  

Abstract
Background: The accurate detection of somatic variants from sequencing data is of key importance for cancer treatment and research. Somatic variant calling requires a high sequencing depth of the tumor sample, especially when the detection of low-frequency variants is also desired. In turn, this leads to large volumes of raw sequencing data to process and hence, large computational requirements. For example, calling the somatic variants according to the GATK best practices guidelines requires days of computing time for a typical whole-genome sequencing sample.
Findings: We introduce Halvade Somatic, a framework for somatic variant calling from DNA sequencing data that takes advantage of multi-node and/or multi-core compute platforms to reduce runtime. It relies on Apache Spark to provide scalable I/O and to create and manage data streams that are processed on different CPU cores in parallel. Halvade Somatic contains all required steps to process the tumor and matched normal sample according to the GATK best practices recommendations: read alignment (BWA), sorting of reads, preprocessing steps such as marking duplicate reads and base quality score recalibration (GATK), and, finally, calling the somatic variants (Mutect2). Our approach reduces the runtime on a single 36-core node to 19.5 h compared to a runtime of 84.5 h for the original pipeline, a speedup of 4.3 times. Runtime can be further decreased by scaling to multiple nodes, e.g., we observe a runtime of 1.36 h using 16 nodes, an additional speedup of 14.4 times. Halvade Somatic supports variant calling from both whole-genome sequencing and whole-exome sequencing data and also supports Strelka2 as an alternative or complementary variant calling tool. We provide a Docker image to facilitate single-node deployment. Halvade Somatic can be executed on a variety of compute platforms, including Amazon EC2 and Google Cloud.
Conclusions: To our knowledge, Halvade Somatic is the first somatic variant calling pipeline that leverages Big Data processing platforms and provides reliable, scalable performance. Source code is freely available.
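To illustrate the underlying Spark pattern only (this is a toy sketch, not Halvade Somatic's actual code or API), independent genomic regions can be turned into a distributed dataset so that per-region work runs on many cores or nodes in parallel; `call_variants_in_region` below is a hypothetical stand-in for the expensive per-region pipeline steps.

```python
from pyspark.sql import SparkSession

def call_variants_in_region(region):
    # placeholder for the expensive per-region work (alignment chunk, duplicate
    # marking, base quality recalibration, somatic calling, ...)
    return (region, f"variants for {region}")

spark = SparkSession.builder.appName("parallel-regions-sketch").getOrCreate()
regions = [f"chr{c}:{start}-{start + 5_000_000}"
           for c in range(1, 23) for start in range(0, 50_000_000, 5_000_000)]
results = (spark.sparkContext
           .parallelize(regions, numSlices=len(regions))   # one partition per region
           .map(call_variants_in_region)
           .collect())
spark.stop()
```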


Physics ◽  
2021 ◽  
Vol 4 (1) ◽  
pp. 1-11
Author(s):  
Pablo Dopazo ◽  
Carola de Benito ◽  
Oscar Camps ◽  
Stavros G. Stavrinides ◽  
Rodrigo Picos

Memristive technology is a promising game-changer in computers and electronics. In this paper, a system that explores the optimal paths through a maze using a memristor-based setup is developed and implemented on an FPGA (field-programmable gate array) device. A digital emulator is used as the memristor. In the proposed approach, the memristor acts as a delay element, and the test graph is configured as a memristor network. A parallel algorithm is then applied, successfully reducing computing time and increasing the system's efficiency. The proposed system is simple, easy to scale up, and capable of implementing different graph configurations. The operation of the algorithm is first checked in the MATLAB (matrix laboratory) programming environment and then exported to two different Intel FPGAs: a DE0-Nano board and an Arria 10 GX 220 FPGA. In both cases, reliable results are obtained quickly and conveniently, even for a maze of 300 × 300 nodes.
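Conceptually, letting the memristors act as delay elements means the network finds shortest paths by signal propagation: the signal reaches each node after a delay equal to the smallest cumulative delay from the source. A purely software analogue of that idea (a plain Dijkstra propagation, not the paper's FPGA or MATLAB implementation) looks like this:

```python
import heapq

def propagate_delays(delays, source):
    """Earliest 'signal arrival' time at every node of a delay network.
    `delays` maps each node to a dict of {neighbour: edge delay}."""
    arrival = {source: 0.0}
    frontier = [(0.0, source)]
    while frontier:
        t, node = heapq.heappop(frontier)
        if t > arrival.get(node, float("inf")):
            continue                                   # stale queue entry
        for neighbour, delay in delays.get(node, {}).items():
            t_new = t + delay
            if t_new < arrival.get(neighbour, float("inf")):
                arrival[neighbour] = t_new
                heapq.heappush(frontier, (t_new, neighbour))
    return arrival

# Example: delays = {"A": {"B": 1.0, "C": 2.5}, "B": {"D": 1.0}, "C": {"D": 0.5}}
# propagate_delays(delays, "A")  ->  arrival times, i.e. shortest-path lengths from "A"
```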


2021 ◽  
Vol 2 (1) ◽  
pp. 62-76
Author(s):  
Maria Nikoghosyan ◽  
Henry Loeffler-Wirth ◽  
Suren Davidavyan ◽  
Hans Binder ◽  
Arsen Arakelyan

Self-organizing map (SOM) portrayal has proven to be a powerful approach for the analysis of transcriptomic, genomic, epigenetic, single-cell, and pathway-level data, as well as for "multi-omic" integrative analyses. However, the SOM method has a major disadvantage: it requires retraining on the entire dataset once a new sample is added, which can be resource- and time-demanding. Retraining also shifts the gene landscape, complicating the interpretation and comparison of results. To overcome this issue, we developed two transfer-learning approaches that extend the SOM space with new samples while preserving its intrinsic structure. The extension SOM (exSOM) approach adds secondary data to the existing SOM space by "meta-gene adaptation", while supervised SOM portrayal (supSOM) adds a support vector machine regression model on top of the original SOM algorithm to "predict" the portrait of a new sample. Both methods have been shown to combine existing and new data accurately. With simulated data, exSOM outperforms supSOM in accuracy, while supSOM significantly reduces the computing time and outperforms exSOM on that measure. Analysis of real datasets demonstrates the validity of the projection methods, with independent datasets mapped onto existing SOM space. Moreover, both methods handle well the projection of samples with new characteristics that were not present in the training datasets.
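The supSOM idea can be sketched very loosely (not with the authors' oposSOM-based implementation) using the minisom and scikit-learn packages: genes are mapped onto a small SOM grid, each sample's portrait is the mean expression of the genes assigned to each grid unit, and a support vector regression model learns to predict that portrait directly from a sample's raw expression so that new samples can be portrayed without retraining the SOM; the grid size and training parameters below are illustrative.

```python
import numpy as np
from minisom import MiniSom                               # pip install minisom
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

def train_portraits(expr, som_dim=20, n_iter=10_000):
    """expr: genes x samples matrix. Genes are mapped onto a som_dim x som_dim grid; a
    sample's portrait is the mean expression of the genes assigned to each grid unit."""
    som = MiniSom(som_dim, som_dim, expr.shape[1], sigma=1.5, learning_rate=0.5)
    som.train_random(expr, n_iter)
    units = np.array([np.ravel_multi_index(som.winner(gene), (som_dim, som_dim))
                      for gene in expr])
    portraits = np.vstack([expr[units == u].mean(axis=0) if np.any(units == u)
                           else np.zeros(expr.shape[1])
                           for u in range(som_dim * som_dim)]).T
    return portraits                                      # samples x (som_dim * som_dim)

# supSOM-like step: regress the portrait on the raw sample profile, so a new sample can be
# "portrayed" without retraining the SOM (one SVR per meta-gene via MultiOutputRegressor).
# model = MultiOutputRegressor(SVR()).fit(expr.T, train_portraits(expr))
# new_portrait = model.predict(new_sample_expr.reshape(1, -1))
```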

