Physical Complexity of Algorithms

Author(s):  
I. P. Deshko ◽  
V. Ya. Tsvetkov
2019 ◽  
Vol 8 (4) ◽  
pp. 9461-9464

Current quantum computer simulation strategies are inefficient, and their implementations fail to mitigate the impact of the exponential complexity of simulated quantum computations. In this paper we propose a quantum computer simulator model, QuIDE (Quantum Integrated Development Environment), an integrated development environment that supports the development of algorithms for future quantum computers. The environment combines graphical construction of circuit diagrams with the flexibility of source code. An analysis of algorithm complexity shows the performance results of the simulator used for simulation, as well as the results of its deployment during simulation.
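To illustrate why simulated quantum computations carry an exponential cost, the sketch below implements a minimal dense state-vector simulator in Python. It is an illustration only, assuming a naive full-state representation; the helper function and its name are not part of QuIDE.

```python
# Minimal state-vector simulator sketch (illustrative; not the QuIDE API).
# It shows why memory and time grow as 2**n with the qubit count n.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

def apply_single_qubit_gate(state, gate, target, n_qubits):
    """Apply a 2x2 gate to the `target` qubit of an n-qubit state vector."""
    # View the 2**n amplitudes as an n-dimensional tensor, one axis per qubit.
    state = state.reshape([2] * n_qubits)
    # Contract the gate with the target axis, then restore the axis order.
    state = np.tensordot(gate, state, axes=([1], [target]))
    state = np.moveaxis(state, 0, target)
    return state.reshape(-1)

n = 3                                    # state size is 2**n amplitudes
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                           # start in |000>
for q in range(n):                       # Hadamard on every qubit
    state = apply_single_qubit_gate(state, H, q, n)
print(np.round(np.abs(state) ** 2, 3))   # uniform 1/8 over all 8 basis states
```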


Complexity ◽  
2011 ◽  
Vol 17 (3) ◽  
pp. 26-42 ◽  
Author(s):  
Hector Zenil ◽  
Jean-Paul Delahaye ◽  
Cédric Gaucherel

Author(s):  
B. M. Shubik ◽  

The development of hydrocarbon deposits is accompanied, as a rule, by an increase in the level of seismicity and, in particular, by the occurrence of technogenic earthquakes and other deformation phenomena associated with changes in the geodynamic regime. To monitor deformation and geodynamic processes, a seismic monitoring service should be organized. A similar monitoring system is also required for the analysis of aftershock and volcanic activity. Monitoring technology should be based on reliable and fast methods for the automatic detection and localization of seismic events of various scales. Traditional approaches to the detection and localization of earthquake epicenters and hypocenters are based on the analysis of data recorded by one or more single seismic stations. In that case, seismic event coordinates are estimated by extracting the signal from noise and accurately measuring the arrival times of a number of specific phases of the seismic signal at each recording point. Existing computational techniques have inherited this traditional approach. However, automatic procedures built on the logic of manual processing turn out to be extremely laborious and ineffective, because algorithms that reproduce the actions of an experienced geophysicist-interpreter are highly complex. The article describes new approaches to the synthesis of automatic monitoring systems, which are based on the principles of emission tomography, the use of spatial registration systems, energy analysis of wave fields, and methods of converting real waveforms into low-frequency model signals (so-called filter masks/templates). The monitoring system was successfully tested in detecting and locating the epicenters and hypocenters of 19 weak local earthquakes in Israel, as well as a quarry explosion.
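As a rough illustration of the emission-tomography principle the article builds on, the Python sketch below back-projects smoothed waveform energy onto a grid of candidate hypocenters and picks the point with the strongest stack. The envelope transform, velocity model, station geometry, and synthetic data are placeholder assumptions, not the processing chain described in the article.

```python
# Illustrative grid-stack (emission-tomography-style) event location sketch.
import numpy as np

def envelope(trace, win=50):
    """Crude low-frequency 'filter mask': smoothed squared amplitude."""
    kernel = np.ones(win) / win
    return np.convolve(trace ** 2, kernel, mode="same")

def locate(traces, stations, grid, dt, velocity=3.5):
    """Return the grid point whose travel-time-shifted envelope stack is largest."""
    envs = np.array([envelope(tr) for tr in traces])
    best_point, best_energy = None, -np.inf
    for point in grid:                                # candidate hypocenters
        dists = np.linalg.norm(stations - point, axis=1)
        shifts = (dists / velocity / dt).astype(int)  # travel times in samples
        n = envs.shape[1] - shifts.max()
        stack = sum(envs[i, s:s + n] for i, s in enumerate(shifts))
        energy = stack.max()
        if energy > best_energy:
            best_point, best_energy = point, energy
    return best_point, best_energy

# Toy demo: 3 stations, a synthetic impulsive event near (4, 6) km.
rng = np.random.default_rng(0)
stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])     # km
true_src = np.array([4.0, 6.0])
dt, nsamp, velocity = 0.01, 2000, 3.5
traces = rng.normal(0, 0.1, (3, nsamp))
for i, st in enumerate(stations):
    onset = int(np.linalg.norm(st - true_src) / velocity / dt) + 500
    traces[i, onset:onset + 20] += 5.0                           # impulsive arrival
grid = [np.array([x, y], dtype=float) for x in range(11) for y in range(11)]
print(locate(traces, stations, grid, dt, velocity))
```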


1992 ◽  
Vol 31 (3) ◽  
pp. 525-543 ◽  
Author(s):  
R. Günther ◽  
B. Schapiro ◽  
P. Wagner

1992 ◽  
Vol 24 (2) ◽  
pp. 289-304 ◽  
Author(s):  
P J Densham ◽  
G Rushton

Solution techniques for location-allocation problems usually are not part of microcomputer-based geoprocessing systems because of the large volumes of data to be processed and stored and the complexity of the algorithms. In this paper, it is shown that processing costs for the most accurate heuristic location-allocation algorithm can be drastically reduced by exploiting the spatial structure of location-allocation problems. The strategies used, preprocessing interpoint distance data as both candidate and demand strings and using these strings to update an allocation table, allow the solution of large problems (3000 nodes) in a microcomputer-based, interactive decision-making environment. Moreover, these strategies yield solution times that increase approximately linearly with problem size. Tests on four network problems validate these claims.
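The core data structures, distance strings feeding an allocation table, can be sketched as follows. The Python below is an illustrative reconstruction under simplifying assumptions (dictionary-based demand strings and a single nearest-facility rule); it is not the authors' implementation.

```python
# Sketch of demand strings and an allocation-table lookup, in the spirit of
# preprocessing interpoint distances for a location-allocation heuristic.
from typing import Dict, List, Tuple

def build_demand_strings(dist: Dict[int, Dict[int, float]]) -> Dict[int, List[Tuple[int, float]]]:
    """For each demand node, store candidate facilities sorted by distance."""
    return {d: sorted(cands.items(), key=lambda kv: kv[1]) for d, cands in dist.items()}

def allocate(demand_strings, open_facilities):
    """Allocation table: demand node -> (nearest open facility, distance).
    Walking the pre-sorted string stops at the first open facility."""
    table = {}
    for d, string in demand_strings.items():
        for cand, dist_dc in string:
            if cand in open_facilities:
                table[d] = (cand, dist_dc)
                break
    return table

# Toy example: 3 demand nodes, candidate facilities 10/11/12, with 10 and 12 open.
dist = {1: {10: 2.0, 11: 5.0, 12: 9.0},
        2: {10: 7.0, 11: 1.0, 12: 3.0},
        3: {10: 4.0, 11: 6.0, 12: 2.5}}
strings = build_demand_strings(dist)
print(allocate(strings, open_facilities={10, 12}))
```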


2021 ◽  
Vol 4 ◽  
Author(s):  
Fan Zhang ◽  
Melissa Petersen ◽  
Leigh Johnson ◽  
James Hall ◽  
Sid E. O’Bryant

Driven by massive datasets that comprise biomarkers from both blood and magnetic resonance imaging (MRI), the need for advanced learning algorithms and accelerator architectures, such as GPUs and FPGAs, has increased. Machine learning (ML) methods have delivered remarkable predictive performance for the early diagnosis of Alzheimer’s disease (AD). Although ML has improved the accuracy of AD prediction, the complexity of ML algorithms also grows, for example through hyperparameter tuning, which in turn increases computational cost. Thus, accelerating high-performance ML for AD is an important research challenge facing these fields. This work reports a multicore high-performance support vector machine (SVM) hyperparameter tuning workflow with 100 times repeated 5-fold cross-validation for speeding up ML for AD. For demonstration and evaluation purposes, the high-performance hyperparameter tuning model was applied to public MRI data for AD and included demographic factors such as age, sex and education. Results showed that computational efficiency increased by 96%, which helps to shed light on future diagnostic AD biomarker applications. The high-performance hyperparameter tuning model can also be applied to other ML algorithms such as random forest, logistic regression, xgboost, etc.
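The tuning loop described, an SVM grid search under 100 times repeated 5-fold cross-validation spread across CPU cores, can be sketched with scikit-learn as below. The synthetic dataset, parameter grid, and scoring choice are placeholders and do not reproduce the paper's workflow or data.

```python
# Sketch of multicore SVM hyperparameter tuning with 100x repeated 5-fold CV.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, RepeatedStratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data standing in for MRI features plus age, sex, and education.
X, y = make_classification(n_samples=200, n_features=20, random_state=0)

cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=100, random_state=0)
param_grid = {"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.01, 0.001]}

search = GridSearchCV(
    make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    param_grid,
    cv=cv,
    n_jobs=-1,           # spread the 100x5 folds across all CPU cores
    scoring="accuracy",
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```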


2000 ◽  
Vol 137 (1-2) ◽  
pp. 62-69 ◽  
Author(s):  
C. Adami ◽  
N.J. Cerf
