Complexity of Algorithms
Recently Published Documents


TOTAL DOCUMENTS: 97 (FIVE YEARS: 27)

H-INDEX: 11 (FIVE YEARS: 0)

2021 ◽  
pp. 11-20
Author(s):  
L. L. Bosova ◽  
D. I. Pavlov ◽  
T. V. Tkach ◽  
K. V. Butarev

The article discusses approaches to organizing pre-professional training of students in grades 10–11 within the framework of the project "IT class in Moscow school", whose task is to create a platform for training specialists in the IT sphere. The basis for teaching informatics in the project is the discipline "Basic informatics course", designed for two years of study at four hours a week. Thematic course planning is presented in the article. The content of the new textbook for the 11th grade is described in detail, including the following topics: "Graphics and multimedia", "Files and file system", "Modeling and game theory", "Databases", "Web programming", "Non-uniform coding and error-correcting codes", "Complexity of algorithms", "Algorithms", "Programming paradigms".


2021 ◽  
Vol 4 ◽  
Author(s):  
Fan Zhang ◽  
Melissa Petersen ◽  
Leigh Johnson ◽  
James Hall ◽  
Sid E. O’Bryant

Driven by massive datasets that comprise biomarkers from both blood and magnetic resonance imaging (MRI), the need for advanced learning algorithms and accelerator architectures, such as GPUs and FPGAs, has increased. Machine learning (ML) methods have delivered remarkable predictive power for the early diagnosis of Alzheimer’s disease (AD). Although ML has improved the accuracy of AD prediction, it also raises the complexity of the algorithms involved, for example through hyperparameter tuning, which in turn increases computational cost. Accelerating high-performance ML for AD is therefore an important research challenge facing these fields. This work reports a multicore high-performance support vector machine (SVM) hyperparameter tuning workflow with 100-times-repeated 5-fold cross-validation for speeding up ML for AD. For demonstration and evaluation purposes, the high-performance hyperparameter tuning model was applied to public MRI data for AD and included demographic factors such as age, sex and education. Results showed that computational efficiency increased by 96%, which helps to shed light on future diagnostic AD biomarker applications. The high-performance hyperparameter tuning model can also be applied to other ML algorithms such as random forest, logistic regression, xgboost, etc.
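The paper's exact workflow is not reproduced here, but a minimal sketch of such a multicore SVM hyperparameter search with 100-times-repeated 5-fold cross-validation, written with scikit-learn on synthetic stand-in data (the grid and features below are illustrative assumptions, not the authors' settings), might look as follows:

```python
# Minimal sketch: multicore SVM hyperparameter tuning with
# 100-times-repeated 5-fold cross-validation (scikit-learn).
# Synthetic data stands in for the MRI + demographic features.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, RepeatedStratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Hypothetical hyperparameter grid; the paper's grid is not specified here.
param_grid = {"svc__C": [0.1, 1, 10, 100], "svc__gamma": ["scale", 0.01, 0.001]}

cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=100, random_state=0)
search = GridSearchCV(
    make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    param_grid,
    cv=cv,
    n_jobs=-1,          # use all available cores
    scoring="accuracy",
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```

Passing `n_jobs=-1` is what distributes the repeated folds and grid points across cores; the repeated folds reduce the variance of the selected hyperparameters at the cost of many more fits.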


2021 ◽  
pp. 134-141
Author(s):  
Ш.С. Фахми ◽  
Н.В. Шаталова ◽  
Е.В. Костикова ◽  
Н.Ю. Пышкина ◽  
Ю.И. Васильев

At the present stage of development of intelligent marine technologies, the video image processing system needs to include two subsystems for transmitting video information of marine scenes: 1) one based on the spectral conversion of signals from the spatial domain to the frequency domain, for rapid delivery of video information obtained from various underwater and surface surveillance cameras; 2) one based on spatial processing methods, without switching to the spectral domain, for transmitting selected key points of objects in the images. The most important feature of these subsystems is that automated processing of video information improves the information quality indicators of marine video systems: the accuracy of visual data, the bit rate of transmission over communication channels, and the computational complexity of the algorithms for analyzing and transmitting video information. The study presents algorithms for spectral and spatial processing of video information and evaluates the efficiency of these image processing algorithms. It also reports simulation results for the algorithms and a comparative evaluation of the information indicators of intelligent marine video systems: accuracy, bit rate and computational complexity of marine image processing video systems.
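A minimal sketch of the core idea behind the first, spectral subsystem, assuming a 2D DCT and a simple low-frequency cutoff (the paper's actual transform, coder and key-point pipeline are not specified here), could look like this:

```python
# Sketch of the "spectral" subsystem idea: transform an image tile to the
# frequency domain, keep only low-frequency coefficients for fast delivery,
# then reconstruct. The paper's actual transform and coder are not given here.
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
tile = rng.random((64, 64))            # stand-in for a camera frame tile

coeffs = dctn(tile, norm="ortho")      # spatial -> frequency domain
k = 16                                 # hypothetical cutoff: keep a 16x16 low-frequency block
compact = np.zeros_like(coeffs)
compact[:k, :k] = coeffs[:k, :k]       # coefficients actually transmitted

recon = idctn(compact, norm="ortho")   # receiver-side reconstruction
print("mean abs error:", np.abs(recon - tile).mean())
```

The second, spatial subsystem would instead detect and transmit only key points of objects, avoiding the transform entirely; the trade-off between the two is exactly the accuracy / bit rate / computational complexity balance the abstract describes.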


2021 ◽  
Vol 10 (3) ◽  
pp. 78-85
Author(s):  
Ismail Sati Alom Harahap

One aspect of success in data communication is security. Data security can be achieved using steganography, a way of hiding messages in media such that other people do not realize a message is there. One of the many algorithms used in steganography is the Least Significant Bit (LSB) method. In this research, the authors modify the LSB algorithm with an Alternate Insertion method: the embedding and extraction processes change the RGB values at each pixel of the cover image, inserting the bits of the confidential file alternately until the whole message is embedded. The steganography application takes a password that serves as the key, so the message cannot be opened by anyone other than the addressee and the data remain confidential. The parameters used to measure the performance of Least Significant Bit with Alternate Insertion are program runtime and Big-Θ analysis of the algorithms. The message insertion application was built in the C# (C Sharp) programming language. The results show that the extraction process is faster than the embedding process, and the larger the image used, the longer it takes to insert the message. The time complexity (Big Θ) of the embedding and extraction processes obtained in testing the system is T(n) = Θ(xy).
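A minimal sketch of plain LSB embedding and extraction on a numpy image array is shown below (in Python rather than the paper's C#); the alternate-insertion order and password keying are not specified in the abstract and are omitted here. Both functions touch each pixel value at most once, which matches the reported Θ(xy) behaviour in the image dimensions.

```python
# Minimal LSB embedding/extraction sketch on a numpy RGB image array.
# The paper's "alternate insertion" order and password keying are not
# reproduced; this walks the flattened channel values in order.
import numpy as np

def embed_lsb(img: np.ndarray, message: bytes) -> np.ndarray:
    """Hide `message` in the least significant bits of a uint8 image."""
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    flat = img.flatten()                       # copy of the cover image
    if bits.size > flat.size:
        raise ValueError("message too large for cover image")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # overwrite LSBs
    return flat.reshape(img.shape)

def extract_lsb(img: np.ndarray, n_bytes: int) -> bytes:
    """Read back `n_bytes` hidden by embed_lsb."""
    bits = img.flatten()[: n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

cover = np.random.default_rng(0).integers(0, 256, (32, 32, 3), dtype=np.uint8)
stego = embed_lsb(cover, b"secret")
assert extract_lsb(stego, 6) == b"secret"
```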


Author(s):  
Christopher Yeates ◽  
Cornelia Schmidt-Hattenberger ◽  
Wolfgang Weinzierl ◽  
David Bruhn

Designing low-cost network layouts is an essential step in planning linked infrastructure. For the case of capacitated trees, such as oil or gas pipeline networks, the cost is usually a function of both pipeline diameter (i.e. the ability to carry flow, or transferred capacity) and pipeline length. Even for incompressible, steady flow, minimizing cost becomes particularly difficult because the network topology itself dictates the local flow material balances, rendering the optimization space non-linear. The combinatorial nature of potential trees requires graph optimization heuristics to achieve good solutions in reasonable time. In this work we compare network optimization heuristics and metaheuristics known from the literature for finding minimum-cost capacitated trees without Steiner nodes, and propose novel algorithms, including a metaheuristic based on transferring edges of high-valency nodes. Our metaheuristic outperforms the similar algorithms studied, especially for larger graphs, usually producing a significantly higher proportion of optimal solutions, while remaining in line with the time complexity of algorithms found in the literature. Data points for graph node positions and capacities are first randomly generated and then obtained from the German emissions trading CO2 source registry. As political will for the use and storage of hard-to-abate industry CO2 emissions grows, efficient network design methods become relevant for new large-scale CO2 pipeline networks.
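As a rough illustration of the cost model such heuristics optimize, the sketch below evaluates a capacitated tree whose layout is a plain Euclidean minimum spanning tree: each edge carries the total capacity of the subtree behind it, and its cost grows with length and sublinearly with flow. The node data, sink choice and cost exponent are hypothetical, and the paper's metaheuristic is not reproduced.

```python
# Baseline sketch: evaluate the cost of a capacitated tree. A Euclidean
# minimum spanning tree stands in for the optimized layout; the cost
# exponent 0.6 is a hypothetical economy-of-scale assumption.
import math
import networkx as nx

# Hypothetical CO2 sources: (x, y, captured capacity); node 0 is the sink.
nodes = {0: (0.0, 0.0, 0.0), 1: (1.0, 0.5, 3.0), 2: (2.0, 2.0, 1.5),
         3: (0.5, 2.5, 2.0), 4: (3.0, 1.0, 4.0)}

G = nx.complete_graph(len(nodes))
for u, v in G.edges:
    (xu, yu, _), (xv, yv, _) = nodes[u], nodes[v]
    G[u][v]["length"] = math.hypot(xu - xv, yu - yv)

tree = nx.minimum_spanning_tree(G, weight="length")   # baseline layout
directed = nx.bfs_tree(tree, 0)                       # orient edges away from the sink

total_cost = 0.0
for parent, child in directed.edges:
    subtree = nx.descendants(directed, child) | {child}
    flow = sum(nodes[n][2] for n in subtree)          # capacity carried by this edge
    total_cost += tree[parent][child]["length"] * flow ** 0.6
print(f"baseline layout cost: {total_cost:.2f}")
```

Because the flow on every edge depends on the tree's topology, moving a single edge changes the cost of many others, which is why the optimization space is non-linear and heuristics such as the edge-transfer metaheuristic are needed.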


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Feng Tian ◽  
Ying Li ◽  
Jing Wang ◽  
Wei Chen

An improved blood vessel segmentation algorithm based on traditional Frangi filtering and mathematical morphology was proposed to address the low accuracy of automatic blood vessel segmentation in fundus retinal images and the high complexity of existing algorithms. First, a globally enhanced image was generated using contrast-limited adaptive histogram equalization of the retinal image. An improved Frangi Hessian model was constructed by introducing the scale equivalence factor and the eigenvector direction angle of the Hessian matrix into the traditional Frangi filtering algorithm, enhancing the blood vessels in the globally enhanced image. Next, noise around small blood vessels was eliminated with the improved mathematical morphology method. Then, blood vessels were segmented using the Otsu threshold method. The improved algorithm was tested on the public DRIVE and STARE data sets. According to the test results, the average segmentation accuracy, sensitivity, and specificity of retinal images are 95.54%, 69.42%, and 98.02% on DRIVE and 94.92%, 70.19%, and 97.71% on STARE, respectively. The improved algorithm achieved high average segmentation accuracy and low complexity while delivering promising segmentation sensitivity, and it segments retinal vessels more accurately than other algorithms.
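A minimal pipeline in the spirit of this approach, using scikit-image's standard (not improved) Frangi filter together with CLAHE enhancement and Otsu thresholding, could be sketched as follows; it is a stand-in under those assumptions, not the authors' implementation.

```python
# Sketch: CLAHE enhancement -> standard Frangi vesselness -> Otsu threshold,
# using scikit-image and its downloadable retina sample image.
from skimage import data, exposure, filters
from skimage.color import rgb2gray

retina = rgb2gray(data.retina())                 # sample fundus image
enhanced = exposure.equalize_adapthist(retina)   # contrast-limited adaptive hist. eq.
vesselness = filters.frangi(enhanced, black_ridges=True)  # vessels darker than background
mask = vesselness > filters.threshold_otsu(vesselness)
print("vessel pixel fraction:", mask.mean())
```

The paper's improvements (scale equivalence factor, eigenvector direction angle, refined morphology) would replace the plain `filters.frangi` call and add a morphological clean-up step before thresholding.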


2021 ◽  
Vol 6 (5(55)) ◽  
pp. 20-21
Author(s):  
Lyubov Petrovna Afanasyeva

The aim of the study is to compare the best libraries for exporting models from a graphics editor and creating WebGL objects from them, defining the principles of their use and the distinctive features of each. The following empirical methods were used: observing the development process with the studied tools, comparing the complexity of algorithms, and describing the features of each tool. The article discusses options for placing ready-made 3D models created with the Blender graphics editor on a website and the possibilities for subsequent work with them. The relevance of using the following libraries is analyzed: Blend4Web and Three.js. The use of the relatively new Verge3D engine, which works on top of these libraries, is also described; it replaces the programmer's actions with an interface understandable to a 3D artist.


T-Comm ◽  
2021 ◽  
Vol 15 (1) ◽  
pp. 4-10
Author(s):  
Vitaly B. Kreyndelin ◽  
Elena D. Grigorieva

Algorithms for implementing vector-matrix multiplication are presented, intended for use in banks (sets) of digital filters. These algorithms provide significant savings in computational cost over traditional algorithms, and the reduction in computational complexity is achieved without any performance loss in the filter banks. The proposed algorithms are built on the previously known Winograd method for multiplying real matrices and vectors and on two versions of the 3M method for multiplying complex matrices and vectors; ways of combining these known methods to build digital filter banks are considered. An analysis of the computational complexity of these approaches shows that, compared with a traditional implementation of a bank (set) of digital filters, complexity can be reduced by a factor of about 2.66 on a processor without a hardware multiplier and by a factor of about 1.33 on a processor with a hardware multiplier; these figures are markedly better than those of known algorithms. The sensitivity of the proposed algorithms to the rounding errors that arise in digital signal processing was also analyzed. Based on this analysis, an algorithm is selected whose computational complexity is lower than that of a traditional algorithm while its sensitivity to rounding errors is the same. Recommendations are given on its practical application in the development of a bank (set) of digital filters.
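The classic 3M idea referred to above can be illustrated with a short sketch: a complex matrix-vector product computed from three real matrix-vector products instead of four. This is only the textbook form of the method; the paper's specific variants and their combination with Winograd's algorithm for filter banks are not reproduced here.

```python
# Sketch of the classic 3M trick: a complex matrix-vector product from
# three real products. For (Ar + i*Ai)(xr + i*xi):
#   m1 = Ar@xr, m2 = Ai@xi, m3 = (Ar+Ai)@(xr+xi)
#   real part = m1 - m2, imaginary part = m3 - m1 - m2.
import numpy as np

def complex_matvec_3m(A: np.ndarray, x: np.ndarray) -> np.ndarray:
    Ar, Ai = A.real, A.imag
    xr, xi = x.real, x.imag
    m1 = Ar @ xr
    m2 = Ai @ xi
    m3 = (Ar + Ai) @ (xr + xi)
    return (m1 - m2) + 1j * (m3 - m1 - m2)

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
x = rng.standard_normal(8) + 1j * rng.standard_normal(8)
assert np.allclose(complex_matvec_3m(A, x), A @ x)
```

Trading a multiplication for extra additions in this way pays off most on processors without a hardware multiplier, which is consistent with the larger (2.66x) gain reported for that case; the extra additions are also why the rounding-error sensitivity of such schemes has to be checked, as the article does.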

