random search
Recently Published Documents


TOTAL DOCUMENTS: 1083 (FIVE YEARS: 322)
H-INDEX: 43 (FIVE YEARS: 6)

Author(s): Hamza Abubakar, Abdullahi Muhammad, Smaiala Bello

The Boolean Satisfiability Problem (BSAT) is one of the most important decision problems in mathematical logic and the computational sciences: deciding whether or not a Boolean formula has a satisfying assignment. The Hopfield neural network (HNN) is a major type of artificial neural network, widely used to solve optimization and decision problems through its energy-minimization mechanism. Existing standalone models offer a non-versatile framework, since the fundamental Hopfield network employs random search in its training stage and can become trapped in local optima. In this study, the Ant Colony Optimization (ACO) algorithm, a probabilistic metaheuristic inspired by the behavior of real ants, is incorporated into the training phase of the HNN to accelerate training for Random Boolean k-Satisfiability reverse analysis (RANkSATRA) for logic mining. The proposed hybrid model is evaluated on the robustness and accuracy of the induced logic obtained from an agricultural soil fertility data set (ASFDS). The experimental simulation results reveal that ACO can work effectively with the HNN for Random 3-Satisfiability reverse analysis, achieving 87.5% classification accuracy.
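To make the baseline concrete, here is a minimal sketch of the pure random search that such training stages rely on, applied to a toy 3-SAT instance; the clauses and parameters below are illustrative assumptions, not taken from the paper:

```python
import random

# Hypothetical toy 3-SAT instance: each clause is a tuple of signed variable
# indices (positive = variable, negative = its negation), variables 1..4.
clauses = [(1, -2, 3), (-1, 2, 4), (2, -3, -4), (1, 3, 4)]

def satisfied(assignment, clauses):
    """True if every clause has at least one satisfied literal."""
    return all(
        any((lit > 0) == assignment[abs(lit)] for lit in clause)
        for clause in clauses
    )

def random_search_sat(clauses, n_vars, max_tries=10_000, seed=0):
    """Pure random search: sample assignments until the formula holds."""
    rng = random.Random(seed)
    for _ in range(max_tries):
        assignment = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
        if satisfied(assignment, clauses):
            return assignment
    return None  # may fail: random search has no memory or guidance

solution = random_search_sat(clauses, n_vars=4)
```

The lack of memory in this loop is exactly the weakness the abstract points at: every sample is drawn from scratch, which is what a pheromone-guided method such as ACO is meant to remedy.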


2022, Vol 3
Author(s): Luís Vinícius de Moura, Christian Mattjie, Caroline Machado Dartora, Rodrigo C. Barros, Ana Maria Marques da Silva

Both reverse transcription-PCR (RT-PCR) and chest X-rays are used in the diagnosis of coronavirus disease-2019 (COVID-19). However, COVID-19 pneumonia does not have a defined set of radiological findings. Our work aims to investigate radiomic features and classification models to differentiate chest X-ray images of COVID-19-based pneumonia from other types of lung patterns. The goal is to provide grounds for understanding the distinctive COVID-19 radiographic texture features using supervised tree-based ensemble machine learning methods through the interpretable Shapley Additive Explanations (SHAP) approach. We use 2,611 COVID-19 chest X-ray images and 2,611 non-COVID-19 chest X-rays. After segmenting the lungs laterally and into three zones, histogram normalization is applied and radiomic features are extracted. SHAP recursive feature elimination with cross-validation is used to select features. Hyperparameters of the XGBoost and Random Forest ensemble tree models are optimized using random search. The best classification model was XGBoost, with an accuracy of 0.82 and a sensitivity of 0.82. The explainable model showed the importance of the middle left and superior right lung zones in classifying COVID-19 pneumonia from other lung patterns.
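The random-search step can be sketched generically. The search space below mirrors typical XGBoost-style hyperparameters, and the score function is only a placeholder for cross-validated accuracy; all names and values are illustrative assumptions, not the paper's actual settings:

```python
import random

# Hypothetical hyperparameter space (stand-in for the paper's real one).
space = {
    "n_estimators": [100, 200, 400],
    "max_depth": [3, 5, 7],
    "learning_rate": [0.01, 0.1, 0.3],
}

def score(params):
    """Placeholder objective: in practice this would run cross-validation
    and return the mean validation accuracy for these parameters."""
    return -abs(params["max_depth"] - 5) - abs(params["learning_rate"] - 0.1)

def random_search(space, score, n_iter=20, seed=1):
    """Draw n_iter random configurations and keep the best-scoring one."""
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_iter):
        params = {k: rng.choice(v) for k, v in space.items()}
        s = score(params)
        if s > best_score:
            best_params, best_score = params, s
    return best_params, best_score

best_params, best_score = random_search(space, score)
```

Unlike grid search, the cost here is fixed by `n_iter` rather than by the size of the grid, which is why random search is a common default for tree-ensemble tuning.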


2022
Author(s): Kaveri Mahapatra, Xiaoyuan Fan, Xinya Li, Yunzhi Huang, Qiuhua Huang

2021, Vol 21 (2), pp. 122
Author(s): Hiya Nalatissifa, Hilman Ferdinandus Pardede

Customer churn is one of the most important problems in the business world, especially in the telecommunications industry, because it strongly influences company profits. Acquiring new customers is much more difficult and expensive for a company than retaining existing ones. Machine learning, a part of data mining and a sub-field of artificial intelligence, is widely used to make predictions, including predictions of customer churn. Deep neural networks (DNN) have been used for churn prediction, but selecting hyperparameters during modeling requires considerable time and effort, making the process challenging for the researcher. The purpose of this study is therefore to propose a better architecture for the DNN algorithm by using a hyperparameter tuner to obtain more nearly optimal hyperparameters. The hyperparameter tuning method used is random search, applied to determine the number of nodes in each hidden layer, the dropout, and the learning rate. In addition, this study uses three variations of the number of hidden layers; two variations of the activation function, namely the rectified linear unit (ReLU) and the sigmoid; and five variations of the optimizer (stochastic gradient descent (SGD), adaptive moment estimation (Adam), the adaptive gradient algorithm (Adagrad), Adadelta, and root mean square propagation (RMSprop)). Experiments show that the DNN algorithm tuned with random search achieves 83.09% accuracy using three hidden layers with [20, 35, 15] nodes, the RMSprop optimizer, a dropout of 0.1, and a learning rate of 0.01, with the fastest tuning time of 21 seconds, outperforming the k-nearest neighbor (K-NN), random forest (RF), and decision tree (DT) comparison algorithms.
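A random-search tuner for this kind of DNN space amounts to sampling whole configurations. The sketch below draws configurations from a space shaped like the one described above (activations and optimizers follow the abstract; the node ranges and dropout/learning-rate grids are assumed for illustration); each trial would then be trained and scored, and the best kept:

```python
import random

def sample_config(rng):
    """Draw one random DNN configuration from the assumed search space."""
    n_layers = rng.choice([1, 2, 3])
    return {
        "hidden_nodes": [rng.randrange(10, 41, 5) for _ in range(n_layers)],
        "activation": rng.choice(["relu", "sigmoid"]),
        "optimizer": rng.choice(["sgd", "adam", "adagrad",
                                 "adadelta", "rmsprop"]),
        "dropout": rng.choice([0.1, 0.2, 0.3]),
        "learning_rate": rng.choice([0.001, 0.01, 0.1]),
    }

rng = random.Random(42)
trials = [sample_config(rng) for _ in range(30)]
# Each trial would be trained and evaluated; the best configuration is kept.
```

Sampling the architecture (layer count and widths) together with the training hyperparameters is what lets a single tuner cover both at once.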


Author(s): Aleksandar Jovanović, Dušan Teodorović

A fixed-time traffic control system for the superstreet intersection (also known as the restricted crossing U-turn, or J-turn, intersection) was developed in this study. The optimal (or near-optimal) values of cycle length, splits, and offsets were found by minimizing the experienced travel time of all network users traveling through the superstreet intersection. The optimization procedure used was based on the bee colony optimization (BCO) metaheuristic. BCO is a stochastic, random-search, population-based technique inspired by the foraging behavior of honey bees, and belongs to the class of swarm intelligence methods. A set of numerical experiments was performed, analyzing superstreet configurations that allowed direct left turns from the major street as well as configurations with no direct left turns. The results showed that BCO outperformed the traditional Webster approach in the superstreet geometrical configurations considered.
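A deliberately simplified sketch of the BCO idea: scout bees sample candidate cycle lengths, then recruited bees explore around the best solution found so far, mimicking the foraging dance. The delay model and all parameters below are assumptions standing in for the paper's experienced-travel-time objective:

```python
import random

def travel_time(cycle):
    """Hypothetical smooth delay model with its minimum near a 90 s cycle."""
    return (cycle - 90.0) ** 2 / 100.0 + 20.0

def bco_minimize(n_bees=10, n_iters=50, seed=3):
    rng = random.Random(seed)
    # Scouting: bees start from random cycle lengths in [30, 150] seconds.
    best = min((rng.uniform(30.0, 150.0) for _ in range(n_bees)),
               key=travel_time)
    for _ in range(n_iters):
        # Forward pass: recruited bees explore around the best solution.
        candidates = [best + rng.uniform(-5.0, 5.0) for _ in range(n_bees)]
        # Backward pass: recruitment keeps the best solution found so far.
        best = min(candidates + [best], key=travel_time)
    return best

cycle = bco_minimize()
```

A full BCO implementation would also include partial-solution construction and loyalty probabilities; this sketch keeps only the explore-and-recruit core.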


Author(s): O. Gorobchenko

The article is devoted to the problem of implementing intelligent control systems in transport. An important task is assessing the information parameters of such control systems. Existing work does not address the determination of one of the basic parameters of locomotive control systems: the information value of individual features of a train situation. This makes it impossible to determine the order in which input signals should be processed or to assess their contribution to a control decision. Moreover, informativeness is a relative quantity: a given feature carries different information value for classifying different train situations, and its informativeness may also depend on the type of decision rule used in the classification procedure. The quality of recognition of the train situation in which the locomotive crew finds itself depends on the quality of the features used by the classification system. The decisive criterion for feature informativeness in pattern recognition is the magnitude of losses from errors. To determine the range of the most informative features of train situations, the method of random search with adaptation was used. The results make it possible to optimize the operation of automated and intelligent train control systems by reducing the amount of computation and simplifying their algorithms.
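Random search with adaptation can be sketched for feature selection as follows: feature subsets are sampled at random, and the sampling probabilities are nudged toward subsets that achieve a lower loss. The loss function, the set of "truly informative" features, and all parameters below are illustrative assumptions, not the article's data:

```python
import random

def subset_loss(subset, informative=frozenset({0, 2, 5})):
    """Hypothetical loss: missing an informative feature costs 1,
    carrying an uninformative one costs 0.1 (stand-in for error losses)."""
    return len(informative - subset) + 0.1 * len(subset - informative)

def rsa_select(n_features=8, n_rounds=300, seed=7):
    rng = random.Random(seed)
    probs = [0.5] * n_features        # per-feature inclusion probability
    best_subset = set(range(n_features))
    best_loss = subset_loss(best_subset)
    for _ in range(n_rounds):
        subset = {i for i in range(n_features) if rng.random() < probs[i]}
        loss = subset_loss(subset)
        if loss <= best_loss:
            best_subset, best_loss = subset, loss
            # Adaptation: shift sampling probabilities toward the winner.
            for i in range(n_features):
                target = 1.0 if i in subset else 0.0
                probs[i] = 0.9 * probs[i] + 0.1 * target
    return best_subset

best_subset = rsa_select()
```

The adaptation step is what distinguishes this from plain random search: later samples concentrate on features that have already proved their information value.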


F1000Research, 2021, Vol 10, pp. 1297
Author(s): Md. Shabiul Islam, Most Tahamina Khatoon, Kazy Noor-e-Alam Siddiquee, Wong Hin Yong, Mohammad Nurul Huda

Problem solving and modelling with traditional substitution methods is time consuming for large systems of simultaneous equations. For such large-scale global-optimization problems, the Simulated Annealing (SA) algorithm and the Genetic Algorithm (GA), metaheuristic random-search techniques, perform faster. This study therefore applies SA to the problem of solving linear equations and evaluates its performance against GAs, a population-based search metaheuristic widely used for Travelling Salesman Problems (TSP), noise reduction, and more. This paper compares the performance of SA and GA in solving real scientific problems: systems of simultaneous linear equations with different numbers of unknown variables, simulated in MATLAB using the two algorithms. In all experiments, SA used randomly generated initial solution sets and GA used a random population of solution sets. The comparison evaluated how well each algorithm's optimization provided solution sets for the given systems. On the sets of simultaneous equations tested, SA was superior to GA, with a lower fitness-function evaluation count in MATLAB simulation. Since complex non-linear systems of equations were not the primary focus of this research, the performance of SA and GA on such equations will be addressed in future work. Even though GA maintained a relatively lower average number of generations than SA, SA still outperformed GA with a considerably lower fitness-function evaluation count. Although SA sometimes converges slowly, it is still efficient for solving such systems of simultaneous equations, and in terms of computational complexity SA was far superior to GA.
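The setup can be sketched in a few lines (in Python rather than the paper's MATLAB): instead of solving A·x = b by substitution, simulated annealing minimizes the squared residual as a fitness function. The 3×3 system, the cooling schedule, and the step size below are illustrative assumptions:

```python
import math
import random

# Hypothetical toy system A·x = b (exact solution x = (2/3, 7/3, 7/3)).
A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
b = [5.0, 10.0, 7.0]

def residual(x):
    """Sum of squared residuals of A·x - b (the fitness function)."""
    return sum((sum(A[i][j] * x[j] for j in range(3)) - b[i]) ** 2
               for i in range(3))

def anneal(max_steps=20_000, seed=11):
    rng = random.Random(seed)
    x = [rng.uniform(-5.0, 5.0) for _ in range(3)]
    best, best_cost = x[:], residual(x)
    temp = 1.0
    for _ in range(max_steps):
        temp = max(1e-4, 0.999 * temp)          # geometric cooling
        cand = [xi + rng.gauss(0.0, 0.1) for xi in x]
        d = residual(cand) - residual(x)
        # Accept improvements always; accept worse moves with prob e^(-d/T).
        if d < 0 or rng.random() < math.exp(-d / temp):
            x = cand
            if residual(x) < best_cost:
                best, best_cost = x[:], residual(x)
    return best, best_cost

best, best_cost = anneal()
```

Each annealing step costs one fitness evaluation, which is the quantity the study uses to compare SA against GA's per-generation population evaluations.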


2021
Author(s): Assima Rakhimbekova, Anton Lopukhov, Natalia L. Klyachko, Alexander Kabanov, Timur I. Madzhidov, et al.

Active learning (AL) has become a subject of active recent research in both industry and academia as an efficient approach to the rapid design and discovery of novel chemicals, materials, and polymers. The key advantages of this approach are its ability to (i) employ relatively small datasets for model development, (ii) iterate between model development and model assessment using small external datasets that can be either generated in focused experimental studies or formed from subsets of the initial training data, and (iii) progressively evolve models toward increasingly reliable predictions and the identification of novel chemicals with the desired properties. Herein, we first compared various AL protocols for their effectiveness in finding biologically active molecules using synthetic datasets. We investigated the dependency of AL performance on the size of the initial training set, the relative complexity of the task, and the choice of the initial training dataset. We found that AL techniques applied to regression modeling offer no benefit over random search, while AL used for classification tasks performs better than models built on randomly selected training sets, though it remains far from perfect. Using the best-performing AL protocol, we assessed the applicability of AL to the discovery of polymeric micelle formulations for poorly soluble drugs. Finally, the best-performing AL approach was employed to discover and experimentally validate novel binding polymers in a case study of the asialoglycoprotein receptor (ASGPR).
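The iterate-between-modeling-and-assessment loop in (ii) can be shown with a minimal pool-based sketch on an assumed toy task (nothing below comes from the paper): points with x > 0.6 are "active", the "model" is a 1-D threshold, and uncertainty sampling picks the next point to label:

```python
import random

rng = random.Random(5)
pool = [rng.random() for _ in range(200)]   # unlabeled candidate pool

def label(x):
    """Oracle: stand-in for an experimental activity measurement."""
    return x > 0.6

def fit_threshold(labeled):
    """Midpoint between the largest inactive and smallest active x."""
    pos = [x for x, y in labeled if y]
    neg = [x for x, y in labeled if not y]
    if not pos or not neg:
        return 0.5
    return (max(neg) + min(pos)) / 2.0

# Small initial training set, then ten active-learning queries.
labeled = [(x, label(x)) for x in rng.sample(pool, 5)]
for _ in range(10):
    t = fit_threshold(labeled)
    seen = {x for x, _ in labeled}
    # Uncertainty sampling: query the unlabeled point nearest the boundary.
    query = min((x for x in pool if x not in seen),
                key=lambda x: abs(x - t))
    labeled.append((query, label(query)))

final_t = fit_threshold(labeled)
```

Replacing the `min(..., key=...)` query rule with `rng.choice` turns this into the random-search baseline the study compares against.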


2021
Author(s): Luiz Carlos Felix Ribeiro, Gustavo Henrique de Rosa, Douglas Rodrigues, João Paulo Papa

Convolutional Neural Networks have been widely employed in a diverse range of computer vision-based applications, including image classification, object recognition, and object segmentation. Nevertheless, one weakness of such models concerns their hyperparameter settings, which are highly specific to each particular problem. One common approach is to employ meta-heuristic optimization algorithms to find suitable sets of hyperparameters, at the expense of an increased computational burden that is unfeasible in real-time scenarios. In this paper, we address this problem by creating Convolutional Neural Network ensembles through Single-Iteration Optimization, a fast optimization composed of only one iteration that is no more effective than a random search. Essentially, the idea is to provide the same capability offered by long-term optimizations, but without their computational loads. The results on four well-known literature datasets revealed that creating one-iteration optimized ensembles provides promising results while diminishing the time needed to achieve them.
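The ensemble idea can be sketched with stand-in models (nothing below comes from the paper): a single optimization "iteration" just draws several random configurations, and rather than iterating further, all resulting models are kept and combined by majority vote:

```python
import random

def make_model(seed):
    """Stand-in for a CNN trained under one random configuration:
    returns the true label 70% of the time, flips it otherwise."""
    r = random.Random(seed)
    return lambda y: y if r.random() < 0.7 else 1 - y

# One "iteration": sample several random configurations and keep them all.
models = [make_model(seed) for seed in range(11)]

def ensemble_predict(y_true):
    """Majority vote over the individual models' noisy predictions."""
    votes = sum(m(y_true) for m in models)
    return 1 if votes > len(models) / 2 else 0
```

With 11 independent 70%-accurate voters, the majority vote is right far more often than any single model, which is the mechanism that lets the ensemble stand in for a longer optimization.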

