INTELIGENCIA ARTIFICIAL
Latest Publications


TOTAL DOCUMENTS

558
(FIVE YEARS 57)

H-INDEX

8
(FIVE YEARS 2)

Published by IBERAMIA: Sociedad Iberoamericana de Inteligencia Artificial

1988-3064, 1137-3601

2021 ◽  
Vol 24 (68) ◽  
pp. 123-137
Author(s):  
Sami Nasser Lauar ◽  
Mario Mestria

In this work, we present a metaheuristic based on greedy and genetic algorithms to solve an application of the set covering problem (SCP): the positioning of data aggregators in smart grids. The GGH (Greedy Genetic Hybrid) is structured as a genetic algorithm, but with several modifications relative to the classic version. At the mutation step, only columns included in the solution can undergo mutation and be removed. At the recombination step, only columns from the parents' solutions are available to generate the offspring. Moreover, the greedy algorithm generates the initial population, reconstructs solutions after mutation, and generates new solutions at the recombination step. Computational results on OR-Library problems showed that the GGH reached optimal solutions for 40 out of 75 instances and obtained good, promising values on the others, with a mean gap of 1.761%.
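
A minimal sketch of the two GGH ingredients just described: a greedy construction that completes a (possibly partial) cover, and a mutation that may only remove columns already present in the solution before the greedy step repairs it. The cost-weighted column representation and the mutation rate are assumptions, not the authors' implementation.

```python
import random

def greedy_cover(universe, columns, partial=frozenset()):
    """Greedily add columns until every element of `universe` is covered.
    `columns` maps column id -> (cost, set of covered elements)."""
    chosen = set(partial)
    covered = set().union(*(columns[c][1] for c in chosen)) if chosen else set()
    while covered < universe:
        # Pick the column with the best cost per newly covered element.
        c = min((c for c in columns if c not in chosen),
                key=lambda c: columns[c][0] / max(1, len(columns[c][1] - covered)))
        chosen.add(c)
        covered |= columns[c][1]
    return chosen

def mutate(solution, universe, columns, rate=0.2):
    """GGH-style mutation: only columns already in the solution may be
    removed; the greedy procedure then repairs it back to feasibility."""
    kept = {c for c in solution if random.random() > rate}
    return greedy_cover(universe, columns, partial=frozenset(kept))
```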


2021 ◽  
Vol 24 (68) ◽  
pp. 104-122
Author(s):  
Rupinder Kaur ◽  
Anurag Sharma

Several studies have reported the use of machine learning algorithms in the detection of tuberculosis, but studies that discuss the detection of both types of TB, i.e., Pulmonary (PTB) and Extra Pulmonary Tuberculosis (EPTB), using machine learning algorithms are lacking. Therefore, an integrated system based on machine learning models is proposed in this paper to assist doctors and radiologists in interpreting patients' data to detect PTB and EPTB. Three basic machine learning algorithms, Decision Tree, Naïve Bayes, and SVM, were used for prediction, and their performance was compared. Clinical data and image data are used as input to the models; these datasets were collected from various hospitals in Jalandhar, Punjab, India. The dataset used to train the models comprises 200 patients: 90 PTB patients, 67 EPTB patients, and 43 patients with no TB. On a validation dataset of 49 patients, the Decision Tree achieved the best accuracy, 95%, in classifying PTB and EPTB.
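
A hedged sketch of the three-classifier comparison, using scikit-learn on a synthetic stand-in for the 200-patient clinical dataset (three classes: PTB, EPTB, no TB); the real clinical and image features are not reproduced here.

```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.datasets import make_classification

# Synthetic placeholder for the 200-patient, 3-class dataset.
X, y = make_classification(n_samples=200, n_features=12, n_informative=8,
                           n_classes=3, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.25,
                                                  random_state=0)

for name, model in [("Decision Tree", DecisionTreeClassifier(random_state=0)),
                    ("Naive Bayes", GaussianNB()),
                    ("SVM", SVC(kernel="rbf"))]:
    model.fit(X_train, y_train)
    acc = accuracy_score(y_val, model.predict(X_val))
    print(f"{name}: validation accuracy = {acc:.2%}")
```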


2021 ◽  
Vol 24 (68) ◽  
pp. 89-103
Author(s):  
João Batista Pacheco Junior ◽  
Henrique Mariano Costa do Amaral

The design and manual insertion of new terrestrial roads into geographic databases is a frequent activity in geoprocessing, and demand for it usually arises as the most up-to-date satellite imagery of the territory is acquired. New urban and rural occupations continually emerge, for which specific vector geometries need to be designed to characterize the cartographic inputs and accommodate the relevant associated data. It is therefore convenient to develop a computational tool that, with the help of artificial intelligence, automates as much of this task as possible, since manual editing is limited by user agility, and that operates on imagery that is usually easy and free to access. To test the feasibility of this proposal, a database of RGB images containing asphalted urban roads was presented to the K-Means++ algorithm and to the SegNet convolutional neural network, and the performance of each was evaluated and compared in terms of accuracy and IoU of road identification. Under the conditions of the experiment, K-Means++ achieved poor results, unviable for a real-life application involving tarmac detection in RGB satellite images, with average accuracy ranging from 41.67% to 64.19% and average IoU from 12.30% to 16.16%, depending on the preprocessing strategy used. The SegNet convolutional neural network, on the other hand, proved appropriate for precision applications that are not sensitive to discontinuities, achieving an average accuracy of 87.12% and an average IoU of 71.93%.
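
The two evaluation metrics used above, pixel-wise accuracy and IoU, and the unsupervised K-Means++ baseline can be sketched as follows; the random tile, cluster count, and road-cluster assignment are placeholders, not the study's dataset or configuration.

```python
import numpy as np
from sklearn.cluster import KMeans

def pixel_metrics(pred, truth):
    """Pixel-wise accuracy and IoU for a binary road mask."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    accuracy = (pred == truth).mean()
    union = np.logical_or(pred, truth).sum()
    iou = np.logical_and(pred, truth).sum() / union if union else 1.0
    return accuracy, iou

# K-Means++ baseline: cluster raw RGB pixels into road / non-road groups.
img = np.random.rand(64, 64, 3)              # placeholder RGB satellite tile
km = KMeans(n_clusters=2, init="k-means++", n_init=10, random_state=0)
labels = km.fit_predict(img.reshape(-1, 3)).reshape(64, 64)
truth = np.zeros((64, 64), bool)             # placeholder ground-truth mask
print(pixel_metrics(labels == 1, truth))
```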


2021 ◽  
Vol 24 (68) ◽  
pp. 72-88
Author(s):  
Mohammad Alshayeb ◽  
Mashaan A. Alshammari

The ongoing development of computer systems requires massive software projects. Running the components of these huge projects for testing purposes can be costly, so parameter estimation can be used instead. Software defect prediction models are crucial for software quality assurance. This study investigates the impact of dataset size and feature selection algorithms on software defect prediction models. We use two approaches to build software defect prediction models: a statistical approach and a machine learning approach with support vector machines (SVMs). The fault prediction model was built on four datasets of different sizes, and four feature selection algorithms were used. We found that applying the SVM defect prediction model to datasets with a reduced number of measures as features may enhance the accuracy of the fault prediction model, while also directing the test effort toward maintaining the most influential set of metrics. We also found that the running time of the SVM fault prediction model does not scale consistently with dataset size; having fewer metrics therefore does not guarantee a shorter execution time. The experiments show that dataset size has a direct influence on the SVM fault prediction model, although the reduced datasets performed the same as or slightly below the original datasets.
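
A hedged sketch of the setup described above: an SVM preceded by a univariate feature-selection step, assessed with cross-validation. The synthetic dataset, the choice of selector, and the number of retained metrics are assumptions, not the study's exact configuration.

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

# Placeholder for a defect dataset with 21 software metrics as features.
X, y = make_classification(n_samples=500, n_features=21, random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(f_classif, k=8)),  # keep the 8 strongest metrics
    ("svm", SVC(kernel="rbf")),
])
print(cross_val_score(pipe, X, y, cv=5).mean())
```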


2021 ◽  
Vol 24 (68) ◽  
pp. 21-32
Author(s):  
Yaming Cao ◽  
Zhen Yang ◽  
Chen Gao

Convolutional neural networks (CNNs) have shown strong learning capabilities in computer vision tasks such as classification and detection, and with the introduction of excellent detection models such as YOLO (V1, V2 and V3) and Faster R-CNN, CNNs have greatly improved detection efficiency and accuracy. However, due to the special viewing angle, small object size, few features, and complicated backgrounds, CNNs that perform well on ground-perspective datasets fail to reach good detection accuracy on remote sensing image datasets. To this end, based on the YOLO V3 model, we used feature maps of different depths as detection outputs to explore the reasons for the poor detection rate of deep neural networks on small targets in remote sensing images. We also analyzed the effect of network depth on small target detection and found that excessively deep semantic information contributes little to it. Finally, verification on the VEDAI dataset shows that fusing shallow feature maps, which carry precise location information, with deep feature maps, which carry rich semantics, can effectively improve the accuracy of small target detection in remote sensing images.
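
A minimal PyTorch sketch of the shallow/deep fusion idea the abstract describes: upsample a semantically rich deep feature map and concatenate it with a spatially precise shallow one before a detection head. The channel counts and grid sizes are illustrative, not the paper's exact YOLO V3 configuration.

```python
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    def __init__(self, shallow_ch=256, deep_ch=512, out_ch=256):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        self.fuse = nn.Conv2d(shallow_ch + deep_ch, out_ch, kernel_size=1)

    def forward(self, shallow, deep):
        deep = self.up(deep)                   # match spatial resolution
        x = torch.cat([shallow, deep], dim=1)  # fuse location + semantics
        return self.fuse(x)

head = FusionHead()
shallow = torch.randn(1, 256, 52, 52)  # early layer: precise locations
deep = torch.randn(1, 512, 26, 26)     # late layer: rich semantics
print(head(shallow, deep).shape)       # torch.Size([1, 256, 52, 52])
```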


2021 ◽  
Vol 24 (68) ◽  
pp. 1-20
Author(s):  
Jorge E Camargo ◽  
Rigoberto Sáenz

To measure the impact of the curriculum learning technique on a reinforcement learning training setup, several experiments were designed with different training curriculums adapted to the video game chosen as a case study. All were then executed on a selected game simulation platform, using two reinforcement learning algorithms and the mean cumulative reward as the performance measure. Results suggest that curriculum learning has a significant impact on the training process, increasing training times in some cases and decreasing them by up to 40% in others.
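
A schematic of a curriculum training loop, assuming hypothetical `make_env` and `train` helpers that stand in for the simulation platform and the two RL algorithms used in the study; the difficulty schedule and step budget are illustrative.

```python
def run_curriculum(stages, make_env, train, steps_per_stage=100_000):
    """Train a policy through progressively harder stages of the game.
    `make_env(d)` builds the environment at difficulty d; `train` runs
    one RL algorithm for a fixed budget, continuing from `policy`, and
    returns the updated policy plus the mean cumulative reward."""
    policy, history = None, []
    for difficulty in stages:          # e.g. [0.25, 0.5, 0.75, 1.0]
        env = make_env(difficulty)
        policy, mean_reward = train(env, policy, steps=steps_per_stage)
        history.append((difficulty, mean_reward))
    return policy, history
```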


2021 ◽  
Vol 24 (68) ◽  
pp. 53-71
Author(s):  
D. Gonzalez-Calvo ◽  
R.M. Aguilar ◽  
C. Criado-Hernandez ◽  
L.A. Gonzalez-Mendoza

The planning of industrial maintenance associated with the production of electricity is vital, as it yields a current and future snapshot of an industrial component that can be used to optimize the human, technical and economic resources of the installation. This study focuses on the degradation due to fouling of a gas turbine in the Canary Islands, and analyzes fouling levels over time based on the operating regime and local meteorological variables. In particular, we study the relationship between degradation and the suspended dust that originates in the Sahara Desert. To this end, we use a computational procedure that relies on a set of artificial neural networks to build an ensemble, using a cross-validated committees approach, to yield the compressor efficiency. The use of trained models makes it possible to know in advance how the local fouling of an industrial rotating component will evolve, which is useful for maintenance planning and for calculating the relative importance of the variables that make up the system.
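
A hedged sketch of a cross-validated committee, assuming numeric feature arrays (operating regime and meteorological variables) and an efficiency target: one MLP is trained on each cross-validation fold's training split, and the ensemble prediction is the committee mean. Network size and fold count are assumptions.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPRegressor

def fit_committee(X, y, n_folds=5):
    """Train one network per CV fold (cross-validated committees)."""
    members = []
    for train_idx, _ in KFold(n_splits=n_folds, shuffle=True,
                              random_state=0).split(X):
        net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                           random_state=0)
        members.append(net.fit(X[train_idx], y[train_idx]))
    return members

def predict_committee(members, X):
    """Committee output: average the member predictions."""
    return np.mean([m.predict(X) for m in members], axis=0)
```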


2021 ◽  
Vol 24 (68) ◽  
pp. 37-52
Author(s):  
Moussa Demba

In relational databases, it is essential to know all minimal keys, since the concept of database normalization is based on keys and functional dependencies of a relation schema. Existing algorithms for determining keys or computing the closure of arbitrary sets of attributes are generally time-consuming. In this paper we present an efficient algorithm, called KeyFinder, for solving the key-finding problem. We also propose a more direct method for computing the closure of a set of attributes. KeyFinder is based on a powerful proof procedure for finding keys called tableaux. Experimental results show that KeyFinder outperforms its predecessors in terms of search space and execution time.
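
For context, the closure computation the paper improves on can be stated in a few lines. The standard textbook algorithm below (not KeyFinder itself) repeatedly applies functional dependencies X → Y until a fixed point is reached; a set of attributes is a key exactly when its closure is the whole schema.

```python
def closure(attrs, fds):
    """`attrs` is a set of attributes; `fds` is a list of (lhs, rhs)
    pairs of attribute sets. Returns the closure of `attrs` under `fds`."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            # If the left-hand side is contained in the current closure,
            # the right-hand side is functionally determined: add it.
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

# Example: R(A, B, C, D) with A -> B and B -> C.
fds = [({"A"}, {"B"}), ({"B"}, {"C"})]
print(closure({"A"}, fds))                                 # {'A', 'B', 'C'}
print(closure({"A", "D"}, fds) == {"A", "B", "C", "D"})    # True: {A, D} is a key
```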


2021 ◽  
Vol 24 (67) ◽  
pp. 147-156
Author(s):  
Amin Rezaeipanah ◽  
Neda Boroumand

Nowadays, breast cancer is one of the leading causes of death among women worldwide. If breast cancer is detected at an early stage, long-term survival can be ensured. Numerous methods have been proposed for the early prediction of this cancer; however, efforts are still ongoing given the importance of the problem. Artificial Neural Networks (ANN) are established as some of the most dominant machine learning algorithms and are very popular for prediction and classification work. In this paper, an Intelligent Ensemble Classification method based on a Multi-Layer Perceptron neural network (IEC-MLP) is proposed for breast cancer diagnosis. The proposed method is split into two stages: parameter optimization and ensemble classification. In the first stage, the MLP Neural Network (MLP-NN) parameters, including optimal features, hidden layers, hidden nodes and weights, are optimized with an Evolutionary Algorithm (EA) to maximize classification accuracy. In the second stage, an ensemble of MLP-NN classifiers with the optimized parameters is applied to classify patients. The proposed IEC-MLP method not only helps reduce the complexity of the MLP-NN and effectively select the optimal feature subset, but also obtains the minimum misclassification cost. The classification results were evaluated with IEC-MLP on different breast cancer datasets, and the prediction results obtained were very promising (98.74% accuracy on the WBCD dataset). The proposed method also outperforms the GAANN and CAFS algorithms, as well as other state-of-the-art classifiers. In addition, IEC-MLP could be applied to the diagnosis of other cancers.
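
A simplified sketch of the ensemble-classification stage only, assuming scikit-learn (≥ 1.2 for the `estimator` parameter name) and the public WBCD-style dataset bundled with it; the evolutionary optimization of features, layers and weights described above is not reproduced here.

```python
from sklearn.ensemble import BaggingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score

# Wisconsin breast cancer data shipped with scikit-learn (WBCD-style).
X, y = load_breast_cancer(return_X_y=True)

# Majority vote over MLPs trained on bootstrap samples of the data.
ensemble = BaggingClassifier(
    estimator=MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000),
    n_estimators=10, random_state=0)
print(cross_val_score(ensemble, X, y, cv=5).mean())
```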


2021 ◽  
Vol 24 (67) ◽  
pp. 121-128
Author(s):  
Gerardo Ernesto Rolong Agudelo ◽  
Carlos Enrique Montenegro Marin ◽  
Paulo Alonso Gaona-Garcia

In Colombia, as in the rest of the world, the number of missing persons is a worrying and growing phenomenon: every year, thousands of people are reported missing worldwide. That this keeps happening suggests that there are still analyses that have not been performed and tools that have not been considered for finding patterns in missing-person information. This article studies how informatics and computational tools can be used to help find missing persons and what patterns can be found in missing-person datasets, using open data about missing persons in Colombia in 2017 as a case study. The goal is to review how computational tools such as data mining and image analysis can assist in finding missing persons and in drawing patterns from the available information. To this end, a review of the state of the art of image analysis in real-world applications was first made in order to explore the possibilities of studying the photos of missing persons; then a data mining process was conducted with data on missing persons in Colombia to produce a set of decision rules that can explain the cause of disappearance. The generated decision rules suggest links between socioeconomic stratification, age, gender, specific locations in Colombia, and the missing-person reports. In conclusion, this work reviews what information about missing persons is publicly available and what analyses can be made with it, showing that data mining and face recognition can be useful tools to extract and identify patterns in missing-person data.
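
An illustrative sketch of deriving readable decision rules from tabular missing-person records with a decision tree; the column names, encodings and rows below are invented placeholders, not the 2017 Colombian open dataset.

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

# Invented placeholder records, not real data.
df = pd.DataFrame({
    "age": [15, 34, 8, 62, 23, 41],
    "gender": [0, 1, 0, 1, 0, 1],     # 0 = female, 1 = male (assumed coding)
    "stratum": [1, 3, 2, 5, 1, 2],    # socioeconomic stratification
    "found": [0, 1, 0, 1, 0, 1],      # outcome label
})
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(df[["age", "gender", "stratum"]], df["found"])

# Print the learned splits as human-readable if/else rules.
print(export_text(tree, feature_names=["age", "gender", "stratum"]))
```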

