Interactive search algorithm of artificial intelligence for household classification on smart electricity meter data

Author(s):  
M.S. Anbarasi ◽  
M. Suresh
2020 ◽  
pp. 000370282097751
Author(s):  
Xin Wang ◽  
Xia Chen

Many spectra have a polynomial-like baseline. Iterative polynomial fitting (IPF) is one of the most popular methods for baseline correction of these spectra. However, the baseline estimated by IPF may have substantial error when the spectrum contains significantly strong peaks or has strong peaks located at its endpoints. First, IPF uses a temporary baseline estimated from the current spectrum to identify peak data points. If the current spectrum contains strong peaks, the temporary baseline deviates substantially from the true baseline; good baseline data points may then be mistakenly identified as peak data points and artificially re-assigned a low value. Second, if a strong peak is located at an endpoint of the spectrum, the endpoint region of the estimated baseline may have significant error due to overfitting. This study proposes a search-algorithm-based baseline correction method (SA) that compresses the raw spectrum into a dataset with a small number of data points and then converts peak removal into a search problem in artificial intelligence (AI): an objective function is minimized by deleting peak data points. First, the raw spectrum is smoothed by the moving-average method to reduce noise and then divided into dozens of unequally spaced sections based on Chebyshev nodes. The minimum point of each section is then collected to form the dataset on which the search algorithm removes peaks. SA uses the mean absolute error (MAE) as the objective function because of its sensitivity to overfitting and its speed of calculation. The baseline correction performance of SA is compared with three other methods: the Lieber and Mahadevan–Jansen method, the adaptive iteratively reweighted penalized least squares method, and the improved asymmetric least squares method. Simulated and real FTIR and Raman spectra with polynomial-like baselines are used in the experiments. Results show that for these spectra, the baseline estimated by SA has smaller error than those estimated by the three other methods.
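The abstract's pipeline (moving-average smoothing, Chebyshev-node sections, section minima, then a search that deletes peak points to minimize MAE) can be sketched in NumPy. This is only one plausible reading of the search step: the greedy deletion rule, polynomial degree, window width, and section count below are assumptions, not the paper's actual settings.

```python
import numpy as np

def moving_average(y, w=5):
    """Edge-padded moving average to suppress noise (window w is assumed)."""
    ypad = np.pad(y, w // 2, mode="edge")
    return np.convolve(ypad, np.ones(w) / w, mode="valid")

def chebyshev_section_minima(x, y, n_sections=20):
    """Split the spectrum into unequally spaced sections whose edges follow
    Chebyshev nodes (denser near the endpoints) and keep each section's minimum."""
    a, b = x[0], x[-1]
    k = np.arange(n_sections + 1)
    edges = np.sort((a + b) / 2 + (b - a) / 2 * np.cos(np.pi * k / n_sections))
    xs, ys = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (x >= lo) & (x <= hi)
        if mask.any():
            i = np.argmin(np.where(mask, y, np.inf))  # global index of section min
            xs.append(x[i]); ys.append(y[i])
    return np.array(xs), np.array(ys)

def search_baseline(x, y, degree=3, n_sections=20):
    """Greedy search: repeatedly delete the point whose removal most reduces
    the MAE of a polynomial fit, then evaluate the final fit on all of x."""
    xs, ys = chebyshev_section_minima(x, moving_average(y), n_sections)
    def mae(px, py):
        coef = np.polyfit(px, py, degree)
        return np.mean(np.abs(np.polyval(coef, px) - py))
    current = mae(xs, ys)
    while len(xs) > degree + 2:
        trials = [mae(np.delete(xs, i), np.delete(ys, i)) for i in range(len(xs))]
        best = int(np.argmin(trials))
        if trials[best] >= current:
            break  # no deletion improves the objective; stop searching
        xs, ys = np.delete(xs, best), np.delete(ys, best)
        current = trials[best]
    return np.polyval(np.polyfit(xs, ys, degree), x)
```

On a synthetic spectrum (quadratic baseline plus one strong Gaussian peak), the estimated baseline tracks the true baseline much more closely than the raw spectrum does, since the peak's section minimum is deleted by the search.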


2021 ◽  
Author(s):  
Jon Gustav Vabø ◽  
Evan Thomas Delaney ◽  
Tom Savel ◽  
Norbert Dolle

Abstract: This paper describes the transformational application of Artificial Intelligence (AI) in Equinor's annual well planning and maturation process. Well planning is a complex decision-making process, like many other processes in the industry. There are thousands of choices, conflicting business drivers, considerable uncertainty, and hidden bias. These complexities add up, making good decision making very hard. In this application, AI has been used for automated and unbiased evaluation of the full solution space, with the objective of optimizing the selection of drilling campaigns while taking into account complex issues such as anti-collision with existing wells, drilling hazards, and trade-offs between cost, value, and risk. Designing drillable well trajectories involves a sequence of decisions, which makes the process well suited to AI algorithms. Different solver architectures, or algorithms, can be used to play this game, much as companies such as Google-owned DeepMind develop customized solvers for games like Go and StarCraft. The chosen method is a tree search algorithm with an evolutionary layer on top, providing a good balance between performance (i.e., speed) and exploration capability (i.e., it looks "wide" in the option space). The algorithm has been deployed in a full-stack web-based application that allows users to follow an end-to-end workflow: from defining well trajectory design rules and constraints, to running the AI engine and evaluating results, to optimizing multi-well drilling campaigns based on risk, value, and cost objectives. The full-size paper describes different Norwegian Continental Shelf (NCS) use cases of this AI-assisted well trajectory planning. Results to date indicate significant CAPEX savings potential and step-change improvements in decision speed (months to days) compared to routine manual workflows.
There are very few truly transformative examples of Artificial Intelligence in multidisciplinary workflows. This paper therefore gives a unique insight into how a combination of data science, domain expertise, and end-user feedback can lead to powerful and transformative AI solutions, implemented at scale within an existing organization.
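The solver architecture described above (a tree search with an evolutionary layer on top) can be illustrated on a toy stand-in problem. Everything below is hypothetical: the angle choices, the cost and anti-collision proxies, and the beam width are invented for illustration and bear no relation to Equinor's actual engine.

```python
import random

# Toy stand-in for trajectory design: at each of N_SEGMENTS the path turns
# by one of a few angles; the score mixes a drilling-cost proxy with an
# anti-collision proxy. All constants here are hypothetical.
ANGLES = (-15, 0, 15)
N_SEGMENTS = 6
EXISTING_WELL_HEADING = 30   # heading to stay away from (collision proxy)

def score(plan):
    heading, cost = 0, 0.0
    for a in plan:
        heading += a
        cost += abs(a) * 0.1                                       # turning cost
        cost += max(0, 10 - abs(heading - EXISTING_WELL_HEADING))  # risk proxy
    return cost

def tree_search(beam=8):
    """Tree search: expand every option at each depth and keep the cheapest
    `beam` partial plans (a beam-limited breadth-first search)."""
    frontier = [()]
    for _ in range(N_SEGMENTS):
        frontier = sorted((p + (a,) for p in frontier for a in ANGLES),
                          key=score)[:beam]
    return frontier

def evolve(seeds, generations=20, rng=None):
    """Evolutionary layer: mutate the tree-search seeds to look 'wider' in
    the option space than the beam allowed, keeping the best plans found."""
    rng = rng or random.Random(0)
    pop = list(seeds)
    for _ in range(generations):
        child = list(rng.choice(pop))
        child[rng.randrange(N_SEGMENTS)] = rng.choice(ANGLES)
        pop.append(tuple(child))
        pop = sorted(set(pop), key=score)[:len(seeds)]
    return pop[0]
```

Because the evolutionary layer always retains the incumbent best plan, its result is never worse than the best tree-search seed; the mutations only widen exploration.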


2019 ◽  
Vol 2 (2) ◽  
pp. 114
Author(s):  
Insidini Fawwaz ◽  
Agus Winarta

<p class="8AbstrakBahasaIndonesia"><em>Games have the basic meaning of games, games in this case refer to the notion of intellectual agility. In its application, a Game certainly requires an AI (Artificial Intelligence), and the AI used in the construction of this police and thief game is the dynamic programming algorithm. This algorithm is a search algorithm to find the shortest route with the minimum cost, algorithm dynamic programming searches for the shortest route by adding the actual distance to the approximate distance so that it makes it optimum and complete. Police and thief is a game about a character who will try to run from </em><em>police.</em><em> The genre of this game is arcade, built with microsoft visual studio 2008, the AI used is the </em><em>Dynamic Programming</em> <em>algorithm which is used to search the path to attack players. The results of this test are police in this game managed to find the closest path determined by the </em><em>Dynamic Programming</em> <em>algorithm to attack players</em></p>


2020 ◽  
Vol 26 (8) ◽  
pp. 100-111
Author(s):  
V. Blanutsa ◽  

The national strategy for the development of artificial intelligence for the period up to 2030 sets the task of significantly increasing the number of scientific articles by Russian scientists in this field. To do this, it is necessary to understand the priorities, problems, and prospects of scientific research carried out worldwide. However, there is currently no generalizing work on regional economic studies that use artificial intelligence algorithms. The object of the study was therefore the world array of scientific publications on regional economic research, and the subject was the set of articles on the use of artificial intelligence algorithms in such studies. The purpose of the work was to generalize world experience. To select the relevant publications, a self-organizing semantic search algorithm was developed, based on ideas from content analysis, expert systems, and machine learning. The search was carried out in the Scopus database, and about a hundred articles were identified. A brief description of ten artificial intelligence algorithms used in regional economic research is given. Analysis of world experience revealed five features: the algorithms are not used to solve all research problems; they are not aimed at creating a universal autonomous artificial intelligence system; they increasingly focus on methods outside artificial neural networks; they are rarely used in combination; and in domestic works they are less diverse than in competing countries.
It is proposed to focus efforts on identifying economic regions where production, transport, and service systems of artificial intelligence operate; identifying territorial digital platforms; analyzing the intensity of gravitational interaction of geographically distributed socio-economic objects through 5G and 6G telecommunication networks; assessing the direction and volume of regional information flows; and determining models of the spatial diffusion of artificial intelligence innovations among Russian regions. The accelerated development of these areas with significant government support would allow Russia to achieve a methodological lead over other countries in the field of regional economic research by 2030.
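The abstract does not detail its self-organizing semantic search, so the sketch below is only a plausible toy reading: score each abstract against a seed vocabulary, accept the high scorers, and let accepted abstracts feed new terms into the vocabulary for the next pass. The seed terms, threshold, and term-promotion rule are all hypothetical.

```python
import re
from collections import Counter

# Hypothetical seed vocabulary for the query; the real study's term set
# and scoring are not described in the abstract.
SEED_TERMS = {"regional", "economic", "artificial", "intelligence"}

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

def search(abstracts, seed=SEED_TERMS, threshold=2, passes=2):
    """Self-organizing keyword search: accepted texts enrich the vocabulary."""
    terms = set(seed)
    selected = set()
    for _ in range(passes):
        for i, text in enumerate(abstracts):
            if sum(w in terms for w in tokenize(text)) >= threshold:
                selected.add(i)
        # self-organizing step: frequent long words from accepted texts
        # are promoted into the search vocabulary for the next pass
        counts = Counter(w for i in selected for w in tokenize(abstracts[i]))
        terms |= {w for w, n in counts.items() if n >= 2 and len(w) > 6}
    return sorted(selected)
```

A real system would add the expert-system and machine-learning components the abstract mentions; this sketch only shows the feedback loop that makes the search "self-organizing".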


Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5332
Author(s):  
Carlos A. Duchanoy ◽  
Hiram Calvo ◽  
Marco A. Moreno-Armendáriz

Surrogate Modeling (SM) is often used to reduce the computational burden of time-consuming system simulations. However, continuous advances in Artificial Intelligence (AI) and the spread of embedded sensors have led to the creation of Digital Twins (DT), Design Mining (DM), and Soft Sensors (SS). These methodologies pose a new challenge for surrogate-model generation, since they require elaborate artificial intelligence algorithms and must minimize the number of physical experiments measured. To reduce the number of assessments of a physical system, several adaptive sequential sampling methodologies have been developed; however, they are for the most part limited to Kriging models and Kriging-model-based Monte Carlo simulation. In this paper, we integrate a distinct adaptive sampling methodology into an automated machine learning (AutoML) methodology to help with model selection while minimizing system evaluations and maximizing performance for surrogate models based on artificial intelligence algorithms. In each iteration, the framework uses a grid search algorithm to determine the best candidate models and performs leave-one-out cross-validation to calculate the performance at each sampled point. A Voronoi diagram partitions the sampling region into local cells, and the Voronoi vertices are taken as new candidate points. The performance at the sample points is used to estimate the model's accuracy at the candidate points, so as to select those that will most improve the model's accuracy. The number of candidate models is then reduced. Finally, the performance of the framework is tested on two examples to demonstrate the applicability of the proposed method.
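The sampling loop described above can be illustrated in one dimension, where a Voronoi diagram degenerates nicely: the cell boundaries (the "Voronoi vertices") are simply the midpoints between neighbouring samples. The sketch below pairs that with leave-one-out cross-validation to pick the next sample; the polynomial surrogate and degree are assumptions for illustration, not the paper's model pool.

```python
import numpy as np

def loo_errors(x, y, degree=2):
    """Leave-one-out CV: error at each sample when it is held out of the fit."""
    errs = []
    for i in range(len(x)):
        xf, yf = np.delete(x, i), np.delete(y, i)
        coef = np.polyfit(xf, yf, degree)
        errs.append(abs(np.polyval(coef, x[i]) - y[i]))
    return np.array(errs)

def next_sample(x, y, degree=2):
    """Pick the Voronoi vertex (in 1D: the midpoint between neighbouring
    samples) adjacent to the sample with the worst LOO error."""
    order = np.argsort(x)
    x, y = x[order], y[order]
    mids = (x[:-1] + x[1:]) / 2          # 1D Voronoi cell boundaries
    errs = loo_errors(x, y, degree)
    worst = int(np.argmax(errs))
    if worst == 0:
        return mids[0]                   # worst sample is the left endpoint
    if worst == len(x) - 1:
        return mids[-1]                  # worst sample is the right endpoint
    # otherwise take the midpoint on the side of the worse neighbour
    return mids[worst - 1] if errs[worst - 1] >= errs[worst + 1] else mids[worst]
```

In higher dimensions the same idea holds, with the midpoints replaced by true Voronoi vertices (e.g. via `scipy.spatial.Voronoi`) and the polynomial replaced by whichever candidate model the grid search currently favours.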


Author(s):  
Thiago Castanheira Retes de Sousa ◽  
Rafael Lima de Carvalho

Artificial Intelligence has long been used to design automated agents for games such as Chess, Go, Defense of the Ancients 2, the Snake Game, billiards, and many others. In this work, we present the development and performance evaluation of an automated bot that mimics a real-life player of the RPG game Tibia. The bot is built from a combination of AI techniques, such as the graph search algorithm A*, and computer vision tools such as template matching. Using four algorithms to obtain the player's global position in the game, manage its health and mana, target monsters, and walk through the game world, we developed a fully automated Tibia bot driven by raw input images. We evaluated the agent in three different scenarios, collecting and analyzing metrics such as XP gain, supplies usage, and balance. The simulation results show that the bot produces competitive results according to in-game metrics when compared to human players.
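Of the two techniques the abstract names, template matching is the one that lets a bot locate game elements in raw screen captures. A minimal brute-force version, sketched here with a sum-of-squared-differences score, conveys the idea; production bots would typically use an optimized routine such as OpenCV's `matchTemplate`, and the "minimap marker" framing is only a hypothetical use case.

```python
import numpy as np

def template_match(image, template):
    """Brute-force template matching: slide the template over the image and
    return the top-left offset with the smallest sum of squared differences
    (e.g. the bot asking 'where is the minimap marker on screen?')."""
    ih, iw = image.shape
    th, tw = template.shape
    best, best_pos = float("inf"), None
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            ssd = np.sum((image[r:r + th, c:c + tw] - template) ** 2)
            if ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos
```

Once the player's position is recovered this way, a graph search such as A* can plan the walking route between that position and the target tile.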


2016 ◽  
Vol 9 (3) ◽  
pp. 1
Author(s):  
Oluwatobi, A. Ayilara ◽  
Anuoluwapo, O. Ajayi ◽  
Kudirat, O. Jimoh

Game playing, and the Ayὸ game especially, has been an important topic of research in artificial intelligence, and several machine learning approaches have been applied to it; however, optimizing computing resources remains important for encouraging significant user interest. This study presents a synthetic Ayὸ player implemented using Alpha-beta search and a Learning Vector Quantization network. The board game program was written in Java and MATLAB. The synthetic player was evaluated in terms of win percentage and game length, and proved more efficient than the traditional Alpha-beta search algorithm.
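The Alpha-beta search underlying the synthetic player is standard minimax with pruning: branches the opponent would never allow are cut off without evaluation. A minimal sketch on a hand-built game tree (nested lists with leaf scores, invented here for illustration; a real Ayὸ engine would generate moves from the board state) looks like this:

```python
import math

def alpha_beta(node, alpha, beta, maximizing):
    """Minimax with alpha-beta pruning; the game tree is nested lists and
    each leaf is a static evaluation score."""
    if not isinstance(node, list):
        return node                      # leaf: return its evaluation
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alpha_beta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                    # beta cut-off: minimizer avoids this branch
        return value
    value = math.inf
    for child in node:
        value = min(value, alpha_beta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:
            break                        # alpha cut-off: maximizer avoids this branch
    return value

# Tiny example tree: the maximizer chooses among three opponent replies.
tree = [[3, 5], [2, [9, 1]], [0, -1]]
```

In the hybrid player described above, the Learning Vector Quantization network would supply the leaf evaluations that plain Alpha-beta obtains from a hand-crafted scoring function.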

