Beautiful Fractals as a Crystal Ball for Financial Markets? - Investment Decision Support System Based on Image Recognition Using Artificial Intelligence

2020
Vol 14 (2)
pp. 27-44
Author(s):  
Benjamin M. Abdel-Karim

The work of Mandelbrot develops a basic understanding of fractals, and the artwork of Jackson Pollock reveals the beauty of fractal geometry. The pattern of recurring structures is also reflected in share prices; Mandelbrot himself speaks of the fractal heart of the financial markets. Previous research has shown the potential of image recognition. This paper presents the possibility of using the structure-recognition capability of modern machine learning methods to make forecasts based on fractal price-curve information. We generate training data from real and simulated data, represent these data as images, and use them to train a special artificial neural network. Subsequently, real data are presented to the network for prediction. The results show that forecasting time series from stock price images delivers promising results compared to a benchmark. This paper makes two essential contributions to research. From a theoretical point of view, it shows that fractal geometry can serve as a means of legitimation for technical analysis. From a practical point of view, it shows that highly developed machine learning methods are able to recognize patterns in data through appropriate data transformation, and that models such as the random walk have informational content that can be used to train machine learning models.
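
The abstract does not publish the image-encoding pipeline; as a minimal sketch of the general idea (not the authors' exact method), the following Python snippet rasterizes a simulated random-walk price series into a binary chart image of the kind a convolutional network could be trained on. All names and dimensions are illustrative assumptions.

```python
import numpy as np

def series_to_image(prices: np.ndarray, height: int = 64) -> np.ndarray:
    """Rasterize a 1-D price series into a binary (height x length) chart image.

    One column per time step; a single white pixel marks the min-max
    normalized price level in that column.
    """
    norm = (prices - prices.min()) / (prices.max() - prices.min() + 1e-12)
    rows = ((1.0 - norm) * (height - 1)).astype(int)  # row 0 is the top of the image
    img = np.zeros((height, len(prices)), dtype=np.uint8)
    img[rows, np.arange(len(prices))] = 1
    return img

# A simulated random-walk sample, mirroring the paper's use of simulated data.
rng = np.random.default_rng(0)
walk = 100 + np.cumsum(rng.normal(0, 1, size=128))
image = series_to_image(walk)
print(image.shape)  # (64, 128): an input a convolutional network could be trained on
```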

Mathematics
2021
Vol 9 (16)
pp. 1835
Author(s):  
Antonio Barrera
Patricia Román-Román
Francisco Torres-Ruiz

A joint and unified vision of stochastic diffusion models associated with the family of hyperbolastic curves is presented. The motivation behind this approach stems from the fact that all hyperbolastic curves satisfy a linear differential equation of Malthusian type. By virtue of this, and by adding multiplicative noise to said ordinary differential equation, a diffusion process may be associated with each curve, whose mean function is that curve. Inference in the resulting processes is presented jointly, as well as the strategies developed to obtain the initial solutions needed for the numerical resolution of the system of equations resulting from the application of the maximum likelihood method. The common perspective presented is especially useful for implementing the procedures needed to fit the models to real data. Some examples based on simulated data support the suitability of the development described in the present paper.
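
The construction can be illustrated concretely (our notation, with a constant diffusion coefficient σ for simplicity; the Malthusian rate h(t) varies by hyperbolastic curve):

```latex
% Malthusian-type linear ODE satisfied by the curve f:
f'(t) = h(t)\, f(t)
\quad\Longrightarrow\quad
f(t) = f(t_0)\exp\!\left(\int_{t_0}^{t} h(s)\,\mathrm{d}s\right).
% Adding multiplicative noise yields the It\^o diffusion
\mathrm{d}X(t) = h(t)\,X(t)\,\mathrm{d}t + \sigma\,X(t)\,\mathrm{d}W(t),
\qquad X(t_0) = f(t_0),
% whose mean satisfies the same linear ODE (the It\^o integral has zero mean):
\mathbb{E}[X(t)] = f(t_0)\exp\!\left(\int_{t_0}^{t} h(s)\,\mathrm{d}s\right) = f(t).
```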


2020
pp. 105971231989648
Author(s):  
David Windridge
Henrik Svensson
Serge Thill

We consider the benefits of dream mechanisms – that is, the ability to simulate new experiences based on past ones – in a machine learning context. Specifically, we are interested in learning for artificial agents that act in the world, and operationalize “dreaming” as a mechanism by which such an agent can use its own model of the learning environment to generate new hypotheses and training data. We first show that it is not necessarily a given that such a data-hallucination process is useful, since it can easily lead to a training set dominated by spurious imagined data until an ill-defined convergence point is reached. We then analyse a notably successful implementation of a machine learning-based dreaming mechanism by Ha and Schmidhuber (Ha, D., & Schmidhuber, J. (2018). World models. arXiv e-prints, arXiv:1803.10122). On that basis, we then develop a general framework by which an agent can generate simulated data to learn from in a manner that is beneficial to the agent. This, we argue, then forms a general method for an operationalized dream-like mechanism. We finish by demonstrating the general conditions under which such mechanisms can be useful in machine learning, wherein the implicit simulator inference and extrapolation involved in dreaming act without reinforcing inference error even when inference is incomplete.
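
The paper's framework is abstract; a classical, compact instantiation of the same "dreaming" idea is Dyna-style planning, in which an agent replays imagined transitions sampled from its own learned model. The toy chain environment and all names in the Python sketch below are our assumptions, not the authors' implementation.

```python
import random
from collections import defaultdict

# Toy 5-state chain world: actions move left/right, reward 1 at the right end.
N_STATES, ACTIONS, GOAL = 5, (-1, +1), 4

def step(s, a):
    s2 = min(max(s + a, 0), N_STATES - 1)
    return s2, (1.0 if s2 == GOAL else 0.0)

Q = defaultdict(float)   # state-action values
model = {}               # the agent's learned world model: (s, a) -> (s', r)
alpha, gamma, eps, n_dreams = 0.1, 0.95, 0.1, 20
rng = random.Random(0)

s = 0
for _ in range(500):
    # Act in the real environment (epsilon-greedy).
    a = rng.choice(ACTIONS) if rng.random() < eps else max(ACTIONS, key=lambda x: Q[(s, x)])
    s2, r = step(s, a)
    Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
    model[(s, a)] = (s2, r)  # refine the world model from real experience

    # "Dream": learn from imagined transitions sampled from the learned model.
    for _ in range(n_dreams):
        (ds, da), (ds2, dr) = rng.choice(list(model.items()))
        Q[(ds, da)] += alpha * (dr + gamma * max(Q[(ds2, b)] for b in ACTIONS) - Q[(ds, da)])

    s = 0 if s2 == GOAL else s2

print(max(ACTIONS, key=lambda a: Q[(0, a)]))  # the agent has learned to move right (+1)
```

Note how the dreamed updates draw only on what the model has actually seen; the paper's analysis concerns precisely when such imagined data helps rather than amplifies inference error.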


Author(s):  
Marina Paolanti
Emanuele Frontoni
Adriano Mancini
Roberto Pierdicca
Primo Zingaretti

A mix-up is a phenomenon in which a tablet or capsule ends up in the wrong package. It is a serious problem because mixing different products in the same package can be dangerous for consumers, who may take the incorrect product or receive an unintended ingredient. The consequences can be severe: overdose, interaction with other medications a consumer may be taking, or an allergic reaction. Manufacturers cannot guarantee the contents of the packages and are therefore highly exposed to the risk that users will rightly seek compensation for damages caused by a mix-up. The aim of this work is the identification of mix-up events through a machine learning approach based on data coming from different embedded systems installed in the manufacturing facilities and from the information system, in order to implement integrated policies for data analysis and sensor fusion that lead to the detection and rejection of non-compliant pieces. In this field, two types of approaches from the point of view of embedded sensors (optical and NIR vision, and interferometry) are analyzed, focusing in particular on data processing and classification in advanced manufacturing scenarios. Results are presented for a simulated scenario that uses pre-recorded real data to test, at a preliminary stage, the effectiveness and novelty of the proposed approach.
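
The abstract does not detail the fusion step; one common pattern, sketched below under our own assumptions (synthetic data, scikit-learn, hypothetical feature names), is feature-level fusion: concatenate features extracted from the optical and NIR streams and train a single classifier on the joint vector.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 1000
optical = rng.normal(size=(n, 8))                # e.g., shape/color features per piece
nir = rng.normal(size=(n, 4))                    # e.g., NIR spectral features per piece
y = (optical[:, 0] + nir[:, 0] > 0).astype(int)  # 1 = non-compliant piece (synthetic rule)

X = np.hstack([optical, nir])                    # feature-level sensor fusion
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")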


2021
Author(s):  
Jaydip Sen
Sidra Mehtab
Abhishek Dutta

Prediction of stock prices has long been an important area of research. While supporters of the efficient market hypothesis believe that it is impossible to predict stock prices accurately, there are formal propositions demonstrating that accurate modeling and design of appropriate variables may lead to models with which stock prices and stock price movement patterns can be predicted very accurately. Researchers have also worked on technical analysis of stocks with the goal of identifying patterns in stock price movements using advanced data mining techniques. In this work, we propose a hybrid modeling approach for stock price prediction, building different machine learning and deep learning-based models. For our study, we use NIFTY 50 index values of the National Stock Exchange (NSE) of India from December 29, 2014 to July 31, 2020. We build eight regression models using training data consisting of NIFTY 50 index records from December 29, 2014 to December 28, 2018. Using these regression models, we predict the open values of NIFTY 50 for the period from December 31, 2018 to July 31, 2020. We then augment the predictive power of our forecasting framework by building four deep learning-based regression models using long short-term memory (LSTM) networks with a novel approach of walk-forward validation. Using grid search, the hyperparameters of the LSTM models are optimized to ensure that the validation loss stabilizes as the number of epochs increases and the validation accuracy converges. We exploit the power of LSTM regression models in forecasting future NIFTY 50 open values using four different models that differ in their architecture and in the structure of their input data. Extensive results are presented on various metrics for all the regression models. The results clearly indicate that the LSTM-based univariate model that uses one week of prior data to predict the next week's open value of the NIFTY 50 time series is the most accurate model.
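
Walk-forward validation re-fits the model as the forecast origin advances, so each value is predicted only from data that precedes it. A minimal sketch, using a synthetic series and plain linear regression as a stand-in for the paper's LSTM (window sizes and names are our assumptions):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
opens = 100 + np.cumsum(rng.normal(0, 1, size=300))  # synthetic daily open values
WEEK = 5  # one trading week used as the input window

errors = []
for t in range(4 * WEEK, len(opens) - 1, WEEK):
    # Fit only on history strictly before the forecast origin t.
    X_hist = np.lib.stride_tricks.sliding_window_view(opens[:t], WEEK)[:-1]
    y_hist = opens[WEEK:t]
    model = LinearRegression().fit(X_hist, y_hist)
    pred = model.predict(opens[t - WEEK:t].reshape(1, -1))[0]  # forecast opens[t]
    errors.append(abs(pred - opens[t]))

print(f"walk-forward MAE: {np.mean(errors):.3f}")
```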


Author(s):  
A. A. Meldo
L. V. Utkin
T. N. Trofimova
M. A. Ryabinin
V. M. Moiseenko
...  

The relevance of developing an intelligent automated diagnostic system (IADS) for lung cancer (LC) detection stems from the social significance of this disease and its leading position among all cancers. In theory, an IADS can be used both at the screening stage and at the stage of refined LC diagnosis. Recent approaches to training IADS do not take into account the clinical and radiological classification or the peculiarities of the clinical forms of LC used by the medical community, which creates difficulties and obstacles to using the available IADS. The authors are of the opinion that the closeness of a developed IADS to the "doctor's logic" contributes to better reproducibility and interpretability of its results. Most IADS described in the literature have been developed on the basis of neural networks, which have several disadvantages that affect reproducibility. This paper proposes a composite algorithm using machine learning methods such as Deep Forest and Siamese neural networks, which can be regarded as a more efficient approach for dealing with a small amount of training data and optimal from the reproducibility point of view. The open datasets used for training IADS include annotated objects that in some cases are not confirmed morphologically. The paper describes the LIRA dataset, developed from the diagnostic results of the St. Petersburg Clinical Research Center of Specialized Types of Medical Care (Oncology), which includes only computed tomograms of patients with a verified diagnosis. The paper considers the stages of the machine learning process based on shape features and internal-structure features, as well as a newly developed system for the differential diagnosis of LC based on Siamese neural networks. A new approach to feature dimension reduction is also presented, aimed at more efficient and faster learning of the system.
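
The paper's Siamese architecture is not specified here; as a generic sketch (PyTorch, contrastive loss, toy feature vectors — all dimensions and names are our assumptions), a Siamese network learns an embedding in which cases of the same diagnostic class lie close together:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Shared embedding network applied to both members of a pair."""
    def __init__(self, in_dim: int = 32, emb_dim: int = 8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, emb_dim))

    def forward(self, x):
        return self.net(x)

def contrastive_loss(z1, z2, same, margin: float = 1.0):
    # same = 1 for pairs of the same diagnostic class, 0 otherwise.
    d = F.pairwise_distance(z1, z2)
    return (same * d.pow(2) + (1 - same) * (margin - d).clamp(min=0).pow(2)).mean()

enc = Encoder()
opt = torch.optim.Adam(enc.parameters(), lr=1e-3)
for _ in range(100):
    x1, x2 = torch.randn(16, 32), torch.randn(16, 32)  # toy feature vectors for scan pairs
    same = torch.randint(0, 2, (16,)).float()          # toy pair labels
    loss = contrastive_loss(enc(x1), enc(x2), same)
    opt.zero_grad()
    loss.backward()
    opt.step()
print(float(loss))
```

Because training operates on pairs, a small labeled set yields many training examples, which matches the paper's motivation of coping with scarce training data.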


2019
Vol 30 (1)
pp. 45-66
Author(s):  
Anette Rantanen
Joni Salminen
Filip Ginter
Bernard J. Jansen

Purpose: User-generated social media comments can be a useful source of information for understanding online corporate reputation. However, the manual classification of these comments is challenging due to their high volume and unstructured nature. The purpose of this paper is to develop a classification framework and machine learning model to overcome these limitations.

Design/methodology/approach: The authors create a multi-dimensional classification framework for online corporate reputation that includes six main dimensions synthesized from prior literature: quality, reliability, responsibility, successfulness, pleasantness and innovativeness. To evaluate the classification framework's performance on real data, the authors retrieve 19,991 social media comments about two Finnish banks and use a convolutional neural network (CNN) to automatically classify the comments based on manually annotated training data.

Findings: After parameter optimization, the neural network achieves an accuracy between 52.7 and 65.2 percent on real-world data, which is reasonable given the high number of classes. The findings also indicate that prior work has not captured all the facets of online corporate reputation.

Practical implications: The authors provide a comprehensive classification framework for online corporate reputation, which companies and organizations operating in various domains can use. Moreover, the authors demonstrate that a limited amount of training data can yield a satisfactory multiclass classifier when using a CNN.

Originality/value: This is the first attempt at automatically classifying online corporate reputation using an online-specific classification framework.
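
The paper's exact architecture is not given here; a minimal text-CNN sketch in Keras (vocabulary size, sequence length, and the six output classes mirroring the framework's dimensions are our assumptions):

```python
import numpy as np
import tensorflow as tf

VOCAB, SEQ_LEN, N_CLASSES = 20000, 100, 6  # assumed sizes; six reputation dimensions

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB, 128),
    tf.keras.layers.Conv1D(64, 5, activation="relu"),  # n-gram-like filters over tokens
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Toy stand-in for tokenized, manually annotated comments.
X = np.random.randint(0, VOCAB, size=(256, SEQ_LEN))
y = np.random.randint(0, N_CLASSES, size=(256,))
model.fit(X, y, epochs=1, verbose=0)
```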


2021
Vol 40 (1) ◽  
Author(s):  
Tuomas Koskinen
Iikka Virkkunen
Oskar Siljama
Oskari Jessen-Juhler

Previous research (Li et al., Understanding the disharmony between dropout and batch normalization by variance shift. CoRR abs/1801.05134 (2018). http://arxiv.org/abs/1801.05134) has shown the plausibility of using a modern deep convolutional neural network to detect flaws from phased-array ultrasonic data. This brings the repeatability and effectiveness of automated systems to complex ultrasonic signal evaluation, previously done exclusively by human inspectors. The major breakthrough was to use virtual flaws to generate ample flaw data for teaching the algorithm. This enabled the use of raw ultrasonic scan data for detection and leveraged some of the approaches used in machine learning for image recognition. Unlike traditional image recognition, training data for ultrasonic inspection is scarce. While virtual flaws allow us to broaden the data considerably, original flaws with a proper flaw-size distribution are still required. The same is, of course, true for training human inspectors. The training of human inspectors is usually done with easily manufacturable flaws such as side-drilled holes and EDM notches. While the difference between these easily manufactured artificial flaws and real flaws is obvious, human inspectors still manage to train with them and perform well in real inspection scenarios. In the present work, we use a modern deep convolutional neural network to detect flaws from phased-array ultrasonic data and compare the results achieved from different training data obtained from various artificial flaws. The model demonstrated good generalization capability toward flaw sizes larger than those in the original training data, and the minimum flaw size in the data set affects the $a_{90/95}$ value. This work also demonstrates how different artificial flaws (solidification cracks, EDM notches and simple simulated flaws) generalize differently.
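
$a_{90/95}$ is the flaw size detected with 90% probability at 95% confidence, read from a probability-of-detection (POD) curve. A rough sketch of the usual hit/miss estimation (logistic POD model, bootstrap for the confidence bound; the synthetic data and names are our assumptions, not the paper's procedure):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
size = rng.uniform(0.5, 5.0, 400)  # flaw sizes, mm
# Synthetic hit/miss detections: detection probability rises with flaw size.
hit = (rng.random(400) < 1.0 / (1.0 + np.exp(-3.0 * (size - 2.0)))).astype(int)

def a90(sizes, hits):
    """Flaw size at which a fitted logistic POD curve reaches 90% detection."""
    m = LogisticRegression().fit(sizes.reshape(-1, 1), hits)
    b0, b1 = m.intercept_[0], m.coef_[0, 0]
    return (np.log(0.9 / 0.1) - b0) / b1  # solve POD(a) = 0.9

# Bootstrap the 95% upper confidence bound on a90 to approximate a_{90/95}.
boot = []
for _ in range(200):
    i = rng.integers(0, len(size), len(size))  # resample with replacement
    boot.append(a90(size[i], hit[i]))
print(f"a90 = {a90(size, hit):.2f} mm, a90/95 ≈ {np.percentile(boot, 95):.2f} mm")
```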


Author(s):  
V. Serbin
U. Zhenisserov

Since the stock market is one of the most important areas for investors, stock market price trend prediction remains a hot subject for researchers in both the financial and technical fields. Lately, much work has been done on machine learning algorithms for analyzing price patterns and predicting stock prices and index changes. Currently, machine-learning methods are receiving much attention for predicting prices in financial markets. The main goal of the current research is to improve and develop a system for predicting future prices in financial markets with higher accuracy using machine-learning methods. Precisely predicting stock market returns is a very difficult task due to the volatile and non-linear nature of financial stock markets. With the advent of artificial intelligence and machine learning, forecasting methods have become more effective at predicting stock prices. In this article, we review the machine learning techniques that have been used to trade stocks to predict price changes before an actual rise or fall in the stock price occurs. In particular, the article discusses in detail the use of support vector machines, linear regression, prediction using decision stumps, classification using the nearest-neighbor algorithm, and the advantages and disadvantages of each method. The paper introduces parameters and variables that can be used to recognize stock price patterns that might be useful in future stock forecasting, and shows how boosting can be combined with other learning algorithms to improve the accuracy of such forecasting systems.
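
As a hedged illustration of how these methods line up in practice (scikit-learn on synthetic up/down labels; the features, labels and settings are our assumptions, not the article's experiments):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))              # stand-in technical indicators
y = (X[:, :3].sum(axis=1) > 0).astype(int)  # synthetic next-day up/down label

models = {
    "SVM": SVC(),
    "k-NN": KNeighborsClassifier(),
    "decision stump": DecisionTreeClassifier(max_depth=1),
    # AdaBoost's default weak learner is a depth-1 stump: boosting combined
    # with a simple base algorithm, as discussed above.
    "AdaBoost (stumps)": AdaBoostClassifier(),
}
for name, m in models.items():
    print(name, round(cross_val_score(m, X, y, cv=5).mean(), 3))
```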


2021
Vol 12 (1)
pp. 76
Author(s):  
Juan A. Marin-Garcia
Angel Ruiz
Maheut Julien
Jose P. Garcia-Sabater

<p class="Abstract">This paper presents the generation of a plausible data set related to the needs of COVID-19 patients with severe or critical symptoms. Possible illness’ stages were proposed within the context of medical knowledge as of January 2021. The parameters chosen in this data set were customized to fit the population data of the Valencia region (Spain) with approximately 2.5 million inhabitants. They were based on the evolution of the pandemic between September 2020 and March 2021, a period that included two complete waves of the pandemic.</p><p class="Abstract">Contrary to expectation and despite the European and national transparency laws (BOE-A2013-12887, 2013; European Parliament and Council of the European Union, 2019), the actual COVID-19 pandemic-related data, at least in Spain, took considerable time to be updated and made available (usually a week or more). Moreover, some relevant data necessary to develop and validate hospital bed management models were not publicly accessible. This was either because these data were not collected, because public agencies failed to make them public (despite having them indexed in their databases), the data were processed within indicators and not shown as raw data, or they simply published the data in a format that was difficult to process (e.g., PDF image documents versus CSV tables). Despite the potential of hospital information systems, there were still data that were not adequately captured within these systems.</p><p class="Abstract">Moreover, the data collected in a hospital depends on the strategies and practices specific to that hospital or health system. This limits the generalization of "real" data, and it encourages working with "realistic" or plausible data that are clean of interactions with local variables or decisions (Gunal, 2012; Marin-Garcia et al., 2020). Besides, one can parameterize the model and define the data structure that would be necessary to run the model without delaying till the real data become available. Conversely, plausible data sets can be generated from publicly available information and, later, when real data become available, the accuracy of the model can be evaluated (Garcia-Sabater and Maheut, 2021).</p><p class="Abstract">This work opens lines of future research, both theoretical and practical. From a theoretical point of view, it would be interesting to develop machine learning tools that, by analyzing specific data samples in real hospitals, can identify the parameters necessary for the automatic prototyping of generators adapted to each hospital. Regarding the lines of research applied, it is evident that the formalism proposed for the generation of sound patients is not limited to patients affected by SARS-CoV-2 infection. The generation of heterogeneous patients can represent the needs of a specific population and serve as a basis for studying complex health service delivery systems.</p><p class="Abstract"> </p><p class="Abstract"> </p>

