naive model
Recently Published Documents

TOTAL DOCUMENTS: 44 (last five years: 14)
H-INDEX: 10 (last five years: 2)

2022 ◽  
Author(s):  
Steven Brasell

This research investigates the breakout of security prices from periods of sideways drift known as Triangles. It contributes to the existing literature by conditioning returns on Triangle events, in particular on how momentum traders time their positions, and by using alternative statistical methods to present the results more clearly. Returns are constructed by scanning for Triangle events and determining simulated trader returns from predetermined price levels. These are compared with a naive model consisting of randomly sampled events of comparable measure. Momentum results are modelled with a marked point Poisson process approach used to compare arrival times and profits/losses, and the results are confirmed with a set of 10-day return heuristics whose confidence intervals are defined by bootstrapping. Applied to CRSP US equity data from 1960 to 2017 inclusive, these methods show a consistent but weak predictable return contribution after Triangle events occur; however, the effect has decreased over time, presumably as the market has become more efficient. While these observed short-term momentum changes in price have likely been compensated to a degree by risk, they show that such patterns have contained forecastable information about US equities. Prices have likely been weakly affected by past prices, but the effect has shrunk to a negligible size as of 2017.
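The bootstrap comparison described above can be sketched roughly as follows. This is a minimal illustration, not the author's code: the 10-day returns after Triangle events and the randomly sampled "naive" returns are hypothetical arrays, and the percentile method is one common way of defining the confidence interval.

```python
import numpy as np

def bootstrap_mean_ci(returns, n_boot=10_000, alpha=0.05, rng=None):
    """Percentile bootstrap confidence interval for the mean 10-day return."""
    rng = rng or np.random.default_rng(0)
    returns = np.asarray(returns)
    boot_means = np.array([
        rng.choice(returns, size=returns.size, replace=True).mean()
        for _ in range(n_boot)
    ])
    return boot_means.mean(), np.percentile(boot_means, [100 * alpha / 2,
                                                         100 * (1 - alpha / 2)])

# Hypothetical 10-day returns after Triangle events vs. randomly sampled events.
triangle_returns = np.random.default_rng(1).normal(0.004, 0.05, size=500)
naive_returns = np.random.default_rng(2).normal(0.000, 0.05, size=500)

for label, r in [("Triangle", triangle_returns), ("Naive", naive_returns)]:
    mean, (lo, hi) = bootstrap_mean_ci(r)
    print(f"{label}: mean={mean:.4f}, 95% CI=({lo:.4f}, {hi:.4f})")
```

If the Triangle interval sits above the naive interval, the event carries forecastable information; overlapping intervals correspond to the weak or negligible effect reported for the later sample years.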




2021 ◽  
Author(s):  
Kimberly Robasky ◽  
Raphael Kim ◽  
Hong Yi ◽  
Hao Xu ◽  
Bokan Bao ◽  
...  

Background: Predicting outcomes in human genetic studies is difficult because the number of variables (genes) is often much larger than the number of observations (human subject tissue samples). We investigated means of improving model performance on the types of under-constrained problems that are typical in human genetics, where the genes (features) are strongly correlated and may exceed 10,000 while the number of study participants (observations) may be limited to under 1,000. Methods: We created 'train', 'validate' and 'test' datasets from 240 microarray observations from 127 subjects diagnosed with autism spectrum disorder (ASD) and 113 'typically developing' (TD) subjects. We trained a neural network model (the 'naive' model) on 10,422 genes using the 'train' dataset, composed of 70 ASD and 65 TD subjects, and we restricted the model to one fully connected hidden layer to minimize the number of trainable parameters, including a drop-out layer to further thin the network. We experimented with alternative network architectures, tuned the hyperparameters using the 'validate' dataset, and performed a single, final evaluation using the hold-out 'test' dataset. Next, we trained a neural network model with the identical architecture and identical genes to predict tissue type in GTEx data. We transferred that learning by replacing the top layer of the GTEx model with a layer that predicts ASD outcome, and we retrained on the ASD dataset, again using the identical 10,422 genes. Findings: The 'naive' neural network model had AUROC = 0.58 for the task of predicting ASD outcomes, which saw a statistically significant 7.8% improvement through the use of transfer learning. Interpretation: We demonstrated that neural network learning can be transferred from models trained on large RNA-Seq gene expression data to a model trained on a small microarray gene expression dataset, with clinical utility for mitigating over-training on small sample sizes. Incidentally, we built a highly accurate classifier of tissue type with which to perform the transfer learning. Author Summary: Image recognition and natural language processing have enjoyed great success in reusing computational efforts and data sources to overcome the problem of over-training a neural network on a limited dataset. Other domains using deep learning, including genomics and clinical applications, have been slower to benefit from transfer learning. Here we demonstrate data preparation and modeling techniques that allow genomics researchers to take advantage of transfer learning in order to increase the utility of limited clinical datasets. We show that the performance of a non-pretrained, 'naive' model can be improved by 7.8% by transferring learning from a highly performant model trained on GTEx data to solve a similar problem.
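The head-swapping step described in the Methods could be sketched as below. This is a hedged illustration, not the authors' code: the hidden-layer size, dropout rate, number of tissue classes, and the arrays `X_gtex`, `y_tissue`, `X_asd`, `y_asd` are all assumptions standing in for the GTEx and ASD expression matrices.

```python
import tensorflow as tf

N_GENES = 10_422        # identical gene set used for both models
HIDDEN_UNITS = 64       # assumed size of the single hidden layer
N_TISSUES = 30          # assumed number of GTEx tissue classes

# 1) Source model: predict tissue type from GTEx expression data.
inputs = tf.keras.Input(shape=(N_GENES,))
hidden = tf.keras.layers.Dense(HIDDEN_UNITS, activation="relu")(inputs)
hidden = tf.keras.layers.Dropout(0.5)(hidden)          # thin the network
tissue_head = tf.keras.layers.Dense(N_TISSUES, activation="softmax")(hidden)
gtex_model = tf.keras.Model(inputs, tissue_head)
gtex_model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# gtex_model.fit(X_gtex, y_tissue, epochs=20)           # hypothetical GTEx arrays

# 2) Transfer: reuse the hidden representation, swap in a binary ASD head.
asd_head = tf.keras.layers.Dense(1, activation="sigmoid", name="asd")(hidden)
asd_model = tf.keras.Model(inputs, asd_head)
asd_model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC(name="auroc")])
# asd_model.fit(X_asd, y_asd, epochs=20)                # hypothetical ASD arrays
```

Because the ASD head is attached to the same hidden tensor, the GTEx-trained weights are reused when the small ASD dataset is fitted, which is the mechanism the abstract credits for the reported AUROC improvement.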


2021 ◽  
Vol 18 (2) ◽  
pp. 155-165
Author(s):  
Mary S. Daugherty ◽  
Thadavillil Jithendranathan ◽  
David O. Vang

This paper uses a Multiple Attribute Decision Making (MADM) model to improve the out-of-sample performance of a naïve asset allocation model. Under certain conditions the naïve model has out-performed other portfolio optimization models, but it has also been shown to increase tail risk. The MADM model uses a set of attributes to rank the assets and is flexible with regard to the attributes used in the ranking process. It assigns a weight to each attribute and uses these weights to rank assets by their desirability for inclusion in a portfolio. Using the MADM model, assets are ranked on the attributes and, unlike the naïve model, only the top 50 percent of assets are included in the portfolio at any point in time. The model is tested on both developed and emerging market stock indices. For developed markets, the MADM model had a 24.04 percent higher return and 53.66 percent less kurtosis than the naïve model. For emerging markets, the MADM model's return is 90.16 percent higher than the naïve model's, with nearly the same kurtosis.
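A rough sketch of the ranking-and-selection step is given below. It is only an illustration under assumed inputs: the attribute names, weights, and scores are hypothetical, and a simple weighted sum of standardized attribute scores stands in for whichever MADM scoring rule the paper actually applies.

```python
import numpy as np
import pandas as pd

# Hypothetical attribute scores for 8 assets (higher is better).
attrs = pd.DataFrame(
    np.random.default_rng(0).normal(size=(8, 3)),
    index=[f"asset_{i}" for i in range(8)],
    columns=["momentum", "low_volatility", "value"],
)
weights = pd.Series({"momentum": 0.5, "low_volatility": 0.3, "value": 0.2})

# Standardize each attribute, then combine with the attribute weights.
z = (attrs - attrs.mean()) / attrs.std()
score = z.mul(weights, axis=1).sum(axis=1)

# Unlike the 1/N naive model, hold only the top 50% of assets, equally weighted.
selected = score.nlargest(len(score) // 2).index
portfolio = pd.Series(1 / len(selected), index=selected)
print(portfolio)
```

Dropping the bottom half of the ranking is what distinguishes this allocation from the naïve 1/N rule, which holds every asset regardless of its attribute scores.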


2020 ◽  
Vol 75 (9-10) ◽  
pp. 549-561
Author(s):  
Christian Beyer ◽  
Vishnu Unnikrishnan ◽  
Robert Brüggemann ◽  
Vincent Toulouse ◽  
Hafez Kader Omar ◽  
...  

Abstract Many current and future applications plan to provide entity-specific predictions, ranging from individualized healthcare applications to user-specific purchase recommendations. In our previous stream-based work on Amazon review data, we showed that error-weighted ensembles that combine entity-centric classifiers, which are trained only on reviews of one particular product (entity), with entity-ignorant classifiers, which are trained on all reviews irrespective of the product, can improve prediction quality. This came at the cost of storing multiple entity-centric models in primary memory, many of which would never be used again because their entities would not receive future instances in the stream. To overcome this drawback and make entity-centric learning viable in these scenarios, we investigated two different methods of reducing the primary memory requirement of our entity-centric approach. Our first method uses the lossy counting algorithm for data streams to identify entities whose instances make up a certain percentage of the total data stream within an error margin. We then store all models that do not fulfil this requirement in secondary memory, from which they can be retrieved if future instances belonging to them arrive later in the stream. The second method replaces entity-centric models with a much more naive model that only stores the past labels and predicts the majority label seen so far. We applied our methods to the previously used Amazon datasets, which contain up to 1.4M reviews, and added two subsets of the Yelp dataset, which contain up to 4.2M reviews. Both methods were successful in reducing the primary memory requirements while still outperforming an entity-ignorant model.
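The first memory-reduction idea rests on the standard lossy counting algorithm, sketched below in a generic form. This is not the authors' implementation; the error parameter, the toy stream of product IDs, and the decision of which entity models stay in primary memory are illustrative assumptions.

```python
import math

class LossyCounter:
    """Manku-Motwani lossy counting over a stream of entity IDs."""

    def __init__(self, epsilon=0.01):
        self.epsilon = epsilon
        self.width = math.ceil(1 / epsilon)   # bucket width
        self.n = 0                            # items seen so far
        self.counts = {}                      # entity -> (count, max_error)

    def add(self, entity):
        self.n += 1
        bucket = math.ceil(self.n / self.width)
        count, err = self.counts.get(entity, (0, bucket - 1))
        self.counts[entity] = (count + 1, err)
        if self.n % self.width == 0:          # prune at bucket boundaries
            self.counts = {e: (c, d) for e, (c, d) in self.counts.items()
                           if c + d > bucket}

    def frequent(self, support):
        """Entities whose frequency may exceed `support` of the stream so far."""
        threshold = (support - self.epsilon) * self.n
        return {e for e, (c, _) in self.counts.items() if c >= threshold}

# Entities reported as frequent would keep their models in primary memory;
# the rest would be serialized to secondary memory in the paper's setup.
counter = LossyCounter(epsilon=0.005)
for product_id in ["p1", "p2", "p1", "p3", "p1"]:   # hypothetical stream
    counter.add(product_id)
print(counter.frequent(support=0.4))
```

The second method needs no model at all: keeping a per-entity label histogram and predicting its current majority is enough to act as the cheap fallback the abstract describes.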


2020 ◽  
Vol 17 (1) ◽  
pp. 117-128
Author(s):  
Arnita Arnita

This study aims to compare methods for forecasting rainfall in Medan using Single Exponential Smoothing (SES), a naive model, and the Seasonal Autoregressive Integrated Moving Average (SARIMA). The data used are rainfall observations covering ten years (2009–2019). From simulations comparing the methods, the best model is SES, with a MAPE (Mean Absolute Percentage Error) of 2.47%, followed by SARIMA(1,0,1)(4,0,3)12 with a MAPE of 2.93%. Both models are highly accurate, as their resulting MAPE values are below 10%.
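A comparison along these lines could be sketched with statsmodels as follows. The synthetic monthly rainfall series, the train/test split, and the smoothing level are placeholders rather than the study's data or fitted parameters; the SARIMA order follows the (1,0,1)(4,0,3)12 reading of the abstract.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import SimpleExpSmoothing
from statsmodels.tsa.statespace.sarimax import SARIMAX

def mape(actual, forecast):
    """Mean Absolute Percentage Error in percent."""
    return float(np.mean(np.abs((actual - forecast) / actual)) * 100)

# Hypothetical monthly rainfall series standing in for the Medan data.
idx = pd.date_range("2009-01", periods=132, freq="MS")
rng = np.random.default_rng(0)
rain = pd.Series(200 + 80 * np.sin(2 * np.pi * idx.month / 12)
                 + rng.normal(0, 20, len(idx)), index=idx)

train, test = rain[:-12], rain[-12:]

# Naive model: repeat the last observed value.
naive_fc = pd.Series(train.iloc[-1], index=test.index)

# Single Exponential Smoothing (smoothing level chosen for illustration only).
ses_fc = SimpleExpSmoothing(train).fit(smoothing_level=0.2,
                                       optimized=False).forecast(12)

# SARIMA(1,0,1)(4,0,3)12 as read from the abstract.
sarima_fc = SARIMAX(train, order=(1, 0, 1),
                    seasonal_order=(4, 0, 3, 12)).fit(disp=False).forecast(12)

for name, fc in [("Naive", naive_fc), ("SES", ses_fc), ("SARIMA", sarima_fc)]:
    print(f"{name}: MAPE = {mape(test, fc):.2f}%")
```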


Energies ◽  
2020 ◽  
Vol 13 (11) ◽  
pp. 2830
Author(s):  
David Domínguez-Barbero ◽  
Javier García-González ◽  
Miguel A. Sanz-Bobi ◽  
Eugenio F. Sánchez-Úbeda

The deployment of microgrids could be fostered by control systems that do not require very complex modelling, calibration, prediction and/or optimisation processes. This paper explores the application of Reinforcement Learning (RL) techniques to the operation of a microgrid. The implemented Deep Q-Network (DQN) can learn an optimal policy for operating the elements of an isolated microgrid, based on the agent-environment interaction when particular operation actions are taken on the microgrid components. In order to facilitate the scaling-up of this solution, the algorithm relies exclusively on historical data from past events, and therefore it does not require forecasts of demand or renewable generation. The objective is to minimise the cost of operating the microgrid, including the penalty for non-served power. The paper analyses the effect of considering different definitions of the system state by expanding the set of variables that define it. The obtained results are very satisfactory, as can be concluded from their comparison with the perfect-information optimal operation computed with a traditional optimisation model and with a naive model.
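A bare-bones DQN update of the kind the abstract describes is sketched below. It is a generic illustration rather than the paper's controller: the state variables (e.g. battery level, demand, renewable output), the discrete action set, and all hyperparameters are assumptions.

```python
import random
import torch
import torch.nn as nn

STATE_DIM = 4    # e.g. battery level, demand, renewable output, hour (assumed)
N_ACTIONS = 3    # e.g. charge battery, discharge battery, run generator (assumed)
GAMMA = 0.99

q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                      nn.Linear(64, N_ACTIONS))
target_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                           nn.Linear(64, N_ACTIONS))
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def select_action(state, epsilon=0.1):
    """Epsilon-greedy choice over the discrete microgrid actions."""
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(q_net(state).argmax().item())

def dqn_update(batch):
    """One gradient step on a batch of (state, action, reward, next_state)."""
    states, actions, rewards, next_states = batch
    q_sa = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = rewards + GAMMA * target_net(next_states).max(dim=1).values
    loss = nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```

Because the transitions come from a replay buffer of historical events, no demand or renewable-generation forecast is needed, which is the scaling advantage the abstract emphasises; the reward would encode operating cost plus the non-served-power penalty.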


Author(s):  
Hamza Ali Imran ◽  
Saad Wazir ◽  
Ahmed Jamal Ikram ◽  
Ataul Aziz Ikram ◽  
Hanif Ullah ◽  
...  
Keyword(s):  

Author(s):  
Michael E. Peskin

This chapter works out the theory of electron-positron annihilation to muon pairs as a model for electron-positron annihilation to quarks. It explains that this naive model provides a good description of the observed properties of electron-positron annihilation to hadrons.
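The comparison behind this statement is usually summarized by the ratio of the hadronic to the muonic cross section; the standard naive-parton-model expression (supplied here for context, not quoted in the summary) is

R = \frac{\sigma(e^+ e^- \to \text{hadrons})}{\sigma(e^+ e^- \to \mu^+ \mu^-)} \approx N_c \sum_q Q_q^2 ,

where the sum runs over the quark flavours light enough to be produced at the given centre-of-mass energy, Q_q is the quark electric charge, and N_c = 3 counts the colours.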

