Deep Stock Prediction

2019 ◽  
Vol 8 (4) ◽  
pp. 2555-2558

The ongoing development of deep learning has enabled trading algorithms to predict stock price movements more accurately. Unfortunately, there is a significant gap in the real-world deployment of this breakthrough. For example, professional traders have accumulated numerous trading rules over their long careers, the rationale of which they understand well. Deep learning models, on the other hand, have been hardly interpretable. This paper presents DeepClue, a system built to bridge text-based deep learning models and end users by visually interpreting the key factors learned in the stock price prediction model. We make three contributions in DeepClue. First, by designing the deep neural network architecture for interpretation and applying an algorithm to extract salient predictive factors, we provide a useful case on what can be interpreted out of the prediction model for end users. Second, by exploring hierarchies over the extracted factors and displaying these factors in an interactive, hierarchical visualization interface, we shed light on how to effectively communicate the interpreted model to end users. In particular, the interpretation separates the predictable from the unpredictable in stock prediction through the use of intercept model parameters and a risk visualization design. Third, we evaluate the integrated visualization system through two case studies in predicting the stock price with online financial news and company-related tweets from social media. Quantitative experiments comparing the proposed neural network architecture with state-of-the-art models and a human baseline are conducted and reported. Feedback from an informal user study with domain experts is summarized and discussed in detail. All of the study results demonstrate the effectiveness of DeepClue in supporting stock market investment and analysis tasks.
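To make the factor-extraction idea concrete, here is a minimal sketch of gradient-style saliency over text features, assuming a toy sigmoid predictor; the vocabulary, weights, and scores are invented stand-ins, not DeepClue's actual architecture or extraction algorithm.

```python
# Hedged sketch: rank the words of a news item by a gradient-style saliency
# score. The linear-sigmoid model and its weights are illustrative assumptions.
import numpy as np

vocab = ["earnings", "lawsuit", "dividend", "merger", "recall"]
w = np.array([0.8, -1.1, 0.5, 0.9, -0.7])      # stand-in learned weights

def predict(x):
    """Toy 'up-move probability' from a bag-of-words vector x."""
    return 1.0 / (1.0 + np.exp(-(w @ x)))

def saliency(x):
    """|d prediction / d input| per feature; for sigmoid(w.x) this is
    |p * (1 - p) * w_i|, the analogue of gradient saliency in a deep net."""
    p = predict(x)
    return np.abs(p * (1 - p) * w) * (x > 0)    # only count present words

x = np.array([1.0, 0.0, 1.0, 1.0, 0.0])         # a news item's word counts
for word, s in sorted(zip(vocab, saliency(x)), key=lambda t: -t[1]):
    print(f"{word:10s} {s:.3f}")
```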

2012 ◽  
Vol 12 (12) ◽  
pp. 3719-3732 ◽  
Author(s):  
L. Mediero ◽  
L. Garrote ◽  
A. Chavez-Jimenez

Abstract. Opportunities offered by high-performance computing provide a significant degree of promise in enhancing the performance of real-time flood forecasting systems. In this paper, a real-time framework for probabilistic flood forecasting through data assimilation is presented. The distributed rainfall-runoff real-time interactive basin simulator (RIBS) model is selected to simulate the hydrological processes in the basin. Although the RIBS model is deterministic, it is run in a probabilistic way using the results of a calibration performed by the authors in previous work, which identified the probability distribution functions that best characterise the most relevant model parameters. Adaptive techniques improve flood forecasts because the model can be adapted to observations in real time as new information becomes available. The new adaptive forecast model, based on genetic programming as a data assimilation technique, is compared with the previously developed flood forecast model based on the calibration results. Both models are probabilistic, as they generate an ensemble of hydrographs that takes into account the different uncertainties inherent in any forecast process. The Manzanares River basin was selected as a case study; the process is computationally intensive, as it requires many replicas of the ensemble to be simulated in real time.
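A minimal sketch of how a deterministic model can be run probabilistically, assuming a toy linear-reservoir runoff model and a lognormal parameter distribution in place of the RIBS model and its calibrated PDFs:

```python
# Hedged sketch: a deterministic rainfall-runoff model run many times with
# parameters drawn from a calibrated distribution, yielding an ensemble of
# hydrographs. The model and the parameter PDF are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
rain = np.concatenate([np.zeros(5), np.full(10, 4.0), np.zeros(25)])  # mm/h

def linear_reservoir(rain, k, dt=1.0):
    """Deterministic runoff: storage dS/dt = rain - S/k, discharge Q = S/k."""
    S, Q = 0.0, []
    for r in rain:
        S += (r - S / k) * dt
        Q.append(S / k)
    return np.array(Q)

# Ensemble: one hydrograph per sampled recession constant k.
ks = rng.lognormal(mean=np.log(6.0), sigma=0.3, size=200)
ensemble = np.array([linear_reservoir(rain, k) for k in ks])

# Probabilistic forecast bands from the ensemble.
q10, q50, q90 = np.percentile(ensemble, [10, 50, 90], axis=0)
print("peak discharge, 10/50/90th percentile:", q10.max(), q50.max(), q90.max())
```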


2021 ◽  
Author(s):  
Benjamin Kellenberger ◽  
Devis Tuia ◽  
Dan Morris

Ecological research like wildlife censuses increasingly relies on data on the scale of terabytes. For example, modern camera trap datasets contain millions of images that require prohibitive amounts of manual labour to be annotated with species, bounding boxes, and the like. Machine learning, especially deep learning [3], could greatly accelerate this task through automated predictions, but involves extensive coding and expert knowledge.

In this abstract we present AIDE, the Annotation Interface for Data-driven Ecology [2]. In a first instance, AIDE is a web-based annotation suite for image labelling with support for concurrent access and scalability, up to the cloud. In a second instance, it tightly integrates deep learning models into the annotation process through active learning [7], where models learn from user-provided labels and in turn select the most relevant images for review from the large pool of unlabelled ones (Fig. 1). The result is a system where users only need to label what is required, which saves time and decreases errors due to fatigue.

Fig. 1: AIDE offers concurrent web image labelling support and uses annotations and deep learning models in an active learning loop.

AIDE includes a comprehensive set of built-in models, such as ResNet [1] for image classification, Faster R-CNN [5] and RetinaNet [4] for object detection, and U-Net [6] for semantic segmentation. All models can be customised and used without having to write a single line of code. Furthermore, AIDE accepts any third-party model with minimal implementation requirements. To complete the package, AIDE offers both user annotation and model prediction evaluation, access control, customisable model training, and more, all through the web browser.

AIDE is fully open source and available at https://github.com/microsoft/aerial_wildlife_detection.
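The active-learning loop at AIDE's core can be sketched in a few lines. The logistic model, synthetic "image features", and uncertainty rule below are stand-ins chosen for brevity; AIDE itself plugs in detectors and classifiers like those listed above.

```python
# Hedged sketch of an active-learning loop: train on the labelled set, score
# the unlabelled pool, and queue the most uncertain images for annotation.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 16))                  # stand-in image features
y_true = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

labelled = list(rng.choice(1000, 20, replace=False))
pool = [i for i in range(1000) if i not in labelled]

for rnd in range(5):
    model = LogisticRegression().fit(X[labelled], y_true[labelled])
    proba = model.predict_proba(X[pool])[:, 1]
    uncertainty = -np.abs(proba - 0.5)           # closest to 0.5 = least sure
    query = [pool[i] for i in np.argsort(uncertainty)[-10:]]
    labelled += query                            # annotators label these next
    pool = [i for i in pool if i not in query]
    print(f"round {rnd}: {len(labelled)} labels, "
          f"acc={model.score(X, y_true):.3f}")
```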


2013 ◽  
Vol 115 (3) ◽  
pp. 1-47 ◽  
Author(s):  
Barbara Means ◽  
Yukie Toyama ◽  
Robert Murphy ◽  
Marianne Baki

Background/Context: Earlier research on various forms of distance learning concluded that these technologies do not differ significantly from regular classroom instruction in terms of learning outcomes. Now that web-based learning has emerged as a major trend in both K–12 and higher education, the relative efficacy of online and face-to-face instruction needs to be revisited. The increased capabilities of web-based applications and collaboration technologies and the rise of blended learning models combining web-based and face-to-face classroom instruction have raised expectations for the effectiveness of online learning.

Purpose/Objective/Research Question/Focus of Study: This meta-analysis was designed to produce a statistical synthesis of studies contrasting learning outcomes for either fully online or blended learning conditions with those of face-to-face classroom instruction.

Population/Participants/Subjects: The types of learners in the meta-analysis studies were about evenly split between students in college or earlier years of education and learners in graduate programs or professional training. The average learner age in a study ranged from 13 to 44.

Intervention/Program/Practice: The meta-analysis was conducted on 50 effects found in 45 studies contrasting a fully or partially online condition with a fully face-to-face instructional condition. Length of instruction varied across studies and exceeded one month in the majority of them.

Research Design: The meta-analysis corpus consisted of (1) experimental studies using random assignment and (2) quasi-experiments with statistical control for preexisting group differences. An effect size was calculated or estimated for each contrast, and average effect sizes were computed for fully online learning and for blended learning. A coding scheme was applied to classify each study in terms of a set of conditions, practices, and methodological variables.

Findings/Results: The meta-analysis found that, on average, students in online learning conditions performed modestly better than those receiving face-to-face instruction. The advantage over face-to-face classes was significant in those studies contrasting blended learning with traditional face-to-face instruction but not in those studies contrasting purely online with face-to-face conditions.

Conclusions/Recommendations: Studies using blended learning also tended to involve additional learning time, instructional resources, and course elements that encourage interactions among learners. This confounding leaves open the possibility that one or all of these other practice variables contributed to the particularly positive outcomes for blended learning. Further research and development on different blended learning models is warranted. Experimental research testing design principles for blending online and face-to-face instruction for different kinds of learners is needed.
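As a sketch of the synthesis step described above, each contrast is converted to a standardized effect size (here Hedges' g) and pooled with inverse-variance weights. The three "studies" below are invented numbers for illustration only, not data from the 45-study corpus.

```python
# Hedged sketch: standardized effect sizes and a fixed-effect pooled estimate.
import numpy as np

def hedges_g(m1, m2, s1, s2, n1, n2):
    """Hedges' g and its approximate variance for a two-group contrast."""
    sp = np.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                        # Cohen's d
    J = 1 - 3 / (4 * (n1 + n2) - 9)           # small-sample correction
    g = J * d
    var = (n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2))
    return g, var

studies = [(78, 74, 10, 11, 40, 38),  # (mean_online, mean_f2f, sd1, sd2, n1, n2)
           (71, 70, 12, 12, 60, 55),
           (85, 79, 9, 10, 25, 30)]
gs, vs = zip(*(hedges_g(*s) for s in studies))
w = 1 / np.array(vs)                          # inverse-variance weights
pooled = (w * np.array(gs)).sum() / w.sum()   # fixed-effect pooled estimate
print(f"pooled g = {pooled:.3f}")
```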


Author(s):  
Sabine Seufert

According to several forecasts by, for example, the Gartner Group and International Data Corporation, e-learning, a new buzzword for Web-based education and its commercialization, appears to be a growing market in the digital economy. This case study analyzes this new and dynamic e-learning market and the corresponding changes in the education market. A framework of the different education models that have already developed in the e-learning market is introduced, and their benefits and risks are discussed. Several cases demonstrate the new e-learning models in action. This contribution therefore consists of several smaller cases that can be used to gain an overview of the e-learning market and to discuss e-learning as a promising e-commerce application on the Internet.


Algorithms ◽  
2018 ◽  
Vol 11 (12) ◽  
pp. 193
Author(s):  
Yuchuang Wang ◽  
Guoyou Shi ◽  
Xiaotong Sun

Container ships pass through multiple ports of call during a voyage. Forecasting container volume at the port of origin and sending this information to subsequent ports is therefore crucial for container terminal management and container stowage personnel. Numerous factors influence the allocation of containers to a container ship for a voyage, and the degree of influence varies, engendering a complex nonlinearity. Therefore, this paper proposes a model based on gray relational analysis (GRA) and a mixed-kernel support vector machine (SVM) for predicting the containers allocated to a container ship for a voyage. First, the weights of the influencing factors are determined through GRA. Then, the weighted factors serve as the input of the SVM model, and the SVM model parameters are optimized through a genetic algorithm. Numerical simulations revealed that the proposed model could effectively predict the number of containers for a container ship voyage and that it exhibited strong generalization ability and high accuracy. Accordingly, this model provides a new method for predicting container volume for a voyage.
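A minimal end-to-end sketch of the pipeline the abstract outlines, assuming synthetic data, an RBF-plus-polynomial mix for the kernel, and a deliberately simple genetic algorithm; none of the ranges or settings are from the paper.

```python
# Hedged sketch: GRA feature weighting + mixed-kernel SVR, with a tiny genetic
# algorithm tuning the hyperparameters (lambda, gamma, C). All illustrative.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def gra_weights(X, y, rho=0.5):
    """Gray relational grade of each feature against the target series."""
    norm = lambda v: (v - v.min()) / (v.max() - v.min() + 1e-12)
    yn = norm(y)
    grades = []
    for j in range(X.shape[1]):
        d = np.abs(norm(X[:, j]) - yn)                 # absolute differences
        coeff = (d.min() + rho * d.max()) / (d + rho * d.max())
        grades.append(coeff.mean())                    # relational grade
    w = np.array(grades)
    return w / w.sum()

def mixed_kernel(lam, gamma, degree=2):
    """Convex mix of an RBF and a polynomial kernel, as a callable for SVR."""
    def k(A, B):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return lam * np.exp(-gamma * sq) + (1 - lam) * (A @ B.T + 1.0) ** degree
    return k

def fitness(p, X, y):
    lam, gamma, C = p
    model = SVR(kernel=mixed_kernel(lam, gamma), C=C)
    return cross_val_score(model, X, y, cv=3,
                           scoring="neg_mean_squared_error").mean()

# Toy stand-in for the influencing factors and container volumes.
X = rng.normal(size=(120, 6))
y = X @ rng.normal(size=6) + 0.1 * rng.normal(size=120)
Xw = X * gra_weights(X, y)                             # GRA-weighted inputs

# Minimal GA: truncation selection plus multiplicative mutation.
pop = np.column_stack([rng.uniform(0, 1, 20),          # lambda
                       rng.uniform(0.01, 2, 20),       # gamma
                       rng.uniform(0.1, 100, 20)])     # C
for _ in range(10):
    scores = np.array([fitness(p, Xw, y) for p in pop])
    parents = pop[np.argsort(scores)[-10:]]            # keep the best half
    children = parents[rng.integers(0, 10, 10)] * rng.uniform(0.8, 1.2, (10, 3))
    children[:, 0] = np.clip(children[:, 0], 0, 1)     # keep lambda in [0, 1]
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(p, Xw, y) for p in pop])]
print("best (lambda, gamma, C):", best)
```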


2013 ◽  
Vol 24 (1) ◽  
pp. 27-34
Author(s):  
G. Manuel ◽  
J.H.C. Pretorius

In the 1980s, renewed interest in artificial neural networks (ANN) led to a wide range of applications, including demand forecasting. ANN demand forecasting algorithms were found to be preferable to parametric (also referred to as statistics-based) techniques. For an ANN demand forecasting algorithm, the demand may be stochastic or deterministic, linear or nonlinear. Comparative studies of the two broad streams of demand forecasting methodologies, namely artificial intelligence methods and statistical methods, have revealed that AI methods tend to hide the complexities of correlation analysis. In parametric methods, correlation is found by means of sometimes difficult and rigorous mathematics. Most statistical methods extract and correlate various demand elements, which are usually broadly classed into weather and non-weather variables. Several models account for noise and random factors and suggest optimization techniques specific to certain model parameters. For an ANN algorithm, however, the identification of input and output vectors is critical. Future demand is predicted by observing previous demand values and how underlying factors influence the overall demand. Trend analyses are conducted on these influential variables, and a medium- and long-term forecast model is derived. To perform an accurate forecast, changes in the demand have to be defined in terms of how these input vectors correlate with the final demand. The elements of the input vector have to be identifiable and quantifiable. This paper proposes a method known as relevance trees to identify critical elements of the input vector. The case study is of a rapid railway operator, namely the Gautrain.
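Relevance trees can take several forms; one simple reading, sketched below under that assumption, gives each node a share of its parent's relevance so that a leaf's score is the product of the weights along its path. The demand-driver tree and weights are invented for illustration and are not the Gautrain study's actual factors.

```python
# Hedged sketch of a relevance tree for screening input-vector candidates.
# Branch and leaf weights each sum to 1 at their level; a leaf's relevance
# is the product of weights along its path from the root.
tree = {
    "weather": {"w": 0.3, "children": {"temperature": 0.7, "rainfall": 0.3}},
    "service": {"w": 0.5, "children": {"fares": 0.4, "frequency": 0.6}},
    "economy": {"w": 0.2, "children": {"fuel_price": 0.5, "employment": 0.5}},
}

def leaf_relevance(tree):
    scores = {}
    for branch in tree.values():
        for leaf, w in branch["children"].items():
            scores[leaf] = branch["w"] * w            # path product
    return dict(sorted(scores.items(), key=lambda kv: -kv[1]))

print(leaf_relevance(tree))  # frequency (0.30) ranks as most critical
```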


Book 2.0 ◽
2014 ◽  
Vol 4 (1) ◽  
pp. 5-20
Author(s):  
Sebastian Drude ◽  
Daan Broeder ◽  
Paul Trilsbeek

Since the late 1990s, the technical group at the Max Planck Institute for Psycholinguistics has worked on solutions to important challenges in building sustainable data archives, in particular how to guarantee the long-term availability of digital research data for future research. Support for the well-known DOBES (Documentation of Endangered Languages) programme has greatly inspired and advanced this work, and led to the ongoing development of a whole suite of tools for annotating, cataloguing and archiving multimedia data. At the core of the LAT (Language Archiving Technology) tools is the IMDI metadata schema, now being integrated into a larger network of digital resources in the European CLARIN project. The multimedia annotator ELAN (with its web-based cousin ANNEX) is now well known, and not only among documentary linguists. We present an overview of the solutions, both achieved and in development, for creating and exploiting sustainable digital data, in particular in the area of documenting languages and cultures, and their interfaces with other related developments.


2020 ◽  
Vol 148 (7) ◽  
pp. 2997-3014
Author(s):  
Caren Marzban ◽  
Robert Tardif ◽  
Scott Sandgathe

Abstract A sensitivity analysis methodology recently developed by the authors is applied to COAMPS and WRF. The method involves varying model parameters according to Latin Hypercube Sampling, and developing multivariate multiple regression models that map the model parameters to forecasts over a spatial domain. The regression coefficients and p values testing whether the coefficients are zero serve as measures of sensitivity of forecasts with respect to model parameters. Nine model parameters are selected from COAMPS and WRF, and their impact is examined on nine forecast quantities (water vapor, convective and gridscale precipitation, and air temperature and wind speed at three altitudes). Although the conclusions depend on the model parameters and specific forecast quantities, it is shown that sensitivity to model parameters is often accompanied by nontrivial spatial structure, which itself depends on the underlying forecast model (i.e., COAMPS vs WRF). One specific difference between these models is in their sensitivity with respect to a parameter that controls temperature increments in the Kain–Fritsch trigger function; whereas this parameter has a distinct spatial structure in COAMPS, that structure is completely absent in WRF. The differences between COAMPS and WRF also extend to the quality of the statistical models used to assess sensitivity; specifically, the differences are largest over the waters off the southeastern coast of the United States. The implication of these findings is twofold: not only is the spatial structure of sensitivities different between COAMPS and WRF, the underlying relationship between the model parameters and the forecasts is also different between the two models.
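The method itself is compact enough to sketch at a single grid point: sample the model parameters by Latin hypercube sampling, run the forecast model, regress the forecasts on the parameters, and read the coefficients and p-values as sensitivities. The toy forecast function below stands in for COAMPS/WRF; the parameter bounds are invented.

```python
# Hedged sketch: LHS over model parameters + multiple regression of forecasts
# on parameters; coefficients and p-values serve as sensitivity measures.
import numpy as np
from scipy.stats import qmc
import statsmodels.api as sm

rng = np.random.default_rng(7)
sampler = qmc.LatinHypercube(d=3, seed=7)
params = qmc.scale(sampler.random(n=100), [0, 0, 0], [1, 2, 5])  # 3 parameters

# Toy forecast at one grid point: sensitive to p0 and p2, not to p1.
forecast = 2.0 * params[:, 0] - 1.5 * params[:, 2] + 0.1 * rng.normal(size=100)

X = sm.add_constant(params)
fit = sm.OLS(forecast, X).fit()
for name, b, p in zip(["const", "p0", "p1", "p2"], fit.params, fit.pvalues):
    print(f"{name}: coef={b:+.2f}, p={p:.3g}")   # p0, p2 significant; p1 not
```

Repeating the regression at every grid point yields the spatial maps of sensitivity whose structure the abstract compares between COAMPS and WRF.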

