Impact of Chaining Method and Level of Completion on Accuracy of Function Structure-Based Market Price Prediction Models

Author(s):  
Amaninder Singh Gill ◽  
Joshua D. Summers ◽  
Chiradeep Sen

The goal of this paper is to explore how different modeling approaches for constructing function structure models and different levels of model completion affect the ability to make inferences (reason) on the resulting information within the respective models. Specifically, the function structure models are used to predict market prices of products, and the predictions are then compared based on their accuracy and precision. This work builds on previous studies of how function modeling and topological information from design graphs can be used to predict information with historical training. It was found that forward chaining was the least favorable chaining type irrespective of the level of completion, whereas the backward-chaining models performed relatively better across all completion levels. Given the poor performance of the nucleation models at the highest level of completion, future research must be directed toward understanding and employing the methods yielding the most accuracy. Moreover, the results from this simulation-based study can be used to develop modeling guidelines for designers or students when constructing function models.

Author(s):  
Amaninder Singh Gill ◽  
Joshua D. Summers

The goal of this paper is to explore how different modeling approaches to constructing function structure models and different levels of model completion affect the information contained within the respective models. Specifically, the models are used to predict market prices of products. These predictions are compared based on their accuracy and precision. This work is based on previous studies of how function modeling is done and how topological information from design graphs can be used to predict information with historical training. It was found that forward chaining was the least favorable chaining type irrespective of the level of completion. Backward-chaining models performed relatively better across all completion percentages, while nucleation models did not perform as well at higher completion percentages. Hence, greater attention is needed to understand and employ the methods yielding the most accuracy.


Author(s):  
Amaninder Singh Gill ◽  
Joshua D. Summers ◽  
Cameron J. Turner

This paper explores the amount of information stored in the representational components of a function structure: vocabulary, grammar, and topology. This is done by classifying the previously developed functional composition rules into vocabulary, grammatical, and topological classes and applying them to function structures available in an external design repository. The pruned function structures of electromechanical devices are then evaluated for how accurately market values can be predicted using the graph complexity connectivity method. The accuracy is inversely related to the amount of information and the level of detail. Applying the topological rule does not significantly impact the predictive power of the models, while applying the vocabulary rules and the grammar rules reduces the accuracy of the predictions. Finally, the least predictive model set is the one with all rules applied. In this manner, the value of a representation for predicting or answering questions is quantified through this research approach.
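As a sketch of the kind of topological information such prediction models draw on, the snippet below computes simple size and connectivity metrics for a toy function structure graph. The node names and the particular metrics are illustrative assumptions, not the paper's graph complexity connectivity method.

```python
# A toy function structure graph: nodes are functions, directed edges are
# energy/material/signal flows. Names are illustrative, not from the paper.
edges = [
    ("import electricity", "convert electricity to torque"),
    ("convert electricity to torque", "transmit torque"),
    ("import human hand", "actuate switch"),
    ("actuate switch", "convert electricity to torque"),
]

def topology_metrics(edges):
    """Simple size/connectivity metrics of the kind fed to complexity-based
    prediction models (a sketch, not the published method)."""
    nodes = {n for edge in edges for n in edge}
    n, m = len(nodes), len(edges)
    avg_degree = 2 * m / n          # each edge touches two nodes
    density = m / (n * (n - 1))     # fraction of possible directed edges
    return {"nodes": n, "edges": m, "avg_degree": avg_degree, "density": density}

metrics = topology_metrics(edges)
```

In the published approach, metrics of this sort would serve as inputs to a predictor trained on historical market values.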


2020 ◽  
Author(s):  
Sina Faizollahzadeh Ardabili ◽  
Amir Mosavi ◽  
Pedram Ghamisi ◽  
Filip Ferdinand ◽  
Annamaria R. Varkonyi-Koczy ◽  
...  

Several outbreak prediction models for COVID-19 are being used by officials around the world to make informed decisions and enforce relevant control measures. Among the standard models for COVID-19 global pandemic prediction, simple epidemiological and statistical models have received more attention from authorities, and they are popular in the media. Due to a high level of uncertainty and a lack of essential data, standard models have shown low accuracy for long-term prediction. Although the literature includes several attempts to address this issue, the essential generalization and robustness abilities of existing models need to be improved. This paper presents a comparative analysis of machine learning and soft computing models to predict the COVID-19 outbreak as an alternative to SIR and SEIR models. Among a wide range of machine learning models investigated, two showed promising results: the multi-layered perceptron (MLP) and the adaptive network-based fuzzy inference system (ANFIS). Based on the results reported here, and due to the highly complex nature of the COVID-19 outbreak and the variation in its behavior from nation to nation, this study suggests machine learning as an effective tool to model the outbreak. This paper provides an initial benchmarking to demonstrate the potential of machine learning for future research. The paper further suggests that real novelty in outbreak prediction can be realized by integrating machine learning and SEIR models.
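For contrast with the machine learning approaches, the SEIR baseline mentioned above can be sketched as a forward-Euler integration of the classic compartmental equations. The parameter values below are illustrative assumptions, not fitted estimates from the paper.

```python
def seir_step(s, e, i, r, beta, sigma, gamma, dt=1.0):
    """One forward-Euler step of the classic SEIR compartmental model.
    beta: transmission rate, sigma: 1/incubation period, gamma: recovery rate."""
    n = s + e + i + r
    ds = -beta * s * i / n
    de = beta * s * i / n - sigma * e
    di = sigma * e - gamma * i
    dr = gamma * i
    return s + ds * dt, e + de * dt, i + di * dt, r + dr * dt

def simulate(days, s, e, i, r, beta=0.3, sigma=1 / 5.2, gamma=1 / 10):
    """Integrate the model forward; parameter defaults are illustrative."""
    trajectory = [(s, e, i, r)]
    for _ in range(days):
        s, e, i, r = seir_step(s, e, i, r, beta, sigma, gamma)
        trajectory.append((s, e, i, r))
    return trajectory

traj = simulate(60, s=999_000, e=1_000, i=0, r=0)
```

Because the compartment derivatives sum to zero, the total population is conserved at every step, which is one property a data-driven model is not guaranteed to respect.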


2020 ◽  
Vol 27 (5) ◽  
pp. 385-391
Author(s):  
Lin Zhong ◽  
Zhong Ming ◽  
Guobo Xie ◽  
Chunlong Fan ◽  
Xue Piao

In recent years, more and more evidence indicates that long non-coding RNA (lncRNA) plays a significant role in complex biological processes, especially RNA processing, chromatin modification, and cell differentiation, as well as many other processes. Surprisingly, lncRNA has an inseparable relationship with human diseases such as cancer. Therefore, only by knowing more about the function of lncRNA can we better solve the problems of human diseases. However, lncRNAs need to bind to proteins to perform their biomedical functions, so lncRNA function can be revealed by studying the relationships between lncRNAs and proteins. Due to the limitations of traditional experiments, researchers often use computational prediction models to predict lncRNA–protein interactions. In this review, we summarize several computational models for lncRNA–protein interaction prediction based on semi-supervised learning from the past two years, and briefly introduce their advantages and shortcomings. Finally, future research directions for lncRNA–protein interaction prediction are pointed out.


Author(s):  
Kunal Wagh ◽  
Pankaj Dhatrak

The transport industry is a major contributor to both local pollution and greenhouse gas emissions (GHGs). The key challenge today is to mitigate the adverse impacts on the environment caused by road transportation. The volatile market prices and diminishing supplies of fuel have led to an unprecedented interest in battery electric vehicles (BEVs). In addition, improvements in motor efficiencies and significant advances in battery technology have made it easier for BEVs to compete with internal combustion engine (ICE) vehicles. This paper describes and assesses the latest technologies in different elements of the BEV: powertrain architectures, propulsion and regeneration systems, energy storage systems and charging techniques. The current and future trends of these technologies have been reviewed in detail. Finally, the key issue of electric vehicle component recycling (battery, motor and power electronics) has been discussed. Global emission regulations are pushing the industry towards zero or ultra-low emission vehicles. Thus, by 2025, most cars must have a considerable level of powertrain electrification. As the market share of electric vehicles increases, clear trends have emerged in the development of powertrain systems. However, some significant barriers must be overcome before appreciable market penetration can be achieved. The objective of the current study is to review and provide a complete picture of the current BEV technology and a framework to assist future research in the sector.


Healthcare ◽  
2021 ◽  
Vol 9 (6) ◽  
pp. 778
Author(s):  
Ann-Rong Yan ◽  
Indira Samarawickrema ◽  
Mark Naunton ◽  
Gregory M. Peterson ◽  
Desmond Yip ◽  
...  

Venous thromboembolism (VTE) is a significant cause of mortality in patients with lung cancer. Despite the availability of a wide range of anticoagulants to help prevent thrombosis, thromboprophylaxis in ambulatory patients is a challenge due to its associated risk of haemorrhage. As a result, anticoagulation is only recommended in patients with a relatively high risk of VTE. Efforts have been made to develop predictive models for VTE risk assessment in cancer patients, but the availability of a reliable predictive model for ambulatory patients with lung cancer is unclear. We have analysed the latest information on this topic, with a focus on lung cancer-related risk factors for VTE and on risk prediction models developed and validated in this group of patients. The existing risk models, such as the Khorana score, the PROTECHT score and the CONKO score, have shown poor performance in external validations, failing to identify many high-risk individuals. Some of the newly developed and updated models may be promising, but further validation is needed.
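The Khorana score mentioned above combines five readily available clinical predictors. The sketch below follows the components as originally published (Khorana et al., 2008); the example patient is hypothetical.

```python
# Cancer-site risk groups from the original 2008 derivation.
VERY_HIGH_RISK_SITES = {"stomach", "pancreas"}          # 2 points
HIGH_RISK_SITES = {"lung", "lymphoma", "gynecologic",
                   "bladder", "testicular"}             # 1 point

def khorana_score(site, platelets, hemoglobin, esa_use, leukocytes, bmi):
    """Khorana score; units: platelets and leukocytes in 10^9/L,
    hemoglobin in g/dL, BMI in kg/m^2. ESA = erythropoiesis-stimulating agent."""
    score = 0
    if site in VERY_HIGH_RISK_SITES:
        score += 2
    elif site in HIGH_RISK_SITES:
        score += 1
    if platelets >= 350:
        score += 1
    if hemoglobin < 10 or esa_use:
        score += 1
    if leukocytes > 11:
        score += 1
    if bmi >= 35:
        score += 1
    return score

# A hypothetical ambulatory lung-cancer patient with elevated platelets:
example = khorana_score("lung", platelets=360, hemoglobin=12.5,
                        esa_use=False, leukocytes=9.0, bmi=27)
```

In the original scheme a total of 3 or more marks the high-risk group; the external validation failures discussed in the abstract concern exactly how well such thresholds separate patients who go on to develop VTE.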


INFO ARTHA ◽  
2021 ◽  
Vol 5 (1) ◽  
pp. 45-53
Author(s):  
Swasito Adhipradana Prabu

The decentralization of PBB-P2 in Indonesia is expected to produce a better PBB-P2 administration system. One indicator of a better PBB-P2 administration system is fair collection of PBB-P2 based on a tax base (NJOP) valuation close to market prices. This study examines whether the NJOP, as the basis for the imposition of PBB-P2, is in accordance with market prices, using the assessment ratio. This study found that the current level of accuracy of the NJOP has not met the standard agreed upon by the IAAO. In addition, the NJOP accuracy rate in big cities was slightly better than in other cities. Finally, this study found no positive correlation between NJOP updating through the completion of SPOP forms and NJOP accuracy.
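The assessment ratio used in the study is simply the assessed value divided by the observed market price. The sketch below computes parcel-level ratios and checks the median against the commonly cited IAAO level-of-appraisal band of 0.90–1.10; the band bounds and the parcel values are stated here as assumptions for illustration.

```python
from statistics import median

def assessment_ratios(assessed, market):
    """Ratio of assessed value (NJOP) to market price for each parcel."""
    return [a / m for a, m in zip(assessed, market)]

def meets_iaao_level(ratios, lo=0.90, hi=1.10):
    """Check the median ratio against an assumed IAAO level-of-appraisal band."""
    return lo <= median(ratios) <= hi

# Hypothetical parcels (values in millions of rupiah, purely illustrative):
njop = [450, 300, 520, 610]
market = [700, 480, 800, 950]
ratios = assessment_ratios(njop, market)
```

A median ratio well below the band, as in this toy example, would indicate systematic under-assessment relative to market prices, which is the pattern the study reports for the NJOP.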


2018 ◽  
Vol 1 (2) ◽  
pp. 79-86 ◽  
Author(s):  
David P. Looney ◽  
Mark J. Buller ◽  
Andrei V. Gribok ◽  
Jayme L. Leger ◽  
Adam W. Potter ◽  
...  

ECTemp™ is a heart rate (HR)-based core temperature (CT) estimation algorithm mainly used as a real-time thermal-work strain indicator in military populations. ECTemp™ may also be valuable for resting CT estimation, which is critical for circadian rhythm research. This investigation developed and incorporated a sigmoid equation into ECTemp™ to better estimate resting CT. HR and CT data were collected over two calorimeter test trials from 16 volunteers (age, 23 ± 3 yrs; height, 1.72 ± 0.07 m; body mass, 68.5 ± 8.1 kg) during periods of sleep and inactivity. Half of the test trials were combined with ECTemp™’s original development dataset to train the new sigmoid model, while the other half was used for model validation. Models were compared by their estimation accuracy and precision. While both models produced accurate CT estimates, the sigmoid model had a smaller bias (−0.04 ± 0.26°C vs. −0.19 ± 0.29°C) and root mean square error (RMSE; 0.26°C vs. 0.35°C). ECTemp™ is a validated HR-based resting CT estimation algorithm. The new sigmoid equation corrects lower CT estimates while producing nearly identical estimates to the original quadratic equation at higher CT. The demonstrated accuracy of ECTemp™ encourages future research to explore the algorithm’s potential as a non-invasive means of tracking CT circadian rhythms.
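The two comparison criteria reported above, bias and RMSE, can be computed as below. The temperature series is hypothetical, and this is only the evaluation step, not the ECTemp™ algorithm itself.

```python
from math import sqrt

def bias_and_rmse(estimates, reference):
    """Mean error (bias) and root mean square error between estimated
    and reference core temperatures."""
    errors = [e - r for e, r in zip(estimates, reference)]
    bias = sum(errors) / len(errors)
    rmse = sqrt(sum(err ** 2 for err in errors) / len(errors))
    return bias, rmse

# Hypothetical resting core temperature series (degrees C):
ref = [36.8, 36.7, 36.6, 36.7]
est = [36.7, 36.7, 36.5, 36.8]
b, r = bias_and_rmse(est, ref)
```

A negative bias, as in the reported −0.04 °C and −0.19 °C figures, means the model underestimates on average; RMSE additionally penalizes scatter around that average.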


2019 ◽  
Vol 24 (48) ◽  
pp. 194-204 ◽  
Author(s):  
Francisco Flores-Muñoz ◽  
Alberto Javier Báez-García ◽  
Josué Gutiérrez-Barroso

Purpose: This work aims to explore the behavior of stock market prices according to the autoregressive fractionally integrated moving average (ARFIMA) model. This behavior is compared with a measure of online presence: search engine results as measured by Google Trends. Design/methodology/approach: The study sample comprises the companies listed in the STOXX® Global 3000 Travel and Leisure. Google Finance and Yahoo Finance, along with Google Trends, were used, respectively, to obtain the stock price and search result data for a period of five years (October 2012 to October 2017). To guarantee comparability between the two data sets, weekly observations were collected for a total of 118 firms, with two time series each (price and search results), around 61,000 observations in all. Findings: Relationships between the two data sets are explored, with theoretical implications for the fields of economics, finance and management. Tourist corporations were analyzed owing to their growing economic impact. The estimations are initially consistent with long memory, suggesting that both stock market prices and online search trends deserve further exploration for modeling and forecasting. Significant differences owing to country and sector effects are also shown. Originality/value: This research contributes in two ways: it demonstrates the potential of a new tool for the analysis of relevant time series to monitor the behavior of firms and markets, and it suggests several theoretical pathways for further research on the specific topics of information asymmetry and corporate transparency, proposing pertinent bridges between the two fields.
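The fractional differencing at the core of ARFIMA admits a simple recursive weight computation for the operator (1 − L)^d. The sketch below is a minimal illustration under the standard recurrence, not the authors' estimation procedure; the value d = 0.4 is an illustrative long-memory parameter.

```python
def frac_diff_weights(d, k_max):
    """Coefficients of the fractional differencing operator (1 - L)^d,
    via the recurrence w_k = w_{k-1} * (k - 1 - d) / k, with w_0 = 1."""
    w = [1.0]
    for k in range(1, k_max + 1):
        w.append(w[-1] * (k - 1 - d) / k)
    return w

def frac_diff(series, d):
    """Apply fractional differencing to a series (expanding window)."""
    w = frac_diff_weights(d, len(series) - 1)
    return [sum(w[k] * series[t - k] for k in range(t + 1))
            for t in range(len(series))]

# For 0 < d < 0.5 the weights decay slowly, the signature of long memory:
w = frac_diff_weights(0.4, 3)
```

At d = 1 the recurrence collapses to ordinary first differencing, while fractional d values let the model interpolate between short- and long-memory behavior, which is what the long-memory estimations in the abstract test for.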


Forecasting ◽  
2021 ◽  
Vol 3 (3) ◽  
pp. 633-643
Author(s):  
Niccolo Pescetelli

As artificial intelligence becomes ubiquitous in our lives, so do the opportunities to combine machine and human intelligence to obtain more accurate and more resilient prediction models across a wide range of domains. Hybrid intelligence can be designed in many ways, depending on the role of the human and the algorithm in the hybrid system. This paper offers a brief taxonomy of hybrid intelligence, which describes possible relationships between human and machine intelligence for robust forecasting. In this taxonomy, biological intelligence represents one axis of variation, going from individual intelligence (one individual in isolation) to collective intelligence (several connected individuals). The second axis of variation represents increasingly sophisticated algorithms that can take into account more aspects of the forecasting system, from information to task to human problem-solvers. The novelty of the paper lies in the interpretation of recent studies in hybrid intelligence as precursors of a set of algorithms that are expected to be more prominent in the future. These algorithms promise to increase hybrid systems’ resilience across a wide range of human errors and biases thanks to greater human–machine understanding. This work ends with a short overview of directions for future research in this field.

