Big data analysis of economic news

2017 ◽  
Vol 9 ◽  
pp. 184797901772004 ◽  
Author(s):  
Mohammed Elshendy ◽  
Andrea Fronzetti Colladon

We propose a novel method to improve the forecast of macroeconomic indicators based on social network and semantic analysis techniques. In particular, we explore variables extracted from the Global Database of Events, Language, and Tone, which monitors the world’s broadcast, print and web news. We investigate the locations and the countries involved in economic events (such as business or economic agreements), as well as the tone and the Goldstein scale of the news in which the events are reported. We connect these elements to build three different social networks and to extract new network metrics, which prove their value in extending the predictive power of models based only on other economic or demographic indices. We find that the number of news reports, their tone, the network constraint of nations and the oscillations of their betweenness centrality are important predictors of the Gross Domestic Product per Capita and of the Business and Consumer Confidence indices.
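
The network metrics mentioned above (betweenness centrality and structural-hole constraint) can be computed with standard graph tooling. The following is a minimal sketch, assuming hypothetical GDELT-style event counts between country pairs and a toy GDP-per-capita series rather than the authors' actual data or variable names:

```python
import networkx as nx
import pandas as pd
import statsmodels.api as sm

# Hypothetical GDELT-style event records: (country_a, country_b, num_news, avg_tone)
events = pd.DataFrame({
    "country_a": ["USA", "USA", "DEU", "CHN"],
    "country_b": ["CHN", "DEU", "CHN", "JPN"],
    "num_news":  [120, 80, 45, 60],
    "avg_tone":  [1.2, 2.5, -0.4, 0.8],
})

# Build a weighted country-to-country network from co-occurrence in economic events
G = nx.Graph()
for _, row in events.iterrows():
    G.add_edge(row.country_a, row.country_b, weight=row.num_news)

# Structural-hole constraint and betweenness centrality per country
constraint = nx.constraint(G, weight="weight")
betweenness = nx.betweenness_centrality(G, weight="weight")

features = pd.DataFrame({
    "constraint": constraint,
    "betweenness": betweenness,
}).sort_index()

# Regress a macroeconomic indicator on the network metrics (toy values only)
gdp_per_capita = pd.Series([10500, 46000, 40000, 63000],
                           index=["CHN", "DEU", "JPN", "USA"])
X = sm.add_constant(features.loc[gdp_per_capita.index])
model = sm.OLS(gdp_per_capita, X).fit()
print(model.summary())
```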

2014 ◽  
Vol 2014 ◽  
pp. 1-8
Author(s):  
Dingbo Duan ◽  
Guangyu Gao ◽  
Chi Harold Liu ◽  
Jian Ma

Person identification plays an important role in the semantic analysis of video content. This paper presents a novel method to automatically label persons in video sequences captured from a fixed camera. Instead of leveraging traditional face recognition approaches, we address person identification by fusing motion information from sensor platforms carried on the body, such as smartphones, with motion information extracted from the camera video. More specifically, a sequence of motion features extracted from the camera video is compared with each of the sequences collected from the smartphones' accelerometers. When a strong correlation is detected, the identity information transmitted by the corresponding smartphone is used to identify the phone wearer. To test the feasibility and efficiency of the proposed method, extensive experiments were conducted and achieved impressive performance.
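
A minimal sketch of the core matching step, correlating a video-derived motion sequence against candidate accelerometer traces, could look like the following; the signal shapes, threshold and identifiers are illustrative assumptions, not the paper's actual features:

```python
import numpy as np

def identify_person(video_motion, phone_signals, threshold=0.7):
    """Match a motion-feature sequence from video against accelerometer
    traces from candidate phones; return the best-matching identity.

    video_motion  : 1-D array of per-frame motion magnitudes for one person
    phone_signals : dict mapping user id -> 1-D accelerometer magnitude
                    series resampled to the video frame rate
    """
    best_id, best_corr = None, -1.0
    for user_id, accel in phone_signals.items():
        n = min(len(video_motion), len(accel))
        # Pearson correlation between the two sequences
        corr = np.corrcoef(video_motion[:n], accel[:n])[0, 1]
        if corr > best_corr:
            best_id, best_corr = user_id, corr
    # Only label the person when the correlation is strong enough
    return best_id if best_corr >= threshold else None

# Toy usage with synthetic signals
t = np.linspace(0, 10, 300)
video_motion = np.abs(np.sin(t)) + 0.05 * np.random.randn(300)
phone_signals = {
    "alice": np.abs(np.sin(t)) + 0.05 * np.random.randn(300),      # same gait pattern
    "bob":   np.abs(np.sin(2 * t)) + 0.05 * np.random.randn(300),  # different pattern
}
print(identify_person(video_motion, phone_signals))  # likely "alice"
```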


2019 ◽  
Vol 2 (1) ◽  
pp. 1-8
Author(s):  
Lilik Uzlifatul Jannah ◽  
Ike Susanti

The purpose of this study was to determine the ability of students to formulate questions in English. The research employed a descriptive qualitative method using simple analysis techniques based on grammatical and semantic analysis. The questions made by students were based on reading at the literal or basic level. The research subjects were 80 students of the Non-English Study Program, second-semester Management Study Program, Lamongan Islamic University. The results were as follows: (1) students were able to formulate many questions in English, even if the questions did not meet grammatical and semantic rules; (2) students' mistakes in formulating questions based on the reading text included grammatical errors (60%) and errors in semantics or meaning (40%); and (3) students still used translation techniques when formulating questions, so the strong influence of the first language (Indonesian) and of Indonesian writing conventions can be clearly seen. Finally, it can be concluded that the students' reading skill is at the lower cognitive level, i.e. the literal phase.


2021 ◽  
pp. 35-50
Author(s):  
Yevgeniya A. Savchenko-Synyakova ◽  
Olena V. Tutova ◽  
Halyna A. Pidnebesna ◽  
...  

CORAL GMDH is an inductive modeling method based on the Group Method of Data Handling (GMDH). In this article it is used for modeling and forecasting socio-economic processes. The CORAL GMDH algorithm is applied to solve three problems: recovery of missing data, modeling of macroeconomic indicators, and forecasting of gross national income (GNI). The algorithm is also used to model the dependence of GNI on socio-demographic indicators and to develop recommendations on how the state can influence the level of human capital development in the country by acting on certain socio-demographic indices. The modeling results for Ukraine, Belarus, and Poland are compared.
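
The CORAL variant itself is not spelled out here, but the general GMDH idea of fitting many small candidate models and selecting them on an external validation set can be sketched as follows: a single inductive layer over pairwise quadratic models on toy data, with all names and data hypothetical:

```python
import itertools
import numpy as np

def fit_quadratic_pair(x1, x2, y):
    """Fit y ~ a0 + a1*x1 + a2*x2 + a3*x1*x2 + a4*x1^2 + a5*x2^2 by least squares."""
    A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict_pair(coef, x1, x2):
    A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    return A @ coef

def gmdh_layer(X_train, y_train, X_valid, y_valid, keep=4):
    """One inductive layer: fit a partial model for every pair of inputs and
    keep the models with the lowest error on the external (validation) set."""
    candidates = []
    for i, j in itertools.combinations(range(X_train.shape[1]), 2):
        coef = fit_quadratic_pair(X_train[:, i], X_train[:, j], y_train)
        err = np.mean((predict_pair(coef, X_valid[:, i], X_valid[:, j]) - y_valid) ** 2)
        candidates.append((err, i, j, coef))
    candidates.sort(key=lambda c: c[0])
    return candidates[:keep]

# Toy usage: socio-demographic predictors (columns) vs. a GNI-like target
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))
y = 2 * X[:, 0] - X[:, 1] * X[:, 3] + 0.1 * rng.normal(size=60)
best = gmdh_layer(X[:40], y[:40], X[40:], y[40:])
for err, i, j, _ in best:
    print(f"inputs ({i},{j}) -> validation MSE {err:.3f}")
```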


2017 ◽  
Vol 2017 ◽  
pp. 1-10 ◽  
Author(s):  
U. A. Piumi Ishanka ◽  
Takashi Yukawa

Context-aware recommendation systems attempt to address the challenge of identifying products or items that have the greatest chance of meeting user requirements by adapting to current contextual information. Many such systems have been developed in domains such as movies, books, and music, and emotion is a contextual parameter that has already been used in those fields. This paper focuses on the use of emotion as a contextual parameter in a tourist destination recommendation system. We developed a new corpus that incorporates the emotion parameter by employing semantic analysis techniques for destination recommendation. We review the effectiveness of incorporating emotion in a recommendation process using prefiltering techniques and show that the use of emotion as a contextual parameter for location recommendation in conjunction with collaborative filtering increases user satisfaction.
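
As a rough illustration of contextual prefiltering, the sketch below keeps only ratings logged under the target emotion before scoring destinations; a simple per-item mean stands in for the full collaborative-filtering step, and the users, destinations and emotions are invented for the example:

```python
import pandas as pd

# Hypothetical ratings of destinations, each logged with the user's emotion context
ratings = pd.DataFrame({
    "user":        ["u1", "u1", "u2", "u2", "u3", "u3", "u3"],
    "destination": ["beach", "museum", "beach", "hiking", "museum", "hiking", "beach"],
    "rating":      [5, 2, 4, 5, 4, 3, 5],
    "emotion":     ["joy", "sad", "joy", "joy", "sad", "joy", "joy"],
})

def recommend(target_user, emotion, k=2):
    """Contextual prefiltering: keep only ratings made under the same emotion,
    then score unseen destinations by their mean rating in that context."""
    ctx = ratings[ratings.emotion == emotion]
    matrix = ctx.pivot_table(index="user", columns="destination", values="rating")

    seen = matrix.loc[target_user].dropna() if target_user in matrix.index else pd.Series(dtype=float)
    scores = {}
    for item in matrix.columns:
        if item in seen.index:
            continue
        # Mean rating among other users in this emotional context
        scores[item] = matrix[item].mean()
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("u1", "joy"))
```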


2019 ◽  
Vol 46 (4) ◽  
pp. 508-527 ◽  
Author(s):  
Qi Wen ◽  
Peter A Gloor ◽  
Andrea Fronzetti Colladon ◽  
Praful Tickoo ◽  
Tushar Joshi

In the information economy, individuals’ work performance is closely associated with their digital communication strategies. This study combines social network and semantic analysis to develop a method for identifying top performers based on email communication. By reviewing the existing literature, we identified indicators that quantify email communication along measurable dimensions. To empirically examine the predictive power of the proposed indicators, we collected an archive of 2 million emails from 578 executives in an international service company. Panel regression was employed to derive interpretable associations between email indicators and top performance. The results suggest that top performers tend to assume central network positions and respond quickly to emails. In email content, top performers use more positive and complex language, with low emotionality, but rich in influential words that are probably reused by co-workers. To better explore the predictive power of the email indicators, we employed AdaBoost machine learning models, which achieved 83.56% accuracy in identifying top performers. With cluster analysis, we further identified three categories of top performers: ‘networkers’ with central network positions, ‘influencers’ with influential ideas and ‘positivists’ with positive sentiments. The findings suggest that top performers have distinctive email communication patterns, laying the foundation for grounding email communication competence in theory. The proposed email analysis method also provides a tool to evaluate different types of individual communication styles.
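
The classification step described above can be reproduced in spirit with scikit-learn's AdaBoost implementation; the sketch below uses synthetic per-person features as placeholders for the paper's email-derived indicators, so the reported 83.56% accuracy should not be expected from it:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical per-person features derived from email traffic and content:
# [betweenness centrality, avg response time, sentiment, language complexity, influence score]
rng = np.random.default_rng(42)
n = 578
X = rng.normal(size=(n, 5))
# Toy label: "top performer" loosely tied to centrality, responsiveness and influence
y = (0.8 * X[:, 0] - 0.6 * X[:, 1] + 0.5 * X[:, 4]
     + rng.normal(scale=0.5, size=n)) > 0.5

clf = AdaBoostClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y.astype(int), cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {scores.mean():.2%}")
```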


2019 ◽  
Vol 4 (2) ◽  
pp. 272-285
Author(s):  
Yicheng Zhu

The current literature on economic news coverage mainly focuses on news about the domestic economy. This study asks a further question: does international economic news accurately reflect the economic performance of a foreign country? Taking China as the target country, the study uses economic news coverage from other countries drawn from the Global Database of Events, Language, and Tone, constructs a Poisson lagged regression model for news volume, and compares an autoregressive conditional heteroskedasticity (ARCH) model with an autoregressive integrated moving average (ARIMA) model for changes in economic news tone. The results show that international economic news coverage differs considerably from domestic coverage, and that the attention of foreign news to the Chinese economy is negatively related to the performance of the Shanghai Stock Index. Moreover, the tone of economic news about China’s economy shows a seasonal pattern.
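
A minimal sketch of the two modeling ingredients, a Poisson regression of news volume on a lagged market indicator and an ARIMA model for tone, is shown below on synthetic monthly data; the ARCH alternative mentioned in the abstract would typically be fitted with a separate package such as `arch` and is omitted here:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
n = 120  # months of toy data
stock_return = rng.normal(scale=0.05, size=n)  # e.g. monthly index returns
news_volume = rng.poisson(lam=np.exp(3 - 5 * np.roll(stock_return, 1)))  # toy counts
news_tone = pd.Series(rng.normal(size=n)).rolling(3).mean().bfill()

# Poisson regression of monthly news volume on the lagged stock return
df = pd.DataFrame({"volume": news_volume,
                   "lag_return": np.roll(stock_return, 1)}).iloc[1:]
poisson = sm.GLM(df["volume"], sm.add_constant(df["lag_return"]),
                 family=sm.families.Poisson()).fit()
print(poisson.params)

# ARIMA model for changes in news tone (one candidate specification)
arima = ARIMA(news_tone, order=(1, 1, 1)).fit()
print(arima.aic)
```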


2017 ◽  
Vol 01 (01) ◽  
pp. 1630006 ◽  
Author(s):  
Flora Amato ◽  
Vincenzo Moscato ◽  
Antonio Picariello ◽  
Giancarlo Sperlí ◽  
Antonio D’Acierno ◽  
...  

In this paper, we present a general framework for retrieving relevant information from newspapers that exploits a novel summarization algorithm based on a deep semantic analysis of texts. In particular, we extract from each Web document a set of triples (subject, predicate, object) that are then used to build a summary through an unsupervised clustering algorithm exploiting the notion of semantic similarity. Finally, we leverage the centroids of the clusters to determine the most significant summary sentences using some heuristics. Several experiments carried out using the standard DUC methodology and the ROUGE software show that the proposed method outperforms several summarization systems in terms of recall and readability.
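
A much-simplified stand-in for the clustering-and-centroid step is sketched below: it clusters TF-IDF sentence vectors with k-means and keeps the sentence closest to each centroid, whereas the paper clusters extracted triples using semantic similarity:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances_argmin_min

def summarize(sentences, n_clusters=3):
    """Cluster sentence vectors and keep the sentence nearest to each centroid."""
    vec = TfidfVectorizer(stop_words="english")
    X = vec.fit_transform(sentences)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
    nearest, _ = pairwise_distances_argmin_min(km.cluster_centers_, X)
    return [sentences[i] for i in sorted(set(nearest))]

doc = [
    "The central bank raised interest rates by half a point.",
    "Analysts expect inflation to slow over the next quarter.",
    "The rate decision surprised several market observers.",
    "Unemployment figures remained stable in the same period.",
    "Consumer prices rose faster than forecast last month.",
    "Officials signalled that further hikes remain possible.",
]
print(summarize(doc))
```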


2021 ◽  
Author(s):  
Yue Feng

Semantic analysis is the process of shifting the understanding of text from the level of phrases, clauses and sentences to the level of semantic meaning. Two of the most important semantic analysis tasks are 1) semantic relatedness measurement and 2) entity linking. The semantic relatedness measurement task aims to quantify the relationship between two words or concepts based on the similarity or closeness of their semantic meaning, whereas the entity linking task focuses on linking plain text to structured knowledge resources, e.g. Wikipedia, to provide semantic annotation of texts. A limitation of current semantic analysis approaches is that they are built upon traditional documents that are well structured in formal English, e.g. news articles. With the emergence of social networks, however, enormous volumes of information can be extracted from social network posts, which are short, often grammatically incorrect and can contain special characters or newly invented words, e.g. LOL, BRB. Therefore, traditional semantic analysis approaches may not perform well on social network posts. In this thesis, we build semantic analysis techniques specifically for Twitter content. We build a semantic relatedness model to calculate the semantic relatedness between any two words obtained from tweets and, using the proposed semantic relatedness model, we semantically annotate tweets by linking them to Wikipedia entries. Comparisons with state-of-the-art semantic relatedness and entity linking methods show promising results.
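
To make the two tasks concrete, the sketch below computes semantic relatedness as cosine similarity over toy word vectors and performs a naive dictionary-based entity linking of tweet n-grams to Wikipedia titles; both the embeddings and the title dictionary are invented placeholders, not the methods developed in the thesis:

```python
import numpy as np

# Toy word embeddings standing in for vectors trained on Twitter data
embeddings = {
    "lol":   np.array([0.9, 0.1, 0.0]),
    "funny": np.array([0.8, 0.2, 0.1]),
    "brb":   np.array([0.1, 0.9, 0.2]),
}

def relatedness(w1, w2):
    """Cosine similarity as a simple semantic relatedness score."""
    a, b = embeddings[w1], embeddings[w2]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Naive entity linking: longest-match lookup of tweet n-grams against article titles
wiki_titles = {"new york": "New_York_City", "nasa": "NASA"}

def link_entities(tweet):
    tokens = tweet.lower().split()
    links = {}
    for size in (2, 1):  # prefer longer mentions over single tokens
        for i in range(len(tokens) - size + 1):
            mention = " ".join(tokens[i:i + size])
            if mention in wiki_titles and mention not in links:
                links[mention] = wiki_titles[mention]
    return links

print(relatedness("lol", "funny"))
print(link_entities("Watching the NASA launch from New York lol"))
```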

