Research on Hybrid Index Method of Double-Layer B+ Tree for Power Big Data Considering Knowledge Graph

2021, Vol. 1771 (1), pp. 012004
Author(s): Ling Chao Gao, Li Ming Yao, ZhiWei Yang, Fei Zheng

Author(s): Mengxi Zhao, Dan Li, Yongshen Long

2020, Vol. 12 (16), pp. 6294
Author(s): Chenyu Zheng

Global cities act as influential hubs in the networked world. Their city brands, projected by the global news media, are becoming sustainable resources in various forms of global competition and cooperation. This study adopts the research paradigm of computational social science to assess and compare the city brand attention, positivity, and influence, along with their dimensional structures, of ten Globalization and World Cities Research Network (GaWC) Alpha+ global cities. It combines the cognitive and affective theoretical perspectives within the framework of the Anholt global city brand dimension system, draws on the big data of the global news knowledge graph in Google’s Global Database of Events, Language, and Tone (GDELT), and applies word-embedding semantic mining and clustering analysis. The empirical results show that the overall values and dimensional structures of the city brand influence of global cities form distinct levels and clusters, respectively. Although global cities share a common structural characteristic, with the dimensions of presence and potential being most prominent in city brand influence, Western and Eastern global cities differ in the clustering of the dimensional structures of city brand attention, positivity, and influence. City brand attention matters more than city brand positivity in improving the city brand influence of global cities. The preferences of the global news media over global city brands fit the nature of global cities.
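The abstract does not give implementation details for its clustering of brand dimensional structures. As a hedged illustration of the kind of similarity-based grouping it describes, the sketch below clusters cities by the cosine similarity of brand-dimension vectors; the city names, scores, and the greedy single-linkage rule are invented for illustration, not taken from the study.

```python
from math import sqrt

# Hypothetical brand-dimension scores (e.g., presence, potential, pulse) per
# city; the real study derives such vectors from GDELT news embeddings.
cities = {
    "London":   [0.90, 0.80, 0.70],
    "New York": [0.95, 0.85, 0.75],
    "Beijing":  [0.70, 0.90, 0.40],
    "Shanghai": [0.65, 0.88, 0.45],
}

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def cluster(vectors, threshold=0.995):
    """Greedy single-linkage grouping: a city joins the first existing
    cluster containing any member it is sufficiently similar to."""
    clusters = []
    for name, vec in vectors.items():
        for group in clusters:
            if any(cosine(vec, vectors[m]) >= threshold for m in group):
                group.append(name)
                break
        else:
            clusters.append([name])
    return clusters

print(cluster(cities))
```

With these toy vectors the Western and Eastern cities fall into separate clusters, mirroring (in miniature) the East/West differentiation the study reports.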


2017, Vol. 72, pp. 264-272
Author(s): Mei Wang, Meng Xiao, Sancheng Peng, Guohua Liu
Keyword(s): Big Data

2021, Vol. 18 (6), pp. 8661-8682
Author(s): Vishnu Vandana Kolisetty, Dharmendra Singh Rajput
<abstract> <p>Big data has attracted a lot of attention in many domain sectors. The volume of digital data generated today in every domain is enormous, and the need to acquire such information for analysis and decision-making is growing in every field. It is therefore important to integrate related information based on similarity. However, existing integration techniques usually suffer from processing and time complexity, and they are constrained in interconnecting multiple data sources. Because the data come from many heterogeneous, distributed sources, it is difficult to determine the relationships between the data and to derive common structures that allow integrated data to be accessed or retrieved effectively for different analysis needs. In this paper, we propose a big data integration method based on the computation of attribute conditional dependency (ACD) and a similarity index (SI), termed ACD-SI. The ACD-SI mechanism uses an improved Bayesian mechanism to analyze the distribution of attributes in a document as dependencies on possible attributes. It also applies attribute conversion and selection for mapping and grouping data for integration, and uses latent semantic analysis (LSA) to analyze the content of data attributes and extract relevant, accurate data. We performed a series of experiments measuring the overall purity and normalized mutual information (NMI) of the integrated data, using a large dataset of bibliographic records from various publications. The obtained purity and NMI ratios confirm the relevance of the clustered data, and the precision, recall, and accuracy measures show the improvement of the proposal over existing approaches.</p> </abstract>
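The abstract evaluates clustering quality with purity and NMI but does not define them. As a sketch of these standard metrics (the toy labels below are invented, not the paper's bibliographic data), both can be computed in plain Python:

```python
from collections import Counter
from math import log

def purity(clusters, classes):
    """Fraction of items assigned to the majority class of their cluster."""
    total = len(classes)
    correct = 0
    for c in set(clusters):
        members = [classes[i] for i in range(total) if clusters[i] == c]
        correct += Counter(members).most_common(1)[0][1]
    return correct / total

def nmi(clusters, classes):
    """Normalized mutual information between two labelings of the same items."""
    n = len(classes)
    def entropy(labels):
        return -sum((cnt / n) * log(cnt / n) for cnt in Counter(labels).values())
    mi = 0.0
    for c, cnt_c in Counter(clusters).items():
        for k, cnt_k in Counter(classes).items():
            joint = sum(1 for i in range(n)
                        if clusters[i] == c and classes[i] == k)
            if joint:
                mi += (joint / n) * log(n * joint / (cnt_c * cnt_k))
    denom = (entropy(clusters) + entropy(classes)) / 2
    return mi / denom if denom else 1.0

# Toy example: predicted clusters vs. true classes for six items.
pred = [0, 0, 0, 1, 1, 1]
true = ["a", "a", "b", "b", "b", "b"]
print(purity(pred, true))  # 5 of 6 items match their cluster's majority class
```

A purity or NMI of 1.0 indicates clusters that coincide exactly with the true classes; values near 0 indicate labelings that are essentially independent.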


Author(s): Pamela Hussey, Subhashis Das, Sharon Farrell, Lorraine Ledger, Anne Spencer
Keyword(s): Big Data

Author(s): Luigi Bellomarini, Georg Gottlob, Andreas Pieris, Emanuel Sallinger

Many modern companies wish to maintain knowledge in the form of a corporate knowledge graph and to use and manage this knowledge via a knowledge graph management system (KGMS). We formulate various requirements for a fully fledged KGMS. In particular, such a system must be capable of performing complex reasoning tasks but, at the same time, achieve efficient and scalable reasoning over Big Data with an acceptable computational complexity. Moreover, a KGMS needs interfaces to corporate databases, the web, and machine-learning and analytics packages. We present KRR formalisms and a system achieving these goals.
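The abstract does not spell out its KRR formalisms, but KGMS reasoning of this kind is typically Datalog-based. As an illustrative (not authoritative) sketch, the naive bottom-up evaluation below derives the transitive closure of a corporate ownership relation, the style of recursive rule such a system evaluates over a knowledge graph; the facts and company names are invented.

```python
# Naive forward-chaining evaluation of the Datalog-style rule
#   controls(X, Z) :- controls(X, Y), controls(Y, Z).
# over a toy corporate ownership graph.
owns = {("A", "B"), ("B", "C"), ("C", "D")}

def transitive_closure(facts):
    """Repeatedly apply the rule until no new facts are derived (fixpoint)."""
    derived = set(facts)
    while True:
        new = {(x, z)
               for (x, y1) in derived
               for (y2, z) in derived
               if y1 == y2} - derived
        if not new:
            return derived
        derived |= new

print(sorted(transitive_closure(owns)))
```

A production KGMS evaluates such rules with far better complexity bounds (e.g., restricted rule fragments with PTIME reasoning); this naive fixpoint loop only shows the semantics, not the scalable algorithm.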

