Multi-Source Geo-Information Fusion in Transition: A Summer 2019 Snapshot

2019 ◽  
Vol 8 (8) ◽  
pp. 330 ◽  
Author(s):  
Robert Jeansoulin

Since the launch of Landsat-1 in 1972, the scientific domain of geo-information has been incrementally shaped through successive periods of technological evolution: in devices (satellites, UAVs, IoT), in sensors (optical, radar, LiDAR), in software (GIS, WebGIS, 3D), and in communication (Big Data). Land cover and disaster management remain the main big issues where these technologies are in high demand. Data fusion methods and tools have been adapted progressively to new data sources, which keep growing in volume, variety, and speed of access. This Special Issue gives a snapshot of the current status of that adaptation and looks at the challenges ahead.

Author(s):  
Marco Angrisani ◽  
Anya Samek ◽  
Arie Kapteyn

The number of data sources available for academic research on retirement economics and policy has increased rapidly in the past two decades. Data quality and comparability across studies have also improved considerably, with survey questionnaires progressively converging towards common ways of eliciting the same measurable concepts. Probability-based Internet panels have become a more accepted and recognized tool to obtain research data, allowing for fast, flexible, and cost-effective data collection compared to more traditional modes such as in-person and phone interviews. In an era of big data, academic research has also increasingly been able to access administrative records (e.g., Kostøl and Mogstad, 2014; Cesarini et al., 2016), private-sector financial records (e.g., Gelman et al., 2014), and administrative data linked with surveys (Ameriks et al., 2020), to answer questions that could not be successfully tackled otherwise.


Big Data ◽  
2020 ◽  
Vol 8 (5) ◽  
pp. 450-451
Author(s):  
S. Balamurugan ◽  
Bala Anand Muthu ◽  
Sheng-Lung Peng ◽  
Mohd Helmy Abd Wahab

2017 ◽  
Vol 97 ◽  
pp. 12-22 ◽  
Author(s):  
Flávio E.A. Horita ◽  
João Porto de Albuquerque ◽  
Victor Marchezini ◽  
Eduardo M. Mendiondo

2021 ◽  
Vol 20 ◽  
pp. 352-361
Author(s):  
Xiang Lin

In the big data environment, visualization techniques have been increasingly adopted to mine library and information (L&I) data, as data sources diversify and data volumes grow. However, research on information association in L&I visualization networks suffers from several defects: network layout algorithms are not optimized, and L&I information fusion and comparison across multiple disciplines are absent. To overcome these defects, this paper explores the visualization of L&I from the perspective of big data analysis and fusion. First, the topology of the L&I visualization network was analyzed, and the metrics for constructing the L&I visualization topology map were calculated. Next, the importance of meta-paths in the L&I visualization network was calculated. Finally, a complex big data L&I visualization network was established, and the associations between information nodes were analyzed in detail. Experimental results verify the effectiveness of the proposed algorithm.
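The abstract does not specify which topology metrics are used. As a minimal, purely illustrative sketch, the snippet below computes a few metrics commonly used when building such visualization maps (degree, betweenness, and clustering) on a hypothetical keyword co-occurrence network; the edge list, weights, and metric choices are assumptions, not the paper's actual data or algorithm.

```python
# Illustrative sketch: topology metrics for a hypothetical keyword
# co-occurrence network, in the spirit of an L&I visualization map.
import networkx as nx

# Hypothetical co-occurrence edges: (keyword_a, keyword_b, co-occurrence count)
edges = [
    ("big data", "visualization", 12),
    ("big data", "information fusion", 8),
    ("visualization", "network layout", 5),
    ("information fusion", "meta-path", 3),
]

G = nx.Graph()
G.add_weighted_edges_from(edges)

# Metrics often used to size and position nodes in a visualization map
degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G, weight="weight")
clustering = nx.clustering(G, weight="weight")

for node in G.nodes:
    print(f"{node:20s} degree={degree[node]:.2f} "
          f"betweenness={betweenness[node]:.2f} clustering={clustering[node]:.2f}")
```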


RMD Open ◽  
2019 ◽  
Vol 5 (2) ◽  
pp. e001004 ◽  
Author(s):  
Joanna Kedra ◽  
Timothy Radstake ◽  
Aridaman Pandit ◽  
Xenofon Baraliakos ◽  
Francis Berenbaum ◽  
...  

Objective: To assess the current use of big data and artificial intelligence (AI) in the field of rheumatic and musculoskeletal diseases (RMDs). Methods: A systematic literature review was performed in PubMed MEDLINE in November 2018, with key words referring to big data, AI and RMDs. All original reports published in English were analysed. A mirror literature review was also performed outside of RMDs on the same number of articles. The number of data points analysed, the data sources and the statistical methods used (traditional statistics, AI or both) were collected. The analysis compared findings within and beyond the field of RMDs. Results: Of 567 articles relating to RMDs, 55 met the inclusion criteria and were analysed, as were 55 articles in other medical fields. The mean number of data points was 746 million (range 2000–5 billion) in RMDs, and 9.1 billion (range 100 000–200 billion) outside of RMDs. Data sources were varied: in RMDs, 26 (47%) were clinical, 8 (15%) biological and 16 (29%) radiological. Both traditional and AI methods were used to analyse big data (respectively, 10 (18%) and 45 (82%) in RMDs, and 8 (15%) and 47 (85%) outside of RMDs). Machine learning represented 97% of AI methods in RMDs, and among these methods the most represented was the artificial neural network (20/44 articles in RMDs). Conclusions: Big data sources and types are varied within the field of RMDs, and the methods used to analyse big data are heterogeneous. These findings will inform a European League Against Rheumatism taskforce on big data in RMDs.


Big Data ◽  
2020 ◽  
Vol 8 (6) ◽  
pp. 544-545
Author(s):  
S. Balamurugan ◽  
Bala Anand Muthu ◽  
Sheng-Lung Peng ◽  
Mohd Helmy Abd Wahab

2019 ◽  
Vol 8 (9) ◽  
pp. 387 ◽  
Author(s):  
Silvino Pedro Cumbane ◽  
Gyozo Gidófalvi

Natural hazards result in devastating losses to human life, environmental assets, and personal, regional and national economies. The availability of different big data sources such as satellite imagery, Global Positioning System (GPS) traces, mobile Call Detail Records (CDRs) and social media posts, in conjunction with advances in data analytic techniques (e.g., data mining, big data processing, machine learning and deep learning), can facilitate the extraction of geospatial information that is critical for rapid and effective disaster response. However, developing disaster response systems usually requires integrating data from different sources (streaming data sources and data sources at rest) with different characteristics and types, which consequently have different processing needs. Deciding which processing framework to use for a specific big data source and task is usually a challenge for researchers in the disaster management field. Therefore, this paper contributes in four aspects. Firstly, potential big data sources are described and characterized. Secondly, big data processing frameworks are characterized and grouped based on the sources of data they handle. Then, a short description of each big data processing framework is provided, and the frameworks in each group are compared with respect to the main aspects related to specific processing needs: computing cluster architecture, data flow, data processing model, fault tolerance, scalability, latency, back-pressure mechanism, programming languages, and support for machine learning libraries. Finally, a link between big data and processing frameworks is established, based on the processing they provide for essential tasks in the response phase of disaster management.
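To make the kind of streaming workload concrete, here is a minimal, purely illustrative sketch using Spark Structured Streaming, one framework of the sort such comparisons typically cover, to count disaster-related keywords in a hypothetical stream of social media posts. The socket source, port, and keyword list are assumptions for demonstration only, not part of the paper.

```python
# Illustrative sketch: counting disaster-related keywords in a stream of
# text posts with Spark Structured Streaming. The socket source and keyword
# list are hypothetical; a real deployment would read from Kafka or a similar sink.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("DisasterKeywordCounts").getOrCreate()

# Hypothetical stream: one post per line, served on a local socket for demo purposes
lines = (spark.readStream
         .format("socket")
         .option("host", "localhost")
         .option("port", 9999)
         .load())

# Split posts into lowercase words and keep only disaster-related keywords
keywords = ["flood", "earthquake", "fire", "evacuation"]
words = lines.select(F.explode(F.split(F.lower(F.col("value")), r"\s+")).alias("word"))
counts = (words.filter(F.col("word").isin(keywords))
          .groupBy("word")
          .count())

# Print running counts to the console; a dashboard or database sink would
# replace this in an operational disaster-response pipeline.
query = (counts.writeStream
         .outputMode("complete")
         .format("console")
         .start())
query.awaitTermination()
```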


2012 ◽  
Vol 16 (3) ◽  
Author(s):  
Laurie P Dringus

This essay presents a prospective stance on how learning analytics, as a core evaluative approach, must help instructors uncover important trends and evidence of quality learner data in the online course. A critique is presented of strategic and tactical issues in learning analytics, approached through the lens of questioning the current status of applying learning analytics to online courses. The goal of the discussion is twofold: (1) to inform online learning practitioners (e.g., instructors and administrators) of the potential of learning analytics in online courses, and (2) to broaden discussion in the research community about the advancement of learning analytics in online learning. In recognizing the full potential of formalizing big data in online courses, the community must also address this issue in the context of the potentially "harmful" application of learning analytics.

