Disaster Management during Pandemic: A Big Data-Centric Approach

Author(s):  
Mohamed Elsotouhy ◽  
Geetika Jain ◽  
Archana Shrivastava

The concept of big data (BD) has been coupled with disaster management to improve crisis response during pandemics and epidemics. BD has transformed how unorganized collections of data files are handled and converted into more structured information. The constant inflow of unstructured data reveals a research lacuna, especially during a pandemic. This study is an effort to develop a BD-based pandemic disaster management approach. The potential of BD text analytics for effective pandemic disaster management, via visualization, explanation, and data analysis, is immense. To understand the use of BD in disaster management, we take a comprehensive approach in place of a fragmented view, using BD text analytics to examine the various relationships within disaster management theory. The study’s findings indicate that it is essential to understand pandemic disaster management as performed in the past and to improve future crisis response using BD, since communities worldwide face great chaos and have little help in reaching a potential solution.

2019 ◽  
Author(s):  
Vitor Tonon ◽  
Tiago Silva ◽  
Vínicius Silva ◽  
Gean Pereira ◽  
Solange Rezende

The recommendation task is a prominent and challenging area of study in Machine Learning. It aims to recommend items such as products, movies, and services to users according to what they have liked in the past. In general, most recommendation systems consider only structured information. For instance, when recommending movies to users they might use features such as genre, actors, and directors. However, unstructured data such as users' reviews may also be considered, since they can add important information to the recommendation process and improve the performance of recommendation systems. Thus, in this work, we evaluate the use of text mining methods to extract and represent relevant information from user reviews, which was used alongside rating data as input to a content-based recommendation algorithm. We considered three strategies for this purpose: Topics, Embeddings, and Relevant Embeddings. We hypothesized that with these strategies it is possible to create more meaningful and concise representations than the traditional bag-of-words model, and yet increase the performance of recommendation systems. Our experimental evaluation confirmed this hypothesis, showing that the considered representation strategies are indeed very promising for representing user reviews in the recommendation process.


Purpose This paper aims to review the latest management developments across the globe and pinpoint practical implications from cutting-edge research and case studies. Design/methodology/approach This briefing is prepared by an independent writer who adds their own impartial comments and places the articles in context. Findings More than ever before, current organizations see knowledge as the key to success. The emphasis on effective knowledge management (KM) has increased accordingly. However, the ubiquitous nature of data available to firms means that conventional KM tools are largely incapable of coping with such an information overload. Big data text analytics offers considerably greater scope in this respect. Its tools and technologies can enable businesses to extract important information from masses of structured and unstructured data and convert the information into explicit knowledge that can be absorbed and exploited to help secure a competitive advantage. Practical implications The paper provides strategic insights and practical thinking that have influenced some of the world’s leading organizations. Originality/value The briefing saves busy executives and researchers hours of reading time by selecting only the very best, most pertinent information and presenting it in a condensed and easy-to-digest format.


Author(s):  
Marco Angrisani ◽  
Anya Samek ◽  
Arie Kapteyn

The number of data sources available for academic research on retirement economics and policy has increased rapidly in the past two decades. Data quality and comparability across studies have also improved considerably, with survey questionnaires progressively converging towards common ways of eliciting the same measurable concepts. Probability-based Internet panels have become a more accepted and recognized tool to obtain research data, allowing for fast, flexible, and cost-effective data collection compared to more traditional modes such as in-person and phone interviews. In an era of big data, academic research has also increasingly been able to access administrative records (e.g., Kostøl and Mogstad, 2014; Cesarini et al., 2016), private-sector financial records (e.g., Gelman et al., 2014), and administrative data married with surveys (Ameriks et al., 2020), to answer questions that could not be successfully tackled otherwise.


IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 54595-54614 ◽  
Author(s):  
Syed Attique Shah ◽  
Dursun Zafer Seker ◽  
Sufian Hameed ◽  
Dirk Draheim

2015 ◽  
Vol 2015 ◽  
pp. 1-16 ◽  
Author(s):  
Ashwin Belle ◽  
Raghuram Thiagarajan ◽  
S. M. Reza Soroushmehr ◽  
Fatemeh Navidi ◽  
Daniel A. Beard ◽  
...  

The rapidly expanding field of big data analytics has started to play a pivotal role in the evolution of healthcare practices and research. It has provided tools to accumulate, manage, analyze, and assimilate large volumes of disparate, structured, and unstructured data produced by current healthcare systems. Big data analytics has been recently applied towards aiding the process of care delivery and disease exploration. However, the adoption rate and research development in this space is still hindered by some fundamental problems inherent within the big data paradigm. In this paper, we discuss some of these major challenges with a focus on three upcoming and promising areas of medical research: image, signal, and genomics based analytics. Recent research which targets utilization of large volumes of medical data while combining multimodal data from disparate sources is discussed. Potential areas of research within this field which have the ability to provide meaningful impact on healthcare delivery are also examined.


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Sanghee Kim ◽  
Hongjoo Woo

Purpose According to the perspective of evolutionary economic theory, the marketplace continuously evolves over time, following the changing needs of both customers and firms. In accordance with the theory, the second-hand apparel market has been rapidly expanding since 2014 by meeting consumers’ diverse preferences and promoting sustainability. To understand what changes in consumers’ consumption behaviors regarding used apparel have driven this growth, the purpose of this study is to examine how the second-hand apparel market’s product types, distribution channels and consumers’ motives have changed over the past five years. Design/methodology/approach This study collected big data from Google through Textom software by extracting all Web-exposed text in 2014, and again in 2019, that contained the keyword “second-hand apparel,” and used the NodeXL program to visualize the network patterns of these words through semantic network analysis. Findings The results indicate that the second-hand apparel market has evolved with various changes over the past five years in terms of consumer motives, product types and distribution channels. Originality/value This study provides a comprehensive understanding of the changing demands of consumers toward used apparel over the past five years, providing insights for retailers as well as future research in this subject area.
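The semantic network analysis performed with Textom and NodeXL in the study can be approximated as follows: treat words as nodes and their within-document co-occurrences as weighted edges, then rank words by total edge weight. The documents below are hypothetical stand-ins for the Web-exposed text the study collected.

```python
from collections import Counter
from itertools import combinations

# Hypothetical snippets mentioning the study's keyword; the real corpus was
# Google text collected via Textom.
documents = [
    "second-hand apparel vintage sustainability online resale",
    "second-hand apparel online platform app resale",
    "vintage second-hand apparel sustainability thrift",
]

# Edge weights of the semantic network: one co-occurrence count per unordered
# word pair appearing in the same document.
edges = Counter()
for doc in documents:
    words = sorted(set(doc.split()))
    for pair in combinations(words, 2):
        edges[pair] += 1

# Node strength: total co-occurrence weight per word (a simple centrality).
strength = Counter()
for (a, b), weight in edges.items():
    strength[a] += weight
    strength[b] += weight

print(strength.most_common(3))
```

Tools such as NodeXL essentially visualize this edge list, with node size driven by a centrality measure like the strength computed here.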


2021 ◽  
Vol 75 (3) ◽  
pp. 76-82
Author(s):  
G.T. Balakayeva ◽  
D.K. Darkenbayev ◽  
M. Turdaliyev ◽  
...  

The volume of enterprise data has grown significantly in the last decade. Research has shown that over the past two decades the amount of data has increased roughly tenfold every two years, outpacing Moore’s law, under which processor power doubles over the same period. About thirty thousand gigabytes of data accumulate every second, and handling them requires more efficient data processing. Videos, photos, and messages uploaded by users on social networks lead to the accumulation of large amounts of data, much of it unstructured. Enterprises therefore have to work with big data in different formats, which must be prepared in a certain way before modeling and calculation results can be obtained. In this connection, the research presented in the article on processing and storing large enterprise data, on developing a model and algorithms, and on applying new technologies is relevant. Undoubtedly, enterprises’ information flows will grow every year, so it is important to solve the issues of storing and processing large amounts of data. The relevance of the article also stems from growing digitalization and the increasing shift of professional activities online in many areas of modern society. The article provides a detailed analysis and study of these new technologies.
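A common way to keep processing efficient as data volumes grow, in the spirit of the article, is to stream records one at a time instead of loading the whole dataset into memory. The sketch below assumes the data arrives as newline-delimited JSON of mixed structure; the record fields are illustrative, not taken from the article.

```python
import json
from io import StringIO

# Stand-in for a large enterprise data stream (a file or socket in practice);
# note the last record is missing a field, as unstructured inputs often are.
raw_stream = StringIO(
    '{"type": "photo", "size_mb": 4.2}\n'
    '{"type": "video", "size_mb": 210.0}\n'
    '{"type": "text"}\n'
)

def iter_records(stream):
    """Yield one parsed record at a time so memory use stays constant."""
    for line in stream:
        yield json.loads(line)

# Aggregate incrementally, tolerating missing fields.
total_mb = 0.0
count = 0
for record in iter_records(raw_stream):
    total_mb += record.get("size_mb", 0.0)
    count += 1

print(count, total_mb)  # → 3 214.2
```

The same generator pattern scales from a StringIO test fixture to terabyte-sized files, since only one record is ever held in memory at a time.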

