The data warehouse for primary geological and geophysical data: an aspect of creation

Author(s):  
Oleg Zurian ◽  
O. Likhosherstov

The geological industry of Ukraine is, on the whole, rather conservative. However, the development of world scientific thought and improvements in mineral extraction technologies require a rethinking of primary geological data (PGD). During the Soviet era, geological prospecting developed rapidly, creating and accumulating large volumes of PGD. Reinterpreting and rethinking this information using the latest techniques, approaches and technologies is an important task. Preserving this information is equally important, because a large share of PGD remains on paper. The only way to facilitate the circulation of PGD and ensure its proper storage is to create a centralized digital data warehouse using the latest information technologies for storing, processing and analyzing data. Such a warehouse should ensure rapid retrieval and analysis of PGD, facilitate the planning of geological prospecting and improve overall performance, including economic efficiency. The article discusses aspects of creating a data warehouse for primary geological and geophysical data. The infrastructure, architecture and stages of creating the data warehouse for primary geological data are highlighted. The authors examine the technological approaches and the stages of work involved in creating the data warehouse. Modern technologies, including those associated with Big Data, are considered as the ones toward which implementers should orient themselves. Primary geological data is partially structured or unstructured, and its volume is constantly growing at high speed. The introduction of modern Big Data technologies will make it possible to create flexible, powerful systems that ensure horizontal scaling in terms of both computing power and storage size, and that carry out the operational primary processing and analysis of the data that users need.
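Since the abstract stresses horizontal scaling over partially structured data, a minimal sketch of how such a warehouse might spread semi-structured PGD records across storage nodes via hash-based sharding may help; the record fields, identifiers, and shard count below are hypothetical illustrations, not the authors' actual design.

```python
import hashlib
import json

def shard_for(record_id: str, n_shards: int) -> int:
    """Assign a record to a shard by hashing its identifier, so that
    adding nodes lets the warehouse scale storage horizontally."""
    digest = hashlib.sha256(record_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % n_shards

# Hypothetical semi-structured borehole record: PGD is often only
# partially structured, so a schema-flexible document model fits.
record = {
    "id": "borehole-1957-0042",
    "region": "example region",
    "depth_m": 312.5,
    "lithology_log": ["clay", "sandstone", "coal seam"],
    "scanned_pages": 14,
}

# Four illustrative storage nodes; the record lands on exactly one.
shards = {i: [] for i in range(4)}
shards[shard_for(record["id"], 4)].append(json.dumps(record))
```

Because the shard is derived deterministically from the record identifier, any node can locate a record without a central index, which is the property that lets such systems grow by adding machines.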

2020 ◽  
Vol 1 (4) ◽  
pp. 45-49


2019 ◽  
Vol 8 (10) ◽  
pp. 449
Author(s):  
Xi Liu ◽  
Lina Hao ◽  
Wunian Yang

With the rapid development of big data, numerous industries have turned their focus from information research and construction to big data technologies. The earth science and geographic information systems industries are highly information-intensive, so there is an urgent need to study and integrate big data technologies to improve their level of informatization. However, there is a large gap between existing big data and traditional geographic information technologies, and owing to the characteristics of geospatial data it is difficult to apply big data technologies to geographic information systems quickly and easily. Through the research, development, and application practice of recent years, we have gradually developed a common geospatial big data solution. Based on a set of geospatial big data frameworks, we built a complete geospatial big data platform called BiGeo. The management and analysis of massive amounts of spatial data from Sichuan Province, China, show how the basic framework of this platform can be utilized to meet such needs. This paper summarizes the design, implementation, and experimental experience of BiGeo, which provides a new type of solution for the research and construction of geospatial big data.


2019 ◽  
Vol 3 (2) ◽  
pp. 152
Author(s):  
Xianglan Wu

In today's society, the rise and rapid development of the Internet mean that a huge amount of data is produced every day. Traditional data processing and data storage models therefore cannot fully analyze and mine these data. More and more new information technologies (such as cloud computing, virtualization and big data) have emerged and been applied; the network has turned from informatization to intelligence, and campus construction has entered the stage of smart campus construction. The construction of the smart campus draws on big data and cloud computing technology, improving the quality of information services in colleges and universities by integrating, storing and mining huge volumes of data.


Author(s):  
Farid Huseynov

The term “big data” refers to the very large and diverse sets of structured, semi-structured, and unstructured digital data from different sources that accumulate and grow rapidly on a continuous basis. Big data enables enhanced decision-making in various types of businesses. Through these technologies, businesses are able to cut operational costs, digitally transform their operations to be more efficient and effective, and make more informed decisions. Big data technologies enable businesses to better understand their markets by uncovering hidden patterns behind consumer behavior, and to introduce new products and services accordingly. This chapter shows the critical role that big data plays in business. It first explains big data and its underlying technologies, then discusses how big data digitally transforms critical business operations for enhanced decision-making and a superior customer experience, and finally considers the challenges big data poses for businesses and possible solutions to those challenges.


2017 ◽  
Vol 12 (01) ◽  
Author(s):  
Shweta Kaushik

The Internet plays an essential role in providing diverse learning resources to the world, which enables many applications to deliver quality service to their users. As the years go on, the web has become overloaded with information, and it has become difficult to extract the relevant information from it. This has given way to the rise of Big Data, and the volume of data keeps increasing rapidly day by day. Big Data has gained much attention from academia and the IT industry. In the digital and computing world, information is generated and collected at a rate that rapidly exceeds processing capacity. Data mining techniques are used to find the hidden information in this huge data, and to store, manage, and analyze high-velocity data that may arrive in structured or unstructured form. It is difficult to handle such large volumes of data using database techniques such as an RDBMS. On the one hand, Big Data is extremely valuable for delivering productivity in organizations and transformative breakthroughs in scientific disciplines, giving us many opportunities to make great advances in many fields; future competition in business productivity and technology will undoubtedly converge on Big Data analytics. On the other hand, Big Data also raises many challenges, such as difficulties in data capture, data storage, data analysis and data visualization. In this paper we concentrate on a review of Big Data, its data classification methods, and the ways it can be mined using various mining techniques.


2019 ◽  
Vol 16 (8) ◽  
pp. 3419-3427
Author(s):  
Shishir K. Shandilya ◽  
S. Sountharrajan ◽  
Smita Shandilya ◽  
E. Suganya

Big Data technologies have become well accepted in recent years in bio-medical and genome informatics. They are capable of processing gigantic, heterogeneous genome information with good precision and recall. With the quick advancements in computation and storage technologies, the cost of acquiring and processing genomic data has decreased significantly. Upcoming sequencing platforms will produce vast amounts of data, which will imperatively require high-performance systems for on-demand analysis with time-bound efficiency. Recent bio-informatics tools are capable of utilizing the novel features of Hadoop in a flexible way. In particular, big data technologies such as MapReduce and Hive are able to provide a high-speed computational environment for the analysis of petabyte-scale datasets. This has attracted the focus of bio-scientists toward using big data applications to automate the entire genome analysis. The proposed framework is designed over MapReduce and Java on an extended Hadoop platform to achieve the parallelism of Big Data analysis. It will assist the bioinformatics community by providing a comprehensive solution for descriptive, comparative, exploratory, inferential, predictive and causal analysis of genome data. The proposed framework is user-friendly, fully customizable, scalable and fit for comprehensive real-time genome analysis from data acquisition to predictive sequence analysis.
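The MapReduce model the framework builds on can be illustrated with a toy k-mer counting job written in plain Python; a real deployment would run the same map/reduce contract on Hadoop in Java, and the reads, k-mer length, and function names here are illustrative assumptions, not the authors' actual pipeline.

```python
from collections import Counter
from itertools import chain

K = 3  # k-mer length, chosen arbitrarily for this sketch

def map_phase(read: str):
    """Map step: emit (k-mer, 1) pairs for one sequencing read.
    Each read can be mapped independently, which is where the
    parallelism of MapReduce comes from."""
    return [(read[i:i + K], 1) for i in range(len(read) - K + 1)]

def reduce_phase(pairs):
    """Reduce step: sum the counts for each k-mer key across
    the output of all mappers."""
    counts = Counter()
    for kmer, n in pairs:
        counts[kmer] += n
    return dict(counts)

reads = ["GATTACA", "ATTAC"]  # toy input; real inputs are petabyte-scale
kmer_counts = reduce_phase(chain.from_iterable(map_phase(r) for r in reads))
# "ATT" appears once in each read, so kmer_counts["ATT"] == 2
```

On a cluster, the framework shuffles the mapper output so that all pairs sharing a key reach the same reducer, letting the same two functions scale to the dataset sizes the abstract describes.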


2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Himanshu Gupta ◽  
Sarangdhar Kumar ◽  
Simonov Kusi-Sarpong ◽  
Charbel Jose Chiappetta Jabbour ◽  
Martin Agyemang

PurposeThe aim of this study is to identify and prioritize a list of key digitization enablers that can improve supply chain management (SCM). SCM is an important driver of an organization's competitive advantage. Fierce competition in the market has forced companies to look past the conventional decision-making process, which is based on intuition and previous experience. The swift evolution of information technologies (ITs) and digitization tools has changed the scenario for many industries, including those involved in SCM.Design/methodology/approachThe Best Worst Method (BWM) has been applied to evaluate, rank and prioritize the key digitization and IT enablers beneficial for the improvement of SC performance. The study also used an additive value function to rank the organizations on their SC performance with respect to digitization enablers.FindingsA total of 25 key enablers have been identified and ranked. The results revealed that “big data/data science skills”, “tracking and localization of products” and “appropriate and feasibility study for aiding the selection and adoption of big data technologies and techniques” are the top three digitization and IT enablers on which organizations need to focus in order to improve their SC performance. The study also ranked the SC performance of the organizations based on digitization enablers.Practical implicationsThe findings of this study will help organizations focus on certain digitization technologies in order to improve their SC performance. This study also provides an original framework for organizations to rank the key digitization enablers according to the enablers relevant in their context, and to compare their performance with that of their counterparts.Originality/valueThis study seems to be the first of its kind in which 25 digitization enablers, categorized in four main categories, are ranked using a multi-criteria decision-making (MCDM) tool. This study is also the first of its kind to rank organizations on their SC performance based on the weights/ranks of digitization enablers.
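The additive value function used to rank organizations reduces to a weighted sum of enabler scores; the enabler names, weights, and organization scores below are invented for illustration and are not the study's actual BWM outputs.

```python
# Hypothetical enabler weights, of the kind BWM might output
# (weights sum to 1), and hypothetical normalized scores of two
# organizations on each enabler.
weights = {
    "data_science_skills": 0.40,
    "product_tracking": 0.35,
    "feasibility_study": 0.25,
}

org_scores = {
    "Org A": {"data_science_skills": 0.8, "product_tracking": 0.6, "feasibility_study": 0.7},
    "Org B": {"data_science_skills": 0.5, "product_tracking": 0.9, "feasibility_study": 0.6},
}

def additive_value(scores, weights):
    """Additive value function: weighted sum of an organization's
    normalized scores over all enablers."""
    return sum(weights[e] * scores[e] for e in weights)

# Rank organizations from best to worst overall value.
ranking = sorted(org_scores, key=lambda o: additive_value(org_scores[o], weights), reverse=True)
# Org A: 0.4*0.8 + 0.35*0.6 + 0.25*0.7 = 0.705
# Org B: 0.4*0.5 + 0.35*0.9 + 0.25*0.6 = 0.665
```

The design choice is that enabler importance (the weights) and organizational capability (the scores) are kept separate, so updating either one re-ranks the organizations without re-eliciting the other.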


2020 ◽  
Vol 9 (7) ◽  
pp. 210
Author(s):  
Changzhi Li ◽  
Zhongcheng Pan ◽  
Jing Weng ◽  
Pumin Li

With today’s rapid development of network technology, big data is applied more and more widely. In the information and digital era, technological development is constantly promoting the reform and progress of industry, and more and more big data technologies and concepts are being applied in enterprise management. Applying big data technology to human resource performance management has brought a new mode of working, which plays a prominent role in improving the efficiency of human resource performance management. However, some problems remain in practical application. Taking pesticide enterprises as an example, this paper studies the problems of, and countermeasures for, human resource management in such enterprises in the era of big data, so as to provide some ideas for guiding relevant enterprises to make good use of big data in human resource performance management.


Electronics ◽  
2021 ◽  
Vol 10 (18) ◽  
pp. 2221
Author(s):  
Nuno Silva ◽  
Júlio Barros ◽  
Maribel Y. Santos ◽  
Carlos Costa ◽  
Paulo Cortez ◽  
...  

The constant advancements in Information Technology have been the main driver of the Big Data concept’s success. With it, new concepts such as Industry 4.0 and Logistics 4.0 are arising. Due to the increase in data volume, velocity, and variety, organizations are now looking to their data analytics infrastructures and searching for approaches to improve their decision-making capabilities, in order to enhance their results using new approaches such as Big Data and Machine Learning. The implementation of a Big Data Warehouse can be the first step to improve the organizations’ data analysis infrastructure and start retrieving value from the usage of Big Data technologies. Moving to Big Data technologies can provide several opportunities for organizations, such as the capability of analyzing an enormous quantity of data from different data sources in an efficient way. However, at the same time, different challenges can arise, including data quality, data management, and lack of knowledge within the organization, among others. In this work, we propose an approach that can be adopted in the logistics department of any organization in order to promote the Logistics 4.0 movement, while highlighting the main challenges and opportunities associated with the development and implementation of a Big Data Warehouse in a real demonstration case at a multinational automotive organization.


In the current scenario, a huge amount of data is being generated at high speed from various heterogeneous sources, such as social networks, business applications, the government sector, marketing, healthcare systems, sensors, and machine log data. Big Data has been chosen as one of the upcoming areas of research by several industries. In this paper, the author presents a wide collection of literature that has been reviewed and analyzed. The paper emphasizes Big Data technologies, applications and challenges; a comparative study of architectures, methodologies and tools; and survey results proposed by various researchers.

