Survey: Big Data Analytics in Agricultural Products Logistics

Big Data Analytics is one of the most cutting-edge technologies in the world. It provides the data management needed to store, process, and analyze huge amounts of data. Agriculture is one of the domains in which big data analytics promises to make a real change. As people's living standards continue to improve, demand for perishable agricultural products grows steadily. However, periodic outbreaks of food quality and safety issues have raised concerns among end-users. Improving the distribution performance of agricultural product logistics while ensuring the freshness, quality, and safety of the products has therefore become a central challenge in the agricultural domain. Big Data Analytics in agricultural products logistics offers substantial prospects for optimizing product distribution paths, predicting market demand for products, tracing products, analyzing customer feedback, and increasing the overall performance of agricultural logistics. This paper investigates the key challenges, methods, technologies, and product distribution algorithms, as well as the future of Big Data Analytics in agro-logistics.
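As a rough illustration of the distribution-path optimization mentioned above (a sketch only, not taken from the survey), a greedy nearest-neighbour heuristic can order delivery stops; the depot and farm coordinates below are hypothetical.

import math

def nearest_neighbour_route(depot, stops):
    """Order stops by repeatedly visiting the closest unvisited point."""
    route, current, remaining = [], depot, list(stops)
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        route.append(nxt)
        remaining.remove(nxt)
        current = nxt
    return route

depot = (0.0, 0.0)                                          # hypothetical distribution centre
farms = [(2.0, 3.0), (5.0, 1.0), (1.0, 7.0), (6.0, 4.0)]    # hypothetical delivery points
print(nearest_neighbour_route(depot, farms))
# -> [(2.0, 3.0), (5.0, 1.0), (6.0, 4.0), (1.0, 7.0)]

Real agro-logistics systems would use richer road-network distances and vehicle constraints; this merely shows the kind of routing decision such analytics support.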

Author(s):  
Mohd Vasim Ahamad ◽  
Misbahul Haque ◽  
Mohd Imran

In the present digital era, more data are generated and collected than ever before. However, this huge amount of data is of no use until it is converted into useful information. Such data, coming from a number of sources in various formats and with greater complexity, is called big data. To convert big data into meaningful information, the authors use different analytical approaches. The information extracted by applying big data analytics methods can be used in business decision making, fraud detection, healthcare services, the education sector, machine learning, extreme personalization, etc. This chapter presents the basics of big data and big data analytics. Big data analysts face many challenges in storing, managing, and analyzing big data, and this chapter details the challenges in all of these dimensions. Furthermore, recent trends in big data analytics and future directions for big data researchers are also described.


Author(s):  
P. Venkateswara Rao ◽  
A. Ramamohan Reddy ◽  
V. Sucharita

In the field of aquaculture, digital advancements mean that a huge amount of data is constantly produced, and aquaculture data has thus entered the big data world. The need for data management and analytics models increases as this development progresses, and all of the data can no longer be stored on a single machine. A solution is therefore needed that can store and analyze huge amounts of data, which is precisely what big data technology offers. In this chapter, a framework based on Hive and Hadoop is developed that provides a solution for shrimp disease using historical data. Data regarding shrimp is acquired from different sources, such as aquaculture websites and various laboratory reports. After collection from these sources, noise is removed. The data is then normalized, uploaded to HDFS, and placed in a file format that Hive supports. Finally, the classified data is stored in a designated location. Based on the features extracted from the aquaculture data, HiveQL can be used to analyze shrimp disease symptoms.
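As an illustration of that last step (a sketch only, not the chapter's code), such a HiveQL analysis could be issued from PySpark against the normalized records on HDFS; the table name shrimp_records and its columns are assumptions introduced here for illustration.

from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("shrimp-disease-analysis")
         .enableHiveSupport()          # lets spark.sql() read Hive tables stored on HDFS
         .getOrCreate())

# Count how often each symptom co-occurs with a confirmed disease label,
# together with average water parameters for those cases.
symptom_stats = spark.sql("""
    SELECT symptom, disease, COUNT(*) AS cases,
           AVG(ph) AS avg_ph, AVG(temperature) AS avg_temp
    FROM shrimp_records          -- assumed table of normalized shrimp observations
    WHERE disease IS NOT NULL
    GROUP BY symptom, disease
    ORDER BY cases DESC
""")
symptom_stats.show()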


Information ◽  
2020 ◽  
Vol 11 (2) ◽  
pp. 60 ◽  
Author(s):  
Lorenzo Carnevale ◽  
Antonio Celesti ◽  
Maria Fazio ◽  
Massimo Villari

Nowadays, we are observing a growing interest in big data applications in different healthcare sectors, and cardiology is definitely one of them. In fact, the electrocardiogram produces a huge amount of data about heart health status that needs to be stored and analysed in order to detect possible issues. In this paper, we focus on the arrhythmia detection problem. Our objective is to address the distributed processing of the big data generated by electrocardiogram (ECG) signals in order to carry out pre-processing analysis. Specifically, an algorithm for the identification of heartbeats and arrhythmias is proposed. The algorithm is designed to carry out distributed processing over the Cloud, since big data could represent the bottleneck for cardiology applications. In particular, we implemented the Menard algorithm in Apache Spark to process big data coming from ECG signals and identify arrhythmias. Experiments conducted using a dataset provided by the Physionet.org European ST-T Database show an improvement in terms of response times. As highlighted by our outcomes, our solution provides a scalable and reliable system, which may address the challenges raised by big data in healthcare.
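The distributed pre-processing idea can be sketched as follows; this is only a minimal stand-in for the paper's Menard-based detector, using a crude fixed-threshold beat count, an assumed 250 Hz sampling rate, and toy placeholder data rather than the European ST-T recordings.

from pyspark.sql import SparkSession

FS = 250            # assumed sampling frequency (samples per second)
THRESHOLD = 0.6     # assumed fixed amplitude threshold (millivolts)

def beats_per_minute(segment):
    """Count upward threshold crossings (a crude R-peak proxy) in one ECG segment."""
    record_id, samples = segment
    beats = sum(1 for prev, cur in zip(samples, samples[1:])
                if prev < THRESHOLD <= cur)
    minutes = len(samples) / FS / 60.0
    return record_id, beats / minutes if minutes else 0.0

spark = SparkSession.builder.appName("ecg-preprocessing").getOrCreate()
# Toy placeholder: one 16-second segment with a peak roughly every 0.8 s (~75 bpm).
segments = [("e0103", ([0.1] * 180 + [0.9] + [0.1] * 19) * 20)]
rates = spark.sparkContext.parallelize(segments).map(beats_per_minute).collect()
for record_id, bpm in rates:
    print(record_id, "approximate rate:", round(bpm, 1), "bpm")

Spark parallelises the per-segment work across the cluster, which is the property the paper relies on to keep response times low as the number of ECG segments grows.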


Author(s):  
Jaimin Navinchandra Undavia ◽  
Atul Manubhai Patel

The technological advancement has also opened up various ways to collect data through automatic mechanisms. One such mechanism collects a huge amount of data without any further maintenance or human intervention. The health industry has been confronted by the need to manage the big data produced by various sources, which are well known for generating high volumes of heterogeneous data. A high level of sophistication has been incorporated into almost every industry, and healthcare is one of them. The article shows that a huge amount of data exists in the healthcare industry and that the data generated there is neither homogeneous nor of a simple type. The various sources and objectives of the data are then highlighted and discussed. As the data come from various sources, they are versatile in nature in all aspects. So, rightly and meaningfully, big data analytics has penetrated the healthcare industry, and its impact is also highlighted.


In the current scenario, a huge amount of data is being generated at high speed from various heterogeneous sources such as social networks, business applications, the government sector, marketing, healthcare systems, sensors, and machine log data. Big data has been chosen as an emerging area of research by several industries. In this paper, the author presents a wide collection of literature that has been reviewed and analyzed. The paper emphasizes big data technologies, applications, and challenges, and presents a comparative study of the architectures, methodologies, tools, and survey results proposed by various researchers.


2020 ◽  
Vol 18 (3) ◽  
pp. 181-189
Author(s):  
Xiong Zhou ◽  
Fang Zheng ◽  
Xujuan Zhou ◽  
Ka Ching Chan ◽  
Raj Gururajan ◽  
...  

As China’s agricultural output has improved, the national and local monitoring systems for agricultural product safety have become much better, and monitoring standards have become increasingly strict. Despite this, there are still agricultural product safety incidents that cause consumer panic. One way to address this is by properly establishing tracking systems so that agricultural product logistics in China can be tracked and monitored. We explored this research objective with agricultural traceability and security in mind. One option that could be considered is blockchain technology, which could be used to ascertain the provenance of agricultural products and so increase the quality and safety of the Chinese agricultural supply chain. In this context, this research focused on big data, technology, platforms, and other means of ensuring product quality and safety in agricultural product traceability. To verify the effect of these three factors, regression analysis was used to construct five models for testing three hypotheses. The results show that, based on “Internet+”, using big data, big technology, and big platforms can significantly increase the accuracy of the agricultural product traceability system and thereby improve consumer acceptance of the safety of agricultural products.
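As an illustrative sketch only (the survey file and column names are assumptions, not the paper's data), one such hypothesis-testing model could be fitted as an ordinary least squares regression of perceived traceability accuracy on the three factors:

import pandas as pd
import statsmodels.formula.api as smf

# Assumed survey file and column names: each row is one respondent's factor scores.
survey = pd.read_csv("traceability_survey.csv")
model = smf.ols("accuracy ~ big_data + big_technology + big_platform", data=survey).fit()
print(model.summary())   # coefficient signs and p-values decide each hypothesis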


2019 ◽  
Vol 8 (4) ◽  
pp. 3770-3776

Nowadays, advancement in the field of information technology has driven stupendous growth in various industries, especially in medical imaging technologies in the healthcare industry. However, these advancements have not only made the data bigger but also more difficult to process and handle. Although they may have produced a huge amount of unnecessary data, this is no longer a major problem, because technologies such as big data analytics, cloud computing, and several others have made it easy to store huge datasets and handle them. One boon that technological advancement has given the healthcare industry is the evolution of scanning machines, which can be used to diagnose different diseases and assemble the conclusions into medical reports for scans such as ECG (electrocardiogram), MRI (magnetic resonance imaging) brain scans, ultrasound, X-ray, CT, and more. However, although these scanning machines have their own advantages, one of their main disadvantages is that the efficiency of the results they produce is not yet well established when their performance is compared against their enormous costs. Therefore, this paper investigates the key challenges and various methodologies in the healthcare industry, with a prime focus on comparing scanning machines such as ECG, MRI, and ultrasound using big data analytics. The manufacturers of the scanning devices used by hospitals and diagnostic centres have set their prices so high that hospitals must spend large sums to buy and install the machines. From a management perspective, it therefore becomes difficult to assess the performance-related cost effectiveness of the machines, and technical issues can shatter patients' trust in a particular hospital. The prime aim is to focus on the precise implementation, performance efficiency, and cost effectiveness of all the medical scans. The idea can also be applied to improving the performance and cost effectiveness of machines and devices outside the medical industry.


2019 ◽  
Vol 13 (7) ◽  
pp. 1 ◽  
Author(s):  
Flasteen Abuqabita ◽  
Razan Al-Omoush ◽  
Jaber Alwidian

Recently, a huge amount of data has been generated all over the world; these data are extremely large, arrive extremely fast, and vary in type. In order to extract value from these data and make sense of them, many frameworks and tools need to be developed for analyzing them. Many tools and frameworks have already been created to capture, store, analyze, and visualize such data. In this study we categorized the existing frameworks used for processing big data into three groups, namely batch processing, stream analytics, and interactive analytics; we discussed each of them in detail and compared them.
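As a minimal sketch (not taken from the study), the batch and stream categories can be contrasted with Apache Spark, a framework that appears in most such comparisons; the HDFS paths and the event_type column are assumptions for illustration.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("framework-categories").getOrCreate()

# Batch processing: a bounded dataset is read, aggregated once, and the job ends.
batch_df = spark.read.json("hdfs:///data/events/2019/*.json")       # assumed path
batch_df.groupBy("event_type").count().show()

# Stream analytics: the same aggregation runs continuously over an unbounded source.
stream_df = spark.readStream.schema(batch_df.schema).json("hdfs:///data/events/incoming/")
query = (stream_df.groupBy("event_type")
         .count()
         .writeStream.outputMode("complete")
         .format("console")
         .start())
query.awaitTermination()

Interactive analytics, the third category, would instead expose such queries through a low-latency engine for ad hoc exploration rather than a pre-defined job.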


2019 ◽  
Vol 8 (4) ◽  
pp. 5950-5956

Deep learning and big data analytics are key focus areas in the current, rapidly growing environment. The use of large data has become crucial to different organizations as they collect huge amounts of domain-specific data, which contain critical information about cyber security, theft detection, national resources, business economics, marketing, and medical information. Assessing this huge amount of data requires advanced and improved analytical techniques for surveying and predicting future courses of action through advanced decision-making strategies. Deep learning algorithms use the collected training data to create a representation model. The model then makes predictions or decisions about new data without the machine being explicitly trained to perform the user's task. These techniques and algorithms infer higher-level, more complicated abstractions as data are represented through layered, tree-like structures. A major use of deep learning is processing, learning, and training from huge amounts of unsupervised data and analyzing patterns in it, so it can be used for large datasets in which the raw data are largely unlabeled and unclassified. In this paper, deep learning techniques for addressing data of different varieties and formats are analyzed, enabling fast and complete processing and integration of large amounts of heterogeneous information; data transformation is also addressed. Data quality is considered as well, since the performance of a model improves with the quality of its data. Deep learning techniques to assist big data are explored further by focusing on two key topics: (1) whether deep learning can help with specific problems such as data variety and data quality in big data analytics, and (2) whether these techniques can aid in processing big data.
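To make the unsupervised representation-learning point concrete, the following is a minimal sketch assuming Keras is available; it illustrates the general idea with a small autoencoder, not the paper's actual model, and the layer sizes and random input data are assumptions.

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_features = 64                      # assumed width of each raw, unlabeled record
x_unlabeled = np.random.rand(1000, n_features).astype("float32")   # stand-in data

encoder = keras.Sequential([
    keras.Input(shape=(n_features,)),
    layers.Dense(32, activation="relu"),
    layers.Dense(8, activation="relu", name="code"),
])
decoder = keras.Sequential([
    keras.Input(shape=(8,)),
    layers.Dense(32, activation="relu"),
    layers.Dense(n_features, activation="sigmoid"),
])
autoencoder = keras.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="mse")

# Train to reconstruct the inputs; the 8-dimensional "code" becomes a compact
# learned representation that downstream analytics can use instead of raw data.
autoencoder.fit(x_unlabeled, x_unlabeled, epochs=5, batch_size=64, verbose=0)
codes = encoder.predict(x_unlabeled)
print(codes.shape)                   # (1000, 8)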


2022 ◽  
pp. 1450-1457
Author(s):  
Jaimin Navinchandra Undavia ◽  
Atul Manubhai Patel

The technological advancement has also opened up various ways to collect data through automatic mechanisms. One installed mechanism collects a huge amount of data without any further maintenance or human intervention. The health industry has been confronted by the need to manage the big data produced by various sources, which are well known for generating high volumes of heterogeneous data. A high level of sophistication has been incorporated into almost every industry, and healthcare is one of them. The article explores the existence of a huge amount of data in the healthcare industry, where the data generated is neither homogeneous nor simple. The various sources and objectives of the data are then highlighted and discussed. As the data come from various sources, they are versatile in nature in all aspects. So, rightly and meaningfully, big data analytics has penetrated the healthcare industry, and its impact is highlighted.

