State of the art in image processing & big data analytics: issues and challenges

2018 ◽  
Vol 7 (3.3) ◽  
pp. 195 ◽  
Author(s):  
S Vahini Ezhilraman ◽  
Sujatha Srinivasan

Image processing has emerged as an innovative space in contemporary computing research and applications. Indeed, computer science today could almost be termed "image science", because image processing plays a vital role, in varied ways, in virtually every computing application, whether in science, the humanities, or management. It is now broadly used across industries, organizations, administrative divisions, social organizations, economic and business institutions, healthcare, defense, and beyond. Image processing techniques take images as input and produce modified images, video, collections of text, or image features as output. Most image processing techniques generate a huge amount of data, categorized as Big Data: bulky information stored and processed as either structured or unstructured data. In turn, Big Data analytics for mining knowledge from data created through image processing techniques has huge potential in sectors such as education, government organizations, healthcare institutions, manufacturing units, finance and banking, and retail business. This paper highlights recent innovations in the fields of image processing and Big Data analytics. The integration and interaction of these two broad fields hold great potential in various areas. Research challenges identified in their integration and interaction are discussed, and some possible research directions are suggested.


2021 ◽  
Author(s):  
R. Salter ◽  
Quyen Dong ◽  
Cody Coleman ◽  
Maria Seale ◽  
Alicia Ruvinsky ◽  
...  

The Engineer Research and Development Center, Information Technology Laboratory’s (ERDC-ITL’s) Big Data Analytics team specializes in the analysis of large-scale datasets with capabilities across four research areas that require vast amounts of data to inform and drive analysis: large-scale data governance, deep learning and machine learning, natural language processing, and automated data labeling. Unfortunately, data transfer between government organizations is a complex and time-consuming process requiring coordination of multiple parties across multiple offices and organizations. Past successes in large-scale data analytics have placed a significant demand on ERDC-ITL researchers, highlighting that few individuals fully understand how to successfully transfer data between government organizations; future project success therefore depends on a small group of individuals to efficiently execute a complicated process. The Big Data Analytics team set out to develop a standardized workflow for the transfer of large-scale datasets to ERDC-ITL, in part to educate peers and future collaborators on the process required to transfer datasets between government organizations. Researchers also aim to increase workflow efficiency while protecting data integrity. This report provides an overview of the created Data Lake Ecosystem Workflow by focusing on the six phases required to efficiently transfer large datasets to supercomputing resources located at ERDC-ITL.



Diabetic Retinopathy affects the retina of the eye and may eventually lead to total visual impairment. Total blindness can be avoided by detecting Diabetic Retinopathy at an early stage. Doctors use various manual tests to detect the presence of the disease, but these are tedious and expensive. Features of Diabetic Retinopathy include exudates, haemorrhages, and microaneurysms, and detection and removal of the optic disc play a vital role in extracting these features. This paper focuses on detecting the optic disc using various image processing techniques and algorithms, such as Canny edge detection and the circular Hough transform (CHT). Retinal images from the IDRiD, Diaret_db0, Diaret_db1, Chasedb, and Messidor datasets were used.
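The circular Hough transform named above can be sketched in a few lines of NumPy. This is a minimal illustration on a synthetic edge map (a stand-in for a Canny edge image of a roughly circular optic disc), not the paper's implementation; the function name and the fixed-radius simplification are assumptions for illustration.

```python
import numpy as np

def hough_circle_centers(edge_points, radius, shape, n_theta=180):
    """Accumulate votes for circle centers of a known radius, given edge pixels."""
    acc = np.zeros(shape, dtype=int)
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    for r, c in edge_points:
        # each edge pixel votes for every center lying `radius` away from it
        a = np.round(r - radius * np.cos(thetas)).astype(int)
        b = np.round(c - radius * np.sin(thetas)).astype(int)
        ok = (a >= 0) & (a < shape[0]) & (b >= 0) & (b < shape[1])
        np.add.at(acc, (a[ok], b[ok]), 1)
    return acc

# synthetic circular edge: a stand-in for a Canny edge map of the optic disc
true_center, radius = (25, 30), 10
angles = np.linspace(0.0, 2.0 * np.pi, 120, endpoint=False)
edges = [(true_center[0] + radius * np.cos(t),
          true_center[1] + radius * np.sin(t)) for t in angles]

acc = hough_circle_centers(edges, radius, (64, 64))
est = np.unravel_index(acc.argmax(), acc.shape)  # accumulator peak near (25, 30)
```

In practice the radius is also unknown, so the accumulator gains a third dimension over candidate radii; library routines such as OpenCV's `HoughCircles` handle this search internally.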



2022 ◽  
pp. 1578-1596
Author(s):  
Gunasekaran Manogaran ◽  
Chandu Thota ◽  
Daphne Lopez

Big Data has been playing a vital role in almost all environments, including healthcare, education, business organizations, and scientific research. Big Data analytics requires advanced tools and techniques to store, process, and analyze huge volumes of data, much of it unstructured and requiring advanced real-time analysis. Many researchers are therefore interested in developing advanced technologies and algorithms to solve the issues that arise when dealing with big data, and Big Data has gained much attention from private organizations, the public sector, and research institutes. This chapter provides an overview of the state-of-the-art algorithms for processing big data, as well as the characteristics, applications, opportunities, and challenges of big data systems. It also presents the challenges and issues in human-computer interaction with big data analytics.



2020 ◽  
pp. 833-854
Author(s):  
Md Muzakkir Hussain ◽  
M.M. Sufyan Beg ◽  
Mohammad Saad Alam ◽  
Shahedul Haque Laskar

Electric vehicles (EVs) are key players in transport-oriented smart cities (TOSC) powered by smart grids (SG), because they help those cities become greener by reducing vehicle emissions and carbon footprint. In this article, the authors analyze different use cases to show how big data analytics (BDA) can play a vital role in successful electric vehicle (EV) to smart grid (SG) integration. The article then presents an edge computing model and highlights the advantages of employing such distributed edge paradigms to satisfy the store, compute, and networking (SCN) requirements of smart EV applications in TOSCs. It also highlights the distinguishing features of the edge paradigm in supporting BDA activities in EV-to-SG integration in TOSCs. Finally, the authors provide a detailed overview of the opportunities, trends, and challenges of both computing techniques; in particular, the article discusses deployment challenges and state-of-the-art solutions in edge privacy and edge forensics.






Blockchain technology grew out of the development of bitcoin; it serves as a distributed ledger of cryptocurrency transactions maintained in a digitized, decentralized, trusted, and secure manner. Bitcoin is the mainstream application of blockchain technology: the bitcoin concept maintains a ledger of every single transaction, and transactions rely on a hashing mechanism to verify large amounts of data. Big data tasks require large amounts of computational space and generate terabytes of data, which successful data processing techniques must handle. Big data analytics is driven by ever-growing volumes of data, and the data generated depends on different sectors and organizations. This paper presents the definition, characteristics, transaction process, and applications of blockchain technology, along with a discussion of big data analytics. Blockchain technology covers the flaws of big data in a fruitful relationship, offering security, transparency, decentralization, and flexibility, so that organizations of all sizes can analyze data in different and more efficient ways.
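The hashing mechanism described above, in which each block commits to the previous block's hash so that any tampering is detectable, can be sketched as follows. This is a minimal illustration using SHA-256, not an implementation of bitcoin itself; the `Ledger` class and its methods are hypothetical names introduced for this sketch.

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 digest of a block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

class Ledger:
    """Append-only chain: each block stores the hash of its predecessor."""
    def __init__(self):
        self.chain = [{"index": 0, "prev": "0" * 64, "data": "genesis"}]

    def add(self, data):
        self.chain.append({"index": len(self.chain),
                           "prev": block_hash(self.chain[-1]),
                           "data": data})

    def verify(self):
        # recomputing every hash exposes tampering with any earlier block
        return all(self.chain[i]["prev"] == block_hash(self.chain[i - 1])
                   for i in range(1, len(self.chain)))

ledger = Ledger()
ledger.add({"from": "A", "to": "B", "amount": 5})
ledger.add({"from": "B", "to": "C", "amount": 2})
ok_before = ledger.verify()              # chain is intact
ledger.chain[1]["data"]["amount"] = 500  # tamper with a recorded transaction
ok_after = ledger.verify()               # tampering breaks the hash chain
```

Real blockchains add consensus, signatures, and proof-of-work on top of this chaining, but the tamper-evidence property rests on exactly this recomputation.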



2021 ◽  
Vol 10 (6) ◽  
pp. 3393-3402
Author(s):  
Ahmed Hussein Ali ◽  
Royida A. Ibrahem Alhayali ◽  
Mostafa Abdulghafoor Mohammed ◽  
Tole Sutikno

Advancements in information technology are contributing to the excessive rate of big data generation. Big data refers to datasets that are huge in volume and consume much time and space to process and transmit using the available resources; it also covers data in both unstructured and structured formats. Many agencies are currently funding research on big data analytics, owing to the failure of existing data processing techniques to handle the rate at which big data is generated. This paper presents an efficient classification and dimensionality reduction technique for big data based on the parallel generalized Hebbian algorithm (GHA), one of the commonly used principal component analysis (PCA) neural network (NN) learning algorithms. The proposed method was compared to existing methods to demonstrate its capability to reduce the dimensionality of big data, and is implemented on the Spark Radoop platform.
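The generalized Hebbian algorithm (Sanger's rule) underlying the proposed method can be sketched in a few lines of NumPy. This is a serial, toy-scale illustration of the update rule only — it does not reproduce the paper's parallel Spark Radoop implementation — run on synthetic 2-D data whose variance is dominated by the first axis.

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic data: variance along axis 0 (std 3.0) dominates axis 1 (std 0.3)
X = rng.normal(size=(5000, 2)) * np.array([3.0, 0.3])

k, lr = 2, 0.001                        # number of components, learning rate
W = rng.normal(scale=0.1, size=(k, 2))  # rows converge toward principal components

for _ in range(5):                      # a few passes over the data
    for x in X:
        y = W @ x
        # Sanger's rule: Hebbian term minus lower-triangular decorrelation,
        # which forces each row to learn a successive principal component
        W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)

# after training, the first row aligns with the dominant direction [±1, 0]
```

Dimensionality reduction then amounts to projecting each sample onto the learned rows (`X @ W.T`), which is the step a parallel implementation distributes across workers.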


