Quality Assurance Issues for Big Data Applications in Supply Chain Management

2022
pp. 1458-1483
Author(s):  
Kamalendu Pal

Heterogeneous data types, widely distributed data sources, huge data volumes, and large-scale business-alliance partners describe typical global supply chain operational environments. Mobile and wireless technologies add an extra layer of data sources to this technology-enriched supply chain operation. The environment must also give its end-users access to data anywhere, at any time. The new type of data set originating from the global retail supply chain is commonly known as big data because of its huge volume and the velocity with which it arrives in the global retail business environment. Such environments both empower and compel decision makers to act or react more quickly to all decision tasks. Academics and practitioners are researching and building the next generation of big-data-based application software systems. This new generation of software applications is based on complex data analysis algorithms that operate on data that does not adhere to standard relational data models. Traditional software testing methods are insufficient for big-data-based applications. Testing big-data-based applications is one of the biggest challenges faced by modern software design and development communities because of a lack of knowledge about what to test and how much data to test. Developers of big-data-based applications face a daunting task in defining the best strategies for structured and unstructured data validation, setting up an optimal test environment, and working with testing approaches for non-relational databases. This chapter focuses on big-data-based software testing and quality-assurance issues in the context of Hadoop, an open-source framework. It discusses several challenges with respect to massively parallel data generation from multiple sources, testing methods for validating pre-Hadoop processing, software application quality factors, and some of the software testing mechanisms for this new breed of applications.
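
To make the pre-Hadoop validation concern concrete, the sketch below checks structured records before they are loaded into the cluster. The record schema, field names, and rules are illustrative assumptions, not material from the chapter:

```python
import re
from datetime import datetime

# Hypothetical schema for a retail supply chain event record; the field
# names and rules are illustrative assumptions, not the chapter's own.
REQUIRED_FIELDS = {"order_id", "sku", "quantity", "event_time"}

def validate_record(record: dict) -> list[str]:
    """Return a list of validation errors for one pre-ingestion record."""
    errors = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
        return errors
    if not re.fullmatch(r"ORD-\d{8}", record["order_id"]):
        errors.append("order_id does not match expected pattern")
    try:
        if int(record["quantity"]) <= 0:
            errors.append("quantity must be positive")
    except (TypeError, ValueError):
        errors.append("quantity is not an integer")
    try:
        datetime.fromisoformat(record["event_time"])
    except ValueError:
        errors.append("event_time is not ISO-8601")
    return errors

# Example: reject malformed records before they reach the Hadoop cluster.
good = {"order_id": "ORD-12345678", "sku": "A1", "quantity": "3",
        "event_time": "2022-01-15T10:30:00"}
bad = {"order_id": "12345678", "sku": "A1", "quantity": "-2",
       "event_time": "yesterday"}
print(validate_record(good))  # []
print(validate_record(bad))   # three errors reported
```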


Author(s):  
Kamalendu Pal

Global retail business has become diverse, and the latest Information Technology (IT) advancements have created new possibilities for managing the deluge of data generated by the worldwide business operations of its supply chain. In this business, external data from social media and supplier networks provides a huge influx that augments existing data. This is combined with data from sensors and intelligent machines, commonly known as Internet of Things (IoT) data. This data, originating from the global retail supply chain, is simply known as Big Data because of its enormous volume, the velocity with which it arrives in the global retail business environment, its veracity with respect to quality-related issues, and the value it generates for the global supply chain. Many retail product manufacturing companies are trying to find ways to enhance the quality of their operational performance while reducing business support costs. They do this primarily by improving defect tracking and better forecasting. These manufacturing and operational improvements, along with a favorable customer experience, remain crucial to thriving in global competition. In recent years, Big Data and its associated technologies have attracted huge research interest from academics, industry practitioners, and government agencies. Big Data-based software applications are widely used within retail supply chain management - in recommendation, prediction, and decision support systems. The spectacular growth of these software systems has enormous potential for improving the daily performance of retail product and service companies. However, data quality problems increasingly result in erroneous testing costs in retail Supply Chain Management (SCM). The heavy investment made in Big Data-based software applications puts increasing pressure on management to justify the quality assurance in these software systems. This chapter discusses data quality and the dimensions of data quality for Big Data applications. It also examines some of the challenges presented by managing the quality and governance of Big Data, and how those can be balanced with the need to deliver usable Big Data-based software systems. Finally, the chapter highlights the importance of data governance; it also includes some Big Data managerial practice issues and their justifications for achieving application software quality assurance.
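
The data quality dimensions the chapter discusses can be scored with simple column-level checks. A minimal sketch follows, assuming a toy shipment table; the columns and the three dimension definitions (completeness, validity, uniqueness) are illustrative, not the chapter's own metrics:

```python
import pandas as pd

# Toy supply chain shipment table; columns and dimension rules are
# illustrative assumptions for the sketch, not study data.
df = pd.DataFrame({
    "shipment_id": ["S1", "S2", "S2", "S4"],
    "quantity":    [10, None, 5, -3],
    "destination": ["NYC", "LDN", "LDN", None],
})

def completeness(series: pd.Series) -> float:
    """Share of non-missing values in a column."""
    return series.notna().mean()

def validity(series: pd.Series) -> float:
    """Share of values passing a domain rule (here: quantity > 0)."""
    return (series > 0).mean()

def uniqueness(series: pd.Series) -> float:
    """Share of rows whose key is not duplicated."""
    return (~series.duplicated(keep=False)).mean()

report = {
    "completeness(quantity)":  completeness(df["quantity"]),
    "validity(quantity)":      validity(df["quantity"]),
    "uniqueness(shipment_id)": uniqueness(df["shipment_id"]),
}
print(report)  # e.g. {'completeness(quantity)': 0.75, ...}
```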


Author(s):  
Adarsh Bhandari

Abstract: With the rapid escalation of data-driven solutions, companies are integrating huge amounts of data from multiple sources in order to gain fruitful results. Handling this tremendous volume of data requires cloud-based architecture to store and manage it. Cloud computing has emerged as a significant infrastructure that promises to reduce the need for organizations to maintain costly computing facilities and to scale up their products. Even today, heavy applications are deployed on the cloud, notably on AWS, eliminating the need for error-prone manual operations. This paper describes certain cloud computing tools and techniques for handling big data, the processes involved from extracting this data through to model deployment, and the distinctions among their uses. It also demonstrates how big data analytics and cloud computing will change the methods that will later drive the industry. Additionally, the paper presents a study of managing blockchain-generated big data on the cloud and making analytical decisions from it. Furthermore, the impact of blockchain on cloud computing and big data analytics is examined. Keywords: Cloud Computing, Big Data, Amazon Web Services (AWS), Google Cloud Platform (GCP), SaaS, PaaS, IaaS.
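
A typical first step in such a cloud-based pipeline is staging raw extracts in object storage. The sketch below uses the real boto3 S3 API, but the bucket name, object key, and local file are hypothetical, and AWS credentials are assumed to be configured in the environment:

```python
import boto3

# Minimal sketch of staging a raw data extract in S3 before analysis.
# Bucket, key, and filename below are hypothetical placeholders.
s3 = boto3.client("s3")

BUCKET = "example-bigdata-staging"   # hypothetical bucket
KEY = "raw/sales/2024-01-15.csv"     # hypothetical object key

# Upload a local extract so downstream jobs (Glue, EMR, etc.) can read it.
s3.upload_file(Filename="sales_extract.csv", Bucket=BUCKET, Key=KEY)

# Confirm the object landed where the pipeline expects it.
listing = s3.list_objects_v2(Bucket=BUCKET, Prefix="raw/sales/")
for obj in listing.get("Contents", []):
    print(obj["Key"], obj["Size"])
```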


2019
Vol 10 (4)
pp. 106
Author(s):  
Bader A. Alyoubi

Big Data is rapidly gaining popularity in the e-commerce sector across the globe. There is a general consensus among experts that Saudi organisations are late adopters of new technologies. It is generally believed that the lack of research on the latest technologies specific to Saudi Arabia, which is culturally, socially, and economically different from the West, is one of the key factors behind the delay in technology adoption in Saudi Arabia. Hence, to fill this gap to a certain extent and to create awareness of Big Data technology, the primary goal of this research was to identify the impact of Big Data on e-commerce organisations in Saudi Arabia. The Internet has changed the business environment of Saudi Arabia too, and e-commerce is set to achieve new heights due to the latest technological advancements. A qualitative research approach was used, with interviews conducted with highly experienced professionals to gather primary data. Using multiple sources of evidence, this research found that traditional databases are not capable of handling massive data, and that Big Data is a promising technology that can be adopted by e-commerce companies in Saudi Arabia. Big Data's predictive analytics can help e-commerce companies gain better insight into consumer behaviour and thus offer customised products and services. The key finding of this research is that Big Data has a significant impact on e-commerce organisations in Saudi Arabia across verticals such as customer retention, inventory management, product customisation, and fraud detection.
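
As an illustration of the kind of predictive analytics the study has in mind for customer retention, the sketch below fits a churn classifier on fabricated behavioural features; nothing in it is study data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for e-commerce behavioural data; features and
# labels are fabricated for illustration only.
rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.poisson(5, n),        # orders in last 90 days
    rng.exponential(30, n),   # days since last visit
    rng.uniform(0, 500, n),   # average basket value
])
# Customers who visit rarely and order little are more likely to churn.
churn_logit = 0.05 * X[:, 1] - 0.4 * X[:, 0] - 0.002 * X[:, 2]
y = (rng.random(n) < 1 / (1 + np.exp(-churn_logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```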


Logistics
2021
Vol 5 (2)
pp. 22
Author(s):  
Hisham Alidrisi

This paper presents a strategic roadmap for handling the issue of resource allocation among green supply chain management (GSCM) practices. This complex issue for supply chain stakeholders highlights the need to apply supply chain finance (SCF). The paper proposes the five Vs of big data (value, volume, velocity, variety, and veracity) as a platform for determining the role of GSCM practices in improving SCF implementation. The fuzzy analytic network process (ANP) was employed to prioritize the five Vs by their roles in SCF. The fuzzy technique for order preference by similarity to ideal solution (TOPSIS) was then applied to evaluate GSCM practices on the basis of the five Vs. In addition, interpretive structural modeling (ISM) was used to visualize the optimum implementation of the GSCM practices. The outcome is a hybrid self-assessment model that measures the environmental maturity of SCF through the coherent application of three multicriteria decision-making techniques. The development of the Basic Readiness Index (BRI), Relative Readiness Index (RRI), and Strategic Matrix Tool (SMT) creates the potential for further improvements through the integration of the RRI scores and ISM results. This hybrid model presents a practical tool for decision-makers.
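
The TOPSIS stage of such a model can be sketched compactly. The decision matrix, weights, and criterion directions below are made-up illustrative numbers, not values from the study:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives; benefit[j] is True if criterion j is 'larger is better'."""
    M = np.asarray(matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    # Vector-normalize each criterion column, then apply the weights.
    V = M / np.linalg.norm(M, axis=0) * w
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)  # closeness coefficient in [0, 1]

# Three GSCM practices scored against the five Vs (illustrative numbers).
scores = [[7, 5, 6, 4, 8],
          [6, 8, 5, 7, 5],
          [8, 6, 7, 5, 6]]
weights = [0.30, 0.20, 0.15, 0.15, 0.20]   # e.g., from a prior fuzzy ANP step
benefit = np.array([True] * 5)             # all five Vs treated as benefits
print(topsis(scores, weights, benefit))    # higher = closer to ideal
```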


Author(s):  
Ying Wang
Yiding Liu
Minna Xia

Big data is characterised by multiple sources and heterogeneity. In this study, a hybrid forest fire analysis is built on a big data platform based on Hadoop and Spark. The platform combines big data analysis and processing technology and draws on research results from different technical fields, such as forest fire monitoring. In this system, the HDFS component of Hadoop is used to store all kinds of data, the Spark module provides various big data analysis methods, and visualization tools such as ECharts, ArcGIS, and Unity3D are used to visualize the analysis results. Finally, an experiment on forest fire point detection is designed to corroborate the platform's feasibility and effectiveness, and to provide meaningful guidance for follow-up research and for establishing a big data platform for forest fire monitoring and visualized early warning. However, the experiment has two shortcomings: more data types should be selected, and compatibility would be better if the original data could be converted to XML format. It is expected that these problems will be solved in follow-up research.
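
The kind of Spark job the platform describes can be sketched as follows: read sensor readings from HDFS and flag candidate fire points. The HDFS paths, column names, and detection thresholds are illustrative assumptions, not the study's actual data:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Minimal sketch of a fire-point detection job; paths, columns, and
# thresholds are hypothetical placeholders.
spark = SparkSession.builder.appName("forest-fire-points").getOrCreate()

readings = (spark.read
            .option("header", True)
            .option("inferSchema", True)
            .csv("hdfs:///data/forest/sensor_readings.csv"))

# Flag readings whose temperature and smoke density both exceed thresholds.
fire_points = (readings
               .filter((F.col("temperature_c") > 60) &
                       (F.col("smoke_ppm") > 400))
               .select("sensor_id", "latitude", "longitude", "event_time"))

fire_points.show(10)
# Persist candidates back to HDFS for the visualization layer (e.g., ECharts).
fire_points.write.mode("overwrite").parquet("hdfs:///data/forest/fire_points")
```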

