Big Data Use and Challenges: Insights from Two Internet-Mediated Surveys

Computers ◽  
2019 ◽  
Vol 8 (4) ◽  
pp. 73 ◽  
Author(s):  
Rossi ◽  
Rubattino ◽  
Viscusi

Big data and analytics have received great attention from practitioners and academics, and today represent a key resource for the renewed interest in artificial intelligence, especially machine learning techniques. In this article we explore the use of big data and analytics by different types of organizations, from various countries and industries, including those with limited size and capabilities compared to corporations or new ventures. In particular, we are interested in organizations where the exploitation of big data and analytics may have social value in terms of, e.g., public and personal safety. Hence, this article discusses the results of two multi-industry and multi-country surveys carried out on a sample of public and private organizations. The results show a low rate of utilization of the data collected due to, among other issues, privacy and security concerns, as well as a lack of staff trained in data analysis. The two surveys also reveal the challenge of reaching an appropriate level of effectiveness in the use of big data and analytics, due to a shortage of the right tools and, again, of capabilities, often related to a low rate of digital transformation.

2021 ◽  
Vol 5 (1) ◽  
pp. 38
Author(s):  
Chiara Giola ◽  
Piero Danti ◽  
Sandro Magnani

In the age of AI, companies strive to extract benefits from data. In the first steps of data analysis, an arduous dilemma scientists have to cope with is the definition of the 'right' quantity of data needed for a certain task. In energy management in particular, one of the most thriving applications of AI is the optimization of energy plant generators' consumption. When designing a strategy to improve the generators' schedule, an essential piece of information is the future energy load requested by the plant. This topic, referred to in the literature as load forecasting, has lately gained great popularity; in this paper the authors highlight the problem of estimating the correct size of the dataset needed to train prediction algorithms, and propose a suitable methodology. The central tool of this methodology is the learning curve, which tracks algorithm performance as the training-set size varies. First, a brief review of the state of the art and a short analysis of eligible machine learning techniques are offered. The hypotheses and constraints of the work are then explained, presenting the dataset and the goal of the analysis. Finally, the methodology is elucidated and the results are discussed.
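The learning-curve idea can be sketched in a few lines. This is a minimal illustration, not the paper's own code: scikit-learn's `learning_curve`, a random-forest regressor, and a synthetic daily-cycle load signal are all assumptions standing in for the authors' dataset and algorithms.

```python
# Hypothetical sketch: a learning curve to gauge how much training data a
# load-forecasting model needs. Synthetic data stands in for real plant loads.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import learning_curve

rng = np.random.default_rng(0)
# Toy "load" signal: a daily (24-step) cycle plus noise
t = np.arange(2000)
load = 50 + 10 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 1, t.size)
# Features: the 24 previous load values; target: the next value
X = np.column_stack([load[i:-24 + i] for i in range(24)])
y = load[24:]

sizes, train_scores, val_scores = learning_curve(
    RandomForestRegressor(n_estimators=50, random_state=0),
    X, y,
    train_sizes=np.linspace(0.1, 1.0, 5),  # 10% ... 100% of the training data
    cv=3,
    scoring="r2",
)
for n, s in zip(sizes, val_scores.mean(axis=1)):
    print(f"train size {n:4d}: mean validation R^2 = {s:.3f}")
```

Plotting the validation score against the training size shows where the curve flattens; beyond that point, collecting more data yields diminishing returns, which is the decision the paper's methodology is meant to support.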


2021 ◽  
Vol 119 ◽  
pp. 44-53
Author(s):  
Danilo Bertoni ◽  
Giacomo Aletti ◽  
Daniele Cavicchioli ◽  
Alessandra Micheletti ◽  
Roberto Pretolani

2014 ◽  
Vol 28 (2) ◽  
pp. 3-28 ◽  
Author(s):  
Hal R. Varian

Computers are now involved in many economic transactions and can capture data associated with these transactions, which can then be manipulated and analyzed. Conventional statistical and econometric techniques such as regression often work well, but there are issues unique to big datasets that may require different tools. First, the sheer size of the data involved may require more powerful data manipulation tools. Second, we may have more potential predictors than appropriate for estimation, so we need to do some kind of variable selection. Third, large datasets may allow for more flexible relationships than simple linear models. Machine learning techniques such as decision trees, support vector machines, neural nets, deep learning, and so on may allow for more effective ways to model complex relationships. In this essay, I will describe a few of these tools for manipulating and analyzing big data. I believe that these methods have a lot to offer and should be more widely known and used by economists.
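The variable-selection problem mentioned above, having more potential predictors than ordinary regression can handle, is commonly addressed with penalized regression. The following is a generic sketch, not code from the essay; the synthetic data and scikit-learn's `LassoCV` are illustrative assumptions.

```python
# Illustrative sketch: lasso regression as one variable-selection tool when
# candidate predictors are plentiful but only a few truly matter.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(42)
n, p = 200, 50                      # 200 observations, 50 candidate predictors
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]         # only the first three predictors matter
y = X @ beta + rng.normal(0, 0.5, n)

model = LassoCV(cv=5).fit(X, y)     # penalty strength chosen by cross-validation
selected = np.flatnonzero(np.abs(model.coef_) > 1e-3)
print("selected predictors:", selected)
```

The L1 penalty shrinks irrelevant coefficients exactly to zero, so the surviving indices act as the selected variable set; tree-based methods offer a nonlinear alternative for the flexible relationships the essay also discusses.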


Author(s):  
Bruce Mellado ◽  
Jianhong Wu ◽  
Jude Dzevela Kong ◽  
Nicola Luigi Bragazzi ◽  
Ali Asgary ◽  
...  

COVID-19 is imposing massive health, social and economic costs. While many developed countries have started vaccinating, most African nations are waiting for vaccine stocks to be allocated and are using clinical public health (CPH) strategies to control the pandemic. The emergence of variants of concern (VOC), unequal access to the vaccine supply and locally specific logistical and vaccine delivery parameters, add complexity to national CPH strategies and amplify the urgent need for effective CPH policies. Big data and artificial intelligence machine learning techniques and collaborations can be instrumental in an accurate, timely, locally nuanced analysis of multiple data sources to inform CPH decision-making, vaccination strategies and their staged roll-out. The Africa-Canada Artificial Intelligence and Data Innovation Consortium (ACADIC) has been established to develop and employ machine learning techniques to design CPH strategies in Africa, which requires ongoing collaboration, testing and development to maximize the equity and effectiveness of COVID-19-related CPH interventions.


2020 ◽  
Vol 7 (1) ◽  
Author(s):  
Tahani Daghistani ◽  
Huda AlGhamdi ◽  
Riyad Alshammari ◽  
Raed H. AlHazme

Outpatients who fail to attend their appointments have a negative impact on healthcare outcomes. Healthcare organizations thus face new opportunities, one of which is to improve the quality of healthcare. The main challenge is predictive analysis using techniques capable of handling the huge volume of data generated. We propose a big data framework for identifying outpatient no-shows via feature engineering and machine learning (MLlib) on the Spark platform. This study evaluates the performance of five machine learning techniques, using data from 2,011,813 outpatient visits. Across several experiments and different validation methods, Gradient Boosting (GB) performed best, raising accuracy and ROC to 79% and 81%, respectively. In addition, we showed that exploring and evaluating the performance of machine learning models with various evaluation methods is critical, as the accuracy of prediction can differ significantly. The aim of this paper is to explore factors that affect the no-show rate and can be used to formulate predictions using big data machine learning techniques.
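The gradient-boosting pipeline described above can be sketched as follows. This is a hedged illustration only: it uses scikit-learn's `GradientBoostingClassifier` in place of Spark MLlib, and the features (lead time, prior no-shows, age) and their effect on no-show odds are invented stand-ins for the real appointment records.

```python
# Hypothetical sketch of a no-show classifier; synthetic data, not the study's.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 5000
lead_days = rng.integers(0, 60, n)        # days between booking and visit
prior_noshows = rng.poisson(0.5, n)       # past missed appointments
age = rng.integers(18, 90, n)
# Toy rule: long lead times and a no-show history raise the no-show odds
logit = 0.04 * lead_days + 0.8 * prior_noshows - 0.01 * age - 1.5
y = rng.random(n) < 1 / (1 + np.exp(-logit))
X = np.column_stack([lead_days, prior_noshows, age])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
gb = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, gb.predict(X_te)))
print("AUC:", roc_auc_score(y_te, gb.predict_proba(X_te)[:, 1]))
```

Reporting both accuracy and the area under the ROC curve, as the paper does, guards against misleading results when the no-show class is imbalanced.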


Author(s):  
Suriya Murugan ◽  
Sumithra M. G.

Cognitive radio has emerged as a promising candidate solution to improve spectrum utilization in next-generation wireless networks. Spectrum sensing is one of the main challenges encountered by cognitive radio, and the application of big data is a powerful way to address it. Given increasingly scarce spectrum resources, big-data-based prediction in cognitive radio is an inevitable trend. Signal data from various sources are analyzed within a big data cognitive radio framework, and efficient data analytics can be performed using different types of machine learning techniques. This chapter analyses the process of spectrum sensing in cognitive radio, the challenges of processing spectrum data, and the need for dynamic machine learning algorithms in the decision-making process.
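As a concrete baseline for the spectrum-sensing decision discussed above, energy detection is the simplest classical test: declare the channel occupied when the received energy exceeds a threshold. The sketch below is an assumed illustration, not code from the chapter; the signal model, threshold, and parameters are all invented for demonstration.

```python
# Illustrative sketch: energy detection as a binary spectrum-sensing decision.
import numpy as np

rng = np.random.default_rng(1)

def sense(samples, threshold):
    """Declare the channel occupied if the average energy exceeds the threshold."""
    return np.mean(np.abs(samples) ** 2) > threshold

n = 1024
noise_power = 1.0
# Idle channel: noise only. Occupied: a sinusoidal primary-user signal + noise.
idle = rng.normal(0, np.sqrt(noise_power), n)
signal = np.sqrt(2.0) * np.sin(2 * np.pi * 0.1 * np.arange(n))
occupied = signal + rng.normal(0, np.sqrt(noise_power), n)

threshold = 1.5 * noise_power   # set above the expected noise-only energy
print("idle channel occupied?", sense(idle, threshold))       # expect False
print("busy channel occupied?", sense(occupied, threshold))   # expect True
```

Energy detection degrades at low signal-to-noise ratios, which is precisely where the machine-learning-based sensing the chapter advocates is expected to help.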

