Data Mining and Statistics in Data Science

2019 ◽  
Vol 5 (30) ◽  
pp. 960-968
Author(s):  
Güner Gözde KILIÇ
Author(s):  
M. A. Burhanuddin ◽  
Ronizam Ismail ◽  
Nurul Izzaimah ◽  
Ali Abdul-Jabbar Mohammed ◽  
Norzaimah Zainol

Recently, mobile service providers have been growing rapidly in Malaysia. In this paper, we propose an analytical method to find the best telecommunication provider by visualizing performance across the telecommunication service providers in Malaysia, i.e. TM Berhad, Celcom, Maxis, U-Mobile, etc. This paper uses data mining techniques to evaluate the performance of telecommunication service providers based on their customers' feedback from Twitter Inc. It demonstrates how the system can process and then interpret big data into a simple graph or visualization format. In addition, it builds a computerized tool and recommends a data analytic model based on the collected results. From preparing the data for pre-processing to conducting the analysis, this project focuses on the data science process itself, using the Cross Industry Standard Process for Data Mining (CRISP-DM) methodology as a reference. The analysis was developed using the R language and R Studio packages. The results show that Telco 4 is the best, as it received the highest positive scores from the tweet data. In contrast, Telco 3 should improve its performance, having received less positive feedback from its customers via tweet data. This project brings insights into how the telecommunication industry can analyze tweet data from its customers. The Malaysian telecommunication industry will benefit by improving customer satisfaction and business growth. Besides, it will give telecommunication users awareness of updated reviews from other users.
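The study's analysis was carried out in R with R Studio packages; as a language-neutral illustration of the core idea, below is a minimal Python sketch of lexicon-based sentiment scoring of tweets aggregated per provider. The word lists, provider labels, and tweets are hypothetical and stand in for the paper's actual lexicon and Twitter data.

```python
# Minimal sketch (hypothetical data): lexicon-based sentiment scoring of tweets
# per telecommunication provider, then aggregation for comparison.

from collections import defaultdict

# Hypothetical word lists; the study's actual lexicon is not specified here.
POSITIVE = {"good", "fast", "great", "reliable", "cheap"}
NEGATIVE = {"slow", "bad", "expensive", "drop", "poor"}

def score(text: str) -> int:
    """Return (#positive - #negative) words in a tweet."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

# Hypothetical tweets labelled with the provider they mention.
tweets = [
    ("Telco 4", "great coverage and fast internet"),
    ("Telco 3", "slow network, drop calls, poor service"),
    ("Telco 4", "reliable and cheap plan"),
]

totals = defaultdict(int)
for provider, text in tweets:
    totals[provider] += score(text)

# Rank providers by aggregate sentiment score (highest = most positive feedback).
for provider, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(provider, total)
```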


Author(s):  
Gurdeep S Hura

This chapter presents the new emerging technology of social media and networking, with a detailed discussion of basic definitions and applications, how this technology has evolved in recent years, and the need for dynamicity in a data mining environment. It also provides a comprehensive design and analysis of popular social networking media and sites available to users. A brief discussion of the data mining methodologies for implementing the variety of new applications dealing with huge/big data in data science is presented. Further, this chapter presents a new emerging perspective on data mining methodologies and their dynamicity for social networking media and sites as a new trend, together with a framework for the collection, analysis, and interpretation of huge amounts of data for a number of real-world applications. A discussion of the current and future status of data mining for social media and networking applications is also provided.


Author(s):  
Sabitha Rajagopal

Data Science employs techniques and theories to create data products. A data product is merely a data application that acquires its value from the data itself and creates more data as a result; it is not just an application with data. Data science involves the methodical study of digital data, employing techniques of observation, development, analysis, testing, and validation. It tackles real-time challenges by adopting a holistic approach. It 'creates' knowledge about large and dynamic data bases, 'develops' methods to manage data, and 'optimizes' processes to improve performance. The goal includes vital investigation and innovation in conjunction with functional exploration intended to inform decision-making for individuals, businesses, and governments. This paper discusses the emergence of Data Science and its subsequent developments in the fields of Data Mining and Data Warehousing. The research focuses on the need, challenges, impact, ethics, and progress of Data Science. Finally, insights into the subsequent phases of research and development in Data Science are provided.


Author(s):  
Fernando Martinez-Plumed ◽  
Lidia Contreras-Ochando ◽  
Cesar Ferri ◽  
Jose Hernandez Orallo ◽  
Meelis Kull ◽  
...  

2016 ◽  
Vol 21 (3) ◽  
pp. 525-547 ◽  
Author(s):  
Scott Tonidandel ◽  
Eden B. King ◽  
Jose M. Cortina

Advances in data science, such as data mining, data visualization, and machine learning, are extremely well suited to address numerous questions in the organizational sciences given the explosion of available data. Despite these opportunities, few scholars in our field have discussed the specific ways in which the lens of our science should be brought to bear on the topic of big data and big data's reciprocal impact on our science. The purpose of this paper is to provide an overview of the big data phenomenon and its potential for impacting organizational science in both positive and negative ways. We identify the biggest opportunities afforded by big data along with the biggest obstacles, and we discuss specifically how we think our methods will be most impacted by the data analytics movement. We also provide a list of resources to help interested readers incorporate big data methods into their existing research. Our hope is that we stimulate interest in big data, motivate future research using big data sources, and encourage the application of associated data science techniques more broadly in the organizational sciences.


2021 ◽  
Author(s):  
Chhaya Kulkarni ◽  
Nuzhat Maisha ◽  
Leasha J Schaub ◽  
Jacob Glaser ◽  
Erin Lavik ◽  
...  

This paper focuses on the discovery of a computational design map of disparate heterogeneous outcomes from bioinformatics experiments in pig (porcine) studies, to help identify key variables impacting the experiment outcomes. Specifically, we aim to connect discoveries from disparate laboratory experimentation in the area of trauma, blood loss, and blood clotting using data science methods in a collaborative ensemble setting. Grave trauma-related injuries cause exsanguination and death, constituting up to 50% of deaths, especially in the armed forces. Restricting blood loss in such scenarios usually requires the presence of first responders, which is not feasible in certain cases. Moreover, a traumatic event may lead to a cytokine storm, reflected in the cytokine variables. Hemostatic nanoparticles have been developed to tackle these kinds of trauma and blood loss situations. This paper highlights a collaborative effort of using data science methods to evaluate the outcomes from a lab study and further understand the efficacy of the nanoparticles. Hemostatic nanoparticles were administered intravenously to pigs subjected to hemorrhagic shock and blood loss, and immune response variables, including cytokine response variables, were measured. Thus, through the various hemostatic nanoparticles used in the intervention, multiple data outcomes are produced, and it becomes critical to understand which nanoparticles are critical and which variables are key to study further variations in the lab. We propose a collaborative data mining framework which combines the results from multiple data mining methods to discover impactful features. We used frequent patterns observed in the data from these experiments. We further validated the connections between these frequent rules by comparing the results with decision trees and feature ranking. Both the frequent patterns and the decision trees help us identify the critical variables that stand out in the lab studies and need further validation and follow-up in future studies. The outcomes from the data mining methods help produce a computational design map of the experimental results. Our preliminary results from such a computational design map provided insights in determining which features can help in designing the most effective hemostatic nanoparticles.
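As an illustration of the kind of collaborative framework described, the following is a minimal Python sketch, on hypothetical discretised variables, of combining two of the named methods: frequent pattern mining (here via mlxtend's apriori, an assumed tool choice) and decision-tree feature ranking (via scikit-learn). The column names and values are invented and do not reflect the study's actual data.

```python
# Minimal sketch (hypothetical data and column names): two data mining views of
# the same experiment table - frequent itemsets over discretised outcomes, and
# decision-tree feature ranking for an outcome of interest.

import pandas as pd
from mlxtend.frequent_patterns import apriori
from sklearn.tree import DecisionTreeClassifier

# Hypothetical discretised experiment outcomes (True = "high" level).
data = pd.DataFrame({
    "nanoparticle_A":  [True, False, True, True, False, True],
    "cytokine_IL6_hi": [False, True, False, False, True, False],
    "blood_loss_hi":   [False, True, False, True, True, False],
    "survived":        [True, False, True, True, False, True],
})

# View 1: frequent co-occurrence patterns among the binary variables.
patterns = apriori(data, min_support=0.5, use_colnames=True)
print(patterns)

# View 2: decision-tree feature ranking for the outcome of interest.
X, y = data.drop(columns="survived"), data["survived"]
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
ranking = sorted(zip(X.columns, tree.feature_importances_), key=lambda kv: -kv[1])
print(ranking)

# Variables that appear both in frequent patterns and near the top of the
# tree ranking are candidates for follow-up lab studies.
```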


2019 ◽  
Vol 1 ◽  
pp. 1-2
Author(s):  
Jan Wilkening

Abstract. Data is regarded as the oil of the 21st century, and the concept of data science has received increasing attention in recent years. These trends are mainly caused by the rise of big data: data that is big in terms of volume, variety, and velocity. Consequently, data scientists are required to make sense of these large datasets. Companies have problems acquiring talented people to solve data science problems. This is not surprising, as employers often expect skillsets that can hardly be found in one person: not only does a data scientist need a solid background in machine learning, statistics, and various programming languages, but often also in IT systems architecture, databases, and complex mathematics. Above all, she should have strong non-technical domain expertise in her field (see Figure 1).

As it is widely accepted that 80% of data has a spatial component, developments in data science could provide exciting new opportunities for GIS and cartography: cartographers are experts in spatial data visualization, and often also very skilled in statistics, data pre-processing, and analysis in general. The cartographers' skill levels often depend on the degree to which cartography programs at universities focus on the "front end" (visualization) of spatial data and leave the "back end" (modelling, gathering, processing, analysis) to GIScientists. In many university curricula, these front-end and back-end distinctions between cartographers and GIScientists are not clearly defined, and the boundaries are somewhat blurred.

In order to become good data scientists, cartographers and GIScientists need to acquire certain additional skills that are often beyond their university curricula. These skills include programming, machine learning, and data mining. These are important technologies for extracting knowledge from big spatial datasets, and thereby the logical advancement of "traditional" geoprocessing, which focuses on "traditional" (small, structured, static) datasets such as shapefiles or feature classes.

To bridge the gap between spatial sciences (such as GIS and cartography) and data science, we need an integrated framework of "spatial data science" (Figure 2).

Spatial sciences focus on causality and theory-based approaches to explain why things are happening in space. In contrast, the scope of data science is to find similar patterns in big datasets with techniques of machine learning and data mining, often without considering spatial concepts (such as topology, spatial indexing, spatial autocorrelation, the modifiable areal unit problem, map projections and coordinate systems, uncertainty in measurement, etc.).

Spatial data science could become the core competency of GIScientists and cartographers who are willing to integrate methods from the data science knowledge stack. Moreover, data scientists could enhance their work by integrating important spatial concepts and tools from GIS and cartography into data science workflows. A non-exhaustive knowledge stack for spatial data scientists, including typical tasks and tools, is given in Table 1.

There are many interesting ongoing projects at the interface of spatial and data science. Examples from the ArcGIS platform include:
- Integration of Python GIS APIs with Machine Learning libraries, such as scikit-learn or TensorFlow, in Jupyter Notebooks (see the sketch below)
- Combination of R (advanced statistics and visualization) and GIS (basic geoprocessing, mapping) in ModelBuilder and other automation frameworks
- Enterprise GIS solutions for distributed geoprocessing operations on big, real-time vector and raster datasets
- Dashboards for visualizing real-time sensor data and integrating it with other data sources
- Applications for interactive data exploration
- GIS tools for Machine Learning tasks such as prediction, clustering, and classification of spatial data
- GIS integration for Hadoop

While the discussion about proprietary (ArcGIS) vs. open-source (QGIS) software is beyond the scope of this article, it has to be stated that a) many ArcGIS projects are actually open-source and b) using a complete GIS platform instead of several open-source pieces has several advantages, particularly in efficiency, maintenance, and support (see Wilkening et al. (2019) for a more detailed consideration). At any rate, cartography and GIS tools are the essential technology blocks for solving the (80% spatial) data science problems of the future.
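As a minimal sketch of the first listed workflow type (a Python GIS API combined with a machine learning library), the example below clusters hypothetical point features with scikit-learn and writes the labels back for mapping. The file names and attribute column are assumptions, not taken from the article.

```python
# Minimal sketch (hypothetical file and columns): combining a Python GIS stack
# with a machine learning library to cluster point features by location and an
# attribute value, then exporting the result for cartographic styling.

import geopandas as gpd
from sklearn.cluster import KMeans

# Hypothetical point dataset, e.g. sensor locations with a measured value.
points = gpd.read_file("sensors.geojson")

# Build a feature matrix from coordinates plus one attribute column.
features = list(zip(points.geometry.x, points.geometry.y, points["value"]))

# Cluster with k-means; the cluster label becomes a new attribute for mapping.
points["cluster"] = KMeans(n_clusters=5, random_state=0).fit_predict(features)

# Write the result back out for styling in a GIS.
points.to_file("sensors_clustered.geojson", driver="GeoJSON")
```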


2019 ◽  
Vol 8 (3) ◽  
pp. 7140-7145

Poverty has been a main concern for centuries in every part of the world. With the abrupt increase of the country's population and the inevitable rise of the inflation rate due to economic challenges and other factors, it is clearly manifested that poverty is a problem that needs to be addressed seriously. With the various advanced technologies available nowadays, this problem of poverty may be reduced with the aid of Data Mining, which is a part of Data Science. This paper focused on predicting poverty alleviation using Data Mining techniques based on all available data from the Philippine Statistics Authority, the National Economic and Development Authority, and the Department of Social Welfare and Development. Supervised learning in Data Mining, specifically the Naive Bayes algorithm, the Decision Tree J48 algorithm, and the K-Nearest Neighbour algorithm, was utilized for the prediction of poverty alleviation in the province of Eastern Samar. The results of this study reveal that, among the core indicators for identifying poverty, the "Economic Sector" with the attribute "Income" is the most significant factor affecting poverty alleviation in the province.
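As a hedged illustration of the comparison the paper describes, the following Python sketch trains the three named learner families with scikit-learn (GaussianNB standing in for Naive Bayes, DecisionTreeClassifier for J48, KNeighborsClassifier for K-Nearest Neighbour) on a hypothetical indicator table. The file name, columns, and target are assumptions; the study's actual attributes come from PSA, NEDA, and DSWD data.

```python
# Minimal sketch (hypothetical indicator columns): comparing Naive Bayes, a
# decision tree, and k-nearest neighbours on poverty indicators, using
# cross-validated accuracy as the comparison metric.

import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical, numeric-coded household-level dataset; the real data would
# come from PSA/NEDA/DSWD sources.
df = pd.read_csv("household_indicators.csv")
X = df[["income", "education_years", "household_size", "employment_status"]]
y = df["poverty_alleviated"]  # binary target

models = {
    "Naive Bayes": GaussianNB(),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```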

