EXERCISE OF MACHINE LEARNING USING SOME PYTHON TOOLS AND TECHNIQUES

2018 ◽  
Vol 6 ◽  
pp. 1062-1070
Author(s):  
Ertan Mustafa Geldiev ◽  
Nayden Valkov Nenkov ◽  
Mariana Mateeva Petrova

One of the goals of predictive analytics training using Python tools is to create a "model" from classified examples that classifies new examples from a dataset. The purpose of trying different strategies and experiments is to create a more accurate prediction model. The goals we set in this study are to work through the successive steps of finding an accurate model for a dataset and preserving it for subsequent use with the Python instruments. Once we have found the right model, we save it and load it later to classify whether, in our case, we have "phishing". Given the path that leads to the discovery of the sought model, we can ask how much of the process can be automated, and whether a computer program can be written to step through the unified stages automatically and find the right model. Because the steps for finding an accurate model are often uniform and repetitive across different types of data, we offer a hypothetical algorithm from which a complex model-searching computer program could be written, for example when we have a classification task. This algorithm is directional and does not claim to be all-encompassing. The research explores some features of the scientific Python packages NumPy, pandas, Matplotlib, SciPy, and scikit-learn to create a more accurate model. The dataset used for the research was downloaded free of charge from the UCI Machine Learning Repository (UCI Machine Learning Repository, 2017).
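
The workflow the abstract describes — train candidate models, keep the most accurate one, save it, and reload it later to classify new examples — can be sketched with scikit-learn and joblib. This is a minimal sketch under stated assumptions: the placeholder data, the candidate models, and the file name `phishing_model.joblib` are illustrative, not details from the paper.

```python
# Minimal sketch of the train/select/save/load workflow, assuming a
# phishing dataset already loaded into a feature matrix X and labels y.
import joblib
import numpy as np
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

# Placeholder data; in the paper the examples come from the UCI repository.
X = np.random.rand(200, 10)
y = np.random.randint(0, 2, 200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Compare candidate models by cross-validated accuracy and keep the best.
candidates = [LogisticRegression(max_iter=1000), RandomForestClassifier()]
best = max(candidates,
           key=lambda m: cross_val_score(m, X_train, y_train, cv=5).mean())
best.fit(X_train, y_train)

# Preserve the model for later use, then reload it to classify new examples.
joblib.dump(best, "phishing_model.joblib")   # hypothetical file name
loaded = joblib.load("phishing_model.joblib")
print(loaded.predict(X_test[:5]))            # assumed: 1 = phishing, 0 = legitimate
```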

Author(s):  
Ganapathy Subramaniam Balasubramanian et al.

Understanding workplace incidents is one of the necessary measures in a workplace safety strategy. Analyzing trends in incident data helps to spot potential pain points and to reduce losses. Optimizing machine learning algorithms is a comparatively new trend, used to fit the prediction model and algorithms into the right place to support human safety factors. This research aims to build a prediction model that spots workplace incidents in the chemical and gas industries. This paper describes the design and approach of building and implementing a model to predict the cause of an incident, which may be used as a key index for achieving industrial safety specific to the chemical and gas industries. The implementation of the scoring algorithm, together with the prediction model, should yield unbiased information from which to draw logical conclusions. The prediction model has been trained on incident data comprising 25,700 chemical industry incidents with accident descriptions from the last decade. Inspection data and incident logs should be fed in on top of the trained dataset to verify and validate the implementation. The results of the implementation provide insight into patterns and classifications, and also contribute to an increased understanding of quantitative and qualitative analytics. Innovative cloud-based technology opens the gate to ingesting the continuously in-streaming data, processing it, and producing the required output in real time. The primary technology stack used in this design is Apache Kafka, Apache Spark, KSQL, DataFrames, and AWS Lambda functions. Lambda functions are used to implement the scoring algorithm and the prediction algorithm and to write the results back to AWS S3 buckets. A proof-of-concept implementation of the prediction model helps industries to see through their incidents, and lays out the base platform for the various protective implementations that continuously benefit the workplace's reputation and growth, with less attrition in human resources.
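
The paper names its stack but not its code. The sketch below shows how an AWS Lambda handler might score an incoming incident record and write the result back to S3, assuming a pre-trained model packaged with the function; the bucket name, model file, and event fields are hypothetical, not taken from the paper.

```python
# Hypothetical Lambda handler: score an incident record and persist the
# result to S3. Bucket, key names, and model file are illustrative only.
import json
import joblib
import boto3

s3 = boto3.client("s3")
model = joblib.load("incident_model.joblib")  # assumed to ship with the function

def handler(event, context):
    # The event is assumed to carry numeric features extracted upstream
    # (e.g., by the Kafka/Spark pipeline) from the accident description.
    features = [event["features"]]
    predicted_cause = int(model.predict(features)[0])

    result = {"incident_id": event.get("incident_id"),
              "predicted_cause": predicted_cause}
    s3.put_object(
        Bucket="incident-predictions",                 # hypothetical bucket
        Key=f"results/{result['incident_id']}.json",
        Body=json.dumps(result),
    )
    return result
```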


2020 ◽  
pp. 75-92
Author(s):  
Chris Bleakley

Chapter 5 delves into the origins of artificial intelligence (AI). By the end of the 1940s, a few visionaries realised that computers were more than mere automatic calculators. They believed that computers running the right algorithms could perform tasks previously thought to require human intelligence. Christopher Strachey completed the first artificially intelligent computer program in 1952. The program played the board game Checkers. Arthur Samuel of IBM extended and improved on Strachey’s program by including machine learning - the ability of a program to learn from experience. A team from Carnegie Mellon University developed the first computer program that could prove mathematical theorems. The program eventually reproduced 38 of the 52 proofs in a classic mathematics textbook. Flushed with these successes, serious scientists made wildly optimistic pronouncements about the future of AI. In the event, project after project failed to deliver and the first “AI winter” set in.


The healthcare sector has seen incredible advancement following the development of new computer technologies, which pushed the field to produce ever more medical data and gave rise to various fields of research. Numerous efforts have been made to cope with the explosion of medical data on the one hand, and to extract useful knowledge from it on the other. This has prompted specialists to apply technical developments such as predictive analytics, learning algorithms, and machine learning to support decision-making and knowledge extraction. In medical science, prediction models are used to determine the risk of developing a disease, so as to enable early treatment or prevention of that disease. Single or multiple analyses of markers are used to assess future predisposition to a disease.
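
As a concrete illustration of such a risk model, the sketch below fits a logistic regression to clinical markers and outputs a disease-risk probability. The marker names, data, and model choice are invented for illustration; the abstract specifies neither a model nor a dataset.

```python
# Minimal disease-risk sketch: logistic regression over clinical markers.
# Markers and data are hypothetical; the abstract names no specific model.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: age, systolic blood pressure, glucose level (assumed markers).
X = np.array([[45, 130, 95], [60, 150, 160], [35, 118, 85], [70, 160, 180]])
y = np.array([0, 1, 0, 1])  # 1 = developed the disease

risk_model = LogisticRegression().fit(X, y)

# Probability that a new patient develops the disease, enabling early action.
new_patient = np.array([[55, 140, 120]])
print(risk_model.predict_proba(new_patient)[0, 1])
```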


Nowadays, a large volume of data is generated in the form of text, voice, video, images, and sound. Handling and processing these different types of data is a very challenging job, and analyzing big data with traditional data processing applications is a laborious process. Because of huge, scattered file systems, big data analysis is a difficult task, so a number of tools and techniques are required. Data mining techniques such as clustering, prediction, classification, and decision trees are used to analyze big data, while Apache Hadoop, Apache Spark, Apache Storm, MongoDB, NoSQL stores, and HPCC are the tools used to handle it. This paper presents a review and comparative study of these tools and techniques, which are commonly used for big data analytics, and a brief summary of each is presented here.
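
To make one of the listed combinations concrete, here is a minimal sketch of applying one of the named techniques (clustering, via k-means) on one of the named tools (Apache Spark) through its Python API. The input path and column names are hypothetical; the review itself prescribes no particular pipeline.

```python
# Minimal sketch: k-means clustering on Apache Spark (PySpark MLlib).
# The input path and column names are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.clustering import KMeans

spark = SparkSession.builder.appName("big-data-clustering").getOrCreate()

# Load a large CSV from a distributed file system (path is hypothetical).
df = spark.read.csv("hdfs:///data/events.csv", header=True, inferSchema=True)

# Assemble numeric columns into the single feature vector MLlib expects.
assembler = VectorAssembler(inputCols=["x1", "x2", "x3"], outputCol="features")
features_df = assembler.transform(df)

# Cluster the records into three groups and show sample assignments.
model = KMeans(k=3, seed=1).fit(features_df)
model.transform(features_df).select("prediction").show(5)
```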


Author(s):  
Kunal Parikh ◽  
Tanvi Makadia ◽  
Harshil Patel

Dengue is unquestionably one of the biggest health concerns in India and in many other developing countries, and unfortunately many people have lost their lives to it. Every year, approximately 390 million dengue infections occur around the world, of which about 500,000 develop into severe illness and roughly 25,000 result in death. Many factors contribute to the spread of dengue, such as temperature, humidity, precipitation, inadequate public health infrastructure, and many others. In this paper, we propose a method to perform predictive analytics on a dengue dataset using KNN, a machine-learning algorithm. This analysis would help in the prediction of future cases, and so could save many lives.
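
The abstract names k-nearest neighbours (KNN) but no implementation. A minimal sketch with scikit-learn's KNeighborsClassifier follows; the climate features, labels, and the choice of k=3 are assumptions for illustration, not the paper's settings.

```python
# Minimal KNN sketch for dengue-outbreak prediction; the features
# (temperature, humidity, precipitation) and k=3 are assumed.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Rows: [temperature (deg C), humidity (%), precipitation (mm)]
X = np.array([[30, 85, 120], [25, 60, 20], [32, 90, 200], [22, 55, 10]])
y = np.array([1, 0, 1, 0])  # 1 = outbreak conditions observed, 0 = not

knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(knn.predict([[29, 80, 150]]))  # classify a new week's conditions
```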


TAPPI Journal ◽  
2019 ◽  
Vol 18 (11) ◽  
pp. 679-689
Author(s):  
CYDNEY RECHTIN ◽  
CHITTA RANJAN ◽  
ANTHONY LEWIS ◽  
BETH ANN ZARKO

Packaging manufacturers are challenged to achieve consistent strength targets and maximize production while reducing costs through smarter fiber utilization, chemical optimization, energy reduction, and more. With innovative instrumentation readily accessible, mills are collecting vast amounts of data that provide them with ever-increasing visibility into their processes. Turning this visibility into actionable insight is key to successfully exceeding customer expectations and reducing costs. Predictive analytics supported by machine learning can provide real-time quality measures that remain robust and accurate in the face of changing machine conditions. These adaptive quality “soft sensors” allow for more informed, on-the-fly process changes; fast change detection; and process control optimization without requiring periodic model tuning. The use of predictive modeling in the paper industry has increased in recent years; however, little attention has been given to packaging finished quality. The use of machine learning to maintain prediction relevancy under ever-changing machine conditions is novel. In this paper, we demonstrate the process of establishing real-time, adaptive quality predictions in an industry focused on reel-to-reel quality control, and we discuss the value created through the availability and use of real-time critical quality.
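
The paper does not publish its models, but the adaptive soft-sensor idea — a quality prediction that keeps updating as machine conditions drift, with no periodic retuning — can be sketched with an online learner. The process inputs, the drifting target, and the use of SGDRegressor below are illustrative assumptions, not the authors' method.

```python
# Minimal adaptive soft-sensor sketch: an online regressor whose weights are
# updated with each new reel, so predictions track changing machine conditions.
# Inputs and target are hypothetical, not from the paper.
import numpy as np
from sklearn.linear_model import SGDRegressor

soft_sensor = SGDRegressor(learning_rate="constant", eta0=0.01)

rng = np.random.default_rng(0)
for reel in range(100):
    # Simulated process measurements: basis weight, refining energy, moisture.
    x = rng.normal(size=(1, 3))
    strength = 2.0 * x[0, 0] - 1.0 * x[0, 2] + 0.01 * reel  # drifting process
    soft_sensor.partial_fit(x, [strength])  # incremental update, no retuning

print(soft_sensor.predict(rng.normal(size=(1, 3))))  # real-time strength estimate
```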

