Machine learning model for feature recognition of sports competition based on improved TLD algorithm

2020 ◽  
pp. 1-12
Author(s):  
Qinglong Ding ◽  
Zhenfeng Ding

The characteristics of sports competitions play an important role in judging the fairness of a game and in improving athletes' skills. At present, feature recognition in sports competition is hampered by the environmental background, which causes recognition problems. To improve recognition performance, this study improves the TLD (tracking-learning-detection) algorithm, addressing its known shortcomings, and uses machine learning to build a feature recognition model for sports competition based on the improved algorithm. The improved TLD algorithm is also applied to long-term pedestrian tracking with PTZ cameras. The improved algorithm is analyzed and verified experimentally on a standard data set, and the experimental results are visualized using mathematical-statistics methods. The research shows that the proposed method is effective.
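TLD couples a tracker, a detector, and a P-N learning component that corrects the detector online. As an illustrative sketch only (the abstract does not describe the paper's specific improvement), the toy nearest-neighbour patch classifier below shows how TLD-style P- and N-experts update the positive and negative template sets; all vectors, names, and thresholds are hypothetical.

```python
import numpy as np

def max_cos(patch, templates):
    """Highest (non-negative) cosine similarity between a patch vector
    and a list of stored template vectors."""
    if not templates:
        return 0.0
    t = np.array(templates, dtype=float)
    t = t / np.linalg.norm(t, axis=1, keepdims=True)
    p = patch / np.linalg.norm(patch)
    return max(0.0, float(np.max(t @ p)))

class NNPatchClassifier:
    """Toy nearest-neighbour patch model, as used in TLD's detector."""
    def __init__(self, threshold=0.5):
        self.pos, self.neg = [], []
        self.threshold = threshold

    def confidence(self, patch):
        sp, sn = max_cos(patch, self.pos), max_cos(patch, self.neg)
        return sp / (sp + sn) if sp + sn > 0 else 0.0

    def is_target(self, patch):
        return self.confidence(patch) > self.threshold

    # P-N learning: the P-expert feeds back patches the tracker believes
    # are the target but the detector rejected; the N-expert feeds back
    # confident detections far from the tracked position as negatives.
    def p_update(self, patch):
        if not self.is_target(patch):
            self.pos.append(patch)

    def n_update(self, patch):
        if self.is_target(patch):
            self.neg.append(patch)

target = np.array([1.0, 0.0, 0.0, 0.0])      # hypothetical target patch
distractor = np.array([0.8, 0.6, 0.0, 0.0])  # cosine 0.8 with target

clf = NNPatchClassifier()
clf.p_update(target)        # P-expert adds a missed positive
clf.n_update(distractor)    # N-expert adds a false alarm as a negative
print(clf.is_target(target), clf.is_target(distractor))  # True False
```

After both updates, the distractor's confidence drops below threshold while the true target remains above it, which is the self-correcting behaviour P-N learning is designed to produce.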

Author(s):  
Meiyan Xu ◽  
Junfeng Yao ◽  
Yifeng Zheng ◽  
Yaojin Lin

Existing machine learning methods for classification and recognition of EEG motor imagery usually suffer reduced accuracy when training data are limited. To address this problem, this paper proposes a multi-rhythm capsule network (FBCapsNet) that uses as little EEG information as possible, focusing on key features, to classify motor imagery and further improve classification efficiency. The network is a compact recognition model with only 3 acquisition channels, yet it can effectively use the limited data for feature learning. On the BCI Competition IV 2b data set, experimental results show that the proposed network achieves 2.41% better performance than existing cutting-edge methods.
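The "multi-rhythm" (filter-bank) front end of such a network splits each EEG channel into rhythm-specific frequency bands before feature learning. A minimal preprocessing sketch, assuming Butterworth band-pass filters and illustrative band edges (the paper's exact bands are not given in this abstract):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def filter_bank(eeg, fs, bands):
    """Split a (channels, samples) EEG segment into one band-passed copy
    per rhythm band, stacked as (bands, channels, samples)."""
    out = []
    for lo, hi in bands:
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        out.append(filtfilt(b, a, eeg, axis=-1))
    return np.stack(out)

fs = 250                                 # BCI Competition IV 2b sampling rate
bands = [(4, 8), (8, 13), (13, 30)]      # theta, mu, beta (illustrative)
eeg = np.random.default_rng(0).normal(size=(3, fs * 4))  # 3 channels, 4 s
x = filter_bank(eeg, fs, bands)
print(x.shape)                           # (3, 3, 1000)
```

The 250 Hz sampling rate matches the BCI Competition IV 2b recordings; the stacked output is the kind of multi-band tensor a capsule network could consume.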


Author(s):  
Dhilsath Fathima.M ◽  
S. Justin Samuel ◽  
R. Hari Haran

Aim: To develop an improved and robust machine learning model for predicting Myocardial Infarction (MI), which could have substantial clinical impact. Objectives: This paper explains how to build a machine-learning-based computer-aided analysis system for early and accurate prediction of MI, using the Framingham Heart Study dataset for validation and evaluation. The proposed system is intended to support medical professionals in predicting myocardial infarction proficiently. Methods: The proposed model uses mean imputation to handle missing values in the data set, then applies principal component analysis (PCA) to extract optimal features and enhance classifier performance. After PCA, the reduced features are partitioned into a training set (70%) and a test set (30%). The training set is given as input to four widely used classifiers: support vector machine, k-nearest neighbor, logistic regression, and decision tree. The test set is used to evaluate the trained models using performance metrics: the confusion matrix, accuracy, precision, sensitivity, F1-score, and the AUC-ROC curve. Results: Evaluating the classifiers with these measures, we observed that logistic regression provides higher accuracy than the k-NN, SVM, and decision tree classifiers, and that PCA performs well as a feature extraction method for enhancing the proposed model. From these analyses, we conclude that logistic regression has good mean accuracy and standard deviation of accuracy compared with the other three algorithms. The AUC-ROC curves of the classifiers (Figures 4 and 5) show that logistic regression also exhibits the best AUC-ROC score, around 70%, compared to the k-NN and decision tree algorithms.
Conclusion: From the result analysis, we infer that the proposed machine learning model can act as an optimal decision-making system for predicting acute myocardial infarction at an earlier stage than existing machine-learning-based prediction models. It can predict the presence of acute myocardial infarction from a person's heart disease risk factors, helping decide when to start lifestyle modification and medical treatment to prevent heart disease.
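The described pipeline (mean imputation, PCA, 70/30 split, four classifiers, accuracy comparison) can be sketched with scikit-learn on synthetic data standing in for the Framingham table; the feature count, missing-value rate, and PCA dimensionality here are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.impute import SimpleImputer
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for the risk-factor table, with missing values injected.
X, y = make_classification(n_samples=600, n_features=15, random_state=0)
X[np.random.default_rng(0).random(X.shape) < 0.05] = np.nan

# 70% training / 30% testing split, as in the paper.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.30, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "k-NN": KNeighborsClassifier(),
    "SVM": SVC(),
    "decision tree": DecisionTreeClassifier(random_state=0),
}
results = {}
for name, clf in models.items():
    # Mean imputation -> scaling -> PCA -> classifier.
    pipe = make_pipeline(SimpleImputer(strategy="mean"), StandardScaler(),
                         PCA(n_components=8), clf)
    pipe.fit(X_tr, y_tr)
    results[name] = accuracy_score(y_te, pipe.predict(X_te))
    print(f"{name}: {results[name]:.3f}")
```

Swapping `accuracy_score` for `roc_auc_score` or a confusion matrix reproduces the paper's other metrics on the same held-out 30%.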


2021 ◽  
Author(s):  
Junjie Shi ◽  
Jiang Bian ◽  
Jakob Richter ◽  
Kuan-Hsun Chen ◽  
Jörg Rahnenführer ◽  
...  

Abstract: The predictive performance of a machine learning model depends strongly on the corresponding hyper-parameter setting, so hyper-parameter tuning is often indispensable. Normally such tuning requires the machine learning model to be trained and evaluated on centralized data to obtain a performance estimate. However, in a distributed machine learning scenario, it is not always possible to collect all the data from all nodes due to privacy concerns or storage limitations. Moreover, if data has to be transferred over low-bandwidth connections, the time available for tuning is reduced. Model-Based Optimization (MBO) is a state-of-the-art method for tuning hyper-parameters, but its application to distributed machine learning models and federated learning has received little research attention. This work proposes a framework, MODES, that allows MBO to be deployed on resource-constrained distributed embedded systems. Each node trains an individual model on its local data, and the goal is to optimize the combined prediction accuracy. The presented framework offers two optimization modes: (1) MODES-B treats the whole ensemble as a single black box and optimizes the hyper-parameters of each individual model jointly, and (2) MODES-I treats all models as clones of the same black box, which allows the optimization to be efficiently parallelized in a distributed setting. We evaluate MODES by conducting experiments on the optimization of the hyper-parameters of a random forest and a multi-layer perceptron. The experimental results demonstrate that MODES outperforms the baseline, i.e., tuning with MBO on each node individually on its local sub-data set, with improvements in mean accuracy (MODES-B), run-time efficiency (MODES-I), and statistical stability for both modes.
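The MBO loop at the heart of such tuning alternates between fitting a surrogate model to the evaluations gathered so far and evaluating the candidate the surrogate predicts to be best. A toy single-parameter sketch, with a random-forest surrogate standing in for whatever surrogate MODES actually uses (the objective, bounds, and budgets are all illustrative):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def mbo_minimize(objective, bounds, n_init=8, n_iter=12, seed=0):
    """Minimal sequential model-based optimization of a 1-D
    hyper-parameter: surrogate fit, candidate proposal, evaluation."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(n_init, 1))          # initial design
    y = np.array([objective(x[0]) for x in X])
    for _ in range(n_iter):
        surrogate = RandomForestRegressor(n_estimators=50,
                                          random_state=0).fit(X, y)
        cand = rng.uniform(lo, hi, size=(256, 1))      # random candidates
        x_next = cand[np.argmin(surrogate.predict(cand))]
        X = np.vstack([X, x_next])                     # evaluate and record
        y = np.append(y, objective(x_next[0]))
    best = np.argmin(y)
    return X[best, 0], y[best]

# Toy objective: pretend x is a hyper-parameter and this is validation error.
x_best, f_best = mbo_minimize(lambda x: (x - 0.3) ** 2, bounds=(0.0, 1.0))
print(round(x_best, 3), round(f_best, 4))
```

In MODES-B the black box evaluated here would be the whole ensemble's combined accuracy; in MODES-I each node would run such a loop on the shared surrogate in parallel.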


2020 ◽  
Vol 6 ◽  
Author(s):  
Jaime de Miguel Rodríguez ◽  
Maria Eugenia Villafañe ◽  
Luka Piškorec ◽  
Fernando Sancho Caparrini

Abstract: This work presents a methodology for the generation of novel 3D objects resembling wireframes of building types. These result from the reconstruction of interpolated locations within the learnt distribution of variational autoencoders (VAEs), a deep generative machine learning model based on neural networks. The data set used features a scheme for geometry representation based on a ‘connectivity map’ that is especially suited to expressing the wireframe objects that compose it. Additionally, the input samples are generated through ‘parametric augmentation’, a strategy proposed in this study that creates coherent variations among data by enabling a set of parameters to alter representative features of a given building type. In the experiments described in this paper, more than 150,000 input samples belonging to two building types were processed during the training of a VAE model. The main contribution of this paper is to explore parametric augmentation for the generation of large data sets of 3D geometries, showcasing its problems and limitations in the context of neural networks and VAEs. Results show that the generation of interpolated hybrid geometries is a challenging task; despite the difficulty of the endeavour, promising advances are presented.
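The interpolation step that produces hybrid geometries is simple once a VAE is trained: sample latent codes between two encoded building types and decode each one. A minimal latent-space sketch (the latent vectors and dimensionality are illustrative, and decoding back to a connectivity map is omitted):

```python
import numpy as np

def interpolate_latents(z_a, z_b, steps=5):
    """Linear interpolation between two latent codes; decoding each row
    with the trained VAE decoder yields the hybrid wireframes."""
    t = np.linspace(0.0, 1.0, steps)[:, None]
    return (1.0 - t) * z_a + t * z_b

z_house = np.array([0.2, -1.0, 0.5])   # illustrative encoded building type A
z_tower = np.array([-0.4, 0.8, 1.5])   # illustrative encoded building type B
path = interpolate_latents(z_house, z_tower, steps=5)
print(path[0], path[-1])               # endpoints equal the two originals
```

The paper's finding that hybrids are hard to generate reflects the fact that intermediate points on this path may fall in low-density regions of the learnt distribution.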


2020 ◽  
Vol 10 (22) ◽  
pp. 8067
Author(s):  
Tomohiro Mashita ◽  
Tetsuya Kanayama ◽  
Photchara Ratsamee

Air conditioners enable a comfortable environment for people in a variety of scenarios. However, in a room with multiple people, the comfort of a particular person depends strongly on their clothing, metabolism, preferences, and so on, and the ideal conditions for each person in a room can conflict. An ideal way to resolve such conflicts is an intelligent air conditioning system that can independently control air temperature and flow in different areas of a room and thereby produce thermal comfort for multiple users, which we define as the personal preference of air flow and temperature. In this paper, we propose Personal Atmosphere, a machine-learning-based method to obtain parameters of air conditioners that generate non-uniform distributions of air temperature and flow in a room. In this method, two-dimensional air-temperature and air-flow distributions in a room, which can be considered a summary of each user's preferences, are used as input to a machine learning model, which then outputs a parameter set for the air conditioners in a given room. We used ResNet-50 as the model and generated a data set of air-temperature and air-flow distributions using computational fluid dynamics (CFD) software. We then conducted evaluations on two rooms with two and four ceiling-mounted air conditioners, respectively, and confirmed that the estimated parameters generate air-temperature and air-flow distributions close to those required in simulation. We also evaluated a ResNet-50 with fine-tuning; its training time decreased significantly, though its performance also decreased.
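Formatting the desired conditions as a model input can be sketched as stacking the two 2-D distributions into a two-channel image batch, matching ResNet-style input conventions; the grid size, units, and channel layout here are assumptions, not the paper's exact preprocessing.

```python
import numpy as np

def make_model_input(temp_field, flow_field):
    """Stack the desired 2-D temperature and air-flow maps into one
    two-channel image batch for a CNN regressor."""
    x = np.stack([temp_field, flow_field])    # (2, H, W)
    return x[None].astype(np.float32)         # (1, 2, H, W) batch

H, W = 32, 48                                 # illustrative room grid
temp = np.full((H, W), 24.0)                  # desired temperatures in °C
temp[:, : W // 2] = 22.0                      # cooler left half of the room
flow = np.zeros((H, W))
flow[: H // 2] = 0.3                          # stronger air flow near one wall
x = make_model_input(temp, flow)
print(x.shape)                                # (1, 2, 32, 48)
```

The network's output would then be the vector of air conditioner parameters (e.g. set-point and vane settings per unit) that the CFD simulation maps back to such fields.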


2017 ◽  
Vol 36 (3) ◽  
pp. 267-269 ◽  
Author(s):  
Matt Hall ◽  
Brendon Hall

The Geophysical Tutorial in the October issue of The Leading Edge was the first we've done on the topic of machine learning. Brendon Hall's article ( Hall, 2016 ) showed readers how to take a small data set — wireline logs and geologic facies data from nine wells in the Hugoton natural gas and helium field of southwest Kansas ( Dubois et al., 2007 ) — and predict the facies in two wells for which the facies data were not available. The article demonstrated with 25 lines of code how to explore the data set, then create, train and test a machine learning model for facies classification, and finally visualize the results. The workflow took a deliberately naive approach using a support vector machine model. It achieved a sort of baseline accuracy rate — a first-order prediction, if you will — of 0.42. That might sound low, but it's not untypical for a naive approach to this kind of problem. For comparison, random draws from the facies distribution score 0.16, which is therefore the true baseline.
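The 0.16 true baseline follows from a simple identity: the expected accuracy of predicting facies by drawing from the class distribution itself is the sum of squared class proportions. A quick check with illustrative proportions (not the actual Hugoton counts):

```python
import numpy as np

def random_draw_accuracy(proportions):
    """Expected accuracy of guessing labels by drawing from the class
    distribution itself: sum over classes of p_i squared."""
    p = np.asarray(proportions, dtype=float)
    p = p / p.sum()                       # normalize, just in case
    return float(np.sum(p ** 2))

# Nine illustrative facies proportions (hypothetical, summing to 1).
p = [0.10, 0.12, 0.08, 0.15, 0.10, 0.12, 0.11, 0.12, 0.10]
print(round(random_draw_accuracy(p), 3))  # 0.114
```

A near-uniform nine-class distribution gives about 1/9 ≈ 0.11; the reported 0.16 for the Hugoton facies implies a more skewed class distribution.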


Author(s):  
Muneer A. S. Hazaa ◽  
Nazlia Omar ◽  
Fadl Mutaher Ba-Alwi ◽  
Mohammed Albared

Identifying compound nouns is important for a wide spectrum of applications in natural language processing, such as machine translation and information retrieval. Extraction of compound nouns requires deep or shallow syntactic preprocessing tools and large corpora. This paper investigates several methods for extracting noun compounds from Malay text corpora. First, we present empirical results for sixteen statistical association measures applied to Malay <N+N> compound noun extraction. Second, we introduce the possibility of integrating multiple association measures. Third, this work also provides a standard dataset intended as a common platform for evaluating research on the identification of compound nouns in the Malay language. The standard data set contains 7,235 unique N-N candidates, 2,970 of which are N-N compound noun collocations. The extraction algorithms are evaluated against this reference data set. The experimental results demonstrate that a group of association measures (t-test, Piatetsky-Shapiro (PS), C-value, FGM, and the rank combination method) are the best association measures and outperform the others for <N+N> collocations in the Malay corpus. Finally, we describe several classification methods for combining the scores of the basic association measures, followed by their evaluation. Evaluation results show that classification algorithms significantly outperform individual association measures, and the experimental results obtained are quite satisfactory in terms of precision, recall, and F-score.
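One of the winning measures, the t-test score, reduces to a simple formula over bigram and unigram counts; pointwise mutual information (shown for contrast, and not claimed here as one of the paper's winning group) is similarly simple. A sketch with hypothetical counts:

```python
import math

def t_score(o11, f1, f2, n):
    """t-test association score for a bigram: (O - E) / sqrt(O),
    where E = f1 * f2 / N is the count expected under independence."""
    e = f1 * f2 / n
    return (o11 - e) / math.sqrt(o11)

def pmi(o11, f1, f2, n):
    """Pointwise mutual information: log2(O * N / (f1 * f2))."""
    return math.log2(o11 * n / (f1 * f2))

# Hypothetical counts for one Malay N+N candidate in a 1M-token corpus:
# the pair occurs 30 times; its nouns occur 120 and 200 times overall.
print(round(t_score(30, 120, 200, 1_000_000), 3))  # 5.473
print(round(pmi(30, 120, 200, 1_000_000), 3))      # 10.288
```

Ranking all 7,235 candidates by such scores, then thresholding or combining ranks, is the extraction step the paper evaluates against its 2,970 gold collocations.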


Analysis of patient data is always a good way to get accurate results when using classifiers. A combination of classifiers can give more accurate results than a single classifier, because a single classifier on its own rarely gives fully accurate results. The aim is to predict the Outcome feature of the data set, which can take only two values: 0, meaning the patient does not have heart disease, and 1, meaning the patient does. So there is a need to build a classification algorithm that can predict the Outcome feature of the test dataset with good accuracy. For this, understanding the data is important; then various classification algorithms can be tested, and the model with the highest accuracy among them can be selected. The selected model can then be given to a software developer to build the end-user application that will predict heart disease in a patient.
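The "test several classifiers, keep the best" workflow described above can be sketched with scikit-learn cross-validation; the synthetic data here stands in for the patient table and the candidate list is illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC

# Synthetic stand-in for the patient table; Outcome is the 0/1 target.
X, y = make_classification(n_samples=400, n_features=10, random_state=1)

candidates = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "k-NN": KNeighborsClassifier(),
    "decision tree": DecisionTreeClassifier(random_state=1),
    "SVM": SVC(),
}
# Score each candidate with 5-fold cross-validation, keep the best.
scores = {name: cross_val_score(m, X, y, cv=5).mean()
          for name, m in candidates.items()}
best = max(scores, key=scores.get)
print(best, round(scores[best], 3))
```

The winning estimator would then be refit on all available data before being handed to the application developer.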


At maximum traffic intensity, i.e., during the busy hour, the measured CPU load of the GSM BSC signalling units (BSUs) will be at its peak. A BSU's CPU load is a function of the number of transceivers (TRXs) mapped to it and hence of the volume of offered traffic being handled by the unit. It is also a function of the nature of the offered load: with the same volume of offered traffic, the CPU load under the nominal traffic profile will differ from that under some other, arbitrary traffic profile. To manage future traffic growth, a model to estimate BSU CPU load is essential. In recent times, using Machine Learning (ML) to develop such a model is an approach that has gained wide acceptance; since CPU load depends on a large set of parameters and is difficult to estimate directly, a machine learning approach is more scalable. In this paper, we describe a back-propagation neural network model developed to estimate BSU CPU load. We describe the model parameters, choices, and implementation architecture, and estimate its prediction accuracy on an evaluation data set. We also discuss alternative ML architectures and compare their relative prediction accuracies with the primary ML model.
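A back-propagation network for this kind of regression can be sketched with scikit-learn's `MLPRegressor`; the features (TRX count, offered Erlangs, traffic-profile mix) and the load formula below are illustrative assumptions, not the paper's measured data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 500
trx = rng.integers(8, 64, size=n)              # TRXs mapped to the BSU
erlang = trx * rng.uniform(0.5, 0.9, size=n)   # offered traffic volume
sms_share = rng.uniform(0.0, 0.4, size=n)      # traffic-profile mix
X = np.column_stack([trx, erlang, sms_share])
# Synthetic "measured" busy-hour CPU load (%) with noise.
cpu = 5 + 0.8 * trx + 0.5 * erlang + 20 * sms_share + rng.normal(0, 2, n)

# Back-propagation network: two hidden layers, scaled inputs.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0),
)
model.fit(X[:400], cpu[:400])
print(round(model.score(X[400:], cpu[400:]), 2))   # held-out R² score
```

Holding out the last 100 samples mirrors the paper's use of a separate evaluation data set for estimating prediction accuracy.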


In this paper we propose a novel supervised machine learning model to predict the polarity of sentiments expressed in microblogs. The proposed model has a stacked neural network structure consisting of Long Short-Term Memory (LSTM) and Convolutional Neural Network (CNN) layers. To capture the long-term dependencies of sentiments in the text ordering of a microblog, the proposed model employs an LSTM layer. The encodings produced by the LSTM layer are then fed to a CNN layer, which generates localized patterns with higher accuracy. These patterns can capture both local and global long-term dependencies in the text of the microblogs. The proposed model was observed to perform better, with improved prediction accuracy, compared with semantic, machine learning, and deep neural network approaches such as SVM, CNN, LSTM, and CNN-LSTM. This paper uses the benchmark Stanford Large Movie Review dataset to show the significance of the new approach; the prediction accuracy of the proposed approach is comparable to other state-of-the-art approaches.
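The CNN-over-LSTM-encodings idea can be made concrete with a numpy-only sketch: kernels slide along the sequence of hidden states and each feature map is global-max-pooled, yielding one localized-pattern feature per kernel. All dimensions and kernels here are illustrative.

```python
import numpy as np

def conv1d_over_encodings(h, kernels, width=3):
    """Slide each kernel over the sequence of LSTM hidden states `h`
    (seq_len, hidden) to extract localized patterns, then
    global-max-pool each feature map, as in a stacked LSTM-CNN."""
    seq_len, hidden = h.shape
    feats = []
    for w in kernels:                    # w has shape (width, hidden)
        fmap = [np.sum(h[t:t + width] * w)
                for t in range(seq_len - width + 1)]
        feats.append(max(fmap))          # global max pooling
    return np.array(feats)

rng = np.random.default_rng(0)
h = rng.normal(size=(20, 8))             # 20 timesteps of 8-dim encodings
kernels = [rng.normal(size=(3, 8)) for _ in range(4)]
feats = conv1d_over_encodings(h, kernels)
print(feats.shape)                       # (4,)
```

In the full model a dense layer on these pooled features would produce the polarity prediction; here the point is only the shape of the data flow from LSTM encodings to localized CNN features.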

