Waste Management Using Machine Learning and Deep Learning Algorithms

2020 ◽  
Vol 6 (2) ◽  
pp. 97-106
Author(s):  
Khan Nasik Sami ◽  
Zian Md Afique Amin ◽  
Raini Hassan

Waste management is one of the essential issues the world currently faces, regardless of whether a country is developed or developing. A key issue in waste segregation is that trash bins in public places overflow well before the next cleaning cycle begins. The separation of waste is done by unskilled workers, which is less effective, time-consuming, and not feasible given the sheer volume of waste. We therefore propose an automated waste classification approach utilizing machine learning and deep learning algorithms. The goal of this work is to gather a dataset and arrange it into six classes: glass, paper, metal, plastic, cardboard, and waste. The models we used are classification models. For our research we compared four algorithms: CNN, SVM, Random Forest, and Decision Tree. Since our concern is a classification problem, we used several machine learning and deep learning algorithms that best fit classification tasks. In our experiments, CNN achieved the highest classification accuracy, around 90%, while SVM also adapted well to the various kinds of waste with 85%, and Random Forest and Decision Tree achieved 55% and 65%, respectively.
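As an illustration of the comparison this abstract describes, the sketch below trains an SVM, a random forest, a decision tree, and a small CNN on a six-class image dataset. It is a minimal sketch, not the authors' pipeline: the image size, the CNN architecture, and the randomly generated placeholder data are assumptions made only to keep the example self-contained.

```python
# Hedged sketch: comparing classical classifiers and a small CNN on six waste classes.
# The dataset here is a random placeholder; swap in the real images to reproduce a study like this.
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

CLASSES = ["glass", "paper", "metal", "plastic", "cardboard", "waste"]

# Placeholder data: 600 RGB images of 64x64 pixels (an assumption, not the study's dataset).
X = np.random.rand(600, 64, 64, 3).astype("float32")
y = np.random.randint(0, len(CLASSES), size=600)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Classical models operate on flattened pixel vectors.
flat_tr, flat_te = X_tr.reshape(len(X_tr), -1), X_te.reshape(len(X_te), -1)
for name, clf in [("SVM", SVC()),
                  ("Random Forest", RandomForestClassifier()),
                  ("Decision Tree", DecisionTreeClassifier())]:
    clf.fit(flat_tr, y_tr)
    print(name, "accuracy:", accuracy_score(y_te, clf.predict(flat_te)))

# A small CNN operating directly on the image tensors.
cnn = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(64, 64, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(len(CLASSES), activation="softmax"),
])
cnn.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
cnn.fit(X_tr, y_tr, epochs=5, validation_data=(X_te, y_te), verbose=0)
print("CNN accuracy:", cnn.evaluate(X_te, y_te, verbose=0)[1])
```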

2021 ◽  
Vol 2021 ◽  
pp. 1-7
Author(s):  
Jian Li ◽  
Yongyan Zhao

As the national economy has entered a stage of rapid development and economic and social development have ushered in the “14th Five-Year Plan,” the country has issued support policies to encourage and guide college students to start their own businesses. The establishment of an innovation and entrepreneurship platform therefore has a significant impact on China’s economy and gives college students substantial support in starting a business. The theory of deep learning algorithms originated from the development of artificial neural networks and is another important field of machine learning. As computing power has greatly improved, and GPUs in particular can quickly train deep neural networks, deep learning has become an important research direction. Deep learning algorithms are nonlinear network structures and a standard modeling method in machine learning: once various patterns have been modeled, they can be recognized and applied. This article combines theoretical and empirical research, builds on the views and findings of scholars in recent years, and introduces its basic framework and research content. Deep learning algorithms are then used to analyze the experimental data, combining the relevant concepts of deep learning. The article focuses on the construction of an IAE (innovation and entrepreneurship) education platform and on making full use of deep learning algorithms to realize such platforms. Traditional methods need to extract features through manual design and then classify those features before recognition can be performed, whereas deep learning algorithms have strong image and data processing capabilities and can quickly process large-scale data. Survey data show that 49.5% of college students and 35.2% of undergraduates expressed interest in entrepreneurship; entrepreneurship is a good option for relieving employment pressure.


2021 ◽  
Vol 12 ◽  
Author(s):  
Suk-Young Kim ◽  
Taesung Park ◽  
Kwonyoung Kim ◽  
Jihoon Oh ◽  
Yoonjae Park ◽  
...  

Purpose: The number of patients with alcohol-related problems is steadily increasing. A large-scale survey of alcohol-related problems has been conducted. However, studies that predict hazardous drinkers and identify which factors contribute to the prediction are limited. Thus, the purpose of this study was to predict hazardous drinkers and the severity of alcohol-related problems in patients using a deep learning algorithm based on large-scale survey data. Materials and Methods: Datasets from the National Health and Nutrition Examination Survey of South Korea (K-NHANES), a nationally representative survey of the entire South Korean population, were used to train deep learning and conventional machine learning algorithms. Datasets from 69,187 and 45,672 participants were used to predict hazardous drinkers and the severity of alcohol-related problems, respectively. Based on the degree of contribution of each variable to the deep learning model, it was possible to determine which variables contributed significantly to the prediction of hazardous drinkers. Results: Deep learning showed higher performance than conventional machine learning algorithms. It predicted hazardous drinkers with an AUC (area under the receiver operating characteristic curve) of 0.870 (logistic regression: 0.858, linear SVM: 0.849, random forest classifier: 0.810, K-nearest neighbors: 0.740). Among the 325 variables for predicting hazardous drinkers, energy intake was the factor showing the greatest contribution to the prediction, followed by carbohydrate intake. Participants were classified into Zone I, Zone II, Zone III, and Zone IV based on the degree of alcohol-related problems, with AUCs of 0.881, 0.774, 0.853, and 0.879, respectively. Conclusion: Hazardous drinking groups could be effectively predicted and individuals could be classified according to the degree of alcohol-related problems using a deep learning algorithm. This algorithm could be used to screen people who need treatment for alcohol-related problems among the general population or hospital visitors.
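A minimal sketch of the kind of comparison reported, assuming synthetic tabular data in place of the K-NHANES records: an MLP stands in for the deep model, conventional classifiers are compared by AUC, and permutation importance illustrates one way of ranking variable contributions (the study's own attribution method is not reproduced here).

```python
# Hedged sketch: AUC comparison of a deep model against conventional classifiers,
# plus a variable-contribution ranking. All data below are synthetic placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 325))                                        # 325 candidate predictors, as in the abstract
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=5000) > 1).astype(int)   # hazardous-drinker flag (toy rule)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "Deep (MLP)": MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=300),
    "Logistic regression": LogisticRegression(max_iter=1000),
    "Linear SVM": LinearSVC(),
    "Random forest": RandomForestClassifier(n_estimators=200),
    "KNN": KNeighborsClassifier(),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    score = model.decision_function(X_te) if hasattr(model, "decision_function") \
        else model.predict_proba(X_te)[:, 1]
    print(name, "AUC:", round(roc_auc_score(y_te, score), 3))

# Permutation importance as one way to rank variable contributions to the deep model.
imp = permutation_importance(models["Deep (MLP)"], X_te, y_te,
                             scoring="roc_auc", n_repeats=5, random_state=0)
print("Top contributing variables (indices):", np.argsort(imp.importances_mean)[::-1][:5])
```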


Computers ◽  
2021 ◽  
Vol 10 (9) ◽  
pp. 113
Author(s):  
James Coe ◽  
Mustafa Atay

The research aims to evaluate the impact of race on facial recognition across two types of algorithms. We give a general insight into facial recognition and discuss four problems related to it. We review our system design, development, and architecture; present an in-depth evaluation plan for each type of algorithm and dataset; and describe the software and its architecture. We thoroughly explain the results and findings of our experimentation and provide analysis for the machine learning algorithms and deep learning algorithms. Concluding the investigation, we compare the two kinds of algorithms on accuracy, metrics, miss rates, and performance to observe which algorithms mitigate racial bias the most. We evaluate racial bias across five machine learning algorithms and three deep learning algorithms using racially imbalanced and balanced datasets. We evaluate and compare the accuracy and miss rates of all tested algorithms and report that SVC is the superior machine learning algorithm and VGG16 is the best deep learning algorithm based on our experimental study. Our findings conclude that the algorithm that mitigates bias the most is VGG16, and all our deep learning algorithms outperformed their machine learning counterparts.
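The per-group comparison below is a hedged sketch of how accuracy and miss rates can be broken out by demographic group; the embeddings, identity labels, and group labels are synthetic placeholders rather than the datasets or feature extractors used in the paper.

```python
# Hedged sketch: per-group miss rates for a single classifier (SVC), a simple bias signal.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 128))        # stand-in face embeddings (assumption, not the paper's features)
y = rng.integers(0, 10, size=2000)      # identity labels
group = rng.integers(0, 2, size=2000)   # demographic group label per image

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.25, random_state=1)

clf = SVC().fit(X_tr, y_tr)
pred = clf.predict(X_te)

# Miss rate per demographic group: the gap between groups indicates potential bias.
for g in np.unique(g_te):
    mask = g_te == g
    miss_rate = 1 - accuracy_score(y_te[mask], pred[mask])
    print(f"group {g}: miss rate {miss_rate:.3f}")
```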


2019 ◽  
Vol 5 (Supplement_1) ◽  
Author(s):  
David Nieuwenhuijse ◽  
Bas Oude Munnink ◽  
My Phan ◽  
Marion Koopmans

Sewage samples have high potential benefit for surveillance of circulating pathogens because they are easy to obtain and reflect population-wide circulation of pathogens. These types of samples typically contain a great diversity of viruses. Therefore, one of the main challenges of metagenomic sequencing of sewage for surveillance is sequence annotation and interpretation. Especially for high-threat viruses, false positive signals can trigger unnecessary alerts, but true positives should not be missed. Annotation thus requires high sensitivity and specificity. To better interpret annotated reads for high-threat viruses, we attempt to determine how classifiable they are against a background of reads from closely related low-threat viruses. As an example, we attempted to distinguish reads of poliovirus, a virus of high public health importance, from reads of other enteroviruses. A sequence-based deep learning algorithm was used to classify reads as either polio or non-polio enterovirus. Short reads were generated from 500 polio and 2,000 non-polio enterovirus genomes as a training set. By training the algorithm on this dataset, we tried to determine, at the single-read level, which short reads can reliably be labeled as poliovirus and which cannot. After training the deep learning algorithm on the generated reads, we were able to calculate the probability with which a read can be assigned to a poliovirus genome or a non-poliovirus genome. We show that the algorithm succeeds in classifying the reads with high accuracy. The probability of assigning a read to the correct class was related to the location in the genome to which the read mapped, which conformed with our expectations since some regions of the genome are more conserved than others. Classifying short reads of high-threat viral pathogens seems to be a promising application of sequence-based deep learning algorithms. Also, recent developments in software and hardware have facilitated the development and training of deep learning algorithms. Further plans for this work are to characterize the hard-to-classify regions of the poliovirus genome, build larger training databases, and expand the current approach to other viruses.
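A rough sketch of a sequence-based read classifier of the kind described: reads are one-hot encoded and a small 1D CNN outputs the probability that a read originates from a poliovirus genome. The read length, architecture, and randomly generated reads are assumptions made for illustration, not the study's training data.

```python
# Hedged sketch: one-hot encode short reads and train a small 1D CNN to score
# polio vs. non-polio origin. Reads below are random placeholders, not simulated
# from real genomes.
import numpy as np
import tensorflow as tf

READ_LEN, BASES = 150, "ACGT"

def one_hot(read):
    idx = {b: i for i, b in enumerate(BASES)}
    mat = np.zeros((READ_LEN, 4), dtype="float32")
    for i, b in enumerate(read[:READ_LEN]):
        mat[i, idx[b]] = 1.0
    return mat

rng = np.random.default_rng(0)
reads = ["".join(rng.choice(list(BASES), READ_LEN)) for _ in range(2000)]
X = np.stack([one_hot(r) for r in reads])
y = rng.integers(0, 2, size=2000)        # 1 = poliovirus, 0 = non-polio enterovirus

model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(64, 9, activation="relu", input_shape=(READ_LEN, 4)),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # probability of poliovirus origin
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, validation_split=0.2, epochs=3, verbose=0)
print("Polio probability of first read:", float(model.predict(X[:1], verbose=0)[0, 0]))
```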


2020 ◽  
Vol 2020 ◽  
pp. 1-12
Author(s):  
Peter Appiahene ◽  
Yaw Marfo Missah ◽  
Ussiph Najim

The financial crisis that hit Ghana from 2015 to 2018 raised various issues with respect to the efficiency of banks and the safety of depositors in the banking industry. As part of measures to improve the banking sector and restore customers’ confidence, efficiency and performance analysis in the banking industry has become a pressing issue, because stakeholders have to detect the underlying causes of inefficiencies within the industry. Nonparametric methods such as Data Envelopment Analysis (DEA) have been suggested in the literature as a good measure of banks’ efficiency and performance. Machine learning algorithms have also been viewed as a good tool for estimating various nonparametric and nonlinear problems. This paper combines DEA with three machine learning approaches to evaluate bank efficiency and performance using 444 Ghanaian bank branches as Decision Making Units (DMUs). The results were compared with the corresponding efficiency ratings obtained from the DEA. Finally, the prediction accuracies of the three machine learning models were compared. The results suggested that the decision tree (DT) and its C5.0 algorithm provided the best predictive model: it had 100% accuracy in predicting the 134 holdout samples (30% of the banks) and a P value of 0.00. The DT was followed closely by the random forest algorithm with a predictive accuracy of 98.5% and a P value of 0.00, and finally the neural network (86.6% accuracy) with a P value of 0.66. The study concluded that banks in Ghana can use the results of this study to predict their respective efficiencies. All experiments were performed within a simulation environment in RStudio using R code.
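The original experiments were run in R (RStudio) with the C5.0 decision tree; the sketch below is a loose Python analogue in which DEA efficiency labels are taken as given and three classifiers are compared on a 70/30 holdout split. The branch features and labels are synthetic placeholders, not the Ghanaian bank data.

```python
# Hedged sketch (Python analogue of the R workflow): predict DEA efficiency labels
# with a decision tree, random forest, and neural network on a 70/30 holdout split.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(444, 6))                 # 444 DMUs with assumed input/output features
dea_label = (X.sum(axis=1) > 0).astype(int)   # stand-in for DEA efficient / inefficient rating

X_tr, X_te, y_tr, y_te = train_test_split(X, dea_label, test_size=0.3, random_state=0)

for name, clf in [("Decision tree", DecisionTreeClassifier()),
                  ("Random forest", RandomForestClassifier(n_estimators=200)),
                  ("Neural network", MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000))]:
    clf.fit(X_tr, y_tr)
    print(name, "holdout accuracy:", round(accuracy_score(y_te, clf.predict(X_te)), 3))
```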


2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Aan Chu ◽  
David Squirrell ◽  
Andelka M. Phillips ◽  
Ehsan Vaghefi

This systematic review was performed to identify the specifics of an optimal diabetic retinopathy deep learning algorithm by identifying the best exemplar research studies in the field, whilst highlighting potential barriers to the clinical implementation of such an algorithm. Searching five electronic databases (Embase, MEDLINE, Scopus, PubMed, and the Cochrane Library) returned 747 unique records on 20 December 2019. Predetermined inclusion and exclusion criteria were applied to the search results, resulting in the 15 highest-quality publications. A manual search through the reference lists of relevant review articles found from the database search was conducted, yielding no additional records. The validation datasets of the trained deep learning algorithms were used to create a set of optimal properties for an ideal diabetic retinopathy classification algorithm. Potential limitations to the clinical implementation of such systems were identified as lack of generalizability, limited screening scope, and data sovereignty issues. It is concluded that deep learning algorithms in the context of diabetic retinopathy screening have reported impressive results. Despite this, the potential sources of limitations in such systems must be evaluated carefully. An ideal deep learning algorithm should be clinic-, clinician-, and camera-agnostic; comply with local regulation for data sovereignty, storage, privacy, and reporting; and require minimal human input.


2021 ◽  
Vol 35 (4) ◽  
pp. 349-357
Author(s):  
Shilpa P. Khedkar ◽  
Aroul Canessane Ramalingam

The Internet of Things (IoT) is a rising infrastructure of the 21st century. The classification of traffic over IoT networks has attained significant importance due to the rapid growth of users and devices. It is the need of the hour to isolate normal traffic from malicious traffic and to route the normal traffic to the proper destination to satisfy the QoS requirements of IoT users. Malicious traffic can be detected by continuously monitoring traffic for suspicious links, files, connections created and received, unrecognised protocol/port numbers, and suspicious destination/source IP combinations. A proficient classification mechanism in an IoT environment should be capable of classifying heavy traffic quickly, deflecting malevolent traffic on time, and transmitting benign traffic to the designated nodes to serve the needs of users. In this work, the AdaBoost and XGBoost machine learning algorithms and a Deep Neural Network approach are proposed to classify IoT traffic, which eventually enhances the throughput of IoT networks and reduces congestion over IoT channels. The experimental results indicate that the deep learning algorithm achieves higher accuracy than the machine learning algorithms.
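A minimal sketch of the proposed comparison, assuming synthetic per-flow features in place of real IoT traffic: AdaBoost and XGBoost are compared with a small neural network (standing in for the deep model) on a benign-versus-malicious classification task.

```python
# Hedged sketch: classify IoT flows as benign or malicious with AdaBoost, XGBoost,
# and a small neural network. Flow features and labels are synthetic placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import AdaBoostClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier   # requires the xgboost package

rng = np.random.default_rng(0)
X = rng.normal(size=(10000, 20))                 # per-flow features (ports, sizes, timings, ...)
y = (X[:, 0] + X[:, 3] > 1).astype(int)          # 1 = malicious, 0 = benign (toy rule)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "AdaBoost": AdaBoostClassifier(),
    "XGBoost": XGBClassifier(eval_metric="logloss"),
    "Deep neural network": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "accuracy:", round(accuracy_score(y_te, model.predict(X_te)), 3))
```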


2021 ◽  
Author(s):  
Yiqi Jack Gao ◽  
Yu Sun

The start of 2020 marked the beginning of the deadly COVID-19 pandemic caused by the novel SARS-CoV-2 virus originating in Wuhan, China. As of the time of writing, the virus had infected over 150 million people worldwide and resulted in more than 3.5 million global deaths. Accurate predictions made through machine learning algorithms can be very useful as a guide for hospitals and policy makers to make adequate preparations and enact effective policies to combat the pandemic. This paper carries out a two-pronged approach to analyzing COVID-19. First, the model utilizes the feature importance of a random forest regressor to select eight of the most significant predictors (date, new tests, weekly hospital admissions, population density, total tests, total deaths, location, and total cases) for predicting daily increases in COVID-19 cases, highlighting potential target areas for efficient pandemic responses. It then utilizes machine learning algorithms such as linear regression, polynomial regression, and random forest regression to predict daily COVID-19 cases from this diverse set of predictors, and the models proved competent at generating predictions with reasonable accuracy.
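A hedged sketch of the two-pronged approach: a random forest ranks candidate predictors, the top eight are retained, and linear, polynomial, and random forest regressors are compared on the reduced set. The synthetic table and column count are placeholders, not the study's COVID-19 data.

```python
# Hedged sketch: feature selection by random-forest importance, then regression comparison.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1500, 12))                          # candidate predictors (tests, admissions, ...)
y = 3 * X[:, 0] + X[:, 4] ** 2 + rng.normal(size=1500)   # daily new cases (toy target)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Step 1: rank predictors with a random forest and keep the top eight.
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
top8 = np.argsort(rf.feature_importances_)[::-1][:8]

# Step 2: compare regressors on the reduced feature set.
models = {
    "Linear regression": LinearRegression(),
    "Polynomial regression": make_pipeline(PolynomialFeatures(degree=2), LinearRegression()),
    "Random forest regression": RandomForestRegressor(n_estimators=200, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr[:, top8], y_tr)
    print(name, "R^2:", round(r2_score(y_te, model.predict(X_te[:, top8])), 3))
```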


2021 ◽  
Author(s):  
Catherine Ollagnier ◽  
Claudia Kasper ◽  
Anna Wallenbeck ◽  
Linda Keeling ◽  
Siavash A Bigdeli

Tail biting is a detrimental behaviour that impacts the welfare and health of pigs. Early detection of tail biting precursor signs allows preventive measures to be taken, thus avoiding the occurrence of a tail biting event. This study aimed to build a machine learning algorithm for real-time detection of upcoming tail biting outbreaks, using feeding behaviour data recorded by an electronic feeder. The prediction capacities of seven machine learning algorithms (e.g., random forest, neural networks) were evaluated on daily feeding data collected from 65 pens originating from 2 herds of grower-finisher pigs (25-100 kg), in which 27 tail biting events occurred. Data were divided into training and testing sets either by randomly splitting the data into 75% (training set) and 25% (testing set) or by randomly selecting pens to constitute the testing set. The random forest algorithm was able to predict 70% of the upcoming events with an accuracy of 94% when predicting events in pens for which it had previous data. The detection of events for unknown pens was less sensitive; the neural network model was able to detect 14% of the upcoming events with an accuracy of 63%. A machine learning algorithm based on ongoing data collection should be considered for implementation into automatic feeder systems for real-time prediction of tail biting events.
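The sketch below illustrates the two evaluation schemes the abstract contrasts, assuming synthetic pen-level feeding features: a random 75/25 split (the model has seen data from every pen) versus a pen-wise split with held-out pens. It is not the authors' feature set or model configuration.

```python
# Hedged sketch: random split vs. pen-wise split for a random forest predicting
# upcoming tail biting outbreaks. Features and labels are synthetic placeholders.
import numpy as np
from sklearn.model_selection import train_test_split, GroupShuffleSplit
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, recall_score

rng = np.random.default_rng(0)
n = 3000
X = rng.normal(size=(n, 8))                 # daily feeding features per pen-day
pen = rng.integers(0, 65, size=n)           # 65 pens
y = (X[:, 0] + 0.02 * pen + rng.normal(size=n) > 1.5).astype(int)  # upcoming outbreak flag

# Scheme 1: random 75/25 split (the model has seen data from every pen).
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
rf = RandomForestClassifier(n_estimators=300).fit(X_tr, y_tr)
print("random split   acc:", round(accuracy_score(y_te, rf.predict(X_te)), 3),
      "sensitivity:", round(recall_score(y_te, rf.predict(X_te)), 3))

# Scheme 2: pen-wise split (test pens entirely unseen during training).
tr_idx, te_idx = next(GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
                      .split(X, y, groups=pen))
rf = RandomForestClassifier(n_estimators=300).fit(X[tr_idx], y[tr_idx])
print("pen-wise split acc:", round(accuracy_score(y[te_idx], rf.predict(X[te_idx])), 3),
      "sensitivity:", round(recall_score(y[te_idx], rf.predict(X[te_idx])), 3))
```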

