Construction of Innovation and Entrepreneurship Platform Based on Deep Learning Algorithm

2021 ◽  
Vol 2021 ◽  
pp. 1-7
Author(s):  
Jian Li ◽  
Yongyan Zhao

As the national economy has entered a stage of rapid development and national economic and social development have ushered in the “14th Five-Year Plan,” the country has issued support policies to encourage and guide college students to start their own businesses. The establishment of an innovation and entrepreneurship platform therefore has a significant impact on China’s economy and gives college students great support and help in starting a business. The theory of deep learning algorithms originated from the development of artificial neural networks and is another important field of machine learning. As the computing power of computers has greatly improved, and GPUs in particular can quickly train deep neural networks, deep learning has become an important research direction. A deep learning algorithm is a nonlinear network structure and a standard modeling method in machine learning: once various patterns have been modeled, they can be recognized. This article combines theoretical and empirical research, builds on the views and findings of scholars in recent years, and introduces its basic framework and research content. Deep learning algorithms are then used to analyze the experimental data, drawing on the relevant concepts of deep learning. The article focuses on exploring the construction of an IAE (innovation and entrepreneurship) education platform and on making full use of deep learning algorithms to realize such platforms. Traditional methods must extract features through manual design, then classify those features, and finally perform recognition; deep learning, by contrast, has strong image and data processing capabilities and can quickly process large-scale data.
Research data show that 49.5% of college students and 35.2% of undergraduates expressed their interest in entrepreneurship. Entrepreneurship is a good choice to relieve employment pressure.

Author(s):  
Fawziya M. Rammo ◽  
Mohammed N. Al-Hamdani

Many language identification (LID) systems rely on language models built with machine learning (ML) approaches, and such systems typically require rather long recording periods to achieve satisfactory accuracy. This study aims to extract enough information from short recording intervals to successfully classify the spoken languages under test. The classification process is based on frames of 2-18 seconds, whereas most previous LID systems were based on much longer time frames (from 3 seconds to 2 minutes). This research defined and implemented many low-level features using MFCC (Mel-frequency cepstral coefficients). The speech files, in five languages (English, French, German, Italian, Spanish), come from voxforge.org, an open-source corpus of user-submitted audio clips in various languages, which is the source of data used in this paper. A CNN (convolutional neural network) was applied for classification, with excellent results: binary language classification reached an accuracy of 100%, and classification across the five languages reached an accuracy of 99.8%.
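The pipeline the abstract describes (splitting recordings into short fixed-length frames, then computing MFCC features for a classifier) can be sketched in plain NumPy. The frame length, FFT size, and filterbank counts below are illustrative defaults, not the paper's settings, and a production system would more likely use a library such as librosa for the MFCC step.

```python
import numpy as np

def frame_signal(signal, sr, frame_seconds):
    """Split a 1-D waveform into non-overlapping frames of frame_seconds."""
    n = int(sr * frame_seconds)
    n_frames = len(signal) // n
    return signal[: n_frames * n].reshape(n_frames, n)

def mfcc_like(frame, sr, n_fft=512, n_mels=20, n_mfcc=13):
    """Toy MFCC: power spectrum -> triangular mel filterbank -> log -> DCT."""
    spec = np.abs(np.fft.rfft(frame, n_fft)) ** 2
    freqs = np.fft.rfftfreq(n_fft, 1.0 / sr)
    mel = 2595 * np.log10(1 + freqs / 700)          # Hz -> mel scale
    mel_pts = np.linspace(mel.min(), mel.max(), n_mels + 2)
    fbank = np.zeros((n_mels, len(freqs)))
    for m in range(1, n_mels + 1):
        left, center, right = mel_pts[m - 1], mel_pts[m], mel_pts[m + 1]
        rising = (mel - left) / (center - left)
        falling = (right - mel) / (right - center)
        fbank[m - 1] = np.clip(np.minimum(rising, falling), 0, None)
    logmel = np.log(fbank @ spec + 1e-10)
    # type-II DCT decorrelates the filterbank energies into cepstral coefficients
    k = np.arange(n_mfcc)[:, None] * (np.arange(n_mels) + 0.5)[None, :]
    dct = np.cos(np.pi * k / n_mels)
    return dct @ logmel
```

The resulting coefficient vectors (one per frame) would then be stacked into the 2-D input that a CNN classifier consumes.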


2021 ◽  
Vol 12 ◽  
Author(s):  
Suk-Young Kim ◽  
Taesung Park ◽  
Kwonyoung Kim ◽  
Jihoon Oh ◽  
Yoonjae Park ◽  
...  

Purpose: The number of patients with alcohol-related problems is steadily increasing. Large-scale surveys of alcohol-related problems have been conducted. However, studies that predict hazardous drinkers and identify which factors contribute to the prediction are limited. Thus, the purpose of this study was to predict hazardous drinkers and the severity of alcohol-related problems of patients using a deep learning algorithm based on large-scale survey data.

Materials and Methods: Datasets of the National Health and Nutrition Examination Survey of South Korea (K-NHANES), a nationally representative survey of the entire South Korean population, were used to train deep learning and conventional machine learning algorithms. Datasets from 69,187 and 45,672 participants were used to predict hazardous drinkers and the severity of alcohol-related problems, respectively. Based on the degree of contribution of each variable to the deep learning model, it was possible to determine which variables contributed significantly to the prediction of hazardous drinkers.

Results: Deep learning showed higher performance than conventional machine learning algorithms. It predicted hazardous drinkers with an AUC (area under the receiver operating characteristic curve) of 0.870 (logistic regression: 0.858, linear SVM: 0.849, random forest classifier: 0.810, K-nearest neighbors: 0.740). Among the 325 variables for predicting hazardous drinkers, energy intake was the factor showing the greatest contribution to the prediction, followed by carbohydrate intake. Participants were classified into Zone I, Zone II, Zone III, and Zone IV based on the degree of alcohol-related problems, with AUCs of 0.881, 0.774, 0.853, and 0.879, respectively.

Conclusion: Hazardous drinking groups could be effectively predicted, and individuals could be classified according to the degree of alcohol-related problems, using a deep learning algorithm.
This algorithm could be used to screen people who need treatment for alcohol-related problems among the general population or hospital visitors.
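The AUC values reported above have a useful rank interpretation: the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case. A minimal NumPy sketch of that computation (not the study's code; `sklearn.metrics.roc_auc_score` is the usual library route):

```python
import numpy as np

def auc_score(y_true, y_score):
    """AUC as the probability that a random positive outranks a random
    negative, counting ties as half (normalized Mann-Whitney U)."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score, dtype=float)
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    # explicit pairwise comparison; O(n_pos * n_neg), fine for a sketch
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))
```

For example, `auc_score([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])` yields 0.75: of the four positive-negative pairs, three are ranked correctly.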


2020 ◽  
Vol 6 (2) ◽  
pp. 97-106
Author(s):  
Khan Nasik Sami ◽  
Zian Md Afique Amin ◽  
Raini Hassan

Waste management is one of the essential issues that the world is currently facing, regardless of whether a country is developed or developing. The key issue in waste segregation is that trash bins in public places overflow well before the next cleaning cycle begins. Waste segregation is done by unskilled workers, which is less effective, time-consuming, and not plausible given the volume of waste. So, we are proposing an automated waste classification system utilizing machine learning and deep learning algorithms. The goal of this task is to gather a dataset and arrange it into six classes: glass, paper, metal, plastic, cardboard, and general trash. The models we used are classification models. For our research we compared four algorithms: CNN, SVM, Random Forest, and Decision Tree. As our concern is a classification problem, we used the machine learning and deep learning algorithms that best fit classification tasks. In our experiments, CNN achieved the highest classification accuracy at around 90%, while SVM also showed excellent generalization across the various kinds of waste at 85%; Random Forest and Decision Tree achieved 55% and 65%, respectively.
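The reported per-model accuracies can be computed from held-out predictions with a couple of small helpers; a sketch using the six class names from the abstract (the labels and predictions below are illustrative, not the study's data):

```python
from collections import Counter

CLASSES = ["glass", "paper", "metal", "plastic", "cardboard", "trash"]

def accuracy(y_true, y_pred):
    """Overall fraction of correctly classified items."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

def per_class_accuracy(y_true, y_pred):
    """Recall per class: of the items truly in class c, how many were
    predicted as c. Useful to see which waste types a model confuses."""
    totals = Counter(y_true)
    hits = Counter(t for t, p in zip(y_true, y_pred) if t == p)
    return {c: hits[c] / totals[c] for c in totals}
```

Comparing four models then reduces to running each model's predictions through `accuracy` on the same test split, so the 90% / 85% / 65% / 55% figures are directly comparable.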


Computers ◽  
2021 ◽  
Vol 10 (9) ◽  
pp. 113
Author(s):  
James Coe ◽  
Mustafa Atay

The research aims to evaluate the impact of race in facial recognition across two types of algorithms. We give a general insight into facial recognition and discuss four problems related to facial recognition. We review our system design, development, and architectures and give an in-depth evaluation plan for each type of algorithm, dataset, and a look into the software and its architecture. We thoroughly explain the results and findings of our experimentation and provide analysis for the machine learning algorithms and deep learning algorithms. Concluding the investigation, we compare the results of two kinds of algorithms and compare their accuracy, metrics, miss rates, and performances to observe which algorithms mitigate racial bias the most. We evaluate racial bias across five machine learning algorithms and three deep learning algorithms using racially imbalanced and balanced datasets. We evaluate and compare the accuracy and miss rates between all tested algorithms and report that SVC is the superior machine learning algorithm and VGG16 is the best deep learning algorithm based on our experimental study. Our findings conclude the algorithm that mitigates the bias the most is VGG16, and all our deep learning algorithms outperformed their machine learning counterparts.
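The bias evaluation described above rests on comparing miss rates across demographic groups. A hypothetical helper illustrating that breakdown (not the study's code; group labels and predictions below are invented for the example):

```python
def miss_rate(y_true, y_pred):
    """Fraction of samples the system fails to identify correctly."""
    misses = sum(1 for t, p in zip(y_true, y_pred) if t != p)
    return misses / len(y_true)

def miss_rate_by_group(y_true, y_pred, groups):
    """Miss rate broken down by demographic group. Large gaps between
    groups on a balanced test set are evidence of racial bias."""
    out = {}
    for g in set(groups):
        idx = [i for i, gg in enumerate(groups) if gg == g]
        out[g] = miss_rate([y_true[i] for i in idx],
                           [y_pred[i] for i in idx])
    return out
```

Running this once on a racially imbalanced training regime and once on a balanced one, for each of the eight algorithms, reproduces the kind of comparison the study reports.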


2019 ◽  
Vol 5 (Supplement_1) ◽  
Author(s):  
David Nieuwenhuijse ◽  
Bas Oude Munnink ◽  
My Phan ◽  
Marion Koopmans

Abstract Sewage samples have a high potential benefit for surveillance of circulating pathogens because they are easy to obtain and reflect population-wide circulation of pathogens. These types of samples typically contain a great diversity of viruses. Therefore, one of the main challenges of metagenomic sequencing of sewage for surveillance is sequence annotation and interpretation. Especially for high-threat viruses, false positive signals can trigger unnecessary alerts, but true positives should not be missed. Annotation thus requires high sensitivity and specificity. To better interpret annotated reads for high-threat viruses, we attempt to determine how classifiable they are against a background of reads from closely related low-threat viruses. As an example, we attempted to distinguish reads of poliovirus, a virus of high public health importance, from other enterovirus reads. A sequence-based deep learning algorithm was used to classify reads as either polio or non-polio enterovirus. Short reads were generated from 500 polio and 2,000 non-polio enterovirus genomes as a training set. By training the algorithm on this dataset we try to determine, at the single-read level, which short reads can reliably be labeled as poliovirus and which cannot. After training the deep learning algorithm on the generated reads we were able to calculate the probability with which a read can be assigned to a poliovirus genome or a non-poliovirus genome. We show that the algorithm succeeds in classifying the reads with high accuracy. The probability of assigning a read to the correct class was related to the location in the genome to which the read mapped, which conformed with our expectations since some regions of the genome are more conserved than others. Classifying short reads of high-threat viral pathogens seems to be a promising application of sequence-based deep learning algorithms.
Also, recent developments in software and hardware have facilitated the development and training of deep learning algorithms. Further plans of this work are to characterize the hard-to-classify regions of the poliovirus genome, build larger training databases, and expand on the current approach to other viruses.
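The abstract does not specify how reads are presented to the network; a common input encoding for sequence-based deep learning is one-hot encoding of the four bases, sketched here as an assumed preprocessing step:

```python
import numpy as np

BASES = "ACGT"

def one_hot_read(read):
    """Encode a short sequencing read as a (length, 4) one-hot matrix.
    Ambiguous bases (e.g. N) become all-zero rows, so the network
    receives no spurious signal from uncalled positions."""
    m = np.zeros((len(read), 4), dtype=np.float32)
    for i, base in enumerate(read.upper()):
        j = BASES.find(base)
        if j >= 0:
            m[i, j] = 1.0
    return m
```

Matrices like these, stacked into batches, are the standard input to a 1-D convolutional classifier over genomic reads.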


2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Aan Chu ◽  
David Squirrell ◽  
Andelka M. Phillips ◽  
Ehsan Vaghefi

This systematic review was performed to identify the specifics of an optimal diabetic retinopathy deep learning algorithm, by identifying the best exemplar research studies of the field, whilst highlighting potential barriers to clinical implementation of such an algorithm. Searching five electronic databases (Embase, MEDLINE, Scopus, PubMed, and the Cochrane Library) returned 747 unique records on 20 December 2019. Predetermined inclusion and exclusion criteria were applied to the search results, yielding the 15 highest-quality publications. A manual search through the reference lists of relevant review articles found in the database search was conducted, yielding no additional records. The validation datasets of the trained deep learning algorithms were used to derive a set of optimal properties for an ideal diabetic retinopathy classification algorithm. Potential limitations to the clinical implementation of such systems were identified as lack of generalizability, limited screening scope, and data sovereignty issues. It is concluded that studies of deep learning algorithms in the context of diabetic retinopathy screening have reported impressive results. Despite this, the potential sources of limitations in such systems must be evaluated carefully. An ideal deep learning algorithm should be clinic-, clinician-, and camera-agnostic; comply with local regulation for data sovereignty, storage, privacy, and reporting; and require minimal human input.

