Student Grade Prediction Using Machine Learning in IoT Era

Author(s):  
Adedoyin A. Hussain ◽  
Kamil Dimililer

2021 ◽  
Vol 23 (Supplement_6) ◽  
pp. vi136-vi136
Author(s):  
Sara Merkaj ◽  
Ryan Bahar ◽  
W R Brim ◽  
Harry Subramanian ◽  
Tal Zeevi ◽  
...  

Abstract

PURPOSE: Reporting guidelines are crucial in model development studies to ensure the quality, transparency, and objectivity of reporting. While machine learning (ML) models have proven effective in predicting glioma grade, their potential use can only be determined if they are clearly and comprehensively reported. To our knowledge, reporting quality has not yet been evaluated for ML glioma grade prediction studies. We measured the published literature against the TRIPOD Statement, a checklist of items considered essential for the reporting of diagnostic studies.

MATERIALS AND METHODS: A literature review, in accordance with PRISMA, was conducted by a university librarian in October 2020 and verified by a second librarian in February 2021 using four databases: Cochrane trials (CENTRAL), Ovid Embase, Ovid MEDLINE, and Web of Science core collection. Keywords and controlled vocabulary included artificial intelligence, machine learning, deep learning, radiomics, magnetic resonance imaging, glioma, and glioblastoma. Publications were screened in Covidence and scored against the 27 items of the TRIPOD Statement that were relevant and applicable.

RESULTS: The search identified 11,727 candidate articles, with 1,135 articles undergoing full-text review; 86 articles met the criteria for our study. The mean adherence rate to TRIPOD was 44.4% (range: 22.2% to 66.7%), with poor reporting adherence in categories including abstract (0%), model performance (0%), title (1.2%), justification of sample size (2.3%), full model specification (2.3%), and participant demographics and missing data (7%). Studies had high reporting adherence in categories including results interpretation (100%), background (98.8%), study design/source of data (96.5%), and objectives (95.3%).

CONCLUSION: Existing publications on the use of ML in glioma grade prediction have a low overall quality of reporting. Improvements can be made in the reporting of titles and abstracts, justification of sample size, and model specification and performance.
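Purely as an illustration of the adherence arithmetic described above, a minimal Python sketch of scoring studies against a 27-item checklist and averaging the rates could look like the following; the study names and item scores are hypothetical, not the review's data.

    # Illustrative only: hypothetical scores, not the review's actual data.
    # Each study is a list of 27 booleans: True if the TRIPOD item was reported.
    studies = {
        "study_A": [True] * 12 + [False] * 15,
        "study_B": [True] * 18 + [False] * 9,
    }

    def adherence_rate(items):
        # Fraction of applicable checklist items that were reported.
        return sum(items) / len(items)

    rates = {name: adherence_rate(items) for name, items in studies.items()}
    mean_rate = sum(rates.values()) / len(rates)
    print(rates, f"mean adherence: {mean_rate:.1%}")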


IEEE Access ◽  
2021 ◽  
pp. 1-1
Author(s):  
Siti Dianah Abdul Bujang ◽  
Ali Selamat ◽  
Roliana Ibrahim ◽  
Ondrej Krejcar ◽  
Enrique Herrera-Viedma ◽  
...  

2021 ◽  
Vol 23 (Supplement_6) ◽  
pp. vi133-vi133
Author(s):  
Ryan Bahar ◽  
Sara Merkaj ◽  
W R Brim ◽  
Harry Subramanian ◽  
Tal Zeevi ◽  
...  

Abstract

PURPOSE: Machine learning (ML) technologies have demonstrated highly accurate prediction of glioma grade, though it is unclear which methods and algorithms are superior. We conducted a systematic review of the literature to identify the ML applications most promising for future research and clinical implementation.

MATERIALS AND METHODS: A literature review, in accordance with PRISMA, was conducted by a university librarian in October 2020 and verified by a second librarian in February 2021 using four databases: Cochrane trials (CENTRAL), Ovid Embase, Ovid MEDLINE, and Web of Science core collection. Keywords and controlled vocabulary included artificial intelligence, machine learning, deep learning, radiomics, magnetic resonance imaging, glioma, and glioblastoma. Publications were screened in Covidence, and TRIPOD was used for bias assessment.

RESULTS: The search identified 11,727 candidate articles, with 1,135 articles undergoing full-text review; 86 articles published since 1995 met the criteria for our study, and 79% of these were published between 2018 and 2020. The average glioma prediction accuracy of the highest-performing model in each study was 90% (range: 53% to 100%). Classical machine learning (cML) was the primary model type in 68% of studies, with deep learning (DL) used in 32%. The most common algorithm in cML studies was the Support Vector Machine (SVM), and in DL studies the Convolutional Neural Network (CNN). BRATS and TCIA datasets were used in 47% of the studies, with the average patient number of study datasets being 186 (range: 23 to 662). The average number of features used in machine learning prediction was 55 (range: 2 to 580).

CONCLUSIONS: Using multimodal sequences in ML methods delivers significantly higher grading accuracies than single sequences. Potential areas of improvement for ML glioma grade prediction studies include increasing sample size, incorporating molecular subtypes, and validating on external datasets.
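Since SVM is reported above as the most common cML algorithm, the following minimal Python sketch shows what a radiomics-based grade classifier of that kind might look like with scikit-learn. The feature matrix and labels are synthetic stand-ins (sized after the review's averages of 186 patients and 55 features), not data or code from any reviewed study.

    # Minimal sketch, not any reviewed study's pipeline: an SVM classifier on
    # synthetic radiomic feature vectors labelled low- vs high-grade glioma.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(186, 55))      # e.g. 186 patients x 55 radiomic features (synthetic)
    y = rng.integers(0, 2, size=186)    # 0 = low grade, 1 = high grade (synthetic)

    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"cross-validated accuracy: {scores.mean():.2f}")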


Author(s):  
Sushmita Gaonkar

A massive amount of data is produced across numerous domains such as education, medical science, defense, and social media. Machine Learning (ML) and Data Mining (DM), seen as subsets of Artificial Intelligence, are techniques that can automatically identify hidden patterns and improve through experience. One key application area is Educational Data Mining (EDM), which uses ML and statistics to extract knowledge from large repositories of data associated with learning activities; such learning management data is widely used to predict college grades. The proposed model is built to predict the future grade of colleges and universities based on the activities they currently execute. Machine learning algorithms are practical and effective, particularly where the individual lacks adequate domain knowledge: an ML algorithm investigates and analyzes the input data it is given, is trained on it, and infers a hypothesis for predicting future outcomes. The proposed model uses the Random Forest Regression (RFR) algorithm, which lets colleges know their likely grades in advance; if these grades are lower than anticipated, they can improve them by enriching their current activities.
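As a rough illustration of the Random Forest Regression approach described in this abstract, the following minimal scikit-learn sketch predicts an institutional grade score from a few hypothetical activity features; the feature names and values are invented for illustration and are not the author's dataset or model.

    # Minimal sketch of the Random Forest Regression idea, with hypothetical
    # institutional features and synthetic values; not the author's actual model.
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import mean_absolute_error
    from sklearn.model_selection import train_test_split

    data = pd.DataFrame({
        "research_output": [42, 10, 77, 25, 60, 33, 88, 15],
        "faculty_ratio":   [0.8, 0.4, 1.2, 0.6, 1.0, 0.7, 1.5, 0.5],
        "placement_rate":  [0.9, 0.5, 0.95, 0.7, 0.85, 0.75, 0.97, 0.6],
        "grade_score":     [3.6, 2.1, 4.5, 2.8, 4.0, 3.2, 4.8, 2.4],
    })
    X = data.drop(columns="grade_score")
    y = data["grade_score"]

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)
    print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))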


2020 ◽  
Vol 43 ◽  
Author(s):  
Myrthe Faber

Abstract Gilead et al. state that abstraction supports mental travel, and that mental travel critically relies on abstraction. I propose an important addition to this theoretical framework, namely that mental travel might also support abstraction. Specifically, I argue that spontaneous mental travel (mind wandering), much like data augmentation in machine learning, provides variability in mental content and context necessary for abstraction.


2020 ◽  
Author(s):  
Mohammed J. Zaki ◽  
Wagner Meira, Jr
Keyword(s):  

2020 ◽  
Author(s):  
Marc Peter Deisenroth ◽  
A. Aldo Faisal ◽  
Cheng Soon Ong
Keyword(s):  
