UniAssist: Implementation of a Machine Learning Based Higher Education University Recommendation System

The UniAssist project helps students who have completed their Bachelor's degree and plan to study abroad to pursue higher education, such as a Master's. Machine learning identifies appropriate universities for such students and suggests them accordingly. UniAssist recommends universities according to each student's preferred course and country, taking into account their grades, work experience, and qualifications. Students hoping to pursue higher education outside India need a way to learn about suitable universities. Collected data are converted into relevant information that is currently not easily available, such as the courses offered by their dream universities, the average tuition fee, and even the average cost of living near the chosen university, all on a single mobile-app-based software platform. This is the first phase of the admission process for every student. The machine learning algorithm used is a memory-based collaborative filtering approach using KNN with cosine similarity. A mobile software application is implemented to help and guide students towards their higher education.
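The memory-based collaborative filtering step described in the abstract can be sketched in a few lines of Python: rank previously admitted students by cosine similarity to the applicant's profile and suggest where the most similar ones went. The profile fields, student records, and university names below are invented for illustration; they are not from the paper.

```python
import math

def cosine(u, v):
    # cosine similarity between two profile vectors
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(target, students, k=2):
    # memory-based CF: find the k most similar past students
    # and suggest the universities they were admitted to
    ranked = sorted(students, key=lambda s: cosine(target, s["profile"]), reverse=True)
    return [s["university"] for s in ranked[:k]]

# hypothetical past-student records: profile = [GPA/10, years_experience, test_score]
students = [
    {"profile": [8.5, 2, 0.90], "university": "TU Munich"},
    {"profile": [6.0, 0, 0.60], "university": "State College"},
    {"profile": [8.0, 1, 0.85], "university": "ETH Zurich"},
]
print(recommend([8.2, 1, 0.88], students))
```

A production system would compute the similarity over a sparse ratings matrix rather than a dense three-feature vector, but the ranking logic is the same.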

2021 · Vol 11 (3) · pp. 92
Author(s): Mehdi Berriri, Sofiane Djema, Gaëtan Rey, Christel Dartigues-Pallez

Today, many students enter higher education courses that do not suit them and end up failing. The purpose of this study is to give counselors better knowledge so that they can offer future students courses corresponding to their profile. The second objective is to allow teaching staff to propose training courses adapted to students by anticipating their possible difficulties. This is made possible by a machine learning algorithm called Random Forest, which classifies students depending on their results. We processed the data, generated models with our algorithm, and cross-referenced the results obtained to produce a better final prediction. We tested our method on different use cases, from two classes to five. These sets of classes represent intervals of the grade average, which ranges from 0 to 20. An accuracy of 75% was achieved with a set of five classes, and up to 85% for sets of two and three classes.
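The random-forest principle behind this classification, many weak trees trained on random subsets of the data and combined by majority vote, can be illustrated with a toy ensemble of one-feature decision stumps over a 0-20 grade average. This is a minimal sketch of the idea with invented data and two classes, not the paper's actual model or feature set.

```python
import random

def train_stump(data):
    # data: list of (grade_average, class_label); pick the threshold
    # that best separates the labels (a one-feature "decision tree")
    best = (0.0, None, None)  # (accuracy, threshold, (low_label, high_label))
    values = sorted({x for x, _ in data})
    labels = sorted({y for _, y in data})
    for t in values:
        for lo in labels:
            for hi in labels:
                acc = sum((y == lo if x < t else y == hi) for x, y in data) / len(data)
                if acc > best[0]:
                    best = (acc, t, (lo, hi))
    return best[1], best[2]

def forest_predict(stumps, x):
    # majority vote over the ensemble, as in a random forest
    votes = {}
    for t, (lo, hi) in stumps:
        label = lo if x < t else hi
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

random.seed(0)
# toy data: averages on a 0-20 scale, two classes ("fail" below 10)
data = [(g, "fail" if g < 10 else "pass") for g in range(0, 20)]
stumps = [train_stump(random.sample(data, 12)) for _ in range(5)]  # random subsets
print(forest_predict(stumps, 7), forest_predict(stumps, 15))
```

A real random forest also samples features at each split and grows full trees; the stump version only keeps the bagging-plus-voting structure visible.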


2021 · Vol 186 (Supplement_1) · pp. 659-664
Author(s): David A Boone, Sarah R Chang

ABSTRACT Introduction This research has resulted in a system of sensors and software for effectively adjusting prosthetic alignment with digital numeric control. We call this suite of technologies the Prosthesis Smart Alignment Tool (ProSAT) system. Materials and Methods The ProSAT system has three components: a prosthesis-embedded sensor, an alignment tool, and an Internet-connected alignment expert system application that utilizes machine learning to analyze prosthetic alignment. All components communicate via Bluetooth. Together, they provide for numerically controlled prosthesis alignment adjustment. The ProSAT components help diagnose and guide the correction of very subtle, difficult-to-see imbalances in dynamic gait. The sensor has been cross-validated against kinetic measurement in a gait laboratory, and bench testing was performed to validate the performance of the tool while adjusting a prosthetic socket based on machine learning analyses from the software application. Results The three-dimensional alignment of the prosthetic socket was measured pre- and postadjustment from two fiducial points marked on the anterior surface of the prosthetic socket. A coordinate measuring machine was used to derive an alignment angular offset from vertical for both the pre- and postalignment conditions. Of interest is the difference in the angles between conditions: the ProSAT tool controls only the relative change made to the alignment, not an absolute position or orientation. Target alignments were calculated by the machine learning algorithm in the ProSAT software, based on input of kinetic data samples representing the precondition, where a real prosthetic misalignment was known a priori. Detected misalignments were converted by the software into a corrective adjustment of the prosthesis alignment being tested. We demonstrated that a user could successfully and quickly achieve the target postalignment change to within an average of 0.1°.
Conclusions The accuracy of a prototype ProSAT system has been validated for controlled alignment changes made by a prosthetist. Refinement of the hardware's ergonomic form and technical function, and of the mobile software application's clinical usability, is being completed with benchtop experiments in advance of further human subject testing of alignment efficiency, accuracy, and user experience.


2021
Author(s): Mustapha Abba, Chidozie Nduka, Seun Anjorin, Shukri Mohamed, Emmanuel Agogo, ...

BACKGROUND Published hypertension research has grown during the last decade due to scientific and technical advancements in the field. Given the huge amount of scientific material published in this field, identifying the relevant information is difficult. We employed topic modelling, a powerful approach for extracting useful information from enormous amounts of unstructured text. OBJECTIVE To utilize a machine learning algorithm to uncover hidden topics and subtopics from 100 years of peer-reviewed hypertension publications and identify temporal trends. METHODS The titles and abstracts of hypertension papers indexed in PubMed were examined. We used the Latent Dirichlet Allocation (LDA) model to select 20 primary topics and then ran a trend analysis to see how their popularity changed over time. RESULTS We gathered 581,750 hypertension-related research articles from 1900 to 2018 and grouped them into 20 topics, which fell into four broad categories: preclinical, risk factor, complication, and therapy studies. We discovered topics that were becoming increasingly 'hot', topics that were cooling off, and topics that were seldom published. The risk factor and major cardiovascular event topics displayed very dynamic patterns over time. The majority of the articles (71.2%) had a negative valency, followed by positive (20.6%) and neutral (8.2%) valencies. Between 1980 and 2000, negative-sentiment articles fell somewhat, while positive- and neutral-sentiment articles climbed significantly. CONCLUSIONS This machine learning methodology provided novel insights into current hypertension research trends. The method allows researchers to discover study subjects and shifts in study focus and, ultimately, captures the broader picture of the primary concepts in current hypertension research articles. CLINICALTRIAL Not applicable
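The trend analysis that separates 'hot' from cooling topics can be sketched as a least-squares slope of each topic's share of articles per year. The sketch below assumes the LDA step has already assigned one topic per article; the topic names and counts are made up for illustration.

```python
from collections import Counter

def topic_trend(assignments):
    # assignments: list of (year, topic) pairs, one per article;
    # returns {topic: slope}, where slope > 0 marks a "hot" topic
    years = sorted({y for y, _ in assignments})
    counts = Counter(assignments)
    totals = Counter(y for y, _ in assignments)
    slopes = {}
    for topic in {t for _, t in assignments}:
        # share of each year's articles assigned to this topic
        shares = [counts[(y, topic)] / totals[y] for y in years]
        n = len(years)
        mx = sum(range(n)) / n
        my = sum(shares) / n
        cov = sum((i - mx) * (s - my) for i, s in enumerate(shares))
        var = sum((i - mx) ** 2 for i in range(n))
        slopes[topic] = cov / var  # least-squares slope of topic share over time
    return slopes

toy = [(1990, "risk"), (1990, "therapy"), (2000, "risk"), (2000, "risk"),
       (2010, "risk"), (2010, "risk"), (2010, "risk"), (2010, "therapy")]
trend = topic_trend(toy)
print(sorted(trend, key=trend.get, reverse=True))  # hottest topics first
```

Using the per-year share rather than the raw count keeps the trend from being dominated by the overall growth of the literature.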


2019 · Vol 8 (4) · pp. 2299-2302

Implementing a machine learning algorithm gives you a deep and practical appreciation for how the algorithm works. This knowledge can also help you internalize the mathematical description of the algorithm by thinking of the vectors and matrices as arrays, and of the transformations on those structures as computations. Implementing a machine learning algorithm involves numerous micro-decisions, such as selecting the problem, selecting the algorithm, researching the algorithm, selecting the programming language, and unit testing, and these decisions are often missing from formal algorithm descriptions. We introduce the notion of implementing a job recommendation system (a classic machine learning problem) using two algorithms, namely KNN [3] and logistic regression [3], in more than one programming language (C++ and Python), and we present an analysis and comparison of the performance of each. We specifically focus on building a model for predicting jobs in the field of computer science, but the approach can be applied to a wide range of other areas as well. Implementers can use this paper to deduce which language will best suit their needs for accuracy along with efficiency. We use more than one algorithm to establish that our findings are not singularly applicable.
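As an illustration of the first of the two algorithms, a k-nearest-neighbour job predictor fits in a few lines of Python; a C++ counterpart would mirror the same logic. The feature vectors and job labels are invented for the sketch, not taken from the paper's dataset.

```python
import math
from collections import Counter

def knn_predict(train, x, k=3):
    # train: list of (feature_vector, job_label);
    # plain k-nearest-neighbour majority vote by Euclidean distance
    nearest = sorted(train, key=lambda p: math.dist(p[0], x))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# hypothetical candidate profiles: [backend_skill, frontend_skill]
train = [
    ([0.9, 0.1], "data_scientist"), ([0.8, 0.2], "data_scientist"),
    ([0.2, 0.9], "web_developer"), ([0.1, 0.8], "web_developer"),
]
print(knn_predict(train, [0.85, 0.15]))  # the nearest profiles decide the job
```

Sorting the whole training set is O(n log n) per query; the micro-decision a paper like this highlights is whether to accept that or to maintain a spatial index, a choice that plays out differently in C++ and Python.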


2020 · Vol 25 (5) · pp. 559-568
Author(s): Joy Dhar, Asoke Kumar Jodder

After passing the 10th class, every student is eager to know which educational program will best match their career goal for higher education. They are often confused about the best path for their higher education and need help determining the most suitable academic program to develop their careers and achieve their goals. We therefore introduce an effective recommendation system that forecasts the best educational program for each student's career development. This research uses machine learning (ML) approaches to forecast each student's best academic path based on past academic performance and to recommend the most suitable academic program for their higher studies. Data on students who have passed the class 10 standard are supplied to the automated system, and a correlation-based feature selection approach is applied to extract the relevant features for each academic program. The study trains multiple ML algorithms to forecast each student's academic performance and selects the best model for each educational program based on its performance. The selected model and related features then drive the recommendation process, suggesting the most suitable academic path for achieving each student's career goals.
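The correlation-based feature selection step might look like the following sketch: compute each feature's Pearson correlation with the target and keep those above a threshold. The subject marks, the binary target encoding, and the 0.5 cutoff are hypothetical choices for illustration, not the paper's.

```python
import math

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length sequences
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def select_features(rows, target, names, threshold=0.5):
    # keep features whose |correlation| with the target passes the threshold
    cols = list(zip(*rows))
    return [name for name, col in zip(names, cols)
            if abs(pearson(col, target)) >= threshold]

# toy class-10 marks per student; target: 1 = suited to a science stream
rows = [(90, 60, 85), (85, 55, 80), (40, 58, 45), (35, 62, 50)]
target = [1, 1, 0, 0]
print(select_features(rows, target, ["maths", "arts", "science"]))
```

Dropping weakly correlated columns before training shrinks the model's input and, as the abstract notes, the selection is repeated per academic program since different programs weight subjects differently.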


2019 · Vol 10 (1) · pp. 38-62
Author(s): Megha Rathi, Vikas Pareek

Recent advances in mobile technology and machine learning together steer us to create a mobile-based healthcare app for disease prediction. In this study, the authors develop an Android-based healthcare app that detects many kinds of diseases quickly. The authors developed a novel hybrid machine learning algorithm to provide more accurate results, combining two techniques: support vector machines (SVM) and genetic algorithms (GA). The proposed algorithm enhances accuracy while reducing the complexity and the number of attributes in the database. The algorithm is also analyzed using statistical measures such as accuracy, the confusion matrix, and the ROC curve. The pivotal intent of this research work is to create an Android-based healthcare app that predicts disease when provided with certain details. For a disease like cancer, for which a series of tests is required for confirmation, the app can detect cancer quickly, helping doctors start the right course of treatment right away. Further, the app also recommends a diet fitting the patient's profile.
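The hybrid SVM-GA idea, a genetic algorithm searching for the attribute subset that maximizes a classifier's accuracy, can be sketched as below. To keep the sketch self-contained, a leave-one-out 1-nearest-neighbour score stands in for the SVM, and all data, labels, and GA settings are invented.

```python
import random

def fitness(mask, features, labels):
    # stand-in for the SVM score: leave-one-out accuracy of a
    # 1-nearest-neighbour classifier restricted to the masked features
    idx = [i for i, bit in enumerate(mask) if bit]
    if not idx:
        return 0.0
    def dist(a, b):
        return sum((a[i] - b[i]) ** 2 for i in idx)
    correct = 0
    for j, (x, y) in enumerate(zip(features, labels)):
        others = [(dist(x, u), lab)
                  for m, (u, lab) in enumerate(zip(features, labels)) if m != j]
        correct += min(others)[1] == y
    return correct / len(labels)

def ga_select(features, labels, n_feat, gens=20, pop=8, seed=1):
    # genetic algorithm over attribute bitmasks: selection, crossover, mutation
    random.seed(seed)
    population = [[random.randint(0, 1) for _ in range(n_feat)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda m: fitness(m, features, labels), reverse=True)
        parents = population[: pop // 2]          # keep the fittest half
        children = []
        for p in parents:
            cut = random.randrange(1, n_feat)     # crossover with the best parent
            child = parents[0][:cut] + p[cut:]
            child[random.randrange(n_feat)] ^= 1  # mutate one random bit
            children.append(child)
        population = parents + children
    return max(population, key=lambda m: fitness(m, features, labels))

# toy records: feature 0 is informative, feature 1 is noise
features = [(0, 5), (0, 9), (1, 5), (1, 9)]
labels = ["healthy", "healthy", "sick", "sick"]
mask = ga_select(features, labels, n_feat=2)
print(mask, fitness(mask, features, labels))
```

In the paper's setup the fitness function would wrap an SVM trained on the selected attributes, which is exactly how the GA reduces the attribute count while preserving accuracy.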


2021 · Vol 7 (2) · pp. 71-78
Author(s): Timothy Dicky, Alva Erwin, Heru Purnomo Ipung

The purpose of this research is to develop a job recommender system based on the Hadoop MapReduce framework that scales as it processes big data. A machine learning algorithm is implemented inside the job recommender to produce accurate job recommendations. The project begins by collecting sample data to build an accurate job recommender with a centralized program architecture. A job recommender with a distributed program architecture is then implemented using Hadoop MapReduce and deployed to a Hadoop cluster. After the implementation, both systems are tested on a large set of applicant and job data, and the time required to process the data is recorded and analyzed. Based on the experiments, we conclude that the recommender produces the most accurate results when the cosine similarity measure is used inside the algorithm. The centralized job recommender processes the data faster than the distributed cluster version, but as the data grow, the centralized system eventually lacks the capacity to process them, while the distributed cluster recommender scales with the size of the data.
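The map/reduce structure of such a recommender can be mimicked in plain Python: the mapper scores each (applicant, job) pair with cosine similarity, and the reducer keeps the best-scoring job per applicant. The records and skill vectors are invented; a real deployment would express the same two functions as Hadoop MapReduce jobs.

```python
import math
from itertools import groupby

def mapper(record):
    # record: (applicant_id, applicant_skills, job_id, job_skills);
    # emit one (key, value) per pair, scored by cosine similarity
    a_id, a_vec, j_id, j_vec = record
    dot = sum(x * y for x, y in zip(a_vec, j_vec))
    norm = math.sqrt(sum(x * x for x in a_vec)) * math.sqrt(sum(y * y for y in j_vec))
    yield a_id, (dot / norm if norm else 0.0, j_id)

def reducer(key, values):
    # keep the best-scoring job for each applicant
    yield key, max(values)[1]

# driver: the sort between map and reduce plays the role of Hadoop's shuffle
records = [
    ("alice", [1, 1, 0], "backend", [1, 1, 0]),
    ("alice", [1, 1, 0], "design", [0, 0, 1]),
    ("bob", [0, 1, 1], "design", [0, 0, 1]),
    ("bob", [0, 1, 1], "backend", [1, 1, 0]),
]
mapped = sorted(kv for r in records for kv in mapper(r))
results = {k: job
           for k, vs in groupby(mapped, key=lambda kv: kv[0])
           for _, job in reducer(k, [v for _, v in vs])}
print(results)
```

Because the mapper is stateless and the reducer only sees one applicant's values at a time, the same code parallelizes across a cluster, which is the scalability property the abstract measures.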


Author(s): Ana Maria Magdalena Saldana-Perez, Marco Antonio Moreno-Ibarra, Miguel Jesus Torres-Ruiz

It is interesting to exploit user-generated content (UGC) and use it to infer new data; volunteered geographic information (VGI) is a concept derived from UGC whose main value lies in its continuously updated data. The present approach exploits VGI by collecting data from a social network and an RSS service. The short texts collected from the social network are written in Spanish; text mining and information retrieval processes are applied to the data to remove special characters and extract relevant information about traffic events in the study area, and the data are then geocoded. The texts are classified by a machine learning algorithm into five classes, each representing a specific traffic event or situation.
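Since the abstract does not name the classifier, here is a minimal multinomial Naive Bayes sketch for short traffic texts, a common baseline for this task. The example tweets (rendered in English), the labels, and the use of only two classes are invented for the sketch; the paper classifies Spanish text into five classes.

```python
import math
from collections import Counter, defaultdict

def train_nb(samples):
    # samples: list of (text, label); multinomial Naive Bayes counts
    word_counts = defaultdict(Counter)
    class_counts = Counter()
    vocab = set()
    for text, label in samples:
        words = text.lower().split()
        word_counts[label].update(words)
        class_counts[label] += 1
        vocab.update(words)
    return word_counts, class_counts, vocab

def classify(model, text):
    # pick the class with the highest log-probability, add-one smoothed
    word_counts, class_counts, vocab = model
    total = sum(class_counts.values())
    best, best_lp = None, -math.inf
    for label in class_counts:
        lp = math.log(class_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in text.lower().split():
            lp += math.log((word_counts[label][w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# toy short texts labelled with traffic-event classes
samples = [
    ("accident on the avenue two cars", "accident"),
    ("car accident near downtown", "accident"),
    ("road closed by protest march", "closure"),
    ("street closed for repairs", "closure"),
]
model = train_nb(samples)
print(classify(model, "accident on the road"))
```

The same training loop extends to five classes unchanged, and the special-character stripping mentioned in the abstract would happen before the `split()` tokenization.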

