Learn2Write: Augmented Reality and Machine Learning-Based Mobile App to Learn Writing

Computers ◽  
2021 ◽  
Vol 11 (1) ◽  
pp. 4
Author(s):  
Md. Nahidul Islam Opu ◽  
Md. Rakibul Islam ◽  
Muhammad Ashad Kabir ◽  
Md. Sabir Hossain ◽  
Mohammad Mainul Islam

Augmented reality (AR) has been widely used in education, particularly for child education. This paper presents the design and implementation of a novel mobile app, Learn2Write, that uses machine learning techniques and augmented reality to teach alphabet writing. The app has two main features: (i) guided learning to teach users how to write the alphabet and (ii) on-screen and AR-based handwriting testing using machine learning. In on-screen testing, a learner writes on the mobile screen, whereas AR-based testing evaluates writing on paper or a board in a real-world environment. We implement a novel approach that uses machine learning in AR-based testing to detect an alphabet written on a board or paper. The handwritten alphabet is detected using our machine learning model, after which a 3D model of that alphabet appears on the screen along with its pronunciation/sound. The key benefit of our approach is that it allows the learner to use a handwritten alphabet. Because the app uses marker-less augmented reality, it does not require a static image as a marker. The app was built with the ARCore SDK for Unity. We further evaluated and quantified the performance of our app on multiple devices.
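To make the recognition step concrete, the following is a minimal sketch, not the authors' published implementation, of a small convolutional letter classifier of the kind such an app could call after cropping a handwritten character from the camera frame; the architecture, the 28x28 input size, and the A-Z label set are assumptions, since the abstract does not specify the model.

```python
# Hypothetical sketch of a handwritten-letter classifier (not the Learn2Write model).
import numpy as np
import tensorflow as tf

NUM_CLASSES = 26  # assumed label set: uppercase A-Z


def build_letter_cnn() -> tf.keras.Model:
    """Small CNN over 28x28 grayscale crops of a handwritten letter."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(28, 28, 1)),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])


def predict_letter(model: tf.keras.Model, crop: np.ndarray) -> str:
    """Map a normalised [0, 1] 28x28 crop to a letter 'A'..'Z'."""
    probs = model.predict(crop.reshape(1, 28, 28, 1), verbose=0)[0]
    return chr(ord("A") + int(np.argmax(probs)))


model = build_letter_cnn()  # in practice, load trained weights before predicting
```

In a deployment like the one described, such a model would typically be exported to a mobile runtime and invoked from the Unity/ARCore side once the written character has been segmented from the board or paper.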

Author(s):  
P. Priakanth ◽  
S. Gopikrishnan

The idea of an intelligent, independent learning machine has fascinated humans for decades. The philosophy behind machine learning is to automate the creation of analytical models so that algorithms can learn continuously from available data. Since IoT will be among the major sources of new data, data science will contribute greatly to making IoT applications more intelligent. Machine learning can be applied where the desired outcome is known (guided learning), where it is not known beforehand (unguided learning), or where learning results from interaction between a model and its environment (reinforcement learning). This chapter answers the questions: How can machine learning algorithms be applied to IoT smart data? What is the taxonomy of machine learning algorithms that can be adopted in IoT? And what are the characteristics of real-world IoT data that require data analytics?
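As a hedged illustration of the guided/unguided distinction mentioned above (not an example from the chapter), the sketch below fits a supervised classifier and an unsupervised clustering model to the same synthetic IoT-style sensor readings; the data values and model choices are assumptions for demonstration only.

```python
# Synthetic illustration of guided (supervised) vs. unguided (unsupervised) learning
# on IoT-style sensor data; values and models are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
normal = rng.normal([21.0, 45.0], 1.5, size=(200, 2))     # temperature, humidity
overheat = rng.normal([38.0, 30.0], 1.5, size=(200, 2))
X = np.vstack([normal, overheat])
y = np.array([0] * 200 + [1] * 200)                       # labels known -> guided

clf = RandomForestClassifier(random_state=0).fit(X, y)                      # guided
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)  # unguided
print(clf.predict([[37.5, 31.0]]), clusters[:5])
```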


2022 ◽  
pp. 220-249
Author(s):  
Md Ariful Haque ◽  
Sachin Shetty

The financial sector is a lucrative cyber-attack target because of the immediate financial gain attacks can yield. As a result, financial institutions face challenges in developing systems that can automatically identify security breaches and separate fraudulent transactions from legitimate ones. Today, organizations widely use machine learning techniques to identify fraudulent behavior in customers' transactions. However, applying machine learning is often challenging because financial institutions' confidentiality policies prevent them from sharing customer transaction data. This chapter discusses some crucial challenges of handling cybersecurity and fraud in the financial industry and of building machine learning-based models to address those challenges. The authors use an open-source e-commerce transaction dataset to illustrate the forensic process by building a machine learning model that classifies fraudulent transactions. Overall, the chapter focuses on how machine learning models can help detect and prevent fraudulent activities in the financial sector in the age of cybersecurity.
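A hedged sketch of the kind of fraud classifier the chapter describes is given below; the file name transactions.csv, the is_fraud label column, and the choice of a class-weighted random forest are illustrative assumptions, not the authors' exact pipeline.

```python
# Illustrative fraud-detection classifier on an (assumed) e-commerce transaction dump.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

df = pd.read_csv("transactions.csv")        # hypothetical open e-commerce dataset
X = df.drop(columns=["is_fraud"])           # assumed numeric feature columns
y = df["is_fraud"]                          # assumed binary label

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)

# class_weight="balanced" compensates for how rare fraudulent transactions are.
clf = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=42)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test), digits=3))
```

Precision and recall on the fraud class, rather than raw accuracy, are the figures of merit here, since legitimate transactions vastly outnumber fraudulent ones.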


2020 ◽  
Vol 9 (6) ◽  
pp. 379 ◽  
Author(s):  
Eleonora Grilli ◽  
Fabio Remondino

The use of machine learning techniques for point cloud classification has been investigated extensively in the geospatial community over the last decade, while in the cultural heritage field it has only recently started to be explored. The high complexity and heterogeneity of 3D heritage data, the diversity of possible scenarios, and the different classification purposes of each case study make it difficult to assemble a large training dataset for learning purposes. An important practical issue that has not yet been explored is the application of a single machine learning model across large and diverse architectural datasets. This paper tackles this issue by presenting a methodology that successfully generalises a random forest model, trained on a specific dataset, to unseen scenarios. This is achieved by searching for the features best suited to identifying the classes of interest (e.g., wall, window, roof, and column).
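The generalisation experiment can be pictured with the hedged sketch below: a random forest trained on per-point geometric features from one scene and applied unchanged to a different, unseen building. The feature files, their contents, and the hyperparameters are placeholders, not the authors' exact feature set.

```python
# Illustrative cross-scene generalisation test for point cloud classification.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Assumed precomputed per-point features (e.g., planarity, verticality,
# height above ground) with shape (n_points, n_features).
train_features = np.load("scene_A_features.npy")   # hypothetical training scene
train_labels = np.load("scene_A_labels.npy")       # wall / window / roof / column ids

rf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
rf.fit(train_features, train_labels)

# Generalisation: predict classes on a different, unseen heritage building.
unseen_features = np.load("scene_B_features.npy")  # hypothetical unseen scene
predicted = rf.predict(unseen_features)
```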


Author(s):  
Afshin Rahimi ◽  
Mofiyinoluwa O. Folami

As the number of satellite launches increases each year, it is only natural that interest in the safety and monitoring of these systems increases as well. However, as a system becomes more complex, generating a high-fidelity model that accurately describes it becomes complicated, so employing a data-driven method can prove more beneficial for such applications. This research proposes a novel data-driven machine learning approach to fault detection and isolation in nonlinear systems, with a case study of an in-orbit, closed-loop-controlled satellite with reaction wheels as actuators. High-fidelity models of the 3-axis-controlled satellite are employed to generate data for both nominal and faulty conditions of the reaction wheels. The generated simulation data is used as input for the isolation method, after which it is pre-processed through feature extraction in the temporal, statistical, and spectral domains. The pre-processed features are then fed into various machine learning classifiers. Isolation results are validated with cross-validation, and model parameters are tuned using hyperparameter optimization. To validate the robustness of the proposed method, it is tested on three characterized datasets and three reaction wheel configurations: standard four-wheel, three-orthogonal, and pyramid. The results show superior isolation accuracy for the system under study compared to previous studies using alternative methods (Rahimi & Saadat, 2019, 2020).
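The feature-extraction-plus-classification stage can be summarised with the hedged sketch below; the telemetry files, the specific temporal/statistical/spectral features, and the SVM classifier with grid search are assumptions chosen to mirror the workflow described, not the study's exact configuration.

```python
# Illustrative fault-isolation pipeline for reaction wheel telemetry windows.
import numpy as np
from scipy import stats
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC


def extract_features(window: np.ndarray) -> np.ndarray:
    """Temporal, statistical, and spectral features from one telemetry window."""
    spectrum = np.abs(np.fft.rfft(window))
    return np.array([
        window.mean(), window.std(),
        stats.skew(window), stats.kurtosis(window),
        np.ptp(window),        # temporal peak-to-peak range
        spectrum.argmax(),     # dominant frequency bin
        spectrum.max(),        # dominant spectral magnitude
    ])


windows = np.load("wheel_speed_windows.npy")   # hypothetical (n_windows, n_samples)
labels = np.load("fault_labels.npy")           # nominal vs. faulty-wheel class ids
X = np.array([extract_features(w) for w in windows])

# Hyperparameter tuning and cross-validated isolation accuracy.
grid = GridSearchCV(SVC(), {"C": [1, 10, 100], "gamma": ["scale", 0.01]}, cv=5)
grid.fit(X, labels)
print(grid.best_params_, cross_val_score(grid.best_estimator_, X, labels, cv=5).mean())
```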


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Choudhary Sobhan Shakeel ◽  
Saad Jawaid Khan ◽  
Beenish Chaudhry ◽  
Syeda Fatima Aijaz ◽  
Umer Hassan

Alopecia areata is an autoimmune disorder that results in hair loss. The latest worldwide statistics show that alopecia areata has a prevalence of 1 in 1000 and an incidence of 2%. Machine learning techniques have demonstrated potential in different areas of dermatology and may play a significant role in classifying alopecia areata for better prediction and diagnosis. We propose a framework for classifying healthy hair and alopecia areata. We used 200 images of healthy hair from the Figaro1k dataset and 68 images of alopecia areata from the Dermnet dataset; the images underwent preprocessing, including enhancement and segmentation, followed by extraction of texture, shape, and color features. Two classification techniques, support vector machine (SVM) and k-nearest neighbor (KNN), were then applied to train a machine learning model on 70% of the images, with the remaining images used for testing. With 10-fold cross-validation, SVM and KNN achieved accuracies of 91.4% and 88.9%, respectively, and a paired-sample t-test showed the difference to be significant (p < 0.001). The findings of our study demonstrate potential for better prediction in the field of dermatology.
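The classification stage of this framework can be illustrated with the hedged sketch below; the feature and label files stand in for the texture, shape, and color features described in the abstract, and the kernel/neighbour settings are assumptions rather than the study's tuned values.

```python
# Illustrative SVM vs. KNN comparison with a 70/30 split and 10-fold cross-validation.
import numpy as np
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X = np.load("hair_features.npy")   # hypothetical (n_images, n_features) feature matrix
y = np.load("hair_labels.npy")     # 0 = healthy hair, 1 = alopecia areata

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=1)

for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("KNN", KNeighborsClassifier(n_neighbors=5))]:
    cv_scores = cross_val_score(clf, X_train, y_train, cv=10)
    test_acc = clf.fit(X_train, y_train).score(X_test, y_test)
    print(f"{name}: 10-fold CV accuracy {cv_scores.mean():.3f}, test accuracy {test_acc:.3f}")
```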

