Automatic Generation of Learning Objects Using Text Summarizer Based on Deep Learning Models

2021 ◽  
Author(s):  
Leandro Massetti Ribeiro Oliveira ◽  
Antonio José G. Busson ◽  
Carlos de Salles S. Neto ◽  
Gabriel N. P. dos Santos ◽  
Sérgio Colcher

A learning object (LO) is an entity, digital or not, that can be used, reused, or referenced during technology-supported teaching and learning. Although LOs are usually multimedia, with audio, video, text, and images synchronized with each other, they can also disseminate knowledge through purely textual educational content. However, producing such texts is costly in time and effort, which motivates new ways of generating this content. This article presents a solution for generating text-based LOs through summaries produced by deep learning models. The work was evaluated in a supervised experiment in which volunteers rated computing education texts generated by three types of summarizers. The results are positive and allow us to compare the performance of summarizers as generators of text-based LOs. The findings also suggest that post-processing the models' output can improve the readability of the generated content.
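As a rough illustration of this kind of pipeline (not the authors' implementation), the sketch below generates a summary with an off-the-shelf deep learning summarizer and applies a light post-processing pass for readability; the model name and cleanup heuristics are illustrative assumptions.

```python
# Minimal sketch, assuming an off-the-shelf summarization checkpoint:
# produce a text-based LO from source material, then post-process for readability.
from transformers import pipeline

def summarize_to_lo(source_text: str) -> str:
    # Any seq2seq summarization checkpoint could be used; t5-small is just an example.
    summarizer = pipeline("summarization", model="t5-small")
    raw = summarizer(source_text, max_length=150, min_length=40, do_sample=False)
    return postprocess(raw[0]["summary_text"])

def postprocess(summary: str) -> str:
    # Readability pass: drop an unfinished trailing sentence and fix capitalization.
    sentences = [s.strip() for s in summary.split(". ") if s.strip()]
    if sentences and not summary.rstrip().endswith((".", "!", "?")):
        sentences = sentences[:-1]  # discard a truncated final sentence
    sentences = [s[0].upper() + s[1:] for s in sentences]
    return ". ".join(sentences).rstrip(".") + "."
```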

Author(s):  
Seung Youn (Yonnie) Chyung ◽  
Joann Swanson

While the concept of utilizing learning objects has been addressed in instructional design for some time, slightly different definitions of the term "learning object" are found in the literature. For example, the Institute of Electrical and Electronics Engineers (IEEE) (2005) defines a learning object as "any entity, digital or non-digital, which can be used, re-used or referenced during technology supported learning." Wiley (2000) similarly defines a learning object as "any digital resource that can be reused to support learning" (p. 7). Barritt and Alderman (2004) offer a working definition of a learning object as "an independent collection of content and media elements, a learning approach (interactivity, learning architecture, context), and metadata (used for storage and searching)" (pp. 7-8). Merrill (1996) uses a different term, the "knowledge object," which consists of a set of predefined elements, each of which is "instantiated by way of a multimedia resource (text, audio, video, graphic) or a pointer to another knowledge object" (p. 32).


Author(s):  
Anustup Mukherjee ◽  
Harjeet Kaur

Artificial intelligence combined with computer vision is creating a new genre in the detection industry. Here, AI uses computer vision to build an advanced learning management system (LMS) that detects student emotions during online classes and interviews and judges their understanding and concentration level. It also generates automated content according to their needs. This LMS not only assesses a student's audio, video, and images; it also assesses voice tone. Through these assessments, the AI model estimates how much a student is learning, along with their effectiveness, strengths, and weaknesses. In this chapter, the deep learning models VGGNet and AlexNet are used for the computer vision components of the LMS. The resulting architecture can act as a virtual teacher that provides parental-style guidance to students.
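A minimal, hypothetical sketch of one building block such a system might use is shown below: fine-tuning a pretrained VGG16 to classify student emotion from webcam face crops. The label set, tensor shapes, and training settings are assumptions for illustration, not the chapter's actual architecture.

```python
# Illustrative sketch, assuming face crops are already detected and resized.
import torch
import torch.nn as nn
from torchvision import models

EMOTIONS = ["engaged", "confused", "bored", "neutral"]  # hypothetical label set

model = models.vgg16(weights="IMAGENET1K_V1")         # ImageNet-pretrained backbone
model.classifier[6] = nn.Linear(4096, len(EMOTIONS))   # replace final layer for our classes

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)

def train_step(frames: torch.Tensor, labels: torch.Tensor) -> float:
    """frames: (N, 3, 224, 224) face crops; labels: (N,) emotion indices."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(frames), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```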


2021 ◽  
Author(s):  
Kenji Fukumoto ◽  
Rinji Suzuki ◽  
Hiroyuki Terada ◽  
Masafumi Bato ◽  
Akiyo Nadamoto

2020 ◽  
Author(s):  
Dean Sumner ◽  
Jiazhen He ◽  
Amol Thakkar ◽  
Ola Engkvist ◽  
Esben Jannik Bjerrum

SMILES randomization, a form of data augmentation, has previously been shown to increase the performance of deep learning models compared to non-augmented baselines. Here, we propose a novel data augmentation method we call "Levenshtein augmentation", which considers local SMILES sub-sequence similarity between reactants and their respective products when creating training pairs. The performance of Levenshtein augmentation was tested using two state-of-the-art models: a transformer and a sequence-to-sequence recurrent neural network with attention. Levenshtein augmentation yielded increased performance over both non-augmented data and data augmented with conventional SMILES randomization when used to train the baseline models. Furthermore, Levenshtein augmentation seemingly results in what we define as "attentional gain": an enhancement in the underlying network's ability to recognize molecular motifs.
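The following sketch illustrates the general idea under stated assumptions (it is not the authors' pipeline): among several randomized SMILES for a reactant, keep the variant whose Levenshtein edit distance to the product SMILES is smallest, so the training pair shares more local sub-sequences.

```python
# Hedged sketch of Levenshtein-guided pairing; parameters are illustrative.
from rdkit import Chem

def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance between two strings.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

def best_randomized_pair(reactant_smiles: str, product_smiles: str, n: int = 20):
    # Generate n randomized SMILES and keep the one closest to the product string.
    mol = Chem.MolFromSmiles(reactant_smiles)
    variants = {Chem.MolToSmiles(mol, doRandom=True) for _ in range(n)}
    best = min(variants, key=lambda s: levenshtein(s, product_smiles))
    return best, product_smiles  # training pair (input, target)
```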


2019 ◽  
Author(s):  
Mohammad Rezaei ◽  
Yanjun Li ◽  
Xiaolin Li ◽  
Chenglong Li

Introduction: The ability to discriminate among ligands binding to the same protein target in terms of their relative binding affinity lies at the heart of structure-based drug design. Any improvement in the accuracy and reliability of binding affinity prediction methods decreases the discrepancy between experimental and computational results. Objectives: The primary objectives were to find the most relevant features affecting binding affinity prediction, to minimize manual feature engineering, and to improve the reliability of binding affinity prediction using efficient deep learning models by tuning the model hyperparameters. Methods: The binding site of each target protein was represented as a grid box around its bound ligand. Both binary and distance-dependent occupancies were examined for how an atom affects its neighboring voxels in this grid. A combination of features, including ANOLEA, ligand elements, and Arpeggio atom types, was used to represent the input. An efficient convolutional neural network (CNN) architecture, DeepAtom, was developed, trained, and tested on the PDBbind v2016 dataset. Additionally, an extended benchmark dataset was compiled to train and evaluate the models. Results: The best DeepAtom model showed improved accuracy in binding affinity prediction on the PDBbind core subset (Pearson's R = 0.83) and outperforms recent state-of-the-art models in this field. In addition, when the DeepAtom model was trained on our proposed benchmark dataset, it yielded a higher correlation than the baseline, which confirms the value of our approach. Conclusions: The promising results for the predicted binding affinities are expected to pave the way for embedding deep learning models in virtual screening and rational drug design.
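As a hedged illustration of the grid representation (not DeepAtom itself), the sketch below places binding-site atoms into a cubic grid around a center point with binary occupancy, one channel per atom type; the grid size, resolution, and channel layout are assumptions.

```python
# Minimal voxelization sketch; atom typing and box parameters are illustrative.
import numpy as np

ATOM_TYPES = ["C", "N", "O", "S", "other"]  # hypothetical channel layout

def voxelize(coords, elements, center, box=20.0, resolution=1.0):
    """coords: (N, 3) atom positions in Angstroms; returns (channels, D, D, D) binary grid."""
    dim = int(box / resolution)
    grid = np.zeros((len(ATOM_TYPES), dim, dim, dim), dtype=np.float32)
    for xyz, elem in zip(coords, elements):
        # Shift into the box frame and convert to voxel indices.
        idx = np.floor((xyz - center + box / 2) / resolution).astype(int)
        if np.all((idx >= 0) & (idx < dim)):
            channel = ATOM_TYPES.index(elem) if elem in ATOM_TYPES else len(ATOM_TYPES) - 1
            grid[channel, idx[0], idx[1], idx[2]] = 1.0  # binary occupancy
    return grid
```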


2020 ◽  
Author(s):  
Saeed Nosratabadi ◽  
Amir Mosavi ◽  
Puhong Duan ◽  
Pedram Ghamisi ◽  
Ferdinand Filip ◽  
...  

This paper provides a state-of-the-art investigation of advances in data science in emerging economic applications. The analysis was performed on novel data science methods in four individual classes: deep learning models, hybrid deep learning models, hybrid machine learning models, and ensemble models. Application domains include a wide and diverse range of economics research, from the stock market, marketing, and e-commerce to corporate banking and cryptocurrency. The PRISMA method, a systematic literature review methodology, was used to ensure the quality of the survey. The findings reveal that the trends follow the advancement of hybrid models, which, based on the accuracy metric, outperform other learning algorithms. It is further expected that the trends will converge toward the advancement of sophisticated hybrid deep learning models.


Author(s):  
Yuejun Liu ◽  
Yifei Xu ◽  
Xiangzheng Meng ◽  
Xuguang Wang ◽  
Tianxu Bai

Background: Medical imaging plays an important role in the diagnosis of thyroid diseases. In the field of machine learning, multi-dimensional deep learning algorithms are widely used in image classification and recognition and have achieved great success. Objective: A method based on multi-dimensional deep learning is employed for the auxiliary diagnosis of thyroid diseases from SPECT images, and the performances of different deep learning models are evaluated and compared. Methods: Thyroid SPECT images of three types are collected: hyperthyroidism, normal, and hypothyroidism. In pre-processing, the thyroid region of interest is segmented and the number of data samples is expanded. Four models (a standard CNN, Inception, VGG16, and an RNN) are used to evaluate deep learning methods. Results: The deep learning based methods show good classification performance, with accuracy of 92.9%–96.2% and AUC of 97.8%–99.6%. The VGG16 model performs best, with an accuracy of 96.2% and an AUC of 99.6%; in particular, the VGG16 model with a changing learning rate works best. Conclusion: The four deep learning models (standard CNN, Inception, VGG16, and RNN) are effective for classifying thyroid diseases from SPECT images. The accuracy of the deep learning based assisted diagnostic method is higher than that of other methods reported in the literature.
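A minimal sketch of this kind of setup (not the authors' code) is shown below: a VGG16 backbone adapted to the three SPECT classes, trained with a learning rate that decays when validation loss plateaus; the image size, optimizer settings, and callback thresholds are illustrative assumptions.

```python
# Illustrative sketch, assuming SPECT images are preprocessed to 224x224 RGB.
import tensorflow as tf

CLASSES = ["hyperthyroidism", "normal", "hypothyroidism"]

base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
out = tf.keras.layers.Dense(len(CLASSES), activation="softmax")(x)
model = tf.keras.Model(base.input, out)

model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# "Changing learning rate": halve the rate when validation loss plateaus.
reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss",
                                                 factor=0.5, patience=3)
# model.fit(train_ds, validation_data=val_ds, epochs=50, callbacks=[reduce_lr])
```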


2020 ◽  
Vol 15 ◽  
Author(s):  
Deeksha Saxena ◽  
Mohammed Haris Siddiqui ◽  
Rajnish Kumar

Background: Deep learning (DL) is an artificial-neural-network-driven framework with multiple levels of representation, in which non-linear modules are combined so that representations are transformed from a lower level to a more abstract one. Though DL is used widely in almost every field, it has brought a particular breakthrough in the biological sciences, where it is used in disease diagnosis and clinical trials. DL can be combined with machine learning, but at times each is used on its own. DL is often a better choice than conventional machine learning because it does not require an intermediate feature-extraction step and works well with larger datasets. DL is one of the most discussed approaches among scientists and researchers today for diagnosing and solving various biological problems. However, deep learning models need further refinement and experimental validation to become more productive. Objective: To review the available DL models and datasets that are used in disease diagnosis. Methods: Available DL models and their applications in disease diagnosis were reviewed, discussed, and tabulated. Types of datasets and some of the popular disease-related data sources for DL were highlighted. Results: We analyzed the frequently used DL methods and data types and discussed some of the recent deep learning models used for solving different biological problems. Conclusion: The review presents useful insights about DL methods, data types, and the selection of DL models for disease diagnosis.

