Implementation of Structured Inquiry Based Model Learning Toward Students’ Understanding of Geometry

2014 · Vol 1 (1) · pp. 75
Author(s): Kalbin Salim, Dayang Hjh Tiawa
2021
Author(s): Su-Jeong Park, Soon-Seo Park, Han-Lim Choi, Kyeong-Soo An, Young-Gon Kim

2015 · Vol 7 (1) · pp. 29-38
Author(s): Esti Munafiah, Agus Basir Ali Akbar S

The objective of this study is to examine the learning process in a chemistry course using the LCC model. The study used classroom action research with three cycles, each implementing planning, acting, observing, and reflecting. The subjects were 40 students of grade 8E of MTsN Blitar in the academic year 2009/2010. The findings are as follows: (1) Cycle I: student participation 62.5%, mean worksheet score 60, mean quiz score 41.7, and 3 students achieving mastery learning; (2) Cycle II: student participation 86.6%, mean worksheet score 81, mean quiz score 72.38, and 26 students achieving mastery learning; (3) Cycle III: student participation 100%, mean worksheet score 89, mean quiz score 72.44, and 39 students achieving mastery learning.


2021 · Vol 15 (6) · pp. 1-20
Author(s): Dongsheng Li, Haodong Liu, Chao Chen, Yingying Zhao, Stephen M. Chu, ...

In collaborative filtering (CF) algorithms, the optimal models are usually learned by globally minimizing the empirical risk averaged over all the observed data. However, the global models are often obtained via a performance tradeoff among users/items, i.e., not all users/items are perfectly fitted by the global models, due to the hard non-convex optimization problems in CF algorithms. Ensemble learning can address this issue by learning multiple diverse models, but it usually suffers from efficiency issues on large datasets or with complex algorithms. In this article, we keep the intermediate models obtained during global model learning as snapshot models, and then adaptively combine the snapshot models for individual user-item pairs using a memory network-based method. Empirical studies on three real-world datasets show that the proposed method can extensively and significantly improve accuracy (by up to 15.9% relative) when applied to a variety of existing collaborative filtering methods.
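The adaptive combination described above can be pictured with a small sketch: predictions from the snapshot models for a single user-item pair are blended with softmax attention weights computed from the pair's features. All names, shapes, and the plain NumPy attention here are illustrative assumptions, not the authors' memory network implementation.

```python
import numpy as np

# Hypothetical sketch: blend snapshot-model predictions for one user-item
# pair with attention weights; not the paper's memory network itself.
def combine_snapshots(snapshot_preds, pair_embedding, snapshot_keys):
    """snapshot_preds: (K,) predictions of the K snapshot models for the pair.
    pair_embedding:  (d,) feature vector describing the user-item pair.
    snapshot_keys:   (K, d) learned keys, one per snapshot model."""
    scores = snapshot_keys @ pair_embedding        # affinity of the pair to each snapshot
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                       # softmax attention over snapshots
    return float(weights @ snapshot_preds)         # adaptively weighted prediction

# Toy usage: three snapshots, 4-dimensional pair embedding.
rng = np.random.default_rng(0)
preds = np.array([3.2, 3.8, 4.1])                  # ratings predicted by each snapshot
print(combine_snapshots(preds, rng.normal(size=4), rng.normal(size=(3, 4))))
```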


Author(s): Gretel Liz De la Peña Sarracén, Paolo Rosso

The proliferation of harmful content on social media affects a large part of the user community. Therefore, several approaches have emerged to control this phenomenon automatically. However, this is still quite a challenging task. In this paper, we explore offensive language as a particular case of harmful content and focus our study on the analysis of keywords in available datasets composed of offensive tweets. Thus, we aim to identify relevant words in those datasets and analyze how they can affect model learning. For keyword extraction, we propose an unsupervised hybrid approach which combines the multi-head self-attention of BERT with reasoning on a word graph. The attention mechanism makes it possible to capture relationships among words in context while a language model is learned. These relationships are then used to generate a graph, from which we identify the most relevant words using eigenvector centrality. Experiments were performed by means of two mechanisms. On the one hand, we used an information retrieval system to evaluate the impact of the keywords on recovering offensive tweets from a dataset. On the other hand, we evaluated a keyword-based model for offensive language detection. The results highlight some points to consider when training models with available datasets.
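A minimal sketch of the attention-plus-graph idea follows, assuming the publicly available bert-base-uncased model; the layer/head pooling and the fully connected word graph are our own simplifications, not necessarily the authors' exact procedure.

```python
import networkx as nx
import torch
from transformers import BertModel, BertTokenizer

# Sketch: rank the words of a tweet by eigenvector centrality on a graph
# whose edge weights come from BERT's multi-head self-attention.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_attentions=True)
model.eval()

text = "example tweet to analyze for salient words"
enc = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    out = model(**enc)

# Average self-attention over all layers and heads -> token-token affinities.
att = torch.stack(out.attentions).mean(dim=(0, 2))[0]          # (seq, seq)
tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0].tolist())

graph = nx.Graph()
for i in range(len(tokens)):
    for j in range(i + 1, len(tokens)):
        graph.add_edge(i, j, weight=float(att[i, j] + att[j, i]) / 2)

centrality = nx.eigenvector_centrality_numpy(graph, weight="weight")
ranked = sorted(centrality, key=centrality.get, reverse=True)
print([tokens[i] for i in ranked[:5]])                          # top candidate keywords
```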


Author(s): Woongsun Jeon, Ankush Chakrabarty, Ali Zemouche, Rajesh Rajamani

Author(s): Shengsheng Qian, Jun Hu, Quan Fang, Changsheng Xu

In this article, we focus on the fake news detection task and aim to automatically identify fake news among the vast amount of social media posts. To date, many approaches have been proposed to detect fake news, which include traditional learning methods and deep learning-based models. However, there are three existing challenges: (i) how to represent social media posts effectively, since the post content is varied and highly complicated; (ii) how to propose a data-driven method to increase the flexibility of the model to deal with samples in different contexts and news backgrounds; and (iii) how to fully utilize the additional auxiliary information (background knowledge and multi-modal information) of posts for better representation learning. To tackle the above challenges, we propose a novel Knowledge-aware Multi-modal Adaptive Graph Convolutional Network (KMAGCN) to capture semantic representations by jointly modeling the textual information, knowledge concepts, and visual information in a unified framework for fake news detection. We model posts as graphs and use a knowledge-aware multi-modal adaptive graph learning principle for effective feature learning. Compared with existing methods, the proposed KMAGCN addresses the challenges from three aspects: (1) it models posts as graphs to capture the non-consecutive and long-range semantic relations; (2) it proposes a novel adaptive graph convolutional network to handle the variability of graph data; and (3) it leverages textual information, knowledge concepts, and visual information jointly for model learning. We have conducted extensive experiments on three public real-world datasets, and the superior results demonstrate the effectiveness of KMAGCN compared with other state-of-the-art algorithms.
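A minimal sketch of the graph-based part is shown below: a single graph convolution over a post represented as a word graph, followed by mean pooling into a post representation. The adaptive and multi-modal components are omitted, and all names and dimensions are illustrative assumptions rather than the authors' code.

```python
import torch
import torch.nn as nn

# Sketch: one graph convolution layer over a post's word graph.
class GraphConv(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, adj, x):
        a = adj + torch.eye(adj.size(0))              # add self-loops
        d = a.sum(dim=1).rsqrt()
        a_hat = d.unsqueeze(1) * a * d.unsqueeze(0)   # D^-1/2 (A + I) D^-1/2
        return torch.relu(self.linear(a_hat @ x))     # propagate and transform

# Toy post graph: 5 word nodes with 16-dimensional embeddings.
adj = torch.zeros(5, 5)
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4)]:
    adj[i, j] = adj[j, i] = 1.0
x = torch.randn(5, 16)
post_repr = GraphConv(16, 8)(adj, x).mean(dim=0)      # pooled post representation
print(post_repr.shape)                                # torch.Size([8])
```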


2021 · Vol 11 (14) · pp. 6387
Author(s): Li Xu, Jianzhong Hu

Active infrared thermography (AIRT) is a significant defect detection and evaluation method in the field of non-destructive testing, because it promptly provides visual information and its results can be used for quantitative research on defects. At present, the quantitative evaluation of defects is an urgent problem to be solved in this field. In this work, a defect depth recognition method based on gated recurrent unit (GRU) networks is proposed to address the insufficient accuracy of defect depth recognition. AIRT is applied to obtain the raw thermal sequences of the surface temperature field distribution of the defect specimen. Before training the GRU model, principal component analysis (PCA) is used to reduce the dimensionality of the raw datasets and eliminate their correlation. Then, the GRU model is employed to automatically recognize the depth of the defect. The defect depth recognition performance of the proposed method is evaluated through an experiment on polymethyl methacrylate (PMMA) with flat-bottom holes. The results indicate that the PCA-processed datasets outperform the raw temperature datasets for model learning when assessing defect depth characteristics. A comparison with a back-propagation (BP) network shows that the proposed method performs better in defect depth recognition.
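The PCA-then-GRU pipeline can be sketched as follows; the sample count, number of principal components, and number of depth classes are illustrative assumptions, and the simulated arrays stand in for the real AIRT thermal sequences.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

n_samples, n_frames, n_pixels, n_depth_classes = 32, 200, 64 * 64, 4

# Simulated thermal sequences: each sample is a (frames x pixels) record of
# the surface temperature field; real data would come from the AIRT setup.
raw = np.random.rand(n_samples, n_frames, n_pixels).astype(np.float32)

# PCA compresses and decorrelates the per-frame spatial dimension.
pca = PCA(n_components=16)
reduced = pca.fit_transform(raw.reshape(-1, n_pixels)).reshape(n_samples, n_frames, 16)

class DepthGRU(nn.Module):
    def __init__(self, in_dim, hidden, n_classes):
        super().__init__()
        self.gru = nn.GRU(in_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):
        _, h = self.gru(x)            # h: (1, batch, hidden) final hidden state
        return self.head(h[-1])       # depth-class logits per sample

model = DepthGRU(16, 32, n_depth_classes)
logits = model(torch.from_numpy(reduced).float())
print(logits.shape)                   # torch.Size([32, 4])
```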

