Research on Flipped Classroom of Big Data Course Based on Graphic Design MOOC

2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Yanqi Wang

With the rapid development of the Internet, traditional teaching models can no longer meet the needs of talent training in colleges and universities, and reform is imperative. With the advent of the era of big data, a wealth of rich and diverse teaching resources has emerged, and Internet-based teaching models such as MOOCs (Massive Open Online Courses), microlectures, and flipped classrooms have provided ideas and directions for teaching reform. The model proposed here divides the overall teaching design into two modules: teaching-activity design on a SPOC (Small Private Online Course) platform and flipped-classroom teaching-activity design. The model is applied to the actual teaching of open education, with detailed teaching-activity plans designed and carried out in a real teaching situation. This study uses questionnaire surveys and interviews to investigate learners' personal backgrounds, learning expectations, course participation, learning experience, and learning outcomes. Questionnaires are distributed and collected through the Questionnaire Star platform, and the data are analyzed with Excel and SPSS; the results are combined with in-depth interviews of learners and instructors for a comprehensive analysis, so as to obtain the most authentic views of students and teachers on this model. Throughout this process, a variety of data is collected from the SPOC platform and the flipped classroom, including feedback from students studying on the SPOC platform before class, observations of students' learning attitudes during the flipped classroom, displays of students' work after class, and academic performance; experience is summarized from the analysis results, and the teaching design plan is optimized accordingly. Among classification algorithms, support vector machines (SVMs) are widely used because of advantages such as resistance to overfitting and insensitivity to the dimensionality of the feature vectors.
However, the traditional SVM algorithm is ill-suited to large-scale data sets because of its high time complexity and long training time. Parallelizing the SVM algorithm is an effective way to address these shortcomings when processing large-scale data sets. On the basis of this comparison, a SPOC-based flipped-classroom teaching design model was constructed, and an empirical application was carried out at the Open University in order to promote the sustainable development of open education.
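The abstract does not specify which parallelization scheme is used, so the sketch below shows one common data-parallel approach rather than the method evaluated in the article: train an independent linear SVM on each partition of the data (here via Pegasos-style sub-gradient descent on the hinge loss) and average the resulting weight vectors. All function names and the averaging step are illustrative assumptions.

```python
from concurrent.futures import ThreadPoolExecutor

def train_linear_svm(part, dim, lam=0.01, epochs=50):
    # Pegasos-style sub-gradient descent on one data partition.
    # part: list of (feature_vector, label) pairs with labels in {-1, +1}.
    w = [0.0] * dim
    t = 0
    for _ in range(epochs):
        for x, y in part:
            t += 1
            eta = 1.0 / (lam * t)
            margin = y * sum(wi * xi for wi, xi in zip(w, x))
            # shrink weights (L2 regularizer), then step on the hinge loss
            w = [(1 - eta * lam) * wi for wi in w]
            if margin < 1:
                w = [wi + eta * y * xi for wi, xi in zip(w, x)]
    return w

def parallel_svm(data, dim, n_parts=4):
    # Split the data set into n_parts partitions and train them concurrently.
    parts = [data[i::n_parts] for i in range(n_parts)]
    with ThreadPoolExecutor(max_workers=n_parts) as ex:
        models = list(ex.map(lambda p: train_linear_svm(p, dim), parts))
    # Combine the sub-models by averaging their weight vectors.
    return [sum(ws) / n_parts for ws in zip(*models)]

def predict(w, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1
```

Weight averaging is the simplest way to merge the sub-models; cascade schemes, which merge support vectors hierarchically, are another common choice for parallel SVM training.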

Author(s):  
Jun Huang ◽  
Linchuan Xu ◽  
Jing Wang ◽  
Lei Feng ◽  
Kenji Yamanishi

Existing multi-label learning (MLL) approaches mainly assume that all labels are observed and construct classification models with a fixed set of target labels (known labels). However, in some real applications, multiple latent labels may exist outside this set and hide in the data, especially in large-scale data sets. Discovering and exploring the latent labels hidden in the data may not only uncover interesting knowledge but also help build a more robust learning model. In this paper, a novel approach named DLCL (Discovering Latent Class Labels for MLL) is proposed, which can not only discover the latent labels in the training data but also predict new instances with the latent and known labels simultaneously. Extensive experiments show the competitive performance of DLCL against other state-of-the-art MLL approaches.
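The abstract does not detail the DLCL algorithm itself. For readers new to MLL, the sketch below shows the baseline setting that DLCL extends: binary relevance over a fixed set of known labels, i.e., one independent binary classifier per label, here with a toy nearest-centroid base classifier. All class and method names are illustrative assumptions, not part of DLCL.

```python
class NearestCentroid:
    # Toy binary classifier: compare squared distance to the positive
    # and negative class centroids.
    def fit(self, X, y):
        pos = [x for x, t in zip(X, y) if t]
        neg = [x for x, t in zip(X, y) if not t]
        self.pos_c = [sum(col) / len(pos) for col in zip(*pos)]
        self.neg_c = [sum(col) / len(neg) for col in zip(*neg)]
        return self

    def predict(self, x):
        d_pos = sum((a - b) ** 2 for a, b in zip(x, self.pos_c))
        d_neg = sum((a - b) ** 2 for a, b in zip(x, self.neg_c))
        return d_pos < d_neg

class BinaryRelevance:
    # Multi-label learning with a fixed set of known labels:
    # one independent binary problem per label.
    def __init__(self, labels):
        self.labels = labels

    def fit(self, X, Y):
        # Y: one set of relevant labels per instance.
        self.models = {
            l: NearestCentroid().fit(X, [l in y for y in Y])
            for l in self.labels
        }
        return self

    def predict(self, x):
        return {l for l, m in self.models.items() if m.predict(x)}
```

The limitation the paper targets is visible in `__init__`: the label set is fixed up front, so any latent label absent from `labels` can never be predicted.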


Author(s):  
Vo Ngoc Phu ◽  
Vo Thi Ngoc Tran

Artificial intelligence (ARTINT) and information science have been prominent fields for many years, because many different areas have advanced rapidly on their basis and they have created significant value. This value has increasingly served national economies, other sciences, companies, organizations, and more. Many massive corporations and big organizations have been established rapidly as these economies have developed strongly. Unsurprisingly, vast amounts of information and large-scale data sets have been generated by these corporations and organizations, and processing and storing them successfully has become a major challenge for many commercial applications and studies. To handle this problem, many algorithms have been proposed for processing these big data sets.


2017 ◽  
Author(s):  
Shirley M. Matteson ◽  
Sonya E. Sherrod ◽  
Sevket Ceyhun Cetin

2017 ◽  
Vol 8 (2) ◽  
pp. 30-43
Author(s):  
Mrutyunjaya Panda

Big Data, owing to its complicated and diverse nature, poses many challenges for extracting meaningful observations. This calls for smart and efficient algorithms that can cope with the computational complexity and memory constraints arising from iterative processing. The issue may be addressed with parallel computing techniques, in which a single machine or multiple machines work simultaneously, dividing the problem into sub-problems and assigning private memory to each sub-problem. Clustering analysis has proven useful for handling such huge data in the recent past. Although many investigations into Big Data analysis are ongoing, this work uses Canopy and K-Means++ clustering to process large-scale data in a shorter amount of time without memory constraints. To assess the suitability of the approach, several data sets are considered, ranging from small to very large and spanning diverse fields of application. The experimental results indicate that the proposed approach is fast and accurate.
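The abstract does not include implementation details, so the sketch below is a minimal single-machine illustration of the two techniques it names: canopy clustering as a cheap pre-pass with two distance thresholds (t1 > t2), and k-means++ seeding followed by Lloyd iterations. Threshold values and function names are assumptions; a production version would distribute this work across a parallel framework.

```python
import math
import random

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def canopy(points, t1, t2):
    # Cheap pre-clustering with two thresholds (t1 > t2): every point
    # within t1 of the picked center joins its canopy; points within t2
    # are removed from further consideration.
    remaining = list(points)
    canopies = []
    while remaining:
        c = remaining.pop(random.randrange(len(remaining)))
        members = [p for p in remaining if dist(p, c) <= t1] + [c]
        canopies.append((c, members))
        remaining = [p for p in remaining if dist(p, c) > t2]
    return canopies

def kmeanspp_seeds(points, k):
    # k-means++ seeding: each new center is sampled with probability
    # proportional to its squared distance from the nearest chosen center.
    seeds = [random.choice(points)]
    while len(seeds) < k:
        d2 = [min(dist(p, s) ** 2 for s in seeds) for p in points]
        r = random.uniform(0, sum(d2))
        acc = 0.0
        for p, w in zip(points, d2):
            acc += w
            if acc >= r:
                seeds.append(p)
                break
    return seeds

def kmeans(points, k, iters=20):
    # Lloyd iterations starting from k-means++ seeds.
    centers = kmeanspp_seeds(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: dist(p, centers[j]))
            clusters[i].append(p)
        centers = [
            [sum(col) / len(cl) for col in zip(*cl)] if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers
```

One way the two pieces fit together: the number of canopies produced by the pre-pass can guide the choice of k, and restricting distance computations to points sharing a canopy is what cuts the cost on large data sets.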

