Research on the Revolution of Multidimensional Learning Space in the Big Data Environment

Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Weihua Huang

Multiuser fair sharing of clusters is a classic problem in cluster construction. However, cluster computing systems for hybrid big data applications have heterogeneous resource requirements, so more and more cluster resource managers support fine-grained, multidimensional management of learning resources. In this context, sharing clusters among multiple users of multidimensional learning resources has become a new research topic. Considering fairness alone when sharing a cluster leads to an enormous waste of resources when allocation is discrete and dynamic; fairness and efficiency in sharing multidimensional learning resources are equally important. This paper studies big data processing technology and representative systems and analyzes multidimensional analysis and performance-optimization techniques. It discusses the importance of optimizing discrete multidimensional learning-resource allocation in dynamic scenarios. Observing that most of the resources of a big data cluster are consumed by large jobs, which make up a small proportion of submissions, while small jobs, which make up the large majority of submissions, use only a small fraction of the system's resources, the paper proposes allocating large jobs first to the server with the least expected residual multidimensional learning resources, while applying only a fair strategy to small jobs. The topic index is distributed and stored across the system so that searches can be processed in parallel, improving search efficiency. The effectiveness of RDIBT is verified through simulation experiments. The results show that RDIBT outperforms the LSII index technique in both index-creation speed and search-response speed, and that RDIBT also preserves the scalability of the index system.
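A minimal sketch of the allocation idea described above, not the paper's implementation: jobs classified as large (a small share of submissions but heavy resource demand) are packed onto the feasible server with the least residual multidimensional capacity, while small jobs are placed by a simple fair-share proxy. The Server/Job fields, the size threshold, and the fair-share rule are illustrative assumptions.

```python
# Hypothetical sketch of "least expected residual resources first" for large jobs,
# with a simple fair-share rule for small jobs. Field names and threshold are assumed.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Server:
    name: str
    residual: List[float]                     # remaining capacity per resource dimension
    assigned: List[str] = field(default_factory=list)

@dataclass
class Job:
    name: str
    demand: List[float]                       # requested amount per resource dimension

def fits(server: Server, job: Job) -> bool:
    return all(r >= d for r, d in zip(server.residual, job.demand))

def place(servers: List[Server], job: Job, large_threshold: float = 0.5) -> Server:
    """Large jobs (any dimension above the threshold) go to the feasible server with
    the least residual capacity; small jobs go to the one with the most."""
    feasible = [s for s in servers if fits(s, job)]
    if not feasible:
        raise RuntimeError(f"no server can host {job.name}")
    is_large = max(job.demand) >= large_threshold
    total_residual = lambda s: sum(s.residual)
    chosen = min(feasible, key=total_residual) if is_large else max(feasible, key=total_residual)
    for i, d in enumerate(job.demand):        # update the server's residual capacity
        chosen.residual[i] -= d
    chosen.assigned.append(job.name)
    return chosen

if __name__ == "__main__":
    servers = [Server("s1", [1.0, 1.0]), Server("s2", [0.6, 0.8])]
    for j in [Job("large-job", [0.7, 0.6]), Job("small-job", [0.1, 0.1])]:
        s = place(servers, j)
        print(f"{j.name} -> {s.name}, residual now {s.residual}")
```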

2020 ◽  
Vol 11 (2) ◽  
pp. 151-158
Author(s):  
Elfrida Nurutstsany ◽  
Saifullah Hidayat ◽  
Nur Hayati

Learning resources need to be developed in line with current developments. The purpose of this research was to develop an Islamic-based Botanical Encyclopedia and test its quality. The encyclopedia was developed using the Research and Development (R&D) method with the 4D model (Define, Design, Develop, and Disseminate) proposed by Thiagarajan in 1974. The research instrument was validated by material, media, and integration experts. The average validation score was 88% (a very valid category). A small-scale trial obtained a score of 89% (very valid), and a large-scale trial obtained 85% (very feasible category). The results indicate that the developed Islamic-based Botanical Encyclopedia can be used as a resource for independent learning.


2020 ◽  
Author(s):  
Anusha Ampavathi ◽  
Vijaya Saradhi T

Big data approaches are broadly useful to the healthcare and biomedical sectors for disease prediction. For trivial symptoms, it is difficult to see a doctor at the hospital at any given time, and big data can provide essential information about diseases based on a patient's symptoms. For many medical organizations, disease prediction is important for making the best feasible healthcare decisions. However, the conventional medical care model takes structured input and still requires more accurate and consistent prediction. This paper develops multi-disease prediction using an improved deep learning concept. Datasets pertaining to diabetes, hepatitis, lung cancer, liver tumor, heart disease, Parkinson's disease, and Alzheimer's disease are gathered from the benchmark UCI repository for the experiments. The proposed model involves three phases: (a) data normalization, (b) weighted normalized feature extraction, and (c) prediction. Initially, the dataset is normalized to bring the attributes into a common range. Then weighted feature extraction is performed, in which a weight function is multiplied with each attribute value to enlarge the deviations between values. The weight function is optimized using a combination of two meta-heuristic algorithms, termed the Jaya Algorithm-based Multi-Verse Optimization (JA-MVO) algorithm. The optimally extracted features are fed to hybrid deep learning algorithms, namely a Deep Belief Network (DBN) and a Recurrent Neural Network (RNN). As a modification to the hybrid deep learning architecture, the weights of both the DBN and the RNN are optimized using the same hybrid optimization algorithm. A comparative evaluation of the proposed prediction against existing models confirms its effectiveness on various performance measures.
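A minimal sketch, under assumed names, of the first two stages described above: min-max normalization followed by per-attribute weighting. In the paper the weight vector is tuned by the JA-MVO hybrid optimizer and the weighted features feed the DBN/RNN predictors; here the weights are a stand-in placeholder.

```python
# Illustrative pipeline fragment: normalize attributes, then scale each by a weight.
import numpy as np

def min_max_normalize(X: np.ndarray) -> np.ndarray:
    """Scale each attribute (column) into the [0, 1] range."""
    col_min = X.min(axis=0)
    col_range = X.max(axis=0) - col_min
    col_range[col_range == 0] = 1.0           # avoid division by zero for constant columns
    return (X - col_min) / col_range

def weighted_features(X_norm: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Multiply every attribute by its weight to enlarge deviations between values."""
    return X_norm * weights

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(0, 100, size=(5, 4))      # toy patient records with 4 attributes
    w = np.array([1.2, 0.8, 1.5, 1.0])        # placeholder for JA-MVO-optimized weights
    print(weighted_features(min_max_normalize(X), w))
```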


Author(s):  
Laura Broeker ◽  
Harald Ewolds ◽  
Rita F. de Oliveira ◽  
Stefan Künzell ◽  
Markus Raab

The aim of this study was to examine the impact of predictability on dual-task performance by systematically manipulating predictability in either one of two tasks, as well as between tasks. According to capacity-sharing accounts of multitasking, which assume a general pool of resources that two tasks can draw upon, predictability should reduce the need for resources and allow more resources to be used by the other task. However, it is currently not well understood what drives resource-allocation policy in dual tasks or which allocation policies participants pursue. We used a continuous tracking task together with an audiomotor task and manipulated advance visual information about the tracking path in the first experiment and a sound sequence in the second set of experiments (2a/b). Results show that performance predominantly improved in the predictable task but not in the unpredictable task, suggesting that participants did not invest more resources into the unpredictable task. One possible explanation is that reinvesting resources into another task requires some relationship between the tasks. Therefore, in the third experiment, we covaried the two tasks by presenting sounds 250 ms before turning points in the tracking curve. This enabled participants to improve performance in both tasks, suggesting that resources were shared better between tasks.

