partial learning
Recently Published Documents


TOTAL DOCUMENTS: 32 (FIVE YEARS: 8)

H-INDEX: 5 (FIVE YEARS: 2)

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Laouni Djafri

Purpose
This work can be used as a building block in other settings such as GPU, MapReduce or Spark. DDPML can also be deployed on other distributed systems such as P2P networks, clusters, cloud computing or other technologies.

Design/methodology/approach
In the age of Big Data, all companies want to benefit from the large amounts of data they collect. These data can help them understand their internal and external environment and anticipate associated phenomena, as the data turn into knowledge that can later be used for prediction. This knowledge thus becomes a great asset in companies' hands, and exploiting it is precisely the objective of data mining. With data and knowledge now produced in large quantities at a much faster pace, the field has moved toward Big Data mining. The proposed work therefore aims at solving the problems of volume, veracity, validity and velocity when classifying Big Data using distributed and parallel processing techniques. The problem raised in this work is how machine learning algorithms can run in a distributed and parallel way at the same time without losing classification accuracy. To solve this problem, the authors propose a system called Dynamic Distributed and Parallel Machine Learning (DDPML). The work is divided into two parts. In the first, the authors propose a distributed architecture controlled by a MapReduce algorithm, which in turn depends on a random sampling technique. This architecture is designed to handle Big Data processing coherently and efficiently together with the sampling strategy proposed in this work, and it also lets the authors verify the classification results obtained using the representative learning base (RLB). In the second part, the authors extract the representative learning base by sampling at two levels using the stratified random sampling method. The same sampling method is applied to extract the shared learning base (SLB) and the partial learning bases for the first and second levels (PLBL1 and PLBL2). The experimental results show the efficiency of the proposed solution, with no significant loss of classification quality. In practical terms, the DDPML system is dedicated to Big Data mining and works effectively in distributed systems with a simple structure, such as client-server networks.

Findings
The authors obtained very satisfactory classification results.

Originality/value
The DDPML system is specially designed to smoothly handle Big Data mining classification.
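The two-level stratified sampling described in the abstract can be sketched as follows. This is a minimal illustration of the general technique, not the authors' implementation: the function name, the label scheme, and the two-stage split mirroring the PLBL1/RLB idea are all assumptions for the example.

```python
import random
from collections import defaultdict

def stratified_sample(records, label_of, fraction, seed=0):
    """Draw a stratified random sample: each class keeps roughly the
    same proportion in the sample that it has in the full dataset."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for rec in records:                       # group records by class label
        by_class[label_of(rec)].append(rec)
    sample = []
    for recs in by_class.values():            # sample each stratum separately
        k = max(1, round(len(recs) * fraction))
        sample.extend(rng.sample(recs, k))
    return sample

# Two-level sampling, loosely mirroring the paper's PLBL1 -> RLB idea:
# first draw a partial learning base, then sample again within it.
data = [(i, "pos" if i % 3 == 0 else "neg") for i in range(300)]
plbl1 = stratified_sample(data, lambda r: r[1], fraction=0.5)
rlb = stratified_sample(plbl1, lambda r: r[1], fraction=0.4)
```

Because each stratum is sampled independently, the class proportions of the full dataset are preserved at both levels, which is what makes the reduced base "representative" for training.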


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Trevor Lee-Miller ◽  
Marco Santello ◽  
Andrew M. Gordon

Abstract
Successful object manipulation, such as preventing object roll, relies on modulating forces and centers of pressure (the point of application of the digits on each grasp surface) prior to lift onset to generate a compensatory torque. Whether learned manipulation generalizes after effectors are added or removed is not known. We examined this by recruiting participants to perform lifts with unimanual and bimanual grasps and analyzed performance before and after transfer. Our results show partial generalization of learned manipulation when switching from (1) a unimanual to a bimanual grasp regardless of object center of mass, and (2) a bimanual to a unimanual grasp when the center of mass was on the thumb side. Partial generalization was driven by modulation of the effectors' centers of pressure in the appropriate direction but of insufficient magnitude, while load forces did not contribute to torque generation after transfer. In addition, we show that the combination of effector forces and centers of pressure used to generate compensatory torque differs between unimanual and bimanual grasping. These findings highlight that (1) high-level representations of learned manipulation enable only partial transfer when effectors are added or removed, and (2) such partial generalization is mainly driven by modulation of the effectors' centers of pressure.


2020 ◽  
Vol 123 (3) ◽  
pp. 1193-1205 ◽  
Author(s):  
Rodrigo S. Maeda ◽  
Julia M. Zdybal ◽  
Paul L. Gribble ◽  
J. Andrew Pruszynski

Generalizing newly learned movement patterns beyond the training context is challenging for most motor learning situations. Here we tested whether learning of a new physical property of the arm during self-initiated reaching generalizes to new arm configurations. Human participants performed a single-joint elbow reaching task and/or countered mechanical perturbations that created pure elbow motion, with the shoulder joint either free to rotate or locked by the manipulandum. With the shoulder free, we found activation of shoulder extensor muscles during pure elbow extension trials, appropriate for countering the torques that arise at the shoulder due to forearm rotation. After locking the shoulder joint, we found a partial reduction in shoulder muscle activity, appropriate because locking the shoulder cancels those torques. In our first three experiments, we tested whether and to what extent this partial reduction in shoulder muscle activity generalizes when reaching in different situations: 1) a different initial shoulder orientation, 2) a different initial elbow orientation, and 3) a different reach distance/speed. We found generalization for the different shoulder orientation and reach distance/speed, as measured by a reliable reduction in shoulder activity in these situations, but no generalization for the different elbow orientation. In our fourth experiment, we found that generalization also transfers to feedback control, by applying mechanical perturbations and observing reflex responses in a distinct shoulder orientation. These results indicate that partial learning of new intersegmental dynamics is not sufficient to modify a general internal model of arm dynamics.

NEW & NOTEWORTHY Here we show that partially learning to reduce shoulder muscle activity following shoulder fixation generalizes to other movement conditions, but it does not generalize globally. These findings suggest that partial learning of new intersegmental dynamics is not sufficient to modify a general internal model of the arm's dynamics.


2019 ◽  
Vol 8 (4) ◽  
pp. 1137-1140 ◽  
Author(s):  
Zefeng Jia ◽  
Wenchi Cheng ◽  
Hailin Zhang

2019 ◽  
Vol 776 ◽  
pp. 43-63
Author(s):  
Sanjay Jain ◽  
Efim Kinber

2018 ◽  
Vol 54 (Supplement) ◽  
pp. 2F3-3-2F3-3
Author(s):  
Hiromi IMAI ◽  
Ayumi KIMURA ◽  
Tamiyo ASAGA ◽  
Sachiko TSUBAKI ◽  
Tomoko ASO ◽  
...  
