parallel learning
Recently Published Documents


TOTAL DOCUMENTS: 140 (FIVE YEARS: 34)

H-INDEX: 8 (FIVE YEARS: 2)

2021 ◽  
Vol 16 (4) ◽  
pp. 306-312
Author(s):  
Gwonsoo Lee ◽  
Phil-Yeob Lee ◽  
Ho Sung Kim ◽  
Hansol Lee ◽  
Hyungjoo Kang ◽  
...  

2021 ◽  
pp. 113-135
Author(s):  
Keun Lee

Chapter 5 assesses China’s catch-up model, often called the Beijing Consensus, in a comparative perspective. China’s model shares several elements of the East Asian model because it also pursued export-oriented, outward-looking growth strategies. A further commonality lies in its emphasis on elements missing from the Washington Consensus, namely technology policy and a higher-education revolution. However, the Chinese catch-up model has several unique elements not found in the models of Taiwan or Korea: first, parallel learning from foreign direct investment firms, followed by active promotion of indigenous firms; second, forward engineering (the role of university spin-off firms), in contrast to the reverse engineering adopted in Korea and Taiwan; and third, acquisition of foreign technology and brands through international mergers and acquisitions. In general, these strategies helped China achieve a “compressed catch-up” and avoid several of the risks involved, including the “liberalization trap,” in which premature financial liberalization leads to macroeconomic instability.


2021 ◽  
Vol 15 ◽  
Author(s):  
Wooseok Choi ◽  
Myonghoon Kwak ◽  
Seyoung Kim ◽  
Hyunsang Hwang

Hardware neural networks (HNNs) based on analog synapse arrays excel at accelerating parallel computations. To implement an energy-efficient HNN with high accuracy, high-precision synaptic devices and fully parallel array operations are essential. However, existing resistive random-access memory (RRAM) devices can represent only a finite number of conductance states. Recently, there have been attempts to compensate for device nonidealities by using multiple devices per weight. While this brings benefits, the existing parallel updating scheme is difficult to apply to such synaptic units, which significantly increases the cost of the updating process in terms of computation speed, energy, and complexity. Here, we propose an RRAM-based hybrid synaptic unit consisting of a “big” synapse and a “small” synapse, together with a related training method. Unlike previous attempts, array-wise fully parallel learning is possible with the proposed architecture using simple array-selection logic. To experimentally verify the hybrid synapse, we exploit Mo/TiOx RRAM, which shows promising synaptic properties and an areal dependency of conductance precision. By realizing the intrinsic gain through proportionally scaled device area, we show that the big and small synapses can be implemented at the device level without modifying the operational scheme. Through neural network simulations, we confirm that the RRAM-based hybrid synapse with the proposed learning method achieves a maximum accuracy of 97%, comparable to the floating-point software implementation (97.92%), even with only 50 conductance states in each device. Our results promise both training efficiency and inference accuracy using existing RRAM devices.
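The big/small decomposition described in this abstract can be sketched in software. The following is a minimal, hypothetical simulation assuming an integer conductance model with 50 states per device and an intrinsic gain equal to the state count; the class name, parameters, and the carry/transfer rule are illustrative assumptions, not the paper's measured device scheme.

```python
# Hypothetical sketch of a hybrid synaptic unit: one weight stored across
# a coarse "big" device and a fine "small" device. Parameters are
# illustrative only, not the paper's values.
N_STATES = 50      # conductance levels available per RRAM device
GAIN = N_STATES    # intrinsic gain of the "big" synapse (area-scaled device)

class HybridSynapse:
    """One weight represented as big * GAIN + small, each device quantized."""

    def __init__(self):
        self.big = 0    # coarse device state
        self.small = 0  # fine device state

    @property
    def weight(self):
        return self.big * GAIN + self.small

    def update(self, delta):
        # Fully parallel training pulses hit only the small synapse.
        self.small += delta
        # When the small device saturates, carry one coarse step into
        # the big device (an occasional, array-wise transfer).
        if abs(self.small) > N_STATES:
            carry = 1 if self.small > 0 else -1
            self.big = max(-N_STATES, min(N_STATES, self.big + carry))
            self.small -= carry * GAIN

s = HybridSynapse()
for _ in range(60):
    s.update(+1)
print(s.weight)  # 60: effective resolution beyond a single device's 50 states
```

The point of the sketch is that routine updates touch only the fine device, so the array-wise parallel update scheme is preserved, while the coarse device extends the representable weight range.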


2021 ◽  
Author(s):  
Trung Nguyen

<p><b>A key goal of Artificial Intelligence (AI) is to replicate different aspects of biological intelligence. Human intelligence can accumulate progressively more complicated knowledge by reusing simpler concepts/tasks to represent more complex concepts and solve more difficult tasks. Humans and animals with biological intelligence also have an autonomy that helps sustain them over a long period. </b></p><p>Young humans need a long period to acquire simple concepts and master basic skills. However, these learnt basic concepts and skills are important for constructing foundation knowledge, which is highly reusable and can thereby be efficiently exploited to learn new knowledge. By relating unseen tasks to learnt knowledge, humans can learn new knowledge or solve new problems effectively. Thus, AI researchers aim to mimic human performance with the same ability to reuse learnt knowledge when solving novel tasks in a continual manner. </p><p>Initial attempts to implement this knowledge-transfer ability have used layered learning and multitask learning. Layered learning aims to learn a complex target task by first learning a sequence of easier tasks that provide supportive knowledge. This learning paradigm requires human knowledge that may be biased, costly, or unavailable in a particular domain. Multitask learning generally trains multiple related tasks with individual goals together, in the hope that they can provide supportive signals to each other. However, multitask learning is commonly applied to optimisation tasks that are required to start simultaneously. </p><p>In this thesis, the transfer of building blocks of learnt knowledge is of interest for solving complex problems. A complex problem is one whose solution cannot simply be enumerated in the time and computation available, often because there are multiple interacting patterns of input features or high dimensions in the data.
A strategy for solving complex problems is to discover high-level patterns in the data: complex combinations of the original input features (the underlying building blocks) that describe the desired output. However, as the complexity of the building blocks grows with the problem complexity, the search space for solutions and for the optimal building blocks also grows. This poses a challenge in discovering optimal building blocks. </p><p>Learning Classifier Systems (LCSs) are evolutionary rule-based algorithms inspired by cognitive science. LCSs are of interest because their niching nature enables solving problems heterogeneously and learning them progressively, from simpler subproblems to more complex (sub)problems. LCSs also encourage transferring subproblem building blocks among tasks. Recent work has extended LCSs with various flexible representations. Among them, Code Fragments (CFs), which are Genetic Programming (GP)-like trees, are a rich form that can encode complex patterns in a small, concise format. CF-based LCSs are therefore particularly suitable for addressing complex problems. For example, XCSCF*, based on Wilson's XCS (an accuracy-based online learning LCS), can learn a generalised solution to the n-bit Multiplexer problem. These techniques provided remarkable improvements to the scalability of CF-based LCSs. However, such systems still have certain limits compared with human intelligence, such as their limited autonomy, e.g. the requirement of an appropriate learning order (as in layered learning) to enable learning progress. Humans can learn multiple tasks in a parallel, ad hoc manner, whereas AI cannot yet do this autonomously. </p><p>The proposed thesis is that systems of parallel learning agents can solve multiple problems concurrently, enabling multitask learning and, eventually, the ability to learn continually.
Here, each agent is a CF-based XCS, and the problems are Boolean in nature to aid interpretability. The overall goal of this thesis is to develop novel CF-based XCSs that enable continual learning with the least human support. </p><p>The contributions of this thesis are three specific systems that provide a pathway to continual learning by reducing the requirement for human guidance without degrading learning performance. (1) The evolution of CFs is nested within, and interacts with, the evolution of rules. A fitness measure for CFs, called CF-fitness, is introduced to guide this process. The evolution of CFs enables growing the complexity of CFs without a depth limit, to address hierarchical features. The system is the first XCS with CFs in rule conditions that can learn complex problems that used to be intractable without transfer learning. The introduction of CF evolution allows appropriate latent building blocks that address subproblems to be grouped together and flexibly reused. (2) A new multitask learning system is developed based on estimating the relatedness among tasks. A new dynamic parameter helps automate feature transfer among multiple tasks, which improves learning performance on supportive tasks and reduces negative influence between unrelated tasks. (3) A system of parallel learning agents, where each agent is an XCS with CF-actions, is developed to remove the requirement of a human-biased learning order. The system can provide a clear learning order and a highly interpretable network of knowledge. This network of knowledge enables the system to accumulate knowledge hierarchically and to focus on only the novel aspects of any new task. </p><p>This research has shown that CF-based LCSs can solve hierarchical and large-scale problems autonomously, without (extensive) human guidance. The learnt knowledge, represented by CFs, is highly interpretable. This work is also a foundation for systems that can learn continually.
Ultimately, this thesis is a step towards general learners and problem solvers.</p>
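The Code Fragments described in this abstract can be illustrated with a minimal sketch. The tuple-based tree encoding, the function set, and the `eval_cf` helper below are hypothetical assumptions for illustration, not the thesis's actual implementation; the CF shown encodes the 3-bit Multiplexer problem mentioned above.

```python
# Hypothetical sketch of a Code Fragment (CF): a small GP-like Boolean tree.
# The encoding and names are illustrative assumptions, not the thesis's API.

def AND(a, b): return a and b
def OR(a, b):  return a or b
def NOT(a):    return not a

# A CF is a nested tuple (function, child, ...) or an integer input index.
# This CF encodes the 3-bit multiplexer: out = (~x0 & x1) | (x0 & x2).
cf = (OR,
      (AND, (NOT, 0), 1),
      (AND, 0, 2))

def eval_cf(node, inputs):
    """Recursively evaluate a CF tree on a Boolean input vector."""
    if isinstance(node, int):          # leaf: an input feature
        return inputs[node]
    fn, *children = node
    return fn(*(eval_cf(c, inputs) for c in children))

# The 3-bit multiplexer selects x1 when x0 is false, x2 when x0 is true.
print(eval_cf(cf, [False, True, False]))  # True  (address bit selects x1)
print(eval_cf(cf, [True, True, False]))   # False (address bit selects x2)
```

Trees like this are the "building blocks" the abstract refers to: a learnt CF can be reused as a leaf or subtree in rules for a harder task, which is what enables the transfer and hierarchical accumulation of knowledge.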


