continual learning
Recently Published Documents

TOTAL DOCUMENTS: 312 (five years: 274)
H-INDEX: 9 (five years: 7)

Author(s): Suresh Kumar Amalapuram, Akash Tadwai, Reethu Vinta, Sumohana S. Channappayya, Bheemarjuna Reddy Tamma

2022, pp. 74-92
Author(s): Sandra Blanke, Paul Christian Nielsen, Brian Wrozek

The need for cybersecurity professionals extends across government and private industry. Estimates place the shortage of cybersecurity professionals at 1.8 million by 2022. This chapter gives aspiring cybersecurity students a clear understanding of the various educational pathways they can choose to achieve their goals. The authors describe educational categories and include an assessment of each that students will want to consider based on their own situation. The authors discuss how the study of cybersecurity can be approached from computer science, engineering, and business perspectives. Students with STEM skills can pursue numerous cybersecurity roles, including cyber engineer, architect, and other technical positions, while students interested in the business side of cybersecurity can focus on strategy, compliance, awareness, and related areas. Organizations need employees with all of these skills. The chapter concludes with a recommendation for continual learning, the value of networking, and encouragement for students to begin building a cyber career.


2022, pp. 275-303
Author(s): Umberto Michieli, Marco Toldo, Pietro Zanuttigh

2021, Vol. ahead-of-print (ahead-of-print)

Purpose: This paper aims to review the latest management developments across the globe and pinpoint practical implications from cutting-edge research and case studies.
Design/methodology/approach: This briefing is prepared by an independent writer who adds their own impartial comments and places the articles in context.
Findings: This research paper concentrates on the relationship between company growth and digitalization, as measured in six growth companies in Finland. Growth was compartmentalized into three phases: pre-factors of growth, growth as a process, and growth as an outcome. Maintaining strategic flexibility powerfully facilitates digitalization. The companies generally integrated digitalization into the processes they built, creating a product and service platform from which they could reliably scale. Leaders in digital product-driven companies are encouraged by the study's authors to invest in and promote a culture of continual learning, so that their teams do not lose their instinct to keep innovating to achieve competitive advantages.
Originality/value: The briefing saves busy executives, strategists and researchers hours of reading time by selecting only the very best, most pertinent information and presenting it in a condensed and easy-to-digest format.


2021, Vol. 11 (24), pp. 12078
Author(s): Daniel Turner, Pedro J. S. Cardoso, João M. F. Rodrigues

Learning to recognize a new object after having learned to recognize other objects may be a simple task for a human, but not for a machine. The current go-to approaches for teaching a machine to recognize a set of objects are based on deep neural networks (DNNs), so, intuitively, the solution for teaching a machine new objects on the fly should also be a DNN. The problem is that the trained DNN weights used to classify the initial set of objects are extremely fragile: any change to those weights can severely degrade the ability to perform the initial recognitions, a phenomenon known as catastrophic forgetting (CF). This paper presents a new DNN continual learning (CL) architecture that can deal with CF, the modular dynamic neural network (MDNN). The architecture consists of two main components: (a) a ResNet50-based feature extraction component as the backbone; and (b) a modular dynamic classification component, which consists of multiple sub-networks and progressively builds itself up in a tree-like structure that rearranges itself as it learns over time, in such a way that each sub-network can function independently. The main contribution of the paper is this modular, dynamically trained structure: new classes can be added while only specific sub-networks are altered, so previously known classes are not forgotten. Tests on the CORe50 dataset showed results above the state of the art for CL architectures.
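As a rough illustration of the modular idea described in this abstract (not the authors' released code), the sketch below keeps a frozen ResNet50 backbone and attaches an independent classification sub-network per group of classes, so adding new classes only trains a fresh head and leaves older heads untouched; the head sizes and the max-probability selection rule are assumptions.

# Hypothetical PyTorch sketch of a frozen-backbone, modular classifier (not the MDNN code).
import torch
import torch.nn as nn
from torchvision import models

class ModularClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        backbone.fc = nn.Identity()              # expose the 2048-d features
        for p in backbone.parameters():
            p.requires_grad = False              # frozen backbone: no forgetting here
        self.backbone = backbone
        self.heads = nn.ModuleList()             # one independent sub-network per class group
        self.group_classes = []                  # class labels handled by each head

    def add_group(self, class_labels):
        """Add (and return) a fresh head for a new group of classes; older heads are untouched."""
        head = nn.Sequential(nn.Linear(2048, 256), nn.ReLU(),
                             nn.Linear(256, len(class_labels)))
        self.heads.append(head)
        self.group_classes.append(list(class_labels))
        return head                              # train only this head on the new data

    @torch.no_grad()
    def predict(self, x):
        """Single-image prediction: pick the most confident head (x has batch size 1)."""
        feats = self.backbone(x)
        best_prob, best_label = -1.0, None
        for head, labels in zip(self.heads, self.group_classes):
            probs = head(feats).softmax(dim=1)
            prob, idx = probs.max(dim=1)
            if prob.item() > best_prob:
                best_prob, best_label = prob.item(), labels[idx.item()]
        return best_label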


2021, Vol. 11 (22), pp. 10786
Author(s): Kyuchang Kang, Changseok Bae

Recent achievements in CNN (convolutional neural network) and DNN (deep neural network) research have enabled many practical applications in computer vision. However, these approaches require the construction of very large training datasets for the learning process. This paper seeks a way to perform continual learning that does not require costly up-front training data construction, by imitating a biological memory model. We employ SDR (sparse distributed representation) for information processing and as a semantic memory model; SDR is known as a representation of the firing patterns of neurons in the neocortex. This paper proposes a novel memory model that reflects remembrance of the morphological semantics of visual input stimuli. The proposed memory model treats the memory process and the recall process separately. First, the memory process converts input visual stimuli to sparse distributed representations, preserving the morphological semantics of the input in the process. Next, the recall process compares the sparse distributed representation of a new input visual stimulus with the remembered sparse distributed representations, using the superposition of sparse distributed representations to measure similarity. Experimental results on 10,000 images from the MNIST (Modified National Institute of Standards and Technology) and Fashion-MNIST datasets show that the sparse distributed representation of the proposed model efficiently preserves the morphological semantics of the input visual stimuli.
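To make overlap-based recall concrete, here is a minimal sketch of the general SDR idea (my illustration, not the paper's model): images are encoded as binary SDRs, per-class memories are formed by superposition, and recall picks the class whose memory overlaps the query most. The random-projection top-k encoder and all sizes are assumptions.

# Minimal SDR encode / remember / recall sketch (illustrative only).
import numpy as np

def encode_sdr(image, n_bits=2048, active_bits=40, seed=0):
    """Project a flattened image to a binary SDR with a fixed number of active bits."""
    rng = np.random.default_rng(seed)                 # fixed random projection
    proj = rng.standard_normal((n_bits, image.size))
    scores = proj @ image.ravel().astype(np.float64)
    sdr = np.zeros(n_bits, dtype=np.uint8)
    sdr[np.argsort(scores)[-active_bits:]] = 1        # keep the top-k responses active
    return sdr

def remember(memory, sdr):
    """Store an SDR into a class memory by superposition (element-wise OR)."""
    return np.maximum(memory, sdr) if memory is not None else sdr.copy()

def recall(memories, sdr):
    """Return the remembered class whose superposed SDR overlaps the query most."""
    overlaps = {label: int(np.sum(mem & sdr)) for label, mem in memories.items()}
    return max(overlaps, key=overlaps.get), overlaps

# Usage sketch (with hypothetical image arrays img0, img1, query_img):
#   memories = {}
#   memories[0] = remember(memories.get(0), encode_sdr(img0))
#   memories[1] = remember(memories.get(1), encode_sdr(img1))
#   predicted_label, overlap_scores = recall(memories, encode_sdr(query_img))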


2021
Author(s): Doris Voina, Eric Shea-Brown, Stefan Mihalas

Humans and other animals navigate different landscapes and environments with ease, a feat that requires the brain's ability to rapidly and accurately adapt to different visual domains, generalizing across contexts/backgrounds. Despite recent progress in deep learning applied to classification and detection in the presence of multiple confounds including contextual ones, there remain important challenges to address regarding how networks can perform context-dependent computations and how contextually-invariant visual concepts are formed. For instance, recent studies have shown artificial networks that repeatedly misclassified familiar objects set on new backgrounds, e.g. incorrectly labeling known animals when they appeared in a different setting. Here, we show how a bio-inspired network motif can explicitly address this issue. We do this using a novel dataset which can be used as a benchmark for future studies probing invariance to backgrounds. The dataset consists of MNIST digits of varying transparency, set on one of two backgrounds with different statistics: a Gaussian noise or a more naturalistic background from the CIFAR-10 dataset. We use this dataset to learn digit classification when contexts are shown sequentially, and find that both shallow and deep networks have sharply decreased performance when returning to the first background after experience learning the second -- the catastrophic forgetting phenomenon in continual learning. To overcome this, we propose an architecture with additional ``switching'' units that are activated in the presence of a new background. We find that the switching network can learn the new context even with very few switching units, while maintaining the performance in the previous context -- but that they must be recurrently connected to network layers. When the task is difficult due to high transparency, the switching network trained on both contexts outperforms networks without switching trained on only one context. The switching mechanism leads to sparser activation patterns, and we provide intuition for why this helps to solve the task. We compare our architecture with other prominent learning methods, and find that elastic weight consolidation is not successful in our setting, while progressive nets are more complex but less effective. Our study therefore shows how a bio-inspired architectural motif can contribute to task generalization across context.
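The switching-unit idea can be sketched in a few lines. The toy model below is my simplification, not the authors' code: a small context signal gates the hidden layer so that only the gating weights are trained for the new background, while the base weights stay frozen. The paper's finding that the switching units must connect recurrently to the network layers is not reproduced in this feed-forward version, and all layer sizes and the multiplicative gating form are assumptions.

# Toy feed-forward classifier with a few context "switching" units (illustrative only).
import torch
import torch.nn as nn

class SwitchingMLP(nn.Module):
    def __init__(self, n_in=784, n_hidden=256, n_classes=10, n_switch=4):
        super().__init__()
        self.hidden = nn.Linear(n_in, n_hidden)
        self.out = nn.Linear(n_hidden, n_classes)
        # No bias, so an all-zero switch signal leaves the hidden activity unchanged.
        self.switch_gain = nn.Linear(n_switch, n_hidden, bias=False)

    def forward(self, x, switch):
        # switch: (batch, n_switch) context signal; zeros = original background.
        h = torch.relu(self.hidden(x))
        h = h * (1.0 + self.switch_gain(switch))   # multiplicative gating by the switch units
        return self.out(h)

# Context A: train normally with switch = torch.zeros(batch_size, 4).
# Context B: freeze the base network and train only the switching pathway.
model = SwitchingMLP()
for p in list(model.hidden.parameters()) + list(model.out.parameters()):
    p.requires_grad = False
optimizer_b = torch.optim.Adam(model.switch_gain.parameters(), lr=1e-3)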

