Learning Evolution: A Survey

2021 ◽  
pp. 4978-4987
Author(s):  
Nada Hussain Ali ◽  
Matheel Emaduldeen Abdulmunem ◽  
Akbas Ezaldeen Ali

Learning is the process of gaining knowledge and applying that knowledge to behavior. The concept of learning is no longer restricted to human beings; it has expanded to include machines, which can now behave according to knowledge gained from their environment. The learning process is evolving in both humans and machines: to keep pace with technology, human learning has evolved toward micro-learning, while machine learning has evolved into deep learning. This paper surveys the evolution of learning, covering the foundations of machine learning, its evolved form of deep learning, and micro-learning as a new learning technology applicable to both human and machine learning. A procedural comparison is carried out to clarify the purpose of this survey, and a related discussion reinforces the aim of the study. Finally, concluding points are presented that summarize the practical evolution of the different machine learning concepts over time.

2021 ◽  
Author(s):  
Lun Ai ◽  
Stephen H. Muggleton ◽  
Céline Hocquette ◽  
Mark Gromowski ◽  
Ute Schmid

Abstract Given the recent successes of Deep Learning in AI, there has been increased interest in the role of, and need for, explanations in machine-learned theories. A distinct notion in this context is Michie's definition of ultra-strong machine learning (USML). USML is demonstrated by a measurable increase in human performance on a task following provision to the human of a symbolic machine-learned theory for task performance. A recent paper demonstrated the beneficial effect of a machine-learned logic theory on a classification task, yet no existing work to our knowledge has examined the potential harmfulness of machine involvement for human comprehension during learning. This paper investigates the explanatory effects of a machine-learned theory in the context of simple two-person games and proposes a framework for identifying the harmfulness of machine explanations based on the Cognitive Science literature. The approach involves a cognitive window consisting of two quantifiable bounds and is supported by empirical evidence collected from human trials. Our quantitative and qualitative results indicate that human learning aided by a symbolic machine-learned theory that satisfies the cognitive window achieves significantly higher performance than human self-learning. Results also demonstrate that human learning aided by a symbolic machine-learned theory that fails to satisfy this window leads to significantly worse performance than unaided human learning.


Algorithms ◽  
2021 ◽  
Vol 14 (2) ◽  
pp. 39
Author(s):  
Carlos Lassance ◽  
Vincent Gripon ◽  
Antonio Ortega

Deep Learning (DL) has attracted a lot of attention for its ability to reach state-of-the-art performance in many machine learning tasks. The core principle of DL methods consists of training composite architectures in an end-to-end fashion, where inputs are associated with outputs trained to optimize an objective function. Because of their compositional nature, DL architectures naturally exhibit several intermediate representations of the inputs, which belong to so-called latent spaces. When treated individually, these intermediate representations are most of the time unconstrained during the learning process, as it is unclear which properties should be favored. However, when processing a batch of inputs concurrently, the corresponding set of intermediate representations exhibits relations (what we call a geometry) on which desired properties can be sought. In this work, we show that it is possible to introduce constraints on these latent geometries to address various problems. In more detail, we propose to represent geometries by constructing similarity graphs from the intermediate representations obtained when processing a batch of inputs. By constraining these Latent Geometry Graphs (LGGs), we address the three following problems: (i) reproducing the behavior of a teacher architecture is achieved by mimicking its geometry, (ii) designing efficient embeddings for classification is achieved by targeting specific geometries, and (iii) robustness to deviations of the inputs is achieved by enforcing smooth variation of geometry between consecutive latent spaces. Using standard vision benchmarks, we demonstrate the ability of the proposed geometry-based methods to solve the considered problems.
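The similarity-graph construction at the core of LGGs can be sketched as follows. This is a minimal illustrative implementation, not the authors' code: the cosine similarity measure and the squared-difference graph-matching loss are simplifying assumptions, and all function names are hypothetical.

```python
import math

def cosine_similarity(u, v):
    # Cosine of the angle between two latent vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def latent_geometry_graph(batch_reprs):
    # Pairwise similarity matrix over one batch's intermediate
    # representations: the "geometry" of that latent space.
    n = len(batch_reprs)
    return [[cosine_similarity(batch_reprs[i], batch_reprs[j])
             for j in range(n)] for i in range(n)]

def graph_distance(g_student, g_teacher):
    # Squared mismatch between two geometries; minimizing this is one
    # way to make a student mimic a teacher's geometry (problem (i)).
    return sum((a - b) ** 2
               for row_a, row_b in zip(g_student, g_teacher)
               for a, b in zip(row_a, row_b))
```

In this sketch, distillation would penalize `graph_distance` between the student's and teacher's graphs at matched layers, and smoothness between consecutive latent spaces (problem (iii)) would penalize the distance between graphs of adjacent layers.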


2021 ◽  
Author(s):  
Muhammad Sajid

Abstract Machine learning is proving successful in all fields of life, including medicine, automotive, planning, and engineering. In the world of geoscience, ML has shown impressive results in seismic fault interpretation, advanced seismic attribute analysis, facies classification, and the extraction of geobodies such as channels, carbonates, and salt. One of the challenges faced in geoscience is the availability of labeled data, which is one of the most time-consuming requirements of supervised deep learning. In this paper, an advanced learning approach is proposed for geoscience in which the machine observes the seismic interpretation activities and learns simultaneously as the interpretation progresses. Initial testing showed that with the proposed method, combined with transfer learning, machine learning performance is highly effective: the machine accurately predicts features, requiring only minor post-prediction filtering to be accepted as the optimal interpretation.
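The learn-as-you-interpret idea can be illustrated with a toy online classifier that takes one gradient step each time the interpreter labels a sample. This is only a hedged sketch: the paper does not specify its model, features, or update rule, so the logistic-regression update and all names below are assumptions.

```python
import math

class OnlineInterpreterModel:
    # Toy online classifier updated as each interpreted sample arrives.
    # Illustrative only: not the model described in the paper.
    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict_proba(self, x):
        # Logistic score: probability the sample is (say) a fault.
        z = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
        return 1.0 / (1.0 + math.exp(-z))

    def observe(self, x, label):
        # One SGD step on the sample the interpreter just labelled,
        # so the model improves while interpretation progresses.
        err = self.predict_proba(x) - label
        self.w = [wi - self.lr * err * xi for wi, xi in zip(self.w, x)]
        self.b -= self.lr * err
```

A pretrained network could supply the features `x` (the transfer-learning part), with this incremental update absorbing the interpreter's ongoing picks.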


Deep learning technology can accurately detect the presence of diseases and pests on agricultural farms. Building on such machine learning algorithms, we can even accurately predict the likelihood of future disease and pest attacks. When spraying fertilizer or pesticide to eliminate a host, a normal human monitoring system cannot accurately estimate the extent and severity of a pest or disease attack on the farm, or the total amount of treatment required. An artificial perceptron, by contrast, can accurately estimate these values for a specified target area and recommend corrective measures along with the amount of fertilizer or pesticide to be sprayed.


2022 ◽  
Vol 2 ◽  
Author(s):  
Rasheed Omobolaji Alabi ◽  
Alhadi Almangush ◽  
Mohammed Elmusrati ◽  
Antti A. Mäkitie

Oral squamous cell carcinoma (OSCC) is one of the most prevalent cancers worldwide, and its incidence is on the rise in many populations. The high incidence rate, late diagnosis, and improper treatment planning still form a significant concern. Diagnosis at an early stage is important for better prognosis, treatment, and survival. Despite recent improvements in the understanding of the molecular mechanisms, late diagnosis and the approach toward precision medicine for OSCC patients remain a challenge. To enhance precision medicine, deep learning techniques have been touted as a way to enhance early detection and consequently reduce cancer-specific mortality and morbidity. These techniques have made significant progress in recent years in extracting and analyzing vital information from medical imaging, and therefore have the potential to assist in the early-stage detection of oral squamous cell carcinoma. Furthermore, automated image analysis can assist pathologists and clinicians in making informed decisions about cancer patients. This article discusses the technical knowledge and algorithms of deep learning for OSCC. It examines the application of deep learning technology to cancer detection, image classification, segmentation and synthesis, and treatment planning. Finally, we discuss how this technique can assist in precision medicine and the future perspective of deep learning technology in oral squamous cell carcinoma.


Author(s):  
Bhanu Chander

Artificial intelligence (AI) aims to build machines that can do everything a human being can do, and produce better results; in other words, AI systems can derive solutions from data on their own. Within AI, machine learning (ML) offers a wide variety of algorithms that produce increasingly accurate results. As technology improves, growing amounts of data become available, but with classical ML it is very difficult to extract high-level, abstract features from raw data, and it is hard to know which features should be extracted. Deep learning addresses this: its algorithms are modeled on how human brains process data. Deep learning is a particular kind of machine learning that provides great power and flexibility through its attempts to learn multiple levels of representation via the operations of multiple layers. A brief overview of deep learning is given, covering platforms, models, autoencoders, CNNs, RNNs, and applications. Deep learning is likely to have many more successes in the near future because it requires very little engineering by hand.
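As a concrete illustration of the autoencoder idea mentioned above, here is a minimal sketch: an encoder compresses the input into a smaller latent code, and a decoder reconstructs the input from that code. The layer sizes, weights, and function names are illustrative assumptions, not taken from the text.

```python
def relu(v):
    # Elementwise rectified linear activation.
    return [max(0.0, x) for x in v]

def dense(x, weights, bias):
    # Fully connected layer; `weights` holds one row per output unit.
    return [sum(w * xi for w, xi in zip(row, x)) + b
            for row, b in zip(weights, bias)]

def autoencoder_forward(x, enc_w, enc_b, dec_w, dec_b):
    # Encode the input into a smaller latent code, then decode it back.
    code = relu(dense(x, enc_w, enc_b))   # bottleneck: len(code) < len(x)
    recon = dense(code, dec_w, dec_b)     # reconstruction of the input
    return code, recon

def reconstruction_error(x, recon):
    # Squared reconstruction error; training minimizes this.
    return sum((a - b) ** 2 for a, b in zip(x, recon))
```

Training (omitted here) would adjust the encoder and decoder weights by gradient descent to minimize `reconstruction_error` over a dataset, forcing the bottleneck code to capture the input's salient features.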


Complexity ◽  
2019 ◽  
Vol 2019 ◽  
pp. 1-10
Author(s):  
Qi Zhu ◽  
Ning Yuan ◽  
Donghai Guan

In recent years, self-paced learning (SPL) has attracted much attention for the improvements it brings to machine learning algorithms based on nonconvex optimization. As a methodology introduced from human learning, SPL dynamically evaluates the learning difficulty of each sample and weights the learning model to counter the negative effects of hard-to-learn samples. In this study, we propose a cognitively driven SPL method, retrospective robust self-paced learning (R2SPL), which is inspired by two aspects of the human learning process: misclassified samples are more memorable in subsequent learning, and a model from a later learning phase, built on a large number of samples, can be used to reduce the risk of poor generalization in the initial learning phase. We simultaneously estimate the degree of learning difficulty and the likelihood of misclassification at each step of SPL, and propose a framework for constructing multilevel SPL that improves the robustness of SPL's initial learning phase. The proposed method can be viewed as a multilayer model in which the output of the previous layer guides the construction of a robust initialization model for the next layer. The experimental results show that R2SPL outperforms conventional self-paced learning models in classification tasks.
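The classic hard-weighting scheme that underlies SPL (the baseline that R2SPL builds on, not the R2SPL method itself) can be sketched as follows: each sample whose current loss falls below an age parameter lambda is admitted with weight 1, and growing lambda gradually admits harder samples. The function names and the growth schedule are illustrative assumptions.

```python
def spl_weights(losses, lam):
    # Hard self-paced weights: "easy" samples (loss < lambda) get
    # weight 1 and participate in the next model update; the rest
    # are held out for now.
    return [1.0 if loss < lam else 0.0 for loss in losses]

def spl_schedule(losses, lam0=0.5, growth=1.5, rounds=3):
    # As lambda grows round by round, harder samples are gradually
    # admitted to training. Returns the admitted count per round.
    lam = lam0
    admitted = []
    for _ in range(rounds):
        admitted.append(sum(spl_weights(losses, lam)))
        lam *= growth
    return admitted
```

In a full SPL loop, each round would retrain the model on the currently admitted samples and recompute the per-sample losses before lambda is increased again.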


2019 ◽  
Vol 8 (3) ◽  
pp. 8619-8622

Because of the market's complexity and volatility, people constantly face challenges in understanding the current state of market shares and forecasting their future. The stock market is a very important aspect of any financial investment, so it is necessary to study and understand its price fluctuations. This paper describes a stock market prediction model using a Recurrent Digital Neural Network (RDNN). The model is designed around three important machine learning concepts: the recurrent neural network (RNN), the multilayer perceptron (MLP), and reinforcement learning (RL). Deep learning is used to automatically extract important features of the stock market, and reinforcement learning over these features is useful for future prediction; the system uses historical stock market data to learn dynamic market behavior when making decisions in an unknown environment. The paper describes both this understanding of the dynamic stock market and the deep learning technology used to predict future stock prices.
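The recurrent part of such a model can be illustrated with a minimal Elman-style RNN forward pass over a price sequence: the final hidden state summarizes the history and could feed a downstream predictor. This is a generic sketch, not the paper's RDNN; the weights, sizes, and names are assumptions.

```python
import math

def rnn_step(x, h, w_xh, w_hh, b_h):
    # One Elman RNN step: new hidden state from the current input x
    # and the previous hidden state h, squashed through tanh.
    return [math.tanh(sum(wx * xi for wx, xi in zip(row_x, x)) +
                      sum(wh * hi for wh, hi in zip(row_h, h)) + b)
            for row_x, row_h, b in zip(w_xh, w_hh, b_h)]

def rnn_forward(sequence, hidden_size, w_xh, w_hh, b_h):
    # Run the RNN over a sequence of (e.g. daily price) feature
    # vectors; the final state summarises the whole history.
    h = [0.0] * hidden_size
    for x in sequence:
        h = rnn_step(x, h, w_xh, w_hh, b_h)
    return h
```

In a trained model, this final state would be passed to an output layer (the MLP part) to produce the price forecast, with the weights learned from historical data.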


2018 ◽  
Vol 2 ◽  
pp. e25833
Author(s):  
Steve Kelling

Over the next 5 years, major advances in the development and application of numerous technologies related to computing, mobile phones, artificial intelligence (AI), and augmented reality (AR) will have a dramatic impact on biodiversity monitoring and conservation. Over a 2-week period, several of us had the opportunity to meet with multiple technology experts in Silicon Valley, California, USA to discuss trends in technology innovation and how they could be applied to conservation science and ecology research. Here we briefly highlight some of the key points of these meetings with respect to AI and Deep Learning.

Computing: Investment and rapid growth in AI and Deep Learning technologies are transforming how machines can perceive the environment. Much of this change is due to the increased processing speeds of Graphics Processing Units (GPUs), now a billion-dollar industry. Machine learning applications such as convolutional neural networks (CNNs) run more efficiently on GPUs and are being applied to analyze visual imagery and sounds in real time. Rapid advances in CNNs that use both supervised and unsupervised learning to train the models are improving accuracy. Taking a Deep Learning approach, in which the base layers of the model are built upon datasets of known images and sounds (supervised learning) while later layers rely on unclassified images or sounds (unsupervised learning), dramatically improves the flexibility of CNNs in perceiving novel stimuli. The potential to have autonomous sensors gathering biodiversity data in the same way personal weather stations gather atmospheric information is close at hand.

Mobile Phones: The phone is the most widely used information appliance in the world, and no device on the near horizon challenges this platform, for several key reasons. First, network access is ubiquitous in many parts of the world. Second, batteries are improving by about 20% annually, allowing for more functionality. Third, app development is a growing industry with significant investment in specializing apps for machine learning. While GPUs already run on phones for video streaming, there is much optimism that reduced or approximate Deep Learning models will operate on phones. These models are already working in the lab, with the biggest hurdle being power consumption; developing energy-efficient applications and algorithms to run complicated AI processes will be important. It is just a matter of time before industry puts AI functionality on phones.

These rapid improvements in computing and mobile phone technologies have huge implications for biodiversity monitoring, conservation science, and understanding ecological systems. Computing: AI processing of video imagery or acoustic streams creates the potential to deploy autonomous sensors in the environment that will be able to detect and classify organisms to species. Further, AI processing of Earth spectral imagery has the potential to provide finer-grained classification of habitats, which is essential for developing fine-scale models of species distributions over broad spatial and temporal extents. Mobile Phones: increased computing functionality and more efficient batteries will allow applications to be developed that improve an individual's perception of the world. Already, the AI functionality of Merlin improves a birder's ability to accurately identify a bird. Linking this functionality to sensor devices such as specialized glasses, binoculars, or listening devices will help an individual detect and classify objects in the environment.

In conclusion, computing technology is advancing at a rapid rate, and soon autonomous sensors placed strategically in the environment will augment the species occurrence data gathered by humans. The mobile phone in everyone's pocket should likewise be thought of strategically, as a way to connect people to the environment and improve their ability to gather meaningful biodiversity information.

