Does the Human Brain Resort to AI's Deep Learning in Order to Solve Problems?

2020
pp. 73-86
Author(s):  
Prof. M S S El Namaki

Problem solving is a daily occurrence in business and in the human brain. Businesses resort to a variety of modes to find answers to their problems, and human brains likewise adopt a variety of measures to solve their own brand of problems. Artificial intelligence technologies have been extending a helping hand to business in the search for problem-solving mechanisms, with machine learning and deep learning currently recognized as prime modes for business insight and problem solving. Does the human brain possess competencies and instruments that compare to the deep learning technologies adopted by AI?

Author(s):  
Bhanu Chander

Artificial intelligence (AI) is often defined as a machine that can do everything a human being can do and produce better results; in other words, AI aims to let data itself produce solutions. Within the broader field of AI, machine learning (ML) offers a wide variety of algorithms that produce increasingly accurate results, and improvements in technology have made ever larger amounts of data available. With conventional ML, however, it is very difficult to extract high-level, abstract features from raw data, and it is often hard to know which features should be extracted in the first place. Deep learning addresses this: its algorithms are modeled on how the human brain processes data. Deep learning is a particular kind of machine learning that provides flexibility and great power by learning multiple levels of representation through the operations of multiple layers. This chapter gives a brief overview of deep learning, its platforms, models, autoencoders, CNNs, RNNs, and applications. Deep learning is likely to see many more successes in the near future because it requires very little engineering by hand.
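As a rough, illustrative sketch of the "multiple levels of representation" idea described above, the snippet below stacks a few nonlinear layers and trains them jointly; the framework (PyTorch), layer sizes, and synthetic data are assumptions made for the example, not details from the chapter.

```python
# Minimal sketch: a small multi-layer network that learns representations
# layer by layer. Framework (PyTorch), sizes, and synthetic data are assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(              # each block is one "level of representation"
    nn.Linear(20, 64), nn.ReLU(),   # low-level features from raw input
    nn.Linear(64, 32), nn.ReLU(),   # more abstract intermediate features
    nn.Linear(32, 1),               # task-specific output (regression here)
)

x = torch.randn(256, 20)                          # synthetic "raw data"
y = x[:, :1] * 0.5 + torch.randn(256, 1) * 0.1    # synthetic target

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for step in range(200):             # all layers are learned jointly; no hand-crafted features
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
print(f"final training loss: {loss.item():.4f}")
```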


2021
Author(s):  
Yew Kee Wong

In the information era, enormous amounts of data have become available to decision makers. Big data refers to datasets that are not only large but also high in variety and velocity, which makes them difficult to handle using traditional tools and techniques. Owing to the rapid growth of such data, solutions need to be studied and provided in order to handle these datasets and to extract value and knowledge from them. Machine learning is a method of data analysis that automates analytical model building. It is a branch of artificial intelligence based on the idea that systems can learn from data, identify patterns, and make decisions with minimal human intervention. Such minimal human intervention can be achieved through machine learning and, in particular, the application of advanced deep learning techniques to big data. This paper aims to analyse some of the different machine learning and deep learning algorithms and methods, as well as the opportunities provided by AI applications in various decision-making domains.
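As a hedged illustration of "automated analytical model building" on tabular data, the sketch below fits a classifier and evaluates it on held-out samples; the library (scikit-learn), model choice, and synthetic dataset are assumptions for the example only.

```python
# Minimal sketch of automated model building: fit a classifier on tabular data
# and let it find the decision rules. Library (scikit-learn) and synthetic data
# are illustrative assumptions, not from the paper.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=5_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)   # patterns learned from data
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```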


2016
Vol 138 (04)
pp. 32-37
Author(s):  
Alan S. Brown

This article presents a dilemma related to the increasing use of robots at work. Artificial intelligence could erase jobs or create them, but economists agree that a new generation of smart machines will alter the rules of employment. Two emerging technologies that will help robots learn even faster are cloud robotics and deep learning, an advanced type of machine learning that allows robots to learn things that humans understand tacitly. However, robots require controlled environments, while humans, who are more flexible, can cope with unstructured tasks. That same adaptability is essential for medical technicians, plumbers, electricians, and many other middle-skill jobs. Experts expect pressures on middle-skill jobs to eventually reverse, because these jobs combine not only knowledge but also adaptability, problem solving, common sense, and the ability to communicate with other people. Businesses are already pairing human flexibility with mechanical precision.


Author(s):  
Sanjay Saxena
Sudip Paul
Adhesh Garg
Angana Saikia
Amitava Datta

Computational neuroscience is inspired by the mechanisms of the human brain, and neural networks have reformed machine learning and artificial intelligence. Deep learning is a type of machine learning that teaches computers to do what comes naturally to people: learn by example. It is inspired by biological brains and has become the essential class of models in the field of machine learning. Deep learning involves several layers of computation, and researchers and scientists around the world are currently focusing on implementations of different deep models and architectures. This chapter presents the major architectures of deep networks, covering the convolutional neural network, the recurrent neural network, the multilayer perceptron, and many more. It then discusses the CNN (convolutional neural network) and its different pretrained models in more detail, given their central role in visual imagery. The chapter also deliberates on the similarity of deep models and architectures to the human brain.
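The following sketch illustrates the reuse of a pretrained CNN mentioned above; the framework (PyTorch/torchvision), the ResNet-18 model, and the hypothetical 10-class task are assumptions chosen for illustration, not details from the chapter.

```python
# Minimal sketch of transfer learning with a pretrained CNN: keep the learned
# visual features, retrain only the classification head. Framework and model
# choice are illustrative assumptions.
import torch
import torchvision.models as models

# Load a ResNet-18 pretrained on ImageNet.
cnn = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Replace the final layer for a hypothetical 10-class task.
cnn.fc = torch.nn.Linear(cnn.fc.in_features, 10)

dummy_batch = torch.randn(4, 3, 224, 224)    # four fake RGB images
logits = cnn(dummy_batch)
print(logits.shape)                          # torch.Size([4, 10])
```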


Author(s):  
Prarthana Dutta
Naresh Babu Muppalaneni
Ripon Patgiri

The world has been evolving with new technologies and advances day by day. With the advent of learning technologies in every field, the research community is able to provide solutions in almost every aspect of life through applications of artificial intelligence, machine learning, deep learning, computer vision, etc. However, alongside such high achievements, these methods lag behind in their ability to explain their predictions. The current situation is such that modern technologies can predict and decide upon various cases more accurately and quickly than a human, yet fail to provide an answer when asked why their predictions should be trusted. In order to attain a deeper understanding of this rising trend, we explore a very recent and much-discussed line of work that provides rich insight into a prediction being made: "Explainability." The main premise of this survey is to provide an overview of the research explored in the domain and to convey the current scenario along with the advancements published to date in this field. This survey is intended to provide a comprehensive background of the broad spectrum of Explainability.
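To make the notion of explaining a prediction concrete, here is a minimal sketch of one simple, generic technique (permutation feature importance); the library (scikit-learn), model, and dataset are assumptions for illustration and are not drawn from the survey itself.

```python
# Minimal sketch of a simple explainability technique (permutation feature
# importance): which inputs did the model actually rely on?
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# Shuffle each feature in turn and measure how much the score drops.
result = permutation_importance(model, data.data, data.target,
                                n_repeats=10, random_state=0)
top = result.importances_mean.argsort()[::-1][:5]
for i in top:
    print(f"{data.feature_names[i]:<25} importance {result.importances_mean[i]:.3f}")
```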


2020
Vol 4 (02)
pp. 116-120
Author(s):  
Srinath Damodaran
Arjun Alva
Srinath Kumar
Muralidhar Kanchi

The creation of intelligent software or systems, machine learning, and deep learning technologies are the integral components of artificial intelligence. Point-of-care ultrasound involves the bedside use of ultrasound to answer specific diagnostic questions and to assess real-time physiologic responses to treatment. This article provides insight into the pearls and pitfalls of artificial intelligence in point-of-care ultrasound for the coronavirus disease 2019 (COVID-19) pandemic.


2018
Vol 15 (1)
pp. 6-28
Author(s):  
Javier Pérez-Sianes
Horacio Pérez-Sánchez
Fernando Díaz

Background: Automated compound testing is currently the de facto standard method for drug screening, but it has not brought the great increase in the number of new drugs that was expected. Computer-aided compound searching, known as Virtual Screening, has shown its benefits to this field as a complement or even an alternative to robotic drug discovery. There are different methods and approaches to address this problem, and most of them can be grouped into one of the main screening strategies. Machine learning, however, has established itself as a virtual screening methodology in its own right, and it may grow in popularity with the new trends in artificial intelligence. Objective: This paper attempts to provide a comprehensive and structured review that collects the most important proposals made so far in this area of research. Particular attention is given to some recent developments in the machine learning field: the deep learning approach, which is pointed to as a future key player in the virtual screening landscape.
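A minimal sketch of the machine-learning screening strategy described above: train on assayed compounds, then rank an unscreened library by predicted activity. The random "fingerprints", the library (scikit-learn), and the model are assumptions; a real pipeline would compute descriptors with a cheminformatics toolkit such as RDKit.

```python
# Minimal sketch of ML-based virtual screening with synthetic fingerprints.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_known = rng.integers(0, 2, size=(1_000, 512))   # binary fingerprints of assayed compounds
y_known = rng.integers(0, 2, size=1_000)          # 1 = active, 0 = inactive

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_known, y_known)

X_library = rng.integers(0, 2, size=(10_000, 512))   # unscreened virtual library
scores = model.predict_proba(X_library)[:, 1]        # predicted activity probability
top_hits = np.argsort(scores)[::-1][:10]             # compounds to test first
print("top-ranked library indices:", top_hits)
```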


2020
Vol 114
pp. 242-245
Author(s):  
Jootaek Lee

The term artificial intelligence (AI) has changed since it was first coined by John McCarthy in 1956. AI, believed to have originated with Kurt Gödel's unprovable computational statements in 1931, is now often called deep learning or machine learning. AI is defined as a computer machine with the ability to make predictions about the future and solve complex tasks using algorithms. AI algorithms are enhanced and become effective with big data capturing the present and the past, while still necessarily reflecting human biases in their models and equations. AI is also capable of making choices like humans, mirroring human reasoning. AI can help robots efficiently repeat the same labor-intensive procedures in factories and can analyze historic and present data efficiently through deep learning, natural language processing, and anomaly detection. Thus, AI covers a spectrum of augmented intelligence relating to prediction, autonomous intelligence relating to decision making, automated intelligence for labor robots, and assisted intelligence for data analysis.


Author(s):  
Petar Radanliev
David De Roure
Kevin Page
Max Van Kleek
Omar Santos
...  

Multiple governmental agencies and private organisations have made commitments to the colonisation of Mars. Such colonisation requires complex systems and infrastructure that could be very costly to repair or replace in the event of cyber-attacks. This paper surveys deep learning algorithms, IoT cyber security and risk models, and established mathematical formulas to identify the best approach for developing a dynamic and self-adapting system for predictive cyber risk analytics, supported by Artificial Intelligence and Machine Learning and by real-time intelligence in edge computing. The paper presents a new mathematical approach for integrating concepts of cognition engine design, edge computing, and Artificial Intelligence and Machine Learning to automate anomaly detection. This engine instigates a step change by applying Artificial Intelligence and Machine Learning embedded at the edge of IoT networks to deliver safe and functional real-time intelligence for predictive cyber risk analytics. This will enhance capacities for risk analytics and assist in the creation of a comprehensive and systematic understanding of the opportunities and threats that arise when edge computing nodes are deployed, and when Artificial Intelligence and Machine Learning technologies are migrated to the periphery of the internet and into local IoT networks.
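As a hedged illustration of automated anomaly detection running on edge telemetry (not the paper's own engine), the sketch below flags unusual sensor readings; the library (scikit-learn), the detector, and the synthetic data are assumptions.

```python
# Minimal sketch of anomaly detection on an edge node's sensor telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
normal = rng.normal(loc=50.0, scale=2.0, size=(500, 1))   # typical sensor readings
spikes = rng.normal(loc=90.0, scale=5.0, size=(5, 1))     # injected anomalies
telemetry = np.vstack([normal, spikes])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
flags = detector.predict(telemetry)        # -1 marks a suspected anomaly
print("flagged readings:", np.where(flags == -1)[0])
```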


2021
Vol 54 (6)
pp. 1-35
Author(s):  
Ninareh Mehrabi
Fred Morstatter
Nripsuta Saxena
Kristina Lerman
Aram Galstyan

With the widespread use of artificial intelligence (AI) systems and applications in our everyday lives, accounting for fairness has gained significant importance in the design and engineering of such systems. AI systems can be used in many sensitive environments to make important and life-changing decisions; thus, it is crucial to ensure that these decisions do not reflect discriminatory behavior toward certain groups or populations. More recently, work in traditional machine learning and deep learning has begun to address such challenges in different subdomains. With the commercialization of these systems, researchers are becoming more aware of the biases that these applications can contain and are attempting to address them. In this survey, we investigated different real-world applications that have shown biases in various ways, and we listed different sources of bias that can affect AI applications. We then created a taxonomy of the fairness definitions that machine learning researchers have proposed to avoid existing bias in AI systems. In addition, we examined different domains and subdomains in AI, showing what researchers have observed with regard to unfair outcomes in state-of-the-art methods and the ways they have tried to address them. There are still many future directions and solutions that can be pursued to mitigate the problem of bias in AI systems. We hope that this survey will motivate researchers to tackle these issues in the near future by building on existing work in their respective fields.
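As one concrete example of a fairness definition from this literature, the sketch below checks demographic parity via the disparate-impact ratio on hypothetical predictions; the data, group labels, and 80% threshold are illustrative assumptions, not results from the survey.

```python
# Minimal sketch of a demographic-parity / disparate-impact check:
# compare positive-prediction rates across two groups.
import numpy as np

rng = np.random.default_rng(7)
group = rng.integers(0, 2, size=1_000)                 # 0 / 1 protected attribute
# Hypothetical model predictions, deliberately skewed against group 1.
pred = (rng.random(1_000) < np.where(group == 0, 0.60, 0.45)).astype(int)

rate_0 = pred[group == 0].mean()                       # P(pred=1 | group 0)
rate_1 = pred[group == 1].mean()                       # P(pred=1 | group 1)
disparate_impact = rate_1 / rate_0                     # "80% rule": flag if < 0.8
print(f"positive rates: {rate_0:.2f} vs {rate_1:.2f}, ratio {disparate_impact:.2f}")
```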

