General AI

Author(s):  
Stephen K. Reed

People use their cognitive skills to solve a wide range of problems, whereas computers solve only a limited number of specific problems. A goal of artificial intelligence (AI) is to build on its previous success in specific environments to advance toward the generality of human-level intelligence. People are efficient general-purpose learners who can adapt to many situations, such as navigating spatial environments and communicating by using language. To compare human and machine reasoning, the AI community has proposed a standard model of the mind. Measuring progress toward general AI will require a wide variety of intelligence tests. Grand challenges, such as helping scientists win a Nobel Prize, should stimulate development efforts.

2021 ◽  
pp. 3-23
Author(s):  
Stuart Russell

Following the analysis given by Alan Turing in 1951, one must expect that AI capabilities will eventually exceed those of humans across a wide range of real-world decision-making scenarios. Should this be a cause for concern, as Turing, Hawking, and others have suggested? And, if so, what can we do about it? While some in the mainstream AI community dismiss the issue, I will argue that the problem is real: we have to work out how to design AI systems that are far more powerful than ourselves while ensuring that they never have power over us. I believe the technical aspects of this problem are solvable. Whereas the standard model of AI proposes to build machines that optimize known, exogenously specified objectives, a preferable approach would be to build machines that are of provable benefit to humans. I introduce assistance games as a formal class of problems whose solution, under certain assumptions, has the desired property.
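As a rough sketch of the formal setting (following the cooperative inverse reinforcement learning formulation; the notation below is illustrative, not necessarily the abstract's own), an assistance game can be written as a two-agent game in which only the human observes the reward parameter:

```latex
% An assistance game as a two-agent decision problem (illustrative notation).
% The human H knows the reward parameter \theta; the robot R does not,
% and must act under the prior P_0 over \theta.
\[
  \mathcal{G} \;=\; \langle S,\; A^{H},\; A^{R},\; T,\; \Theta,\; R,\; P_{0},\; \gamma \rangle,
\]
\[
  R : S \times A^{H} \times A^{R} \times \Theta \to \mathbb{R},
  \qquad \theta \sim P_{0}, \quad \text{observed only by } H .
\]
```

Both agents maximize the same reward $R$; the robot's uncertainty about $\theta$ is precisely what gives it an incentive to defer to, and learn from, the human.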


Philosophy ◽  
1954 ◽  
Vol 29 (110) ◽  
pp. 231-243 ◽  
Author(s):  
W. Mays

I do not have to apologize for entering upon a discussion of intelligence and intelligence tests; it is a field which comes within the purview of philosophy as well as psychology. Any method of testing intelligence is therefore of common interest, especially as the methodology employed is usually based upon some definite theory as to its nature. The very word intelligence covers a wide range of meanings and psychologists seem to select sections of this range at will in accordance with their particular interest. It may include most of the behavioural activities of man, or be narrowed down so that it refers to certain quantitative or relational aspects of experience. To take the case of the factor analyst, after his concept of intelligence has been analysed and classified into g's and s's, it becomes not a description of the mind, but rather a closed cognitive model of it.


AI Magazine ◽  
2017 ◽  
Vol 38 (4) ◽  
pp. 13-26 ◽  
Author(s):  
John E. Laird ◽  
Christian Lebiere ◽  
Paul S. Rosenbloom

The purpose of this article is to begin the process of engaging the international research community in developing what can be called a standard model of the mind, where the mind we have in mind here is human-like. The notion of a standard model has its roots in physics, where over more than a half-century the international community has developed and tested a standard model that combines much of what is known about particles. This model is assumed to be internally consistent, yet still have major gaps. Its function is to serve as a cumulative reference point for the field while also driving efforts to both extend and break it.


2020 ◽  
Vol 27 (2) ◽  
pp. e100141
Author(s):  
John Fox ◽  
Matthew South ◽  
Omar Khan ◽  
Catriona Kennedy ◽  
Peter Ashby ◽  
...  

Objective: OpenClinical.net is a way of disseminating clinical guidelines to improve quality of care. Its distinctive feature is to combine the benefits of clinical guidelines and other human-readable material with the power of artificial intelligence to give patient-specific recommendations. A key objective is to empower healthcare professionals to author, share, critique, trial and revise these 'executable' models of best practice.

Design: OpenClinical.net Alpha (www.openclinical.net) is an operational publishing platform that uses a class of artificial intelligence techniques called knowledge engineering to capture human expertise in decision-making, care planning and other cognitive skills in an intuitive but formal language called PROforma. PROforma models can be executed by a computer to yield patient-specific recommendations, explain the reasons and provide supporting evidence on demand.

Results: PROforma has been validated in a wide range of applications in diverse clinical settings and specialties, with trials published in high-impact peer-reviewed journals. Trials have included patient workup and risk assessment; decision support (eg, diagnosis, test and treatment selection, prescribing); and adaptive care pathways and care planning. The OpenClinical software platform presently supports authoring, testing, sharing and maintenance. OpenClinical's open-access, open-source repository Repertoire currently carries some 50 diverse examples (https://openclinical.net/index.php?id=69).

Conclusion: OpenClinical.net is a showcase for a PROforma-based approach to improving care quality, safety, efficiency and patient experience in many kinds of routine clinical practice. This human-centred approach to artificial intelligence will help to ensure that it is developed and used responsibly, in ways that are consistent with professional priorities and public expectations.
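PROforma has its own task-network syntax, which is not reproduced here. Purely as an illustration of what an "executable" guideline model means in practice — patient data in, recommendation plus supporting reasons out — a toy sketch in plain Python, with a hypothetical rule name and threshold:

```python
# Toy illustration of an executable guideline rule: NOT PROforma syntax.
# The rule name and the threshold of 2 are hypothetical, for illustration only.

def assess_stroke_risk(score: int) -> dict:
    """Return a patient-specific recommendation together with its reasons."""
    reasons = []
    if score >= 2:
        reasons.append(f"risk score {score} is >= 2 (guideline threshold)")
        decision = "recommend anticoagulation review"
    else:
        reasons.append(f"risk score {score} is below the guideline threshold of 2")
        decision = "no action; reassess at next visit"
    # The caller gets both the decision and the evidence behind it,
    # mirroring the "explain the reasons on demand" behaviour described above.
    return {"decision": decision, "reasons": reasons}

print(assess_stroke_risk(3)["decision"])   # recommend anticoagulation review
```

The point of the sketch is the output shape: an executable model returns not just a decision but the machine-checkable reasons supporting it.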


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1031
Author(s):  
Joseba Gorospe ◽  
Rubén Mulero ◽  
Olatz Arbelaitz ◽  
Javier Muguerza ◽  
Miguel Ángel Antón

Deep learning techniques are being increasingly used in the scientific community as a consequence of the high computational capacity of current systems and the increase in the amount of data available, a result of the digitalisation of society in general and the industrial world in particular. In addition, the emergence of the field of edge computing, which focuses on integrating artificial intelligence as close as possible to the client, makes it possible to implement systems that act in real time without the need to transfer all of the data to centralised servers. The combination of these two concepts can lead to systems with the capacity to make correct decisions and act on them immediately and in situ. Despite this, the low capacity of embedded systems greatly hinders this integration, so the ability to deploy such models on a wide range of micro-controllers would be a great advantage. This paper contributes an environment based on Mbed OS and TensorFlow Lite that can be embedded in any general-purpose embedded system, allowing the introduction of deep learning architectures. The experiments herein show that the proposed system is competitive with other commercial systems.
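One reason deep models fit on micro-controllers at all is 8-bit quantization, the scheme TensorFlow Lite applies when converting a model for embedded targets. A minimal sketch of the affine int8 mapping in standalone Python (illustrating the arithmetic, not the TensorFlow Lite API itself):

```python
# Affine int8 quantization as used for embedded inference: a real value x
# is stored as q = round(x / scale) + zero_point, clamped to [-128, 127].

def quantize(x: float, scale: float, zero_point: int) -> int:
    """Map a real value to an int8 code, saturating at the range limits."""
    q = round(x / scale) + zero_point
    return max(-128, min(127, q))

def dequantize(q: int, scale: float, zero_point: int) -> float:
    """Recover an approximation of the original real value."""
    return (q - zero_point) * scale

# With scale 0.5 and zero_point 0, 3.0 is stored as 6 and recovered exactly;
# out-of-range values saturate at the int8 limits.
print(quantize(3.0, 0.5, 0))     # 6
print(quantize(100.0, 0.5, 0))   # 127 (clamped)
```

Storing weights and activations as int8 rather than float32 cuts memory by a factor of four, which is what makes inference feasible within the few hundred kilobytes of RAM typical of the micro-controllers discussed above.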


Healthcare ◽  
2021 ◽  
Vol 9 (3) ◽  
pp. 331
Author(s):  
Daniele Giansanti ◽  
Ivano Rossi ◽  
Lisa Monoscalco

The development of artificial intelligence (AI) during the COVID-19 pandemic is there for all to see, and has undoubtedly mainly concerned the activities of digital radiology. Nevertheless, the strong perception in the research and clinical application environment is that AI in radiology is like a hammer in search of a nail. Notable developments and opportunities do not yet seem to be matched, in the time of the COVID-19 pandemic, by stable, effective and concrete use in clinical routine; the use of AI often seems limited to research applications. This study considers the future perceived integration of AI with digital radiology after the COVID-19 pandemic and proposes a methodology that, through wide interaction among the actors involved, allows a positioning exercise for acceptance evaluation using a general-purpose electronic survey. The methodology was tested on a first category of professionals, the medical radiology technicians (MRT), and made it possible to (i) collect their impressions on the issue in a structured way, and (ii) collect their suggestions and comments in order to create a specific tool for this professional figure to be used in scientific societies. The study is useful for stakeholders in the field and yielded several noteworthy observations, among them (iii) the perception of great development in thoracic radiography and CT, but a lost opportunity for integration with non-radiological technologies; (iv) the belief that it is appropriate to invest in training and infrastructure dedicated to AI; and (v) the widespread idea that AI can become a strong complementary tool to human activity. From a general point of view, the study is a clear invitation to face the last yard of AI in digital radiology, a last yard that depends greatly on the opinions of the operators of digital radiology and their willingness to accept these technologies.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Seyed Hossein Jafari ◽  
Amir Mahdi Abdolhosseini-Qomi ◽  
Masoud Asadpour ◽  
Maseud Rahgozar ◽  
Naser Yazdani

The entities of real-world networks are connected via different types of connections (i.e., layers). The task of link prediction in multiplex networks is to find missing connections based on both intra-layer and inter-layer correlations. Our observations confirm that in a wide range of real-world multiplex networks, from social to biological and technological, a positive correlation exists between connection probability in one layer and similarity in other layers. Accordingly, a similarity-based automatic general-purpose multiplex link prediction method, SimBins, is devised that quantifies the amount of connection uncertainty based on observed inter-layer correlations in a multiplex network. Moreover, SimBins enhances prediction quality in the target layer by incorporating the effect of link overlap across layers. Applying SimBins to various datasets from diverse domains, our findings indicate that SimBins outperforms the compared methods (both baseline and state-of-the-art) in most instances when predicting links. Furthermore, SimBins imposes only minor computational overhead on the base similarity measures, making it a potentially fast method suitable for large-scale multiplex networks.
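As a simplified illustration of the underlying idea — similarity within the target layer combined with link evidence from another layer — here is a toy two-layer common-neighbours scorer. The additive boost weight is hypothetical, and this is not the published SimBins algorithm, which bins links by the correlations it actually observes:

```python
# Toy multiplex link predictor: score a candidate edge (u, v) in the target
# layer by common neighbours there, boosted if the edge already exists in
# another layer. The boost weight is a hypothetical choice; NOT SimBins.

def score(u, v, target_layer, other_layer, boost=1.0):
    common = len(target_layer.get(u, set()) & target_layer.get(v, set()))
    overlap = boost if v in other_layer.get(u, set()) else 0.0
    return common + overlap

# Two layers as symmetric adjacency sets (e.g., "work" and "friendship" ties):
work   = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
friend = {"a": {"d"}, "d": {"a"}}

# (a, d) share neighbour c in the work layer AND are linked in the friend
# layer, so the pair outscores (a, b), which has only the shared neighbour c:
print(score("a", "d", work, friend))   # 2.0
print(score("a", "b", work, friend))   # 1.0
```

The toy example captures the abstract's key empirical point: a link present in one layer raises the predicted probability of the same link in another.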


1991 ◽  
Vol 45 (10) ◽  
pp. 1739-1745
Author(s):  
Min J. Yang ◽  
Paul W. Yang

A computerized infrared interpreter has been developed on an IBM personal computer (PC) running under the Microsoft disk operating system (DOS). Based on the original Merck Sharp & Dohme Research Laboratory Program for the Analysis of InfRared Spectra (PAIRS), this infrared interpreter, PC PAIRS+, is capable of analyzing infrared spectra measured on a wide variety of spectrophotometers. Modifications to PAIRS now allow the application of both artificial intelligence and library-searching techniques in the program. A new algorithm has been devised to combine the results from the library searching and the PAIRS program to enhance the dependability of interpretational data. The increased capability of this infrared interpreter, along with its applicability on a personal computer, results in a powerful, general-purpose and easy-to-use infrared interpretation system. Applications of PC PAIRS+ to petrochemical samples are described.
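The library-searching half of such a system is, at its core, a nearest-match computation over digitized spectra. A minimal sketch using cosine similarity on absorbance vectors (illustrative only; the abstract does not specify PC PAIRS+'s actual search metric or its combination rule with the PAIRS interpretation rules):

```python
# Toy spectral library search: rank reference spectra by cosine similarity
# to a query spectrum sampled on the same wavenumber grid.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def best_match(query, library):
    """library: dict of compound name -> spectrum vector. Returns (name, score)."""
    return max(((name, cosine(query, ref)) for name, ref in library.items()),
               key=lambda t: t[1])

# Hypothetical four-point absorbance vectors for two reference compounds:
library = {
    "polystyrene":  [0.1, 0.9, 0.2, 0.8],
    "polyethylene": [0.9, 0.1, 0.8, 0.1],
}
name, s = best_match([0.15, 0.85, 0.25, 0.75], library)
print(name)   # polystyrene
```

Combining such a ranked library hit with rule-based interpretation, as the abstract describes, lets each method compensate for the other's failure modes.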


Author(s):  
Annika Reinke ◽  
Minu D. Tizabi ◽  
Matthias Eisenmann ◽  
Lena Maier-Hein
