Artificial Intelligence for the Future Radiology Diagnostic Service

2021 ◽  
Vol 7 ◽  
Author(s):  
Seong K. Mun ◽  
Kenneth H. Wong ◽  
Shih-Chung B. Lo ◽  
Yanni Li ◽  
Shijir Bayarsaikhan

Radiology historically has been a leader of digital transformation in healthcare. The introduction of digital imaging systems, picture archiving and communication systems (PACS), and teleradiology transformed radiology services over the past 30 years. Radiology is again at a crossroads for the next generation of transformation, possibly evolving into a one-stop integrated diagnostic service. Artificial intelligence and machine learning promise to offer radiology powerful new digital tools to facilitate the next transformation. The radiology community has been developing computer-aided diagnosis (CAD) tools based on machine learning (ML) over the past 20 years. Among various AI techniques, deep-learning convolutional neural networks (CNNs) and their variants have been widely used in medical image pattern recognition. Since the 1990s, many CAD tools and products have been developed. However, clinical adoption has been slow due to a lack of substantial clinical advantages, difficulties integrating into existing workflows, and uncertain business models. This paper proposes three pathways for AI's role in radiology beyond current CNN-based capabilities: 1) improve the performance of CAD, 2) improve the productivity of radiology services through AI-assisted workflow, and 3) develop radiomics that integrate data from radiology, pathology, and genomics to facilitate the emergence of a new integrated diagnostic service.
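
As a hedged illustration (not taken from the paper), the sketch below shows the kind of deep-learning building block a CNN-based CAD tool relies on: a small convolutional network that maps a single-channel medical image to a probability of abnormality. The architecture, layer sizes, and image resolution are assumptions made purely for illustration.

# Illustrative sketch only: a tiny CNN of the kind used in CAD-style
# medical image pattern recognition. All sizes are assumptions.
import torch
import torch.nn as nn

class TinyCAD(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1))

    def forward(self, x):
        # x: a batch of single-channel images, shape (N, 1, H, W)
        return torch.sigmoid(self.head(self.features(x)))

model = TinyCAD()
prob_abnormal = model(torch.randn(1, 1, 256, 256))  # one random 256x256 "image"
print(prob_abnormal.item())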

2020 ◽  
Vol 114 ◽  
pp. 242-245
Author(s):  
Jootaek Lee

The term Artificial Intelligence (AI) has changed since it was first coined by John McCarthy in 1956. AI, believed to have been created with Kurt Gödel's unprovable computational statements in 1931, is now called deep learning or machine learning. AI is defined as a computer machine with the ability to make predictions about the future and solve complex tasks using algorithms. AI algorithms are enhanced and become effective with big data capturing the present and the past, while still necessarily reflecting human biases in models and equations. AI is also capable of making choices like humans, mirroring human reasoning. AI can help robots efficiently repeat the same labor-intensive procedures in factories and can analyze historic and present data efficiently through deep learning, natural language processing, and anomaly detection. Thus, AI covers a spectrum of augmented intelligence relating to prediction, autonomous intelligence relating to decision making, automated intelligence for labor robots, and assisted intelligence for data analysis.


2018 ◽  
Vol 14 (4) ◽  
pp. 734-747 ◽  
Author(s):  
Constance de Saint Laurent

There has been much hype, over the past few years, about the recent progress of artificial intelligence (AI), especially through machine learning. If one is to believe many of the headlines that have proliferated in the media, as well as in an increasing number of scientific publications, it would seem that AI is now capable of creating and learning in ways that are starting to resemble what humans can do, and that we should start to hope – or fear – that the creation of fully cognisant machines might be something we will witness in our lifetime. However, many of these beliefs are based on deep misconceptions about what AI can do, and how. In this paper, I start with a brief introduction to the principles of AI, machine learning, and neural networks, primarily intended for psychologists and social scientists, who often have much to contribute to the debates surrounding AI but lack a clear understanding of what it can currently do and how it works. I then debunk four common myths associated with AI: 1) it can create, 2) it can learn, 3) it is neutral and objective, and 4) it can solve ethically and/or culturally sensitive problems. In a third and last section, I argue that these misconceptions represent four main dangers: 1) avoiding debate, 2) naturalising our biases, 3) deresponsibilising creators and users, and 4) missing out on some of the potential uses of machine learning. I finally conclude on the potential benefits of using machine learning in research, and thus on the need to defend machine learning without romanticising what it can actually do.


2015 ◽  
Vol 3 (2) ◽  
pp. 115-126 ◽  
Author(s):  
Naresh Babu Bynagari

Artificial Intelligence (AI) is one of the most promising and intriguing innovations of modernity. Its potential is virtually unlimited, from smart music selection in personal gadgets to intelligent analysis of big data and real-time fraud detection and aversion. At the core of the AI philosophy lies an assumption that once a computer system is provided with enough data, it can learn based on that input. The more data is provided, the more sophisticated its learning ability becomes. This feature has acquired the name "machine learning" (ML). The opportunities explored with ML are plentiful today, and one of them is an ability to set up an evolving security system learning from the past cyber-fraud experiences and developing more rigorous fraud detection mechanisms. Read on to learn more about ML, the types and magnitude of fraud evidenced in modern banking, e-commerce, and healthcare, and how ML has become an innovative, timely, and efficient fraud prevention technology.
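
As a rough illustration of the idea described above (not drawn from the article), the sketch below trains a classifier on synthetic labelled transactions standing in for "past cyber-fraud experience" and then scores a new transaction. The features, data, and model choice are assumptions made for illustration only.

# Illustrative sketch only: supervised fraud detection on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Synthetic stand-in for "past experience": [amount, hour_of_day, new_merchant_flag]
X = np.column_stack([
    rng.exponential(100.0, 1000),   # transaction amount
    rng.integers(0, 24, 1000),      # hour of day
    rng.integers(0, 2, 1000),       # 1 if the merchant is new to this customer
])
y = (X[:, 0] > 200) & (X[:, 2] == 1)   # toy labelling rule standing in for real fraud labels

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Score an incoming transaction: large amount, 3 a.m., unknown merchant
print(clf.predict_proba([[900.0, 3, 1]])[0, 1])   # estimated fraud probability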


Behavior analysis is the study of examining a person's conduct in order to build up a profile of that person. It was first used in psychology and later adopted in information technology, where it supports suggesting and developing application content for users. Building applications around users' personal needs is becoming a new trend through the use of artificial intelligence (AI). Many applications rely on machine learning and AI for everything from anticipating purchase behavior to adjusting a home's thermostat to the occupant's ideal temperature at a particular time of day. Machine learning is the technique used to improve a rule's proficiency based on past experience. Drawing on statistical theory, it builds a mathematical model whose real work is to infer from the examples provided; the approach uses computational methods to take the information directly from the data.
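
A minimal sketch of that idea follows, under assumed toy data: past (hour of day, chosen temperature) examples are used to fit a statistical model, which then infers a thermostat setting for a new time of day. The data and the model choice are illustrative assumptions, not part of the original text.

# Illustrative sketch only: infer a preferred temperature from past examples.
import numpy as np
from sklearn.linear_model import LinearRegression

# Past examples: hour of day -> temperature the occupant chose (degrees C)
hours = np.array([[6], [8], [12], [18], [22]])
temps = np.array([19.0, 20.5, 21.5, 22.0, 20.0])

model = LinearRegression().fit(hours, temps)     # the "mathematical model" built from examples
print(model.predict([[20]]))                     # inferred setting for 8 p.m.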


Author(s):  
Melda Yucel ◽  
Gebrail Bekdaş ◽  
Sinan Melih Nigdeli

This chapter presents a summary review of the development of Artificial Intelligence (AI). Definitions of AI are given together with its basic features. The development process of AI and machine learning is presented. Developments in applications from the past to today are mentioned, and the use of AI in different categories is given. Prediction applications using artificial neural networks are given for engineering problems. Using AI methods to predict optimum results is a current trend, and it will become more important in the future.
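
As a hedged sketch of such a prediction application (not taken from the chapter), the example below fits a small feed-forward neural network to synthetic (design parameter, response) pairs and predicts the response for a new parameter value; the underlying function and all settings are stand-in assumptions.

# Illustrative sketch only: ANN regression for an engineering-style prediction task.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, size=(200, 1))                         # a normalised design variable
y = np.sin(2 * np.pi * x).ravel() + 0.05 * rng.normal(size=200)  # toy "structural response"

ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=1)
ann.fit(x, y)                      # learn the parameter-to-response mapping
print(ann.predict([[0.25]]))       # predicted response for a new design parameter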


2021 ◽  
Vol 8 ◽  
Author(s):  
Yujie Song ◽  
Laurène Bernard ◽  
Christian Jorgensen ◽  
Gilles Dusfour ◽  
Yves-Marie Pers

During the past 20 years, the development of telemedicine has accelerated due to the rapid advancement and implementation of more sophisticated connected technologies. In rheumatology, e-health interventions in the diagnosis, monitoring and mentoring of rheumatic diseases take different forms: teleconsultation and telecommunications, mobile applications, mobile devices, digital therapy, and artificial intelligence or machine learning. Telemedicine offers several advantages, in particular by facilitating access to healthcare and providing personalized and continuous patient monitoring. However, some limitations remain to be addressed, such as data security, legal issues, reimbursement methods, and accessibility, as well as the application of recommendations in the development of these tools.


10.23856/3303 ◽  
2019 ◽  
Vol 33 (2) ◽  
pp. 28-35 ◽  
Author(s):  
Inta Kotane ◽  
Daina Znotina ◽  
Serhii Hushko

One of the conditions for the future development of companies is the identification and use of digital capabilities. In recent years, the environment in which we live and work has changed radically. While the emergence of the Internet revolutionised the way we communicate and obtain information, today the availability and mobility of technologies affects consumers' habits and promotes the transformation of classic business models. Aim of the study: to explore and learn about the development trends of digital marketing. Applied research methods: monographic descriptive method, analysis, synthesis, statistical method. The study is based on scientific publications, statistics, and other sources of information. The results of the study show that the most actively used digital marketing tools in 2019 are artificial intelligence / augmented reality / machine learning; video marketing; and chatbots and virtual assistants.


2021 ◽  
pp. 104225872110384
Author(s):  
Fabio Bertoni ◽  
Stefano Bonini ◽  
Vincenzo Capizzi ◽  
Massimo G. Colombo ◽  
Sophie Manigart

Digitization creates new financial channels that complement traditional intermediaries, but may raise concerns over fraud, cybersecurity, or bubbles. Artificial intelligence and machine learning change the way in which traditional investors work. This special issue focuses on economic, cultural, and regulatory determinants of fintech development, and on the new forms of information production and processing engendered by digital entrepreneurial finance. We provide a general overview of digitization in the market for entrepreneurial finance, illustrate how the different articles in the special issue contribute to advancing our knowledge, and identify promising avenues for research.


Thorax ◽  
2020 ◽  
Vol 75 (8) ◽  
pp. 695-701 ◽  
Author(s):  
Sherif Gonem ◽  
Wim Janssens ◽  
Nilakash Das ◽  
Marko Topalovic

The past 5 years have seen an explosion of interest in the use of artificial intelligence (AI) and machine learning techniques in medicine. This has been driven by the development of deep neural networks (DNNs)—complex networks residing in silico but loosely modelled on the human brain—that can process complex input data such as a chest radiograph image and output a classification such as ‘normal’ or ‘abnormal’. DNNs are ‘trained’ using large banks of images or other input data that have been assigned the correct labels. DNNs have shown the potential to equal or even surpass the accuracy of human experts in pattern recognition tasks such as interpreting medical images or biosignals. Within respiratory medicine, the main applications of AI and machine learning thus far have been the interpretation of thoracic imaging, lung pathology slides and physiological data such as pulmonary function tests. This article surveys progress in this area over the past 5 years, as well as highlighting the current limitations of AI and machine learning and the potential for future developments.
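
The sketch below illustrates, under assumed data and a deliberately tiny network, the supervised training the authors describe: a convolutional network is shown labelled images ('normal' = 0, 'abnormal' = 1) and its weights are adjusted to reduce the error between its predictions and the correct labels. It is not the authors' model, and the random tensors merely stand in for a labelled image bank.

# Illustrative sketch only: training a small DNN on labelled images.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(16, 1),
)
loss_fn = nn.BCEWithLogitsLoss()
optimiser = torch.optim.Adam(net.parameters(), lr=1e-3)

# Stand-in for a labelled bank of chest radiographs: 32 images, 32 correct labels
images = torch.randn(32, 1, 128, 128)
labels = torch.randint(0, 2, (32, 1)).float()

for epoch in range(5):                     # a few passes over the labelled data
    optimiser.zero_grad()
    loss = loss_fn(net(images), labels)    # how far predictions are from the labels
    loss.backward()                        # propagate the error back through the network
    optimiser.step()                       # nudge the weights to reduce the error
    print(epoch, loss.item())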


2020 ◽  
Vol 2 (11) ◽  
Author(s):  
Petar Radanliev ◽  
David De Roure ◽  
Rob Walton ◽  
Max Van Kleek ◽  
Rafael Mantilla Montalvo ◽  
...  

We explore the potential and practical challenges in the use of artificial intelligence (AI) in cyber risk analytics, for improving organisational resilience and understanding cyber risk. The research is focused on identifying the role of AI in connected devices such as Internet of Things (IoT) devices. Through a literature review, we identify wide-ranging and creative methodologies for cyber analytics and explore the risks of deliberately influencing or disrupting the behaviours of socio-technical systems. This resulted in the modelling of the connections and interdependencies between a system's edge components and both external and internal services and systems. We focus on proposals for models, infrastructures and frameworks of IoT systems found in both business reports and technical papers. We analyse this juxtaposition of related systems and technologies in academic and industry papers published in the past 10 years. Then, we report the results of a qualitative empirical study that correlates the academic literature with key technological advances in connected devices. The work is based on grouping future and present techniques and presenting the results through a new conceptual framework. With the application of social science's grounded theory, the framework details a new process for a prototype of AI-enabled dynamic cyber risk analytics at the edge.

