AI in global healthcare: Need for robust governance frameworks

2020 ◽  
Author(s):  
Sandeep Reddy ◽  
Sonia Allan ◽  
Simon Coghlan ◽  
Paul Cooper

The re-emergence of artificial intelligence (AI) in popular discourse and its application in medicine, especially via machine learning (ML) algorithms, has excited interest from policymakers and clinicians alike. The use of AI in clinical care in both developed and developing countries is no longer a question of ‘if?’ but ‘when?’. This creates a pressing need not only for sound ethical guidelines but also for robust governance frameworks to regulate AI in medicine around the world. In this article, we discuss which components need to be considered in developing these governance frameworks, and who should lead this worldwide effort.

Author(s):  
Kunal Parikh ◽  
Tanvi Makadia ◽  
Harshil Patel

Dengue is unquestionably one of the biggest health concerns in India and in many other developing countries, and it has cost many lives. Approximately 390 million dengue infections occur worldwide each year, of which around 500,000 are severe and some 25,000 prove fatal. Many factors can drive dengue transmission, such as temperature, humidity, precipitation, and inadequate public health infrastructure, among others. In this paper, we propose a method for predictive analytics on a dengue dataset using the k-nearest neighbours (KNN) machine-learning algorithm. Such analysis would help predict future cases and could save many lives.
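The abstract does not give the authors' implementation or data; a minimal sketch of KNN classification on toy climate features (all feature names and values below are illustrative, not from the paper's dataset) might look like:

```python
from collections import Counter
import math

def knn_predict(train, labels, query, k=3):
    """Classify `query` by majority vote among its k nearest training points."""
    # Sort (distance, label) pairs by Euclidean distance to the query.
    dists = sorted((math.dist(x, query), y) for x, y in zip(train, labels))
    votes = Counter(y for _, y in dists[:k])
    return votes.most_common(1)[0][0]

# Hypothetical weekly records: (temperature °C, humidity %, rainfall mm)
# labelled 1 (outbreak) or 0 (no outbreak).
train = [
    (31, 85, 120), (30, 88, 150), (29, 90, 140),  # hot, humid, wet weeks
    (22, 55, 10),  (24, 60, 5),   (21, 50, 0),    # cooler, drier weeks
]
labels = [1, 1, 1, 0, 0, 0]

print(knn_predict(train, labels, (30, 87, 130)))  # -> 1 (outbreak-like week)
print(knn_predict(train, labels, (23, 58, 8)))    # -> 0 (no-outbreak-like week)
```

In practice, features with different units (temperature vs. rainfall) would be normalized before computing distances, since raw Euclidean distance is dominated by the largest-scale feature.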


2020 ◽  
pp. 97-102
Author(s):  
Benjamin Wiggins

Can risk assessment be made fair? The conclusion of Calculating Race returns to actuarial science’s foundations in probability. The roots of probability rest in a pair of problems posed to Blaise Pascal and Pierre de Fermat in the summer of 1654: “the Dice Problem” and “the Division Problem.” From their very foundation, the mathematics of probability offered the potential not only to be used to gain an advantage (as in the case of the Dice Problem), but also to divide material fairly (as in the case of the Division Problem). As the United States and the world enter an age driven by Big Data, algorithms, artificial intelligence, and machine learning and characterized by an actuarialization of everything, we must remember that risk assessment need not be put to use for individual, corporate, or government advantage but, rather, that it has always been capable of guiding how to distribute risk equitably instead.
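The Division Problem that Pascal and Fermat solved in 1654 (the "problem of points") admits a compact computation: if an interrupted game leaves one player needing r more wins and the other s more, the first player's fair share of the stakes is the probability of taking r rounds before the opponent takes s. As an illustration (not taken from the book itself):

```python
from math import comb

def fair_share(r, s):
    """Pascal-Fermat division of stakes with fair coin-flip rounds.

    Player A needs r more wins, player B needs s more; at most
    r + s - 1 further rounds decide the game. A's fair fraction is
    the probability that A wins at least r of those rounds.
    """
    n = r + s - 1
    wins = sum(comb(n, k) for k in range(r, n + 1))
    return wins / 2 ** n

# Classic case: a game to 3 points interrupted at 2-1.
# A needs 1 more win, B needs 2, so A is owed 3/4 of the pot.
print(fair_share(1, 2))  # -> 0.75
```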


2020 ◽  
Vol 44 (2) ◽  
pp. 241-260
Author(s):  
Rabih Jamil

Using machine learning and artificial intelligence, Uber has disrupted the world taxi industry. In the process, the Uber algorithmic apparatus has perfected the scalable, decentralized tracking and surveillance of mobile living bodies. This article examines the Uber surveillance machinery and discusses the determinants of its algorithmically powered ‘all-seeing power’. The latter is figured as an ‘Algopticon’ that reinvents Bentham’s panopticon in the era of the platform economy.


2017 ◽  
Vol 5 (1) ◽  
pp. 54-58 ◽  
Author(s):  
Zhi-Hua Zhou

Abstract Machine learning is the driving force of the current artificial intelligence (AI) wave. In an interview with NSR, Prof. Thomas Dietterich, Distinguished Professor Emeritus of computer science at Oregon State University in the USA, former president of the Association for the Advancement of Artificial Intelligence (AAAI, the most prestigious association in the field of artificial intelligence) and founding president of the International Machine Learning Society, talked about exciting recent advances in and technical challenges of machine learning, as well as its profound impact on the world.


2020 ◽  
Vol 3 (3) ◽  
pp. 214-227
Author(s):  
Yaojie Zhou ◽  
Xiuyuan Xu ◽  
Lujia Song ◽  
Chengdi Wang ◽  
Jixiang Guo ◽  
...  

Abstract Lung cancer is one of the leading causes of death throughout the world, and there is an urgent need for its precision medical management. Artificial intelligence (AI), encompassing numerous advanced techniques, has been widely applied in the field of medical care. Meanwhile, radiomics based on traditional machine learning has also proven effective at mining information from medical images. With the integration of AI and radiomics, great progress has been made in the early diagnosis, specific characterization, and prognosis of lung cancer, which has attracted attention worldwide. In this study, we give a brief review of the current applications of AI and radiomics for the precision medical management of lung cancer.


Author(s):  
Lenart Kučić ◽  
Nicholas Mirzoeff

Optical and mechanical tools were the first major “augmentation” of human senses. The microscope opened up worlds too small for the optical performance of the eye; the telescope reached into far-off space; X-rays exposed the inaccessible interior of the body. Such augmentations were not innocent, as they demanded a different interpretation of the world, one that could accommodate images of the infinitely small, remote, or hidden. A similar augmentation is now happening with cloud computing, machine vision and artificial intelligence. With these tools, it may be possible to compile and analyze the billions of digital images created daily by people and machines. But who will analyze these images, and for what purpose? Will they help us to better understand society and learn from past mistakes? Or have they already been hijacked by attention merchants and political demagogues who are effectively spreading old ideologies with new communication technologies? Keywords: augmented photography, communication technologies, machine learning, machine vision, reality


Author(s):  
Simon Checksfield

With increasing pressure on limited taxonomical expertise, not only within the Commonwealth Scientific and Industrial Research Organisation (CSIRO) but worldwide, new and innovative ways must be found to assist in the curation and identification of biological specimens. CSIRO, through the National Research Collections Australia (NRCA) and Data61, is hoping to begin a new program of work focused on using Artificial Intelligence (AI) and Machine Learning to build a framework and tools that can help identify a specimen from an image. The framework will include AI models that have been trained by expert taxonomists, thus providing a level of accuracy that has some intrinsic value. NRCA is also exploring how AI could be linked or cross-referenced with another initiative using rapid genetic barcoding to identify all newly collected specimens. Combining genetic and AI determinations will add weight to each, and potentially expose some new AI challenges, such as reconciling morphological evidence with genomic evidence. Whilst acknowledging that challenges still exist regarding standards, acceptance of identification, provenance, accuracy and governance, the NRCA is hoping AI can free our researchers and technicians to work on more pressing and complex issues by reducing the time they spend on basic identification. The impact of such a program will also reach industry and the general public through tools based on the AI models. There is also an opportunity to use this initiative to create global centers of taxonomic expertise, which anyone can use to help identify a specimen.


2012 ◽  
pp. 695-703
Author(s):  
George Tzanis ◽  
Christos Berberidis ◽  
Ioannis Vlahavas

Machine learning is one of the oldest subfields of artificial intelligence and is concerned with the design and development of computational systems that can adapt themselves and learn. The most common machine learning algorithms can be either supervised or unsupervised. Supervised learning algorithms generate a function that maps inputs to desired outputs, based on a set of examples with known output (labeled examples). Unsupervised learning algorithms find patterns and relationships over a given set of inputs (unlabeled examples). Other categories of machine learning are semi-supervised learning, where an algorithm uses both labeled and unlabeled examples, and reinforcement learning, where an algorithm learns a policy of how to act given an observation of the world.
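The distinction the abstract draws between supervised and unsupervised learning can be made concrete with two minimal sketches, a 1-nearest-neighbour classifier (labeled examples) and a single cluster-assignment step of k-means (unlabeled examples); all data below is illustrative:

```python
import math

# Supervised: learn a mapping from labeled examples.
# Here, 1-nearest-neighbour simply copies the label of the closest example.
def predict_1nn(examples, query):
    """examples: list of ((features), label) pairs."""
    return min(examples, key=lambda ex: math.dist(ex[0], query))[1]

# Unsupervised: find structure over unlabeled inputs.
# One assignment step of k-means maps each point to its nearest centroid.
def assign_clusters(points, centroids):
    return [min(range(len(centroids)), key=lambda i: math.dist(p, centroids[i]))
            for p in points]

labeled = [((0.0, 0.0), "a"), ((0.1, 0.2), "a"), ((5.0, 5.0), "b")]
print(predict_1nn(labeled, (0.2, 0.1)))  # -> "a"

unlabeled = [(0.0, 0.0), (0.2, 0.1), (5.0, 5.1), (4.9, 5.0)]
print(assign_clusters(unlabeled, [(0.0, 0.0), (5.0, 5.0)]))  # -> [0, 0, 1, 1]
```

Semi-supervised learning would combine both kinds of data (e.g., clustering the unlabeled points and propagating labels from the labeled ones), while reinforcement learning replaces the fixed dataset with rewards observed while acting.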



The human brain is an extraordinary machine. Its ability to process information and adapt to circumstances by reprogramming itself is unparalleled, and it remains the best source of inspiration for recent developments in artificial intelligence. This has given rise to machine learning, intelligent systems, and robotics. Robots and AI might right now still seem the preserve of blockbuster science fiction movies and documentaries, but there is no doubt the world is changing. This chapter explores the origins, attitudes, and perceptions of robotics and the multiple types of robots that exist today. Perhaps most importantly, it focuses on ethical and societal concerns over the question: are we heading for a brave new world, or a science-fiction horror show in which AI and robots displace or, perhaps more worryingly, replace humans?

