Towards a national capability framework for Artificial Intelligence and Digital Medicine tools – A learning needs approach

2021 ◽  
pp. 100047
Author(s):  
Hatim Abdulhussein ◽  
Richard Turnbull ◽  
Lucy Dodkin ◽  
Patrick Mitchell
2021 ◽  
pp. 002224372110503
Author(s):  
Jun Hyung Kim ◽  
Minki Kim ◽  
Do Won Kwak ◽  
Sol Lee

Despite rising interest in artificial intelligence (AI) technology, research in services marketing has not evaluated its role in helping firms learn about customers’ needs and in increasing the adaptability of service employees. The authors therefore develop a conceptual framework and investigate whether, and to what extent, providing AI assistance to service employees improves service outcomes. A randomized controlled trial in the context of tutoring services shows that helping service employees (tutors) adapt to students’ learning needs by providing AI-generated diagnoses significantly improves service outcomes as measured by academic performance. However, the authors find that some tutors may not utilize AI assistance (i.e., AI aversion), and that factors associated with unforeseen barriers to usage (i.e., technology overload) can moderate its impact on outcomes. Interestingly, tutors who contributed significantly to the firm’s revenue relied heavily on AI assistance but unexpectedly benefited little from it in improving service outcomes. Given the wide applicability of AI assistance across services marketing contexts, the authors suggest that firms should consider the potential difficulties employees face in using the technology rather than simply encouraging them to adopt it as is.
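The moderation effect described in this abstract can be illustrated with a simple interaction regression. The following sketch is not the paper's estimation code: the variable names (ai_assist, tech_overload, score), the simulated data, and the use of statsmodels are assumptions made purely for illustration.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "ai_assist": rng.integers(0, 2, n),    # randomized treatment: tutor received AI diagnoses
    "tech_overload": rng.normal(size=n),   # stand-in moderator: perceived technology overload
})
# Simulated outcome: AI assistance helps on average, but less so under high overload.
df["score"] = (0.5 * df["ai_assist"]
               - 0.3 * df["ai_assist"] * df["tech_overload"]
               + rng.normal(size=n))

# "ai_assist * tech_overload" expands to both main effects plus their interaction.
fit = smf.ols("score ~ ai_assist * tech_overload", data=df).fit()
print(fit.params)  # the ai_assist:tech_overload coefficient captures the moderation

A negative interaction coefficient corresponds to the pattern reported above, where technology overload dampens the benefit of AI assistance.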


Author(s):  
Randi Williams ◽  
Hae Won Park ◽  
Lauren Oh ◽  
Cynthia Breazeal

PopBots is a hands-on toolkit and curriculum designed to help young children learn about artificial intelligence (AI) by building, programming, training, and interacting with a social robot. Today’s children encounter AI in the form of smart toys and computationally curated educational and entertainment content. However, children have not yet been empowered to understand or create with this technology. Existing computational thinking platforms have made ideas like sequencing and conditionals accessible to young learners. Going beyond this, we seek to make AI concepts accessible. We designed PopBots to address the specific learning needs of children ages four to seven by adapting constructionist ideas into an AI curriculum. This paper describes how we designed the curriculum and evaluated its effectiveness with 80 Pre-K and Kindergarten children. We found that using a social robot as both a learning companion and a programmable artifact was effective in helping young children grasp AI concepts. We also identified the teaching approaches that had the greatest impact on students’ learning. Based on these, we make recommendations for future modules and iterations of the PopBots platform.


2019 ◽  
Vol 9 (2) ◽  
pp. 110 ◽  
Author(s):  
Meng-Leong HOW ◽  
Wei Loong David HUNG

Artificial intelligence-enabled adaptive learning systems (AI-ALS) are increasingly being deployed in education to address the learning needs of students. However, educational stakeholders are required by policy-makers to conduct an independent evaluation of an AI-ALS, using a small sample size in a pilot study, before the AI-ALS can be approved for large-scale deployment. Beyond simply trusting the information provided by the AI-ALS supplier, educational stakeholders need to independently understand the pedagogical characteristics that underlie the AI-ALS. Laudable efforts have been made by researchers to develop frameworks for the evaluation of AI-ALS. Nevertheless, these highly technical approaches often require advanced mathematical knowledge or computer programming skills. The extant literature still lacks a more intuitive way for educational stakeholders (rather than computer scientists) to carry out an independent evaluation of an AI-ALS and to understand how it could provide opportunities to draw out students' problem-solving abilities so that they can successfully learn the subject matter. This paper proffers an approach in which educational stakeholders employ Bayesian networks to simulate predictive hypothetical scenarios with controllable parameters, to better inform them about the suitability of the AI-ALS for their students.
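As a rough illustration of the Bayesian-network approach proffered here, the sketch below builds a toy three-node network and queries it under a controllable scenario. The structure, the probability tables, and the use of the pgmpy library are assumptions for illustration; the paper does not prescribe a particular toolkit, and real conditional probability tables would be elicited or learned from pilot-study data.

# A minimal sketch (not the authors' actual model) of simulating "what-if"
# scenarios about an AI-ALS with a small Bayesian network.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Hypothetical structure: the amount of adaptive scaffolding the AI-ALS provides
# influences problem-solving practice, which in turn influences mastery.
model = BayesianNetwork([("Scaffolding", "ProblemSolving"),
                         ("ProblemSolving", "Mastery")])

# Illustrative probability tables (state 0 = low, state 1 = high).
cpd_scaffold = TabularCPD("Scaffolding", 2, [[0.5], [0.5]])
cpd_solving = TabularCPD("ProblemSolving", 2,
                         [[0.7, 0.3],   # P(low practice  | scaffolding low/high)
                          [0.3, 0.7]],  # P(high practice | scaffolding low/high)
                         evidence=["Scaffolding"], evidence_card=[2])
cpd_mastery = TabularCPD("Mastery", 2,
                         [[0.8, 0.2],
                          [0.2, 0.8]],
                         evidence=["ProblemSolving"], evidence_card=[2])
model.add_cpds(cpd_scaffold, cpd_solving, cpd_mastery)
assert model.check_model()

# Controllable parameter: set scaffolding to "high" and inspect predicted mastery.
inference = VariableElimination(model)
print(inference.query(["Mastery"], evidence={"Scaffolding": 1}))

Changing the evidence value plays the role of a controllable parameter: stakeholders can compare the predicted distribution over Mastery across hypothetical scenarios without needing to program the AI-ALS itself.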


This chapter looks across the landscape of learning in the current age of algorithms and so-called ‘artificial intelligence’, with a focus on issues raised by the concept of “the master algorithm” concerning learning models and the future of learning. Pedro Domingos identifies five “scientific” theories of learning algorithms and presents them sequentially, as open to improvement by the theorist (and by him alone). By contrast, in her conversational framework, Diana Laurillard presents four approaches to framing learning models. The authors prefer Laurillard's modelling but believe that a fifth dimension, rhizomatic learning, needs to be added to her framework in order to enable the learner to take the final decisions on what has been learned and on what they will do subsequently, and so to produce a learner-centric framework for learning and architectures of participation. They examine several histories of thinking about intelligence, as well as long-term views of technology, before briefly outlining a phenomenology of learning as a potential countervailing idea to AI in education.


Author(s):  
Diego Araiza-Garaygordobil ◽  
Antonio Jordán-Ríos ◽  
Carlos R. Sierra-Fernández ◽  
Luis E. Juárez-Orozco

2020 ◽  
Author(s):  
Giovanni Briganti ◽  
Olivier Le Moine

Artificial intelligence-powered medical technologies are rapidly evolving into applicable solutions for clinical practice. Deep learning algorithms can deal with increasing amounts of data provided by wearables, smartphones and other mobile monitoring sensors in different areas of medicine. Currently, only very specific settings in clinical practice benefit from the application of artificial intelligence, such as the detection of atrial fibrillation, epileptic seizures and hypoglycemia, or the diagnosis of disease based on histopathological examination or medical imaging. The implementation of augmented medicine is long awaited by patients because it allows for greater autonomy and more personalized treatment; however, it is met with resistance from physicians who were not prepared for such an evolution of clinical practice. This phenomenon also creates the need to validate these modern tools with traditional clinical trials, to debate the educational upgrade of the medical curriculum in light of digital medicine, and to consider the ethics of ongoing connected monitoring. The aim of this paper is to discuss recent scientific literature and to provide a perspective on the benefits, future opportunities and risks that established artificial intelligence applications in clinical practice pose for physicians, healthcare institutions, medical education and bioethics.
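As a hedged illustration of the kind of deep learning model referred to here (for example, atrial fibrillation detection from a wearable's single-lead signal), the sketch below defines a tiny one-dimensional convolutional classifier in PyTorch. The architecture, input length and class labels are assumptions for illustration only; clinically validated detectors are trained and evaluated on large annotated datasets.

# A minimal sketch, assuming PyTorch and a synthetic single-lead signal.
import torch
import torch.nn as nn

class RhythmClassifier(nn.Module):
    """Tiny 1D CNN mapping a fixed-length wearable signal to sinus rhythm vs. AF."""
    def __init__(self, signal_len=1000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
        )
        self.head = nn.Linear(32 * (signal_len // 16), 2)

    def forward(self, x):               # x: (batch, 1, signal_len)
        z = self.features(x)
        return self.head(z.flatten(1))  # logits for (sinus rhythm, atrial fibrillation)

model = RhythmClassifier()
fake_batch = torch.randn(8, 1, 1000)    # stand-in for 8 recorded signal strips
print(model(fake_batch).shape)          # torch.Size([8, 2])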


2018 ◽  
Vol 1 (1) ◽  
Author(s):  
Alexander L. Fogel ◽  
Joseph C. Kvedar

2020 ◽  
Vol 7 (1) ◽  
pp. 205395172093399
Author(s):  
Paul Prinsloo

Higher education institutions have access to higher volumes and a greater variety and granularity of student data, often in real-time, than ever before. As such, the collection, analysis and use of student data are increasingly crucial in operational and strategic planning, and in delivering appropriate and effective learning experiences to students. Student data – not only in what data is (not) collected, but also how the data is framed and used – has material and discursive effects, both permanent and fleeting. We have to critically engage claims that artificial intelligence and the ever expansive/expanding systems of algorithmic decision-making provide speedy, accessible, revealing, panoramic, prophetic and smart analyses of students' risks, potential and learning needs. We need to pry open the black boxes higher education institutions (and increasingly venture capital and learning management system providers) use to admit, steer, predict and prescribe students’ learning journeys.


2020 ◽  
Vol 3 (1) ◽  
Author(s):  
Sujay Kakarmath ◽  
Andre Esteva ◽  
Rima Arnaout ◽  
Hugh Harvey ◽  
Santosh Kumar ◽  
...  

Since its inception in 2017, npj Digital Medicine has attracted a disproportionate number of manuscripts reporting on uses of artificial intelligence. This field has matured rapidly in the past several years. There was initial fascination with the algorithms themselves (machine learning, deep learning, convolutional neural networks) and with their use to make predictions that often surpassed prevailing benchmarks. As the discipline has matured, individuals have called attention to aberrancies in the output of these algorithms. In particular, criticisms have been widely circulated that algorithmically developed models may have limited generalizability due to overfitting to the training data, and may systematically perpetuate various forms of bias inherent in the training data, including race, gender, age, and health state or fitness level (Challen et al. BMJ Qual. Saf. 28:231–237, 2019; O'Neil. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, Broadway Books, 2016). Given our interest in publishing the highest-quality papers and the growing volume of submissions using AI algorithms, we offer a list of criteria that authors should consider before submitting papers to npj Digital Medicine.
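Two of the criticisms summarized here, limited generalizability from overfitting and systematic subgroup bias, map onto checks that authors can run before submission. The sketch below, using scikit-learn on synthetic data with a hypothetical demographic flag, is an illustrative assumption rather than the journal's required procedure.

# A minimal sketch: held-out performance and per-subgroup performance checks.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
group = rng.integers(0, 2, size=2000)   # stand-in demographic attribute
y = (X[:, 0] + 0.5 * group + rng.normal(size=2000) > 0).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0)

model = LogisticRegression().fit(X_tr, y_tr)

# Generalizability: compare training vs. held-out discrimination.
print("train AUC:", roc_auc_score(y_tr, model.predict_proba(X_tr)[:, 1]))
print("test  AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))

# Bias check: report performance separately for each subgroup.
for g in (0, 1):
    mask = g_te == g
    auc = roc_auc_score(y_te[mask], model.predict_proba(X_te[mask])[:, 1])
    print(f"subgroup {g} AUC: {auc:.3f}")

A large gap between training and held-out AUC suggests overfitting, and a large gap between subgroup AUCs flags the kind of bias the editorial warns about.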

