The Challenges of Telemedicine in Rheumatology

2021 ◽  
Vol 8 ◽  
Author(s):  
Yujie Song ◽  
Laurène Bernard ◽  
Christian Jorgensen ◽  
Gilles Dusfour ◽  
Yves-Marie Pers

During the past 20 years, the development of telemedicine has accelerated due to the rapid advancement and implementation of increasingly sophisticated connected technologies. In rheumatology, e-health interventions for the diagnosis, monitoring and mentoring of rheumatic diseases take different forms: teleconsultation and telecommunications, mobile applications, mobile devices, digital therapy, and artificial intelligence or machine learning. Telemedicine offers several advantages, in particular by facilitating access to healthcare and providing personalized and continuous patient monitoring. However, some limitations remain to be addressed, such as data security, legal issues, reimbursement methods, accessibility, and the application of clinical recommendations in the development of these tools.

2020 ◽  
Vol 114 ◽  
pp. 242-245
Author(s):  
Jootaek Lee

The term Artificial Intelligence (AI) has changed since it was first coined by John McCarthy in 1956. AI, whose conceptual roots are sometimes traced to Kurt Gödel's unprovable computational statements of 1931, is now commonly discussed in terms of deep learning or machine learning. AI is defined as a computer machine with the ability to make predictions about the future and solve complex tasks using algorithms. AI algorithms are enhanced and become effective with big data capturing the present and the past, while still necessarily reflecting human biases in their models and equations. AI is also capable of making choices like humans, mirroring human reasoning. AI can help robots efficiently repeat the same labor-intensive procedures in factories and can analyze historical and present data efficiently through deep learning, natural language processing, and anomaly detection. Thus, AI covers a spectrum: augmented intelligence relating to prediction, autonomous intelligence relating to decision making, automated intelligence for labor robots, and assisted intelligence for data analysis.
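The notion of prediction from accumulated data can be illustrated with a minimal, hand-rolled sketch (all numbers invented): fit a least-squares line to past observations and extrapolate one step ahead.

```python
# Minimal illustration of "learning from data": fit a least-squares
# line to past observations and predict a future value. Pure Python.

def fit_line(xs, ys):
    """Return slope and intercept minimising squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical past data: yearly observations of some quantity.
years = [2016, 2017, 2018, 2019, 2020]
values = [10.0, 12.1, 13.9, 16.0, 18.1]

slope, intercept = fit_line(years, values)
prediction_2021 = slope * 2021 + intercept  # extrapolate one year ahead
```

Feeding the model more past observations refines the fitted coefficients, which is the sense in which "more data" sharpens the prediction.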


Author(s):  
Anil Babu Payedimarri ◽  
Diego Concina ◽  
Luigi Portinale ◽  
Massimo Canonico ◽  
Deborah Seys ◽  
...  

Artificial Intelligence (AI) and Machine Learning (ML) have expanded their utilization in different fields of medicine. During the SARS-CoV-2 outbreak, AI and ML were also applied to the evaluation and/or implementation of public health interventions aimed at flattening the epidemiological curve. This systematic review aims to evaluate the effectiveness of AI and ML when applied to public health interventions to contain the spread of SARS-CoV-2. Our findings showed that quarantine was the most effective strategy for containing COVID-19. Nationwide lockdown also showed a positive impact, whereas social distancing appeared effective only in combination with other interventions, including the closure of schools and commercial activities and the limitation of public transportation. Our findings also showed that all the interventions should be initiated early in the pandemic and continued for a sustained period. Despite the study limitations, we concluded that AI and ML could help policy makers define strategies for containing the COVID-19 pandemic.


2018 ◽  
Vol 14 (4) ◽  
pp. 734-747 ◽  
Author(s):  
Constance de Saint Laurent

There has been much hype, over the past few years, about the recent progress of artificial intelligence (AI), especially through machine learning. If one is to believe many of the headlines that have proliferated in the media, as well as in an increasing number of scientific publications, it would seem that AI is now capable of creating and learning in ways that are starting to resemble what humans can do, and that we should therefore start to hope – or fear – that the creation of fully cognisant machines might be something we will witness in our lifetime. However, many of these beliefs are based on deep misconceptions about what AI can do, and how. In this paper, I start with a brief introduction to the principles of AI, machine learning, and neural networks, primarily intended for psychologists and social scientists, who often have much to contribute to the debates surrounding AI but lack a clear understanding of what it can currently do and how it works. I then debunk four common myths associated with AI: 1) it can create, 2) it can learn, 3) it is neutral and objective, and 4) it can solve ethically and/or culturally sensitive problems. In a third and last section, I argue that these misconceptions represent four main dangers: 1) avoiding debate, 2) naturalising our biases, 3) deresponsibilising creators and users, and 4) missing out on some of the potential uses of machine learning. I finally conclude on the potential benefits of using machine learning in research, and thus on the need to defend machine learning without romanticising what it can actually do.


2015 ◽  
Vol 3 (2) ◽  
pp. 115-126 ◽  
Author(s):  
Naresh Babu Bynagari

Artificial Intelligence (AI) is one of the most promising and intriguing innovations of modernity. Its potential is virtually unlimited, from smart music selection in personal gadgets to intelligent analysis of big data and real-time fraud detection and prevention. At the core of the AI philosophy lies the assumption that once a computer system is provided with enough data, it can learn from that input; the more data is provided, the more sophisticated its learning ability becomes. This feature has acquired the name "machine learning" (ML). The opportunities opened up by ML are plentiful today, and one of them is the ability to set up an evolving security system that learns from past cyber-fraud experiences and develops ever more rigorous fraud detection mechanisms. Read on to learn more about ML, the types and magnitude of fraud evidenced in modern banking, e-commerce, and healthcare, and how ML has become an innovative, timely, and efficient fraud prevention technology.
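As a miniature sketch of what such a fraud-detection mechanism might look like (amounts and threshold invented, not taken from the article): flag any transaction whose amount lies far outside the historical distribution, a simple form of anomaly detection.

```python
# Toy fraud-detection sketch: flag transactions whose amount deviates
# far from the historical mean (z-score anomaly detection). All data
# here is invented for illustration.
import statistics

def flag_anomalies(history, candidates, threshold=3.0):
    """Return the candidate amounts lying more than `threshold`
    standard deviations from the historical mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return [x for x in candidates if abs(x - mean) / stdev > threshold]

# Past legitimate transaction amounts (in dollars).
history = [20, 35, 18, 40, 25, 30, 22, 28, 33, 27]
# New transactions to screen.
candidates = [29, 31, 950, 24]

suspicious = flag_anomalies(history, candidates)  # → [950]
```

The "evolving" aspect the abstract describes corresponds to appending each confirmed-legitimate transaction to `history`, so the baseline the detector learns from keeps growing.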


Behavior analysis is the study of a person's conduct in order to build up a profile of that person. It originated in psychology, where it was used to suggest and develop different types of application content for users, and was later adopted in information technology. Building applications around users' personal needs has become a new trend driven by artificial intelligence (AI): many applications, doing everything from anticipating purchase behavior to adjusting a home's thermostat to the occupant's preferred temperature at a specific time of day, rely on machine learning and AI technology. Machine learning is the technique of improving a rule's proficiency based on past experience. Using statistical theory, it builds a mathematical model whose real work is to draw inferences from the examples provided, and it uses computational methods to extract information directly from data.
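The thermostat example above can be sketched minimally (data and function names invented): learn the occupant's preferred temperature per time of day by averaging past manual settings, then predict from that profile.

```python
# Sketch of the thermostat example from the text: learn an occupant's
# preferred temperature for each hour of the day by averaging past
# manual settings. All data is invented for illustration.
from collections import defaultdict

def learn_preferences(log):
    """log: list of (hour, temperature) pairs from past behaviour.
    Returns a dict mapping hour -> average preferred temperature."""
    buckets = defaultdict(list)
    for hour, temp in log:
        buckets[hour].append(temp)
    return {hour: sum(temps) / len(temps) for hour, temps in buckets.items()}

# Observed behaviour: occupant lowers the heat at night, raises it at 7am.
log = [(7, 21.0), (7, 21.5), (7, 21.0), (23, 17.0), (23, 17.5)]
prefs = learn_preferences(log)

morning = prefs[7]   # average morning setting, ≈ 21.17
night = prefs[23]    # average night setting, 17.25
```

Each new manual adjustment appended to `log` shifts the learned profile, which is the "learning from past experience" the paragraph describes in its simplest form.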


Author(s):  
Melda Yucel ◽  
Gebrail Bekdaş ◽  
Sinan Melih Nigdeli

This chapter presents a summary review of the development of Artificial Intelligence (AI). Definitions of AI are given along with its basic features. The development process of AI and machine learning is presented. Developments in applications from the past to today are mentioned, and the use of AI in different categories is described. Prediction applications using artificial neural networks are given for engineering problems. Using AI methods to predict optimum results is the current trend, and it will become even more important in the future.
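As an illustrative toy of neural-network-style prediction (not taken from the chapter; data and hyperparameters invented): a single linear neuron trained by gradient descent to recover a simple input-output relation.

```python
# A minimal artificial-neural-network-style sketch: one linear neuron
# trained by gradient descent on squared error. An illustrative toy,
# not an engineering-grade predictor.

def train_neuron(samples, lr=0.01, epochs=1000):
    """samples: list of (x, y). Returns learned weight and bias."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = w * x + b
            error = pred - y
            # Stochastic gradient descent step on squared error.
            w -= lr * error * x
            b -= lr * error
    return w, b

# Toy training data following y = 2x + 1.
samples = [(0, 1), (1, 3), (2, 5), (3, 7)]
w, b = train_neuron(samples)
prediction = w * 4 + b  # should approach 9
```

Real artificial neural networks stack many such units with nonlinear activations, but the fit-by-repeated-correction loop is the same.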


2018 ◽  
Vol 14 (4) ◽  
pp. 568-607 ◽  
Author(s):  
Ulrich Schwalbe

This paper discusses whether self-learning price-setting algorithms can coordinate their pricing behavior to achieve a collusive outcome that maximizes the joint profits of the firms using them. Although legal scholars have generally assumed that algorithmic collusion is not only possible but also exceptionally easy, computer scientists examining cooperation between algorithms as well as economists investigating collusion in experimental oligopolies have countered that coordinated, tacitly collusive behavior is not as rapid, easy, or even inevitable as often suggested. Research in experimental economics has shown that the exchange of information is vital to collusion when more than two firms operate within a given market. Communication between algorithms is also a topic in research on artificial intelligence, in which some scholars have recently indicated that algorithms can learn to communicate, albeit in somewhat limited ways. Taken together, algorithmic collusion currently seems far more difficult to achieve than legal scholars have often assumed and is thus not a particularly relevant competitive concern at present. Moreover, there are several legal problems associated with algorithmic collusion, including questions of liability, of auditing and monitoring algorithms, and of enforcing competition law.
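A minimal simulation of the setting discussed (payoffs and parameters invented, not from the paper) illustrates why coordination is not automatic: two independent, stateless epsilon-greedy learners repeatedly setting prices in a prisoner's-dilemma-style stage game typically learn the competitive low price rather than the collusive high one.

```python
# Toy illustration: two independent epsilon-greedy bandit learners each
# repeatedly set a price ("low" or "high") in a stage game with
# invented, prisoner's-dilemma-style profits. Undercutting dominates,
# so stateless learners gravitate to the competitive (low, low) outcome
# rather than the collusive (high, high) one.
import random

PRICES = ["low", "high"]
# PROFIT[(my_price, rival_price)]: undercutting the rival pays best.
PROFIT = {("low", "low"): 4, ("low", "high"): 8,
          ("high", "low"): 2, ("high", "high"): 6}

def simulate(rounds=5000, eps=0.1, lr=0.1, seed=0):
    rng = random.Random(seed)
    q = [{p: 0.0 for p in PRICES} for _ in range(2)]  # one Q-table per firm
    for _ in range(rounds):
        # Each firm explores with probability eps, else exploits.
        picks = [rng.choice(PRICES) if rng.random() < eps
                 else max(q[i], key=q[i].get) for i in range(2)]
        for i in range(2):
            reward = PROFIT[(picks[i], picks[1 - i])]
            q[i][picks[i]] += lr * (reward - q[i][picks[i]])
    return q

q_tables = simulate()  # both firms end up valuing "low" above "high"
```

Richer learners that condition on the rival's past prices can sometimes sustain higher prices, which is exactly the contested question the paper surveys; this stateless sketch only shows the baseline case.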


2021 ◽  
pp. 249-257
Author(s):  
Наталия Дмитриевна Хрулёва

Mobile GIS applications are becoming more and more complex, as are the tasks they solve. A typical GIS application should include elements such as artificial intelligence, pattern recognition or machine learning, relational or non-relational databases, and spatial representation and reasoning. Companies such as Google and Apple are developing new technologies for mobile application development. For example, at WWDC2019 and WWDC2020 Apple introduced a new technology called SwiftUI, which aims to reduce the complexity of mobile application development and allows the integration of technologies such as MapKit to represent spatial information.
This paper presents a study of the advantages of using SwiftUI to integrate MapKit as the basis for spatial representation, facilitating the development of mobile GIS applications. Information technologies have a wide variety of applications in various fields of science; artificial intelligence and machine learning, for example, are beginning to be widely used in mobile applications. The purpose of this work is to investigate ways to develop mobile applications that can represent and compute information in accordance with the given requirements.


Thorax ◽  
2020 ◽  
Vol 75 (8) ◽  
pp. 695-701 ◽  
Author(s):  
Sherif Gonem ◽  
Wim Janssens ◽  
Nilakash Das ◽  
Marko Topalovic

The past 5 years have seen an explosion of interest in the use of artificial intelligence (AI) and machine learning techniques in medicine. This has been driven by the development of deep neural networks (DNNs)—complex networks residing in silico but loosely modelled on the human brain—that can process complex input data such as a chest radiograph image and output a classification such as ‘normal’ or ‘abnormal’. DNNs are ‘trained’ using large banks of images or other input data that have been assigned the correct labels. DNNs have shown the potential to equal or even surpass the accuracy of human experts in pattern recognition tasks such as interpreting medical images or biosignals. Within respiratory medicine, the main applications of AI and machine learning thus far have been the interpretation of thoracic imaging, lung pathology slides and physiological data such as pulmonary function tests. This article surveys progress in this area over the past 5 years, as well as highlighting the current limitations of AI and machine learning and the potential for future developments.
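The supervised training loop described here, labelled examples driving weight updates until the model assigns the correct labels, can be sketched with a single perceptron on invented two-feature data; real DNNs stack many such units over far richer inputs.

```python
# Schematic of supervised training as described above: a single
# perceptron learns to label toy two-feature "scans" as normal (0) or
# abnormal (1) from examples with known labels. Data is invented.

def train(samples, lr=0.1, epochs=50):
    """samples: list of ((f1, f2), label). Returns weights and bias."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (f1, f2), label in samples:
            pred = 1 if w[0] * f1 + w[1] * f2 + b > 0 else 0
            err = label - pred  # classic perceptron update rule
            w[0] += lr * err * f1
            w[1] += lr * err * f2
            b += lr * err
    return w, b

def classify(w, b, f1, f2):
    return "abnormal" if w[0] * f1 + w[1] * f2 + b > 0 else "normal"

# Labelled training bank: higher feature values mean abnormal.
data = [((0.1, 0.2), 0), ((0.2, 0.1), 0), ((0.9, 0.8), 1), ((0.8, 0.9), 1)]
w, b = train(data)
verdict = classify(w, b, 0.85, 0.9)  # → "abnormal"
```

The "training" the article describes is this same error-driven weight adjustment, repeated over millions of parameters and large banks of correctly labelled images or biosignals.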

