Improving Medical Data Annotation Including Humans in the Machine Learning Loop

2021, Vol 7 (1), pp. 39
Author(s): José Bobes-Bascarán, Eduardo Mosqueira-Rey, David Alonso-Ríos

At present, the great majority of Artificial Intelligence (AI) systems require the participation of humans in their development, tuning, and maintenance. In particular, Machine Learning (ML) systems could greatly benefit from human expertise and knowledge. Thus, there is increasing interest in how humans interact with those systems to obtain the best performance for both the AI system and the humans involved. Several approaches studied and proposed in the literature can be gathered under the umbrella term of Human-in-the-Loop Machine Learning. Applying those techniques in the health informatics environment could provide great value in prognosis and diagnosis tasks, contributing to better health services for cancer-related diseases.

2021, Vol 11 (1), pp. 32
Author(s): Oliwia Koteluk, Adrian Wartecki, Sylwia Mazurek, Iga Kołodziejczak, Andrzej Mackiewicz

With the increasing amount of medical data generated every day, there is a strong need for reliable, automated evaluation tools. With high hopes and expectations, machine learning has the potential to revolutionize many fields of medicine, helping to make faster and more accurate decisions and improving current standards of treatment. Today, machines can analyze, learn, communicate, and understand processed data, and they are increasingly used in health care. This review explains different models and the general process of machine learning and of training the algorithms. Furthermore, it summarizes the most useful machine learning applications and tools in different branches of medicine and health care (radiology, pathology, pharmacology, infectious diseases, personalized decision making, and many others). The review also addresses the future prospects and threats of applying artificial intelligence as an advanced, automated medicine tool.


2021, Vol 4
Author(s): Lindsay Wells, Tomasz Bednarz

Research into Explainable Artificial Intelligence (XAI) has been increasing in recent years as a response to the need for increased transparency and trust in AI. This is particularly important as AI is used in sensitive domains with societal, ethical, and safety implications. Work in XAI has primarily focused on Machine Learning (ML) for classification, decision, or action, with detailed systematic reviews already undertaken. This review explores current approaches and limitations for XAI in the area of Reinforcement Learning (RL). From 520 search results, 25 studies (including 5 snowball sampled) are reviewed, highlighting visualization, query-based explanations, policy summarization, human-in-the-loop collaboration, and verification as trends in this area. Limitations in the studies are presented, particularly a lack of user studies, the prevalence of toy examples, and difficulties in providing understandable explanations. Areas for future study are identified, including immersive visualization and symbolic representation.


2020, Vol 34 (2), pp. 143-164
Author(s): Tobias Baur, Alexander Heimerl, Florian Lingenfelser, Johannes Wagner, Michel F. Valstar, ...

Abstract In the following article, we introduce a novel workflow, which we subsume under the term “explainable cooperative machine learning”, and show its practical application in a data annotation and model training tool called NOVA. The main idea of our approach is to interactively incorporate the ‘human in the loop’ when training classification models from annotated data. In particular, NOVA offers a collaborative annotation backend where multiple annotators can combine their efforts. A key aspect is the ability to apply semi-supervised active learning techniques during the annotation process itself: data can be pre-labeled automatically, drastically accelerating annotation. Furthermore, the user interface implements recent eXplainable AI techniques to provide users with both a confidence value for the automatically predicted annotations and a visual explanation. We show in a use-case evaluation that our workflow is able to speed up the annotation process, and we further argue that the additional visual explanations help annotators understand the decision-making process, as well as the trustworthiness, of their trained machine learning models.
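The pre-labeling step described above can be illustrated with a minimal sketch, not taken from the NOVA tool itself: a classifier trained on the already-annotated pool pre-labels unlabeled items, auto-accepting only predictions whose confidence clears a threshold and routing the rest back to human annotators. The data, threshold, and `prelabel` helper are hypothetical stand-ins.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for annotated data: two well-separated clusters.
X_labeled = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
y_labeled = np.array([0] * 50 + [1] * 50)
X_unlabeled = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(5, 1, (20, 2))])

def prelabel(X_lab, y_lab, X_unlab, threshold=0.9):
    """Train on the labeled pool, then split unlabeled items into
    auto-accepted pre-labels (high confidence) and items routed
    back to human annotators (low confidence)."""
    clf = LogisticRegression().fit(X_lab, y_lab)
    proba = clf.predict_proba(X_unlab)
    confidence = proba.max(axis=1)   # model confidence shown to the annotator
    labels = proba.argmax(axis=1)    # predicted pre-label per item
    auto = confidence >= threshold   # accept these automatically
    return labels, confidence, auto

labels, confidence, auto = prelabel(X_labeled, y_labeled, X_unlabeled)
print(f"auto-accepted {auto.sum()} of {len(auto)} items")
```

Exposing the per-item `confidence` to annotators, alongside a visual explanation, is what the abstract's XAI component adds on top of plain pre-labeling.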


Author(s): Timo Minssen, Sara Gerke, Mateo Aboy, Nicholson Price, Glenn Cohen

Abstract Companies and healthcare providers are developing and implementing new applications of medical artificial intelligence, including the artificial intelligence sub-type of medical machine learning (MML). MML is based on the application of machine learning (ML) algorithms to automatically identify patterns and act on medical data to guide clinical decisions. MML poses challenges and raises important questions, including (1) How will regulators evaluate MML-based medical devices to ensure their safety and effectiveness? and (2) What additional MML considerations should be taken into account in the international context? To address these questions, we analyze the current regulatory approaches to MML in the USA and Europe. We then examine international perspectives and broader implications, discussing considerations such as data privacy, exportation, explanation, training set bias, contextual bias, and trade secrecy.


2021
Author(s): Sylvain Cussat-Blanc, Céline Castets-Renard, Paul Monsarrat

Machine Learning (ML), a branch of Artificial Intelligence, now competes with human experts in many specialized biomedical fields and will play an increasing role in precision medicine. As with any other technological advance in medicine, the keys to understanding it must be integrated into practitioner training. To respond to this challenge, this viewpoint discusses some necessary changes in the health studies curriculum to help practitioners interpret decisions made by a machine and question them in relation to the patient's medical context. The complexity of the technology and the inherent criticality of its use in medicine also necessitate a new medical profession. To achieve this objective, this viewpoint proposes a new kind of medical practitioner, with skills in both medicine and data science: the Doctor in Medical Data Sciences.


2019, Vol 28 (2), pp. 121-134
Author(s): Andreas Holzinger, Markus Plass, Katharina Holzinger, Gloria Cerasela Crișan, Camelia-M. Pintea, ...

The ultimate goal of the Machine Learning (ML) community is to develop algorithms that can automatically learn from data, extract knowledge, and make decisions without any human intervention. Specifically, automatic Machine Learning (aML) approaches show impressive success, e.g. in speech/image recognition or in autonomous driving and the smart car industry. Recent results even demonstrate, intriguingly, that deep learning applied to the automatic classification of skin lesions is on par with the performance of dermatologists and outperforms average human efficiency. Whereas human perception is inherently limited to 3D environments, such approaches can discover patterns, e.g. that two objects are similar, in arbitrarily high-dimensional spaces, which no human is able to do. Humans can deal simultaneously only with limited amounts of data, whilst “big data” is not only beneficial but necessary for aML. In health informatics, however, data sets are often small, and aML approaches frequently suffer from insufficient training samples. Many problems are also computationally hard, e.g. subspace clustering, k-anonymization, or protein folding. Here, interactive machine learning (iML) can be used successfully, as a human-in-the-loop helps to reduce a huge search space through the heuristic selection of suitable samples. This can reduce the complexity of NP-hard problems through the knowledge brought in by a human agent involved in the learning algorithm. A further strong motivation for iML is that standard black-box approaches lack transparency and hence do not foster trust in, and acceptance of, ML among end-users. Most of all, rising legal and privacy requirements, e.g. the European General Data Protection Regulation (GDPR), make black-box approaches difficult to use, because they often cannot explain why a decision has been made, e.g. why two objects are similar. All these reasons motivate the idea of opening the black box into a glass box.
In this paper, we present experiments to demonstrate the effectiveness of the iML human-in-the-loop model, in particular when using a glass-box instead of a black-box model, thus enabling a human to interact directly with a learning algorithm. We selected the Ant Colony System (ACS) algorithm and applied it to the Traveling Salesman Problem (TSP). The TSP is a good example because it is highly relevant to health informatics, for instance in the protein folding problem, and is thus of enormous importance for fostering cancer research. Finally, fundamental ML research may also benefit from studies of learning from observation, i.e. of how humans extract so much from so little data.
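The glass-box interaction described above can be sketched as follows. This is a simplified ant-colony optimizer (plain Ant System rather than the authors' full ACS, which also uses a pseudo-random proportional rule and local pheromone updates), with a `human_hook` callback where a human can inspect and adjust the pheromone matrix between iterations. The city coordinates and parameters are made-up illustration data.

```python
import math
import random

# Small symmetric TSP instance: hypothetical city coordinates.
cities = [(0, 0), (1, 5), (5, 2), (6, 6), (8, 3), (2, 8)]
n = len(cities)
dist = [[math.dist(cities[i], cities[j]) or 1e-9 for j in range(n)]
        for i in range(n)]

def tour_length(tour):
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def ant_colony_tsp(iterations=50, ants=8, alpha=1.0, beta=2.0, rho=0.5,
                   human_hook=None, seed=0):
    """Basic ant-colony optimization for the TSP. After each iteration,
    `human_hook(tau, best_tour)` lets a human-in-the-loop adjust the
    pheromone matrix directly -- the glass-box interaction."""
    rnd = random.Random(seed)
    tau = [[1.0] * n for _ in range(n)]          # pheromone levels
    best_tour, best_len = None, float("inf")
    for _ in range(iterations):
        for _ in range(ants):
            tour = [rnd.randrange(n)]
            while len(tour) < n:                 # build a tour city by city
                i = tour[-1]
                cand = [j for j in range(n) if j not in tour]
                w = [tau[i][j] ** alpha * (1 / dist[i][j]) ** beta
                     for j in cand]
                tour.append(rnd.choices(cand, weights=w)[0])
            length = tour_length(tour)
            if length < best_len:
                best_tour, best_len = tour, length
        for i in range(n):                       # pheromone evaporation
            for j in range(n):
                tau[i][j] *= (1 - rho)
        for i in range(n):                       # reinforce best tour so far
            a, b = best_tour[i], best_tour[(i + 1) % n]
            tau[a][b] += 1 / best_len
            tau[b][a] += 1 / best_len
        if human_hook:
            human_hook(tau, best_tour)           # human steers the search
    return best_tour, best_len

tour, length = ant_colony_tsp()
print(tour, round(length, 2))
```

A human who recognizes that two cities should be visited consecutively could, inside the hook, raise `tau` on that edge, heuristically pruning the search space as the abstract describes.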


2021, Vol 11
Author(s): Congxin Dai, Bowen Sun, Renzhi Wang, Jun Kang

Pituitary adenomas (PAs) are a group of tumors with complex and heterogeneous clinical manifestations. Early accurate diagnosis, individualized management, and precise prediction of the treatment response and prognosis of patients with PAs are urgently needed. Artificial intelligence (AI) and machine learning (ML) have garnered increasing attention for their ability to quantitatively analyze complex medical data and improve individualized care for patients with PAs. We therefore critically examined the current use of AI and ML in the management of patients with PAs, and we propose improvements for their future use. AI and ML can automatically extract many quantitative features from massive medical data, and related diagnosis and prediction models can be developed through quantitative analysis. Previous studies have suggested that AI and ML have wide applications in early accurate diagnosis; individualized treatment; predicting the response to treatments, including surgery, medications, and radiotherapy; and predicting the outcomes of patients with PAs. In addition, AI and ML based on facial imaging, pathological pictures, and surgical microscopic video have also been reported to be useful in assisting the management of patients with PAs. In conclusion, current AI and ML models have the potential to assist doctors and patients in making crucial surgical decisions by providing an accurate diagnosis, the likely response to treatment, and the prognosis of PAs. These models can improve the quality and safety of medical services for patients with PAs and reduce the complication rates of neurosurgery. Further work is needed to obtain more reliable algorithms with high accuracy, sensitivity, and specificity for the management of PA patients.


2021
Author(s): Lazaros Toumanidis, Panagiotis Kasnesis, Christos Chatzigeorgiou, Michail Feidakis, Charalampos Patrikakis

A widespread practice in machine learning solutions is the continuous use of human intelligence to increase their quality and efficiency. A common problem in such solutions is the requirement for a large amount of labeled data. In this paper, we present a practical implementation of the human-in-the-loop computing practice, which combines active and transfer learning, for sophisticated data sampling and weight initialization respectively, with a cross-platform mobile application for crowdsourcing data annotation tasks. We study the use of the proposed framework in a post-event building reconnaissance scenario, where we utilized an existing pre-trained computer vision model, an image binary classification solution built on top of it, and max-entropy and random sampling as uncertainty sampling methods for the active learning step. Multiple annotations, with majority voting as quality assurance, are required before new human-annotated images are added to the training set and the model is retrained. We provide the results and discuss our next steps.
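The two mechanisms this abstract names, max-entropy uncertainty sampling and majority-vote quality assurance, can be sketched in a few lines. This is not the authors' implementation; the helper names, the binary-classifier outputs, and the example labels are hypothetical.

```python
import math
from collections import Counter

def entropy(probs):
    """Shannon entropy of a predicted class distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def max_entropy_sample(predictions, k):
    """Pick the k unlabeled items the model is least certain about,
    i.e. those with the highest predictive entropy."""
    ranked = sorted(range(len(predictions)),
                    key=lambda i: entropy(predictions[i]), reverse=True)
    return ranked[:k]

def majority_vote(annotations):
    """Resolve multiple crowd labels for one item; None on a tie,
    so tied items stay in the annotation queue."""
    counts = Counter(annotations).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return None
    return counts[0][0]

# Hypothetical binary-classifier outputs for four unlabeled images.
preds = [(0.99, 0.01), (0.55, 0.45), (0.80, 0.20), (0.50, 0.50)]
print(max_entropy_sample(preds, 2))                    # → [3, 1]
print(majority_vote(["damaged", "damaged", "intact"])) # → damaged
```

Items selected by `max_entropy_sample` would be pushed to the mobile annotation app; only labels surviving `majority_vote` would enter the training set for retraining.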

