Machine learning: applications of artificial intelligence to imaging and diagnosis

2018 · Vol 11 (1) · pp. 111-118
Author(s): James A. Nichols, Hsien W. Herbert Chan, Matthew A. B. Baker
2021 · Vol 11 (1) · pp. 32
Author(s): Oliwia Koteluk, Adrian Wartecki, Sylwia Mazurek, Iga Kołodziejczak, Andrzej Mackiewicz

With the amount of medical data generated every day steadily increasing, there is a strong need for reliable, automated evaluation tools. Machine learning carries high hopes and expectations: it has the potential to revolutionize many fields of medicine, helping practitioners make faster and more accurate decisions and improving current standards of treatment. Today, machines can analyze, learn from, communicate, and understand processed data, and they are increasingly used in health care. This review explains different models and the general process of machine learning and of training the algorithms. Furthermore, it summarizes the most useful machine learning applications and tools in different branches of medicine and health care (radiology, pathology, pharmacology, infectious diseases, personalized decision making, and many others). The review also addresses the future prospects and threats of applying artificial intelligence as an advanced, automated tool in medicine.
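As an illustration of the general supervised training process such a review describes, the sketch below fits a classifier on labeled records and evaluates it on held-out data. This is a minimal sketch only: the scikit-learn calls are standard, but the dataset is synthetic and the "patient" features are hypothetical, not drawn from any of the studies summarized here.

    # Minimal sketch of the supervised training loop: fit a model on
    # labeled records, then estimate generalization on held-out data.
    # Data and features are synthetic illustrations (an assumption).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)

    # Hypothetical tabular dataset: 1000 "patients", 5 numeric measurements,
    # binary outcome loosely correlated with the first two features.
    X = rng.normal(size=(1000, 5))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

    # Split so evaluation happens on records the model never saw in training.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0
    )

    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    # AUC on held-out data estimates performance on future, unseen patients.
    probs = model.predict_proba(X_test)[:, 1]
    print(f"Held-out ROC AUC: {roc_auc_score(y_test, probs):.3f}")

The held-out split is the essential step: a score computed on training data alone would overstate how well the model generalizes to new cases.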


2021 · Vol 89 · pp. 177-198
Author(s): Quinlan D. Buchlak, Nazanin Esmaili, Jean-Christophe Leveque, Christine Bennett, Farrokh Farrokhi, ...

2018 · Vol 4 (5) · pp. 443-463
Author(s): Jim Shook, Robyn Smith, Alex Antonio

Businesses and consumers increasingly use artificial intelligence (“AI”), and specifically machine learning (“ML”) applications, in their daily work. ML is often used as a tool to help people perform their jobs more efficiently, but it is increasingly becoming a technology that may eventually replace humans in performing certain functions. An AI recently beat humans in a reading comprehension test, and there is an ongoing race to replace human drivers with self-driving cars and trucks. Tomorrow holds the potential for much more, as AI is even learning to build its own AI.

As the use of AI technologies continues to expand, and especially as machines begin to act more autonomously with less human intervention, important questions arise about how we can best integrate this new technology into our society, particularly within our legal and compliance frameworks. These questions differ from those we have already addressed with other technologies, because AI is different. Most previous technologies functioned as a tool operated by a person, and for legal purposes we could usually hold that person responsible for actions that resulted from using the tool. For example, an employee who used a computer to send a discriminatory or defamatory email could not have done so without the computer, but the employee would still be held responsible for creating the email.

While AI can function as merely a tool, it can also be designed to act after making its own decisions, and in the future it will act even more autonomously. As AI becomes more autonomous, it will become harder to determine who, or what, is making decisions and taking actions, and to establish the basis of, and responsibility for, those actions. These are the challenges that must be overcome to ensure AI’s integration for legal and compliance purposes.


As artificial intelligence penetrates all aspects of human life, more and more questions about ethical practice and fair use arise, which has motivated the research community to look inward and develop methods to interpret these artificial intelligence/machine learning models. Interpretability can not only help answer these ethical questions but also provide insight into how these machine learning models work, which is crucial for building trust and understanding how a model makes its decisions. Furthermore, in many machine learning applications, interpretability is the primary value they offer. In practice, however, many developers select models based on accuracy alone and disregard the level of interpretability, which can be problematic because the predictions of many high-accuracy models are not easily explainable. In this paper, we introduce the concepts of machine learning model interpretability and interpretable machine learning, along with the methods used for interpretation and explanation.
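As a concrete example of the kind of model-agnostic interpretation method surveyed in such work, the sketch below computes permutation feature importance for a black-box classifier: each feature is shuffled on held-out data, and the resulting drop in accuracy indicates how strongly the model relies on that feature. This is a minimal sketch using standard scikit-learn utilities on synthetic data; it is illustrative, not the paper's own method.

    # Permutation importance: a model-agnostic way to inspect which inputs
    # a high-accuracy, hard-to-interpret model actually relies on.
    # Synthetic data; purely illustrative (an assumption, not the paper's code).
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = make_classification(
        n_samples=800, n_features=6, n_informative=3, random_state=0
    )
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # A high-accuracy but not directly interpretable model.
    model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

    # Shuffle each feature 20 times; record the mean drop in test accuracy.
    result = permutation_importance(
        model, X_test, y_test, n_repeats=20, random_state=0
    )
    for i, (mean, std) in enumerate(
        zip(result.importances_mean, result.importances_std)
    ):
        print(f"feature {i}: importance {mean:.3f} +/- {std:.3f}")

A feature whose shuffling barely changes accuracy contributes little to the model's predictions, which is exactly the kind of insight an accuracy score alone cannot provide.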


Author(s): Daniel Hannon, Esa Rantanen, Ben Sawyer, Ashley Hughes, Katherine Darveau, ...

The continued advances in artificial intelligence and automation through machine learning applications, under the heading of data science, give the educator community reason for pause as we consider how to position future human factors engineers to contribute meaningfully to these projects. Do the lessons we learned, and now teach, regarding automation based on previous generations of technology still apply? What level of data science and machine learning expertise does a human factors engineer need in order to play a relevant role in the design of future automation? How do we integrate these topics into a field that often has not emphasized quantitative skills? This panel discussion brings together human factors engineers and educators at different stages of their careers to consider how curricula are being adapted to include data science and machine learning, and what the future of human factors education may look like in the coming years.

