Building tools for machine learning and artificial intelligence in cancer research: best practices and a case study with the PathML toolkit for computational pathology

2021 ◽  
pp. molcanres.0665.2021
Author(s):  
Jacob Rosenthal ◽  
Ryan Carelli ◽  
Mohamed Omar ◽  
David Brundage ◽  
Ella Halbert ◽  
...  
2019 ◽  
Vol 76 (6) ◽  
pp. 1681-1690 ◽  
Author(s):  
Alexander Winkler-Schwartz ◽  
Vincent Bissonnette ◽  
Nykan Mirchi ◽  
Nirros Ponnudurai ◽  
Recai Yilmaz ◽  
...  

First Monday ◽  
2019 ◽  
Author(s):  
Niel Chah

Interest in deep learning, machine learning, and artificial intelligence from industry and the general public has recently reached a fever pitch. However, these terms are frequently misused, confused, and conflated. This paper serves as a non-technical guide for those seeking a high-level understanding of these increasingly influential notions, briefly exploring the historical context of deep learning, its public presence, and growing concerns over the limitations of these techniques. As a first step, artificial intelligence and machine learning are defined. Next, an overview of the historical background of deep learning reveals its wide scope and deep roots. A case study of a major deep learning implementation is then presented to analyze public perceptions shaped by technology-focused companies. Finally, a review of deep learning's limitations illustrates systemic vulnerabilities and a growing sense of concern over these systems.


2020 ◽  
Vol 2 (1) ◽  
pp. 023-040
Author(s):  
Shi-Ming Huang ◽  
Chang-ping Chen ◽  
Tzu-ching Wong

Artificial intelligence is an important emerging technology in the accounting industry. Fear and hype about artificial intelligence and its impact on accounting and auditing jobs pervade both professions, making it important to develop AI competency in accountants and auditors. This paper presents a teaching case that a professor or lecturer can use to teach machine learning to accounting students. The case is based on openly available data from the China Stock Market & Accounting Research (CSMAR) database and aims to teach students how to predict the future audit report type of a China ST-listed company. Through the case, students can learn skills related to computer-assisted auditing tools (such as ACL) and machine learning, and develop the confidence to apply artificial intelligence in their education and future work.
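The prediction task described in this teaching case can be sketched as a small supervised classifier. This is a minimal illustrative example, not the case's actual materials: the feature names (leverage, ROA, current ratio) and values are hypothetical stand-ins for the CSMAR data the case uses.

```python
from sklearn.tree import DecisionTreeClassifier

# Hypothetical firm-year features: [leverage, ROA, current ratio].
# Labels: 0 = standard unqualified audit opinion, 1 = modified opinion.
# Values are illustrative only; the case itself draws on CSMAR data.
X = [
    [0.40, 0.08, 1.8], [0.50, 0.06, 1.5], [0.30, 0.10, 2.1],
    [0.90, -0.05, 0.7], [0.80, -0.02, 0.9], [0.95, -0.08, 0.6],
]
y = [0, 0, 0, 1, 1, 1]

# Fit an interpretable decision tree, then score a new, distressed firm profile.
clf = DecisionTreeClassifier(random_state=0).fit(X, y)
pred = clf.predict([[0.85, -0.04, 0.8]])  # high leverage, negative ROA
```

A decision tree is a natural classroom choice here because its learned splits (e.g. on leverage) can be read off and discussed, much like an auditor's rule of thumb.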


2021 ◽  
Author(s):  
Bongs Lainjo

Abstract Background: Information technology continues to shape contemporary thematic trends. Advances in communication have affected almost every theme, from education and engineering to healthcare and many other aspects of daily life. Method: This paper reviews the dynamics of thematic IoT platforms. A select number of themes are analyzed in depth, with emphasis on data mining (DM), personalized healthcare (PHC), and the thematic trends of a subjectively identified set of IoT-related publications over four years. The number of IoT-related publications is used as a proxy for the number of apps. DM remains the trailblazer, a crosscutting theme that drives artificial intelligence (AI), machine learning (ML), and data transformation. A case study in PHC illustrates the importance, complexity, productivity optimization, and nuances that contribute to a successful IoT platform. Of the initial 99 IoT themes, 18 are analyzed in depth using publication counts to demonstrate the different thematic dynamics, including the subtleties that drive escalating IoT publication themes. Results: Across the 99 themes, the annual median number of IoT-related publications rose steadily: 5510 in 2016, 8930 in 2017, 11700 in 2018, and 14800 in 2019, indicating an upbeat prognosis for IoT dynamics. Conclusion: The vulnerabilities that accompany successful implementation of IoT systems are highlighted, along with the successes achieved by institutions promoting IoT-related systems, as in the case study. Security remains an issue of significant importance.


10.2196/14401 ◽  
2019 ◽  
Vol 7 (4) ◽  
pp. e14401 ◽  
Author(s):  
Bach Xuan Tran ◽  
Carl A Latkin ◽  
Noha Sharafeldin ◽  
Katherina Nguyen ◽  
Giang Thu Vu ◽  
...  

Background Artificial intelligence (AI)–based therapeutics, devices, and systems are vital innovations in cancer control; in particular, they enable diagnosis, screening, precise estimation of survival, informed therapy selection, and timely scale-up of treatment services. Objective The aim of this study was to analyze the global trends, patterns, and development of interdisciplinary landscapes in AI and cancer research. Methods An exploratory factor analysis was conducted to identify research domains emerging from abstract contents. The Jaccard similarity index was used to identify the most frequently co-occurring terms, and Latent Dirichlet Allocation was used to classify papers into topics. Results From 1991 to 2018, the number of studies examining the application of AI in cancer care grew to 3555 papers covering therapeutics, capacities, and factors associated with outcomes. The topics with the highest publication volume were (1) machine learning, (2) comparative effectiveness evaluation of AI-assisted medical therapies, and (3) AI-based prediction. Notably, this classification revealed topics examining the incremental effectiveness of AI applications and the quality of life and functioning of patients receiving these innovations. The growing research productivity and expansion of multidisciplinary approaches are largely driven by machine learning, artificial neural networks, and AI across clinical practices. Conclusions The research landscape shows that the development of AI in cancer care focuses not only on improving prediction in cancer screening and AI-assisted therapeutics but also on corresponding areas such as precision and personalized medicine and patient-reported outcomes.
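The two methods named in the abstract, Jaccard similarity over term sets and Latent Dirichlet Allocation over abstracts, can be sketched in a few lines. This is a toy reconstruction under assumptions: the four mini-abstracts below are invented, and the study's actual preprocessing and parameters are not described here.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Invented mini-abstracts standing in for the 3555 papers in the study.
abstracts = [
    "machine learning model predicts cancer survival",
    "deep learning improves tumor screening accuracy",
    "randomized trial compares therapy effectiveness",
    "clinical trial evaluates treatment outcomes",
]

def jaccard(a, b):
    """Jaccard similarity of two documents' term sets: |A ∩ B| / |A ∪ B|."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb)

sim = jaccard(abstracts[0], abstracts[1])  # co-occurrence of shared terms

# Classify papers into latent topics with LDA on a bag-of-words matrix.
X = CountVectorizer().fit_transform(abstracts)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topic_weights = lda.fit_transform(X)   # one topic distribution per paper
labels = topic_weights.argmax(axis=1)  # dominant topic per paper
```

Each row of `topic_weights` sums to 1, so the dominant-topic assignment is simply the argmax per paper.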


2021 ◽  
Author(s):  
Andrei Popa ◽  
Ben Amaba ◽  
Jeff Daniels

Abstract A practical framework that outlines the critical steps of a successful process using data, machine learning (ML), and artificial intelligence (AI) is presented in this study, along with a practical case study demonstrating the process. The use of artificial intelligence and machine learning has not only enhanced but also sped up problem-solving in many domains, including the oil and gas industry. Moreover, these technologies are revolutionizing all key aspects of engineering, including framing approaches, techniques, and outcomes. The proposed framework includes key components to ensure the integrity, quality, and accuracy of data, and governance centered on principles such as responsibility, equitability, and reliability. Industry documentation shows that technology coupled with process advances can improve productivity by 20%. A clear work-breakdown structure (WBS) for creating value with an engineering framework yields measurable outcomes. AI and ML technologies enable the use of large amounts of information, combining static and dynamic data, observations, historical events, and behaviors. The Job Task Analysis (JTA) model is a proven framework for managing processes, people, and platforms. JTA is a modern, data-focused approach that prioritizes, in order: problem framing, analytics framing, data, methodology, model building, deployment, and lifecycle management. The case study exemplifies how the JTA model optimizes an oilfield production plant, similar to a manufacturing facility. A data-driven approach was employed to analyze and evaluate the impact on production fluids during planned or unplanned facility system disruptions. The workflows include data analytics tools such as ML/AI pattern recognition and clustering for prompt event mitigation and optimization. The paper demonstrates how an integrated framework leads to significant business value.
The study integrates surface and subsurface information to characterize and understand the production impact of planned and unplanned plant events. The findings led to the design of a relief system to divert back pressure during plant shutdown. The study avoided the cost of a new plant, saving millions of dollars, along with environmental impact, safety risks, and unnecessary operating and maintenance costs. Moreover, it created tens of millions of dollars of value per year by avoiding production losses from plant upsets and shutdowns. The study itself cost little to perform: about two months of part-time effort by a team of five engineers and data scientists. The work provided critical steps in creating a trusted, explainable model. The methodology was implemented using existing data and tools; it was the process and engineering knowledge that led to the successful outcome. A systematic WBS has become vital in data analytics projects that use AI and ML technologies. An effective governance system yields a 25% productivity improvement and a 70% capital improvement, while poor requirements can consume more than 40% of a development budget. The process, models, and tools should be used on engineering projects where data and physics are present. The proposed framework demonstrates the business impact and value created by integrating models, data, AI, and ML for modeling and optimization. It reflects the collective knowledge and perspectives of diverse professionals from IBM, Lockheed Martin, and Chevron, who joined forces to document a standard framework for achieving success in data analytics/AI projects.
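The clustering step mentioned in the workflow, separating normal operation from upset events in plant data, can be sketched as follows. This is a minimal illustrative example under assumptions: the sensor channels (pressure, flow rate) and their values are hypothetical, not from the Chevron case study.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical hourly plant readings: columns are (pressure, flow rate).
# Values are synthetic; real workflows would use historian/SCADA data.
rng = np.random.default_rng(0)
normal = rng.normal([50.0, 200.0], [2.0, 5.0], size=(40, 2))  # steady state
upset = rng.normal([80.0, 120.0], [2.0, 5.0], size=(10, 2))   # upset/shutdown
readings = np.vstack([normal, upset])

# Two clusters separate the steady-state regime from the upset regime,
# letting an engineer flag upset-cluster samples for prompt mitigation.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(readings)
labels = km.labels_
```

In practice the cluster assigned to newly arriving readings would drive the event-mitigation logic; here the two operating regimes are well separated, so the assignment is unambiguous.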


2020 ◽  
Author(s):  
Vladimir Makarov ◽  
Terry Stouch ◽  
Brandon Allgood ◽  
Christopher Willis ◽  
Nick Lynch

We describe 11 best practices for the successful use of artificial intelligence and machine learning in pharmaceutical and biotechnology research, at the data, technology, and organizational management levels.


Amicus Curiae ◽  
2020 ◽  
Vol 1 (3) ◽  
pp. 338-360
Author(s):  
Jamie Grace ◽  
Roxanne Bamford

Policymaking is increasingly being informed by ‘big data’ technologies of analytics, machine learning and artificial intelligence (AI). John Rawls used particular principles of reasoning in his 1971 book, A Theory of Justice, which might help explore known problems of data bias, unfairness, accountability and privacy, in relation to applications of machine learning and AI in government. This paper will investigate how the current assortment of UK governmental policy and regulatory developments around AI in the public sector could be said to meet, or not meet, these Rawlsian principles, and what we might do better by incorporating them when we respond legislatively to this ongoing challenge. This paper uses a case study of data analytics and machine-learning regulation as the central means of this exploration of Rawlsian thinking in relation to the redevelopment of algorithmic governance.

