Special Issue on Digital Pathology, Tissue Image Analysis, Artificial Intelligence, and Machine Learning: Approximation of the Effect of Novel Technologies on Toxicologic Pathology

2021 ◽  
pp. 019262332199375
Author(s):  
Famke Aeffner ◽  
Tobias Sing ◽  
Oliver C. Turner

For decades, it has been postulated that digital pathology is the future. By now it is safe to say that we are living that future. Digital pathology has expanded into all aspects of pathology, including human diagnostic pathology, veterinary diagnostics, research, drug development, regulatory toxicologic pathology primary reads, and peer review. Digital tissue image analysis has enabled users to extract quantitative and complex data from digitized whole-slide images. The following editorial provides an overview of the content of this special issue of Toxicologic Pathology to highlight the range of key topics that are included in this compilation. In addition, the editors provide a commentary on important current aspects to consider in this space, such as accessibility of publication content to the machine learning-novice pathologist, the importance of adequate test set selection, and allowing for data reproducibility.

Author(s):  
Oleksandr Dudin ◽  
Ozar Mintser ◽  
Oksana Sulaieva ◽  
...  

Introduction. Over the past few decades, advances in algorithm development, the availability of computing power, and the ability to manage large data sets have driven the adoption of machine learning methods across many fields. Among them, deep learning occupies a special place: it is used in many areas of health care and is an integral part of, and a prerequisite for, the development of digital pathology. Objectives. The purpose of the review was to gather data on existing image analysis technologies and machine learning tools developed for whole-slide digital images in pathology. Methods. An analysis of the literature on machine learning methods used in pathology, the steps of automated image analysis, the types of neural networks, and their applications and capabilities in digital pathology was performed. Results. To date, a wide range of deep learning strategies has been developed; they are actively used in digital pathology and have demonstrated excellent diagnostic accuracy. In addition to diagnostic solutions, the integration of artificial intelligence into pathomorphological laboratory practice provides new tools for assessing prognosis and predicting sensitivity to different treatments. Conclusions. The synergy of artificial intelligence and digital pathology is a key tool for improving the accuracy of diagnostics, prognostication, and the facilitation of personalized medicine.


2021 ◽  
pp. 030098582110404
Author(s):  
Aleksandra Zuraw ◽  
Famke Aeffner

With whole-slide imaging commercially available for over 2 decades, digital pathology has become a constantly expanding aspect of the pathology profession that will continue to significantly impact how pathologists practice their craft. While some aspects, such as whole-slide imaging for archiving, consulting, and teaching, have gained broader acceptance, other facets such as quantitative tissue image analysis and artificial intelligence–based assessments are still met with some reservations. While most vendors in this space have focused on diagnostic applications, that is, viewing one or a few slides at a time, some are developing solutions tailored more specifically to the various aspects of veterinary pathology, including updated diagnostic, discovery, and research applications. This has especially advanced the use of digital pathology in toxicologic pathology and drug development, for primary reads as well as peer reviews. It is crucial that pathologists gain a deeper understanding of digital pathology and tissue image analysis technology and their applications in order to fully use these tools in a way that enhances and improves the pathologist’s assessment as well as work environment. This review focuses on an updated introduction to the basics of digital pathology and image analysis and introduces emerging topics around artificial intelligence and machine learning.


Author(s):  
Byron Smith ◽  
Meyke Hermsen ◽  
Elizabeth Lesser ◽  
Deepak Ravichandar ◽  
Walter Kremers

Abstract Deep learning has pushed the scope of digital pathology beyond simple digitization and telemedicine. The incorporation of these algorithms in routine workflow is on the horizon and may be a disruptive technology, reducing processing time and increasing detection of anomalies. While the newest computational methods enjoy much of the press, incorporating deep learning into standard laboratory workflow requires many more steps than simply training and testing a model. Image analysis using deep learning methods often requires substantial pre- and post-processing in order to improve interpretation and prediction. As with any data processing pipeline, images must be prepared for modeling, and the resultant predictions need further processing for interpretation. Examples include artifact detection, color normalization, image subsampling or tiling, removal of errant predictions, etc. Once processed, predictions are complicated by image file size – typically several gigabytes when unpacked. This forces images to be tiled, meaning that a series of subsamples from the whole-slide image (WSI) is used in modeling. Herein, we review many of these methods as they pertain to the analysis of biopsy slides and discuss the multitude of unique issues that are part of the analysis of very large images.
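The tiling step described in this abstract can be illustrated with a minimal sketch. The function below is not from the paper; it is a generic example, with arbitrary demonstration thresholds, of splitting an RGB array (standing in for a decoded WSI region) into fixed-size tiles and discarding tiles that are mostly background (near-white) pixels:

```python
import numpy as np

def tile_image(img, tile_size=256, bg_threshold=0.8, white_cutoff=220):
    """Split an RGB image array into non-overlapping tiles, discarding
    tiles whose fraction of near-white pixels exceeds bg_threshold.

    img: HxWx3 uint8 array standing in for a decoded WSI region.
    Thresholds are illustrative, not taken from the reviewed paper.
    """
    tiles, coords = [], []
    h, w = img.shape[:2]
    for y in range(0, h - tile_size + 1, tile_size):
        for x in range(0, w - tile_size + 1, tile_size):
            tile = img[y:y + tile_size, x:x + tile_size]
            # Fraction of pixels whose mean channel value is near white
            bg_frac = (tile.mean(axis=2) > white_cutoff).mean()
            if bg_frac < bg_threshold:
                tiles.append(tile)
                coords.append((y, x))
    return tiles, coords

# Toy example: a 512x512 "slide" that is white except one tissue-like quadrant
slide = np.full((512, 512, 3), 255, dtype=np.uint8)
slide[:256, :256] = 120  # pretend tissue
tiles, coords = tile_image(slide)
print(len(tiles), coords)  # 1 [(0, 0)]
```

In practice, tiles would be read lazily from the pyramidal WSI file rather than from an in-memory array, since whole-slide images rarely fit in RAM when unpacked.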


Author(s):  
Douglas Mesadri GEWEHR ◽  
Allan Fernando GIOVANINI ◽  
Sofia Inez MUNHOZ ◽  
Seigo NAGASHIMA ◽  
Andressa de Souza BERTOLDI ◽  
...  

ABSTRACT Background: Heart dysfunction and liver disease often coexist because of systemic disorders. Any cause of right ventricular failure may precipitate hepatic congestion and fibrosis. Digital image technologies have been introduced into pathology diagnosis, allowing an objective quantitative assessment. The quantification of fibrous tissue in liver biopsy sections is extremely important in the classification, diagnosis, and grading of chronic liver disease. Aim: To create a semi-automatic computerized protocol to quantify centrilobular fibrosis and sinusoidal dilatation in Masson’s trichrome-stained liver specimens. Method: Once fibrosis had been established, liver samples were collected, histologically processed, and stained with Masson’s trichrome, and whole-slide images were captured with an appropriate digital pathology slide scanner. Afterward, regions of interest (ROIs) were randomly selected. The data were subjected to software-assisted image analysis (ImageJ®). Results: The analysis of 250 ROIs allowed the best application settings for identifying centrilobular fibrosis (CF) and sinusoidal lumen (SL) to be obtained empirically. After the color threshold application settings were established, an in-house macro was recorded to set the measurements (area fraction and total area) and calculate the CF and SL ratios by automatic batch processing. Conclusion: It was possible to create a more detailed method that identifies and quantifies the area occupied by fibrous tissue and sinusoidal lumen in Masson’s trichrome-stained liver specimens.
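The core of such a color-threshold measurement is an area-fraction calculation. The authors' workflow used an ImageJ macro with empirically tuned settings; the following is only a schematic Python stand-in for the same idea, with an invented dominance threshold, counting blue-dominant pixels (collagen stains blue in Masson's trichrome) and reporting their fraction of the ROI:

```python
import numpy as np

def area_fraction(img, channel=2, dominance=30):
    """Estimate the area fraction of blue-dominant (collagen-like)
    pixels in an RGB image of a trichrome-stained section.

    A pixel counts as fibrous when its blue channel exceeds the mean
    of the other two channels by `dominance` (an arbitrary demo value,
    not the empirically derived settings from the paper).
    """
    img = img.astype(np.int32)
    others = (img.sum(axis=2) - img[..., channel]) / 2.0
    mask = img[..., channel] - others > dominance
    return mask.mean()

# Toy ROI: left half "fibrosis" (blue-heavy), right half "cytoplasm" (red-heavy)
roi = np.zeros((100, 100, 3), dtype=np.uint8)
roi[:, :50] = (60, 60, 180)
roi[:, 50:] = (180, 60, 60)
print(area_fraction(roi))  # 0.5
```

Batch processing, as in the paper's macro, would simply apply this function over all selected ROIs and aggregate the resulting ratios.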


2012 ◽  
Vol 18 (2) ◽  
pp. 147-153
Author(s):  
LLUÍS MÀRQUEZ ◽  
ALESSANDRO MOSCHITTI

Abstract During the last decade, machine learning and, in particular, statistical approaches have become more and more important for research in Natural Language Processing (NLP) and Computational Linguistics. Nowadays, most stakeholders of the field use machine learning, as it can significantly enhance both system design and performance. However, machine learning requires careful parameter tuning and feature engineering for representing language phenomena. The latter becomes more complex when the system input/output data is structured, since the designer has both to (i) engineer features for representing structure and model interdependent layers of information, which is usually a non-trivial task; and (ii) generate a structured output using classifiers, which, in their original form, were developed only for classification or regression. Research in empirical NLP has been tackling this problem by constructing output structures as a combination of the predictions of independent local classifiers, possibly applying post-processing heuristics to correct incompatible outputs by enforcing global properties. More recently, some advances in statistical learning theory, namely structured output spaces and kernel methods, have brought techniques for directly encoding dependencies between data items in a learning algorithm that performs global optimization. Within this framework, this special issue aims at studying, comparing, and reconciling the typical domain/task-specific NLP approaches to structured data with the most advanced machine learning methods. In particular, the selected papers analyze the use of diverse structured input/output approaches, ranging from re-ranking to joint constraint-based global models, for diverse natural language tasks, i.e., document ranking, syntactic parsing, sequence supertagging, and relation extraction between terms and entities.
Overall, the experience with this special issue shows that, although a definitive unifying theory for encoding and generating structured information in NLP applications is still far from being shaped, some interesting and effective best practices can be defined to guide practitioners in modeling their own natural language application on complex data.


2020 ◽  
Vol 1 (1) ◽  
pp. 31-38
Author(s):  
Maki Ogura ◽  
Tomoharu Kiyuna ◽  
Hiroshi Yoshida

2020 ◽  
pp. 019262332098031
Author(s):  
Aleksandra Zuraw ◽  
Michael Staup ◽  
Robert Klopfleisch ◽  
Famke Aeffner ◽  
Danielle Brown ◽  
...  

Digital tissue image analysis is a computational method for analyzing whole-slide images and extracting large, complex, and quantitative data sets. However, as with any analysis method, the quality of the generated results depends on a well-designed quality control system for the entire digital pathology workflow. Such a system requires clear procedural controls, appropriate user training, and the involvement of specialists to oversee key steps of the workflow. The toxicologic pathologist is responsible for reporting data obtained by digital image analysis and therefore needs to ensure that it is correct. To accomplish that, they must understand the main parameters of the quality control system and should play an integral part in its conception and implementation. This manuscript describes the most common digital tissue image analysis end points and potential sources of analysis errors. In addition, it outlines recommended approaches for ensuring the quality and correctness of results for both classical and machine learning–based image analysis solutions, as adapted from a recently proposed Food and Drug Administration regulatory framework for modifications to artificial intelligence/machine learning–based software as a medical device. These approaches are beneficial for any type of toxicopathologic study that uses the described end points and can be adjusted based on the intended use of the image analysis solution.
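One simple form such a procedural control can take is an automated tolerance check of image-analysis end points against pathologist reference values. The sketch below is purely illustrative (the end-point names and tolerance are invented, not from the manuscript or any FDA framework):

```python
def qc_check(measured, reference, tolerance=0.05):
    """Flag end points whose image-analysis value deviates from a
    pathologist-derived reference by more than `tolerance` (absolute).

    Returns a dict mapping each end point to True (within tolerance)
    or False (flagged for specialist review).
    """
    return {name: abs(measured[name] - reference[name]) <= tolerance
            for name in reference}

# Hypothetical end points for a single study slide set
measured = {"positive_nuclei_frac": 0.42, "necrosis_area_frac": 0.10}
reference = {"positive_nuclei_frac": 0.40, "necrosis_area_frac": 0.22}
print(qc_check(measured, reference))
# {'positive_nuclei_frac': True, 'necrosis_area_frac': False}
```

In a real workflow the flagged end points would be routed back to the toxicologic pathologist for review rather than reported directly.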

