Image Analysis and Coding Based on Ordinal Data Representation

Author(s):  
Simina Emerich ◽  
Eugen Lupu ◽  
Bogdan Belean ◽  
Septimiu Crisan
2014 ◽  
Vol 40 (10) ◽  
pp. 589-610 ◽  
Author(s):  
George Teodoro ◽  
Tony Pan ◽  
Tahsin Kurc ◽  
Jun Kong ◽  
Lee Cooper ◽  
...  

2020 ◽  
Vol 25 (7) ◽  
pp. 812-821 ◽  
Author(s):  
Stephan Steigele ◽  
Daniel Siegismund ◽  
Matthias Fassler ◽  
Marusa Kustec ◽  
Bernd Kappler ◽  
...  

Drug discovery programs are moving increasingly toward phenotypic imaging assays to model disease-relevant pathways and phenotypes in vitro. These assays offer richer information than target-optimized assays by investigating multiple cellular pathways simultaneously and producing multiplexed readouts. However, extracting the desired information from complex image data poses significant challenges, preventing broad adoption of more sophisticated phenotypic assays. Deep learning-based image analysis can address these challenges by reducing the effort required to analyze large volumes of complex image data at a quality and speed adequate for routine phenotypic screening in pharmaceutical research. Yet while general-purpose deep learning frameworks are readily available, they are not readily applicable to images from automated microscopy. During the past 3 years, we have optimized deep learning networks for this type of data and validated the approach across diverse assays with several industry partners. From this work, we have extracted five essential design principles that we believe should guide deep learning-based analysis of high-content images and multiparameter data: (1) insightful data representation, (2) automation of training, (3) multilevel quality control, (4) knowledge embedding and transfer to new assays, and (5) enterprise integration. We report a new deep learning-based software that embodies these principles, Genedata Imagence, which allows screening scientists to reliably detect stable endpoints for primary drug response, assess toxicity and safety-relevant effects, and discover new phenotypes and compound classes. Furthermore, we show how the software retains expert knowledge from its training on a particular assay and successfully reapplies it to different, novel assays in an automated fashion.


2006 ◽  
Author(s):  
Luis Ibanez ◽  
Lydia Ng ◽  
Josh Cates ◽  
Stephen Aylward ◽  
Bill Lorensen ◽  
...  

This course introduces attendees to select open-source efforts in the field of medical image analysis. Opportunities for users and developers are presented. The course particularly focuses on the open-source Insight Toolkit (ITK) for medical image segmentation and registration. The course describes the procedure for downloading and installing the toolkit and covers the use of its data representation and filtering classes. Attendees are shown how ITK can be used in their research, rapid prototyping, and application development.

LEARNING OUTCOMES
After completing this course, attendees will be able to: contribute to and benefit from open-source software for medical image analysis; download and install the ITK toolkit; start their own software project based on ITK; design and construct an image processing pipeline; combine ITK filters for medical image segmentation; and combine ITK components for medical image registration.

INTENDED AUDIENCE
This course is intended for anyone involved in medical image analysis. In particular it targets graduate students, researchers, and professionals in the areas of computer science and medicine. Attendees should have an intermediate level of object-oriented programming with C++ and must be familiar with the basics of medical image processing and analysis.
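The filter-pipeline pattern the course teaches (chaining filters and pulling data through them) can be sketched language-neutrally. ITK itself is a C++ template library; the filter classes below are hypothetical stand-ins, not actual ITK classes, and the image is a plain nested list:

```python
# Minimal sketch of the filter-pipeline pattern used by ITK-style toolkits.
# Filter names here are illustrative, not real ITK classes.

class MeanSmoothingFilter:
    """Replace each pixel with the mean of itself and its horizontal neighbours."""
    def execute(self, image):
        out = []
        for row in image:
            smoothed = []
            for i in range(len(row)):
                window = row[max(0, i - 1):i + 2]
                smoothed.append(sum(window) / len(window))
            out.append(smoothed)
        return out

class ThresholdFilter:
    """Binary segmentation: pixels at or above the threshold become 1, others 0."""
    def __init__(self, threshold):
        self.threshold = threshold

    def execute(self, image):
        return [[1 if px >= self.threshold else 0 for px in row] for row in image]

def run_pipeline(image, filters):
    """Feed the image through each filter in turn, as an ITK pipeline does on Update()."""
    for f in filters:
        image = f.execute(image)
    return image

image = [[0, 4, 8], [2, 6, 10]]
result = run_pipeline(image, [MeanSmoothingFilter(), ThresholdFilter(4)])
# result == [[0, 1, 1], [1, 1, 1]]
```

In real ITK the same idea appears as connected `SetInput()`/`GetOutput()` calls between filter objects, with execution deferred until the end of the pipeline is updated.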


Author(s):  
S.F. Stinson ◽  
J.C. Lilga ◽  
M.B. Sporn

Increased nuclear size, resulting in an increase in the relative proportion of nuclear to cytoplasmic sizes, is an important morphologic criterion for the evaluation of neoplastic and pre-neoplastic cells. This paper describes investigations into the suitability of automated image analysis for quantitating changes in nuclear and cytoplasmic cross-sectional areas in exfoliated cells from tracheas treated with carcinogen.

Neoplastic and pre-neoplastic lesions were induced in the tracheas of Syrian hamsters with the carcinogen N-methyl-N-nitrosourea. Cytology samples were collected intra-tracheally with a specially designed catheter (1) and stained by a modified Papanicolaou technique. Three cytology specimens were selected from animals with normal tracheas, 3 from animals with dysplastic changes, and 3 from animals with epidermoid carcinoma. One hundred randomly selected cells on each slide were analyzed with a Bausch and Lomb Pattern Analysis System automated image analyzer.
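The measurement the abstract describes, comparing nuclear to cytoplasmic cross-sectional areas, reduces to counting segmented pixels. A minimal sketch, assuming binary masks for the nucleus and the whole cell (the masks and values below are illustrative, not data from the study):

```python
# Sketch: nuclear-to-cytoplasmic (N/C) area ratio from binary masks,
# the kind of quantity an automated image analyzer reports per cell.

def area(mask):
    """Cross-sectional area as the count of foreground (1) pixels."""
    return sum(sum(row) for row in mask)

def nc_ratio(nuclear_mask, cell_mask):
    """Nuclear area relative to cytoplasmic area (cell area minus nuclear area)."""
    nuclear = area(nuclear_mask)
    cytoplasm = area(cell_mask) - nuclear
    return nuclear / cytoplasm

# A 4x4 toy cell: the whole cell vs. its nucleus.
cell = [[1, 1, 1, 1]] * 4          # 16 px total cell area
nucleus = [[0, 0, 0, 0],
           [0, 1, 1, 0],
           [0, 1, 1, 0],
           [0, 0, 0, 0]]           # 4 px nuclear area
ratio = nc_ratio(nucleus, cell)    # 4 / (16 - 4) = 1/3
```

An increase in this ratio across the 100 cells per slide is exactly the morphologic shift toward neoplasia that the paper evaluates.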


Author(s):  
F. A. Heckman ◽  
E. Redman ◽  
J.E. Connolly

In our initial publication on this subject (1) we reported results demonstrating that contrast is the most important factor in producing the high image quality required for reliable image analysis. We also listed the factors which enhance contrast in order of the experimentally determined magnitude of their effect. The two most powerful factors affecting image contrast attainable with sheet film are beam intensity and kV. At that time we had only qualitative evidence for the ranking of enhancing factors. Later we carried out the densitometric measurements which led to the results outlined below.

Meaningful evaluations of the cause-effect relationships among the considerable number of variables in preparing EM negatives depend on doing things in a systematic way, varying only one parameter at a time. Unless otherwise noted, we adhered to the following procedure evolved during our comprehensive study: Philips EM-300; 30μ objective aperture; magnification 7000-12000X; exposure time 1 second; anti-contamination device operating.
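Densitometric measurements of film express each region as an optical density, OD = log10(incident/transmitted intensity), and contrast between two regions as their density difference. A minimal sketch of that arithmetic (the intensity readings are illustrative, not values from the study):

```python
import math

# Sketch: film contrast from densitometric readings.
# Optical density OD = log10(I_incident / I_transmitted);
# contrast between two regions is their density difference.

def optical_density(incident, transmitted):
    """Optical density of one region of the negative."""
    return math.log10(incident / transmitted)

def contrast(od_a, od_b):
    """Contrast between two regions as the absolute OD difference."""
    return abs(od_a - od_b)

# Example readings from a dense and a thin region of a negative.
od_dark = optical_density(100.0, 1.0)    # OD = 2.0
od_light = optical_density(100.0, 10.0)  # OD = 1.0
c = contrast(od_dark, od_light)          # contrast = 1.0
```

Varying one exposure parameter at a time (beam intensity, kV) and recording how this difference changes is the systematic procedure the abstract describes.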


Author(s):  
H.P. Rohr

Today, in image analysis the broadest possible rationalization and economization have become desirable. Basically, there are two approaches to image analysis: scanning methods, which are usually performed without the human eye, and systems of optical semiautomatic analysis relying completely on the human eye.

The new MOP AM 01 opto-manual system (fig.) represents one of the very promising approaches in this field. The instrument consists of an electronic counting and storing unit, which incorporates a microprocessor and a keyboard for the choice of measuring parameters, well designed for easy use.

Using the MOP AM 01 there are three possibilities of image analysis: manual point counting; opto-manual point counting; and the measurement of absolute areas and/or lengths (size distribution analysis included). To determine a point density for the calculation of the corresponding volume density, the intercepts lying within the structure are scanned with the light pen.
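The point-counting step above rests on the standard stereological relation: the volume density V_v of a structure is estimated by the fraction of test-grid points falling inside it (the point fraction P_P). A minimal sketch, with an illustrative binary mask standing in for the micrograph:

```python
# Sketch: estimating volume density by point counting (stereology).
# A regular grid of test points is laid over the section; V_v is
# estimated by the fraction of points hitting the structure.

def count_hits(mask, step):
    """Count grid points (every `step` pixels) falling inside the structure."""
    hits = total = 0
    for y in range(0, len(mask), step):
        for x in range(0, len(mask[0]), step):
            total += 1
            hits += mask[y][x]
    return hits, total

def volume_density(hits, total):
    """V_v is estimated by the point fraction P_P = hits / total."""
    return hits / total

mask = [[1, 1, 0, 0],
        [1, 1, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]               # structure occupies the top-left quadrant
hits, total = count_hits(mask, 2)   # grid points at (0,0), (0,2), (2,0), (2,2)
vv = volume_density(hits, total)    # 1 of 4 points hits -> V_v ≈ 0.25
```

With the MOP AM 01, the same count is accumulated manually or with the light pen instead of from a stored mask; the estimator is the same.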


Author(s):  
S. Nakahara ◽  
D. M. Maher

Since Head first demonstrated the advantages of computer-displayed theoretical intensities from defective crystals, computer display techniques have become important in image analysis. However, the computational methods employed rely largely on numerical integration of the dynamical equations of electron diffraction. As a consequence, the interpretation of the results in terms of the defect displacement field and diffracting variables is difficult to follow in detail. In contrast to this type of computational approach, which is based on a plane-wave expansion of the excited waves within the crystal (i.e., the Darwin representation), Wilkens assumed scattering of modified Bloch waves by an imperfect crystal. For localized defects, the wave amplitudes can be described analytically, and this formulation has been used successfully to predict the black-white symmetry of images arising from small dislocation loops.


Author(s):  
P. Hagemann

The use of computers in analytical electron microscopy today shows three different trends: (1) automated image analysis with dedicated computer systems, (2) instrument control by microprocessors, and (3) data acquisition and processing, e.g., X-ray or EEL spectroscopy.

While image analysis in the TEM usually needs a television chain to obtain a sequential transmission suitable as computer input, the STEM system already has this necessary facility. For the EM400T-STEM system an interface was therefore developed that allows external control of the beam deflection in TEM as well as control of the STEM probe and of the video signal/beam brightness on the STEM screen. The interface sends and receives analogue signals, so the transmission rate is determined by the converters in the actual computer periphery.


Author(s):  
Beverly L. Giammara ◽  
Jennifer S. Stevenson ◽  
Peggy E. Yates ◽  
Robert H. Gunderson ◽  
Jacob S. Hanker

An 11 mm length of sciatic nerve was removed from 10 anesthetized adult rats and replaced by a biodegradable polyester Vicryl™ mesh sleeve, which was then injected with the basement membrane gel Matrigel™. Leg sensation and movement were much improved after 30 to 45 days, and upon sacrifice nerve reconnection was observed in all animals. Epoxy sections of the repaired nerves were compared with those of the excised segments by use of a variation of the PAS reaction, the PATS reaction, developed in our laboratories for light and electron microscopy. This microwave-accelerated technique employs periodic acid, thiocarbohydrazide, and silver methenamine. It stains basement membrane or Type IV collagen brown, and Type III collagen (reticulin), axons, Schwann cells, endoneurium, and perineurium black. Epoxy sections of repaired and excised nerves were also compared by toluidine blue (TB) staining. Comparison of the sections of control and repaired nerves was done by computer-assisted microscopic image analysis using an Olympus CUE-2 Image Analysis System.

