Apoptosis Quantification in Tissue: Development of a Semi-Automatic Protocol and Assessment of Critical Steps of Image Processing

Biomolecules ◽  
2021 ◽  
Vol 11 (10) ◽  
pp. 1523
Author(s):  
Juliette de Noiron ◽  
Marion Hoareau ◽  
Jessie Colin ◽  
Isabelle Guénal

Apoptosis is associated with numerous phenotypic characteristics and is thus studied with many tools. In this study, we compared two widely used apoptotic assays: TUNEL and staining with an antibody targeting the activated form of an effector caspase. To compare them, we developed a protocol based on commonly used tools such as image filtering, z-projection, and thresholding. Although thresholding is common in image-processing protocols, it remains a recurring source of error. Here, we analyzed the impact of processing parameters and readout choice on the accuracy of apoptotic signal quantification. Our results show that TUNEL is quite robust, although some image-processing parameters fail to resolve subtle differences in the apoptotic rate. In contrast, images from anti-cleaved-caspase staining are more sensitive to processing choices and must be handled more carefully. We then developed an open-source Fiji macro that automates most steps of the image-processing and quantification protocol. Notably, the macro's field of application extends beyond apoptosis: it can be used to process and quantify other kinds of images.
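The quantification steps described (filter each z-slice, z-project, threshold, count positive pixels) can be sketched in Python. This is a minimal illustration, not the Fiji macro itself: the function names, the box filter, and the mean-plus-2σ fallback threshold are illustrative choices, not the protocol's actual parameters.

```python
import numpy as np

def box_filter(img, r=1):
    """Simple 2D box (mean) filter via shifted sums over an edge-padded copy."""
    padded = np.pad(img.astype(float), r, mode="edge")
    h, w = img.shape
    out = np.zeros((h, w))
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (2 * r + 1) ** 2

def quantify_apoptotic_signal(stack, threshold=None):
    """Filter each z-slice, max-project the stack, threshold, and return
    the fraction of above-threshold pixels as the apoptotic readout."""
    filtered = np.stack([box_filter(s) for s in stack])
    projection = filtered.max(axis=0)  # maximum-intensity z-projection
    if threshold is None:
        # illustrative fallback: mean + 2 standard deviations
        threshold = projection.mean() + 2 * projection.std()
    return float((projection > threshold).mean())
```

As the abstract notes, the threshold choice is the fragile step: the same stack can yield different apoptotic rates under different thresholding rules, which is exactly what the study quantifies.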



2017 ◽  
Vol 3 (2) ◽  
pp. 199-202
Author(s):  
Markus Reischl ◽  
Andreas Bartschat ◽  
Urban Liebel ◽  
Jochen Gehrig ◽  
Ferenc Müller ◽  
...  

High-throughput microscopy makes it possible to observe the morphology of zebrafish at large scale in order to quantify genetic, toxic, or drug effects. Image acquisition is done by automated microscopy, and images are evaluated automatically by image-processing pipelines tailored to the requirements of the scientific question. Transferring such algorithms to other projects, however, is complex due to missing guidelines and a lack of mathematical or programming knowledge. In this work, we implement an image-processing pipeline for automatic fluorescence quantification in user-defined domains of zebrafish embryos and larvae of different ages. The pipeline is capable of detecting embryos and larvae in image stacks and quantifying domain activity. To make this protocol available to the community, we developed an open-source software package called "ZebrafishMiner", which guides the user through all steps of the processing pipeline and makes the algorithms available and easy to handle. We implemented all routines in a MATLAB-based graphical user interface (GUI) that gives the user control over all image-processing parameters. The software ships with a 30-page manual and three tutorial datasets that guide the user through it step by step. It can be downloaded at https://sourceforge.net/projects/scixminer/.
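The core operation, quantifying fluorescence in a user-defined domain, can be sketched as follows. This is an illustrative Python sketch only (ZebrafishMiner itself is MATLAB-based), and the boolean domain mask and percentile-based background estimate are assumptions, not the package's actual algorithm.

```python
import numpy as np

def domain_activity(image, domain_mask, background_percentile=10):
    """Quantify fluorescence in a user-defined domain: estimate a global
    background from a low intensity percentile, subtract it, and return
    total and mean corrected intensity inside the boolean mask."""
    background = np.percentile(image, background_percentile)
    corrected = np.clip(image - background, 0, None)
    domain = corrected[domain_mask]
    return {"total": float(domain.sum()), "mean": float(domain.mean())}
```

In practice, such a routine would run after embryo/larva detection, with the mask drawn or configured by the user for each anatomical domain of interest.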


Mathematics ◽  
2021 ◽  
Vol 9 (23) ◽  
pp. 3101
Author(s):  
Ahsan Bin Tufail ◽  
Yong-Kui Ma ◽  
Mohammed K. A. Kaabar ◽  
Ateeq Ur Rehman ◽  
Rahim Khan ◽  
...  

Alzheimer’s disease (AD) is a leading health concern affecting the elderly population worldwide. It is characterized by amyloid plaques, neurofibrillary tangles, and neuronal loss. Neuroimaging modalities such as positron emission tomography (PET) and magnetic resonance imaging (MRI) are routinely used in clinical settings to monitor alterations in the brain over the course of AD progression. Deep learning techniques such as convolutional neural networks (CNNs) have found numerous applications in healthcare and other technologies. Together with neuroimaging modalities, they can be deployed in clinical settings to learn effective representations of data for tasks such as classification, segmentation, and detection. Image filtering methods make images viable for image-processing operations and have found numerous applications in image-processing-related tasks. In this work, we deployed 3D-CNNs to learn effective representations of PET modality data and to quantify the impact of different image filtering approaches. We used box filtering, median filtering, Gaussian filtering, and modified Gaussian filtering to preprocess the images before classification with a 3D-CNN architecture. Our findings suggest that these approaches are nearly equivalent, with no distinct advantage over one another. For the multiclass classification task between normal control (NC), mild cognitive impairment (MCI), and AD classes, the 3D-CNN architecture trained on Gaussian-filtered data performed best. For binary classification between NC and MCI, the architecture trained on median-filtered data performed best, while for binary classification between AD and MCI, the architecture trained on modified Gaussian-filtered data performed best. Finally, for binary classification between AD and NC, the architecture trained on box-filtered data performed best.
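Box and Gaussian filtering of 3D volumes, two of the four preprocessing options compared, are standard operations and can be sketched in pure NumPy. The implementations below are illustrative (naive shifted sums and a separable Gaussian with zero-padded edges), not the paper's exact preprocessing code; the "modified Gaussian" variant is specific to the paper and not reproduced here.

```python
import numpy as np

def box_filter_3d(vol, r=1):
    """Naive 3D box (mean) filter via shifted sums over an edge-padded copy."""
    p = np.pad(vol.astype(float), r, mode="edge")
    d, h, w = vol.shape
    out = np.zeros((d, h, w))
    for dz in range(2 * r + 1):
        for dy in range(2 * r + 1):
            for dx in range(2 * r + 1):
                out += p[dz:dz + d, dy:dy + h, dx:dx + w]
    return out / (2 * r + 1) ** 3

def gaussian_kernel_1d(sigma, radius):
    """Normalized 1D Gaussian kernel of length 2*radius + 1."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def gaussian_filter_3d(vol, sigma=1.0):
    """Separable 3D Gaussian: convolve each axis with the same 1D kernel."""
    k = gaussian_kernel_1d(sigma, radius=int(3 * sigma))
    out = vol.astype(float)
    for axis in range(3):
        out = np.apply_along_axis(
            lambda m: np.convolve(m, k, mode="same"), axis, out)
    return out
```

Either filter would be applied to each PET volume before it is fed to the 3D-CNN; the study's finding is that the choice among such filters barely moves classification performance.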


2019 ◽  
Vol 2019 (1) ◽  
pp. 331-338 ◽  
Author(s):  
Jérémie Gerhardt ◽  
Michael E. Miller ◽  
Hyunjin Yoo ◽  
Tara Akhavan

In this paper, we discuss a model that estimates the power consumption and lifetime (LT) of an OLED display from its pixel values and the brightness setting of the screen (scbr). This model is used to illustrate the effect of OLED aging on display color characteristics. Model parameters are based on power-consumption measurements of a given display for a number of pixel and scbr combinations. OLED LT is often given for the most stressful display operating condition, i.e., a white image at maximum scbr, but the ability to predict the LT for other configurations is useful for estimating the impact and quality of new image-processing algorithms. After explaining our model, we present a use case illustrating how we use it to evaluate the impact of an image-processing algorithm for brightness adaptation.
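A toy version of such a power-and-lifetime model can be sketched as below. All coefficients here are made up for illustration; the paper fits its parameters to power measurements of a specific display, and the inverse-power lifetime scaling is a common simplifying assumption, not necessarily the paper's exact formulation.

```python
import numpy as np

def oled_power(frame_rgb, scbr, p_static=0.1, k=(1.0, 0.8, 1.5)):
    """Toy OLED power model: a static term plus per-channel emissive power
    proportional to the mean normalized pixel value and the brightness
    setting scbr in [0, 1]. Coefficients k are illustrative, not measured."""
    means = frame_rgb.reshape(-1, 3).mean(axis=0) / 255.0
    return p_static + scbr * float(np.dot(k, means))

def lifetime_hours(frame_rgb, scbr, lt_white_max=10000.0):
    """Scale a worst-case lifetime (white frame at maximum scbr) by the
    inverse of relative power, an illustrative simplifying assumption."""
    p_white_max = oled_power(np.full((1, 1, 3), 255.0), 1.0)
    p = oled_power(frame_rgb, scbr)
    return lt_white_max * p_white_max / max(p, 1e-9)
```

Under this kind of model, an image-processing algorithm that dims or re-tones content can be scored by the predicted power saving and lifetime gain relative to the unprocessed frames.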


MIS Quarterly ◽  
2019 ◽  
Vol 43 (3) ◽  
pp. 951-976
Author(s):  
Likoebe M. Maruping ◽  
Sherae L. Daniel ◽  
Marcelo Cataldo ◽  
...  

Minerals ◽  
2021 ◽  
Vol 11 (2) ◽  
pp. 213
Author(s):  
Hamid Ait Said ◽  
Hassan Noukrati ◽  
Hicham Ben Youcef ◽  
Ayoub Bayoussef ◽  
Hassane Oudadesse ◽  
...  

Three-dimensional hydroxyapatite-chitosan (HA-CS) composites were formulated via a solid-liquid technique and freeze-drying. The prepared composites had an apatitic nature, as demonstrated by X-ray diffraction and infrared spectroscopy analyses. The impact of the solid/liquid (S/L) ratio, the polymer content, and the polymer molecular weight on the composite's mechanical strength was investigated. An increase in the S/L ratio from 0.5 to 1 resulted in an increase in compressive strength from 0.08 ± 0.02 to 1.95 ± 0.39 MPa for HA-CSL (CS of low molecular weight: CSL) and from 0.3 ± 0.06 to 2.40 ± 0.51 MPa for HA-CSM (CS of medium molecular weight: CSM). Moreover, increasing the amount (1 to 5 wt%) and the molecular weight of the polymer increased the mechanical strength of the composite. The highest compressive strength (up to 2.40 ± 0.51 MPa) was obtained for HA-CSM (5 wt% CS) formulated at an S/L ratio of 1. Dissolution tests of the HA-CS composites confirmed their cohesion and mechanical stability in aqueous solution. The polymer and the apatite are assumed to work together, providing the synergy needed to form effective cylindrical composites, which could serve as promising candidates for bone repair in the orthopedic field.


Materials ◽  
2021 ◽  
Vol 14 (10) ◽  
pp. 2621
Author(s):  
Aneta Bartkowska

The paper presents the results of a study of the microstructure, chemical composition, microhardness, and corrosion resistance of Cr-B coatings produced on Vanadis 6 tool steel. In this study, chromium and boron were added to the steel surface using a laser alloying process. The main purpose of the study was to determine the impact of these chemical elements on surface properties. Chromium, boron, and their mixtures were prepared in various proportions and applied to the steel substrate as a precoat of 100 µm thickness. Depending on the type of precoat used and the laser processing parameters, changes in microstructure and properties were observed. Coatings produced using a precoat containing a chromium and boron mixture were characterized by high microhardness (900–1300 HV0.05) while maintaining good corrosion resistance. It was also found that too low a laser beam power contributed to the formation of cracks and porosity.


Author(s):  
Erin Polka ◽  
Ellen Childs ◽  
Alexa Friedman ◽  
Kathryn S. Tomsho ◽  
Birgit Claus Henn ◽  
...  

Sharing individualized results with health study participants, a practice we and others refer to as “report-back,” ensures participant access to exposure and health information and may promote health equity. However, the practice of report-back and the content shared is often limited by the time-intensive process of personalizing reports. Software tools that automate creation of individualized reports have been built for specific studies, but are largely not open-source or broadly modifiable. We created an open-source and generalizable tool, called the Macro for the Compilation of Report-backs (MCR), to automate compilation of health study reports. We piloted MCR in two environmental exposure studies in Massachusetts, USA, and interviewed research team members (n = 7) about the impact of MCR on the report-back process. Researchers using MCR created more detailed reports than during manual report-back, including more individualized numerical, text, and graphical results. Using MCR, researchers saved time producing draft and final reports. Researchers also reported feeling more creative in the design process and more confident in report-back quality control. While MCR does not expedite the entire report-back process, we hope that this open-source tool reduces the barriers to personalizing health study reports, promotes more equitable access to individualized data, and advances self-determination among participants.
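At its core, compiling individualized reports is template filling over tabular study results. The sketch below illustrates that idea in Python with hypothetical field names ("name", "value", and so on); MCR itself is a separate implementation, and its actual report structure is richer, including graphical results.

```python
import string

# Hypothetical report template; $-placeholders are filled per participant.
REPORT_TEMPLATE = string.Template(
    "Dear $name,\n"
    "Your measured $analyte level was $value $units "
    "(study median: $median $units).\n"
)

def compile_reports(participants, analyte, units, median):
    """Fill one individualized report per participant from tabular results,
    keyed by participant id."""
    return {
        p["id"]: REPORT_TEMPLATE.substitute(
            name=p["name"], analyte=analyte, value=p["value"],
            units=units, median=median)
        for p in participants
    }
```

Automating this step is what frees researchers to spend their time on report design and quality control rather than on repetitive per-participant editing.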


Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3691
Author(s):  
Ciprian Orhei ◽  
Silviu Vert ◽  
Muguras Mocofan ◽  
Radu Vasiu

Computer Vision is a cross-disciplinary research field whose main purpose is to understand the surrounding environment as closely as possible to human perception. Image-processing systems are continuously growing into more complex systems, usually tailored to the specific needs of the applications they serve. To better serve this purpose, research on the architecture and design of such systems is also important. We present the End-to-End Computer Vision Framework (EECVF), an open-source solution that aims to support researchers and teachers within the vast field of image processing. The framework incorporates Computer Vision features and Machine Learning models that researchers can use. Given the continuous need to add new Computer Vision algorithms in day-to-day research activity, our proposed framework has the advantage of a configurable and scalable architecture. Even though the main focus of the framework is the Computer Vision processing pipeline, it offers solutions for incorporating more complex activities, such as training Machine Learning models. EECVF aims to become a useful tool for learning activities in the Computer Vision field, as it allows the learner and the teacher to focus only on the topics at hand, and not on the interconnections necessary for the visual processing flow.
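A configurable pipeline of the kind described, where algorithms are registered once and composed per job from a configuration, can be sketched as follows. The class and config shape are illustrative assumptions, not EECVF's actual API.

```python
class Pipeline:
    """Minimal registry-based image-processing pipeline: named steps are
    registered once and composed per run from a list of config dicts."""

    def __init__(self):
        self.registry = {}

    def register(self, name, fn):
        """Register a processing step under a name usable in configs."""
        self.registry[name] = fn

    def run(self, image, config):
        """Apply each configured step in order, passing its parameters,
        e.g. config = [{"op": "scale", "factor": 2}]."""
        for step in config:
            params = {k: v for k, v in step.items() if k != "op"}
            image = self.registry[step["op"]](image, **params)
        return image
```

This registry pattern is what makes such a framework easy to extend: adding a new algorithm means registering one function, with no changes to the pipeline driver.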

