biological images
Recently Published Documents


TOTAL DOCUMENTS: 137 (five years: 28)

H-INDEX: 16 (five years: 3)

2022 ◽  
Author(s):  
Gustave Ronteix ◽  
Valentin Bonnet ◽  
Sebastien Sart ◽  
Jeremie Sobel ◽  
Elric Esposito ◽  
...  

Microscopy techniques and image segmentation algorithms have improved dramatically over the past decade, leading to an ever-increasing volume of biological images and a greater reliance on imaging to investigate biological questions. This has created a need for methods that extract the relevant information on the behaviors of cells and their interactions while reducing the computing power required to organize this information. This task can be performed with a network representation in which the cells and their properties are encoded in the nodes, while the neighborhood interactions are encoded by the links. Here we introduce Griottes, an open-source tool to build the "network twin" of 2D and 3D tissues from segmented microscopy images. We show how the library provides a wide range of biologically relevant metrics on individual cells and their neighborhoods, with the objective of delivering multi-scale biological insights. The library's capabilities are demonstrated on different image and data types, and the tool can be integrated into common image analysis workflows to extend them.
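The network representation described above can be sketched in a few lines. The following is a minimal illustration (not the Griottes API) that builds a cell-neighbourhood graph from a 2D labelled segmentation, with per-cell properties stored on the nodes and touching-cell relationships stored on the links.

```python
import numpy as np
import networkx as nx
from skimage.measure import regionprops

def label_image_to_graph(labels: np.ndarray) -> nx.Graph:
    """Build a graph from a 2D label image (0 = background, >0 = cell ids)."""
    graph = nx.Graph()

    # Nodes: one per cell, carrying a few simple per-cell properties.
    for region in regionprops(labels):
        graph.add_node(region.label,
                       centroid=region.centroid,
                       area=region.area)

    # Links: two cells are neighbours if their pixels touch (4-connectivity),
    # found by comparing the label image with itself shifted by one pixel.
    for original, shifted in ((labels[:-1, :], labels[1:, :]),
                              (labels[:, :-1], labels[:, 1:])):
        touching = (original != shifted) & (original > 0) & (shifted > 0)
        for a, b in zip(original[touching], shifted[touching]):
            graph.add_edge(int(a), int(b))

    return graph
```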


Diagnostics ◽  
2022 ◽  
Vol 12 (1) ◽  
pp. 123
Author(s):  
Rania Almajalid ◽  
Ming Zhang ◽  
Juan Shan

In the medical sector, three-dimensional (3D) imaging modalities such as computed tomography (CT) and magnetic resonance imaging (MRI) are commonly used. 3D MRI is a non-invasive method for studying the soft-tissue structures in a knee joint for osteoarthritis studies, and identifying the bone structure first can greatly improve the accuracy of segmenting structures such as cartilage, bone marrow lesions, and the meniscus. U-net is a convolutional neural network originally designed to segment biological images with limited training data; the input of the original U-net is a single 2D image and the output is a binary 2D image. In this study, we modified the U-net model to identify the knee bone structures in 3D MRI, which is a sequence of 2D slices, and propose a fully automatic model to detect and segment the knee bones. The proposed model was trained, tested, and validated on 99 knee MRI cases, each consisting of 160 2D slices for a single knee scan. To evaluate the model's performance, the similarity, Dice coefficient (DICE), and area error metrics were calculated. Separate models were trained for the individual knee bones (tibia, femur, and patella), as well as a combined model for segmenting all the knee bones. Given the whole MRI sequence (160 slices), the method first detects the beginning and ending bone slices and then segments the bone structures for all the slices in between. On the testing set, the detection model achieved 98.79% accuracy and the segmentation model achieved a DICE of 96.94% and a similarity of 93.98%. The proposed method outperforms several state-of-the-art methods on the same dataset, exceeding U-net by 3.68%, SegNet by 14.45%, and FCN-8 by 2.34% in terms of DICE score.
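As a rough illustration of the slice-by-slice setup described above (not the authors' exact architecture), the sketch below applies a toy U-net-style network with a single skip connection to each 2D slice of an MRI volume and computes a DICE score; input height and width are assumed to be even.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """A toy U-net-style network: one down path, one up path, one skip connection."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.pool = nn.MaxPool2d(2)
        self.mid = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 1, 1))

    def forward(self, x):                       # x: (N, 1, H, W), H and W even
        skip = self.enc(x)
        up = self.up(self.mid(self.pool(skip)))
        return torch.sigmoid(self.dec(torch.cat([up, skip], dim=1)))

def dice(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-7) -> float:
    """DICE = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    inter = (pred * target).sum()
    return float((2 * inter + eps) / (pred.sum() + target.sum() + eps))

def segment_volume(model: nn.Module, volume: torch.Tensor) -> torch.Tensor:
    """volume: (num_slices, H, W) -> binary bone masks of the same shape."""
    model.eval()
    with torch.no_grad():
        probs = model(volume.unsqueeze(1))      # treat each slice as a 2D image
    return (probs.squeeze(1) > 0.5).float()
```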


2021 ◽  
Author(s):  
Peter Andrew McAtee ◽  
Simona Nardozza ◽  
Annette Richardson ◽  
Mark Wohlers ◽  
Robert Schaffer

Abstract
Background: The ability to quantify the colour of fruit is extremely important for a number of applied fields, including plant breeding, postharvest assessment, and consumer quality assessment. Fruit and other plant organs display highly complex colour patterning, which makes it challenging to compare and contrast colours in an accurate and time-efficient manner. Several methodologies exist that attempt to digitally quantify colour in complex images, but these either require a priori knowledge to assign colours to a particular bin or average the colours present within an assayed region into a single colour value. To date, there are no published methodologies that assess colour patterning using a data-driven approach.
Results: In this study we present a methodology to acquire and process digital images of biological samples that contain complex colour gradients. The CIE (Commission internationale de l'éclairage / International Commission on Illumination) ΔE2000 formula was used to determine the perceptually unique colours (PUC) within images of fruit containing complex colour gradients. This process resulted, on average, in a 98% reduction in colour values relative to the number of unique colours (UC) in the original image. This data-driven procedure summarised the colour data while maintaining a linear relationship with the normalised colour complexity of the total image. A weighted ΔE2000 distance metric was used to generate a distance matrix and facilitated clustering of the summarised colour data.
Conclusions: Clustering showed that our data-driven methodology can group these complex images into their respective binomial families while retaining the ability to detect subtle colour differences, and it was also able to differentiate closely related images. We provide a high-quality set of complex biological images spanning the visual spectrum that can be used in future colorimetric research to benchmark method development.
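To make the PUC reduction concrete, the sketch below is a simplified stand-in for the authors' pipeline: it uses scikit-image's ΔE2000 implementation and an illustrative just-noticeable-difference threshold of 1.0, greedily keeping only colours that are perceptually distinct from those already retained.

```python
import numpy as np
from skimage.color import rgb2lab, deltaE_ciede2000

def perceptually_unique_colours(rgb_image: np.ndarray, threshold: float = 1.0) -> np.ndarray:
    """rgb_image: (H, W, 3) floats in [0, 1]; returns the retained Lab colours."""
    unique_rgb = np.unique(rgb_image.reshape(-1, 3), axis=0)   # the image's UC
    unique_lab = rgb2lab(unique_rgb)                           # (n, 3) Lab values

    kept = []
    for lab in unique_lab:
        # Retain a colour only if it is at least `threshold` Delta E 2000 units
        # away from every colour already retained.
        if all(deltaE_ciede2000(lab, ref) >= threshold for ref in kept):
            kept.append(lab)
    return np.asarray(kept)
```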


2021 ◽  
Author(s):  
Maël Balluet ◽  
Florian Sizaire ◽  
Youssef El Habouz ◽  
Thomas Walter ◽  
Jérémy Pont ◽  
...  

2021 ◽  
Vol 2021 ◽  
pp. 1-6
Author(s):  
Fan Rao

To better reduce sports injuries, a method based on functional movement biological image data is proposed. A functional movement screening test comprising seven test items was performed on wushu athletes, and each athlete was given a score according to the test standard. This paper summarizes the mistakes and deficiencies in the athletes' common movement patterns and formulates different intervention plans to improve the effectiveness of sports injury screening. The results show a significant difference (P < 0.001), with the experimental group's FMS total score (15.02 ± 3.7) lower than the control group's (18.51 ± 1.45). The recognition rate of the system is higher than that of a system based on a single feature, and its recognition performance is better than that of standard SVM and KNN recognition methods. This demonstrates that the system design is feasible, reliable, and effective.
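As an illustrative sketch only (not the paper's recognition system), the snippet below shows the kind of SVM and KNN baselines such a comparison refers to, trained here on synthetic placeholder features rather than the real functional-movement image data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 16))                  # placeholder feature vectors
labels = (features[:, 0] + features[:, 1] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(features, labels, random_state=0)

# Baseline classifiers of the kind the recognition results are compared against.
for name, clf in [("SVM", SVC(kernel="rbf")), ("KNN", KNeighborsClassifier(5))]:
    clf.fit(X_train, y_train)
    print(name, "accuracy:", clf.score(X_test, y_test))
```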


2021 ◽  
Author(s):  
Bella Baidak ◽  
Yahiya Hussain ◽  
Emma Kelminson ◽  
Thouis R. Jones ◽  
Loraine Franke ◽  
...  
Keyword(s):  

2021 ◽  
Vol 27 (3) ◽  
pp. 307-310
Author(s):  
Guozheng Zhu

ABSTRACT
Introduction: To reduce or avoid injuries during high-intensity sports and help treat the injured part, methods for recognizing biological images of the damaged part are a crucial point of current research.
Objective: To reduce the damage caused by high-intensity sports and improve the efficiency of injury treatment, this article explores a method for identifying damaged parts in biological images of high-intensity sports injuries.
Methods: A method is proposed to recognize damaged parts in biological images of high-intensity sports injuries based on an improved region-growing algorithm.
Results: A coarse black-and-white segmentation is first obtained, separating the main body (the object) from the background. Building on this approximate segmentation, the region-growing algorithm accurately recognizes the damaged region by improving the selection of seed points and the growth rules.
Conclusion: The recognition accuracy is high and the recognition time is short. The algorithm proposed in this work can improve the precision of recognizing damaged parts in biological images of sports injuries and shorten the recognition time, making it feasible for determining the damaged parts of sports injuries.
Level of evidence II; Therapeutic studies: investigation of treatment results.
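For context, a minimal seeded region-growing routine is sketched below; it shows only the basic growth rule and does not reproduce the paper's improved seed selection or growth criteria.

```python
import numpy as np
from collections import deque

def region_grow(image: np.ndarray, seed: tuple, tol: float = 10.0) -> np.ndarray:
    """Grow a region from `seed`, adding 4-connected pixels whose intensity
    differs from the seed intensity by at most `tol`. Returns a boolean mask."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    seed_value = float(image[seed])
    queue = deque([seed])
    mask[seed] = True

    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc] \
                    and abs(float(image[nr, nc]) - seed_value) <= tol:
                mask[nr, nc] = True
                queue.append((nr, nc))
    return mask
```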


2021 ◽  
Author(s):  
Alessio Mascolini ◽  
Dario Cardamone ◽  
Francesco Ponzio ◽  
Santa Di Cataldo ◽  
Elisa Ficarra

Abstract: Computer-aided analysis of biological images typically requires extensive training on large-scale annotated datasets, which is not viable in many situations. In this paper, we present GAN-DL, a Discriminator Learner based on the StyleGAN2 architecture, which we employ for self-supervised representation learning of fluorescent biological images. We show that Wasserstein Generative Adversarial Networks combined with linear Support Vector Machines enable high-throughput compound screening based on raw images. We demonstrate this by classifying active and inactive compounds tested for the inhibition of SARS-CoV-2 infection in VERO and HRCE cell lines. In contrast to previous methods, our deep learning-based approach does not require any annotation beyond what is normally collected during sample preparation. We test our technique on the RxRx19a SARS-CoV-2 image collection, a dataset of fluorescent images generated to assess the ability of regulatory-approved compounds, or compounds in late-stage clinical trials, to modulate in vitro SARS-CoV-2 infection in both VERO and HRCE cell lines. We show that our technique can be exploited not only for classification tasks but also to derive dose-response curves for the tested treatments in a self-supervised manner. Lastly, we demonstrate its generalization capabilities by successfully addressing a zero-shot learning task: the categorization of four different cell types in the RxRx1 fluorescent image collection.
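The downstream screening step described above (frozen self-supervised features feeding a linear SVM) can be sketched as follows; extract_features is a hypothetical placeholder for the discriminator embedding and is not the GAN-DL code.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def extract_features(images: np.ndarray) -> np.ndarray:
    # Placeholder: in GAN-DL these would be self-supervised embeddings of the
    # fluorescent images; here we simply flatten the pixels for illustration.
    return images.reshape(len(images), -1)

rng = np.random.default_rng(0)
images = rng.random((100, 32, 32))               # stand-in fluorescent crops
labels = rng.integers(0, 2, size=100)            # active (1) vs inactive (0)

embeddings = extract_features(images)
scores = cross_val_score(LinearSVC(max_iter=5000), embeddings, labels, cv=5)
print("cross-validated accuracy:", scores.mean())
```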


Nanomaterials ◽  
2021 ◽  
Vol 11 (7) ◽  
pp. 1817
Author(s):  
Bruna Lallo da Silva ◽  
Laurent Lemaire ◽  
Jean-Pierre Benoit ◽  
Fernanda Hediger Borges ◽  
Rogéria Rocha Gonçalves ◽  
...  

In recent years, the use of quantum dots (Qdots) to obtain biological images has attracted attention due to their excellent luminescent properties and the possibility of combining them with contrast agents for magnetic resonance imaging (MRI). In this study, Gd3+/ZnO Qdots (ZnOGd) were conjugated with Qdots composed of a gadolinium-copper-indium-sulphur core covered with a ZnS shell (GCIS/ZnS Qdots). This conjugation, which has not yet been described in the literature, aims to improve the photoluminescent properties of the Qdots. The structural and morphological features of the Qdots were obtained by transmission electron microscopy (TEM), Fourier-transform infrared spectroscopy (FTIR), X-ray diffraction (XRD), and thermogravimetric analysis (TGA). The photoluminescent properties were examined by emission (PL) and excitation (PLE) spectra. A new ZnOGd and GCIS/ZnS (ZnOGd-GCIS/ZnS) nanomaterial was synthesized, with optical properties tunable by the ratio between the two native Qdots. A hydrophilic or lipophilic coating, using 3-glycidyloxypropyltrimethoxysilane (GPTMS) or hexadecyltrimethoxysilane (HTMS), was applied to the surface of the ZnOGd-GCIS/ZnS Qdots before assessing their efficiency as magnetic resonance contrast agents. ZnOGd-GCIS/ZnS showed excellent luminescence and MRI properties. The newly developed ZnOGd-GCIS/ZnS Qdots, mostly composed of ZnOGd (75%), showed lower cytotoxicity than ZnOGd alone, as well as greater cellular uptake.


2021 ◽  
Vol 11 (14) ◽  
pp. 6410
Author(s):  
Carlos Capitán-Agudo ◽  
Beatriz Pontes ◽  
Pedro Gómez-Gálvez ◽  
Pablo Vicente-Munuera

Analysing biological images coming from the microscope is challenging: not only is it complex to acquire the images, but also to analyse the three-dimensional shapes found in them. Thus, automatic approaches that can learn and embrace that variance are highly valuable for the field. Here, we use an evolutionary algorithm to obtain the 3D cell shapes of curved epithelial tissues. Our approach is based on the application of a 3D segmentation algorithm called LimeSeg, a segmentation software that uses a particle-based active contour method. This program requires the fine-tuning of several hyperparameters that admit a large number of combinations, so selecting the best parametrisation is highly time-consuming. Our evolutionary algorithm automatically selects the best possible parametrisation, with which it can perform an accurate and unsupervised segmentation of 3D curved epithelial tissues. In this way, we combine the segmentation potential of LimeSeg with automated parameter selection. This methodology has been applied to three datasets of confocal images from Drosophila melanogaster, where good convergence has been observed in the evaluation of the solutions. Our experimental results confirm the proper performance of the algorithm, whose segmented images were compared to those obtained manually for the same tissues.
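A minimal sketch of this kind of evolutionary hyperparameter search is given below; the fitness function and the parameter names (d_0, f_pressure) are illustrative placeholders, not the actual LimeSeg configuration or the authors' evaluation.

```python
import random

def segmentation_quality(params: dict) -> float:
    # Placeholder fitness: higher is better. In practice this would run the
    # segmentation with `params` and score the result against a quality criterion.
    return -((params["d_0"] - 2.0) ** 2 + (params["f_pressure"] - 0.02) ** 2)

def random_params() -> dict:
    return {"d_0": random.uniform(0.5, 5.0), "f_pressure": random.uniform(0.0, 0.05)}

def mutate(params: dict) -> dict:
    child = dict(params)
    key = random.choice(list(child))
    child[key] *= random.uniform(0.8, 1.2)      # small random perturbation
    return child

population = [random_params() for _ in range(20)]
for generation in range(30):
    population.sort(key=segmentation_quality, reverse=True)
    parents = population[:5]                     # keep the fittest parametrisations
    population = parents + [mutate(random.choice(parents)) for _ in range(15)]

best = max(population, key=segmentation_quality)
print("best parameters:", best)
```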

