source image
Recently Published Documents


TOTAL DOCUMENTS: 411 (FIVE YEARS: 171)

H-INDEX: 20 (FIVE YEARS: 7)

2022 ◽  
Author(s):  
Jonathan M Matthews ◽  
Brooke Schuster ◽  
Sara Saheb Kashaf ◽  
Ping Liu ◽  
Mustafa Bilgic ◽  
...  

Organoids are three-dimensional in vitro tissue models that closely represent the native heterogeneity, microanatomy, and functionality of an organ or diseased tissue. Analysis of organoid morphology, growth, and drug response is challenging due to the diversity in shape and size of organoids, movement through focal planes, and limited options for live-cell staining. Here, we present OrganoID, an open-source image analysis platform that automatically recognizes, labels, and tracks single organoids in brightfield and phase-contrast microscopy. The platform identifies organoid morphology pixel by pixel without the need for fluorescence or transgenic labeling and accurately analyzes a wide range of organoid types in time-lapse microscopy experiments. OrganoID uses a modified u-net neural network with minimal feature depth to encourage model generalization and allow fast execution. The network was trained on images of human pancreatic cancer organoids and was validated on images from pancreatic, lung, colon, and adenoid cystic carcinoma organoids with a mean intersection-over-union of 0.76. OrganoID measurements of organoid count and individual area concurred with manual measurements at 96% and 95% agreement respectively. Tracking accuracy remained above 89% over the duration of a four-day validation experiment. Automated single-organoid morphology analysis of a dose-response experiment identified significantly different organoid circularity after exposure to different concentrations of gemcitabine. The OrganoID platform enables straightforward, detailed, and accurate analysis of organoid images to accelerate the use of organoids as physiologically relevant models in high-throughput research.
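As a point of reference for the validation figures quoted above, the mean intersection-over-union between predicted and manually annotated organoid masks can be computed along the following lines. This is a generic sketch, not code from the OrganoID repository; the function name, the 0.5 binarization threshold, and the handling of empty masks are illustrative assumptions.

```python
import numpy as np

def mean_iou(pred_masks, true_masks, threshold=0.5):
    """Mean intersection-over-union between predicted and reference masks."""
    ious = []
    for pred, true in zip(pred_masks, true_masks):
        pred_bin = np.asarray(pred) >= threshold
        true_bin = np.asarray(true) >= threshold
        union = np.logical_or(pred_bin, true_bin).sum()
        intersection = np.logical_and(pred_bin, true_bin).sum()
        # Count a pair of empty masks as perfect agreement.
        ious.append(1.0 if union == 0 else intersection / union)
    return float(np.mean(ious))
```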


2022 ◽  
Vol 2022 ◽  
pp. 1-11
Author(s):  
Chunqiao Song ◽  
Xutong Wu

At present, image restoration has become a research hotspot in computer vision. The purpose of digital image restoration is to recover the lost information of an image or remove redundant objects without destroying the integrity and visual effect of the image. User-interactive color migration is laborious and therefore inefficient, and it is prone to errors when many colors are involved. In response to these problems, this paper proposes automatic sample selection for color migration. Considering that the gray-scale histograms of the source image and the target image are both approximately normal distributions, this paper takes each peak point as the mean of a normal distribution to construct the objective function. We find all the required partitions according to the user's needs and use the center points of these partitions as the initial cluster centers of the fuzzy C-means (FCM) algorithm to complete the automatic clustering of the two images. We then select representative pixels as sample blocks to realize automatic matching of sample blocks between the two images and complete the color migration of the entire image. We introduce the curvature into the energy functional of the p-harmonic model and, depending on whether the image contains noise, propose a new wavelet-domain image restoration model. From the established model, the Euler–Lagrange equation is derived by the variational method, the corresponding diffusion equation is established, and the model is analyzed and solved numerically in detail to obtain the restored image. The results show that the combination of image sample texture synthesis and the segmentation-matching method used in this paper can effectively solve the problem of color unevenness. This not only saves time in mural restoration but also improves the quality of the restored murals, achieving more realistic visual effects and better connectivity.
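To make the clustering step above concrete, the sketch below seeds a fuzzy C-means run on gray levels with histogram peaks, following the idea that each peak approximates the mean of a normal mode. It is a minimal illustration in Python, not the authors' implementation; the peak-detection parameters, fuzzifier m, and iteration count are assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

def histogram_peak_fcm(gray, n_iter=50, m=2.0):
    """Cluster gray levels with FCM, seeding centers at histogram peaks."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    peaks, _ = find_peaks(hist, distance=20, prominence=0.05 * hist.max())
    centers = peaks.astype(float)                 # initial cluster centers
    x = gray.reshape(-1).astype(float)
    for _ in range(n_iter):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-9
        u = d ** (-2.0 / (m - 1.0))               # fuzzy membership update
        u /= u.sum(axis=1, keepdims=True)
        um = u ** m
        centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
    labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
    return centers, labels.reshape(gray.shape)
```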


2021 ◽  
Vol 7 ◽  
pp. e761
Author(s):  
Yuling He ◽  
Yingding Zhao ◽  
Wenji Yang ◽  
Yilu Xu

Due to the sophisticated entanglements of non-rigid deformation, generating person images from a source pose to a target pose is a challenging task. In this paper, we present a novel framework to generate person images with shape consistency and appearance consistency. The proposed framework leverages a graph network to infer the global relationship between the source pose and the target pose for better pose transfer. Moreover, we decompose the source image into different attributes (e.g., hair, clothes, pants, and shoes) and combine them with the pose coding to generate a more realistic person image. We adopt an alternate updating strategy to promote mutual guidance between the pose modules and the appearance modules for better person image quality. Qualitative and quantitative experiments were carried out on the DeepFashion dataset, and they verify the efficacy of the presented framework.


Author(s):  
Sudhanshu Mukherjee

Abstract: One of the primary concerns, and a demanding issue within the realm of medical specialism, is the detection and removal of tumours. Because alternative visualisation approaches have notable drawbacks, doctors rely heavily on MRI images to provide a superior result. Tumour image processing takes place in three stages: pre-processing, tumour segmentation, and tumour operations. Following the acquisition of the source image, the original image is converted to grayscale. A noise-removal filter and a median filter for quality improvement are then applied, followed by an exploration stage that yields closely matching images. Finally, the watershed algorithm is used to complete the segmentation. The proposed methodology is useful for automatically organising reports in a short amount of time, and the exploration stage removes many of the less relevant tumour parameters. Keywords: MRI Imaging, Segmentation, Watershed Algorithm.
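The three-stage pipeline described above (grayscale conversion, median filtering, watershed segmentation) can be sketched with scikit-image as follows. This is a minimal illustration under assumed parameters (disk radius, Otsu thresholding, marker seeding from the distance transform), not the paper's exact procedure.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import color, filters, io, measure, morphology, segmentation

def segment_tumour(path):
    """Grayscale conversion, median denoising, then marker-based watershed."""
    img = io.imread(path)
    gray = color.rgb2gray(img) if img.ndim == 3 else img.astype(float)
    denoised = filters.median(gray, morphology.disk(3))       # noise removal
    mask = denoised > filters.threshold_otsu(denoised)        # foreground candidates
    distance = ndi.distance_transform_edt(mask)
    markers = measure.label(distance > 0.5 * distance.max())  # seed regions
    return segmentation.watershed(-distance, markers, mask=mask)
```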


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Liang Guo ◽  
Su Li ◽  
Xiangye Wang ◽  
Caihong Zeng ◽  
Chunyu Liu

Applied Current Thermoacoustic Imaging (ACTAI) is a new imaging method that combines electromagnetic excitation with ultrasound imaging, taking the ultrasonic signal as the medium and biological tissue conductivity as the detection target. By combining the high-contrast advantage of Electrical Impedance Tomography (EIT) with the high-resolution advantage of ultrasound imaging, ACTAI has broad application prospects in biomedical imaging. Although ACTAI has high excitation efficiency and a strong, detectable signal-to-noise ratio, reconstructing a high-resolution image of the target conductivity under low-frequency electromagnetic excitation remains a major challenge. This paper proposes a new method for reconstructing conductivity based on a Generative Adversarial Network, which consists of three main steps: first, Wiener filtering deconvolution is used to restore the electrical signal output by the ultrasonic probe to the underlying acoustic signal; then the initial acoustic source image is obtained with filtered backprojection; finally, conductivity images are matched with the initial acoustic source images and used as training samples for the generative adversarial network to establish a deep learning model for conductivity reconstruction. Theoretical analysis and simulation studies show that, by introducing machine learning, the new method can extract the inverse-problem-solving model contained in the data, reconstruct a high-resolution conductivity image, and exhibit strong anti-interference characteristics. The new method provides a new way to solve the problem of conductivity reconstruction in Applied Current Thermoacoustic Imaging.
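The first two steps of the pipeline (Wiener deconvolution of the probe signal, then filtered backprojection to form the initial acoustic source image) can be sketched as follows; the GAN training stage is omitted. The SNR constant, the ramp filter choice, and the function names are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from skimage.transform import iradon

def wiener_deconvolve(measured, impulse_response, snr=100.0):
    """Frequency-domain Wiener deconvolution of the probe's electrical output."""
    n = len(measured)
    H = np.fft.rfft(impulse_response, n)
    Y = np.fft.rfft(measured, n)
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)   # Wiener inverse filter
    return np.fft.irfft(G * Y, n)

def initial_source_image(sinogram, angles_deg):
    """Filtered backprojection of the deconvolved acoustic projections."""
    return iradon(sinogram, theta=angles_deg, filter_name="ramp", circle=True)
```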


2021 ◽  
Author(s):  
Suxing Liu ◽  
Wesley Paul Bonelli ◽  
Peter Pietrzyk ◽  
Alexander Bucksch


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
M. Arza-García ◽  
C. Núñez-Temes ◽  
J. A. Lorenzana ◽  
J. Ortiz-Sanz ◽  
A. Castro ◽  
...  

Due to their cost, high-end commercial 3D-DIC (digital image correlation) systems are still inaccessible for many laboratories or small factories interested in lab-testing materials. These professional systems can provide reliable and rapid full-field measurements that are essential in some laboratory tests with high strain-rate events or highly dynamic loading. However, in many stress-controlled experiments, such as the Brazilian tensile strength (BTS) test of compacted soils, samples are usually large and fail within a timeframe of several minutes. In those cases, alternative low-cost methods can be used successfully instead of commercial systems. This paper proposes a methodology for applying 2D-DIC techniques using consumer-grade cameras and the open-source image processing software DICe (Sandia National Laboratories) to monitor the standardized BTS test. Unlike most previous studies, which theoretically estimate systematic errors or use local measurements from strain gauges for accuracy assessment, we propose a contrast methodology based on independent full-field measurements. The displacement fields obtained with the low-cost system are benchmarked against the professional stereo-DIC system Aramis-3D (GOM GmbH) in four BTS experiments on compacted soil specimens. Both approaches proved to be valid tools for obtaining full-field measurements and showing the sequence of crack initiation, propagation, and termination in the BTS test, constituting reliable alternatives to traditional strain gauges. Mean deviations between the low-cost 2D-DIC approach and Aramis-3D in measuring the in-plane components were 0.08 mm in the direction perpendicular to loading (ΔX) and 0.06 mm in the loading direction (ΔY). The proposed low-cost approach implies considerable savings compared to commercial systems.
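The per-component deviations quoted above can be obtained, once both displacement fields have been resampled onto a common grid, with a comparison along these lines; a minimal sketch with assumed array shapes, not the authors' evaluation script.

```python
import numpy as np

def mean_deviation(field_a, field_b):
    """Mean absolute deviation per in-plane component between two displacement
    fields of shape (rows, cols, 2), ordered as (dX, dY)."""
    diff = np.abs(np.asarray(field_a) - np.asarray(field_b))
    return diff[..., 0].mean(), diff[..., 1].mean()  # (mean dX dev, mean dY dev)
```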


2021 ◽  
Vol 8 ◽  
Author(s):  
Hongtao Kang ◽  
Die Luo ◽  
Weihua Feng ◽  
Shaoqun Zeng ◽  
Tingwei Quan ◽  
...  

Stain normalization usually refers to transferring the color distribution of an image to that of a target image and has been widely used in biomedical image analysis. Conventional stain normalization is usually achieved through a pixel-by-pixel color mapping model that depends on a single reference image, which makes it hard to accurately achieve style transformation between whole image datasets. In principle, this difficulty can be well addressed by deep learning-based methods; however, their complicated structures result in low computational efficiency and artifacts in the style transformation, which has restricted their practical application. Here, we use distillation learning to reduce the complexity of deep learning methods, and we train a fast and robust network called StainNet to learn the color mapping between the source image and the target image. StainNet can learn the color mapping relationship from a whole dataset and adjusts color values in a pixel-to-pixel manner. The pixel-to-pixel design restricts the network size and avoids artifacts in the style transformation. Results on cytopathology and histopathology datasets show that StainNet achieves performance comparable to deep learning-based methods. Computation results demonstrate that StainNet is more than 40 times faster than StainGAN and can normalize a 100,000 × 100,000 whole-slide image in 40 s.
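A pixel-to-pixel color mapping network of the kind described above can be sketched in PyTorch using only 1×1 convolutions, so that each output pixel depends solely on the color of the corresponding input pixel. The layer count, channel width, and L1 distillation loss below are illustrative assumptions, not StainNet's published configuration.

```python
import torch
import torch.nn as nn

class PixelColorMapper(nn.Module):
    """Pixel-wise color mapping: 1x1 convolutions only, so no spatial context."""
    def __init__(self, channels=32, layers=3):
        super().__init__()
        blocks, in_ch = [], 3
        for _ in range(layers):
            blocks += [nn.Conv2d(in_ch, channels, kernel_size=1), nn.ReLU(inplace=True)]
            in_ch = channels
        blocks.append(nn.Conv2d(in_ch, 3, kernel_size=1))   # map back to RGB
        self.net = nn.Sequential(*blocks)

    def forward(self, x):
        return self.net(x)

# Distillation-style training sketch: the small student mimics the output of a
# slower teacher normalizer (e.g. a GAN-based method) under an L1 loss.
# student = PixelColorMapper()
# loss = nn.L1Loss()(student(source_batch), teacher_output_batch)
```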

