AbsorbanceQ: An App for Generating Absorbance Images from Brightfield Images

Author(s):  
Stephen M. Zimmerman ◽  
Carl G. Simon Jr. ◽  
Greta Babakhanova

The AbsorbanceQ app converts brightfield microscope images into absorbance images that can be analyzed and compared across different operators, microscopes, and time. Because absorbance-based measurements are comparable across these parameters, they are useful when the aim is to manufacture biotherapeutics with consistent quality. AbsorbanceQ will be of value to those who want to capture quantitative absorbance images of cells. The app has two modes: a single-image processing mode and a batch processing mode for multiple images. Instructions for using the app are given on the ‘App Information’ tab when the app is opened. The input and output images for the app have been defined, and synthetic images were used to validate that the output images are correct. This article describes how to use the app, its software specifications, and how the app works; offers instructive advice on using the tools; and describes the methods used to generate the software. In addition, links are provided to a website where the app and test images are deployed.
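The abstract does not spell out the conversion formula, but brightfield-to-absorbance conversion conventionally follows the Beer-Lambert relation A = −log10(I/I0), where I0 is a blank (no-specimen) reference image. A minimal sketch under that assumption (function and variable names are illustrative, not the app's API):

```python
import numpy as np

def absorbance_image(sample, blank, eps=1e-6):
    """Convert a brightfield image to absorbance via the Beer-Lambert
    relation A = -log10(I / I0), where I0 is a blank reference image.
    This is a generic sketch, not AbsorbanceQ's actual implementation."""
    sample = sample.astype(np.float64)
    blank = blank.astype(np.float64)
    # Clip transmittance away from zero to avoid log of zero
    transmittance = np.clip(sample / np.maximum(blank, eps), eps, None)
    return -np.log10(transmittance)

# A uniform specimen transmitting 10% of the blank intensity
blank = np.full((4, 4), 200.0)
sample = np.full((4, 4), 20.0)
print(absorbance_image(sample, blank))  # ~1.0 everywhere
```

Because absorbance is a ratio against the blank, the result is independent of lamp intensity and exposure settings, which is what makes such images comparable across operators and microscopes.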

PLoS ONE ◽  
2021 ◽  
Vol 16 (5) ◽  
pp. e0251899
Author(s):  
Samir M. Badawy ◽  
Abd El-Naser A. Mohamed ◽  
Alaa A. Hefnawy ◽  
Hassan E. Zidan ◽  
Mohammed T. GadAllah ◽  
...  

Computer-aided diagnosis (CAD) of biomedical images assists physicians with fast, facilitated tissue characterization. A scheme combining fuzzy logic (FL) and deep learning (DL) for automatic semantic segmentation (SS) of tumors in breast ultrasound (BUS) images is proposed. The proposed scheme consists of two steps: the first is FL-based preprocessing, and the second is convolutional neural network (CNN)-based SS. Eight well-known CNN-based SS models were utilized in the study. The scheme was studied on a dataset of 400 cancerous BUS images and their corresponding 400 ground-truth images. SS was applied in two modes: batch and one-by-one image processing. Three quantitative performance evaluation metrics were utilized: global accuracy (GA), mean Jaccard index (mean intersection over union (IoU)), and mean BF (Boundary F1) score. In the batch processing mode, the metrics’ average results over the eight CNN-based SS models on the 400 cancerous BUS images were: 95.45% GA instead of 86.08% without the fuzzy preprocessing step, 78.70% mean IoU instead of 49.61%, and 68.08% mean BF score instead of 42.63%. Moreover, the resulting segmented images showed tumor regions more accurately than with CNN-based SS alone. In the one-by-one image processing mode, however, there was no enhancement, either qualitative or quantitative. Thus, the proposed scheme may be helpful in enhancing automatic SS of tumors in BUS images only when batch processing is needed; applying it in one-by-one image mode will degrade segmentation efficiency. The proposed batch processing scheme may be generalized for enhanced CNN-based SS of a targeted region of interest (ROI) in any batch of digital images. A modified small dataset is available: https://www.kaggle.com/mohammedtgadallah/mt-small-dataset (S1 Data).
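The abstract does not specify which fuzzy preprocessing the authors use; a common fuzzy contrast operation of the kind such schemes build on maps pixel intensities to membership values in [0, 1], pushes them away from 0.5 with the classic intensification operator, and maps them back. A minimal sketch under that assumption (not the paper's actual preprocessing):

```python
import numpy as np

def fuzzy_contrast_enhance(image, iterations=1):
    """Generic fuzzy contrast intensification: map intensities to
    memberships, apply INT(mu) = 2*mu^2 (mu <= 0.5) or 1 - 2*(1-mu)^2
    (mu > 0.5), then map back. Illustrative only."""
    lo, hi = float(image.min()), float(image.max())
    mu = (image.astype(np.float64) - lo) / max(hi - lo, 1e-9)
    for _ in range(iterations):
        mu = np.where(mu <= 0.5, 2 * mu**2, 1 - 2 * (1 - mu)**2)
    return (mu * (hi - lo) + lo).astype(image.dtype)

img = np.array([[0, 64], [128, 255]], dtype=np.uint8)
print(fuzzy_contrast_enhance(img))  # dark pixels darker, bright brighter
```

Each pass increases contrast around the mid-grey level, which is one plausible way a preprocessing step could make tumor boundaries easier for the downstream CNN to segment.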


2021 ◽  
Vol 13 (21) ◽  
pp. 4434
Author(s):  
Chunhui Zhao ◽  
Chi Zhang ◽  
Yiming Yan ◽  
Nan Su

A novel framework for 3D reconstruction of buildings based on a single off-nadir satellite image is proposed in this paper. Compared with traditional remote sensing methods that reconstruct from multiple images, recovering 3D information from a single image reduces the input-data demands of reconstruction tasks. It solves the problem that multiple images suitable for traditional reconstruction methods cannot be acquired in regions where remote sensing resources are scarce. However, it is difficult to reconstruct a 3D model with a complete shape and accurate scale from a single image: the geometric constraints are insufficient, as view angle, building size, and spatial resolution differ among remote sensing images. To address this, the proposed reconstruction framework consists of two convolutional neural networks: the Scale-Occupancy-Network (Scale-ONet) and the model scale optimization network (Optim-Net). From a single off-nadir satellite image, Scale-ONet generates watertight mesh models with the exact shape and a rough scale of buildings, while Optim-Net reduces the scale error of these mesh models. Finally, the complete reconstructed scene is recovered by model-image matching. Profiting from the well-designed networks, our framework is robust to input images with different view angles, building sizes, and spatial resolutions. Experimental results show that ideal reconstruction accuracy can be obtained for both the shape and scale of buildings.


1998 ◽  
Vol 37 (12) ◽  
pp. 381-387 ◽  
Author(s):  
Yoichi Takagi ◽  
Akio Tsujikawa ◽  
Masao Takato ◽  
Takeshi Saito ◽  
Motoko Kaida

This paper describes how the authors developed a liquid level measuring system designed to directly analyze images of liquid surfaces. This measuring system is based on the principle that the contour of the image of a slanted metal strip placed in a liquid shows an inflection point on the liquid interface. This liquid level measuring system using image processing is useful in automatically measuring the levels of water, oil, liquefied gases, and alcoholic beverages. Among the features of this measuring system are: (1) it is of the noncontact type, so that there is no need to install a sensor or other high-precision devices close to the liquid to be measured; (2) it can be installed in a way not affected by the liquid; (3) the scale and the liquid surface can be analyzed on an image basis directly, so that periodic recalibration is unnecessary; (4) deviations in measurements can be easily detected by visually checking the monitor screen; (5) images from more than one camera can be processed with a single image processor to reduce total costs.
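The measurement principle above (an inflection point in the contour of a slanted strip at the liquid interface) can be caricatured in one dimension: given a per-row profile of the strip's apparent contour, the interface is the row where the profile changes most abruptly. This toy sketch, with made-up values, is only an illustration of the idea, not the authors' image processor:

```python
import numpy as np

def liquid_level_row(profile):
    """Locate the liquid interface as the row with the steepest change
    in a 1-D contour profile of the strip (one value per image row).
    Toy illustration of the inflection-point principle."""
    gradient = np.abs(np.diff(profile.astype(np.float64)))
    return int(np.argmax(gradient))

# Synthetic profile: the strip appears wide above the liquid, narrow below
profile = np.array([30, 30, 31, 30, 12, 11, 12, 11], dtype=float)
print(liquid_level_row(profile))  # interface between rows 3 and 4 -> 3
```

Because the level is read directly from the image geometry against an imaged scale, no per-sensor calibration is needed, consistent with feature (3) above.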


PeerJ ◽  
2018 ◽  
Vol 6 ◽  
pp. e4157 ◽  
Author(s):  
Gail E. Austen ◽  
Markus Bindemann ◽  
Richard A. Griffiths ◽  
David L. Roberts

Emerging technologies have led to an increase in species observations being recorded via digital images. Such visual records are easily shared, and are often uploaded to online communities when help is required to identify or validate species. Although this is common practice, little is known about the accuracy of species identification from such images. Using online images of newts that are native and non-native to the UK, this study asked holders of great crested newt (Triturus cristatus) licences (issued by UK authorities to permit surveying for this species) to sort these images into groups, and to assign species names to those groups. All of these experts identified the native species, but agreement among these participants was low, with some being cautious in committing to definitive identifications. Individuals’ accuracy was also independent of both their experience and self-assessed ability. Furthermore, mean accuracy was not uniform across species (69–96%). These findings demonstrate the difficulty of accurate identification of newts from a single image, and that expert judgements are variable, even within the same knowledgeable community. We suggest that identification decisions should be made on multiple images and verified by more than one expert, which could improve the reliability of species data.


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Shida Zhao ◽  
Guangzhao Hao ◽  
Yichi Zhang ◽  
Shucai Wang

Accurate recognition of the three parts of a sheep carcass is key to research on mutton cutting robots. The parts of the carcass are connected to each other and share similar features, which makes them difficult to identify and detect; however, developments in deep-learning-based image semantic segmentation make real-time recognition of the three parts feasible. Based on ICNet, we propose a real-time semantic segmentation method for sheep carcass images. We first acquire images of sheep carcasses and use augmentation to expand the image data; after normalization, we annotate the images with LabelMe and build the sheep carcass image dataset. We then establish the ICNet model and train it with transfer learning. The segmentation accuracy, MIoU, and average processing time per image are obtained and used as the evaluation standards of the segmentation effect. In addition, we verify the generalization ability of ICNet on the sheep carcass image dataset with segmentation experiments at different brightness levels. Finally, U-Net, DeepLabv3, PSPNet, and Fast-SCNN are introduced for comparative experiments to further verify the segmentation performance of ICNet. The experimental results show that, on the sheep carcass image dataset, the segmentation accuracy and MIoU of our method are 97.68% and 88.47%, respectively, with a single-image processing time of 83 ms. The MIoU of U-Net and DeepLabv3 is higher than ICNet's by 0.22% and 0.03%, respectively, but their single-image processing times are longer by 186 ms and 430 ms. Compared with PSPNet and Fast-SCNN, the MIoU of ICNet is higher by 1.25% and 4.49%, respectively, while its single-image processing time is shorter by 469 ms than PSPNet's and longer by 7 ms than Fast-SCNN's.
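The MIoU figure reported above is the intersection-over-union averaged across classes. A minimal sketch of that metric (the class layout is an illustrative assumption, e.g. 0 = background and 1-3 = the three carcass parts):

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean intersection-over-union across classes (MIoU).
    `pred` and `gt` are integer label masks of equal shape."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union:  # skip classes absent from both masks
            ious.append(inter / union)
    return float(np.mean(ious))

pred = np.array([[0, 1], [2, 2]])
gt = np.array([[0, 1], [1, 2]])
print(mean_iou(pred, gt, 3))  # (1.0 + 0.5 + 0.5) / 3 ≈ 0.667
```

Averaging per class rather than per pixel keeps small classes from being swamped by large ones, which is why MIoU and pixel accuracy (97.68% vs. 88.47% above) can diverge.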


Sensors ◽  
2019 ◽  
Vol 19 (20) ◽  
pp. 4434 ◽  
Author(s):  
Sangwon Kim ◽  
Jaeyeal Nam ◽  
Byoungchul Ko

Depth estimation is a crucial and fundamental problem in the computer vision field. Conventional methods reconstruct scenes using feature points extracted from multiple images; however, these approaches require multiple images and thus are not easily implemented in various real-time applications. Moreover, the special equipment required by hardware-based approaches using 3D sensors is expensive. Therefore, software-based methods for estimating depth from a single image using machine learning or deep learning are emerging as new alternatives. In this paper, we propose an algorithm that generates a depth map in real time from a single image using an optimized lightweight efficient neural network (L-ENet) instead of physical equipment such as an infrared sensor or multi-view camera. Because depth values have a continuous nature and can produce locally ambiguous results, pixel-wise prediction with ordinal depth range classification was applied in this study. In addition, our method applies various convolution techniques to extract a dense feature map, and the number of parameters is greatly reduced by reducing the network layers. Using the proposed L-ENet algorithm, an accurate depth map can be generated quickly from a single image, producing depth values close to the ground truth with small errors. Experiments confirmed that the proposed L-ENet achieves significantly improved performance over state-of-the-art algorithms for depth estimation from a single image.
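Ordinal depth range classification requires discretizing continuous depth into ordered bins. The abstract does not state the binning used; a common choice in ordinal depth work is spacing-increasing discretization (SID), with bin edges uniform in log-depth so that nearby depths get finer bins. A sketch under that assumption:

```python
import numpy as np

def sid_thresholds(d_min, d_max, num_bins):
    """Spacing-increasing discretization: bin edges uniformly spaced in
    log-depth. An assumed binning, not necessarily L-ENet's."""
    i = np.arange(num_bins + 1)
    return np.exp(np.log(d_min) + np.log(d_max / d_min) * i / num_bins)

def depth_to_ordinal(depth, edges):
    """Map continuous depth values to ordinal bin indices."""
    return np.clip(np.searchsorted(edges, depth, side="right") - 1,
                   0, len(edges) - 2)

edges = sid_thresholds(1.0, 16.0, 4)          # [1, 2, 4, 8, 16]
print(depth_to_ordinal(np.array([1.5, 5.0, 15.9]), edges))  # [0 2 3]
```

Predicting the bin index per pixel (rather than regressing a raw value) turns the locally ambiguous continuous problem into an ordered classification that networks optimize more stably.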


2018 ◽  
Vol 7 (2.2) ◽  
pp. 70
Author(s):  
Darius Shyafary ◽  
Rony H ◽  
Rheo Malani ◽  
Anggri Sartika W

A mosaic is a combination of two or more images using various combining techniques. One application of computer graphics is the image mosaic, used for purposes such as texture maps and better image backgrounds. An important aspect of making an image mosaic is how to create small pieces of the image in such a way that they produce a good mosaic. A number of methods have been proposed to build image mosaic systems that produce good results, but they usually require complicated calculations. Fuzzy image processing is a form of information processing in which both input and output are images. It is a collection of fuzzy approaches that understand, represent, and process images, their segments, and their features as fuzzy sets. In this study, the fuzzy image processing concept is used to create an image mosaic by random seed generation using a fuzzy membership function (MF).
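The abstract does not define the membership function used for seed generation; a simple possibility is a triangular MF whose memberships weight where random seeds are drawn. A minimal sketch under that assumption (the parameters and sampling scheme are illustrative):

```python
import numpy as np

def triangular_mf(x, a, b, c):
    """Triangular fuzzy membership function: 0 at a, peak 1 at b, 0 at c."""
    x = np.asarray(x, dtype=np.float64)
    left = (x - a) / max(b - a, 1e-9)
    right = (c - x) / max(c - b, 1e-9)
    return np.clip(np.minimum(left, right), 0.0, 1.0)

# Candidate seed positions along one axis, weighted by membership so that
# seeds cluster near the centre of the fuzzy region (illustrative choice)
rng = np.random.default_rng(0)
xs = np.linspace(0, 100, 11)
weights = triangular_mf(xs, 20, 50, 80)
seeds = rng.choice(xs, size=5, p=weights / weights.sum())
print(seeds)
```

Drawing seeds from a membership-weighted distribution, rather than uniformly, gives a cheap way to bias tile placement without the complicated calculations the abstract mentions.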


Author(s):  
C. Radhika ◽  
R. Parvathi ◽  
N. Karthikeyani Visalakshi

Image processing is any form of information processing in which both input and output are images. Most image processing involves treating the image as a two-dimensional representation and applying standard techniques to it. Images contain many uncertainties and are fuzzy/vague in nature. Various fuzzy filtering techniques have been defined for noise removal in image processing, and these existing filters help enhance the image using only membership values. Further, by incorporating intuitionistic fuzzy filters, vagueness and ambiguity are managed by also taking non-membership values into consideration. In this paper, some important types of noise are examined and a comparative analysis is done. The paper also presents the results of applying different noise types to an image and investigates the results of various intuitionistic fuzzy filtering techniques. A comparison is made of the results of all the techniques.
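The kind of experiment described above needs a noise injector and a baseline filter to compare against. A minimal sketch of salt-and-pepper noise plus a crisp 3×3 median filter (the fuzzy and intuitionistic fuzzy filters in the paper instead weight neighbours by membership and non-membership values, which is not reproduced here):

```python
import numpy as np

def add_salt_pepper(image, amount, rng):
    """Corrupt a grayscale image with salt-and-pepper noise:
    `amount` is the total fraction of pixels set to 0 or 255."""
    noisy = image.copy()
    mask = rng.random(image.shape)
    noisy[mask < amount / 2] = 0          # pepper
    noisy[mask > 1 - amount / 2] = 255    # salt
    return noisy

def median_filter3(image):
    """Plain 3x3 median filter, a crisp baseline for comparison."""
    padded = np.pad(image, 1, mode="edge")
    h, w = image.shape
    stacked = np.stack([padded[dy:dy + h, dx:dx + w]
                        for dy in range(3) for dx in range(3)])
    return np.median(stacked, axis=0).astype(image.dtype)

rng = np.random.default_rng(1)
img = np.full((5, 5), 100, dtype=np.uint8)
noisy = add_salt_pepper(img, 0.2, rng)
print(median_filter3(noisy))
```

A crisp median treats every neighbour equally; the intuitionistic fuzzy filters discussed above aim to do better on mixed noise by discounting neighbours with high hesitation (low membership and low non-membership).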

