QuickFigures: A toolkit and ImageJ PlugIn to quickly transform microscope images into scientific figures

PLoS ONE ◽  
2021 ◽  
Vol 16 (11) ◽  
pp. e0240280
Author(s):  
Gregory Mazo

Publications involving fluorescence microscopy images generally contain many panels with split channels, merged images, scale bars and label text. Similar panel layouts are used when displaying other microscopy images, electron micrographs, photographs, and other images. Assembling and editing these figures with even spacing, consistent fonts, correct text position, accurate scale bars, and other features can be tedious and time-consuming. To save time, I have created a toolset and ImageJ plugin called QuickFigures. QuickFigures includes many helpful features that streamline the process of creating, aligning, and editing scientific figures. Those features include tools that automatically create split-channel figures from a region of interest (the “Quick Figure” button and “Inset Tool”), layouts that make it easy to rearrange panels, multiple tools to align objects, and “Figure Format” menu options that help a user ensure that large numbers of figures have a consistent appearance. QuickFigures was compared to previous tools by measuring the time a user needed to create a figure with each software package (QuickFigures, OMERO.figure, EZFig, FigureJ and PowerPoint). QuickFigures significantly reduced the time required to create a figure. The toolsets were also compared by checking each package against a list of features; QuickFigures had the most extensive feature set. Therefore, QuickFigures is an advantageous alternative to traditional methods of constructing scientific figures. Figures created in QuickFigures can be exported to a variety of formats including PowerPoint, PDF, SVG, PNG, TIFF and Adobe Illustrator. Export was successfully tested for each file format and object type. Exported objects and text remain editable in their target software, making them suitable for sharing with collaborators. The software is free, open source and easy to install.

Sensors ◽  
2021 ◽  
Vol 21 (9) ◽  
pp. 2925
Author(s):  
Antonio Mederos-Barrera ◽  
Cristo Jurado-Verdu ◽  
Victor Guerra ◽  
Jose Rabadan ◽  
Rafael Perez-Jimenez

Visible light communications (VLC) technology is emerging as a candidate to meet the demand for interconnected devices’ communications. However, the cost of incorporating specific hardware into end-user devices slows down its market entry. Optical camera communication (OCC) technology paves the way by reusing cameras as receivers. These systems have generally been evaluated under static conditions, in which transmitting sources are recognized using computationally expensive discovery algorithms. In vehicle-to-vehicle networks and wearable devices, tracking algorithms such as the one proposed in this work reduce the time required to locate a moving source, and hence the latency of these systems, increasing the data rate by up to 2100%. The proposed receiver architecture combines discovery and tracking algorithms that analyze spatial features of a custom RGB LED transmitter matrix, highlighted in the scene by varying the camera’s exposure time. By using an anchor LED and changing the intensity of the green LED, the receiver can track the light source with slow temporal deterioration. Moreover, data bits sent over the red and blue channels do not significantly affect detection, so transmission proceeds uninterrupted. Finally, a novel experimental methodology to evaluate the evolution of detection performance is proposed. With the analysis of the mean and standard deviation of novel K parameters, it is possible to evaluate the detected region-of-interest scale and centrality against the transmitter source’s ideal location.
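The abstract does not define the K parameters themselves. As an illustrative stand-in only (not the paper's actual formulas), a detected ROI can be scored per frame for scale and centrality against the ideal transmitter location, then summarised by mean and standard deviation:

```python
import numpy as np

def roi_quality(detected, ideal):
    """Per-frame ROI quality scores, loosely inspired by the paper's
    K parameters; these definitions are illustrative stand-ins.
    Each ROI is an (x, y, w, h) box in pixel coordinates."""
    scores = []
    for (dx, dy, dw, dh), (ix, iy, iw, ih) in zip(detected, ideal):
        k_scale = (dw * dh) / (iw * ih)               # detected vs ideal area
        dcx, dcy = dx + dw / 2, dy + dh / 2           # detected ROI centre
        icx, icy = ix + iw / 2, iy + ih / 2           # ideal ROI centre
        # centre offset, normalised by the ideal ROI diagonal
        k_centre = np.hypot(dcx - icx, dcy - icy) / np.hypot(iw, ih)
        scores.append((k_scale, k_centre))
    s = np.asarray(scores)
    return s.mean(axis=0), s.std(axis=0)
```

A mean scale near 1 and a mean centrality near 0, with small standard deviations across frames, would indicate that tracking keeps the ROI locked on the source.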


2012 ◽  
Vol 30 (23) ◽  
pp. 2823-2828 ◽  
Author(s):  
Robert A. Ramirez ◽  
Christopher G. Wang ◽  
Laura E. Miller ◽  
Courtney A. Adair ◽  
Allen Berry ◽  
...  

Purpose: Pathologic nodal stage affects prognosis in patients with surgically resected non–small-cell lung cancer (NSCLC). Unlike examination of mediastinal lymph nodes (LNs), which depends on surgical practice, accurate examination of intrapulmonary (N1) nodes depends primarily on pathology practice. We investigated the completeness of N1 LN examination in NSCLC resection specimens and its potential impact on stage.
Patients and Methods: We performed a case-control study of a special pathologic examination (SPE) protocol using thin gross dissection with retrieval and microscopic examination of all LN-like material on remnant NSCLC resection specimens after routine pathologic examination (RPE). We compared LNs retrieved by the SPE protocol with nodes examined after RPE of the same lung specimens and with those of an external control cohort.
Results: We retrieved additional LNs in 66 (90%) of 73 patient cases and discovered metastasis in 56 (11%) of 514 retrieved LNs from 27% of all patients. We found unexpected LN metastasis in six (12%) of 50 node-negative patients. Three other patients had undetected satellite metastatic nodules. Pathologic stage was upgraded in eight (11%) of 73 patients. The time required for the SPE protocol decreased significantly with experience, with no change in the number of LNs found.
Conclusion: Standard pathology practice frequently leaves large numbers of N1 LNs unexamined, a clinically significant proportion of which harbor metastasis. By improving N1 LN examination, SPE can have an impact on prognosis and adjuvant management. We suggest adoption of the SPE to improve pathologic staging of resected NSCLC.


Author(s):  
Réka Hollandi ◽  
Ákos Diósdi ◽  
Gábor Hollandi ◽  
Nikita Moshkov ◽  
Péter Horváth

Abstract
AnnotatorJ combines single-cell identification with deep learning and manual annotation. Cellular analysis quality depends on accurate and reliable detection and segmentation of cells so that subsequent analysis steps, e.g. expression measurements, can be carried out precisely and without bias. Deep learning has recently become a popular way of segmenting cells, performing far better than conventional methods. However, such deep learning applications need to be trained on large amounts of annotated data to meet the highest expectations. High-quality annotations are unfortunately expensive, as they require field experts to create them, and often cannot be shared outside the lab due to medical regulations. We propose AnnotatorJ, an ImageJ plugin for the semi-automatic annotation of cells (or, generally, objects of interest) on (not only) microscopy images in 2D that helps find the true contour of individual objects by applying U-Net-based pre-segmentation. The manual labour of hand-annotating cells can be significantly accelerated with our tool, enabling users to create datasets that could potentially increase the accuracy of state-of-the-art solutions, deep learning or otherwise, when used as training data.
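The core idea of U-Net-assisted annotation — the user draws a rough region and the network's prediction supplies the precise outline inside it — can be sketched as follows. The probability map and threshold here are hypothetical stand-ins for the plugin's actual U-Net output and interface, not AnnotatorJ's real API:

```python
import numpy as np

def suggest_mask(prob_map, box, threshold=0.5):
    """Refine a rough user annotation with a model prediction.

    prob_map:  2-D array of per-pixel foreground probabilities from a
               pre-trained segmentation network (hypothetical here).
    box:       (top, left, bottom, right) drawn roughly around one cell.
    Returns a boolean mask (same shape as prob_map) that is True only
    inside the box where the model is confident."""
    t, l, b, r = box
    mask = np.zeros(prob_map.shape, dtype=bool)
    mask[t:b, l:r] = prob_map[t:b, l:r] >= threshold
    return mask
```

The user's rough box restricts the suggestion to one object, so neighbouring cells the network also detects do not leak into the annotation; the user then accepts or hand-corrects the proposed contour.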


Author(s):  
Gabriel Landini ◽  
Giovanni Martinelli ◽  
Filippo Piccinini

Abstract
Motivation: Microscopy images of stained cells and tissues play a central role in most biomedical experiments and in routine histopathology. Storing colour histological images digitally opens the possibility of processing colour distribution and intensity numerically to extract quantitative data. Among these numerical procedures is colour deconvolution, which enables decomposing an RGB image into channels representing the optical absorbance and transmittance of the dyes when their RGB representations are known. Consequently, a range of new applications becomes possible for morphological and histochemical segmentation, automated marker localization and image enhancement.
Availability and implementation: Colour deconvolution is presented here in two open-source forms: a MATLAB program/function and an ImageJ plugin written in Java. Both versions run on Windows, Macintosh and UNIX-based systems under the respective platforms. Source code and further documentation are available at: https://blog.bham.ac.uk/intellimic/g-landini-software/colour-deconvolution-2/.
Supplementary information: Supplementary data are available at Bioinformatics online.
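The deconvolution itself is linear unmixing in optical-density space, following Ruifrok and Johnston: convert RGB intensities to absorbances via the Beer–Lambert law, then invert the matrix of known stain vectors. A minimal NumPy sketch, using commonly cited reference H&E stain vectors rather than values from this paper's code:

```python
import numpy as np

def colour_deconvolve(rgb, stains):
    """Unmix an RGB image into per-stain absorbance (concentration) channels.

    rgb:    float array (..., 3), intensities in (0, 255].
    stains: (3, 3) matrix whose rows are unit-length optical-density
            vectors of the dyes."""
    od = -np.log10(np.clip(rgb, 1.0, 255.0) / 255.0)  # Beer-Lambert absorbance
    return od @ np.linalg.inv(stains)                 # amount of each stain

# Reference H&E optical-density vectors (hematoxylin, eosin), normalised;
# the third "residual" channel is their cross product.
h = np.array([0.65, 0.70, 0.29]); h /= np.linalg.norm(h)
e = np.array([0.07, 0.99, 0.11]); e /= np.linalg.norm(e)
r = np.cross(h, e); r /= np.linalg.norm(r)
HE = np.vstack([h, e, r])
```

Because unmixing happens in absorbance space, the recovered channels are proportional to dye amount, which is what makes the result usable for quantification rather than just display.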


2006 ◽  
Vol 95 (2) ◽  
pp. 995-1007 ◽  
Author(s):  
Rory Sayres ◽  
Kalanit Grill-Spector

Object-selective cortical regions exhibit a decreased response when an object stimulus is repeated [repetition suppression (RS)]. RS is often associated with priming: reduced response times and increased accuracy for repeated stimuli. It is unknown whether RS reflects stimulus-specific repetition, the associated changes in response time, or the combination of the two. To address this question, we performed a rapid event-related functional MRI (fMRI) study in which we measured BOLD signal in object-selective cortex, as well as object recognition performance, while we manipulated stimulus repetition. Our design allowed us to examine separately the roles of response time and repetition in explaining RS. We found that repetition played a robust role in explaining RS: repeated trials produced weaker BOLD responses than nonrepeated trials, even when comparing trials with matched response times. In contrast, response time played a weak role in explaining RS when repetition was controlled for: it explained BOLD responses only for one region of interest (ROI) and one experimental condition. Thus repetition suppression seems to be mostly driven by repetition rather than performance changes. We further examined whether RS reflects processes occurring at the same time as recognition or after recognition by manipulating stimulus presentation duration. In one experiment, durations were longer than required for recognition (2 s), whereas in a second experiment, durations were close to the minimum time required for recognition (85–101 ms). We found significant RS for brief presentations (albeit with a reduced magnitude), which again persisted when controlling for performance. This suggests a substantial amount of RS occurs during recognition.


2012 ◽  
Vol 2012 ◽  
pp. 1-14 ◽  
Author(s):  
Hongyu Hu ◽  
Zhaowei Qu ◽  
Zhihui Li ◽  
Jinhui Hu ◽  
Fulu Wei

A fast pedestrian recognition algorithm based on multisensor fusion is presented in this paper. First, potential pedestrian locations are estimated by laser radar scanning in world coordinates, and their corresponding candidate regions in the image are then located via camera calibration and a perspective mapping model. To avoid the training and recognition time cost caused by high-dimensional feature vectors, a region-of-interest-based integral histogram of oriented gradients (ROI-IHOG) feature extraction method is proposed. A support vector machine (SVM) classifier is trained on a novel pedestrian sample dataset adapted to the urban road environment for online recognition. Finally, we test the validity of the proposed approach on several video sequences from realistic urban road scenarios. The results show reliable and timely performance of our multisensor fusion method.
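The integral-histogram idea behind ROI-IHOG is that after one pass over the image, the orientation histogram of any axis-aligned ROI costs only four array lookups per bin, so many laser-proposed candidate regions can be described cheaply. A rough NumPy sketch of that precomputation (the paper's exact binning, cell layout and normalisation are not specified here):

```python
import numpy as np

def integral_hog(gray, n_bins=9):
    """Build per-orientation-bin integral images of gradient magnitude."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)          # unsigned orientation
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    # scatter each pixel's magnitude into its orientation plane
    stacks = np.zeros(gray.shape + (n_bins,))
    stacks[np.arange(gray.shape[0])[:, None],
           np.arange(gray.shape[1])[None, :], bins] = mag
    ii = stacks.cumsum(0).cumsum(1)                  # 2-D cumulative sums
    return np.pad(ii, ((1, 0), (1, 0), (0, 0)))      # zero row/col for lookups

def roi_hist(ii, top, left, bottom, right):
    """O(1) orientation histogram of any axis-aligned ROI: four corners."""
    return (ii[bottom, right] - ii[top, right]
            - ii[bottom, left] + ii[top, left])
```

Each candidate region's histogram would then be normalised and fed to the SVM; the precomputation makes per-ROI cost independent of ROI size.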


2017 ◽  
Vol 282 ◽  
pp. 20-33 ◽  
Author(s):  
Maryana Alegro ◽  
Panagiotis Theofilas ◽  
Austin Nguy ◽  
Patricia A. Castruita ◽  
William Seeley ◽  
...  

1986 ◽  
Vol 20 (3) ◽  
pp. 202-205 ◽  
Author(s):  
J. P. Allchin ◽  
G. O. Evans

The method described is sufficiently sensitive to detect major changes in the protein excretion patterns of rat urine, and the short time required for technical procedures makes the method suitable for screening large numbers of rat urine samples. The patterns observed for normal adult male rats are similar to previously published data, and the method may also be used to identify pseudoproteinuria.

