tile size
Recently Published Documents

Total documents: 62 (last five years: 13)
H-index: 10 (last five years: 0)

2021 ◽  
Vol 13 (19) ◽  
pp. 3996
Author(s):  
Rob Holman ◽  
Erwin W. J. Bergsma

This manuscript describes and tests a set of improvements to the cBathy algorithm, published in 2013 by Holman et al. [hereafter HPH13], for the estimation of bathymetry based on optical observations of propagating nearshore waves. Three versions are considered: the original HPH13 algorithm (now labeled V1.0), an intermediate version that has seen moderate use but limited testing (V1.2), and a substantially updated version (V2.0). Important improvements from V1.0 include a new deep-water weighting scheme, removal of a spurious variable in the nonlinear fitting, an adaptive scheme for determining the optimum tile size based on the approximate wavelength, and a much-improved search seed algorithm. While V1.2 was tested and its results are listed, the primary interest is in comparing V1.0, the original code, with the new version, V2.0. The three versions were tested against an updated dataset of 39 ground-truth surveys collected from 2015 to 2019 at the Field Research Facility in Duck, NC. In all, 624 cBathy collections were processed spanning a four-day period up to and including each survey date. Both the unfiltered phase 2 and the Kalman-filtered phase 3 bathymetry estimates were tested. For the Kalman-filtered estimates, only the estimate from mid-afternoon on the survey date was used for statistical measures. Of those 39 Kalman products, the bias, rms error, and 95% exceedance for V1.0 were 0.15, 0.47, and 0.96 m, respectively, while for V2.0, they were 0.08, 0.38, and 0.78 m. The mean observed coverage, the percentage of successful estimate locations in the map, was 99.1% for V1.0 and 99.9% for V2.0. Phase 2 (unfiltered) bathymetry estimates were also compared to ground truth for the 624 available data runs. The mean bias, rms error, and 95% exceedance statistics for V1.0 were 0.19, 0.64, and 1.27 m, respectively, and for V2.0 were 0.16, 0.56, and 1.19 m, an improvement in all cases.
The coverage also increased from 78.8% for V1.0 to 84.7% for V2.0, about a 27% reduction in the number of failed estimates. The largest errors were associated with both large waves and poor imaging conditions such as fog, rain, or darkness that greatly reduced the percentage of successful coverage. As a practical mitigation of large errors, data runs for which the significant wave height was greater than 1.2 m or the coverage was less than 50% were omitted from the analysis, reducing the number of runs from 624 to 563. For this reduced dataset, the bias, rms error, and 95% exceedance errors for V1.0 were 0.15, 0.58, and 1.16 m and for V2.0 were 0.09, 0.41, and 0.85 m, respectively. Successful coverage for V1.0 was 82.8%, while for V2.0, it was 90.0%, a roughly 42% reduction in the number of failed estimates. Performance for V2.0 individual (non-filtered) estimates is slightly better than the Kalman results in the original HPH13 paper, and it is recommended that version 2.0 become the new standard algorithm.
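The physics underlying this kind of estimation, and the "adaptive tile size based on the approximate wavelength" mentioned above, can be sketched with the linear dispersion relation for surface gravity waves, (2πf)² = g·k·tanh(k·h). The sketch below is illustrative only: the function names, the fixed-point solver, and the tiles-per-wavelength factor are assumptions, not the actual cBathy V2.0 implementation.

```python
import math

G = 9.81  # gravitational acceleration (m/s^2)

def dispersion_depth(f, k):
    """Invert the linear dispersion relation (2*pi*f)^2 = g*k*tanh(k*h)
    for the depth h, given wave frequency f (Hz) and wavenumber k (rad/m)."""
    ratio = (2 * math.pi * f) ** 2 / (G * k)  # equals tanh(k*h)
    if not 0 < ratio < 1:
        raise ValueError("no finite depth satisfies the dispersion relation")
    return math.atanh(ratio) / k

def adaptive_tile_size(f, depth_guess, tiles_per_wavelength=2.0):
    """Choose an analysis tile size proportional to the local wavelength.
    The proportionality factor is illustrative; the V2.0 rule may differ."""
    sigma2 = (2 * math.pi * f) ** 2
    k = sigma2 / G  # deep-water wavenumber as a starting guess
    for _ in range(100):  # damped fixed-point iteration on the dispersion relation
        k = 0.5 * (k + sigma2 / (G * math.tanh(k * depth_guess)))
    wavelength = 2 * math.pi / k
    return tiles_per_wavelength * wavelength
```

For a 10-second swell over a guessed depth of 5 m this yields a wavelength of roughly 68 m, so the analysis tile scales with the wave field rather than being fixed in pixels.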


2021 ◽  
Vol 7 (1) ◽  
pp. 63-66
Author(s):  
M. Bengs ◽  
S. Pant ◽  
M. Bockmayr ◽  
U. Schüller ◽  
A. Schlaefer

Abstract Medulloblastoma (MB) is a primary central nervous system tumor and the most common malignant brain cancer among children. Neuropathologists assess the severity of the tumor by microscopic inspection of histopathological tissue slides, a time-consuming task that is often subject to observer variability. Recently, pre-trained convolutional neural networks (CNNs) have shown promising results for MB subtype classification. Typically, high-resolution images are divided into smaller tiles for classification, but the size of the tiles has not been systematically evaluated. We study the impact of tile size and input strategy and classify the two major histopathological subtypes, Classic and Desmoplastic/Nodular. To this end, we use recently proposed EfficientNets and evaluate tiles of increasing size combined with various downsampling scales. Our results demonstrate that using large input tiles followed by intermediate downsampling and patch cropping significantly improves MB classification performance. Our top-performing method achieves an AUC-ROC of 90.90%, compared to 84.53% for the previous approach with smaller input tiles.
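The tile-then-downsample pipeline evaluated above can be sketched as follows. The tile size, stride, and downsampling factor here are illustrative placeholders, not the paper's actual configuration.

```python
import numpy as np

def extract_tiles(image, tile_size, stride=None):
    """Cut a large image (H, W, C) into square tiles of tile_size pixels."""
    stride = stride or tile_size
    h, w = image.shape[:2]
    tiles = []
    for y in range(0, h - tile_size + 1, stride):
        for x in range(0, w - tile_size + 1, stride):
            tiles.append(image[y:y + tile_size, x:x + tile_size])
    return tiles

def downsample(tile, factor):
    """Block-average downsampling by an integer factor, reducing a large
    tile to a resolution the classifier can accept."""
    h, w, c = tile.shape
    return tile.reshape(h // factor, factor, w // factor, factor, c).mean(axis=(1, 3))
```

A large tile captures more tissue context; the intermediate downsampling then trades some of that resolution for a network-sized input, which is the tradeoff the study varies systematically.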


Author(s):  
Kumudha Narasimhan ◽  
Aravind Acharya ◽  
Abhinav Baid ◽  
Uday Bondhugula

Author(s):  
Michael Majurski ◽  
Peter Bajcsy

We address the problem of performing exact (tiling-error-free) out-of-core semantic segmentation inference on arbitrarily large images using fully convolutional neural networks (FCNs). FCN models have the property that once a model is trained, it can be applied to arbitrarily sized images, although it is still constrained by the available GPU memory. This work is motivated by overcoming the GPU memory size constraint without numerically impacting the final result. Our approach is to select a tile size that will fit into GPU memory with a halo border of half the network receptive field, then stride across the image by that tile size without the halo. The input tile halos will overlap, while the output tiles join exactly at the seams. Such an approach enables inference on whole slide microscopy images, such as those generated by a slide scanner. The novelty of this work is in documenting the formulas for determining tile size and stride and then validating them on U-Net and FC-DenseNet architectures. In addition, we quantify the errors due to tiling configurations that do not satisfy the constraints, and we explore the use of architecture effective receptive fields to estimate the tiling parameters.
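The tiling rule described above (output tiles stride by the tile size, input tiles overlap by a halo of half the receptive field) can be sketched along one axis as follows. This is a minimal sketch under the stated constraints; the receptive-field value is an assumed input, not derived from any specific network.

```python
def tiled_inference_plan(image_len, tile_len, receptive_field):
    """Yield (in_start, in_end, out_start, out_end) spans along one axis such
    that output tiles join exactly at the seams.  Each input tile is its
    output tile padded by a halo of half the receptive field on each side."""
    halo = receptive_field // 2
    spans = []
    for out_start in range(0, image_len, tile_len):
        out_end = min(out_start + tile_len, image_len)
        in_start = max(0, out_start - halo)      # halos overlap between tiles
        in_end = min(image_len, out_end + halo)  # clipped at the image border
        spans.append((in_start, in_end, out_start, out_end))
    return spans
```

Because every output pixel sees its full receptive field inside its input tile, cropping the halo off each prediction and abutting the results reproduces the whole-image inference exactly.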


Author(s):  
Peter J. Schüffler ◽  
Evangelos Stamelos ◽  
Ishtiaque Ahmed ◽  
D. Vijay K. Yarlagadda ◽  
Matthew G. Hanna ◽  
...  

Context.— Wide adoption of digital pathology requires efficient visualization and navigation in Web-based digital slide viewers, yet viewer efficiency is poorly defined. Objective.— To define and quantify relevant performance metrics for efficient visualization of cases and slides in digital slide viewers. Design.— With a universal slide viewer used in routine clinical diagnostics, we evaluate the impact of slide caching, compression type, tile size, and block size of whole slide images generated from Philips, Leica, and 3DHistech scanners on streaming performance at the case, slide, and field of view levels. Results.— Two hundred thirty-nine pathologists routinely reviewed 60 080 whole slide images over 3 months. The median time to open a case's slides from the laboratory information system was less than 4 seconds, the time to change to a slide within the case was less than 1 second, and the time to render the adjacent field of view when navigating the slide was less than one-quarter of a second. A whole slide image block size and a viewer tile size of 1024 pixels showed the best performance for displaying a field of view and were preferable to smaller tiles due to fewer mosaic effects. For Philips, the fastest median slide streaming pace was 238 ms per field of view, and for 3DHistech, 125 ms. For Leica, the fastest pace of 108 ms per field of view was established with block serving without decompression. Conclusions.— This is the first study to systematically assess user-centric slide visualization performance metrics for digital viewers, including time to open a case, time to change a slide, and time to change a field of view. These metrics help to improve the viewer's configuration, leading to an efficient visualization baseline that is widely accepted among pathologists using routine digital pathology.
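One reason larger viewer tiles perform well is that fewer tile requests are needed per field of view. A worst-case request count can be sketched as below; the viewport dimensions in the usage note are illustrative assumptions, not values from the study.

```python
import math

def tiles_per_viewport(view_w, view_h, tile_size):
    """Worst-case number of tile requests needed to render one field of view,
    allowing the viewport to straddle tile boundaries on every edge."""
    across = math.ceil((view_w + tile_size - 1) / tile_size)
    down = math.ceil((view_h + tile_size - 1) / tile_size)
    return across * down
```

For a hypothetical 1920x1080 viewport, 1024-pixel tiles need at most 9 requests where 256-pixel tiles need up to 54, while the larger tiles also produce fewer visible mosaic seams.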


2020 ◽  
Vol 42 (3) ◽  
pp. 1-27
Author(s):  
Abhinav Jangda ◽  
Uday Bondhugula
