Automatic HTML Code Generation Using Image Processing

Author(s):  
Shreya Khandekar ◽  
Shraddha Korade ◽  
Rutuja Kulkarni ◽  
Tejashree Pathak ◽  
Satish Kamble
2021 ◽  
Vol 33 (5) ◽  
pp. 181-204
Author(s):  
Vladimir Frolov ◽  
Vadim Sanzharov ◽  
Vladimir Galaktionov ◽  
Alexander Shcherbakov

In this paper we propose a high-level approach to developing GPU applications based on the Vulkan API. The purpose of the work is to reduce the complexity of developing and debugging applications that implement complex algorithms on the GPU using Vulkan. The proposed approach uses code generation: a C++ program is translated into an optimized Vulkan implementation, including automatic shader generation, resource binding, and the use of synchronization mechanisms (Vulkan barriers). The proposed solution is not a general-purpose programming technology but specializes in specific tasks. At the same time, it is extensible, which allows the solution to be adapted to new problems. For a single input C++ program, we can generate several implementations for different cases (via translator options) or for different hardware. For example, a call to virtual functions can be implemented either through a switch construct in a kernel, through sorting threads and indirect dispatching via different kernels, or through so-called callable shaders in Vulkan. Instead of creating a universal programming technology for building various software systems, we offer an extensible technology that can be customized for a specific class of applications. Unlike, for example, Halide, we do not use a domain-specific language; the necessary knowledge is extracted from ordinary C++ code. Therefore, we do not add any new language constructs or directives, and the input source code is assumed to be normal C++ (albeit with some restrictions) that can be compiled by any C++ compiler. We use pattern matching to find specific patterns in C++ code and convert them to efficient GPU code using Vulkan. Patterns are expressed through classes, member functions, and the relationships between them. Thus, the proposed technology makes it possible to ensure a cross-platform solution by generating different implementations of the same algorithm for different GPUs, while still providing access to specific hardware functionality required in computer graphics applications. Patterns are divided into architectural and algorithmic. An architectural pattern defines the domain and the behavior of the translator as a whole (for example, image processing, ray tracing, neural networks, or computational fluid dynamics). Algorithmic patterns express knowledge of data and control flow and define a narrower class of algorithms that can be efficiently implemented in hardware; they can occur within architectural patterns. Examples include parallel reduction, compaction (parallel append), sorting, prefix sum, histogram calculation, and map-reduce. The proposed generator works on the principle of code morphing: given a certain class in the program and transformation rules, another class with the desired properties (for example, a GPU implementation of the algorithm) can be generated automatically. The generated class inherits from the input class and thus has access to all data and functions of the input class. Overriding virtual functions in the generated class helps the user connect the generated code to other Vulkan code written by hand. Shaders can be generated in two variants: OpenCL shaders for Google's "clspv" compiler and GLSL shaders for an arbitrary GLSL compiler. The clspv variant is better suited to code that makes intensive use of pointers, while the GLSL generator is better when specific hardware features are used (such as hardware ray tracing acceleration). We have demonstrated our technology on several examples related to image processing and ray tracing, on which we obtain a 30-100x speedup over a multithreaded CPU implementation.
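As a rough illustration of the code-morphing approach described in this abstract, the sketch below shows what an input C++ class and a hand-written stand-in for a generated derived class might look like. The class and member names (ToneMapping, ToneMappingGPU, DispatchBrightnessKernel) are invented for this example and are not taken from the paper's translator.

```cpp
#include <vector>

// Input class: ordinary C++ the translator would analyze. Member functions
// written as loops over pixels are candidates for translation into Vulkan
// compute kernels; class data maps to GPU buffers/uniforms.
class ToneMapping
{
public:
  virtual void Brightness(std::vector<float>& image, int w, int h, float gain)
  {
    for (int y = 0; y < h; y++)            // loop nest -> 2D kernel dispatch
      for (int x = 0; x < w; x++)
        image[y * w + x] *= gain;
  }
  virtual ~ToneMapping() = default;
protected:
  float m_exposure = 1.0f;                 // class data -> GPU buffer/uniform
};

// Hypothetical stand-in for a generated class: it inherits from the input
// class (so it sees all of its data and functions) and overrides the virtual
// function with a version that would dispatch the generated Vulkan kernel.
class ToneMappingGPU : public ToneMapping
{
public:
  void Brightness(std::vector<float>& image, int w, int h, float gain) override
  {
    // In real generated code this would bind buffers, set push constants,
    // insert barriers and dispatch the generated shader.
    UploadIfNeeded(image);
    DispatchBrightnessKernel(w, h, gain);  // hypothetical generated call
    ReadBackIfNeeded(image);
  }
private:
  void UploadIfNeeded(std::vector<float>&) { /* omitted in this sketch */ }
  void DispatchBrightnessKernel(int, int, float) { /* omitted in this sketch */ }
  void ReadBackIfNeeded(std::vector<float>&) { /* omitted in this sketch */ }
};
```

An application could then construct a ToneMappingGPU instead of a ToneMapping and keep the rest of its code unchanged, which matches the abstract's point that overriding virtual functions lets generated code be connected to hand-written Vulkan code.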


2020 ◽  
Author(s):  
Robert Haase ◽  
Akanksha Jain ◽  
Stéphane Rigaud ◽  
Daniela Vorkel ◽  
Pradeep Rajasekhar ◽  
...  

Modern life science relies heavily on fluorescent microscopy and subsequent quantitative bio-image analysis. The current rise of graphics processing units (GPUs) in the context of image processing enables batch processing of large amounts of image data at unprecedented speed. In order to facilitate adoption of this technology in daily practice, we present an expert system based on the GPU-accelerated image processing library CLIJ: the CLIJ-assistant keeps track of which operations formed an image and suggests subsequent operations. It enables new ways of interacting with image data and image processing operations because its underlying GPU-accelerated image data flow graphs (IDFGs) allow changes to parameters of early processing steps and instantaneous visualization of their final results. Operations, their parameters and their connections in the IDFG are stored at any point in time, enabling the CLIJ-assistant to offer an undo function with virtually unlimited rewinding of parameter changes. Furthermore, to improve reproducibility of image data analysis workflows and interoperability with established image analysis platforms, the CLIJ-assistant can generate code from IDFGs in programming languages such as ImageJ Macro, Java, Jython, JavaScript, Groovy, Python and C++ for later use in ImageJ, Fiji, Icy, Matlab, QuPath, Jupyter Notebooks and Napari. We demonstrate the CLIJ-assistant for processing image data in multiple scenarios to highlight its general applicability. The CLIJ-assistant is open source and available online: https://clij.github.io/assistant/
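As a rough, hypothetical illustration of the image data flow graph (IDFG) idea described in this abstract, the following C++ sketch shows how operations, their parameters and their connections might be stored so that an early parameter can be changed and the final result re-evaluated. None of these types, names or operations come from the CLIJ-assistant itself; they are invented for illustration only.

```cpp
#include <functional>
#include <memory>
#include <vector>

// Toy image and data-flow node; a node stores its operation, its editable
// parameters and links to its upstream nodes, so the whole processing
// history is retained, not just the final pixels.
struct Image { std::vector<float> pixels; };

struct Node
{
  std::vector<float> params;                           // editable parameters
  std::vector<std::shared_ptr<Node>> inputs;           // upstream nodes
  std::function<Image(const std::vector<Image>&, const std::vector<float>&)> op;

  Image evaluate() const                               // re-run on demand
  {
    std::vector<Image> ins;
    for (const auto& n : inputs) ins.push_back(n->evaluate());
    return op(ins, params);
  }
};

int main()
{
  // source -> add constant; editing the parameter and re-evaluating the
  // last node mirrors the "change an early parameter, instantly see the
  // final result" behavior described in the abstract.
  auto source = std::make_shared<Node>();
  source->op = [](const std::vector<Image>&, const std::vector<float>&) {
    return Image{{1.0f, 2.0f, 3.0f}};
  };

  auto addConst = std::make_shared<Node>();
  addConst->params = {10.0f};
  addConst->inputs = {source};
  addConst->op = [](const std::vector<Image>& in, const std::vector<float>& p) {
    Image out = in[0];
    for (float& v : out.pixels) v += p[0];
    return out;
  };

  Image a = addConst->evaluate();          // {11, 12, 13}
  addConst->params[0] = 100.0f;            // edit an earlier parameter...
  Image b = addConst->evaluate();          // ...and re-evaluate: {101, 102, 103}
  return 0;
}
```

Keeping a history of (node, previous parameters) pairs on top of such a graph would give the undo behavior the abstract describes.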


1999 ◽  
Vol 173 ◽  
pp. 243-248
Author(s):  
D. Kubáček ◽  
A. Galád ◽  
A. Pravda

The unusual short-period comet 29P/Schwassmann-Wachmann 1 has inspired many observers to explain its unpredictable outbursts. In this paper, large-scale structures and features in the inner part of the coma during time periods around outbursts are studied. CCD images were taken at Whipple Observatory, Mt. Hopkins, in 1989 and at the Astronomical Observatory, Modra, from 1995 to 1998. Photographic plates of the comet were taken at Harvard College Observatory, Oak Ridge, from 1974 to 1982. The latter were digitized first so that the same image processing techniques could be applied to optimize the visibility of features in the coma during outbursts. Outbursts and coma structures show various shapes.


2000 ◽  
Vol 179 ◽  
pp. 229-232
Author(s):  
Anita Joshi ◽  
Wahab Uddin

In this paper we present complete two-dimensional measurements of the observed brightness of the 9 November 1990 Hα flare, using a PDS microdensitometer scanner and the image processing software MIDAS. The resulting isophotal contour maps were used to describe the morphological and temporal behaviour of the flare and of its kernels. The correlation of the Hα flare with SXR and microwave (MW) emission was also studied.


Author(s):  
M.A. O'Keefe ◽  
W.O. Saxton

A recent paper by Kirkland on nonlinear electron image processing, referring to a relatively new textbook, highlights the persistence in the literature of calculations based on incomplete and/or incorrect models of electron imaging, notwithstanding the various papers which have recently pointed out the correct forms of the appropriate equations. Since at least part of the problem can be traced to underlying assumptions about the illumination coherence conditions, we attempt to clarify both the assumptions and the corresponding equations in this paper, illustrating the effects of an incorrect theory by means of images calculated in different ways. The first point to be made clear concerning the illumination coherence conditions is that (except for very thin specimens) it is insufficient simply to know the source profiles present, i.e. the ranges of different directions and energies (focus levels) present in the source; we must also know in general whether the various illumination components are coherent or incoherent with respect to one another.


Author(s):  
R.W. Horne

The technique of surrounding virus particles with a neutralised electron-dense stain was described at the Fourth International Congress on Electron Microscopy, Berlin 1958 (see Horne & Brenner, 1960, p. 625). For many years the negative staining technique, in one form or another, has been applied to a wide range of biological materials. However, the full potential of the method has only recently been explored following the development and application of optical diffraction and computer image analysis techniques to electron micrographs (cf. DeRosier & Klug, 1968; Markham, 1968; Crowther et al., 1970; Horne & Markham, 1973; Klug & Berger, 1974; Crowther & Klug, 1975). These image processing procedures have allowed a more precise and quantitative approach to the interpretation, measurement and reconstruction of repeating features in certain biological systems.


Author(s):  
R. C. Gonzalez

Interest in digital image processing techniques dates back to the early 1920s, when digitized pictures of world news events were first transmitted by submarine cable between New York and London. Applications of digital image processing concepts, however, did not become widespread until the mid-1960s, when third-generation digital computers began to offer the speed and storage capabilities required for practical implementation of image processing algorithms. Since then, the area has experienced vigorous growth, having been a subject of interdisciplinary research in fields ranging from engineering and computer science to biology, chemistry, and medicine.


Author(s):  
Yasushi Kokubo ◽  
Hirotami Koike ◽  
Teruo Someya

One of the advantages of scanning electron microscopy is the capability for processing the image contrast, i.e., the image processing technique. Crewe et al. were the first to apply this technique to a field-emission scanning microscope and show images of individual atoms. They obtained a contrast which depended exclusively on the atomic numbers of the specimen elements (Z contrast) by displaying images formed from the intensity ratio of elastically to inelastically scattered electrons. The elastically scattered electrons were collected by a solid-state detector and the inelastically scattered electrons by an energy analyzer. We note, however, that the same contrast may possibly be obtained using only an annular-type solid-state detector consisting of multiple concentric detector elements.
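A minimal sketch of the ratio-contrast idea described above, assuming two registered intensity maps (elastic and inelastic signals) are already available as float arrays; the function name and the epsilon guard are illustrative, not taken from the paper.

```cpp
#include <vector>
#include <cassert>

// Form a per-pixel ratio image (elastic / inelastic) whose contrast tracks
// the atomic number of the specimen elements, as described in the abstract.
std::vector<float> ratioContrast(const std::vector<float>& elastic,
                                 const std::vector<float>& inelastic,
                                 float eps = 1e-6f)
{
  assert(elastic.size() == inelastic.size());
  std::vector<float> z(elastic.size());
  for (size_t i = 0; i < z.size(); ++i)
    z[i] = elastic[i] / (inelastic[i] + eps);   // eps avoids division by zero
  return z;
}
```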


Author(s):  
L. Montoto ◽  
M. Montoto ◽  
A. Bel-Lan

INTRODUCTION.- The physical properties of rock masses are greatly influenced by their internal discontinuities, such as pores and fissures, so these need to be measured as a basis for interpretation. To avoid the basic difficulties of measurement under optical microscopy and analogue image systems, the authors use SEM and multiband digital image processing. In SEM, analog signal processing has been used for further image enhancement (1), but automatic information extraction can be achieved by simple digital processing of SEM images (2). The use of multiband images would overcome difficulties such as artifacts introduced by the relative positions of sample and detector, or those typically encountered in optical microscopy.
DIGITAL IMAGE PROCESSING.- The studied rock specimens were in the form of flat, deformation-free surfaces observed under a Philips SEM model 500. The SEM detector output signal was recorded in picture form on b&w negatives and digitized using a Perkin Elmer 1010 MP flat microdensitometer.

