A Novel and Fast Numerical Technique for Large-Scale Electromagnetic Imaging Systems

2012 ◽  
Vol 48 (11) ◽  
pp. 2781-2784 ◽  
Author(s):  
He Huang ◽  
Yiming Deng


Author(s):  
Nhan Phan-Thien ◽  
Sangtae Kim

Analytical solutions to a set of boundary integral equations are rare, even for simple geometries and boundary conditions. To make any reasonable progress, a numerical technique must be used. There are basically four issues that must be addressed in any numerical scheme dealing with integral equations. The first and most basic is how the numerical integration is to be carried out, together with an effective way of dealing with singular kernels of the type encountered in elastostatics. Numerical integration is usually termed numerical quadrature, meaning mathematical formulae for numerical integration. The second issue is the boundary discretization, whereby integration over the whole boundary is replaced by a sum of integrations over individual patches on the boundary; each patch is a finite element, or in our case, a boundary element on the surface. Obviously a high-order integration scheme could be devised for the whole domain, thus eliminating the need for boundary discretization, but such a scheme would be problem dependent and therefore not very useful to us. The third issue has to do with the fact that we are constrained, by the very nature of the numerical approximation process, to search for solutions within a certain subspace of L², say the space of piecewise constant functions in which the unknowns are taken to be constant over each boundary element. It is the order of this subspace, together with the order and nature of the interpolation of the geometry, that gives rise to the names of the various boundary element schemes. Finally, one is faced with the task of solving a set of linear algebraic equations whose system matrix is usually dense (fully populated) and potentially ill-conditioned. A direct solver such as Gaussian elimination may be very efficient for small- to medium-sized problems but becomes impractical for large-scale simulations, where the only feasible strategy is an iterative method. In fact, iterative solution strategies lead naturally to parallel algorithms under a suitable parallel computing environment. This chapter reviews the various issues involved in the practical implementation of the CDL-BIEM on a serial computer and in a distributed computing environment.
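
To make the last point concrete, the sketch below (an illustration added here, not part of the CDL-BIEM itself) contrasts a direct dense solve with a matrix-free iterative solve of the kind advocated above; the toy system, its size, and the use of SciPy's GMRES are assumptions made only for demonstration.

```python
# Minimal sketch: direct versus matrix-free iterative solution of a dense,
# fully populated linear system such as those arising from boundary elements.
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

n = 2000                                    # number of boundary unknowns (illustrative)
rng = np.random.default_rng(0)

# Stand-in for a dense, diagonally dominant BEM system matrix.
A = np.eye(n) + rng.standard_normal((n, n)) / n
b = rng.standard_normal(n)

# Direct solve: O(n^3) work and O(n^2) storage for the factorization.
x_direct = np.linalg.solve(A, b)

# Iterative solve: only matrix-vector products are required, which is what
# makes the strategy parallelizable and suitable for large-scale problems.
op = LinearOperator((n, n), matvec=lambda v: A @ v)
x_iter, info = gmres(op, b)

print("gmres exit flag:", info, "difference:", np.linalg.norm(x_iter - x_direct))
```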


2014 ◽  
Vol 11 (S308) ◽  
pp. 115-118
Author(s):  
Cora Uhlemann ◽  
Michael Kopp

Abstract. We investigate large-scale structure formation of collisionless dark matter in the phase-space description based on the Vlasov-Poisson equation. We present the Schrödinger method, originally proposed by Widrow & Kaiser (1993) as a numerical technique based on the Schrödinger-Poisson equation, as an analytical tool that is superior to the common pressureless fluid (dust) model. Whereas the dust model fails and develops singularities at shell crossing, the Schrödinger method encompasses multi-streaming and even virialization.
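
As a hedged illustration of the numerical side of the Schrödinger method (not the analytical treatment of this paper), the sketch below evolves a 1-D Schrödinger-Poisson system with a split-step Fourier integrator; the periodic box, the units with ħ/m = ν, and the absence of cosmological expansion are simplifying assumptions.

```python
# Minimal 1-D Schrödinger-Poisson sketch: periodic box, hbar/m = nu,
# static Poisson coupling, no expansion. Parameters are illustrative.
import numpy as np

N, L, nu, dt, steps = 512, 1.0, 1e-3, 1e-4, 1000
x = np.linspace(0.0, L, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

# Uniform density with a small plane-wave perturbation.
psi = np.sqrt(1.0 + 0.1 * np.cos(2 * np.pi * x / L)).astype(complex)

def potential(psi):
    """Solve the Poisson equation for the density contrast in Fourier space."""
    delta = np.abs(psi) ** 2 - 1.0
    Vk = np.fft.fft(delta)
    Vk[1:] /= -(k[1:] ** 2)        # V_k = -delta_k / k^2 (source normalized to 1)
    Vk[0] = 0.0                    # remove the mean (background) mode
    return np.real(np.fft.ifft(Vk))

for _ in range(steps):             # kick-drift-kick split-step integration
    psi *= np.exp(-0.5j * dt * potential(psi) / nu)
    psi = np.fft.ifft(np.exp(-0.5j * dt * nu * k ** 2) * np.fft.fft(psi))
    psi *= np.exp(-0.5j * dt * potential(psi) / nu)

print("density contrast range:", np.abs(psi).max() ** 2 - np.abs(psi).min() ** 2)
```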


2019 ◽  
Vol 11 (21) ◽  
pp. 2472
Author(s):  
He ◽  
Wang ◽  
Chang ◽  
Zhang ◽  
Feng

Stripes are common in remote sensing imaging systems equipped with multichannel time delay integration charge-coupled devices (TDI CCDs) and have different scale characteristics depending on their causes. Large-scale stripes appearing between channels are difficult to remove with most current methods. The framework of column-by-column nonuniformity correction (CCNUC) can eliminate large-scale stripes, but its main drawback is an unavoidable cumulative error, which causes an overall color cast. To eliminate large-scale stripes while suppressing this cumulative error, we propose a destriping method based on unidirectional multiscale decomposition (DUMD). The striped image is decomposed by constructing a unidirectional pyramid and computing difference maps layer by layer. The highest layer of the pyramid is processed by CCNUC to eliminate large-scale stripes, and multiple cumulative-error suppression measures are applied to reduce the overall color cast. The difference maps of the pyramid are processed by a designed filter to eliminate small-scale stripes. Experiments showed that DUMD has good destriping performance and is robust with respect to different terrains.
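
The sketch below illustrates only the column-by-column correction idea (CCNUC) referred to above, not the authors' full DUMD pipeline; the synthetic striped image and the moment-matching rule are assumptions made for the example, and the leftover offset it produces is exactly the kind of cumulative error and color cast that DUMD is designed to suppress.

```python
# Minimal column-by-column gain/offset correction on a synthetic striped image.
import numpy as np

rng = np.random.default_rng(1)
clean = np.tile(np.linspace(50, 200, 256), (256, 1)).T   # smooth vertical gradient scene
gains = 1.0 + 0.05 * rng.standard_normal(256)            # per-column gain error
offsets = 5.0 * rng.standard_normal(256)                 # per-column offset error
striped = clean * gains + offsets

corrected = striped.astype(float).copy()
for j in range(1, corrected.shape[1]):
    ref, cur = corrected[:, j - 1], corrected[:, j]
    g = ref.std() / cur.std()            # match second moment to the previous column
    o = ref.mean() - g * cur.mean()      # match first moment
    corrected[:, j] = g * cur + o

# Nonzero residual: every column inherits the reference column's error, i.e. the
# overall color cast; with noise these errors also accumulate across columns.
print("mean abs error vs clean scene:", np.abs(corrected - clean).mean())
```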


Author(s):  
Thomas Hildebrandt ◽  
Wolfgang Ganzert ◽  
Leonhard Fottner

An extensive numerical study was carried out to accompany an experimental research program. The present work focuses on the influence of the shape and inclination of film cooling holes on the aerodynamics of the turbine cooling flow. Four different cooling hole geometries located on the suction side of a large-scale turbine cascade were modelled and numerically simulated over the entire range of practically applicable blowing ratios. The thermodynamic conditions were chosen to simulate comparable engine conditions. Owing to computer limitations, former simulations had to cope with one of two restrictions. Meshing the entire domain led to an insufficient grid resolution in the vicinity of the ejection area, omitting valuable detailed flow information. In contrast, the so-called local approach (Vogel, 1996) overcame this problem by isolating an area close to the ejection zone, hence allowing a proper numerical resolution. A major drawback of this method is the required assumption of a limiting stream surface, which often led to an inaccurate pressure distribution on the blade surface. Therefore, a new numerical technique for applying a 3D Navier-Stokes code to cooling flow problems, the global approach, was used, overcoming the restrictions of the above-mentioned approaches. For these numerical investigations the commercially available CFD package FINE™/Turbo by NUMECA was used. The package incorporates a modern flow solver and the capability to perform multi-species computations, which was utilised here: a second species (tracer gas) with the properties of air was introduced, ejecting from the cooling plenum, hence strongly facilitating the visual detection of the emerging and dispersing cooling flow. The numerical resolution reached well over one million grid points. The distributions of the tracer concentration and the oil-and-dye visualisations clearly reveal a strong dependency between the cooling hole shape and the efficiency of the film cooling; a considerable increase in the latter could be achieved when non-cylindrical cooling hole geometries were used. The CFD simulations are in very good agreement with the measurements (Ganzert, Hildebrandt and Fottner, 2000), clearly uncovering very detailed flow phenomena that could also be detected in the experimental results. It was found that the shape and inclination angle of the cooling holes are of paramount importance to the distribution pattern of the cooling air and hence to the cooling efficiency.
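
For orientation, the snippet below simply evaluates the usual definition of the blowing ratio mentioned above, M = (ρ_c u_c) / (ρ_∞ u_∞); the numerical values are made up for the example and are not conditions from this study.

```python
# Blowing ratio: coolant to mainstream mass-flux ratio (illustrative values only).
def blowing_ratio(rho_coolant, u_coolant, rho_mainstream, u_mainstream):
    """Return M = (rho_c * u_c) / (rho_inf * u_inf)."""
    return (rho_coolant * u_coolant) / (rho_mainstream * u_mainstream)

print(blowing_ratio(rho_coolant=1.8, u_coolant=30.0,
                    rho_mainstream=1.2, u_mainstream=90.0))   # -> 0.5
```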


VLSI Design ◽  
1998 ◽  
Vol 8 (1-4) ◽  
pp. 393-399
Author(s):  
Elizabeth J. Brauer ◽  
Marek Turowski ◽  
James M. McDonough

A new numerical method for semiconductor device simulation is presented. The additive decomposition method has been successfully applied to the Burgers and Navier-Stokes equations governing turbulent fluid flow by decomposing the equations into large-scale and small-scale parts without averaging. The additive decomposition (AD) technique is well suited to problems with a large range of time and/or space scales, for example, thermal-electrical simulation of power semiconductor devices with large physical size. Furthermore, AD adds a level of parallelization for improved computational efficiency. The new numerical technique has been tested on the 1-D drift-diffusion model of a p-i-n diode under reverse and forward bias. Distributions of φ, n, and p have been calculated using the AD method on a coarse large-scale grid and then, in parallel, on small-scale grid sections. The AD results agree well with those obtained with a traditional single-grid approach, while the new method potentially reduces memory requirements.
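
A minimal sketch of the additive-decomposition idea follows, under the assumption of a simple 1-D test field rather than the paper's drift-diffusion model: the field is split into a coarse-grid large-scale part and small-scale residuals that can be handled section by section in parallel.

```python
# Additive decomposition of a 1-D field into large-scale and small-scale parts
# without averaging. The test profile and section size are illustrative.
import numpy as np

n_fine, stride = 1024, 16
x = np.linspace(0.0, 1.0, n_fine)
phi = np.tanh(20 * (x - 0.5)) + 0.02 * np.sin(200 * np.pi * x)   # toy potential profile

# Large-scale part: sample on the coarse grid, interpolate back to the fine grid.
x_coarse = x[::stride]
phi_large = np.interp(x, x_coarse, phi[::stride])

# Small-scale part: the residual, processed in independent grid sections.
phi_small = phi - phi_large
sections = phi_small.reshape(-1, stride)

print("max small-scale magnitude in first sections:", np.abs(sections).max(axis=1)[:4])
print("reconstruction error:", np.abs(phi_large + phi_small - phi).max())
```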


2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Agustin Di Paolo ◽  
Thomas E. Baker ◽  
Alexandre Foley ◽  
David Sénéchal ◽  
Alexandre Blais

Abstract. We use a tensor network method to compute the low-energy excitations of a large-scale fluxonium qubit up to a desired accuracy. We employ this numerical technique to estimate the pure-dephasing coherence time of the fluxonium qubit due to charge noise and coherent quantum phase slips from first principles, finding agreement with previously obtained experimental results. By developing an accurate single-mode theory that captures the details of the fluxonium device, we benchmark the results obtained with the tensor network for circuits spanning a Hilbert space as large as 15^180. Our algorithm is directly applicable to a wide variety of circuit-QED systems and may be a useful tool for scaling up superconducting quantum technologies.
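
As a point of reference for the single-mode theory mentioned above (and emphatically not the paper's tensor-network algorithm), the sketch below diagonalizes a textbook single-mode fluxonium Hamiltonian on a discretized phase grid; the parameter values are illustrative.

```python
# Single-mode fluxonium sketch:
#   H = 4*EC*n^2 + 0.5*EL*phi^2 - EJ*cos(phi - phi_ext)
# discretized on a phase grid and diagonalized directly.
import numpy as np

EC, EL, EJ, phi_ext = 1.0, 0.5, 4.0, np.pi   # illustrative energies, flux at half quantum
N, phi_max = 401, 6 * np.pi
phi = np.linspace(-phi_max, phi_max, N)
d = phi[1] - phi[0]

# Charge term 4*EC*n^2 = -4*EC * d^2/dphi^2 via a second-order finite difference.
lap = (np.diag(np.full(N - 1, 1.0), -1) - 2 * np.eye(N)
       + np.diag(np.full(N - 1, 1.0), 1)) / d ** 2
H = -4 * EC * lap + np.diag(0.5 * EL * phi ** 2 - EJ * np.cos(phi - phi_ext))

evals = np.linalg.eigvalsh(H)
print("lowest transition energies:", evals[1:4] - evals[0])
```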


BME Frontiers ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Waleed Tahir ◽  
Sreekanth Kura ◽  
Jiabei Zhu ◽  
Xiaojun Cheng ◽  
Rafat Damseh ◽  
...  

Objective and Impact Statement. Segmentation of blood vessels from two-photon microscopy (2PM) angiograms of brains has important applications in hemodynamic analysis and disease diagnosis. Here, we develop a generalizable deep learning technique for accurate 2PM vascular segmentation of sizable regions in mouse brains acquired from multiple 2PM setups. The technique is computationally efficient, thus ideal for large-scale neurovascular analysis. Introduction. Vascular segmentation from 2PM angiograms is an important first step in hemodynamic modeling of brain vasculature. Existing segmentation methods based on deep learning either lack the ability to generalize to data from different imaging systems or are computationally infeasible for large-scale angiograms. In this work, we overcome both of these limitations with a method that generalizes to various imaging systems and is able to segment large-scale angiograms. Methods. We employ a computationally efficient deep learning framework with a loss function that incorporates a balanced binary-cross-entropy loss and total variation regularization on the network’s output. Its effectiveness is demonstrated on experimentally acquired in vivo angiograms from mouse brains of dimensions up to 808×808×702 μm. Results. To demonstrate the superior generalizability of our framework, we train on data from only one 2PM microscope and demonstrate high-quality segmentation on data from a different microscope without any network tuning. Overall, our method demonstrates 10× faster computation in terms of voxels-segmented-per-second and 3× larger depth compared to the state-of-the-art. Conclusion. Our work provides a generalizable and computationally efficient anatomical modeling framework for brain vasculature, which consists of deep learning-based vascular segmentation followed by graphing. It paves the way for future modeling and analysis of hemodynamic response at much greater scales that were inaccessible before.
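
A minimal sketch of the kind of loss described above, assuming a PyTorch setting; the weighting scheme, the anisotropic form of the total variation term, and the tensor shapes are illustrative assumptions, not the authors' released implementation.

```python
# Class-balanced binary cross-entropy plus 3-D total variation on the output.
# `pred` holds probabilities in [0, 1], `target` binary labels, shape (B, 1, D, H, W).
import torch

def balanced_bce_tv_loss(pred, target, tv_weight=1e-4, eps=1e-7):
    # Balance weights from the foreground/background ratio of this batch.
    pos_frac = target.mean().clamp(eps, 1 - eps)
    w_pos, w_neg = 1.0 - pos_frac, pos_frac
    pred = pred.clamp(eps, 1 - eps)
    bce = -(w_pos * target * torch.log(pred)
            + w_neg * (1 - target) * torch.log(1 - pred)).mean()

    # Anisotropic total variation: mean absolute difference between neighbors.
    tv = (pred[:, :, 1:, :, :] - pred[:, :, :-1, :, :]).abs().mean() \
       + (pred[:, :, :, 1:, :] - pred[:, :, :, :-1, :]).abs().mean() \
       + (pred[:, :, :, :, 1:] - pred[:, :, :, :, :-1]).abs().mean()
    return bce + tv_weight * tv

# Example call with random tensors of a small illustrative size.
pred = torch.rand(1, 1, 16, 64, 64)
target = (torch.rand(1, 1, 16, 64, 64) > 0.9).float()
print(balanced_bce_tv_loss(pred, target))
```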


A numerical technique is presented for the analysis of turbulent flow associated with combustion. The technique uses Chorin’s random vortex method (r.v.m.), an algorithm capable of tracing the action of elementary turbulent eddies and their cumulative effects without imposing any restriction upon their motion. In the past, the r.v.m. has been used with success to treat non-reacting turbulent flows, revealing in particular the mechanics of large-scale flow patterns, the so-called coherent structures. Introduced here is a flame propagation algorithm, also developed by Chorin, in conjunction with volume sources modelling the mechanical effects of the exothermic process of combustion. As an illustration of its use, the technique is applied to flow in a combustion tunnel where the flame is stabilized by a back-facing step. Solutions for both non-reacting and reacting flow fields are obtained. Although these solutions are restricted by a set of far-reaching idealizations, they nonetheless mimic quite satisfactorily the essential features of turbulent combustion in a lean propane-air mixture that were observed in the laboratory by means of high-speed schlieren photography.
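
The sketch below shows only the two core steps of a random vortex calculation, convection of vortex blobs by their mutually induced velocity and a Gaussian random walk modelling viscous diffusion; boundaries, vortex-sheet creation, and the flame-propagation algorithm discussed above are omitted, and all parameters are illustrative.

```python
# Bare-bones 2-D vortex blob simulation: convection + random-walk diffusion.
import numpy as np

rng = np.random.default_rng(2)
n, nu, dt, core, steps = 200, 1e-3, 0.01, 0.05, 100
pos = rng.standard_normal((n, 2)) * 0.2          # blob positions
gamma = rng.choice([-1.0, 1.0], n) / n           # blob circulations

def induced_velocity(pos, gamma, core):
    """Biot-Savart sum over all blobs with a smoothed (blob) core."""
    dx = pos[:, None, :] - pos[None, :, :]       # pairwise separations
    r2 = (dx ** 2).sum(-1) + core ** 2           # desingularized squared distance
    k = gamma[None, :] / (2 * np.pi * r2)
    # Perpendicular kernel: (u, v) = Gamma / (2*pi*r^2) * (-dy, dx)
    u = -(k * dx[:, :, 1]).sum(1)
    v = (k * dx[:, :, 0]).sum(1)
    return np.stack([u, v], axis=1)

for _ in range(steps):
    pos += dt * induced_velocity(pos, gamma, core)                  # convection step
    pos += np.sqrt(2 * nu * dt) * rng.standard_normal(pos.shape)    # diffusion step

print("centroid drift:", pos.mean(axis=0))
```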


Author(s):  
P. Roshni ◽  
K. A. Prajwala

Phenotype is the outcome of the interaction between genotype and the environment in which the plant grows. Phenomics is a way of speeding up phenotyping with the help of high-tech imaging systems and computing power. It has long been the practice in plant breeding to select the best genotypes after studying their phenotypic expression under different environmental conditions and to use them in hybridization programs to develop new, improved genotypes. Phenomics offers the advantages of faster evaluation, more dynamic whole-of-lifecycle measurement with improved precision, less dependence on periodic destructive assays, and a reduced need for replication in the field. With the various tools involved, phenomics makes it possible to obtain high-dimensional phenotypic data on an organism at large scale. Phenomics, however, is more than just data collection paired with data mining; it is a comprehensive approach that combines systems biology and statistical correlation. Data mining techniques and big data approaches can generate knowledge, with a precision that manual analysis cannot match.



