element size
Recently Published Documents

Total documents: 275 (five years: 52)
H-index: 28 (five years: 4)

PLoS ONE ◽  
2021 ◽  
Vol 16 (10) ◽  
pp. e0258481
Author(s):  
Timothy P. Szczykutowicz ◽  
Sean D. Rose ◽  
Alexander Kitt

Purpose: Previous efforts at increasing spatial resolution have relied on decreasing focal spot and/or detector element size. Many "super resolution" methods require physical movement of a component of the imaging system. This work describes a method for achieving spatial resolution on a scale smaller than the detector pixel without motion of the object or detector.

Methods: We introduce a weighting of the photon energy spectrum on a length scale smaller than a single pixel, using a physical filter that can be placed between the focal spot and the object, between the object and the detector, or integrated into the x-ray source or detector. We refer to the method as sub-pixel encoding (SPE). We show that if one acquires multiple measurements (i.e., x-ray projections), information can be synthesized at a spatial scale defined by the spectrum modulation, not the detector element size. Specifically, if one divides a detector pixel into n sub-regions, and m photon-matter interactions are present, the number of x-ray measurements needed to solve for the detector response of each sub-region is m × n. We discuss realizations of SPE using multiple x-ray spectra with an energy-integrating detector, a single spectrum with a photon-counting detector, and the single photon-matter interaction case. We demonstrate the feasibility of the approach using a simulated energy-integrating detector with a detector pitch of 2 mm for 80-140 kV medical and 200-600 kV industrial applications. Phantoms used for both example SPE realizations had some features only a 1 mm detector could resolve. We calculate the covariance matrix of the SPE output to characterize the noise propagation and correlation of our test examples.

Results: The mathematical foundation of SPE is provided, with details worked out for several detector types and energy ranges. Two numerical simulations were provided to demonstrate feasibility. In both the medical and industrial simulations, some phantom features were observable only with the 1 mm detector and the SPE-synthesized 2 mm detector, while the unmodified 2 mm detector was not able to visualize them. Covariance matrix analysis demonstrated negative off-diagonal terms for both example cases.

Conclusions: The concept of encoding object information at a length scale smaller than a single pixel element, and then retrieving that information, was introduced. SPE simultaneously allows for an increase in spatial resolution and provides "dual energy"-like information about the underlying photon-matter interactions.
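The m × n counting argument above amounts to solving a small linear system per detector pixel. A minimal NumPy sketch of that idea follows (our illustration, not the authors' implementation; the weighting matrix here is random rather than derived from a physical filter):

```python
import numpy as np

# Each detector pixel is split into n sub-regions; with m photon-matter
# interactions there are m*n unknowns per pixel. Each acquisition applies a
# different sub-pixel spectral weighting w[k, j], so the pixel reading is
#   y[k] = sum_j w[k, j] * x[j],
# and m*n independent weightings make the system square and solvable.
rng = np.random.default_rng(0)

n, m = 2, 2                # sub-regions per pixel, photon-matter interactions
k = m * n                  # measurements required by the counting argument

x_true = rng.uniform(0.5, 1.5, size=k)    # unknown sub-region responses
W = rng.uniform(0.1, 1.0, size=(k, k))    # spectral weightings (stand-in values)

y = W @ x_true                   # simulated pixel readings
x_hat = np.linalg.solve(W, y)    # recover the sub-pixel information

assert np.allclose(x_hat, x_true)
```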


2021 ◽  
Author(s):  
Francisco Daniel Filip Duarte

Abstract Artificial intelligence in general, and optimization tasks applied to the design of aerospace, space, and automotive structures, rely on response surfaces to forecast the output of functions; these surfaces are a vital part of such methodologies. Yet they have important limitations: greater precision requires larger data sets, so training or updating larger response surfaces becomes computationally expensive, sometimes unfeasible. This has been a bottleneck limiting more promising results, rendering many AI-related tasks inefficient. To address this challenge, a new methodology for segmenting response surfaces is presented. Unlike other similar methodologies, the novel algorithm presented here, named the outer input method, has a very simple and robust operation. With only one operational parameter, the maximum element size, it efficiently generates a near-isopopulated mesh for any data set with any type of distribution (random, Cartesian, or clustered) and for domains with any number of coordinates. It is thus possible to simplify a response surface by generating an ensemble of response surfaces, here called a response surface mesh. This study demonstrates how a Kriging metamodel trained with a large data set can be simplified with a response surface mesh, significantly reducing its often expensive computation costs: the experiments presented here achieved a speed increase of up to 180 times on a dual-core parallel-processing computer. The methodology can be applied to any metamodel, and the metamodel elements can easily be parallelized and updated individually, further accelerating the already faster training operation.
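The abstract does not spell out the outer input method itself, so the following Python sketch only illustrates the overall recipe it describes: partition the data into a near-isopopulated mesh governed by a single size parameter, then train one small Kriging model per element (scikit-learn's GaussianProcessRegressor serves as a Kriging stand-in; the bisection rule and the reading of "maximum element size" as a per-element point budget are our assumptions):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def split_mesh(X, y, max_points):
    """Recursively bisect the data along its widest coordinate until every
    element holds at most max_points samples (a near-isopopulated mesh)."""
    if len(X) <= max_points:
        return [(X, y)]
    dim = np.argmax(X.max(axis=0) - X.min(axis=0))   # widest coordinate
    left = X[:, dim] <= np.median(X[:, dim])         # balanced split
    return (split_mesh(X[left], y[left], max_points)
            + split_mesh(X[~left], y[~left], max_points))

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(2000, 3))    # any distribution, any dimension
y = np.sin(3 * X).sum(axis=1)             # stand-in response to forecast

# Each element trains on O(max_points) samples instead of all 2000; since
# Kriging training scales roughly cubically with sample count, the ensemble
# is far cheaper, and elements can be trained or updated in parallel.
mesh = split_mesh(X, y, max_points=250)
models = [GaussianProcessRegressor().fit(Xe, ye) for Xe, ye in mesh]
```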


2021 ◽  
Vol 80 (10) ◽  
pp. 7423-7439 ◽  
Author(s):  
Zhiyong Yang ◽  
Jiayan Nie ◽  
Xing Peng ◽  
Dong Tang ◽  
Xueyou Li

Author(s):  
Emmanouil Parastatidis ◽  
Mark W. Hildyard ◽  
Andy Nowacki

Abstract Seismic waves can be an effective probe for retrieving fracture properties, particularly when measurements are coupled with forward and inverse modelling. These seismic models then need an appropriate representation of the fracturing. Fractures can be modelled either explicitly, as zero-thickness frictional slip surfaces, or through an effective medium which incorporates the effect of the fractures into the properties of the medium, creating anisotropy in the wave velocities. In this work, we use a third approach, a hybrid of the previous two: the area surrounding the predefined fracture is treated as an effective medium while the rest of the medium remains homogeneous and isotropic, creating a Localised Effective Medium (LEM). The LEM can be as accurate as the explicit approach but more efficient in run-time. We have shown that the LEM model can closely match an explicit model in reproducing waveforms recorded in a laboratory experiment, for waves propagating parallel and perpendicular to the fractures. The LEM model performs close to the explicit model when the wavelength is much larger than the element size and larger than the fracture spacing. By the definition of the LEM model, we expect that as the LEM layer becomes coarser the model will approach the effective-medium result. But what are the limitations of the LEM, and is there a balance between stiffness, frequency, and thickness at which the LEM performs close to an explicit model or approaches the effective-medium model? To define the limits of the LEM, we experiment with varying fracture stiffness and source frequency, and then compare, for each frequency and stiffness, the explicit and effective-medium models with five LEM models of different thickness. We conclude that thick LEM layers with lower resolution perform the same as thinner, finer-resolution LEM layers for lower frequencies and higher fracture stiffness.
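The effective-medium ingredient that the LEM applies locally can be made concrete with the standard linear-slip construction (Schoenberg-style), in which a fracture set adds excess compliance to the background rock; the sketch below uses illustrative numbers, not the paper's laboratory parameters:

```python
import numpy as np

lam, mu, rho = 20e9, 15e9, 2500.0   # isotropic background (Pa, kg/m^3), assumed
kappa_n, kappa_t = 1e12, 5e11       # fracture normal/shear stiffness (Pa/m), assumed
H = 0.05                            # fracture spacing (m), assumed

# Background stiffness in Voigt notation (engineering shear strains).
C = np.zeros((6, 6))
C[:3, :3] = lam
C[np.diag_indices(3)] = lam + 2 * mu
C[3:, 3:] = mu * np.eye(3)

# Linear-slip fractures normal to x1 add excess compliance:
# Z_N = 1/(kappa_n*H) to S11, and Z_T = 1/(kappa_t*H) to S55 and S66.
S = np.linalg.inv(C)
S[0, 0] += 1 / (kappa_n * H)
S[4, 4] += 1 / (kappa_t * H)
S[5, 5] += 1 / (kappa_t * H)
C_eff = np.linalg.inv(S)

# The fractures slow waves crossing them, creating the velocity anisotropy
# that the LEM layer carries while the rest of the mesh stays isotropic.
vp_across = np.sqrt(C_eff[0, 0] / rho)   # P-wave perpendicular to fractures
vp_along = np.sqrt(C_eff[2, 2] / rho)    # P-wave parallel to fractures
print(f"Vp across: {vp_across:.0f} m/s, Vp along: {vp_along:.0f} m/s")
```

Softer fractures (lower stiffness) or closer spacing enlarge the excess compliance and hence the anisotropy, which is why the paper's limit study varies fracture stiffness against source frequency.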


Diagnostics ◽  
2021 ◽  
Vol 11 (6) ◽  
pp. 1020
Author(s):  
Michiel Stevens ◽  
Peng Liu ◽  
Tom Niessink ◽  
Anouk Mentink ◽  
Leon Abelmann ◽  
...  

Due to the low frequency of circulating tumor cells (CTC), the standard CellSearch method of enumeration and isolation using a single tube of blood is insufficient to measure treatment effects consistently, or to steer personalized therapy. Using diagnostic leukapheresis, this sample size can be increased; however, this also calls for a suitable new method to process larger sample inputs. To achieve this, we have optimized the immunomagnetic enrichment process using a flow-through magnetophoretic system. An overview of the major forces involved in magnetophoretic separation is provided, and the model used for optimizing the magnetic configuration in flow-through immunomagnetic enrichment is presented. The optimal Halbach array element size was calculated, and both optimal and non-optimal arrays were built and tested using anti-EpCAM ferrofluid in combination with cell lines of varying EpCAM antigen expression. Experimentally measured distributions of the magnetic moments of the cell lines were combined with predicted recoveries and fitted to the experimental data; the resulting predictions agree with the measured data within measurement uncertainty. The presented method can be used not only to optimize magnetophoretic separation for a variety of flow configurations but could also be adapted to optimize other (static) magnetic separation techniques.
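The force balance underlying such a system can be sketched in a few lines: the magnetic force on a labelled cell (for a saturated moment, F = m dB/dx) against Stokes drag gives the drift velocity toward the Halbach array. All parameter values below are illustrative assumptions, not the paper's measured distributions:

```python
import numpy as np

m_cell = 1e-14    # magnetic moment of a labelled cell (A m^2), assumed
grad_B = 300.0    # field gradient near the Halbach array (T/m), assumed
eta = 1e-3        # viscosity of the carrier fluid (Pa s)
r_cell = 6e-6     # cell radius (m)

F_mag = m_cell * grad_B                       # force on a saturated moment
v_drift = F_mag / (6 * np.pi * eta * r_cell)  # balance against Stokes drag

print(f"F_mag = {F_mag:.1e} N, drift velocity = {v_drift * 1e6:.1f} um/s")
```

A larger moment or a steeper field gradient raises the drift velocity, which in turn sets the flow rate a given array geometry can tolerate before cells escape capture.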


2021 ◽  
Vol 11 (9) ◽  
pp. 4062
Author(s):  
Grzegorz Zboiński ◽  
Magdalena Zielińska

This paper concerns an algorithm of transition piezoelectric elements for the adaptive analysis of electro-mechanical systems, and demonstrates the effectiveness of the proposed elements in such an analysis. The elements under consideration are intended for joining basic elements which correspond to mechanical models of either the first or higher order, while the electric model is of arbitrary order. In this work, three variants of the transition models are applied. The first assures continuity of both displacements and electric potential between the basic models. The second transition piezoelectric model additionally guarantees continuity of the stress field between the basic models. The third transition model further enables a continuous change of the strain state between the basic models. Based on these models, three types of corresponding transition finite elements are introduced. The applied finite element approximations are hpq/hp-adaptive, which allows element-wise changes of the element size parameter h and of the element longitudinal and transverse orders of approximation, p and q respectively, depending on the error level. The numerical effectiveness of the models and their approximations is investigated in terms of their ability to remove high stress gradients between the basic and transition models, and the convergence of numerical solutions for model problems of piezoelectrics with and without the proposed transition elements.
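A schematic of what element-wise hpq-adaptivity means in practice: each element carries its own (h, p, q) triple, refined independently from a local error estimate. The refinement rule and error-decay factors below are hypothetical placeholders, not the paper's algorithm:

```python
from dataclasses import dataclass, replace

@dataclass
class Element:
    h: float      # element size parameter
    p: int        # longitudinal order of approximation
    q: int        # transverse (electric) order of approximation
    error: float  # local error estimate

def adapt(elements, tol, p_max=8):
    """One adaptive pass: enrich orders while possible, otherwise halve h."""
    out = []
    for e in elements:
        if e.error <= tol:
            out.append(e)                     # converged: leave untouched
        elif e.p < p_max:                     # pq-enrichment (assumed decay rate)
            out.append(replace(e, p=e.p + 1, q=e.q + 1, error=e.error / 4))
        else:                                 # h-refinement (assumed decay rate)
            out += [replace(e, h=e.h / 2, error=e.error / 2) for _ in range(2)]
    return out
```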

