Multiscale Detection of Circles, Ellipses and Line Segments, Robust to Noise and Blur

IEEE Access ◽  
2021 ◽  
Vol 9 ◽  
pp. 25554-25578
Author(s):  
Onofre Martorell ◽  
Antoni Buades ◽  
Jose Luis Lisani
2009 ◽  
Author(s):  
Robert G. Cook ◽  
Carl Erick Hagmann

2020 ◽  
Author(s):  
Anna Nowakowska ◽  
Alasdair D F Clarke ◽  
Jessica Christie ◽  
Josephine Reuther ◽  
Amelia R. Hunt

We measured the efficiency of 30 participants as they searched through simple line segment stimuli and through a set of complex icons. We observed a dramatic shift from highly variable, and mostly inefficient, strategies with the line segments, to uniformly efficient search behaviour with the icons. These results demonstrate that changing what may initially appear to be irrelevant, surface-level details of the task can lead to large changes in measured behaviour, and that visual primitives are not always representative of more complex objects.


2009 ◽  
Vol 29 (5) ◽  
pp. 1359-1361
Author(s):  
Tong ZHANG ◽  
Zhao LIU ◽  
Ning OUYANG

Author(s):  
Stuart P. Wilson

Self-organization describes a dynamic in a system whereby local interactions between individuals collectively yield global order, i.e. spatial patterns unobservable in their entirety to the individuals. By this working definition, self-organization is intimately related to chaos, i.e. global order in the dynamics of deterministic systems that are locally unpredictable. A useful distinction is that a small perturbation to a chaotic system causes a large deviation in its trajectory, i.e. the butterfly effect, whereas self-organizing patterns are robust to noise and perturbation. For many, self-organization is as important to the understanding of biological processes as natural selection. For some, self-organization explains where the complex forms that compete for survival in the natural world originate from. This chapter outlines some fundamental ideas from the study of simulated self-organizing systems, before suggesting how self-organizing principles could be applied through biohybrid societies to establish new theories of living systems.
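The distinction drawn above (local rules yielding global order, yet robustness to small perturbations) can be illustrated with a toy simulation. The following is a hypothetical sketch, not from the chapter: a one-dimensional ring of ±1 states updated by a local majority rule. Each cell sees only itself and its two neighbours, yet domain walls (adjacent disagreements, a crude measure of global disorder) never increase, and an isolated noise flip inside a domain is repaired in a single step.

```python
import numpy as np

def majority_step(state):
    # Each cell adopts the majority of itself and its two ring neighbours.
    # Sums over {-1, +1} triples are odd, so np.sign never returns zero.
    left = np.roll(state, 1)
    right = np.roll(state, -1)
    return np.sign(left + state + right)

def domain_walls(state):
    # Count adjacent disagreements around the ring: a simple global
    # order parameter that no individual cell can observe on its own.
    return int((state != np.roll(state, -1)).sum())

# From a random initial condition, iterating the purely local rule
# coarsens the pattern into stable domains (global order).
rng = np.random.default_rng(1)
state = rng.choice([-1, 1], size=200)
walls_before = domain_walls(state)
for _ in range(20):
    state = majority_step(state)
walls_after = domain_walls(state)
```

Because two equal neighbouring cells always share two of their three inputs, no new wall can ever be created, so `walls_after <= walls_before` holds for any initial condition; this is the sense in which the emergent pattern is robust to noise rather than chaotic.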


2021 ◽  
Vol 79 (2) ◽  
pp. 503-520
Author(s):  
Ignacio Araya ◽  
Damir Aliquintui ◽  
Franco Ardiles ◽  
Braulio Lobo

2021 ◽  
Vol 11 (2) ◽  
pp. 535
Author(s):  
Mahbubunnabi Tamal

Quantification and classification of heterogeneous radiotracer uptake in Positron Emission Tomography (PET) using textural features (termed radiomics) and artificial intelligence (AI) has the potential to be used as a biomarker of diagnosis and prognosis. However, textural features have been reported to be strongly correlated with volume, segmentation and quantization, while the impact of image contrast and noise has not been assessed systematically. Continued investigation is required to update the existing standardization initiatives. This study aimed to investigate the relationships between textural features and these factors with an 18F-filled torso NEMA phantom, filled to yield different contrasts and reconstructed with different durations to represent varying levels of noise. The phantom was also scanned with heterogeneous spherical inserts fabricated with 3D printing technology. All spheres were delineated using: (1) the exact boundaries based on their known diameters; (2) a fixed 40% threshold; and (3) an adaptive threshold. Six textural features were derived from the gray level co-occurrence matrix (GLCM) using different quantization levels. The results indicate that homogeneity and dissimilarity are the most suitable for measuring PET tumor heterogeneity at a quantization level of 64, provided that the segmentation method is robust to noise and contrast variations. To use these textural features as prognostic biomarkers, changes in textural features between baseline and treatment scans should always be reported along with the changes in volumes.
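To make the GLCM-derived features concrete, here is a minimal NumPy sketch (an illustration, not the study's implementation): uniform intensity quantization into a chosen number of gray levels, a symmetric normalized co-occurrence matrix at a horizontal offset, and the two features the study singles out, homogeneity and dissimilarity.

```python
import numpy as np

def quantize(img, levels):
    # Uniformly quantize intensities into `levels` gray levels.
    lo, hi = img.min(), img.max()
    q = np.floor((img - lo) / (hi - lo + 1e-12) * levels).astype(int)
    return np.clip(q, 0, levels - 1)

def glcm(q, levels, offset=(0, 1)):
    # Symmetric, normalized gray-level co-occurrence matrix: counts how
    # often gray levels i and j co-occur at the given pixel offset.
    dr, dc = offset
    rows, cols = q.shape
    a = q[max(0, -dr):rows - max(0, dr), max(0, -dc):cols - max(0, dc)]
    b = q[max(0, dr):rows - max(0, -dr), max(0, dc):cols - max(0, -dc)]
    m = np.zeros((levels, levels))
    np.add.at(m, (a.ravel(), b.ravel()), 1)
    m = m + m.T          # symmetrize
    return m / m.sum()   # normalize to a joint probability

def homogeneity(p):
    # Large when co-occurring levels are similar (near the diagonal).
    i, j = np.indices(p.shape)
    return float((p / (1.0 + (i - j) ** 2)).sum())

def dissimilarity(p):
    # Mean absolute gray-level difference of co-occurring pairs.
    i, j = np.indices(p.shape)
    return float((p * np.abs(i - j)).sum())
```

A perfectly uniform region gives homogeneity 1 and dissimilarity 0; heterogeneous uptake pushes mass off the diagonal, lowering homogeneity and raising dissimilarity. Note how the quantization level (64 in the study) fixes the size of `p` and therefore directly shapes both features.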


Author(s):  
Mehdi Bahri ◽  
Eimear O'Sullivan ◽  
Shunwang Gong ◽  
Feng Liu ◽  
Xiaoming Liu ◽  
...  

Standard registration algorithms need to be independently applied to each surface to register, following careful pre-processing and hand-tuning. Recently, learning-based approaches have emerged that reduce the registration of new scans to running inference with a previously-trained model. The potential benefits are multifold: inference is typically orders of magnitude faster than solving a new instance of a difficult optimization problem, deep learning models can be made robust to noise and corruption, and the trained model may be re-used for other tasks, e.g. through transfer learning. In this paper, we cast the registration task as a surface-to-surface translation problem, and design a model to reliably capture the latent geometric information directly from raw 3D face scans. We introduce Shape-My-Face (SMF), a powerful encoder-decoder architecture based on an improved point cloud encoder, a novel visual attention mechanism, graph convolutional decoders with skip connections, and a specialized mouth model that we smoothly integrate with the mesh convolutions. Compared to the previous state-of-the-art learning algorithms for non-rigid registration of face scans, SMF only requires the raw data to be rigidly aligned (with scaling) with a pre-defined face template. Additionally, our model provides topologically-sound meshes with minimal supervision, offers faster training time, has orders of magnitude fewer trainable parameters, is more robust to noise, and can generalize to previously unseen datasets. We extensively evaluate the quality of our registrations on diverse data. We demonstrate the robustness and generalizability of our model with in-the-wild face scans across different modalities, sensor types, and resolutions. Finally, we show that, by learning to register scans, SMF produces a hybrid linear and non-linear morphable model.
Manipulation of the latent space of SMF allows for shape generation, and morphing applications such as expression transfer in-the-wild. We train SMF on a dataset of human faces comprising 9 large-scale databases on commodity hardware.
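The only pre-processing SMF is said to require is rigid alignment with scaling against a pre-defined template. For readers unfamiliar with that step, here is a minimal sketch (assumed correspondences, not the paper's pipeline) of the standard least-squares similarity alignment (Umeyama/Kabsch) between corresponding 3D point sets.

```python
import numpy as np

def similarity_align(source, template):
    # Least-squares similarity transform (rotation + uniform scale +
    # translation) mapping `source` points (N, 3) onto corresponding
    # `template` points (N, 3), following Umeyama's closed form.
    mu_s, mu_t = source.mean(0), template.mean(0)
    s0, t0 = source - mu_s, template - mu_t
    # SVD of the cross-covariance yields the optimal rotation.
    u, sig, vt = np.linalg.svd(t0.T @ s0)
    d = np.sign(np.linalg.det(u @ vt))
    D = np.diag([1.0, 1.0, d])              # guard against reflections
    R = u @ D @ vt
    scale = (sig * np.diag(D)).sum() / (s0 ** 2).sum()
    t = mu_t - scale * R @ mu_s
    return scale, R, t                      # aligned = scale * (R @ p) + t
```

In practice the correspondences between a raw scan and the template are unknown, so real pipelines pair this solver with landmark detection or iterative closest point; the closed form above is the inner step they all share.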

