uniform scaling
Recently Published Documents

TOTAL DOCUMENTS: 54 (FIVE YEARS: 7)
H-INDEX: 10 (FIVE YEARS: 1)

Electronics, 2022, Vol 11 (2), pp. 278
Author(s): Cătălina Lucia Cocianu, Cristian Răzvan Uscatu

Many technological applications of our time rely on images captured by multiple cameras. Such applications include the detection and recognition of objects in captured images, the tracking of objects and analysis of their motion, and the detection of changes in appearance. The alignment of images captured at different times and/or from different angles is a key processing step in these applications. One of the most challenging tasks is to develop fast algorithms that accurately align images perturbed by various types of transformations. The paper reports a new method for registering images in the case of geometric perturbations that include rotations, translations, and non-uniform scaling. The input images can be monochrome or color, and they are preprocessed by a noise-insensitive edge detector to obtain binarized versions. Isotropic scaling transformations are used to compute multi-scale representations of the binarized inputs. The algorithm is of memetic type and exploits the fact that computation carried out on reduced representations usually produces promising initial solutions very quickly. The proposed method combines bio-inspired and evolutionary computation techniques with clustered search and implements a procedure specially tailored to address the premature-convergence issue in the various scaled representations. A long series of tests on perturbed images was performed, demonstrating the efficiency of our memetic multi-scale approach. In addition, a comparative analysis showed that the proposed algorithm outperforms several well-known registration procedures in terms of both accuracy and runtime.
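As a rough illustration of the multi-scale idea described above (not the memetic algorithm itself), the following Python sketch registers binarized edge images coarse-to-fine over isotropically down-scaled representations. The similarity measure, pyramid depth, and simple grid search are assumptions that stand in for the paper's evolutionary components.

```python
import numpy as np
from scipy import ndimage

def similarity(a, b):
    """Normalized overlap between two binary edge maps of equal shape."""
    return np.sum(a * b) / (np.sqrt(np.sum(a)) * np.sqrt(np.sum(b)) + 1e-9)

def pyramid(img, levels):
    """Multi-scale representations via isotropic down-scaling (coarsest first)."""
    return [ndimage.zoom(img, 0.5 ** k, order=0) for k in range(levels - 1, -1, -1)]

def register(ref, mov, levels=3):
    """Coarse-to-fine search for rotation (deg) and translation (px)."""
    best = (0.0, 0.0, 0.0)                      # angle, tx, ty at full resolution
    for lvl, (r, m) in enumerate(zip(pyramid(ref, levels), pyramid(mov, levels))):
        scale = 2.0 ** (levels - 1 - lvl)       # reduction factor of this level
        a0, tx0, ty0 = best[0], best[1] / scale, best[2] / scale
        step = 4.0 / (lvl + 1)                  # shrink the search grid per level
        candidates = [(a0 + da, tx0 + dx, ty0 + dy)
                      for da in np.arange(-8, 8.1, step)
                      for dx in np.arange(-4, 4.1, step)
                      for dy in np.arange(-4, 4.1, step)]

        def score(p):
            a, tx, ty = p
            warped = ndimage.shift(ndimage.rotate(m, a, reshape=False, order=0),
                                   (ty, tx), order=0)
            return similarity(r, warped)

        a, tx, ty = max(candidates, key=score)
        best = (a, tx * scale, ty * scale)      # promote back to full resolution
    return best
```

The coarse levels are cheap to evaluate and supply a promising starting point that is then refined at finer resolutions, which is the intuition the abstract attributes to its reduced representations.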


2021, pp. 1-5
Author(s): Robert J Buenker

A number of the most often cited results of relativity theory deal with the relationships between energy, momentum, and inertial mass. The history of how Einstein and Planck came to these conclusions is reviewed. It is pointed out that considerations of how the speed of light is affected by the motion of the Earth played a determining role in these developments. After the Michelson-Morley null-interference result became available, Voigt introduced a new space-time transformation by amending the classical Galilean transformation so that the speed of light in free space has the same value of c regardless of the state of motion of both the light source and the observer. This led to the Lorentz transformation, which has been the cornerstone of relativity theory for the past century. A thought experiment is presented which proves, however, that there are many situations for which the measured speed of light is NOT equal to c. Furthermore, it is pointed out that the rate of an inertial clock cannot change spontaneously, a result that is perfectly compatible with Newton's First Law of Motion (Law of Inertia). This result contradicts the space-time mixing characteristic of the Lorentz transformation and leads to the conclusion that events which are spontaneous for one inertial frame will also be so for every other one. The uniform scaling procedure is a generalization of this result to all physical properties other than elapsed times. Its application shows that the commonly accepted relationships between energy and momentum are only special cases in which it is assumed that the observer is stationary in the rest frame in which force has been applied to cause the object's acceleration.


2021, pp. 1-6
Author(s): Robert J Buenker

One of the most basic principles in science is the objectivity of measurement of physical properties. According to the special theory of relativity (STR), this ancient principle is violated for observers in relative motion, since it predicts that they will generally disagree on the ratios of the lengths of two objects and also on whose clock is running slower at any given time. Both predictions stem from the Lorentz transformation (LT), which is the centerpiece of Einstein's STR. It has recently been pointed out that two of the claims of this theory are mutually contradictory: it is impossible that the rates of two clocks in motion are strictly proportional to one another (time dilation) while one of them finds that two events are simultaneous whereas the other does not (remote non-simultaneity). This recognition proves that the LT is not a valid component of the relativistic theory of motion, including its well-known thesis that space and time are not distinct quantities. Instead, it has always been found experimentally that the rates of clocks in motion are governed by a Universal Time-Dilation Law (UTDL), whereby the speed of the clock relative to a specific rest system is the sole determining factor. A simple way of describing this state of affairs is to say that the standard unit of time in each rest frame is different and increases with its speed relative to the above rest system by a definite factor. The measurement process is thereby rendered completely objective in nature. A key goal of relativity theory is therefore to develop a quantitatively valid method for determining this factor. It will be shown that the same factor appears in the true relativistic space-time transformation and that it also plays a key role in the uniform scaling of all other physical properties.


2021, pp. 1-27
Author(s): Santiago Barreda

Abstract: The evaluation of normalization methods sometimes focuses on the maximization of vowel-space similarity. This focus can lead to the adoption of methods that erase legitimate phonetic variation from our data, that is, overnormalization. First, a production corpus is presented that highlights three types of variation in formant patterns: uniform scaling, nonuniform scaling, and centralization. Then the results of two perceptual experiments are presented, both suggesting that listeners tend to ignore variation according to uniform scaling, while associating nonuniform scaling and centralization with phonetic differences. Overall, the results suggest that normalization methods that remove variation other than uniform scaling can remove legitimate phonetic variation from vowel formant data. As a result, although these methods can provide more similar vowel spaces, they do so by erasing phonetic variation that may be socially and linguistically meaningful, including a potential male-female difference in the low vowels in our corpus.
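To make the distinction concrete, here is a small illustrative Python sketch of uniform versus nonuniform (per-formant) scaling normalization. The toy formant values and the geometric-mean scale factor are assumptions for illustration only; they are not the corpus or the specific normalization methods evaluated in the paper.

```python
import numpy as np

# Toy speaker data: rows are vowel tokens, columns are F1..F3 in Hz (assumed values).
speaker = np.array([[310., 2020., 2960.],
                    [640., 1190., 2390.],
                    [850., 1610., 2450.]])

# Uniform scaling: one multiplicative factor per speaker, shared by all formants.
# The geometric mean of all formant values is used here as an illustrative choice.
uniform_factor = np.exp(np.mean(np.log(speaker)))
uniform_normalized = speaker / uniform_factor

# Nonuniform scaling: a separate factor per formant, which can also absorb
# phonetically meaningful differences between speakers.
per_formant_factors = np.exp(np.mean(np.log(speaker), axis=0))
nonuniform_normalized = speaker / per_formant_factors

print(uniform_normalized)
print(nonuniform_normalized)
```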


2020
Author(s): Nilay Kumar, Francisco Huizar, Trent Robinett, Keity J. Farfán-Pira, Dharsan Soundarrajan, ...

Summary: Phenomics requires quantification of large volumes of image data, necessitating high-throughput image processing approaches. Existing image processing pipelines for Drosophila wings, a powerful model for studying morphogenesis, are limited in speed, versatility, and precision. To overcome these limitations, we developed MAPPER, a fully automated, machine learning-based pipeline that quantifies high-dimensional phenotypic signatures, with each dimension representing a unique morphological feature. MAPPER magnifies the power of Drosophila genetics by rapidly identifying subtle phenotypic differences in sample populations. To demonstrate its widespread utility, we used MAPPER to reveal new insights connecting patterning and growth across Drosophila genotypes and species. The morphological features extracted using MAPPER identified uniform scaling of proximal-distal axis length across four different species of Drosophila. Examination of morphological features extracted by MAPPER from wings with modulated insulin signaling pathway activity revealed a scaling gradient across the anterior-posterior axis. Additionally, batch processing of samples with MAPPER revealed a key function for the mechanosensitive calcium channel, Piezo, in regulating bilateral symmetry and robust organ growth. MAPPER is an open-source tool for rapid analysis of large volumes of imaging data. Overall, MAPPER provides new capabilities to rigorously and systematically identify genotype-to-phenotype relationships in an automated, high-throughput fashion.


Author(s): Ruizhe Zhao, Brian Vogel, Tanvir Ahmed, Wayne Luk

By leveraging the half-precision floating-point format (FP16), well supported by recent GPUs, mixed precision training (MPT) enables us to train larger models under the same or even a smaller budget. However, due to the limited representation range of FP16, gradients can often experience severe underflow problems that hinder backpropagation and degrade model accuracy. MPT adopts loss scaling, which scales up the loss value just before backpropagation starts, to mitigate underflow by enlarging the magnitude of gradients. Unfortunately, scaling once is insufficient: gradients from distinct layers can each have different data distributions and require non-uniform scaling. Heuristics and hyperparameter tuning are needed to minimize the side effects of loss scaling. We propose gradient scaling, a novel method that analytically calculates the appropriate scale for each gradient on the fly. It addresses underflow effectively without numerical problems such as overflow, and without the need for tedious hyperparameter tuning. Experiments on a variety of networks and tasks show that gradient scaling can improve accuracy and reduce overall training effort compared with state-of-the-art MPT.
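For context, the snippet below illustrates the standard loss-scaling baseline in plain NumPy terms, not the paper's gradient-scaling method: a tiny FP16 gradient flushes to zero unless the loss, and hence the gradient, is multiplied by a scale S that is divided out again before the optimizer step. The numeric values are arbitrary assumptions.

```python
import numpy as np

true_grad = 1e-8                         # gradient magnitude below what FP16 can represent
S = 1024.0                               # loss scale (a hyperparameter in standard MPT)

unscaled = np.float16(true_grad)         # flushes to 0.0 in half precision
scaled   = np.float16(true_grad * S)     # survives because the loss was multiplied by S

# Before the optimizer step, gradients are unscaled back in single precision.
recovered = np.float32(scaled) / S

print(unscaled, recovered)               # 0.0 vs ~1e-08
```

Because a single S must suit every layer at once, layers whose gradients have very different magnitudes are served poorly, which is the non-uniform scaling problem the abstract describes.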


2020, Vol 93 (1106), pp. 20190639
Author(s): Rosie Goodburn, Evanthia Kousi, Alison Macdonald, Veronica Morgan, Erica Scurr, ...

Objective: To present and evaluate an automated method to correct scaling between Dixon water/fat images used in breast density (BD) assessments. Methods: Dixon images were acquired in 14 subjects with different T1 weightings (flip angles, FA, 4°/16°). Our method corrects intensity differences between water and fat images via the application of a uniform scaling factor (SF), determined subject-by-subject. Based on the postulation that optimal SFs yield relatively featureless summed fat/scaled-water images, each SF was chosen as that which generated the lowest 95th percentile in the absolute spatial-gradient image-volume of the summed image. Water-fraction maps were calculated for data acquired with low/high FAs, and BD (%) was the total percentage of water within each breast volume. Results: Corrected/uncorrected BD ranged, respectively, from 10.9–71.8%/8.9–66.7% for low-FA data to 8.1–74.3%/5.6–54.3% for high-FA data. Corrected metrics showed an average absolute increase in BD of 6.4% for low-FA data and 18.4% for high-FA data. BD values estimated from low- and high-FA data were closer following SF correction. Conclusion: Our results demonstrate the need for scaling in such BD assessments, where our method brought high-FA and low-FA data into closer agreement. Advances in knowledge: We demonstrated a feasible method to address a main source of inaccuracy in Dixon-based BD measurements.
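A minimal NumPy sketch of the SF-selection criterion described above, assuming the water and fat volumes are given as arrays of equal shape; the candidate SF grid, array names, and the simple mean used for the density summary are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def choose_scaling_factor(water, fat, candidates=np.linspace(0.5, 2.0, 151)):
    """Pick the SF whose summed fat + SF*water volume is most featureless,
    measured by the 95th percentile of its absolute spatial gradient."""
    def edge_content(sf):
        summed = fat + sf * water
        grads = np.gradient(summed)                   # one array per spatial axis
        magnitude = np.sqrt(sum(g ** 2 for g in grads))
        return np.percentile(magnitude, 95)
    return min(candidates, key=edge_content)

def breast_density(water, fat, sf):
    """Simplified density summary: mean water fraction (%) over the volume."""
    water_fraction = sf * water / (sf * water + fat + 1e-9)
    return 100.0 * water_fraction.mean()
```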


Author(s): Yilin Shen, Yanping Chen, Eamonn Keogh, Hongxia Jin

Author(s): David Ross-Pinnock, Glen Mullineux

Control of temperature in large-scale manufacturing environments is not always practical or economical, which introduces thermal effects including variation in ambient refractive index and thermal expansion. Thermal expansion is one of the largest contributors to measurement uncertainty; however, temperature distributions are not widely measured. Uncertainties can also be introduced in scaling to standard temperature. For more complex temperature distributions with non-linear temperature gradients, uniform scaling is unrealistic. Deformations have been measured photogrammetrically in two thermally challenging scenarios with localised heating. Extended temperature measurement has been tested with finite element analysis to assess a compensation methodology for coordinate measurement. This was compared with commonly used uniform scaling and, even with a highly simplified finite element analysis simulation, outperformed it in scaling a number of coordinates at once. This work highlights the need to focus on reproducible temperature measurement for dimensional measurement in non-standard environments.
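For reference, the conventional uniform scaling that the compensation methodology is compared against can be sketched as below, assuming a single expansion coefficient and a single measured temperature; it is exactly this single-factor assumption that breaks down for non-linear temperature gradients. The material, coefficient, and coordinates are placeholders.

```python
# Uniform scaling of measured coordinates back to the standard temperature of 20 °C.
ALPHA_STEEL = 11.5e-6      # coefficient of thermal expansion, 1/°C (assumed material)

def scale_to_20c(coords_mm, measured_temp_c, alpha=ALPHA_STEEL):
    """Uniformly scale coordinates (list of (x, y, z) in mm) measured at
    measured_temp_c back to their 20 °C values, assuming expansion about the origin."""
    factor = 1.0 / (1.0 + alpha * (measured_temp_c - 20.0))
    return [(x * factor, y * factor, z * factor) for (x, y, z) in coords_mm]

print(scale_to_20c([(1000.0, 250.0, 0.0)], measured_temp_c=26.0))
```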


2017, Vol 141 (5), pp. 3582-3582
Author(s): Terrance M. Nearey, Santiago Barreda, Michael Kiefte, Peter F. Assmann
