Quantitative analysis of eyes and other optical systems in linear optics

2017 ◽  
Vol 37 (3) ◽  
pp. 347-352 ◽  
Author(s):  
William F. Harris ◽  
Tanya Evans ◽  
Radboud D. van Gool

2007 ◽  
Vol 66 (2) ◽  
Author(s):  
S. D. Mathebula

The primary purpose of this paper is to illustrate the quantitative analysis of the linear-optical character of a cornea using transformed ray transferences in a 10-dimensional Hamiltonian linear space. A Pentacam was used to obtain 43 successive measurements of the powers of the anterior and posterior corneal surfaces and the central corneal thickness of the right eye of a single subject. From these measurements 4 × 4 ray transferences were calculated, and principal matrix logarithms were determined for all the transferences. This produced a set of 43 transformed transferences for the cornea, which represent 43 points in a 10-dimensional Hamiltonian space. A 10-component mean and a 10 × 10 variance-covariance matrix were calculated from the transformed transferences. In the 10-space, the mean represents the average of the measurements characterising the optical nature of the cornea, and the variances and covariances represent their spread. The matrix exponential of the mean gives a value for the mean transference of the cornea; it represents the average cornea. The analysis described here can be applied to most optical systems, including whole eyes, and is complete within linear optics. We believe it to be the first such analysis.
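
A minimal numerical sketch of the averaging procedure the abstract describes, using SciPy's principal matrix logarithm and exponential. The function names are illustrative, and the 2 × 2 thin-lens transferences stand in for the paper's 4 × 4 corneal transferences; this is not the authors' own code.

```python
import numpy as np
from scipy.linalg import logm, expm

def mean_transference(transferences):
    """Average transferences via their principal matrix logarithms
    (the transformed transferences), then map back with expm."""
    logs = [logm(T).real for T in transferences]   # transformed transferences
    return expm(np.mean(logs, axis=0))             # mean transference

# Illustrative stand-in: 2 x 2 thin-lens transferences with slightly
# different powers F (diopters), mimicking repeated measurements.
def thin_lens(F):
    return np.array([[1.0, 0.0], [-F, 1.0]])

measurements = [thin_lens(F) for F in (42.0, 42.5, 41.8)]
print(mean_transference(measurements))
```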


2007 ◽  
Vol 66 (2) ◽  
Author(s):  
W. F. Harris

There is a need for methods for quantitative analysis of the first-order optical character of optical systems, including the eye and components of the eye. Because of their symplectic nature, ray transferences themselves are not closed under addition and multiplication by a scalar and, hence, are not amenable to conventional quantitative analysis such as the calculation of an arithmetic mean. However, transferences can be transformed into augmented Hamiltonian matrices, which are amenable to such analysis. This paper provides a general methodology and, in particular, shows how to calculate means and variance-covariances representing the first-order optical character of optical systems. The systems may be astigmatic and may have decentred elements. An accompanying paper shows application to the cornea of the human eye with allowance for thickness.
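
A hedged sketch of the statistics step: each measurement's transformed transference is treated as one observation and a mean vector and variance-covariance matrix are computed. The paper works with the independent entries of the augmented Hamiltonian matrix; flattening with ravel() here is only a simple stand-in for that vectorisation.

```python
import numpy as np
from scipy.linalg import logm

def transformed_stats(transferences):
    """Mean vector and variance-covariance matrix of transformed
    transferences, one flattened matrix logarithm per measurement."""
    logs = np.array([logm(T).real.ravel() for T in transferences])
    return logs.mean(axis=0), np.cov(logs, rowvar=False)

# Usage: mean_vec, cov = transformed_stats(measurements)
```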


2009 ◽  
Vol 68 (2) ◽  
Author(s):  
W. F. Harris

That a thin refracting element can have a dioptric power which is asymmetric immediately raises questions at the fundamentals of linear optics. In optometry the important concept of vergence, in particular, depends on the concept of a pencil of rays, which in turn depends on the existence of a focus. But systems that contain refracting elements of asymmetric power may have no focus at all. Thus the existence of thin systems with asymmetric power forces one to go back to basics and redevelop a linear optics from scratch that is sufficiently general to accommodate such systems. This paper offers an axiomatic approach to such a generalized linear optics. The paper makes use of two axioms: (i) a ray in a homogeneous medium is a segment of a straight line, and (ii) at an interface between two homogeneous media a ray refracts according to Snell's equation. The familiar paraxial assumption of linear optics is also made. From the axioms, a pencil of rays at a transverse plane T in a homogeneous medium is defined formally (Definition 1) as an equivalence relation with no necessary association with a focus. At T the reduced inclination of a ray in a pencil is an affine function of its transverse position. If the pencil is centred the function is linear. The multiplying factor M, called the divergency of the pencil at T, is a real 2 × 2 matrix. Equations are derived for the change of divergency across thin systems and homogeneous gaps. Although divergency is undefined at refracting surfaces and focal planes, the pencil of rays is defined at every transverse plane in a system (Definition 2). The eigenstructure gives a principal meridional representation of divergency, and divergency can be decomposed into four natural components. Depending on its divergency, a pencil in a homogeneous gap may have exactly one point focus, one line focus, two line foci or no foci. Equations are presented for the position of a focus and for its orientation in the case of a line focus. All possible cases are examined. The equations allow matrix step-along procedures for optical systems in general, including those with elements that have asymmetric power. The negative of the divergency is the (generalized) vergence of the pencil.
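
A small sketch of the matrix step-along idea for divergency. The update rules below are inferred from the paraxial relations stated in the abstract (reduced inclination = M·y for a centred pencil), not quoted from the paper, so they should be read as an illustration under those assumptions.

```python
import numpy as np

I2 = np.eye(2)

def across_gap(M, zeta):
    """Divergency after a homogeneous gap of reduced width zeta:
    y' = (I + zeta M) y while the inclination M y is unchanged,
    so M' = M (I + zeta M)^(-1); undefined at a focal plane."""
    return M @ np.linalg.inv(I2 + zeta * M)

def across_thin_element(M, F):
    """Divergency after a thin element of power F (any real 2 x 2
    matrix, possibly asymmetric): the inclination drops by F y."""
    return M - F

# A pencil meets a thin element of asymmetric power, then a gap.
F = np.array([[5.0, 1.0], [-1.0, 4.0]])   # asymmetric power (illustrative)
M = np.diag([-2.0, -3.0])                 # incoming divergency
print(across_gap(across_thin_element(M, F), 0.01))
```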


2021 ◽  
Vol 10 (1) ◽  
Author(s):  
Aonan Zhang ◽  
Hao Zhan ◽  
Junjie Liao ◽  
Kaimin Zheng ◽  
Tao Jiang ◽  
...  

Quantum computing seeks to realize hardware-optimized algorithms for application-related computational tasks. NP (nondeterministic polynomial time) is a complexity class containing many important but intractable problems, such as the satisfiability of potentially conflicting constraints (SAT). According to the well-founded exponential time hypothesis, verifying a SAT instance of size n generally requires the complete solution, an O(n)-bit proof. In contrast, quantum verification algorithms, which encode the solution into quantum bits rather than classical bit strings, can perform the verification task with quadratically reduced information about the solution, in Õ(√n) qubits. Here we realize the quantum verification machine of SAT with single photons and linear optics. By using tunable optical setups, we efficiently verify satisfiable and unsatisfiable SAT instances and achieve a clear completeness-soundness gap even in the presence of experimental imperfections. The protocol requires only unentangled photons, linear operations on multiple modes and at most two-photon joint measurements. These features make the protocol suitable for photonic realization and scalable to large problem sizes with advances in high-dimensional quantum information manipulation and large-scale linear-optical systems. Our results open an essentially new route toward quantum advantages and extend the computational capability of optical quantum computing.


2021 ◽  
Vol 20 (9) ◽  
Author(s):  
Juan Carlos Garcia-Escartin ◽  
Vicent Gimeno ◽  
Julio José Moyano-Fernández

Linear optical systems acting on photon number states produce many interesting evolutions, but cannot give all the allowed quantum operations on the input state. Using Toponogov's theorem from differential geometry, we propose an iterative method that, for any arbitrary quantum operator U acting on n photons in m modes, returns an operator Ũ which can be implemented with linear optics. The approximation method is locally optimal and converges. The resulting operator Ũ can be translated into an experimental optical setup using previous results.


2016 ◽  
Vol 75 (1) ◽  
Author(s):  
William F. Harris ◽  
Tanya Evans ◽  
Radboud D. Van Gool

Because dioptric power matrices of thin systems constitute a (three-dimensional) inner-product space, it is possible to define distances and angles in the space and so perform quantitative analyses on dioptric power for thin systems, including astigmatic corneal powers and refractive errors. The purpose of this study is to generalise to thick systems. The paper begins with the ray transference of a system. Two 10-dimensional inner-product spaces are devised for the holistic quantitative analysis of the linear optical character of optical systems. One is based on the point characteristic and the other on the angle characteristic; the first has distances with the physical dimension L⁻¹ and the second the physical dimension L. A numerical example calculates the locations, distances from the origin and angles subtended at the origin in the 10-dimensional space for two arbitrary astigmatic eyes.
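
A hedged sketch of the kind of comparison such a space permits: distances from the origin and the angle subtended at the origin by two systems. The Frobenius inner product on matrix logarithms of transferences is used here purely as a stand-in; the paper's two inner products (built on the point and angle characteristics) differ in detail and in physical dimension.

```python
import numpy as np
from scipy.linalg import logm

def inner(A, B):
    # Frobenius inner product: an illustrative stand-in only.
    return np.trace(A.T @ B)

def compare_systems(T1, T2):
    """Distances from the origin and the angle subtended at the
    origin by two systems' transformed transferences."""
    Z1, Z2 = logm(T1).real, logm(T2).real
    d1, d2 = np.sqrt(inner(Z1, Z1)), np.sqrt(inner(Z2, Z2))
    cosang = inner(Z1, Z2) / (d1 * d2)
    return d1, d2, np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
```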


Author(s):  
J.P. Fallon ◽  
P.J. Gregory ◽  
C.J. Taylor

Quantitative image analysis systems have been used for several years in research and quality control applications in various fields, including metallurgy and medicine. The technique has been applied as an extension of subjective microscopy to problems requiring quantitative results and which are amenable to automatic methods of interpretation.

Feature extraction. In the most general sense, a feature can be defined as a portion of the image which differs in some consistent way from the background. A feature may be characterized by the density difference between itself and the background, by an edge gradient, or by the spatial frequency content (texture) within its boundaries. The task of feature extraction includes recognition of features and encoding of the associated information for quantitative analysis.

Quantitative analysis. Quantitative analysis is the determination of one or more physical measurements of each feature. These measurements may be straightforward ones such as area, length, or perimeter, or more complex stereological measurements such as convex perimeter or Feret's diameter.
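
A minimal modern sketch of the two steps, assuming SciPy. Thresholding on density difference stands in for the general feature-extraction schemes mentioned above, and only the simplest measurement (area) is computed.

```python
import numpy as np
from scipy import ndimage

def extract_features(image, threshold):
    """Feature extraction by density difference: pixels brighter
    than the threshold are grouped into labelled connected regions."""
    labels, n = ndimage.label(image > threshold)
    return labels, n

def feature_areas(labels, n, pixel_area=1.0):
    """Quantitative analysis: the area of each feature, as pixel
    counts scaled to physical units."""
    counts = ndimage.sum(np.ones_like(labels), labels,
                         index=np.arange(1, n + 1))
    return counts * pixel_area
```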


Author(s):  
V. V. Damiano ◽  
R. P. Daniele ◽  
H. T. Tucker ◽  
J. H. Dauber

An important example of intracellular particles is encountered in silicosis, where alveolar macrophages ingest inspired silica particles. The quantitation of silica uptake by these cells may be a potentially useful method for monitoring silica exposure. Accurate quantitative analysis of ingested silica by phagocytic cells is difficult because the particles are frequently small, irregularly shaped and cannot be visualized within the cells. Semiquantitative methods which make use of particles of known size, shape and composition as calibration standards may be the most direct and simplest approach. The present paper describes an empirical method in which glass microspheres were used as a model to show how the ratio of the silicon Kα peak X-ray intensity from the microspheres to that of a bulk sample of the same composition correlated with the mass of the microsphere contained within the cell. Irregularly shaped silica particles were also analyzed, and a calibration curve was generated from these data.
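
A hedged sketch of the calibration idea: fit the relationship between the intensity ratio (microsphere Si Kα over bulk Si Kα) and known microsphere mass, then invert it for unknown particles. All numbers below are invented placeholders for illustration, not data from the paper.

```python
import numpy as np

# Placeholder calibration points: intensity ratio vs. known
# microsphere mass (picograms). Invented for illustration only.
ratios = np.array([0.05, 0.12, 0.21, 0.33])
masses_pg = np.array([0.8, 2.0, 3.6, 5.5])

slope, intercept = np.polyfit(ratios, masses_pg, 1)  # linear calibration

def mass_from_ratio(r):
    """Estimate ingested silica mass from a measured Si K-alpha
    intensity ratio using the fitted calibration line."""
    return slope * r + intercept

print(mass_from_ratio(0.18))
```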


Author(s):  
H.J. Dudek

The chemical inhomogeneities in modern materials, such as fibers, phases and inclusions, often have diameters in the region of one micrometer. Using electron microbeam analysis for the determination of element concentrations, one has to know the smallest possible diameter of such regions for a given accuracy of the quantitative analysis. In this paper the correction procedure for quantitative electron microbeam analysis is extended to a spatial problem: to determine the smallest possible dimensions of a cylindrical particle P, of height D (depth resolution) and diameter L (lateral resolution), embedded in a matrix M, which has to be analysed quantitatively with accuracy q. The mathematical treatment leads to the following form of the characteristic X-ray intensity of the element i of a particle P embedded in the matrix M in relation to the intensity of a standard S


Author(s):  
John A. Hunt

Spectrum-imaging is a useful technique for comparing different processing methods on very large data sets which are identical for each method. This paper is concerned with comparing methods of electron energy-loss spectroscopy (EELS) quantitative analysis on the Al-Li system. The spectrum-image analyzed here was obtained from an Al-10 at% Li foil aged to produce δ′ precipitates that can span the foil thickness. Two 1024-channel EELS spectra, offset in energy by 1 eV, were recorded and stored at each pixel in the 80 × 80 spectrum-image (25 Mbytes). An energy range of 39–89 eV (20 channels/eV) is represented. During processing the spectra are either subtracted to create an artifact-corrected difference spectrum, or the energy offset is numerically removed and the spectra are added to create a normal spectrum. The spectrum-images are processed into 2D floating-point images using methods and software described in [1].
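
A small sketch of the two processing routes just described, assuming 1-D NumPy spectra and the acquisition parameters above (a 1 eV offset at 20 channels/eV). The direction of the shift and np.roll's wrap-around at the array ends are simplifications.

```python
import numpy as np

CH_PER_EV = 20
OFFSET_CH = 1 * CH_PER_EV   # the two spectra are offset by 1 eV

def difference_spectrum(s0, s1):
    """Subtract the energy-offset pair channel by channel:
    fixed-pattern detector artifacts cancel, leaving an
    artifact-corrected difference spectrum."""
    return s1 - s0

def normal_spectrum(s0, s1):
    """Numerically remove the 1 eV offset from the second spectrum,
    then add the pair to recover a conventional spectrum."""
    return s0 + np.roll(s1, OFFSET_CH)   # roll wraps at the ends
```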

