The complexity of retina operators

2002 ◽  
Vol 2 (1) ◽  
pp. 23-50 ◽  
Author(s):  
Bernard Beauzamy

An artificial retina is a plane circuit consisting of a matrix of photocaptors; each has its own memory, consisting of a small number of cells (3 to 5), arranged in parallel planes. The treatment consists of logical operations between planes, plus translations of any plane: these are called "elementary operations" (EO). A retina operator (RO) is a transformation of the image, defined by a specific representation of a Boolean function of n variables (n is the number of neighboring cells taken into account). What is the best way to represent an RO by means of EO, considering the strong limitation of memory? For most retina operators, the complexity (i.e., the number of EO needed) is exponential, no matter what representation is used; but for specific classes, threshold functions and more generally symmetric functions, we obtain a result several orders of magnitude better than previously known ones. It uses a new representation, called "Block Addition of Variables." For instance, the threshold function T_{25,12} (find whether at least 12 pixels are at 1 in a 5 × 5 square) previously required 62 403 599 EO to perform. With our method, it requires only 38 084 operations, using three memory cells.
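The threshold retina operator itself is simple to state; the paper's contribution is its decomposition into elementary operations, which is not shown here. As a minimal sketch, a direct (non-EO) evaluation of T_{25,12} on a binary image:

```python
# Sketch only: direct evaluation of the threshold retina operator T_{25,12}.
# The output pixel is 1 iff at least t of the size*size pixels in the
# surrounding window are 1. This is NOT the paper's EO decomposition,
# just the operator's definition made executable.
def threshold_ro(image, size=5, t=12):
    """Apply T_{size*size, t} to a binary image (list of lists of 0/1)."""
    h, w, r = len(image), len(image[0]), size // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            count = sum(
                image[y + dy][x + dx]
                for dy in range(-r, r + 1)
                for dx in range(-r, r + 1)
                if 0 <= y + dy < h and 0 <= x + dx < w
            )
            out[y][x] = 1 if count >= t else 0
    return out
```

Pixels whose window extends past the border simply see fewer neighbors here; the paper's circuit model handles borders via plane translations.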

2014 ◽  
Vol 536-537 ◽  
pp. 222-225
Author(s):  
Jing Wen Li ◽  
Shan Hong Yang

In the palmprint image acquisition process, the image is often subject to interference from outside noise. This noise affects subsequent palmprint feature extraction. Wavelet denoising is a common method for denoising palmprint images. The defects of the traditional soft and hard threshold functions in image denoising are analyzed, and an improved threshold function is proposed on the basis of the soft and hard threshold functions. The conventional hard and soft threshold functions and the improved function are each applied to image denoising. Experiments show that the improved wavelet threshold function achieves a better denoising effect than the conventional soft and hard threshold functions.
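The classical shrinkage rules referred to above, plus one illustrative compromise function (the abstract does not give the authors' exact improved formula, so the third rule below is an assumption, not their method):

```python
import math

# Classical wavelet shrinkage rules. "improved_threshold" is an illustrative
# compromise, NOT the paper's formula: it is continuous like the soft rule
# but its shrinkage decays for large coefficients, reducing the soft rule's
# fixed bias of lam.
def hard_threshold(w, lam):
    return w if abs(w) > lam else 0.0

def soft_threshold(w, lam):
    return math.copysign(abs(w) - lam, w) if abs(w) > lam else 0.0

def improved_threshold(w, lam, alpha=2.0):
    # Shrink by lam * exp(-alpha * (|w| - lam)): equals lam at |w| = lam
    # (continuity with the dead zone) and tends to 0 as |w| grows.
    if abs(w) <= lam:
        return 0.0
    return math.copysign(abs(w) - lam * math.exp(-alpha * (abs(w) - lam)), w)
```

The hard rule keeps large coefficients intact but is discontinuous at ±lam; the soft rule is continuous but biased; compromise functions of this kind aim at both properties.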


1993 ◽  
Vol 3 (4) ◽  
Author(s):  
A.A. Irmatov

A Boolean function is called a threshold function if its truth domain is a part of the n-cube cut off by some hyperplane. The number of threshold functions of n variables, P(2, n), was estimated in [1, 2, 3]. Obtaining the lower bounds is a problem of special difficulty. Using a result of the paper [4], Zuev in [3] showed that for sufficiently large n, P(2, n) > 2. In the present paper a new proof which gives a more precise lower bound of P(2, n) is proposed; namely, it is proved that for sufficiently large n, P(2, n) > 2
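For very small n, P(2, n) can be found by brute force. A sketch for n = 2: weights in {-1, 0, 1} with half-integer thresholds realize every threshold function of two variables, giving P(2, 2) = 14 of the 16 Boolean functions (XOR and XNOR are the two non-threshold ones):

```python
from itertools import product

# Brute-force illustration for n = 2: enumerate small weight/threshold
# choices and collect the distinct Boolean functions f(x) = [w . x >= t]
# they realize. Half-integer thresholds avoid ties on the integer dot
# products.
def threshold_functions(n=2):
    points = list(product((0, 1), repeat=n))
    realized = set()
    thresholds = [k + 0.5 for k in range(-n - 1, n + 1)]
    for w in product((-1, 0, 1), repeat=n):
        for t in thresholds:
            table = tuple(int(sum(wi * xi for wi, xi in zip(w, x)) >= t)
                          for x in points)
            realized.add(table)
    return realized
```

This exhaustive approach collapses immediately for growing n, which is exactly why asymptotic lower bounds on P(2, n) are the object of the paper.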


2005 ◽  
Vol DMTCS Proceedings vol. AE,... (Proceedings) ◽  
Author(s):  
Kazuyuki Amano ◽  
Jun Tarui

Let $T_t$ denote the $t$-threshold function on the $n$-cube: $T_t(x) = 1$ if $|\{i : x_i=1\}| \geq t$, and $0$ otherwise. Define the distance between Boolean functions $g$ and $h$, $d(g,h)$, to be the number of points on which $g$ and $h$ disagree. We consider the following extremal problem: Over a monotone Boolean function $g$ on the $n$-cube with $s$ zeros, what is the maximum of $d(g,T_t)$? We show that the following monotone function $p_s$ maximizes the distance: For $x \in \{0,1\}^n$, $p_s(x)=0$ if and only if $N(x) < s$, where $N(x)$ is the integer whose $n$-bit binary representation is $x$. Our result generalizes the previous work for the case $t=\lceil n/2 \rceil$ and $s=2^{n-1}$ by Blum, Burch, and Langford [BBL98-FOCS98], who considered the problem to analyze the behavior of a learning algorithm for monotone Boolean functions, and the previous work for the same $t$ and $s$ by Amano and Maruoka [AM02-ALT02].
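The extremal claim can be checked exhaustively for tiny n. A sketch for n = 3, t = 2, s = 4 (a sanity check of the statement, not the general proof):

```python
from itertools import product

# Verify, for n = 3, t = 2, s = 4, that p_s attains the maximum of d(g, T_t)
# over all monotone Boolean functions g with exactly s zeros.
def check_extremal(n=3, t=2, s=4):
    cube = list(product((0, 1), repeat=n))
    T = {x: int(sum(x) >= t) for x in cube}

    def monotone(g):
        return all(g[x] <= g[y] for x in cube for y in cube
                   if all(a <= b for a, b in zip(x, y)))

    def dist(g):
        return sum(g[x] != T[x] for x in cube)

    # Enumerate every monotone g with exactly s zeros and take the max distance.
    best = max(dist(dict(zip(cube, vals)))
               for vals in product((0, 1), repeat=2 ** n)
               if list(vals).count(0) == s
               and monotone(dict(zip(cube, vals))))

    N = lambda x: int("".join(map(str, x)), 2)
    p_s = {x: int(N(x) >= s) for x in cube}
    return dist(p_s), best
```

For these parameters p_s is the function "first bit of x", and both it and the exhaustive maximum give distance 2.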


2011 ◽  
Vol 90-93 ◽  
pp. 2858-2863
Author(s):  
Wei Li ◽  
Xu Wang

The soft and hard threshold functions have shortcomings that reduce wavelet de-noising performance. To solve this problem, this article proposes a modulus-square approach. The new approach avoids the discontinuity of the hard threshold function and also decreases the fixed bias between the estimated wavelet coefficients and the true wavelet coefficients that the soft-threshold method introduces. Simulation results show that the SNR and MSE are better than those obtained by simply using the soft or hard threshold, giving a good de-noising effect in deformation monitoring.
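One common form of a modulus-square threshold function found in the wavelet literature (a sketch under that assumption; the abstract does not give the authors' exact formula):

```python
import math

# A common modulus-square shrinkage rule (assumed form, not necessarily the
# authors' exact function): w_hat = sign(w) * sqrt(w^2 - lam^2) for |w| > lam,
# else 0. It is continuous at |w| = lam (unlike the hard rule), and its bias
# |w| - sqrt(w^2 - lam^2) vanishes as |w| grows (unlike the soft rule's
# fixed bias lam).
def modulus_square_threshold(w, lam):
    if abs(w) <= lam:
        return 0.0
    return math.copysign(math.sqrt(w * w - lam * lam), w)
```

For example, with lam = 3 a coefficient of 5 shrinks only to 4 (bias 1 < lam), while the soft rule would shrink it to 2.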


2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Zhuxiang Shen ◽  
Wei Li ◽  
Hui Han

To explore the utilization of the convolutional neural network (CNN) and wavelet transform in ultrasonic image denoising and the influence of the optimized wavelet threshold function (WTF) algorithm on image denoising, in this exploration, first, the imaging principle of ultrasound images is studied. Due to the limitation of the principle of ultrasound imaging, the inherent speckle noise will seriously affect the quality of ultrasound images. The denoising principle of the WTF based on the wavelet transform is analyzed. Based on the traditional threshold function algorithm, the optimized WTF algorithm is proposed and applied to the simulation experiment of ultrasound images. By comparing quantitatively and qualitatively with the traditional threshold function algorithm, the advantages of the optimized WTF algorithm are analyzed. After denoising with the optimized WTF, the mean square error (MSE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM) of the images are 20.796, 34.294 dB, and 0.672, respectively. The denoising effect is better than the traditional threshold function. It can denoise the image to the maximum extent without losing the image information. In addition, in this exploration, the optimized function is applied to actual medical image processing, and ultrasound images of arteries and kidneys are denoised separately. It is found that the quality of the denoised image is better than that of the original image, and the extraction of effective information is more accurate. In summary, the optimized WTF algorithm can not only remove a lot of noise but also obtain better visual effect. It has important value in assisting doctors in disease diagnosis, so it can be widely applied in clinics.
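The two scalar quality metrics reported above are standard and easy to state precisely (SSIM is omitted here for brevity; it is a windowed statistic, not a single formula over pixels):

```python
import math

# Standard definitions of the reported metrics: MSE is a plain squared error
# (not in dB); PSNR is in dB relative to the peak intensity value.
def mse(a, b):
    n = len(a) * len(a[0])
    return sum((x - y) ** 2
               for ra, rb in zip(a, b)
               for x, y in zip(ra, rb)) / n

def psnr(a, b, peak=255.0):
    e = mse(a, b)
    return float("inf") if e == 0 else 10.0 * math.log10(peak * peak / e)
```

Lower MSE and higher PSNR/SSIM indicate better denoising, which is the direction of the comparison made in the abstract.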


2018 ◽  
Vol 115 (38) ◽  
pp. E8939-E8947 ◽  
Author(s):  
Hesham M. Shehata ◽  
Shahzada Khan ◽  
Elise Chen ◽  
Patrick E. Fields ◽  
Richard A. Flavell ◽  
...  

Identifying novel pathways that promote robust function and longevity of cytotoxic T cells has promising potential for immunotherapeutic strategies to combat cancer and chronic infections. We show that sprouty 1 and 2 (Spry1/2) molecules regulate the survival and function of memory CD8+ T cells. Spry1/2 double-knockout (DKO) ovalbumin (OVA)-specific CD8+ T cells (OT-I cells) mounted more vigorous autoimmune diabetes than WT OT-I cells when transferred to mice expressing OVA in their pancreatic β-islets. To determine the consequence of Spry1/2 deletion on effector and memory CD8+ T cell development and function, we used systemic infection with lymphocytic choriomeningitis virus (LCMV) Armstrong. Spry1/2 DKO LCMV gp33-specific P14 CD8+ T cells survive contraction better than WT cells and generate significantly more polyfunctional memory T cells. The larger number of Spry1/2 DKO memory T cells displayed enhanced infiltration into infected tissue, demonstrating that absence of Spry1/2 can result in increased recall capacity. Upon adoptive transfer into naive hosts, Spry1/2 DKO memory T cells controlled Listeria monocytogenes infection better than WT cells. The enhanced formation of more functional Spry1/2 DKO memory T cells was associated with significantly reduced mTORC1 activity and glucose uptake. Reduced p-AKT, p-FoxO1/3a, and T-bet expression was also consistent with enhanced survival and memory accrual. Collectively, loss of Spry1/2 enhances the survival of effector CD8+ T cells and results in the formation of more protective memory cells. Deleting Spry1/2 in antigen-specific CD8+ T cells may have therapeutic potential for enhancing the survival and functionality of effector and memory CD8+ T cells in vivo.


1995 ◽  
Vol 27 (01) ◽  
pp. 161-184 ◽  
Author(s):  
Béla Bollobás ◽  
Graham Brightwell

The random k-dimensional partial order P_k(n) on n points is defined by taking n points uniformly at random from [0,1]^k. Previous work has concentrated on the case where k is constant; we consider the model where k increases with n. We pay particular attention to the height H_k(n) of P_k(n). We show that k = (t/log t!) log n is a sharp threshold function for the existence of a t-chain in P_k(n): if k − (t/log t!) log n tends to +∞ then the probability that P_k(n) contains a t-chain tends to 0, whereas if the quantity tends to −∞ then the probability tends to 1. We describe the behaviour of H_k(n) for the entire range of k(n). We also consider the maximum degree of P_k(n). We show that, for each fixed d ≥ 2, there is a threshold function for the appearance of an element of degree d; the maximum degree thus undergoes very rapid growth near this value of k. We make some remarks on the existence of threshold functions in general, and give some bounds on the dimension of P_k(n) for large k(n).
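The model and the height statistic can be simulated directly. A minimal sketch: sample n points from [0,1]^k, order them by coordinatewise dominance, and compute the longest chain by dynamic programming:

```python
import random

# Sample P_k(n) and compute its height H_k(n) (length of the longest chain
# under coordinatewise dominance) by longest-path DP. Sorting the points
# lexicographically guarantees that any point dominating another appears
# later, so one forward pass suffices.
def height(n=50, k=3, seed=0):
    rng = random.Random(seed)
    pts = sorted(tuple(rng.random() for _ in range(k)) for _ in range(n))
    h = [1] * n
    for j in range(n):
        for i in range(j):
            if all(a <= b for a, b in zip(pts[i], pts[j])):
                h[j] = max(h[j], h[i] + 1)
    return max(h)
```

As a sanity check, k = 1 is a total order (with probability 1), so the height equals n; increasing k makes comparability rarer and the height smaller, which is the regime the paper's threshold results describe.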


1992 ◽  
Vol 03 (01) ◽  
pp. 19-30 ◽  
Author(s):  
AKIRA NAMATAME ◽  
YOSHIAKI TSUKAMOTO

We propose a new learning algorithm, structural learning with complementary coding, for concept learning problems. We introduce a new grouping measure that forms the similarity matrix over the training set and show that this similarity matrix provides a sufficient condition for the linear separability of the set. Using the sufficient condition, one can determine a suitable composition of linearly separable threshold functions that exactly classifies the set of labeled vectors. In the case of nonlinear separability, the internal representation of the connectionist network (the number of hidden units and the value space of these units) is pre-determined before learning, based on the structure of the similarity matrix. A three-layer neural network is then constructed where each linearly separable threshold function is computed by a linear-threshold unit whose weights are determined by the one-shot learning algorithm, which requires a single presentation of the training set. The structural learning algorithm proceeds to capture the connection weights so as to realize the pre-determined internal representation. The pre-structured internal representation, the activation value spaces at the hidden layer, defines intermediate concepts. The target concept is then learned as a combination of those intermediate concepts. The ability to create the pre-structured internal representation based on the grouping measure distinguishes structural learning from earlier methods such as backpropagation.
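The building block of the network above is the linear-threshold unit. As a minimal illustration of such a unit being fit to a linearly separable labeled set (this uses standard perceptron updates, not the authors' one-shot algorithm, which the abstract does not specify):

```python
# Minimal linear-threshold-unit illustration: perceptron updates converge on
# any linearly separable labeled set. NOT the paper's one-shot algorithm.
def train_ltu(samples, epochs=100, lr=1.0):
    """samples: list of (x, y) with x a 0/1 tuple and y in {0, 1}."""
    dim = len(samples[0][0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        done = True
        for x, y in samples:
            pred = int(sum(wi * xi for wi, xi in zip(w, x)) + b >= 0)
            if pred != y:
                done = False
                for i in range(dim):
                    w[i] += lr * (y - pred) * x[i]
                b += lr * (y - pred)
        if done:  # a full error-free pass: the unit separates the set
            break
    return w, b
```

For a non-separable set (e.g. XOR), no single unit suffices; that is where the pre-structured hidden layer of intermediate concepts comes in.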


1994 ◽  
Vol 11 (4) ◽  
pp. 695-702 ◽  
Author(s):  
Zheng-Shi Lin ◽  
Stephen Yazulla

Increment threshold functions of the electroretinogram (ERG) b-wave were obtained from goldfish using an in vivo preparation to study intraretinal mechanisms underlying the increase in perceived brightness induced by depletion of retinal dopamine by 6-hydroxydopamine (6-OHDA). Goldfish received unilateral intraocular injections of 6-OHDA plus pargyline on successive days. Depletion of retinal dopamine was confirmed by the absence of tyrosine-hydroxylase immunoreactivity at 2 to 3 weeks postinjection as compared to sham-injected eyes from the same fish. There was no difference among normal, sham-injected, or 6-OHDA-injected eyes with regard to ERG waveform, intensity-response functions, or increment threshold functions. Dopamine-depleted eyes showed a Purkinje shift, that is, a transition from rod- to cone-dominated vision with increasing levels of adaptation. We conclude (1) dopamine-depleted eyes are capable of photopic vision; and (2) the ERG b-wave is not diagnostic for luminosity coding at photopic backgrounds. We also predict that (1) dopamine is not required for the transition from scotopic to photopic vision in goldfish; (2) the ERG b-wave in goldfish is influenced by chromatic interactions; (3) horizontal cell spinules, though correlated with photopic mechanisms in the fish retina, are not necessary for the transition from scotopic to photopic vision; and (4) the OFF pathway, not the ON pathway, is involved in the action of dopamine on luminosity coding in the retina.


2014 ◽  
Vol 574 ◽  
pp. 432-435 ◽  
Author(s):  
Jie Zhan ◽  
Zhen Xing Li

An improved wavelet thresholding method is presented and successfully applied to denoising CCD measuring images. Based on an analysis of the widely used soft and hard thresholds, and combining the characteristics of the CCD measuring image with the local correlation of wavelet coefficients, an improved threshold function is proposed, and the denoising results are compared among the different threshold functions. The simulation results show that the improved threshold function achieves a better filtering effect than the traditional soft and hard threshold methods.
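The idea of exploiting local correlation can be sketched as follows (an assumed illustrative rule, not the paper's function: coefficients whose neighborhood carries high energy are likely signal, so they face a smaller effective threshold):

```python
# Hedged sketch of correlation-aware thresholding on a 1-D coefficient array.
# The effective threshold lam^2 / (lam + sqrt(local energy)) equals lam where
# the neighborhood is quiet and shrinks toward 0 where neighbors are large.
def local_threshold(coeffs, lam, radius=1):
    out = []
    for i, w in enumerate(coeffs):
        nbrs = coeffs[max(0, i - radius): i + radius + 1]
        energy = sum(c * c for c in nbrs) / len(nbrs)
        eff = lam * lam / (lam + energy ** 0.5)
        out.append(w if abs(w) > eff else 0.0)
    return out
```

An isolated mid-sized coefficient (likely noise) is suppressed, while a coefficient of the same size sitting inside a strong neighborhood (likely an edge) survives.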

