USE OF NEURAL NETS TO MEASURE THE τ POLARIZATION AND ITS BAYESIAN INTERPRETATION

1991 ◽  
Vol 02 (03) ◽  
pp. 221-228 ◽  
Author(s):  
Lluís Garrido ◽  
Vicens Gaitan

We have tested a neural network (NN) technique as a method to determine the helicity of the τ particles in the process e+e−→(Z0, γ*)→τ+τ−→(ρν)(ρν). It naturally takes into account the fact that the two taus have different helicities and gives efficiencies comparable to the Bayesian method. We have found this “academic” example a nice way to introduce the analytical interpretation of the net output, showing that these neural network techniques are equivalent to a Bayesian decision rule.
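As a concrete illustration of that equivalence, the sketch below (a toy setup with assumed one-dimensional Gaussian classes, not the paper's τ data) trains a small classifier network on 0/1 targets and compares its output with the analytic Bayes posterior P(class = 1 | x):

```python
# A minimal sketch (not the paper's code) of the Bayesian interpretation of a
# classifier net's output: trained on 0/1 targets, the output approximates the
# posterior probability P(class = 1 | x).
import numpy as np
from sklearn.neural_network import MLPClassifier
from scipy.stats import norm

rng = np.random.default_rng(0)

# Two hypothetical 1-D classes, modelled here as unit-variance Gaussians.
n = 5000
x0 = rng.normal(-1.0, 1.0, n)          # class 0
x1 = rng.normal(+1.0, 1.0, n)          # class 1
X = np.concatenate([x0, x1]).reshape(-1, 1)
y = np.concatenate([np.zeros(n), np.ones(n)])

net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(X, y)

# Analytic Bayes posterior for equal priors: p1 / (p0 + p1).
xs = np.linspace(-4, 4, 9).reshape(-1, 1)
p0 = norm.pdf(xs, -1.0, 1.0)
p1 = norm.pdf(xs, +1.0, 1.0)
bayes_posterior = (p1 / (p0 + p1)).ravel()
net_output = net.predict_proba(xs)[:, 1]

for x, b, o in zip(xs.ravel(), bayes_posterior, net_output):
    print(f"x={x:+.1f}  Bayes P(1|x)={b:.3f}  net output={o:.3f}")
```

With enough training data the two columns should agree closely, which is the content of the Bayesian interpretation discussed above.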

Doklady BGUIR ◽  
2021 ◽  
Vol 19 (7) ◽  
pp. 13-21
Author(s):  
V. S. Mukha

Neural networks are increasingly used in place of traditional methods for solving many problems, which calls for comparing the neural network with the traditional method on specific tasks. In this paper, computer modeling of the Bayesian decision rule and the probabilistic neural network is carried out in order to compare their operational characteristics for recognizing Gaussian patterns. Recognition of four and six patterns (classes) with the number of features ranging from 1 to 6 was simulated in cases where the patterns are well and poorly separated. The sizes of the training and test samples were chosen quite large: 500 realizations for each pattern. Characteristics such as the training time of the decision rule, the recognition time on the test sample, the recognition reliability on the test sample, and the recognition reliability on the training sample were analyzed. Under these conditions it was found that the recognition reliability on the test sample in the case of well-separated patterns is close to 100 percent for both decision rules, regardless of the number of features. For poorly separated patterns, the neural network loses 0.1–16 percent to the Bayesian decision rule in recognition reliability on the test sample. The training time of the neural network exceeds that of the Bayesian decision rule by a factor of 4–5, and the recognition time by a factor of 4–6. As a result, there are no obvious advantages of the probabilistic neural network over the Bayesian decision rule in the problem of Gaussian pattern recognition. The existing generalization of the Bayesian decision rule described in the article is an alternative to the neural network for the case of non-Gaussian patterns.
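For reference, here is a minimal sketch (synthetic Gaussian data and assumed parameters, not the paper's experiment) contrasting the Bayesian decision rule for Gaussian classes with a probabilistic neural network, the latter implemented as a per-class Parzen/Gaussian-kernel density classifier:

```python
# A minimal sketch comparing the Bayesian decision rule under the Gaussian
# assumption (quadratic discriminant analysis) with a probabilistic neural
# network, here a per-class kernel-density classifier.
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(1)
n_per_class, n_features, n_classes = 500, 3, 4

# Poorly separated Gaussian patterns: class means drawn close together.
means = rng.normal(0, 1.0, (n_classes, n_features))
X_train = np.vstack([rng.normal(m, 1.0, (n_per_class, n_features)) for m in means])
y_train = np.repeat(np.arange(n_classes), n_per_class)
X_test = np.vstack([rng.normal(m, 1.0, (n_per_class, n_features)) for m in means])
y_test = np.repeat(np.arange(n_classes), n_per_class)

# Bayesian decision rule for Gaussian classes.
qda = QuadraticDiscriminantAnalysis().fit(X_train, y_train)
print("Bayes (QDA) accuracy:", (qda.predict(X_test) == y_test).mean())

# PNN: one kernel-density estimate per class, decide by maximum density.
kdes = [KernelDensity(bandwidth=0.5).fit(X_train[y_train == c]) for c in range(n_classes)]
scores = np.column_stack([k.score_samples(X_test) for k in kdes])
print("PNN accuracy:        ", (scores.argmax(axis=1) == y_test).mean())
```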


Sensors ◽  
2019 ◽  
Vol 19 (19) ◽  
pp. 4115 ◽  
Author(s):  
Yuxia Li ◽  
Bo Peng ◽  
Lei He ◽  
Kunlong Fan ◽  
Zhenxu Li ◽  
...  

Roads are vital components of infrastructure, and their extraction has become a topic of significant interest in the field of remote sensing. Because deep learning has become a popular method in image processing and information extraction, researchers have paid increasing attention to extracting roads using neural networks. This article proposes improvements to neural networks for extracting roads from Unmanned Aerial Vehicle (UAV) remote sensing images. D-LinkNet was first considered for its high performance; however, the large scale of the network reduced computational efficiency. With a focus on this low computational efficiency, this article makes the following improvements: (1) replace the initial block with a stem block; (2) rebuild the entire network based on ResNet units with a new structure, yielding an improved neural network, D-LinkNetPlus; (3) add a 1 × 1 convolution layer before the DBlock to reduce the number of input feature maps, thereby reducing parameters and improving computational efficiency, and add another 1 × 1 convolution layer after the DBlock to recover the required number of output channels, yielding a further improved network, B-D-LinkNetPlus. Comparisons were performed between the networks, and verification was carried out on the Massachusetts Roads Dataset. The results show that the improved neural networks help reduce network size and improve the precision of road extraction.
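The channel-bottleneck idea in improvement (3) can be sketched as follows in PyTorch (a schematic stand-in for the DBlock, not the authors' released code; layer sizes are assumptions):

```python
# A schematic sketch of the bottleneck idea: a 1x1 convolution reduces the
# channel count before the dilated block, and a second 1x1 convolution
# restores it afterwards, cutting parameters inside the block.
import torch
import torch.nn as nn

class BottleneckedDilationBlock(nn.Module):
    def __init__(self, channels: int, reduced: int):
        super().__init__()
        self.reduce = nn.Conv2d(channels, reduced, kernel_size=1)   # fewer input maps
        self.dblock = nn.Sequential(                                # stand-in for DBlock
            nn.Conv2d(reduced, reduced, 3, padding=1, dilation=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(reduced, reduced, 3, padding=2, dilation=2),
            nn.ReLU(inplace=True),
        )
        self.restore = nn.Conv2d(reduced, channels, kernel_size=1)  # recover channels

    def forward(self, x):
        return self.restore(self.dblock(self.reduce(x)))

x = torch.randn(1, 512, 32, 32)
print(BottleneckedDilationBlock(512, 128)(x).shape)  # torch.Size([1, 512, 32, 32])
```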


Neural networks are implemented for fault diagnosis to improve the dependability of the proposed scheme by providing a more accurate and faster diagnosis relaying scheme than conventional relaying schemes. It is important to improve relaying schemes with respect to the shortcomings of the system and to increase system dependability through the proposed scheme, which is also more accurate, faster, and more selective than the conventional one. The survey collected data through a literature review of journals, books, newspapers, and magazines, as well as field work; additional data were collected from researchers working in this field. To achieve optimum results, the following aspects must be improved: (i) training time, (ii) selection of training vectors, and (iii) upgrading of trained neural nets and integration of technologies. AI, with its promise of adaptive training and generalization, deserves further scope. As a result, we obtain a system that is more reliable, accurate, fast, dependable, and selective under the proposed relaying scheme than under the conventional one. This system helps reduce shortcomings such as major faults encountered in complex transmission-line systems, reducing human effort and the cost of maintaining the transmission system.


2020 ◽  
pp. 74-80
Author(s):  
Philippe Schweizer ◽  

We would like to show the small distance between neutrosophy applications in the sciences and in the humanities, as both ultimately consider a human as the terminal user. The pace of data production continues to grow, leading to increased needs for efficient storage and transmission. Indeed, this information is preferably consumed on mobile terminals, over connections billed to the user, with only limited storage capacity. Deep learning neural networks have recently exceeded the compression rates of algorithmic techniques for text. We believe that they can also significantly challenge classical methods for both audio and visual data (images and videos). To obtain the best physiological compression, i.e. the highest compression ratio because it comes closest to the specificity of human perception, we propose using a neutrosophic representation of the information for the entire compression-decompression cycle. Such a representation attaches to each elementary piece of information a simple neutrosophic number that informs the neural network about its characteristics relevant to compression during this processing. Such a neutrosophic number is in fact a triplet (t, i, f) representing the membership of the element in the three constituent components of information in compression: 1° t = the true, significant part to be preserved; 2° i = the indeterminate, redundant part or noise to be eliminated in compression; and 3° f = the false artifacts produced in the compression process (to be compensated). The complexity of human perception, and the subtle niches of its defects that one seeks to exploit, requires a detailed and complex mapping that a neural network can produce better than any other algorithmic solution, and deep learning networks have proven their ability to produce a detailed boundary surface in classifiers.
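A purely illustrative sketch of the proposed representation (field and variable names are assumptions, not the authors' specification) attaches the (t, i, f) triplet to each elementary piece of information before it is fed to the compression network:

```python
# Illustrative only: a neutrosophic triplet (t, i, f) attached to an
# elementary piece of information entering the compression network.
from dataclasses import dataclass

@dataclass
class NeutrosophicSample:
    value: float  # the elementary piece of information (e.g. a pixel or audio sample)
    t: float      # true, significant part to preserve
    i: float      # indeterminate, redundant part or noise to eliminate
    f: float      # false artifacts produced by compression, to be compensated

# Example: a pixel judged 80% significant, 15% redundant, 5% artifact-prone.
pixel = NeutrosophicSample(value=0.62, t=0.80, i=0.15, f=0.05)
network_input = (pixel.value, pixel.t, pixel.i, pixel.f)  # fed jointly to the net
print(network_input)
```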


2004 ◽  
Vol 98 (2) ◽  
pp. 371-378 ◽  
Author(s):  
SCOTT DE MARCHI ◽  
CHRISTOPHER GELPI ◽  
JEFFREY D. GRYNAVISKI

Beck, King, and Zeng (2000) offer both a sweeping critique of the quantitative security studies field and a bold new direction for future research. Despite important strengths in their work, we take issue with three aspects of their research: (1) the substance of the logit model they compare to their neural network, (2) the standards they use for assessing forecasts, and (3) the theoretical and model-building implications of the nonparametric approach represented by neural networks. We replicate and extend their analysis by estimating a more complete logit model and comparing it both to a neural network and to a linear discriminant analysis. Our work reveals that neural networks do not perform substantially better than either the logit or the linear discriminant estimators. Given this result, we argue that more traditional approaches should be relied upon due to their enhanced ability to test hypotheses.
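The comparison methodology can be sketched as follows (synthetic rare-event data, not the authors' replication of Beck, King, and Zeng): fit a logit model, a linear discriminant analysis, and a neural network on the same training split and score them on the same out-of-sample forecasts.

```python
# A schematic sketch of the three-way model comparison on a common
# out-of-sample forecasting task, using synthetic rare-event data.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=8, weights=[0.95, 0.05],
                           random_state=0)          # rare-event style outcome
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "logit": LogisticRegression(max_iter=1000),
    "LDA": LinearDiscriminantAnalysis(),
    "neural net": MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0),
}
for name, m in models.items():
    print(name, "out-of-sample accuracy:", m.fit(X_tr, y_tr).score(X_te, y_te))
```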


2020 ◽  
Vol 29 (05) ◽  
pp. 2050011
Author(s):  
Anargyros Angeleas ◽  
Nikolaos Bourbakis

Within this paper, we present two neural nets for view-independent complex human activity recognition (HAR) from video frames. For our study here, we reduce the number of frames produced by a video sequence, given that we can identify activities from a sparsely sampled sequence of body poses; at the same time, we reduce the processing complexity and response time while hardly affecting the accuracy, precision, and recall. To do so, we use a formal framework to ensure the quality of data collection and data preprocessing. We utilize neural networks for the classification of single and complex body activities. More specifically, we treat the sequence of body poses as a time-series problem, given that such models can provide state-of-the-art results on challenging recognition tasks with little data engineering. Deep learning models in the form of a Convolutional Neural Network (CNN), a Long Short-Term Memory network (LSTM), and a one-dimensional Convolutional Neural Network Long Short-Term Memory model (CNN-LSTM) are used as benchmarks to classify the activity.
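A minimal PyTorch sketch of the CNN-LSTM benchmark over pose sequences (architecture sizes are assumptions, not the authors' exact models) looks as follows:

```python
# A minimal sketch of a one-dimensional CNN-LSTM classifier over a sequence
# of body poses, each pose being a flat feature vector.
import torch
import torch.nn as nn

class CnnLstmHAR(nn.Module):
    def __init__(self, pose_dim=34, n_activities=10):
        super().__init__()
        self.conv = nn.Sequential(                     # local temporal patterns
            nn.Conv1d(pose_dim, 64, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.lstm = nn.LSTM(64, 128, batch_first=True) # long-range dynamics
        self.head = nn.Linear(128, n_activities)

    def forward(self, poses):                          # poses: (batch, time, pose_dim)
        h = self.conv(poses.transpose(1, 2)).transpose(1, 2)
        _, (h_n, _) = self.lstm(h)
        return self.head(h_n[-1])                      # activity logits

logits = CnnLstmHAR()(torch.randn(4, 30, 34))          # 4 clips, 30 sampled poses
print(logits.shape)                                    # torch.Size([4, 10])
```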


2021 ◽  
Vol 40 (2) ◽  
pp. 1-19
Author(s):  
Ethan Tseng ◽  
Ali Mosleh ◽  
Fahim Mannan ◽  
Karl St-Arnaud ◽  
Avinash Sharma ◽  
...  

Most modern commodity imaging systems, whether used directly for photography or relied on indirectly for downstream applications, employ optical systems of multiple lenses that must balance deviations from perfect optics, manufacturing constraints, tolerances, cost, and footprint. Although optical designs often have complex interactions with downstream image processing or analysis tasks, today’s compound optics are designed in isolation from these interactions. Existing optical design tools aim to minimize optical aberrations, such as deviations from Gauss’ linear model of optics, instead of application-specific losses, precluding joint optimization with hardware image signal processing (ISP) and highly parameterized neural network processing. In this article, we propose an optimization method for compound optics that lifts these limitations. We optimize entire lens systems jointly with hardware and software image processing pipelines, downstream neural network processing, and application-specific end-to-end losses. To this end, we propose a learned, differentiable forward model for compound optics and an alternating proximal optimization method that handles function compositions with highly varying parameter dimensions for optics, hardware ISP, and neural nets. Our method integrates seamlessly atop existing optical design tools, such as Zemax. We can thus assess our method across many camera system designs and end-to-end applications. We validate our approach in an automotive camera optics setting, together with hardware ISP post-processing and detection, outperforming classical optics designs for automotive object detection and traffic light state detection. For human viewing tasks, we optimize optics and processing pipelines for dynamic outdoor scenarios and dynamic low-light imaging. We outperform existing compartmentalized design or fine-tuning methods qualitatively and quantitatively, across all domain-specific applications tested.
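As a highly simplified stand-in for the method (a toy differentiable "optic" with one learnable blur width and a small restoration net, not the authors' alternating proximal optimizer or their lens model), the following sketch shows the core idea of alternating parameter updates driven by a single end-to-end loss:

```python
# Toy sketch: a differentiable forward "optic" (one learnable blur width) and
# a small restoration net share one end-to-end loss and are updated in turn.
import torch
import torch.nn as nn
import torch.nn.functional as F

def blur(img, sigma):                          # differentiable forward optic model
    xs = torch.arange(-3, 4, dtype=torch.float32)
    k = torch.exp(-(xs ** 2) / (2 * sigma ** 2))
    k = (k / k.sum()).view(1, 1, 1, 7)
    img = F.conv2d(img, k, padding=(0, 3))     # horizontal pass
    return F.conv2d(img, k.transpose(2, 3), padding=(3, 0))  # vertical pass

log_sigma = torch.tensor(0.5, requires_grad=True)            # "lens" parameter
net = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(8, 1, 3, padding=1))           # ISP / restoration net
opt_lens = torch.optim.Adam([log_sigma], lr=1e-2)
opt_net = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(200):
    target = torch.rand(8, 1, 32, 32)                        # stand-in scene batch
    recon = net(blur(target, log_sigma.exp()))
    loss = F.mse_loss(recon, target)                         # end-to-end loss
    opt, other = (opt_lens, opt_net) if step % 2 == 0 else (opt_net, opt_lens)
    opt.zero_grad(); other.zero_grad()
    loss.backward()
    opt.step()                                               # alternate the updates
print("final loss:", loss.item(), "sigma:", log_sigma.exp().item())
```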


Author(s):  
Pong-Jeu Lu ◽  
Ming-Chuan Zhang ◽  
Tzu-Cheng Hsu ◽  
Jin Zhang

Application of an artificial neural network (ANN)-based method to perform engine condition monitoring and fault diagnosis is evaluated. Back-propagation feedforward neural nets are employed for constructing the engine diagnostic networks. Noise-containing training and testing data are generated using an influence coefficient matrix and the data scatters. The results indicate that under high-level noise conditions ANN fault diagnosis can only achieve a 50–60% success rate. For situations where sensor scatters are comparable to those of normal engine operation, the success rates for both 4-input and 8-input ANN diagnoses achieve high scores which satisfy the minimum 90% requirement. It is surprising to find that the success rate of the 4-input diagnosis is almost as good as that of the 8-input one. Although the ANN-based method possesses a certain capability to resist the influence of input noise, it is found that a preprocessor that can perform sensor data validation is of paramount importance. An auto-associative neural network (AANN) is introduced to reduce the noise level in the data. It is shown that the noise can be greatly filtered out, resulting in a higher success rate of diagnosis. This AANN data-validation preprocessor can also serve as an instant trend detector, which greatly improves on current smoothing methods for trend detection. It is concluded that the ANN-based fault diagnostic method has great potential for future use. However, further investigations using actual engine data have to be done to validate the present findings.
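The AANN preprocessor idea can be sketched as follows (synthetic correlated sensor channels, not the paper's engine data): a narrow-bottleneck autoencoder is trained to reproduce its own inputs, and because the bottleneck cannot carry the noise, its output is a filtered sensor vector.

```python
# A minimal sketch of an auto-associative neural network used as a
# sensor-validation preprocessor: the bottleneck forces the net onto the
# low-dimensional sensor manifold, filtering out much of the noise.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
latent = rng.normal(size=(2000, 2))                    # true engine state (2 dof)
mixing = rng.normal(size=(2, 8))
clean = np.tanh(latent @ mixing)                       # 8 correlated sensor readings
noisy = clean + rng.normal(scale=0.2, size=clean.shape)

aann = MLPRegressor(hidden_layer_sizes=(16, 2, 16), max_iter=3000, random_state=0)
aann.fit(noisy, noisy)                                 # auto-associative: input -> input

filtered = aann.predict(noisy)
print("raw noise RMS:     ", np.sqrt(((noisy - clean) ** 2).mean()))
print("filtered noise RMS:", np.sqrt(((filtered - clean) ** 2).mean()))
```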


Author(s):  
Yang Zeng

Due to the flexibility and feasibility of addressing ill-posed problems, the Bayesian method has been widely used in inverse heat conduction problems (IHCPs). However, in real science and engineering IHCPs, the likelihood function of the Bayesian method is commonly computationally expensive or analytically unavailable. In this study, in order to circumvent this intractable likelihood function, approximate Bayesian computation (ABC) is extended to IHCPs. In ABC, the high-dimensional observations in the intractable likelihood function are replaced by their low-dimensional summary statistics. Thus, the performance of ABC depends on the selection of summary statistics. In this study, a machine learning-based ABC (ML-ABC) is proposed to address the complicated selection of summary statistics. The Auto-Encoder (AE) is a powerful Machine Learning (ML) framework which can compress the observations into very low-dimensional summary statistics with little information loss. In addition, in order to accelerate the calculation of the proposed framework, another neural network (NN) is utilized to construct the mapping between the unknowns and the summary statistics. With this mapping, given arbitrary unknowns, the summary statistics can be obtained efficiently without solving the time-consuming forward problem with a numerical method. Furthermore, an adaptive nested sampling method (ANSM) is developed to further improve the efficiency of sampling. The performance of the proposed method is demonstrated with two IHCP cases.
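A schematic rejection-ABC sketch (toy exponential-decay forward model and hand-picked summary statistics standing in for the auto-encoder, not the paper's IHCP solver or ANSM sampler) illustrates the accept/reject step on summary statistics:

```python
# A schematic rejection-ABC loop: candidate unknowns are accepted when the
# summary statistics of their simulated observations fall close to the
# summary statistics of the measured observations.
import numpy as np

rng = np.random.default_rng(3)

def forward(theta, t):                         # stand-in forward model (not a real IHCP)
    return np.exp(-theta * t) + rng.normal(scale=0.01, size=t.shape)

def encoder(obs):                              # stand-in for the auto-encoder summary:
    return np.array([obs.mean(), obs[-1]])     # here just two hand-picked statistics

t = np.linspace(0, 1, 100)
theta_true = 2.0
s_obs = encoder(forward(theta_true, t))

accepted = []
for _ in range(20000):                         # rejection ABC loop
    theta = rng.uniform(0.0, 5.0)              # draw a candidate from the prior
    s_sim = encoder(forward(theta, t))
    if np.linalg.norm(s_sim - s_obs) < 0.02:   # accept if summaries are close
        accepted.append(theta)

print("posterior mean estimate:", np.mean(accepted), "from", len(accepted), "samples")
```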

