3-D OBJECT CLASSIFICATION: APPLICATION OF A CONSTRUCTIVE ALGORITHM

1991 ◽  
Vol 02 (04) ◽  
pp. 275-282 ◽  
Author(s):  
Neil Burgess ◽  
Mario Notturno Granieri ◽  
Stefano Patarnello

A system for the classification of real 3-D objects is presented. Ten objects are shown in arbitrary orientation (and position, within limits). An object is perceived through multiple stereo pairs of images taken from different viewing positions. The spectrum of distances between edge-points perceived on an object is classified using a constructive algorithm, for which convergence to zero errors on the set of training examples is guaranteed. The generalization capability is tested on a set of 10–15 novel presentations of each object.
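The distance spectrum works as a pose-invariant signature because inter-point distances do not change under rotation or translation. The paper's exact binning is not given, so the sketch below is a minimal illustration of the idea: a normalized histogram of all pairwise distances between 3-D edge-points.

```python
import numpy as np

def distance_spectrum(points, bins=16, r_max=10.0):
    """Histogram of pairwise distances between the edge-points perceived
    on an object. Inter-point distances are invariant to rotation and
    translation, so the spectrum does not depend on the object's pose.
    (bins and r_max are illustrative choices, not the paper's.)"""
    diffs = points[:, None, :] - points[None, :, :]
    d = np.sqrt((diffs ** 2).sum(axis=-1))
    d = d[np.triu_indices(len(points), k=1)]      # each pair counted once
    hist, _ = np.histogram(d, bins=bins, range=(0.0, r_max))
    return hist / hist.sum()                      # normalised spectrum
```

Rotating and translating the same point cloud leaves the spectrum unchanged, which is what makes it usable for classification across arbitrary presentations.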

2010 ◽  
Vol 7 (2) ◽  
pp. 366-370 ◽  
Author(s):  
Sheng Xu ◽  
Tao Fang ◽  
Deren Li ◽  
Shiwei Wang

1994 ◽  
Vol 05 (01) ◽  
pp. 59-66 ◽  
Author(s):  
Neil Burgess

A constructive algorithm is presented which combines the architecture of Cascade Correlation and the training of perceptron-like hidden units with the specific error-correcting roles of Upstart. Convergence to zero errors is proved for any consistent classification of real-valued pattern vectors. Addition of one extra element to each pattern allows hyper-spherical decision regions and enables convergence on real-valued inputs for existing constructive algorithms. Simulations demonstrate robust convergence and economical construction of hidden units in the benchmark “N-bit parity” and “twin spirals” problems.
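The "one extra element per pattern" trick can be made concrete: appending the squared norm ||x||² to each input lets a single linear threshold unit carve out a hyper-spherical region in the original space. The sketch below uses a plain perceptron rather than the paper's specific constructive units, purely to demonstrate why the augmented representation makes a circular class boundary linearly separable.

```python
import numpy as np

def augment(X):
    """Append ||x||^2 as one extra element per pattern: a linear
    threshold unit on the augmented input then realises a
    hyper-spherical decision region in the original space."""
    return np.hstack([X, (X ** 2).sum(axis=1, keepdims=True)])

def train_perceptron(X, y, epochs=500):
    """Plain perceptron learning on labels y in {-1, +1}; stands in
    for the perceptron-like hidden-unit training described above."""
    Xb = np.hstack([X, np.ones((len(X), 1))])     # bias input
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        errors = 0
        for xi, yi in zip(Xb, y):
            if yi * (w @ xi) <= 0:                # misclassified
                w += yi * xi
                errors += 1
        if errors == 0:                           # zero training errors
            break
    return w, errors
```

On a class defined by "inside vs. outside the unit circle", the raw 2-D patterns are not linearly separable, but the augmented patterns are, since the plane r² = 1 separates them.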


2016 ◽  
Author(s):  
Tanel Pärnamaa ◽  
Leopold Parts

High-throughput microscopy of many single cells generates high-dimensional data that are far from straightforward to analyze. One important problem is automatically detecting the cellular compartment where a fluorescently tagged protein resides, a task that is relatively simple for an experienced human but difficult to automate. Here, we train an 11-layer neural network on data from mapping thousands of yeast proteins, achieving per-cell localization classification accuracy of 91% and per-protein accuracy of 99% on held-out images. We confirm that low-level network features correspond to basic image characteristics, while deeper layers separate localization classes. Using this network as a feature calculator, we train standard classifiers that assign proteins to previously unseen compartments after observing only a small number of training examples. Our results are the most accurate subcellular localization classifications to date, and demonstrate the usefulness of deep learning for high-throughput microscopy.
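The "network as a feature calculator" workflow can be sketched without the network itself: take the penultimate-layer activations as feature vectors and fit a simple classifier on a handful of examples per new compartment. The nearest-centroid classifier below is one example of such a "standard classifier" (the abstract does not specify which was used), and the feature vectors here are synthetic stand-ins for the real activations.

```python
import numpy as np

class NearestCentroid:
    """A standard classifier trained on deep-network features: with only
    a few labelled examples per previously unseen compartment, each
    class is summarised by the mean of its feature vectors."""
    def fit(self, feats, labels):
        self.classes_ = sorted(set(labels))
        labels = np.asarray(labels)
        self.centroids_ = np.stack(
            [feats[labels == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, feats):
        dists = np.linalg.norm(
            feats[:, None, :] - self.centroids_[None, :, :], axis=2)
        return [self.classes_[i] for i in dists.argmin(axis=1)]
```

Because the deep features already separate localization classes, even this minimal classifier can extend the system to compartments the network never saw during training.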


2020 ◽  
Vol 12 (18) ◽  
pp. 3020 ◽  
Author(s):  
Piotr Szymak ◽  
Paweł Piskur ◽  
Krzysztof Naus

Video image processing and object classification using a Deep Learning Neural Network (DLNN) can significantly increase the autonomy of underwater vehicles. This paper describes the results of a project focused on using DLNNs for Object Classification in Underwater Video (OCUV), implemented in a Biomimetic Underwater Vehicle (BUV). The BUV is intended to detect underwater mines, explore shipwrecks, or observe the corrosion of munitions abandoned on the seabed after World War II. Here, pretrained DLNNs were used for classification of the following types of objects: fishes, underwater vehicles, divers and obstacles. The results of our research enabled us to estimate the effectiveness of using pretrained DLNNs for classifying different objects in the complex Baltic Sea environment. A Genetic Algorithm (GA) was used to establish the tuning parameters of the DLNNs. Three different training methods were compared for AlexNet; then one training method was chosen for fifteen networks, and the tests were performed, with a description of the final results. The DLNNs were trained on servers with six medium-class Graphics Processing Units (GPUs). Finally, the trained DLNN was implemented on the Nvidia Jetson TX2 platform installed on board the BUV, and one of the networks was verified in a real environment.
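The paper does not spell out its GA's operators, so the following is a minimal sketch of GA-based hyperparameter tuning: truncation selection, midpoint crossover, and uniform mutation over real-valued tuning parameters. The toy fitness function stands in for what would, in the OCUV setting, be the validation accuracy of a DLNN trained with the candidate parameters.

```python
import random

def genetic_search(fitness, bounds, pop_size=20, generations=30,
                   mutation_rate=0.2, seed=1):
    """Minimal genetic algorithm over real-valued tuning parameters
    (illustrative operators; not the paper's exact GA)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]                 # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]   # midpoint crossover
            for i, (lo, hi) in enumerate(bounds):
                if rng.random() < mutation_rate:          # uniform mutation
                    child[i] = rng.uniform(lo, hi)
            children.append(child)
        pop = parents + children                      # elitist: parents survive
    return max(pop, key=fitness)
```

Because each DLNN training run is expensive, the appeal of a GA here is that it needs only fitness evaluations, not gradients of the tuning parameters.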


2019 ◽  
Vol 9 (1) ◽  
pp. 3 ◽  
Author(s):  
Rajesh Amerineni ◽  
Resh S. Gupta ◽  
Lalit Gupta

Two multimodal classification models aimed at enhancing object classification through the integration of semantically congruent unimodal stimuli are introduced. The feature-integrating model, inspired by multisensory integration in the subcortical superior colliculus, combines unimodal features which are subsequently classified by a multimodal classifier. The decision-integrating model, inspired by integration in primary cortical areas, classifies unimodal stimuli independently using unimodal classifiers and classifies the combined decisions using a multimodal classifier. The multimodal classifier models are implemented using multilayer perceptrons and multivariate statistical classifiers. Experiments involving the classification of noisy and attenuated auditory and visual representations of ten digits are designed to demonstrate the properties of the multimodal classifiers and to compare the performance of multimodal and unimodal classifiers. The experimental results show that the multimodal classification systems exhibit an important aspect of the “inverse effectiveness principle” by yielding significantly higher classification accuracies when compared with those of the unimodal classifiers. Furthermore, the flexibility offered by the generalized models enables the simulation and evaluation of various combinations of multimodal stimuli and classifiers under varying uncertainty conditions.
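The structural difference between the two models reduces to where the fusion happens. In the sketch below the classifiers are abstracted as functions returning per-class score vectors; the specific classifiers (multilayer perceptrons, multivariate statistical classifiers) and score semantics are placeholders, not the paper's implementations.

```python
import numpy as np

def feature_integrating(x_audio, x_visual, multimodal_clf):
    """Early fusion (superior-colliculus-inspired): concatenate the
    unimodal feature vectors, then classify the joint vector."""
    return multimodal_clf(np.concatenate([x_audio, x_visual]))

def decision_integrating(x_audio, x_visual, audio_clf, visual_clf,
                         multimodal_clf):
    """Late fusion (cortical-inspired): classify each modality
    independently, then classify the combined decision vectors."""
    decisions = np.concatenate([audio_clf(x_audio), visual_clf(x_visual)])
    return multimodal_clf(decisions)
```

Even a trivial decision combiner illustrates the benefit: when the auditory channel is attenuated and votes for the wrong digit, a cleaner visual channel can still pull the combined decision to the correct class.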


2020 ◽  
Vol 10 (6) ◽  
pp. 1999 ◽  
Author(s):  
Milica M. Badža ◽  
Marko Č. Barjaktarović

The classification of brain tumors is performed by biopsy, which is not usually conducted before definitive brain surgery. Advances in technology and machine learning can help radiologists in tumor diagnostics without invasive measures. A machine-learning algorithm that has achieved substantial results in image segmentation and classification is the convolutional neural network (CNN). We present a new CNN architecture for brain tumor classification of three tumor types. The developed network is simpler than already-existing pre-trained networks, and it was tested on T1-weighted contrast-enhanced magnetic resonance images. The performance of the network was evaluated using four approaches: combinations of two 10-fold cross-validation methods and two databases. The generalization capability of the network was tested with one of the 10-fold methods, subject-wise cross-validation, and the improvement was tested by using an augmented image database. The best result for the 10-fold cross-validation method was obtained for record-wise cross-validation on the augmented data set, and, in that case, the accuracy was 96.56%. With its good generalization capability and execution speed, the newly developed CNN architecture could serve as an effective decision-support tool for radiologists in medical diagnostics.
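The distinction between the two 10-fold methods matters because medical images from one patient are correlated: record-wise cross-validation can place scans of the same patient in both the training and the test split, while subject-wise cross-validation keeps every patient's records in a single fold. A minimal sketch of the subject-wise split (the patient identifiers and fold assignment below are illustrative, not the paper's):

```python
def subject_wise_folds(subjects, k=10):
    """Assign every record of a given subject to the same fold, so no
    patient contributes images to both the training and the test side
    of a split. Record-wise cross-validation instead shuffles records
    freely, which can let one patient's scans appear on both sides."""
    unique = list(dict.fromkeys(subjects))        # stable unique order
    fold_of = {s: i % k for i, s in enumerate(unique)}
    folds = [[] for _ in range(k)]
    for idx, s in enumerate(subjects):
        folds[fold_of[s]].append(idx)
    return folds
```

This is why the abstract reports record-wise accuracy as the best number but uses subject-wise cross-validation to test generalization: only the latter estimates performance on genuinely unseen patients.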

