Machine Learning Computer Vision Applications for Spatial AI Object Recognition in Orange County, California

2021 ◽  
Author(s):  
Kostas Alexandridis

We provide an integrated and systematic automation approach to spatial object recognition and positional detection using AI machine learning and computer vision algorithms for Orange County, California. We describe a comprehensive methodology for multi-sensor, high-resolution field data acquisition, along with post-field and pre-analysis processing tasks. We developed a series of algorithmic formulations and workflows that integrate convolutional deep neural network learning with positional estimation of detected objects in 360° equirectangular photosphere imagery. We provide application examples processing more than 800,000 cardinal directions in photosphere images across two areas in Orange County, and present detection results for stop-sign and fire-hydrant object recognition. We discuss the efficiency and effectiveness of our approach, along with broader inferences about its performance and its implications for future technological innovations, including the automation of spatial data and public asset inventories and near-real-time AI field data systems.
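The positional detection described above depends on converting a detected object's pixel location in the equirectangular photosphere to a compass bearing. A minimal sketch of that geometry, assuming the panorama's left edge aligns with the camera heading (an illustrative assumption, not the authors' exact formulation):

```python
def pixel_to_bearing(x_pixel: float, image_width: int,
                     camera_heading_deg: float) -> float:
    """Map a pixel column in an equirectangular panorama to a compass bearing.

    In an equirectangular projection the horizontal axis spans 360 degrees
    linearly, so a column converts to an angular offset from the image's
    reference direction (assumed here to be the camera heading).
    """
    offset_deg = (x_pixel / image_width) * 360.0
    return (camera_heading_deg + offset_deg) % 360.0

# Example: detection centered at column 2048 of an 8192-px-wide photosphere,
# vehicle heading 90° (east) -> bearing 180° (south).
print(pixel_to_bearing(2048, 8192, 90.0))  # -> 180.0
```

The vertical axis maps linearly to elevation in the same way, which is how a single photosphere frame yields both the bearing and the inclination of a detected object.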


2013 ◽  
pp. 896-926
Author(s):  
Mehrtash Harandi ◽  
Javid Taheri ◽  
Brian C. Lovell

Recognizing objects based on their appearance (visual recognition) is one of the most significant abilities of many living creatures. In this study, recent advances in the area of automated object recognition are reviewed; the authors specifically look into several learning frameworks and discuss how they can be utilized in solving object recognition paradigms. These include reinforcement learning, a biologically inspired machine learning technique for solving sequential decision problems, and transductive learning, a framework in which the learner observes the query data and potentially exploits its structure for classification. The authors also discuss local and global appearance models for object recognition, as well as how similarities between objects can be learnt and evaluated.


2021 ◽  
pp. 1143-1146
Author(s):  
A.V. Lysenko ◽  
M.S. Oznobikhin ◽  
E.A. Kireev ◽  
...  

Abstract. This study addresses the problem of phytoplankton classification using computer vision methods and convolutional neural networks. We created a system for automatic object recognition consisting of two parts: analysis and primary processing of phytoplankton images, and development of a neural network based on the information obtained about the images. We developed software that can detect particular objects in light-microscope images. We trained a convolutional neural network using transfer learning and determined the optimal parameters of the network and the optimal size of the training dataset. To increase accuracy for certain groups of classes, we created three neural networks with the same structure. The accuracy obtained in classifying Baikal phytoplankton with these networks was up to 80%.
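The transfer-learning setup mentioned in the abstract can be sketched in Keras: freeze a pretrained convolutional base and train only a new classification head on the microscope images. The base network, input size, and class count below are illustrative assumptions, not the authors' configuration:

```python
from tensorflow import keras

NUM_CLASSES = 10  # assumed; the study classifies Baikal phytoplankton groups

# Pretrained convolutional base; weights="imagenet" loads the transferred
# features (this is the transfer-learning step).
base = keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze: only the new head is trained

model = keras.Sequential([
    base,
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dropout(0.2),
    keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

Training three networks with this same structure, as the authors describe, amounts to repeating the head-training step on three class groupings while reusing the frozen base.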


Author(s):  
Emmanuel Udoh

Computer vision or object recognition complements human or biological vision using techniques from machine learning, statistics, scene reconstruction, indexing, and event analysis. Object recognition is an active research area that implements artificial vision in software and hardware. Some application examples are autonomous robots, surveillance, indexing databases of pictures, and human-computer interaction. This visual aid is beneficial to users, because humans remember information with greater accuracy when it is presented visually than when it originates in writing, speech, or kinesthetic form. Linguistic indexing adds another dimension to computer vision by automatically assigning words or textual descriptions to images. This augments content-based image retrieval (CBIR), which extracts or searches for digital images in large databases. According to Li and Wang (2003), most of the existing CBIR projects are general-purpose image retrieval systems that search for images visually similar to a query sketch. Current CBIR systems are incapable of assigning words automatically to images due to the inherent difficulty of recognizing numerous objects at once. This situation is stimulating several research endeavors that seek to assign text to images, thereby improving image retrieval in large databases. To enhance information processing using object recognition techniques, current research has focused on automatic linguistic indexing of digital images (ALIDI). ALIDI requires a combination of mathematical, statistical, computational, and graphical backgrounds. Many researchers have focused on various aspects of linguistic processing such as CBIR (Ghosal, Ircing, & Khudanpur, 2005; Iqbal & Aggarwal, 2002; Wang, 2001), machine learning techniques (Iqbal & Aggarwal, 2002), digital libraries (Witten & Bainbridge, 2003), and statistical modeling (Li, Gray, & Olsen, 2004; Li & Wang, 2003). 
A growing approach is the utilization of statistical models, as demonstrated by Li and Wang (2003). It entails building databases of images to be used for supervised learning. A trained system is used to recognize and identify new images within a statistical error margin. This statistical modeling approach uses a hidden Markov model to extract representative information about any category of images analyzed. However, in using computers to recognize images with textual descriptions, some researchers employ solely text-based approaches. In this article, the focus is on the computational and graphical aspects of ALIDI in a system that uses Web-based access in order to enable wider usage (Ntoulas, Chao, & Cho, 2005). This system uses image composition (primary hue and saturation) in the linguistic indexing of digital images or pictures.
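The image-composition indexing described above (primary hue and saturation) can be sketched as a normalized hue/saturation histogram used as a retrieval feature. The binning and the RGB-to-HSV conversion details below are illustrative assumptions, not the system's exact implementation:

```python
import numpy as np

def hue_saturation_signature(rgb, bins=8):
    """Summarize an image by its hue/saturation composition.

    rgb: float array in [0, 1] with shape (H, W, 3). Returns a normalized
    2-D histogram over (hue, saturation) usable as an indexing feature.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    maxc = rgb.max(axis=-1)
    minc = rgb.min(axis=-1)
    delta = maxc - minc
    # Hue via the piecewise RGB -> HSV formula, scaled to [0, 1).
    hue = np.zeros_like(maxc)
    mask = delta > 0
    idx = mask & (maxc == r)
    hue[idx] = ((g - b)[idx] / delta[idx]) % 6
    idx = mask & (maxc == g)
    hue[idx] = (b - r)[idx] / delta[idx] + 2
    idx = mask & (maxc == b)
    hue[idx] = (r - g)[idx] / delta[idx] + 4
    hue = hue / 6.0
    sat = np.where(maxc > 0, delta / np.maximum(maxc, 1e-12), 0.0)
    hist, _, _ = np.histogram2d(hue.ravel(), sat.ravel(),
                                bins=bins, range=[[0, 1], [0, 1]])
    return hist / hist.sum()
```

Comparing two images then reduces to comparing their signatures (for example with a histogram-distance metric), and textual labels can be propagated from the closest indexed images.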




2018 ◽  
Vol 7 (3.6) ◽  
pp. 229
Author(s):  
Raswitha Bandi ◽  
J Amudhavel

Nowadays, machine learning plays an important role in computer vision, object recognition, and image classification. Recognizing objects in images is an interesting problem: human beings do it easily, but computers cannot. Object recognition is also a difficult problem for traditional neural networks. To avoid these difficulties, deep neural networks, specifically TensorFlow under the Keras library, are used, improving accuracy when recognizing objects. In this paper we present object recognition using the Keras library with a TensorFlow backend. 
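A minimal example of the kind of model the paper describes: a small convolutional network built with the Keras API on a TensorFlow backend. The layer sizes, input resolution, and 10-class output are illustrative assumptions, not the paper's architecture:

```python
from tensorflow import keras

# Small CNN for image classification with Keras on the TensorFlow backend.
model = keras.Sequential([
    keras.layers.Input(shape=(64, 64, 3)),
    keras.layers.Conv2D(16, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(32, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),  # 10 assumed classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

Fitting such a model is then a single `model.fit(images, labels)` call, which is the ease of use the abstract attributes to the Keras library.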


Author(s):  
Yanting Li ◽  
Junwei Jin ◽  
Liang Zhao ◽  
Huaiguang Wu ◽  
Lijun Sun ◽  
...  

With the development of machine learning and computer vision, classification technology is becoming increasingly important. Owing to their advantages in efficiency and effectiveness, collaborative representation-based classifiers (CRC) have been applied to many practical cognitive fields. In this paper, we propose a new neighborhood-prior-constrained collaborative representation model for pattern classification. Compared with naive CRC models, which approximate the test sample with all the training data globally, our proposed methods emphasize the guidance of neighborhood priors in the coding process. Two different kinds of neighbor priors, as well as weighted extensions of the models, are explored from the perspectives of sample representation ability and the relationships between samples. Consequently, the contributions of different samples can be distinguished adaptively, and the obtained representations can be more discriminative for recognition. Experimental results on several popular databases verify the effectiveness of our proposed methods in comparison with other state-of-the-art classifiers.
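The baseline CRC model that the abstract contrasts against (coding the test sample over all training samples globally with an l2 penalty, then classifying by per-class reconstruction residual) can be sketched in a few lines. The function and its regularization value below are illustrative, not the paper's implementation:

```python
import numpy as np

def crc_classify(X, labels, y, lam=1e-3):
    """Collaborative representation-based classification (CRC) sketch.

    X: (d, n) training samples as columns; labels: (n,) class ids;
    y: (d,) test sample; lam: ridge regularization strength.
    """
    d, n = X.shape
    # Ridge-regularized coding over ALL training samples:
    # alpha = (X^T X + lam I)^{-1} X^T y
    alpha = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)
    # Assign the class whose samples best reconstruct y.
    best, best_res = None, np.inf
    for c in np.unique(labels):
        mask = labels == c
        residual = np.linalg.norm(y - X[:, mask] @ alpha[mask])
        if residual < best_res:
            best, best_res = c, residual
    return best
```

The paper's contribution, constraining the coding with neighborhood priors, would modify the coefficient solve so that training samples near the test sample are favored, rather than weighting all samples equally as this global baseline does.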


2018 ◽  
Vol 43 (04) ◽  
pp. 1188-1209 ◽  
Author(s):  
Jay D. Aronson

Citizen video and other publicly available footage can provide evidence of human rights violations and war crimes. The ubiquity of visual data, however, may overwhelm those faced with preserving and analyzing it. This article examines how machine learning and computer vision can be used to make sense of large volumes of video in advocacy and accountability contexts. These technologies can enhance the efficiency and effectiveness of human rights advocacy and accountability efforts, but only if human rights organizations can access the technologies themselves and learn how to use them to promote human rights. As such, computer scientists and software developers working with the human rights community must understand the context in which their products are used and act in solidarity with practitioners. By working together, practitioners and scientists can level the playing field between the human rights community and the entities that perpetrate, tolerate, or seek to cover up violations.

