geometric transformations
Recently Published Documents

TOTAL DOCUMENTS: 319 (five years: 101)
H-INDEX: 19 (five years: 2)

Astrodynamics ◽ 2022 ◽ Vol 6 (1) ◽ pp. 69-79
Author(s): Anran Wang, Li Wang, Yinuo Zhang, Baocheng Hua, Tao Li, ...

Abstract: Tianwen-1 (TW-1) is the first Chinese interplanetary mission to have accomplished orbiting, landing, and roving in a single exploration of Mars. After the safe landing, it is essential to reconstruct the descent trajectory and determine the landing site of the lander. For this purpose, we processed descent images from the TW-1 optical obstacle-avoidance sensor (OOAS) and a digital orthophoto map (DOM) of the landing area using our proposed hybrid-matching method, in which the landing process is divided into two parts. In the first part, crater matching is used to obtain the geometric transformations between the OOAS images and the DOM and to calculate the position of the lander. In the second part, feature matching is applied to compute the position of the lander. We calculated the landing site of TW-1 to be 109.9259° E, 25.0659° N with a positional accuracy of 1.56 m and reconstructed the landing trajectory with a horizontal root-mean-square error of 1.79 m. These results will facilitate analysis of the obstacle-avoidance system and help optimize the control strategy in follow-up planetary-exploration missions.
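As a rough illustration of the feature-matching stage, the sketch below estimates a geometric transformation between a descent image and the DOM with generic OpenCV tools (ORB keypoints plus a RANSAC homography) and projects the image centre into map coordinates. The file names, the choice of detector, and the homography model are assumptions for illustration only, not the authors' crater-matching pipeline.

```python
# Illustrative sketch only: generic OpenCV feature matching standing in for the
# paper's crater-/feature-matching pipeline. Paths and parameters are assumed.
import cv2
import numpy as np

def locate_in_dom(descent_img_path, dom_img_path):
    """Estimate where the descent-image centre falls in the orthophoto map (DOM)."""
    img = cv2.imread(descent_img_path, cv2.IMREAD_GRAYSCALE)
    dom = cv2.imread(dom_img_path, cv2.IMREAD_GRAYSCALE)

    # Detect and describe keypoints (ORB used here as a generic detector).
    orb = cv2.ORB_create(nfeatures=4000)
    kp1, des1 = orb.detectAndCompute(img, None)
    kp2, des2 = orb.detectAndCompute(dom, None)

    # Match descriptors and keep the strongest correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Robustly estimate the image-to-DOM geometric transformation.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Project the descent-image centre into DOM pixel coordinates.
    h, w = img.shape
    centre = np.float32([[[w / 2.0, h / 2.0]]])
    return cv2.perspectiveTransform(centre, H)[0, 0]
```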


Author(s): Paweł Tarasiuk, Piotr S. Szczepaniak

Abstract: This paper presents a novel method for improving the invariance of convolutional neural networks (CNNs) to selected geometric transformations in order to obtain more efficient image classifiers. A common strategy employed to achieve this aim is to train the network using data augmentation. Such a method alone, however, increases the complexity of the neural network model, as any change in the rotation or size of the input image results in the activation of different CNN feature maps. This problem can be resolved by the proposed novel convolutional neural network models with geometric transformations embedded into the network architecture. The evaluation of the proposed CNN model is performed on the image-classification task using diverse representative data sets. The CNN models with embedded geometric transformations are compared to those without the transformations, under different data-augmentation setups. As the compared approaches use the same amount of memory to store the parameters, the improved classification score means that the proposed architecture is the more effective one.
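One common way to embed a geometric transformation directly in the architecture, rather than relying on augmentation alone, is to share convolution weights across rotated copies of the input and pool over orientations. The PyTorch sketch below illustrates that general idea under assumed layer names and angles; it is not the specific architecture proposed in the paper.

```python
# A minimal sketch of rotation handling embedded in a CNN layer: apply one
# shared convolution to several rotated copies of the input and pool over the
# orientations. PyTorch and torchvision are assumed; this is illustrative only.
import torch
import torch.nn as nn
import torchvision.transforms.functional as TF

class RotationPooledConv(nn.Module):
    def __init__(self, in_ch, out_ch, angles=(0, 90, 180, 270)):
        super().__init__()
        self.angles = angles
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x):
        # Run the same (shared-weight) convolution on each rotated copy,
        # rotate the responses back, then take an element-wise maximum so the
        # output changes little when the input is rotated by one of the angles.
        responses = []
        for a in self.angles:
            y = self.conv(TF.rotate(x, a))
            responses.append(TF.rotate(y, -a))
        return torch.stack(responses, dim=0).max(dim=0).values

# Usage: feature maps that are approximately invariant to 90-degree rotations.
layer = RotationPooledConv(3, 16)
out = layer(torch.randn(1, 3, 64, 64))
```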


Sigma ◽ 2021 ◽ Vol 7 (1) ◽ pp. 71
Author(s): Trevila Tamnau, Stanislaus Amsikan, Oktovianus Mamoh

Learning mathematics through a cultural approach is commonly known as ethnomathematics, a mathematics-learning approach that bridges mathematics and local culture. The purpose of this study was to explore the culture of Sonaf L.A.N Taolin and to describe the mathematical concepts found in its building elements. This qualitative study uses an ethnographic approach aimed at exploring the mathematical concepts found in Sonaf L.A.N Taolin. The subject of the study was an informant, Raja Insana. The research instruments were the human instrument, observation guidelines, and interview guides. Data analysis was carried out in three stages: data reduction, data display, and conclusion drawing. The results indicate that building elements such as the poles, doors, and roof of Sonaf L.A.N Taolin contain geometry concepts that can be used as media for learning mathematics on the topics of plane figures, solid figures, similarity, and geometric transformations (reflections).


2021 ◽ Vol 2091 (1) ◽ pp. 012054
Author(s): A A Timoshenko, A V Zuev, E S Mursalimov

Abstract: An algorithm has been developed for creating a single raster map of the seabed from photos obtained by the vertically downward-facing cameras of autonomous underwater vehicles (AUVs) using tile graphics. The images obtained during the movement of the AUV are combined into a single scalable photo map divided into square segments (tiles). This representation of graphical information makes it possible to quickly access the images with specialized tools after the AUV is lifted to the surface and reduces the time the operator spends analyzing the results of the mission. The images were combined using simple geometric transformations based on data received from the navigation systems of the underwater vehicle and the parameters of its camera. The efficiency of the algorithm was tested on real data from a marine expedition.
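The placement step described above, positioning each nadir frame on the mosaic from navigation data and camera parameters alone, can be sketched roughly as follows. The function names, the flat-seabed pinhole model, and the single-channel assumption are mine, not the authors' implementation.

```python
# A rough sketch, under an assumed flat-seabed pinhole model and assumed names,
# of placing one nadir frame onto a metric mosaic from navigation data
# (position, heading, altitude) and camera parameters alone.
import cv2
import numpy as np

def place_on_mosaic(mosaic, frame, x_m, y_m, heading_rad, altitude_m,
                    focal_px, metres_per_pixel):
    """Warp one single-channel AUV frame into the mosaic raster (origin at (0, 0))."""
    h, w = frame.shape[:2]

    # Ground sampling distance of the frame: metres per image pixel at this altitude.
    gsd = altitude_m / focal_px

    # Scale image pixels to mosaic pixels, rotate by heading, translate to position.
    scale = gsd / metres_per_pixel
    M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), np.degrees(heading_rad), scale)
    M[0, 2] += x_m / metres_per_pixel - w / 2.0
    M[1, 2] += y_m / metres_per_pixel - h / 2.0

    # Paste the warped frame onto the mosaic where its footprint lands.
    size = (mosaic.shape[1], mosaic.shape[0])
    warped = cv2.warpAffine(frame, M, size)
    mask = cv2.warpAffine(np.ones((h, w), np.uint8), M, size)
    mosaic[mask > 0] = warped[mask > 0]
    return mosaic
```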


Author(s): Paulo Henrique Siqueira

This paper presents the development of a web environment for the construction of Archimedean and Platonic polyhedra in Augmented Reality (AR) and Virtual Reality (VR). In this environment, the geometric transformations of translation and rotation are used together with a hierarchy of HTML page elements, without using the coordinates of each polyhedron vertex. The environment can be used in the classroom to visualize the polyhedra in Augmented Reality, and students can manipulate the graphical representations in the Virtual Reality environment. Other topics that can be explored with the modeled polyhedra are areas, volumes, and Euler's relation. Another important topic is truncation, because seven of the Archimedean polyhedra are obtained by truncating the Platonic polyhedra. This work makes it possible to develop didactic materials with simple, free technology that contributes greatly to improving the teaching of Geometry and other areas that use representations of 3D objects.
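To illustrate the idea of building a solid from nested translations and rotations of one face instead of a vertex table, the sketch below assembles a unit cube by transforming a single square; the NumPy formulation and names are assumptions for illustration, not the paper's HTML/AR implementation.

```python
# A small sketch of constructing a polyhedron from translations and rotations
# of one reference face, with no per-vertex coordinate table. Names and the
# NumPy formulation are illustrative assumptions.
import numpy as np

def rot(axis, deg):
    """Rotation matrix about the x, y, or z axis by deg degrees."""
    a = np.radians(deg)
    c, s = np.cos(a), np.sin(a)
    return {
        "x": np.array([[1, 0, 0], [0, c, -s], [0, s, c]]),
        "y": np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]]),
        "z": np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]]),
    }[axis]

# One unit square in the z = 0 plane, centred at the origin.
face = np.array([[-0.5, -0.5, 0], [0.5, -0.5, 0], [0.5, 0.5, 0], [-0.5, 0.5, 0]])

# Each cube face = rotate the reference face, then translate it half an edge
# outwards along the rotated normal; this is the transformation hierarchy.
orientations = [("x", 0), ("x", 90), ("x", -90), ("x", 180), ("y", 90), ("y", -90)]
faces = []
for axis, deg in orientations:
    R = rot(axis, deg)
    normal = R @ np.array([0, 0, 1])
    faces.append(face @ R.T + 0.5 * normal)

# 'faces' now holds the six faces of a unit cube, built without a vertex table.
```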


2021 ◽ Vol 2096 (1) ◽ pp. 012146
Author(s): A A Timoshenko, A V Zuev, E S Mursalimov

Abstract: An algorithm has been developed for creating a single raster photo map of the seabed from images obtained by the vertically downward-facing cameras of autonomous underwater vehicles (AUVs) using tile graphics. The tile representation of graphical information makes it possible to quickly access the images after the AUV is lifted to the surface and reduces the time the operator spends analysing the results of the mission. The images were combined using simple geometric transformations based on data received from the navigation systems of the AUV and the parameters of its camera, so the algorithm can be implemented on an AUV with a low-performance onboard computer, as shown in the experiment.
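The tile-graphics idea mentioned above, splitting the assembled photo map into fixed-size square tiles so any region can be fetched without loading the whole raster, might look roughly like the following; the 256-pixel tile size and function names are assumptions.

```python
# A brief sketch of tile storage for the assembled photo map. The tile size
# and naming are assumed, not taken from the paper.
TILE = 256  # tile edge length in pixels (assumed)

def split_into_tiles(mosaic):
    """Yield ((row, col), tile) pairs covering the whole mosaic raster."""
    h, w = mosaic.shape[:2]
    for r in range(0, h, TILE):
        for c in range(0, w, TILE):
            yield (r // TILE, c // TILE), mosaic[r:r + TILE, c:c + TILE]

def tile_index(x_px, y_px):
    """Return the (row, col) of the tile containing mosaic pixel (x_px, y_px)."""
    return y_px // TILE, x_px // TILE
```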


2021
Author(s): José Saúl González-Campos, Joan Arnedo-Moreno, Jordi Sánchez-Navarro

2021 ◽ Vol 11 (1)
Author(s): Raquel Leon, Himar Fabelo, Samuel Ortega, Juan F. Piñeiro, Adam Szolna, ...

Abstract: Currently, the intraoperative guidance tools used to assist brain tumor resection during surgery have several limitations. Hyperspectral (HS) imaging is emerging as a novel imaging technique that could offer new capabilities to delineate brain tumor tissue in surgical time. However, HS acquisition systems have limitations in spatial and spectral resolution depending on the spectral range to be captured. Image-fusion techniques combine information from different sensors to obtain an HS cube with improved spatial and spectral resolution. This paper describes contributions to HS image fusion using two push-broom HS cameras, covering the visual and near-infrared (VNIR) [400–1000 nm] and near-infrared (NIR) [900–1700 nm] spectral ranges, which are integrated into an intraoperative HS acquisition system developed to delineate brain tumor tissue during neurosurgical procedures. The two HS images were registered using intensity-based and feature-based techniques with different geometric transformations to perform the HS image fusion, obtaining an HS cube with a wide spectral range [435–1638 nm]. Four HS datasets were captured to verify the image registration and the fusion process. Moreover, segmentation and classification methods were evaluated to compare the performance of the VNIR and NIR data used independently with that of the fused data. The results reveal that the proposed methodology for fusing VNIR–NIR data improves the classification results by up to 21% in accuracy with respect to using each data modality independently, depending on the targeted classification problem.
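As a hedged sketch of the registration-then-fusion idea, the code below aligns an NIR cube to a VNIR cube with a single intensity-based (ECC) affine estimate from one overlapping band, applies the same warp to every NIR band, and stacks the bands into a wider-range cube. The band indices, the ECC/affine choice, and the assumption that both cubes are already resampled to a common spatial grid are mine, not the authors' exact method.

```python
# Hedged sketch of registration and band stacking for two hyperspectral cubes.
# Assumes both cubes already share one spatial grid; band choices are arbitrary.
import cv2
import numpy as np

def fuse_vnir_nir(vnir, nir, vnir_band=-1, nir_band=0):
    """vnir, nir: float32 cubes of shape (H, W, bands) of the same scene."""
    ref = cv2.normalize(vnir[:, :, vnir_band], None, 0, 1, cv2.NORM_MINMAX)
    mov = cv2.normalize(nir[:, :, nir_band], None, 0, 1, cv2.NORM_MINMAX)

    # Intensity-based (ECC) estimation of an affine warp from NIR to VNIR.
    warp = np.eye(2, 3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
    _, warp = cv2.findTransformECC(ref.astype(np.float32), mov.astype(np.float32),
                                   warp, cv2.MOTION_AFFINE, criteria)

    # Apply the same geometric transformation to every NIR band.
    h, w = ref.shape
    nir_reg = np.dstack([
        cv2.warpAffine(nir[:, :, b], warp, (w, h),
                       flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)
        for b in range(nir.shape[2])
    ])

    # Fused cube spanning both spectral ranges.
    return np.concatenate([vnir, nir_reg], axis=2)
```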

