object rotation
Recently Published Documents


TOTAL DOCUMENTS

78
(FIVE YEARS 23)

H-INDEX

16
(FIVE YEARS 1)

2022 ◽  
Author(s):  
Judith Bek ◽  
Stacey Humphries ◽  
Ellen Poliakoff ◽  
Nuala Brady

Motor imagery (MI) supports motor learning and performance, and has the potential to be a useful tool for neurorehabilitation. However, MI ability may be impacted by ageing and neurodegeneration, which could limit its therapeutic effectiveness. MI is often assessed through a hand laterality task (HLT), in which laterality judgements are typically slower for hands presented at orientations corresponding to physically more difficult postures (a “biomechanical constraint” effect). Performance also differs between back and palm views of the hand, suggesting the differential involvement of visual and sensorimotor strategies. While older adults are generally found to be slower and to show increased biomechanical effects, few studies have examined the effects of both ageing and Parkinson’s disease (PD). The present study compared healthy younger (YA), healthy older (OA) and PD groups on HLT performance from both palm and back views, as well as on an object-based (letter) mental rotation task. OA and PD groups were slower than YA, particularly when judging laterality from the back view, and exhibited increased biomechanical constraint effects for the palm. While response times were generally similar between OA and PD groups, the PD group showed reduced accuracy in the back view. Moreover, object rotation was slower and less accurate only in the PD group. The results indicate that different mechanisms are involved in mental rotation of hands viewed from the back or palm, consistent with previous findings, and demonstrate particular effects of ageing and PD when judging the back view. Alongside findings from studies of explicit MI, this suggests a greater alteration of visual than kinaesthetic MI with ageing and neurodegeneration, with additional impairment of object-based visual imagery in PD. The findings are also discussed in relation to different perspectives in MI and the integration of visual and kinaesthetic representations.


Robotics ◽  
2022 ◽  
Vol 11 (1) ◽  
pp. 7
Author(s):  
Yannick Roberts ◽  
Amirhossein Jabalameli ◽  
Aman Behal

Motivated by grasp planning applications within cluttered environments, this paper presents a novel approach to real-time surface segmentation of never-before-seen objects scattered across a given scene. The approach takes a 2D depth map as input and applies a first-principles algorithm that exploits the fact that continuous surfaces are bounded by contours of high gradient. From these regions, the associated object surfaces can be isolated and further adapted for grasp planning. The paper also details how to extract the six-DOF pose of an isolated surface and presents the case of leveraging such a pose to execute planar grasping that achieves both force and torque closure. As a consequence of the highly parallel software implementation, the algorithm is shown to outperform prior approaches across all notable metrics and to be invariant to object rotation, scale, orientation relative to other objects, clutter, and varying degrees of noise. This allows for a robust set of operations that could be applied to many areas of robotics research. The algorithm is faster than real time in the sense that it runs nearly twice as fast as the sensor rate of 30 fps.
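The contour-bounded segmentation idea can be sketched in a few lines: threshold the depth gradient to mark boundary pixels, then flood-fill the remaining pixels into connected surfaces. The gradient threshold and 4-connectivity below are illustrative assumptions; the paper's highly parallel implementation differs.

```python
from collections import deque

def segment_depth(depth, grad_thresh=0.1):
    """Segment a 2D depth map (list of lists) into continuous surfaces.

    High depth gradients mark surface boundaries (contours); the
    remaining pixels are grouped into 4-connected components, each
    taken to be one continuous surface.  Boundary pixels keep label -1.
    """
    h, w = len(depth), len(depth[0])
    # Boundary mask: pixels whose local forward-difference gradient is large.
    boundary = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            gx = depth[y][min(x + 1, w - 1)] - depth[y][x]
            gy = depth[min(y + 1, h - 1)][x] - depth[y][x]
            if (gx * gx + gy * gy) ** 0.5 > grad_thresh:
                boundary[y][x] = True
    # Flood-fill non-boundary pixels into labelled surfaces.
    labels = [[-1] * w for _ in range(h)]
    next_label = 0
    for y in range(h):
        for x in range(w):
            if boundary[y][x] or labels[y][x] != -1:
                continue
            queue = deque([(y, x)])
            labels[y][x] = next_label
            while queue:
                cy, cx = queue.popleft()
                for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                               (cy, cx - 1), (cy, cx + 1)):
                    if 0 <= ny < h and 0 <= nx < w \
                            and not boundary[ny][nx] and labels[ny][nx] == -1:
                        labels[ny][nx] = next_label
                        queue.append((ny, nx))
            next_label += 1
    return labels, next_label
```

On a depth map with two flat regions at different depths, the step between them becomes a boundary column and the two sides come out as separate surfaces.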


2021 ◽  
Author(s):  
◽  
Harrison Le Fevre

<p>The use of robots in the fabrication of complex architectural structures is increasing in popularity. However, architectural robotic workflows still require convoluted and time-consuming programming in order to execute complex fabrication tasks. Additionally, the inability of robots to adapt to different environments further raises concerns about the robotic manipulator as a primary construction tool. There are four key issues currently present in robotic fabrication for architectural applications: firstly, an inability to adapt to unknown environments; secondly, a lack of autonomous decision making; thirdly, an inability to locate, recognise, and then manipulate objects in the operating environment; and fourthly, a lack of error detection when a motion instruction conflicts with environmental constraints. This research begins to resolve these critical issues by integrating a feedback loop into a robotic system to improve perception, interaction and manipulation of objects in a robotic working environment. Attempts to achieve intelligence and autonomy in static robotic systems have seen limited success. Primarily, research into these issues has originated from the need to adapt existing robotic processes to architectural applications. The work of Gramazio and Kohler Research, specifically ‘on-site mobile fabrication’ and ‘autonomous robotic stone stacking’, presents the current state of the art in intelligent architectural robotic systems and begins to develop solutions to the issues outlined above. However, the limitations of Gramazio and Kohler’s research, specifically the lack of perception-controlled grasping, offer an opportunity for this research to begin developing relevant solutions. This research proposes a system in which blocks of consistent dimensions are randomly distributed within the robotic working environment.
The robot establishes the location and pose (position and orientation) of the blocks through an adaptive inclusion test. The test involves subsampling a point cloud into a consistent grid, filtering points by height above the ground plane to establish block surfaces, and matching these surfaces to a CAD model for improved accuracy. The resulting matched surfaces are used to determine four points that define the object rotation plane and centre point. The robot uses the centre point and the quaternion rotation angle to execute motion and grasping instructions. The robot repeats the perception process until all blocks within the camera frame have been collected and a preprogrammed wall is built. The implementation of a robotic feedback loop in this way demonstrates both the future potential and the success of this research. The research begins to develop pathways for integrating technologies such as machine learning and deep learning in order to improve the accuracy, speed and reliability of perception-controlled robotic systems through learned behaviours.</p>
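As a rough illustration of the pose-extraction step, the sketch below assumes four ordered points on a block's top surface and recovers the centre point together with a unit quaternion aligning the world z-axis with the surface normal. The function name, the point ordering, and the axis convention are assumptions for illustration, not the thesis's implementation.

```python
import math

def block_pose(points):
    """Estimate a grasp pose from four ordered points on a block surface.

    Returns the centre point and a unit quaternion (w, x, y, z) that
    rotates the world z-axis onto the (upward-oriented) surface normal.
    """
    cx = sum(p[0] for p in points) / 4.0
    cy = sum(p[1] for p in points) / 4.0
    cz = sum(p[2] for p in points) / 4.0
    # Surface normal from two edge vectors of the quad.
    ax, ay, az = (points[1][i] - points[0][i] for i in range(3))
    bx, by, bz = (points[3][i] - points[0][i] for i in range(3))
    nx, ny, nz = ay * bz - az * by, az * bx - ax * bz, ax * by - ay * bx
    norm = math.sqrt(nx * nx + ny * ny + nz * nz)
    nx, ny, nz = nx / norm, ny / norm, nz / norm
    if nz < 0:  # orient the normal upward
        nx, ny, nz = -nx, -ny, -nz
    # Quaternion rotating (0, 0, 1) onto the normal:
    # axis = z x n = (-ny, nx, 0), angle = acos(z . n) = acos(nz).
    if nz > 1.0 - 1e-12:           # already aligned: identity rotation
        return (cx, cy, cz), (1.0, 0.0, 0.0, 0.0)
    angle = math.acos(max(-1.0, min(1.0, nz)))
    axx, axy = -ny, nx
    alen = math.hypot(axx, axy)
    s = math.sin(angle / 2.0) / alen
    return (cx, cy, cz), (math.cos(angle / 2.0), axx * s, axy * s, 0.0)
```

For a level square surface the quaternion comes out as the identity; for a vertical surface it is a 90-degree rotation about the appropriate horizontal axis.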


Electronics ◽  
2021 ◽  
Vol 10 (22) ◽  
pp. 2883
Author(s):  
Jie Cao ◽  
Chun Bao ◽  
Qun Hao ◽  
Yang Cheng ◽  
Chenglin Chen

The detection of rotated objects is a meaningful and challenging research problem. Although state-of-the-art deep learning models, especially convolutional neural networks (CNNs), have some feature invariance, their architectures were not specifically designed for rotation invariance; pooling layers provide only partial compensation. In this study, we propose a novel network, named LPNet, to address the problem of object rotation. LPNet improves detection accuracy by incorporating a retina-like log-polar transformation. Furthermore, LPNet is a plug-and-play architecture for object detection and recognition. It consists of two parts, which we name the encoder and the decoder. The encoder extracts image features in log-polar coordinates, while the decoder removes image noise in Cartesian coordinates. Moreover, depending on the movement of the centre point, LPNet has stable and sliding modes. LPNet takes the single-shot multibox detector (SSD) network as the baseline and the visual geometry group network (VGG16) as the feature extraction backbone. The experimental results show that, compared with the conventional SSD network, the mean average precision (mAP) of LPNet increased by 3.4% for regular objects and by 17.6% for rotated objects.
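The retina-like log-polar transformation at the heart of LPNet can be sketched as a nearest-neighbour resampling: the output axes are log-spaced radius and polar angle, so rotating the input image corresponds to a cyclic shift along the theta axis. The bin counts and sampling scheme below are illustrative assumptions, not LPNet's actual encoder.

```python
import math

def log_polar(img, n_rho=16, n_theta=32):
    """Resample a square grayscale image (list of lists) into
    log-polar coordinates.

    Row i corresponds to a log-spaced radius, column j to the polar
    angle 2*pi*j/n_theta, both measured from the image centre.
    Nearest-neighbour sampling; out-of-bounds samples stay 0.
    """
    h = len(img)
    cy = cx = (h - 1) / 2.0
    r_max = (h - 1) / 2.0
    out = [[0] * n_theta for _ in range(n_rho)]
    for i in range(n_rho):
        # log-spaced radii growing from ~1 up to r_max
        rho = math.exp(math.log(r_max) * (i + 1) / n_rho)
        for j in range(n_theta):
            theta = 2.0 * math.pi * j / n_theta
            x = cx + rho * math.cos(theta)
            y = cy + rho * math.sin(theta)
            xi, yi = int(round(x)), int(round(y))
            if 0 <= xi < h and 0 <= yi < h:
                out[i][j] = img[yi][xi]
    return out
```

Since every sample lies within the inscribed circle, a constant image maps to a constant log-polar image, which makes the sampling geometry easy to check.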


2021 ◽  
Vol 1208 (1) ◽  
pp. 012037
Author(s):  
Aladin Crnkić ◽  
Zinaid Kapić

Abstract The construction of smooth interpolation trajectories in different non-Euclidean spaces finds application in robotics, computer graphics, and many other engineering fields. This paper proposes a method for generating interpolation trajectories on the special orthogonal group SO(3), known as the rotation group. Our method is based on a high-dimensional generalization of the Kuramoto model, a well-known mathematical description of self-organization in large populations of coupled oscillators. We present the method through several simulations and visualize each simulation as trajectories on the unit sphere S2. In addition, we apply our method to the specific problem of object rotation interpolation.
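The flavour of the high-dimensional Kuramoto generalization can be illustrated on the unit sphere S2, the space the authors use to visualize their trajectories. Below is a minimal sketch of a Kuramoto-type consensus flow on S2; the coupling constant, step size, and explicit Euler integration with renormalization are illustrative assumptions, not the authors' formulation on SO(3).

```python
import math

def sphere_kuramoto(states, coupling=1.0, dt=0.05, steps=500):
    """Evolve unit vectors on S^2 under a Kuramoto-type consensus flow:

        dx_i/dt = (K/N) * sum_j (x_j - <x_i, x_j> x_i)

    Explicit Euler steps, each followed by projection back onto the
    unit sphere.  With positive coupling the oscillators synchronise,
    tracing smooth trajectories on the sphere.
    """
    n = len(states)
    xs = [list(s) for s in states]
    for _ in range(steps):
        new = []
        for i in range(n):
            v = [0.0, 0.0, 0.0]
            for j in range(n):
                d = sum(xs[i][k] * xs[j][k] for k in range(3))
                for k in range(3):
                    v[k] += xs[j][k] - d * xs[i][k]
            p = [xs[i][k] + dt * coupling * v[k] / n for k in range(3)]
            norm = math.sqrt(sum(c * c for c in p))
            new.append([c / norm for c in p])
        xs = new
    return xs
```

Starting from three mutually orthogonal unit vectors, the flow drives all oscillators toward a common direction, with pairwise inner products approaching 1.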


2021 ◽  
Vol 11 (2) ◽  
pp. 103-109
Author(s):  
Goh Eg Su ◽  
Ajune Wanis Ismail

Interaction is an important topic because it concerns the interface through which the end-user communicates with an augmented reality (AR) system. In handheld AR interfaces, traditional interaction techniques are unsuitable for some AR applications because of the particular attributes of handheld devices, typically smartphones and tablets. Current interaction techniques in handheld AR fall into touch-based, mid-air gesture-based and device-based techniques, which have prompted wide discussion in related research areas. This paper focuses on the device-based interaction technique, which previous studies have shown to be more suitable and robust in several respects. A novel device-based 3D object rotation technique is proposed to solve the current problem of performing 3DOF rotation of a 3D object. The goal is precise and fast 3D object rotation; therefore, the rotation amplitude per second must be determined before full implementation. This paper discusses the implementation in depth and provides a guideline for those working on device-based interaction.
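To show why the rotation amplitude per second matters, the hypothetical helper below converts a device reading and a tuned amplitude into a per-frame rotation. The function, its parameters, and the axis convention are illustrative assumptions, not the proposed technique itself.

```python
def rotation_step(axis_input, amplitude_deg_per_s, dt):
    """Map a device reading to the rotation applied this frame.

    axis_input          -- raw rotation-axis reading (e.g. from tilt)
    amplitude_deg_per_s -- tuned rotation amplitude per second
    dt                  -- frame time in seconds

    Returns (unit_axis, angle_deg); a zero reading yields no rotation.
    """
    mag = sum(c * c for c in axis_input) ** 0.5
    if mag == 0.0:
        return (0.0, 0.0, 1.0), 0.0
    axis = tuple(c / mag for c in axis_input)
    return axis, amplitude_deg_per_s * dt
```

For example, with a 90 degrees-per-second amplitude at 60 fps, each frame rotates the object 1.5 degrees about the normalized input axis, so the amplitude directly trades speed against precision.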


2021 ◽  
Vol 28 (1) ◽  
Author(s):  
Veaceslav Perju ◽  
◽  
Vladislav Cojuhari ◽  

Pattern descriptors invariant to rotation, scaling, and translation represent an important direction in the development of real-time object recognition systems. In this article, new kinds of object descriptors based on the chord transformation are presented. New methods of image representation are described: the Central and Logarithmic Central Image Chord Transformations (CICT and LCICT). It is shown that the CICT operation makes it possible to achieve invariance to object rotation, while the LCICT transformation achieves invariance to changes in both the rotation and the scale of the object. The possibilities of implementing the CICT and LCICT operations are discussed, and the algorithms of these operations for contour images are presented. The possibilities of integrated implementation of the CICT and LCICT operations are considered, and a generalized CICT operation for a full (halftone) image is defined. The structures of coherent optical processors that implement the basic and integral image chord transformations are presented.
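The CICT/LCICT transforms themselves resample the image along chords through a centre point; as a simple illustration of why chord-based representations are rotation invariant, the sketch below builds a descriptor from the multiset of squared chord lengths between contour points. This is a crude stand-in for the idea, not the article's algorithm.

```python
from collections import Counter

def chord_descriptor(contour):
    """Rotation- and translation-invariant descriptor of a contour:
    the multiset of squared chord lengths between all point pairs.

    Chord lengths are unchanged by rotating or translating the object,
    so the descriptor is too.  Squared integer lengths avoid any
    floating-point comparison issues.
    """
    desc = Counter()
    pts = list(contour)
    for a in range(len(pts)):
        for b in range(a + 1, len(pts)):
            dx = pts[a][0] - pts[b][0]
            dy = pts[a][1] - pts[b][1]
            desc[dx * dx + dy * dy] += 1
    return desc
```

Rotating a rectangle by 90 degrees leaves the descriptor unchanged, while stretching it does not; scale invariance, as with the LCICT, would additionally require normalizing the chord lengths.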


2020 ◽  
Vol 13 (6) ◽  
pp. 338-348
Author(s):  
Nidhal Abbadi ◽  
◽  
Alyaa Mohsin ◽  

The growing use of digital images across a wide range of applications, together with the wide availability of photo-editing software, has created the challenge of detecting image tampering. In this paper, we propose a method to detect the most common type of image forgery (copy-move). The method classifies an image as forged or authentic in several steps, starting with preprocessing (grayscale conversion, de-noising, and resizing). The image is then divided into overlapping blocks, and a matching feature is extracted from each block using the singular value decomposition (SVD). Based on these features, the pixels are collected into main groups, which are then clustered into subgroups. The weight of each main group is determined by comparing its subgroups with each other according to the suggested conditions. The number of subgroups and the weights are used to classify the image as forged or authentic. The accuracy of detecting and classifying forged images reached 97%. The suggested method is robust to rotation, scaling, and illumination changes of the tampered object.
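The overlapping-block matching stage can be sketched as follows. For simplicity this uses exact pixel tuples as the per-block feature in place of the paper's SVD-based features and grouping, so it only finds identical copies; the paper's pipeline is robust to further transformations of the copied region.

```python
def find_copy_move(img, block=4):
    """Scan all overlapping block x block windows of a grayscale image
    (list of lists) and report pairs of distinct positions whose pixel
    content matches exactly.

    Returns a list of ((y1, x1), (y2, x2)) match pairs in scan order.
    """
    h, w = len(img), len(img[0])
    seen = {}       # feature -> first block position with that feature
    matches = []
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            feat = tuple(img[y + dy][x + dx]
                         for dy in range(block) for dx in range(block))
            if feat in seen:
                matches.append((seen[feat], (y, x)))
            else:
                seen[feat] = (y, x)
    return matches
```

On an image with otherwise distinct pixel values, copying a 4x4 patch from the top-left corner to another location produces exactly one matched block pair linking the source and the copy.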

