landmark location
Recently Published Documents

TOTAL DOCUMENTS: 34 (five years: 9)
H-INDEX: 8 (five years: 2)

NeuroSci ◽  
2021 ◽  
Vol 2 (3) ◽  
pp. 276-290
Author(s):  
Jennifer Mather

It is always difficult even to advance possible dimensions of consciousness, but Birch et al. (2020) have suggested four, and this review discusses the first, perceptual richness, in relation to octopuses. They advance acuity, bandwidth, and categorization power as possible components. It is first necessary to realize that sensory richness does not automatically lead to perceptual richness, and this capacity may not be accessed by consciousness. Octopuses do not discriminate light wavelength (color) but rather its plane of polarization, a dimension that we do not understand. Their eyes are laterally placed on the head, leading to monocular vision and head movements that give a sequential rather than simultaneous view of items, possibly consciously planned. Details of the control of the rich sensorimotor system of the arms, which contains three-fifths of the neurons of the nervous system, may not normally be accessible to the brain and thus to consciousness. The chromatophore-based skin-appearance system is likely open loop and not available to the octopus's own vision. Conversely, in a laboratory situation that is not ecologically valid for the octopus, learning about the shapes and extents of visual figures was extensive and flexible, and likely consciously planned. Similarly, octopuses' placement in and navigation around space can be guided by the light polarization plane and visual landmark location, and is learned and monitored. The complex array of chemical cues delivered in the water and on surfaces does not fit neatly into the components above and has barely been tested, but might easily be described as perceptually rich. The octopus's curiosity and drive to investigate and gain more information may mean that, apart from the richness of any stimulus situation, they are consciously driven to seek out more information.
This review suggests that cephalopods may not have the same type of intelligence as the 'higher' vertebrates, and may not have similar dimensions or contents of consciousness, but that such a capacity is present nevertheless.


PLoS ONE ◽  
2021 ◽  
Vol 16 (7) ◽  
pp. e0254814
Author(s):  
Pin-Ling Liu ◽  
Chien-Chi Chang ◽  
Jia-Hua Lin ◽  
Yoshiyuki Kobayashi

To evaluate postures in ergonomics applications, studies have proposed low-cost, marker-less, and portable depth camera-based motion tracking systems (DCMTSs) as a potential alternative to conventional marker-based motion tracking systems (MMTSs). However, a simple but systematic method for examining the estimation errors of various DCMTSs is lacking. This paper proposes a benchmarking method for assessing the accuracy of depth cameras for full-body landmark location estimation. A novel alignment board was fabricated to align the coordinate systems of the DCMTSs and MMTSs. The data from an MMTS were used as a reference to quantify the error of using a DCMTS to identify target locations in 3-D space. To demonstrate the proposed method, full-body landmark location tracking errors were evaluated for a static upright posture using two different DCMTSs. For each landmark, we compared each DCMTS (a Kinect system and a RealSense system) with an MMTS by calculating the Euclidean distances between corresponding landmarks. The evaluation trials were performed twice, and the agreement between the tracking errors of the two trials was assessed using the intraclass correlation coefficient (ICC). The results indicate that the proposed method can effectively assess the tracking performance of DCMTSs. The average errors (standard deviation) for the Kinect system and RealSense system were 2.80 (1.03) cm and 5.14 (1.49) cm, respectively. The highest average error values were observed in the depth orientation for both DCMTSs. The proposed method achieved high reliability, with ICCs of 0.97 and 0.92 for the Kinect system and RealSense system, respectively.
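The error metric described above is simply the per-landmark Euclidean distance between the two systems' estimates. A minimal sketch (not the authors' code), assuming both point sets are already aligned to a common coordinate frame and the coordinates shown are hypothetical:

```python
import numpy as np

def tracking_errors(dcmts_xyz, mmts_xyz):
    """Per-landmark Euclidean distance between depth-camera (DCMTS) and
    marker-based (MMTS) landmark estimates, both (n_landmarks, 3) arrays
    in the same aligned coordinate system (units: cm)."""
    diff = np.asarray(dcmts_xyz) - np.asarray(mmts_xyz)
    return np.linalg.norm(diff, axis=1)

# Hypothetical landmark coordinates for illustration only.
mmts = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
kinect = np.array([[1.0, 0.0, 0.0], [10.0, 2.0, 0.0], [0.0, 10.0, 3.0]])

errs = tracking_errors(kinect, mmts)
print(errs.mean(), errs.std())  # mean and SD of the per-landmark errors
```

The paper's summary statistics (e.g. 2.80 (1.03) cm for Kinect) are exactly this mean and standard deviation taken over all landmarks.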


2020 ◽  
Vol 6 (3) ◽  
pp. 56-59
Author(s):  
Andreas Wirtz ◽  
Julian Lam ◽  
Stefan Wesarg

Cephalometric analysis is an important method in orthodontics for the diagnosis and treatment of patients. It is performed manually in clinical practice, so automation of this time-consuming task would be of great assistance. To provide dentists with such tools, robust and accurate identification of the necessary landmarks is required. However, the poor image quality of lateral cephalograms, such as low contrast or noise, makes this task difficult. In this paper, an approach for automatic landmark localization is presented and used to find 19 landmarks in lateral cephalometric images. An initial prediction of the individual landmark locations is made using a 2-D coupled shape model that exploits the spatial relations between landmarks and other anatomical structures. These predictions are then refined with a Hough forest to determine the final landmark location. The approach achieves competitive performance, with a successful detection rate of 70.24% on 250 images for the clinically relevant 2 mm accuracy range.
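The "successful detection rate within 2 mm" used as the evaluation criterion above can be computed directly from predicted and ground-truth landmark positions. A minimal sketch with hypothetical coordinates (the helper name and the sample points are illustrative, not from the paper):

```python
import numpy as np

def detection_rate(pred_mm, true_mm, tol_mm=2.0):
    """Fraction of landmarks whose radial (Euclidean) error is within
    tol_mm. pred_mm and true_mm are (n_landmarks, 2) arrays in mm."""
    err = np.linalg.norm(np.asarray(pred_mm) - np.asarray(true_mm), axis=1)
    return float(np.mean(err <= tol_mm))

# Hypothetical predictions for 4 landmarks; 3 of 4 fall within 2 mm.
true_pts = np.array([[10.0, 10.0], [20.0, 30.0], [40.0, 5.0], [7.0, 8.0]])
pred_pts = np.array([[11.0, 10.0], [20.0, 31.5], [45.0, 5.0], [7.0, 8.5]])

print(detection_rate(pred_pts, true_pts))  # 0.75
```

Averaging this rate over all 19 landmarks and 250 test images yields the single figure (70.24%) reported in the abstract.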


2020 ◽  
Vol 2020 ◽  
pp. 1-8 ◽  
Author(s):  
Yuanyuan Xu ◽  
Wan Yan ◽  
Genke Yang ◽  
Jiliang Luo ◽  
Tao Li ◽  
...  

Face detection and alignment in unconstrained environments are often deployed on edge devices, which have limited memory and low computing power. This paper proposes a one-stage, anchor-free method named CenterFace that simultaneously predicts the facial bounding box and landmark locations with real-time speed and high accuracy. This is achieved by (a) learning the probability that a face exists at each position from semantic maps, and (b) learning the bounding box, offsets, and five landmarks for each position that potentially contains a face. The method runs in real time on a single CPU core and at 200 FPS on an NVIDIA 2080 Ti for VGA-resolution images, while achieving superior accuracy (WIDER FACE Val/Test — Easy: 0.935/0.932, Medium: 0.924/0.921, Hard: 0.875/0.873; FDDB discontinuous: 0.980, continuous: 0.732).
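The anchor-free decoding described in (a) and (b) can be illustrated with a small sketch: threshold the semantic (heatmap) scores, then refine each surviving cell with its predicted offsets and box size. This is an assumption-laden simplification of the general technique, not CenterFace's actual post-processing (which also decodes the five landmarks and applies non-maximum suppression); the function name, stride, and threshold are illustrative.

```python
import numpy as np

def decode_centers(heatmap, offsets, sizes, score_thresh=0.5, stride=4):
    """Decode an anchor-free face heatmap. Each cell whose score exceeds
    the threshold yields one box, refined by the predicted sub-cell
    offsets and box size. Shapes: heatmap (H, W); offsets, sizes (2, H, W).
    Returns a list of (x, y, w, h, score) tuples in image coordinates."""
    ys, xs = np.where(heatmap > score_thresh)
    boxes = []
    for y, x in zip(ys, xs):
        cx = (x + offsets[0, y, x]) * stride  # refined center, x
        cy = (y + offsets[1, y, x]) * stride  # refined center, y
        w, h = sizes[0, y, x], sizes[1, y, x]
        boxes.append((cx - w / 2, cy - h / 2, w, h, float(heatmap[y, x])))
    return boxes

# Toy example: one confident cell at row 1, col 2 of a 4x4 map.
heat = np.zeros((4, 4)); heat[1, 2] = 0.9
offs = np.zeros((2, 4, 4))
szs = np.zeros((2, 4, 4)); szs[:, 1, 2] = 8.0
boxes = decode_centers(heat, offs, szs)
```

Because every output position can propose a face directly, no anchor-box hyperparameters need to be tuned, which is the main appeal of the anchor-free category the abstract mentions.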


Symmetry ◽  
2019 ◽  
Vol 11 (4) ◽  
pp. 533 ◽  
Author(s):  
Zhang ◽  
Wang ◽  
Chen ◽  
Jiang ◽  
Lin

GPS (Global Positioning System) navigation in agriculture faces many challenges, such as weak signals in orchards and high cost for small plots of farmland. With the falling cost of cameras and the emergence of excellent visual algorithms, visual navigation can solve these problems. Visual navigation is a navigation technology that uses cameras to sense environmental information as the basis of aircraft flight. It is mainly divided into five parts: image acquisition, landmark recognition, route planning, flight control, and obstacle avoidance. Here, landmarks are plant canopies, buildings, mountains, and rivers with unique geographical characteristics in a place. During visual navigation, landmark location and route tracking are the key links. When there are significant color differences (for example, among red, green, and blue) between a landmark and the background, the landmark can be recognized with classical visual algorithms. However, when the color differences are not significant (for example, between dark green and vivid green), there is no robust, high-precision method for landmark identification. In view of this problem, visual navigation in a maize field is studied. First, a block recognition method based on fine-tuned Inception-V3 is developed; then, the maize canopy landmark is recognized with this method; finally, local navigation lines are extracted from the landmarks based on the grayscale-gradient law of the maize canopy. The results show an accuracy of 0.9501. When the block number is 256, the block recognition method achieves the best segmentation: the average segmentation quality is 0.87 and the processing time is 0.251 s. This study suggests that stable visual semantic navigation can be achieved against a near-color background, and it will be an important reference for the navigation of plant protection UAVs (Unmanned Aerial Vehicles).
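The block recognition step above partitions each frame into a fixed number of tiles (256 in the best configuration) before classifying each tile as canopy or background. A minimal sketch of the tiling stage only, assuming a square image and a perfect-square block count (the classifier itself, a fine-tuned Inception-V3, is not reproduced here):

```python
import numpy as np

def split_blocks(image, n_blocks=256):
    """Split an image into n_blocks equal tiles (n_blocks must be a
    perfect square and divide the image dimensions evenly). Each tile
    would then be fed to the canopy/background classifier."""
    side = int(round(np.sqrt(n_blocks)))
    h, w = image.shape[:2]
    bh, bw = h // side, w // side
    return [image[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            for r in range(side) for c in range(side)]

# A 512x512 frame split into 256 tiles of 32x32 pixels each.
img = np.zeros((512, 512), dtype=np.uint8)
blocks = split_blocks(img, 256)
print(len(blocks), blocks[0].shape)
```

The trade-off the results quantify is that finer tilings localize the canopy boundary better but cost more classifier invocations per frame, with 256 blocks giving the best balance (quality 0.87 at 0.251 s).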


2018 ◽  
Vol 2 ◽  
pp. 239821281875709 ◽  
Author(s):  
Anna S. Mitchell ◽  
Rafal Czajkowski ◽  
Ningyu Zhang ◽  
Kate Jeffery ◽  
Andrew J. D. Nelson

Retrosplenial cortex is a region within the posterior neocortical system that is heavily interconnected with an array of cortical and subcortical brain networks and is engaged by a myriad of cognitive tasks. Although there is no consensus as to its precise function, evidence from both human and animal studies clearly points to a role in spatial cognition. However, the spatial processing impairments that follow retrosplenial cortex damage are not straightforward to characterise, leading to difficulties in defining the exact nature of its role. In this article, we review this literature and classify the ideas that have been put forward into three broad, somewhat overlapping classes: (1) learning of landmark location, stability, and permanence; (2) integration between spatial reference frames; and (3) consolidation and retrieval of spatial knowledge (schemas). We evaluate these models and suggest ways to test them, before briefly discussing whether the spatial function may be a subset of a more general function in episodic memory.

