complex scene — Recently Published Documents

TOTAL DOCUMENTS: 213 (five years: 73)
H-INDEX: 23 (five years: 4)

Medunab ◽  
2022 ◽  
Vol 24 (3) ◽  
pp. 392
Author(s):  
Nick Richards

When Dr Salman sent me some reference photos, I instantly knew I had to choose the photo this painting is based on: although it was a complex scene with several figures, the narrative it presented was very powerful and emotional. I wanted to avoid sentimentality, and I think the bold impasto technique I used, with thick oil paint and a large brush, helped with this. It was mostly about conveying raw emotion in an expressive way, without including lots of detail. Making it an easily readable image using only loose brushstrokes, without delineating everything, was also a real challenge. I hope I succeeded.


2022 ◽  
Vol 2022 ◽  
pp. 1-9
Author(s):  
Zhichao Xiong

Moving target detection (MTD) is one of the key problems and open challenges in computer vision and image processing, and it is the basis of moving-target tracking and behavior recognition. We improve two existing methods and fuse them, applying the fused algorithm to MTD in complex scenes in order to improve detection accuracy in complex, hybrid scenes. Following the main idea of the three-frame difference method, the background-difference and interframe-difference methods are combined so that their advantages complement each other and their individual weaknesses are overcome. The experimental results show that the method adapts well to periodic motion interference in the background as well as to sudden background changes.
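The fusion described above can be sketched in a few lines of NumPy. This is an illustrative reconstruction of the general technique, not the authors' code; the function names and the threshold value are assumptions:

```python
import numpy as np

def three_frame_difference(prev, curr, nxt, thresh=25):
    """Motion mask from three consecutive grayscale frames: a pixel
    counts as moving only if it differs in BOTH frame pairs."""
    d1 = np.abs(curr.astype(int) - prev.astype(int)) > thresh
    d2 = np.abs(nxt.astype(int) - curr.astype(int)) > thresh
    return d1 & d2  # logical AND suppresses ghosting artifacts

def background_difference(frame, background, thresh=25):
    """Motion mask from a (slowly updated) background model."""
    return np.abs(frame.astype(int) - background.astype(int)) > thresh

def fused_mask(prev, curr, nxt, background, thresh=25):
    """Fuse the two cues: the interframe difference localizes motion,
    while the background difference recovers the full silhouette."""
    return (three_frame_difference(prev, curr, nxt, thresh)
            | background_difference(curr, background, thresh))
```

In practice the background model would be updated over time (e.g. a running average over frames classified as static), which is what lets the fused mask survive gradual illumination change.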


2021 ◽  
Author(s):  
C. Wu ◽  
L. Ou

This study investigates what a reference white really means in a complex scene in a virtual environment: specifically, whether the reference white is determined by the brightest white object in the entire environment, or by the brightest white object in the field of view. To this end, three psychophysical experiments were conducted, one situated in a real room and the other two in virtual reality. The results show that colour appearance in VR is comparable to colour appearance in a real space. Regarding the reference white, the brightest white object is not necessarily treated as the reference white, especially when it is located far from the test colour; it must lie within the viewing field of the test colour in order to serve as the reference white.


2021 ◽  
Vol 189 ◽  
pp. 11-26
Author(s):  
Jihong Lee ◽  
Sang Wook Hong ◽  
Sang Chul Chong

2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Photchara Ratsamee ◽  
Yasushi Mae ◽  
Kazuto Kamiyama ◽  
Mitsuhiro Horade ◽  
Masaru Kojima ◽  
...  

Abstract: People with disabilities, such as patients with motor-paralysis conditions, lack independence and cannot move most parts of their bodies except their eyes. Supportive robot technology is highly beneficial for these patients. We propose a gaze-informed location-based (or gaze-based) object segmentation, a core module of successful patient-robot interaction in an object-search task (i.e., a situation in which a robot has to search for and deliver a target object to the patient). We introduce the concepts of gaze tracing (GT) and gaze blinking (GB), which are integrated into our proposed object segmentation technique to yield accurate visual segmentation of unknown objects in a complex scene. Gaze-tracing information serves as a clue to where the target object is located in the scene; gaze blinking is then used to confirm the position of the target object. The effectiveness of our proposed method has been demonstrated using a humanoid robot in experiments with different types of highly cluttered scenes. Based on limited gaze guidance from the user, we achieved an 85% F-score for unknown-object segmentation in an unknown environment.
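As a rough illustration of how gaze tracing (GT) and gaze blinking (GB) might combine to pick a target, consider the sketch below. The segment representation, function name, and selection rule are hypothetical, not taken from the paper:

```python
def select_target(segments, gaze_points, blink_confirmed):
    """segments: {name: (x0, y0, x1, y1)} candidate object regions
    gaze_points: (x, y) samples from gaze tracing (GT)
    blink_confirmed: boolean outcome of the gaze-blinking (GB) check"""
    # GT: count how many gaze samples fall inside each candidate region
    hits = {name: sum(x0 <= x <= x1 and y0 <= y <= y1
                      for x, y in gaze_points)
            for name, (x0, y0, x1, y1) in segments.items()}
    candidate = max(hits, key=hits.get)
    # GB: only commit to the candidate once the user confirms by blinking
    return candidate if blink_confirmed else None
```

The two-stage structure (locate by dwell, confirm by blink) is what keeps accidental glances from triggering a robot action.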


2021 ◽  
Vol 3 (10) ◽  
pp. 876-884
Author(s):  
Darui Jin ◽  
Ying Chen ◽  
Yi Lu ◽  
Junzhang Chen ◽  
Peng Wang ◽  
...  

2021 ◽  
Author(s):  
Jian Carlo Nocon ◽  
Howard J Gritton ◽  
Nicholas M James ◽  
Xue Han ◽  
Kamal Sen

Cortical representations underlying a wide range of cognitive abilities, which employ both rate- and spike-timing-based coding, emerge from underlying cortical circuits with a tremendous diversity of cell types. However, cell-type-specific contributions to cortical coding are not well understood. Here, we investigate the role of parvalbumin (PV) neurons in cortical complex scene analysis. Many complex scenes contain sensory stimuli, e.g., natural sounds, images, odors, or vibrations, that are highly dynamic in time and compete with stimuli at other locations in space. PV neurons are thought to play a fundamental role in sculpting cortical temporal dynamics, yet their specific role in encoding complex scenes via timing-based codes, and the robustness of such temporal representations to spatial competition, have not been investigated. We address these questions in auditory cortex using a cocktail-party-like paradigm, integrating electrophysiology, optogenetic manipulations, and a family of novel spike-distance metrics to dissect the contributions of PV neurons to rate- and timing-based coding. We find that PV neurons improve cortical discrimination of dynamic naturalistic sounds in a cocktail-party-like setting by enhancing rapid temporal modulations in rate and spike-timing reproducibility. Moreover, this temporal representation is maintained in the face of competing stimuli at other spatial locations, providing a robust code for complex scene analysis. These findings provide novel insights into the specific contributions of PV neurons to cortical coding of complex scenes.
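The paper's own spike-distance metrics are not specified here, but the general idea can be illustrated with the classic Victor–Purpura distance, in which a cost parameter q sets the sensitivity to spike timing. This is a textbook stand-in, not the authors' metric:

```python
def victor_purpura(train_a, train_b, q):
    """Victor-Purpura spike-train distance via dynamic programming.
    Cost 1 to insert/delete a spike, q * |dt| to shift one in time;
    q -> 0 recovers a pure rate code (|n_a - n_b|), large q a timing code."""
    n, m = len(train_a), len(train_b)
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = float(i)          # delete every spike of train_a
    for j in range(1, m + 1):
        d[0][j] = float(j)          # insert every spike of train_b
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d[i][j] = min(d[i - 1][j] + 1.0,                # delete
                          d[i][j - 1] + 1.0,                # insert
                          d[i - 1][j - 1]                   # shift
                          + q * abs(train_a[i - 1] - train_b[j - 1]))
    return d[n][m]
```

Sweeping q and asking at which value discrimination peaks is a standard way to decide whether a neural code is rate-like or timing-like.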


2021 ◽  
Vol 2021 ◽  
pp. 1-16
Author(s):  
Shoubao Su ◽  
Wei Zhao ◽  
Chishe Wang

Multirobot motion planning is one of the critical techniques in edge-intelligent systems and involves a variety of algorithms, such as map modeling, path search, and trajectory optimization and smoothing. To overcome slow running speed and imbalanced energy consumption, a swarm-intelligence solution based on parallel computing is proposed to plan motion paths for multiple robots with many task nodes in a complex scene containing multiple irregularly shaped obstacles. The objective is to find a smooth trajectory under the constraints of the shortest total distance and energy-balanced consumption for all robots traveling between nodes. In a practical scenario, imbalanced task allocation will inevitably lead to some robots stopping along the way. Thus, we first model a gridded scene as a weighted MTSP (multi-traveling salesman problem) in which the weights are the energies of the obstacle constraints and path length. Then, a hybridization of particle swarm and ant colony optimization (GPSO-AC), built on the Compute Unified Device Architecture (CUDA) platform, is presented to find the optimal path for the weighted MTSPs. Next, we improve the A* algorithm to generate a weighted obstacle-avoidance path on the gridded map; however, this path still contains many sharp turns. Therefore, an improved smooth grid-path algorithm that integrates the robots' dynamic constraints is proposed to smooth the trajectory so that it better matches the laws of robot motion and more realistically simulates multiple robots in a real scene.
Finally, experimental comparisons with other methods on the designed GPU platform demonstrate the applicability of the proposed algorithm in different scenarios. Our method strikes a good balance between energy consumption and optimality, with significantly faster and better performance than the other approaches considered. The effect of the adjustment coefficient q on the performance of the algorithm is also discussed in the experiments.
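For reference, the grid-search step the paper builds on can be sketched as a standard weighted A* over a cost grid. This is a generic illustration under assumed data structures (cost-per-cell grid, 4-connected moves), not the paper's improved variant:

```python
import heapq

def a_star(grid, start, goal):
    """grid[r][c]: traversal cost (>= 1), or None for an obstacle.
    Manhattan-distance heuristic; 4-connected moves; returns the path
    as a list of (row, col) cells, or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, [start])]   # (f, g, node, path)
    best = {start: 0}                            # cheapest g found per cell
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = node[0] + dr, node[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] is not None:
                ng = g + grid[r][c]              # accumulated weighted cost
                if ng < best.get((r, c), float("inf")):
                    best[(r, c)] = ng
                    heapq.heappush(open_set,
                                   (ng + h((r, c)), ng, (r, c), path + [(r, c)]))
    return None
```

Raising a cell's cost near obstacles, as the weighted variant does, pushes the planner toward safer corridors; the remaining sharp turns are then handled by a separate smoothing pass.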


Sensors ◽  
2021 ◽  
Vol 21 (17) ◽  
pp. 5850
Author(s):  
Wei Li ◽  
Hongtai Cheng ◽  
Xiaohua Zhang

Recognizing 3D objects and estimating their poses in a complex scene is a challenging task. Sample Consensus Initial Alignment (SAC-IA) is a commonly used point-cloud-based method for this goal, but its efficiency is low and it cannot be used in real-time applications. This paper analyzes the most time-consuming part of the SAC-IA algorithm: sample generation and evaluation. We propose two improvements to increase efficiency. In the initial alignment stage, instead of sampling the key points, the correspondence pairs between model and scene key points are generated in advance and chosen in each iteration, which removes redundant correspondence-search operations. In addition, a geometric filter is proposed to keep invalid samples out of the evaluation step, which is the most time-consuming operation because it requires transforming one point cloud and computing the distance between the two point clouds. The geometric filter significantly increases sample quality and reduces the number of samples required. Experiments are performed on our own datasets captured with a Kinect v2 camera and on the Bologna 1 dataset. The results show that the proposed method increases the efficiency of the original SAC-IA method by a factor of 10–30 without sacrificing accuracy.
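The geometric filter exploits the fact that a rigid transform preserves pairwise distances, so a correspondence sample whose model-side and scene-side point distances disagree can be discarded before the expensive transform-and-evaluate step. A minimal sketch of that idea follows; the tolerance value and function name are assumptions, not taken from the paper:

```python
import math

def geometric_filter(model_pts, scene_pts, tol=0.05):
    """Pre-check a correspondence sample: under a rigid transform, the
    pairwise distances among the sampled model points must match those
    among the matched scene points, up to a relative tolerance."""
    n = len(model_pts)
    for i in range(n):
        for j in range(i + 1, n):
            dm = math.dist(model_pts[i], model_pts[j])
            ds = math.dist(scene_pts[i], scene_pts[j])
            if abs(dm - ds) > tol * max(dm, ds, 1e-9):
                return False  # distances inconsistent: reject the sample
    return True  # sample is geometrically plausible, worth evaluating
```

Because this check costs only a handful of distance computations versus transforming an entire cloud, even rejecting a modest fraction of samples pays for itself immediately.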


Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 5103
Author(s):  
Yongzhong Fu ◽  
Xiufeng Li ◽  
Zungang Hu

CNN (convolutional neural network)-based small-target detection techniques for static complex scenes have been applied in many fields, but certain technical challenges remain. This paper proposes a novel high-resolution small-target detection network for complex scenes, named IIHNet (information-interworking high-resolution network), based on information-interworking processing. The proposed network not only outputs a high-resolution representation of a small target but also keeps the detection network simple and efficient. Its key characteristic is that target features are divided into three categories according to image resolution: high-resolution, medium-resolution, and low-resolution features. Basic features are extracted by convolution at the initial layer of the network; convolution is then carried out synchronously in the three resolution channels, with information fused in the horizontal and vertical directions of the network. At the same time, feature information is repeatedly reused and augmented along the channel convolution direction. Experimental results show that the proposed network achieves higher inference performance than the compared networks without any compromise in detection quality.
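The horizontal fusion across the three resolution channels can be illustrated with plain NumPy: each branch receives the other two branches resampled to its own resolution and sums them in. This is a schematic of the exchange pattern only (the pooling and upsampling choices are assumptions), not the IIHNet implementation:

```python
import numpy as np

def pool2(x):
    """2x average pooling (downsample by a factor of two)."""
    return x.reshape(x.shape[0] // 2, 2, x.shape[1] // 2, 2).mean(axis=(1, 3))

def up2(x):
    """2x nearest-neighbour upsampling."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def fuse(high, mid, low):
    """One horizontal fusion step: every resolution branch receives the
    other two, resampled to its own resolution, and adds them in."""
    return (high + up2(mid) + up2(up2(low)),
            pool2(high) + mid + up2(low),
            pool2(pool2(high)) + pool2(mid) + low)
```

In the real network each branch would also apply convolutions between fusion steps; the point here is only that every branch sees all three resolutions, which is what preserves small-target detail at high resolution while keeping semantic context from the coarser channels.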

