urban scene
Recently Published Documents

TOTAL DOCUMENTS: 258 (FIVE YEARS: 85)
H-INDEX: 17 (FIVE YEARS: 4)

2022, Vol 13 (1), pp. 77
Author(s): Khireddine Dounia, Aichour Boudjemaa

This paper examines the ecological processes underlying the various manifestations of visual pollution, defined as any element of the physical environment altered by human interventions in the natural or built environment in a way that distorts it and harms the public health of citizens. The aim is to understand its causes in order to reach a balanced urban scene and, in turn, protect human health. Its features appear in the visual aspects of public space, especially roads, as a result of the misuse of that space, which stems from wrong behaviours and from the absence of a planning system, and which ends up emptying the architectural image of the city of its content.

Received: 11 October 2021 / Accepted: 20 November 2021 / Published: 5 January 2022


2022, pp. 848-862
Author(s): Caterina Mele

The term smart city is often treated as a synonym for sustainable city. The word smart implies the use of digital technology to make processes and services more efficient and to connect the different actors on the urban scene. However, this is no guarantee of sustainability. A city can become sustainable only if it changes its metabolism from linear to circular, as in nature's ecosystems. For this to happen, it is necessary to overcome the paradigm of quantitative economic growth based on the infinite substitutability between natural and economic capital. If smart city governance stakeholders primarily pursue profit according to the logic of the free market, the city may be smarter and more efficient in its use of energy and resources, but it is not sustainable, and often not even inclusive. The challenge of sustainability implies a paradigm shift and the use of digital technologies in the service of the collective good. In this context, after a general analysis of the characteristics of smart cities, the chapter focuses on an Italian case study, Turin Smart City.


2022, Vol 88 (1), pp. 65-72
Author(s): Wanxuan Geng, Weixun Zhou, Shuanggen Jin

Traditional urban scene-classification approaches focus on images taken either by satellite or in aerial view. Although single-view images are able to achieve satisfactory results for scene classification in most situations, the complementary information provided by other image views is needed to further improve performance. Therefore, we present a complementary information-learning model (CILM) to perform multi-view scene classification of aerial and ground-level images. Specifically, the proposed CILM takes aerial and ground-level image pairs as input to learn view-specific features for later fusion to integrate the complementary information. To train CILM, a unified loss consisting of cross entropy and contrastive losses is exploited to force the network to be more robust. Once CILM is trained, the features of each view are extracted via the two proposed feature-extraction scenarios and then fused to train the support vector machine classifier for classification. The experimental results on two publicly available benchmark data sets demonstrate that CILM achieves remarkable performance, indicating that it is an effective model for learning complementary information and thus improving urban scene classification.
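The abstract gives no implementation details, but the unified loss it describes (cross entropy on each view's classification head plus a contrastive term on the paired aerial/ground features) can be sketched as follows. The heads, the margin, and the weighting factor lam are illustrative assumptions, not the authors' settings.

```python
# Minimal sketch of a unified loss combining cross entropy and a
# contrastive term for paired aerial/ground features (assumed setup).
import torch
import torch.nn.functional as F

def unified_loss(aerial_feat, ground_feat, aerial_logits, ground_logits,
                 labels, match, margin=1.0, lam=0.5):
    # Classification term: cross entropy on both view-specific heads.
    ce = F.cross_entropy(aerial_logits, labels) + F.cross_entropy(ground_logits, labels)
    # Contrastive term: matching pairs are pulled together, non-matching
    # pairs are pushed apart up to the margin.
    dist = F.pairwise_distance(aerial_feat, ground_feat)
    pull = match * dist.pow(2)
    push = (1 - match) * F.relu(margin - dist).pow(2)
    contrastive = 0.5 * (pull + push).mean()
    return ce + lam * contrastive
```

At test time, the view-specific features could be concatenated and passed to an SVM classifier (for example, scikit-learn's SVC), mirroring the fuse-then-classify procedure described above.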


2021, pp. 1-11
Author(s): Zhifan Wang, Tong Xin, Shidong Wang, Haofeng Zhang

The ubiquitous availability of cost-effective cameras has made large-scale collection of street-view data a straightforward endeavour. Yet the effective use of these data to assist autonomous driving remains a challenge, in particular the limited exploration and exploitation of stereo images with their abundant perceptible depth. In this paper, we propose a novel Depth-embedded Instance Segmentation Network (DISNet), which effectively improves instance-segmentation performance by incorporating the depth information of stereo images. The proposed network takes binocular images as input to observe the displacement of objects and estimate the corresponding depth perception without additional supervision. Furthermore, we introduce a new module for computing the depth cost-volume, which can be integrated with the colour cost-volume to jointly capture useful disparities in stereo images. The shared-weights structure of a Siamese network is applied to learn the intrinsic information of stereo images while reducing the computational burden. Extensive experiments have been carried out on publicly available datasets (i.e., Cityscapes and KITTI), and the obtained results clearly demonstrate its superiority in segmenting instances at different depths.
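As a rough illustration of the idea (not the authors' implementation), a disparity cost-volume over left/right feature maps from a shared-weight backbone can be built as below; the correlation operation and the max_disp value are assumptions for the sketch.

```python
# Minimal sketch of building a disparity cost volume from left/right
# feature maps, in the spirit of the depth cost-volume described above.
import torch

def build_cost_volume(left_feat, right_feat, max_disp=48):
    # left_feat, right_feat: (B, C, H, W) features from the shared-weight
    # (Siamese) backbone applied to the left and right images.
    B, C, H, W = left_feat.shape
    cost = left_feat.new_zeros(B, C, max_disp, H, W)
    for d in range(max_disp):
        if d == 0:
            cost[:, :, d] = left_feat * right_feat
        else:
            # Correlate left pixels with right pixels shifted by d columns.
            cost[:, :, d, :, d:] = left_feat[:, :, :, d:] * right_feat[:, :, :, :-d]
    return cost  # (B, C, D, H, W), later fused with the colour cost-volume
```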


Sensors, 2021, Vol 21 (24), pp. 8382
Author(s): Hongjae Lee, Jiyoung Jung

Urban scene modeling is a challenging but essential task for various applications, such as 3D map generation, city digitization, and AR/VR/metaverse applications. To model man-made structures, such as roads and buildings, which are the major components in general urban scenes, we present a clustering-based plane segmentation neural network using 3D point clouds, called hybrid K-means plane segmentation (HKPS). The proposed method segments unorganized 3D point clouds into planes by training the neural network to estimate the appropriate number of planes in the point cloud based on hybrid K-means clustering. We consider both the Euclidean distance and cosine distance to cluster nearby points in the same direction for better plane segmentation results. Our network does not require any labeled information for training. We evaluated the proposed method using the Virtual KITTI dataset and showed that our method outperforms conventional methods in plane segmentation. Our code is publicly available.
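The abstract does not give the clustering formulation, but an assignment step that mixes Euclidean distance between points with cosine distance between their normal directions could look like the sketch below; the use of normals and the weighting alpha are assumptions for illustration, not the paper's exact method.

```python
# Hedged sketch of a "hybrid" K-means assignment step combining
# Euclidean and cosine distances for plane-aware clustering.
import numpy as np

def hybrid_assign(points, normals, centers, center_normals, alpha=0.5):
    # Euclidean distance between points and cluster centers: (N, K)
    eucl = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=-1)
    # Cosine distance between point normals and cluster normals: (N, K)
    n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    c = center_normals / np.linalg.norm(center_normals, axis=1, keepdims=True)
    cos_dist = 1.0 - n @ c.T
    # Normalise the Euclidean term so the two distances are comparable.
    eucl = eucl / (eucl.max() + 1e-8)
    return np.argmin(alpha * eucl + (1 - alpha) * cos_dist, axis=1)
```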


2021
Author(s): Hangzhi Jiang, Shengcai Liao, Jinpeng Li, Véronique Prinet, Shiming Xiang

2021, Vol 40 (6), pp. 1-15
Author(s): Han Zhang, Yucong Yao, Ke Xie, Chi-Wing Fu, Hao Zhang, ...

2021, Vol 13 (22), pp. 4497
Author(s): Jianjun Zou, Zhenxin Zhang, Dong Chen, Qinghua Li, Lan Sun, ...

Point cloud registration is the foundation and key step for many vital applications, such as digital cities, autonomous driving, passive positioning, and navigation. The diversity of spatial objects and the structural complexity of object surfaces are the main challenges for the registration problem. In this paper, we propose a graph attention capsule model (named GACM) for the efficient registration of terrestrial laser scanning (TLS) point clouds in urban scenes, which fuses graph attention convolution and a three-dimensional (3D) capsule network to extract local point cloud features and obtain 3D feature descriptors. These descriptors take into account differences in spatial structure and point density between objects and make the spatial features of ground objects more prominent. During the training process, we used both matched and non-matched points to train the model. In the registration test, the points in the neighbourhood of each keypoint were fed to the trained network to obtain feature descriptors, and the rotation and translation matrices were then computed using a K-dimensional (KD) tree and the random sample consensus (RANSAC) algorithm. Experiments show that the proposed method achieves more efficient registration results and higher robustness than other state-of-the-art registration methods in the pairwise registration of point clouds.
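The final matching-and-alignment stage described above (nearest-neighbour descriptor matching with a KD tree followed by RANSAC estimation of the rigid transform) can be sketched in a few lines; the inlier tolerance, sample size, and iteration count are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch of descriptor matching with a KD tree plus RANSAC
# estimation of a rigid transform between two point clouds.
import numpy as np
from scipy.spatial import cKDTree

def estimate_rigid(src, dst):
    # Least-squares rotation/translation between matched point sets (Kabsch).
    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def ransac_register(src_pts, dst_pts, src_desc, dst_desc,
                    iters=1000, inlier_tol=0.5, seed=0):
    rng = np.random.default_rng(seed)
    # Match each source descriptor to its nearest target descriptor.
    _, nn = cKDTree(dst_desc).query(src_desc)
    matches = np.column_stack([np.arange(len(src_pts)), nn])
    best_R, best_t, best_inliers = np.eye(3), np.zeros(3), -1
    for _ in range(iters):
        sample = matches[rng.choice(len(matches), 3, replace=False)]
        R, t = estimate_rigid(src_pts[sample[:, 0]], dst_pts[sample[:, 1]])
        resid = np.linalg.norm((src_pts[matches[:, 0]] @ R.T + t)
                               - dst_pts[matches[:, 1]], axis=1)
        inliers = (resid < inlier_tol).sum()
        if inliers > best_inliers:
            best_R, best_t, best_inliers = R, t, inliers
    return best_R, best_t
```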

