Settlement system in the Angara coastal area in the 17th–21st centuries in terms of strategic planning

Author(s):  
O. G. Litvinova

One of the fundamental tasks of urban planning today is the study of settlement system properties. In Russian and foreign historical and urban planning scholarship, settlement is studied according to the hierarchical location of settlements: small and medium-sized settlements are treated as elementary lower-level units of large cities, and their structure and formation processes are not examined. Accordingly, they are rarely considered when elaborating strategic programs of regional development. The paper proposes an urban retrospective method that provides a deep, large-scale analysis of the settlement system in the coastal area of the Angara River. The research is based on cartographic sources produced by governmental institutions whose activity depended on statistical data: the Ministry of Internal Affairs, the Ministry of Agriculture, and the Ministry of Railways. A comparative analysis of these sources allows the settlement system of small settlements in the Angara coastal area to be modeled and identified for different periods. Significant results include quantitative data on small settlements, which have so far attracted little interest from today's urban planners.

Author(s):  
Gabriele Scalia

Over the last few years, machine learning has revolutionized countless areas and fields. Nowadays, AI holds promise for analyzing data, extracting knowledge, and driving discovery across many scientific domains such as chemistry, biology, and genomics. However, the specific challenges posed by scientific data demand that machine learning techniques be adapted to new requirements. We investigate machine learning-driven scientific data analysis, focusing on a set of key requirements. These include the management of uncertainty for complex data and models, the estimation of system properties from low-volume and imprecise collected data, the support of scientific model development through large-scale analysis of experimental data, and the machine learning-driven integration of complementary experimental technologies.
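As a minimal sketch of one listed requirement, estimating a system property with quantified uncertainty from low-volume, imprecise data, the toy example below fits a bootstrap ensemble of linear models to a small synthetic dataset. All data, names, and the choice of a linear model are assumptions for illustration only and do not reproduce the author's methods.

```python
# Hedged sketch: bootstrap ensemble for predictive uncertainty on a small,
# noisy "experimental" dataset (all values are synthetic and illustrative).
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 30 noisy measurements of a property y versus one input x.
x = rng.uniform(0.0, 1.0, size=30)
y = 2.5 * x + 0.3 + rng.normal(scale=0.2, size=30)  # hidden relation + noise

def fit_linear(xs, ys):
    """Least-squares fit y ~ a*x + b; returns (a, b)."""
    A = np.column_stack([xs, np.ones_like(xs)])
    coef, *_ = np.linalg.lstsq(A, ys, rcond=None)
    return coef

# Refit the model on bootstrap resamples to estimate how much the prediction
# varies given the limited, imprecise sample.
n_boot = 200
x_query = np.linspace(0.0, 1.0, 50)
preds = np.empty((n_boot, x_query.size))
for i in range(n_boot):
    idx = rng.integers(0, x.size, size=x.size)
    a, b = fit_linear(x[idx], y[idx])
    preds[i] = a * x_query + b

mean_pred = preds.mean(axis=0)
lo, hi = np.percentile(preds, [2.5, 97.5], axis=0)  # ~95% uncertainty band
print(f"prediction at x={x_query[25]:.2f}: {mean_pred[25]:.2f} "
      f"(95% band {lo[25]:.2f} to {hi[25]:.2f})")
```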


Urban Science ◽  
2021 ◽  
Vol 5 (2) ◽  
pp. 42
Author(s):  
Dolores Brandis García

Since the late 20th century, major European cities have exhibited large projects driven by neoliberal urban planning policies whose aim is to enhance their position on the global market. By locating these projects in central city areas, they also heighten and reinforce their privileged situation within the city as a whole, thus contributing to deepening the centre–periphery rift. The starting point for this study is the significance and scope of large projects in metropolitan cities’ urban planning agendas since the final decade of the 20th century. The aim of this article is to demonstrate the correlation between the various opposing conservative and progressive urban policies and the projects put forward for the city of Madrid. A study of documentary sources and the strategies deployed by public and private agents is interpreted in the light of a process during which the city has had a succession of alternating governments defending opposing urban development models. This analysis allows us to conclude that the predominant large-scale projects, proposed under conservative policies, have contributed to deepening the centre–periphery rift observed in the city.


2020 ◽  
pp. 1-26
Author(s):  
Qinwen Hu ◽  
Muhammad Rizwan Asghar ◽  
Nevil Brownlee

HTTPS refers to an application-specific implementation that runs HyperText Transfer Protocol (HTTP) on top of Secure Socket Layer (SSL) or Transport Layer Security (TLS). HTTPS is used to provide encrypted communication and secure identification of web servers and clients, for purposes such as online banking and e-commerce. However, many HTTPS vulnerabilities have been disclosed in recent years. Although many studies have pointed out that these vulnerabilities can lead to serious consequences, domain administrators seem to ignore them. In this study, we evaluate the HTTPS security level of Alexa’s top 1 million domains from two perspectives. First, we explore which popular sites are still affected by well-known security issues. Our results show that less than 0.1% of HTTPS-enabled servers in the measured domains are still vulnerable to known attacks including Rivest Cipher 4 (RC4), Compression Ratio Info-Leak Mass Exploitation (CRIME), Padding Oracle On Downgraded Legacy Encryption (POODLE), Factoring RSA Export Keys (FREAK), Logjam, and Decrypting Rivest–Shamir–Adleman (RSA) using Obsolete and Weakened eNcryption (DROWN). Second, we assess the security level of the digital certificates used by each measured HTTPS domain. Our results highlight that less than 0.52% of domains use an expired certificate, 0.42% of HTTPS certificates contain mismatched hostnames, and 2.59% of HTTPS domains use a self-signed certificate. The domains we investigate in our study cover five regions (ARIN, RIPE NCC, APNIC, LACNIC, and AFRINIC) and 61 different categories such as online shopping websites, banking websites, educational websites, and government websites. Although our results show that the problem still exists, we find that changes have taken place as HTTPS vulnerabilities were disclosed. Through this three-year study, we found that more attention has been paid to the use and configuration of HTTPS. For example, more and more domains have begun to enable the HTTPS protocol to ensure a secure communication channel between users and websites. In the first measurement, we observed that many domains were still using the TLS 1.0 and 1.1, SSL 2.0, and SSL 3.0 protocols to support clients running outdated systems. As previous studies revealed the security risks of these protocols, in subsequent measurements we found that the majority of domains updated their TLS configuration in a timely manner. Our 2020 results suggest that most HTTPS domains use the TLS 1.2 protocol, while some HTTPS domains remain vulnerable to existing known attacks. As academics and industry professionals continue to disclose attacks against HTTPS and recommend secure HTTPS configurations, we found that the number of vulnerable domains is gradually decreasing every year.
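As a minimal illustration of the kind of per-domain checks described above, the sketch below probes a single host for its negotiated TLS version and certificate expiry using Python's standard ssl module. It is a toy under stated assumptions, not the authors' measurement pipeline, and the hostname is only a placeholder.

```python
# Illustrative sketch (not the authors' toolchain): probe one HTTPS domain for
# its negotiated TLS version and basic certificate properties, using only the
# Python standard library.
import socket
import ssl
import time

def probe_https(hostname: str, port: int = 443, timeout: float = 5.0) -> dict:
    """Return the negotiated TLS version and certificate expiry status for a host."""
    ctx = ssl.create_default_context()  # verifies the chain and the hostname
    with socket.create_connection((hostname, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
            expires = ssl.cert_time_to_seconds(cert["notAfter"])
            return {
                "tls_version": tls.version(),      # e.g. 'TLSv1.2' or 'TLSv1.3'
                "cert_expired": expires < time.time(),
                "issuer": dict(pair[0] for pair in cert["issuer"]),
            }

if __name__ == "__main__":
    print(probe_https("example.com"))  # placeholder hostname
    # Note: self-signed certificates and hostname mismatches cause wrap_socket()
    # to raise ssl.SSLCertVerificationError, so detecting those cases requires a
    # second probe with verification relaxed.
```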


2021 ◽  
pp. 089443932110068
Author(s):  
Aleksandra Urman ◽  
Mykola Makhortykh ◽  
Roberto Ulloa

We examine how six search engines filter and rank information in relation to queries on the U.S. 2020 presidential primary elections under default (i.e., nonpersonalized) conditions. For that, we utilize an algorithmic auditing methodology that uses virtual agents to conduct a large-scale analysis of algorithmic information curation in a controlled environment. Specifically, we look at the text search results for the queries “us elections,” “donald trump,” “joe biden,” and “bernie sanders” on Google, Baidu, Bing, DuckDuckGo, Yahoo, and Yandex during the 2020 primaries. Our findings indicate substantial differences in the search results between search engines and multiple discrepancies within the results generated for different agents using the same search engine. This highlights that whether users see certain information is decided by chance due to the inherent randomization of search results. We also find that some search engines prioritize different categories of information sources with respect to specific candidates. These observations demonstrate that algorithmic curation of political information can create information inequalities among search engine users even under nonpersonalized conditions. Such inequalities are particularly troubling considering that search results are highly trusted by the public and can shift the opinions of undecided voters, as demonstrated by previous research.
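One way to make the reported discrepancies between agents concrete is to compare the overlap of the result lists they receive for the same query. The hedged sketch below computes a simple Jaccard overlap between two hypothetical agents' top result URLs; the agent infrastructure, engines, and example URLs are illustrative assumptions, not the authors' audit code.

```python
# Illustrative sketch (not the authors' code): quantify how much two "agents"
# disagree on results for the same query via the Jaccard overlap of their
# top-n result URLs.
from typing import Iterable

def jaccard_overlap(results_a: Iterable[str], results_b: Iterable[str]) -> float:
    """Jaccard similarity of two result sets: |A ∩ B| / |A ∪ B|."""
    a, b = set(results_a), set(results_b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Hypothetical top-5 results returned to two agents for the same query.
agent_1 = ["nytimes.com/a", "cnn.com/b", "wikipedia.org/c", "foxnews.com/d", "wsj.com/e"]
agent_2 = ["wikipedia.org/c", "cnn.com/b", "reuters.com/f", "nytimes.com/a", "apnews.com/g"]

print(f"Jaccard overlap: {jaccard_overlap(agent_1, agent_2):.2f}")  # 0.43 for this toy data
```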

