random construction
Recently Published Documents


TOTAL DOCUMENTS: 20 (FIVE YEARS: 7)
H-INDEX: 5 (FIVE YEARS: 1)

2021 ◽  
Author(s):  
Cong Xie ◽  
Jian Yang ◽  
Hai Tian ◽  
Danfeng Zhao ◽  
Tongzhou Han

2021 ◽  
Vol 30 (1) ◽  
pp. 182-194
Author(s):  
Abdalrahman Qubaa ◽  
Saja Al-Hamdani

Unmanned aerial vehicles (UAVs), or drones, have made great progress in aerial surveys for researching and discovering heritage sites and archaeological areas, particularly since gaining the technical capability to carry various sensors onboard, whether conventional cameras, multispectral cameras, or thermal sensors. The objective of this research is to use drone technology and the k-means clustering algorithm, for the first time in Nineveh Governorate in Iraq, to reveal the extent of civil encroachment and random construction, as well as the looting and theft occurring in the archaeological areas. A DJI Phantom 4 Pro drone was used, along with the specialized Pix4D program to process the drone images and build mosaics from them. Multiple flights were performed to survey locations throughout the area and compare them with satellite images from different years. The drone data were classified using the k-means clustering algorithm. The classification results for three different time periods indicated that the proportion of archaeological land decreased from 90.31% in 2004 to 25.29% in 2018, revealing the extent of the violations committed against the archaeological area. The study also emphasized the importance of directing local antiquities authorities to use drone technology to obtain periodic statistical and methodological reports, so as to assess archaeological damage and to prevent the encroachment on, and theft and looting of, these sites.
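As a rough illustration of the classification step described in this abstract, the sketch below applies k-means to the pixels of an orthomosaic. The file name, the RGB feature space, the choice of four clusters, and the use of scikit-learn are illustrative assumptions, not details taken from the paper.

```python
# A minimal sketch of unsupervised land-cover classification with k-means.
# The input file, the RGB feature space, and k=4 are illustrative assumptions.
import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

mosaic = np.asarray(Image.open("mosaic_2018.png").convert("RGB"))
pixels = mosaic.reshape(-1, 3).astype(np.float32)

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(pixels)

# Area share of each cluster; comparing shares across years (2004 vs. 2018)
# is what supports statements like "archaeological land fell from 90% to 25%".
shares = np.bincount(labels) / labels.size
for cluster, share in enumerate(shares):
    print(f"cluster {cluster}: {share:.2%} of the surveyed area")
```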


Author(s):  
Mohammad Wishah Abedel Kareem Wishah

The research aimed to identify the population growth rates, the urban expansion, and the factors that affected that expansion in the city of Salt, and also to identify the causes of the population growth and rapid urban expansion there. The research followed the historical and inductive approaches. It reached a number of results, the most important of which is that the city of Salt has witnessed an abnormal boom in its population as a result of the wars in Palestine and the Syrian refugee crisis, which has driven the growth of its population, the increase in its area, and its urban expansion. This has led to a multiplicity of functions within the city and to the intermixing of residential, artisanal, industrial, commercial, and agricultural land uses. The population of the city of Salt grew from 61,159 in 1994 to 99,890 in 2015, which in turn contributed to the growth of the city's size and area. This population increase has produced unorganized urban expansion towards the main streets and mountain slopes and has led to the emergence of random construction and overlapping land uses. In addition, the research confirmed that the urban expansion during the study period (1994-2015) proceeded horizontally more than vertically, at the expense of agricultural lands.
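For a sense of scale, the census figures quoted above imply an average annual growth rate of roughly 2.4%. The short check below is a back-of-the-envelope calculation from the abstract's numbers, not a figure reported by the research.

```python
# Compound annual growth rate implied by the abstract's census figures
# (61,159 residents in 1994, 99,890 in 2015); an illustrative check only.
p_1994, p_2015 = 61_159, 99_890
years = 2015 - 1994
cagr = (p_2015 / p_1994) ** (1 / years) - 1
print(f"average annual growth: {cagr:.2%}")  # about 2.36% per year
```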


Author(s):  
Jack Brimberg ◽  
Zvi Drezner

In this paper, we present two new approaches for finding good starting solutions to the planar p-median problem. Both methods rely on a discrete approximation of the continuous model that restricts the facility locations to the given set of demand points. The first method adapts the first phase of a greedy random construction algorithm proposed for the minimum sum-of-squares clustering problem. The second implements a simple descent procedure based on vertex exchange. The resulting solution is then used as a starting point in a local search heuristic that iterates between Cooper's well-known alternating locate-allocate method and a transfer follow-up step with a new, more effective selection rule. Extensive computational experiments show that (1) using good starting solutions can significantly improve the performance of local search, and (2) a hybrid algorithm that combines good starting solutions with a "deep" local search can be an effective strategy for solving a variety of planar p-median problems.
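The locate-allocate iteration that the local search is built around can be sketched compactly. The version below, assuming Euclidean distances and a Weiszfeld update for the locate step, is a minimal illustration of Cooper's alternating scheme, not the authors' full heuristic: it omits the transfer follow-up step and the new selection rule.

```python
# A minimal sketch of Cooper's alternating locate-allocate heuristic for the
# planar p-median problem, assuming Euclidean distances. The Weiszfeld locate
# step and all parameters are illustrative; the transfer step is omitted.
import numpy as np

def weiszfeld(pts, iters=50, eps=1e-9):
    """Approximate geometric median (1-median) of a set of planar points."""
    y = pts.mean(axis=0)
    for _ in range(iters):
        d = np.maximum(np.linalg.norm(pts - y, axis=1), eps)
        w = 1.0 / d
        y = (pts * w[:, None]).sum(axis=0) / w.sum()
    return y

def cooper(points, facilities, max_iters=100):
    facilities = facilities.copy()
    for _ in range(max_iters):
        # allocate: assign each demand point to its nearest facility
        dists = np.linalg.norm(points[:, None, :] - facilities[None, :, :], axis=2)
        assign = dists.argmin(axis=1)
        # locate: move each facility to the 1-median of its assigned points
        new = np.array([weiszfeld(points[assign == j]) if np.any(assign == j)
                        else facilities[j] for j in range(len(facilities))])
        if np.allclose(new, facilities):
            break  # converged to a local optimum
        facilities = new
    return facilities, assign

rng = np.random.default_rng(0)
demand = rng.random((200, 2))
# start from p randomly chosen demand points, echoing the discrete
# approximation used by both construction methods described above
start = demand[rng.choice(len(demand), size=5, replace=False)]
fac, assign = cooper(demand, start)
print("total distance:", np.linalg.norm(demand - fac[assign], axis=1).sum())
```

Because the procedure only converges to a local optimum, the quality of the starting facilities matters, which is exactly the motivation for the paper's construction methods.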


2019 ◽  
Vol 88 (4) ◽  
pp. 711-725 ◽  
Author(s):  
Alessandro Neri ◽  
Anna-Lena Horlemann-Trautmann

2019 ◽  
Author(s):  
David Conlon

Ramsey's Theorem is among the best-known results in combinatorics. The theorem states that any two-edge-coloring of a sufficiently large complete graph contains a large monochromatic complete subgraph. Indeed, any two-edge-coloring of a complete graph with $N = 4^{t-o(t)}$ vertices contains a monochromatic copy of $K_t$. On the other hand, a probabilistic argument yields that there exists a two-edge-coloring of a complete graph with $N = 2^{t/2+o(t)}$ vertices with no monochromatic copy of $K_t$. Much attention has been paid to improving these classical bounds, but so far only improvements to lower-order terms have been obtained. A natural question in this setting is to ask whether every two-edge-coloring of a sufficiently large complete graph contains a monochromatic copy of $K_t$ that can be extended in many ways to a monochromatic copy of $K_{t+1}$. Specifically, Erdős, Faudree, Rousseau and Schelp asked in the 1970s whether every two-edge-coloring of $K_N$ contains a monochromatic copy of $K_t$ that can be extended in at least $(1-o_k(1))2^{-t}N$ ways to a monochromatic copy of $K_{t+1}$. A random two-edge-coloring of $K_N$ witnesses that this would be best possible. The intuition coming from random constructions can be misleading; for example, a famous construction by Thomason shows the existence of a two-edge-coloring of a complete graph with fewer monochromatic copies of $K_t$ than a random two-edge-coloring. Nevertheless, this paper confirms that the intuition coming from a random construction is correct in this case: the author answers the question of Erdős et al. in the affirmative. The question can be phrased in the language of Ramsey theory as the problem of determining the Ramsey number of book graphs. A book graph $B_t^{(k)}$ is obtained from $K_t$ by adding $k$ new vertices and joining each new vertex to all the vertices of $K_t$. The main result of the paper asserts that every two-edge-coloring of a complete graph with $N = 2^k t + o_k(t)$ vertices contains a monochromatic copy of $B_t^{(k)}$.
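For reference, the bounds discussed in the abstract can be collected in display form. The following is a restatement of the abstract's claims in the usual Ramsey-number notation, with $r(t)$ denoting the diagonal Ramsey number of $K_t$; it adds no result beyond the abstract.

```latex
% Classical diagonal Ramsey bounds and the paper's book-graph result,
% restated from the abstract; r(t) denotes the Ramsey number of K_t.
\[
  2^{t/2 + o(t)} \;\le\; r(t) \;\le\; 4^{\,t - o(t)},
  \qquad
  r\bigl(B_t^{(k)}\bigr) \;\le\; 2^{k} t + o_k(t),
\]
% where a random two-edge-coloring witnesses that the book bound is
% asymptotically tight.
```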


2014 ◽  
Vol 51 (1) ◽  
pp. 92-105 ◽  
Author(s):  
Mark Huber ◽  
Sarah Schott

Computing the value of a high-dimensional integral can often be reduced to the problem of finding the ratio between the measures of two sets. Monte Carlo methods are often used to approximate this ratio, but one set will frequently be exponentially larger than the other, which leads to an exponentially large variance. A standard way of dealing with this problem is to interpolate between the sets with a sequence of nested sets in which neighboring sets have relative measures bounded above by a constant. Choosing such a well-balanced sequence can rarely be done without extensive study of the problem. Here a new approach that obtains such sets automatically is presented. These well-balanced sets allow for faster approximation algorithms for integrals and sums using fewer samples, and for better tempering and annealing Markov chains for generating random samples. Applications, such as finding the partition function of the Ising model and normalizing constants for posterior distributions in Bayesian methods, are discussed.
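The interpolation idea is easy to see in a toy setting. The sketch below estimates the ratio of the area of a small disk to that of the unit square through a chain of nested disks, so each successive ratio stays moderate. The chain, sets, and sample sizes are illustrative choices and are not the paper's adaptive construction, whose point is precisely that such sets are found automatically.

```python
# A minimal sketch of the nested-sets (telescoping product) idea: write the
# ratio of a small set's measure to a large one's as a product of moderate
# ratios over a chain of nested sets. All choices here are illustrative.
import numpy as np

rng = np.random.default_rng(1)

def sample_disk(r, n):
    """Uniform samples from the disk of radius r centred at the origin."""
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    rad = r * np.sqrt(rng.uniform(0.0, 1.0, n))
    return np.column_stack((rad * np.cos(theta), rad * np.sin(theta)))

def sample_square(n):
    """Uniform samples from the unit square [-0.5, 0.5]^2."""
    return rng.uniform(-0.5, 0.5, size=(n, 2))

radii = [0.1, 0.25, 0.5]   # nested disks, the largest inscribed in the square
n = 200_000
estimate = 1.0

# Each factor: fraction of samples from the larger set landing in the next
# smaller set; neighbouring ratios stay bounded away from zero by design.
for small, big in zip(radii, radii[1:]):
    pts = sample_disk(big, n)
    estimate *= np.mean(np.linalg.norm(pts, axis=1) <= small)

pts = sample_square(n)
estimate *= np.mean(np.linalg.norm(pts, axis=1) <= radii[-1])

print(f"estimated ratio: {estimate:.5f}")  # true value: pi * 0.1**2 ~ 0.03142
```

In higher dimensions a direct one-shot estimate degrades exponentially, which is exactly the variance problem the well-balanced chain avoids.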


2010 ◽  
Vol 9 ◽  
pp. 117693511000900 ◽  
Author(s):  
Mike L. Smith ◽  
Andy G. Lynch

Microarray technologies have been an increasingly important tool in cancer research over the last decade, and a number of initiatives have sought to stress the importance of providing and sharing raw microarray data. Illumina BeadArrays pose a particular problem in this regard, as their random construction simultaneously adds value to analysis of the raw data and obstructs the sharing of those data. We present a compression scheme for raw Illumina BeadArray data, designed to ease the burdens of sharing and storing such data, which is implemented in the BeadDataPackR Bioconductor package ( http://bioconductor.org/packages/release/bioc/html/BeadDataPackR.html ). It offers two key advantages over off-the-peg compression tools. First, it uses knowledge of the data formats to achieve greater compression than other approaches; second, it does not need to be decompressed for analysis, as the values held within can be accessed directly.
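As a generic illustration of why format knowledge helps (this is not BeadDataPackR's actual encoding), values stored at a known, fixed decimal precision can be rescaled to integers and delta-encoded losslessly, a regularity a byte-level compressor cannot discover on its own:

```python
# Illustrates format-aware compression in general, not BeadDataPackR's format:
# floats recorded at a known one-decimal precision become exact integers,
# and consecutive values become small deltas that pack into very few bits.
import numpy as np

coords = np.array([112.3, 112.8, 113.4, 114.1, 114.5])  # hypothetical values
ints = np.round(coords * 10).astype(np.int64)   # exact, given the precision
deltas = np.diff(ints, prepend=0)               # [1123, 5, 6, 7, 4]
restored = np.cumsum(deltas) / 10               # lossless round trip
assert np.allclose(restored, coords)
```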

