THE IMPLEMENTATION OF BIG DATA ANALYSIS IN REGULATING ONLINE SHORT-TERM RENTAL BUSINESS: A CASE OF AIRBNB IN BEIJING

Author(s):  
J. Li
F. Biljecki

Abstract. With the fast expansion and controversial impacts of short-term rental platforms such as Airbnb, many cities have called for regulating this new business model. This research establishes an approach to understanding the impact of Airbnb (and similar services) through big data analysis and provides insights potentially useful for its regulation. The paper reveals how Airbnb is influencing Beijing’s neighbourhood housing prices through machine learning and GIS. Machine learning models are developed to analyse the relationship between Airbnb activities in a neighbourhood and prevailing housing prices. The best-fitting model is then used to analyse each neighbourhood’s price sensitivity to increasing Airbnb activity. The results show that the sensitivity varies: some neighbourhoods are likely to be more price sensitive to Airbnb activities, while others are likely to be price robust. Finally, the paper gives policy recommendations for regulating short-term rental businesses based on a neighbourhood’s price sensitivity.
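The workflow the abstract describes (fit several models, keep the best, then probe price sensitivity by perturbing Airbnb activity) can be sketched as follows. This is an illustrative sketch only, not the paper's actual pipeline: the data here is synthetic and the three features (Airbnb listing density, review volume, distance to centre) are hypothetical stand-ins for the neighbourhood variables the study would use.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500  # hypothetical number of neighbourhoods

# Synthetic features: [Airbnb listing density, review volume, distance to centre]
X = rng.random((n, 3))
# Synthetic "housing price" with a known relationship plus noise
y = 3.0 * X[:, 0] + 1.5 * X[:, 1] - 2.0 * X[:, 2] + rng.normal(0, 0.1, n)

# Compare candidate models by cross-validated R^2 and keep the best fit
models = {
    "linear": LinearRegression(),
    "random_forest": RandomForestRegressor(n_estimators=100, random_state=0),
}
scores = {name: cross_val_score(m, X, y, cv=5, scoring="r2").mean()
          for name, m in models.items()}
best = models[max(scores, key=scores.get)].fit(X, y)

# Price sensitivity: predicted price change when Airbnb activity
# (feature 0) increases by 10% in every neighbourhood
X_plus = X.copy()
X_plus[:, 0] *= 1.10
sensitivity = best.predict(X_plus) - best.predict(X)
```

Neighbourhoods with a large `sensitivity` value would be candidates for tighter regulation in the paper's framing, while near-zero values indicate price-robust neighbourhoods.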

2021

Author(s):  
Bohdan Polishchuk  
Andrii Berko  
Lyubomyr Chyrun  
Myroslava Bublyk  
Vadim Schuchmann

2021

Author(s):  
Jinhui Yu  
Xinyu Luan  
Yu Sun

Because each university website differs in structure and content, it is often difficult for international applicants to obtain up-to-date application information from every school. They must spend a lot of time manually collecting and organising information, and because a school's information may be updated continuously, the gathered information can quickly become inaccurate. We designed a tool with three main steps to solve this problem: crawling links, processing web pages, and building the result pages. We implement the system in Python and store the crawled data in JSON format [4]. In the link-crawling step, we use Beautiful Soup to parse HTML and design a crawler that fetches all links related to admission information on each school's official website. We then traverse these links and apply the noise_remove [5] method to the corresponding page contents, further narrowing the scope of effective information, and save the processed contents in JSON files. Finally, we use the Flask framework to integrate these contents into the front-end page conveniently and efficiently, so that it has the complete function of integrating and displaying information.
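The three steps above (crawl links, process pages, build JSON output for the front end) can be sketched as below. This is a minimal illustration, not the authors' code: `noise_remove` here is a hypothetical stand-in for the method cited as [5], whose actual behaviour is not specified in the abstract, and the link-matching keyword is an assumption.

```python
import json
import re

from bs4 import BeautifulSoup  # "Beautiful Soup" HTML parser named in the abstract


def extract_admission_links(html, keyword="admission"):
    """Step 1: parse a page and keep links whose text mentions admissions."""
    soup = BeautifulSoup(html, "html.parser")
    return [a["href"] for a in soup.find_all("a", href=True)
            if keyword in a.get_text(strip=True).lower()]


def noise_remove(text):
    """Step 2 (stand-in for [5]): collapse whitespace and drop very short lines,
    keeping only content-bearing text."""
    lines = [re.sub(r"\s+", " ", line).strip() for line in text.splitlines()]
    return "\n".join(line for line in lines if len(line) > 20)


def build_pages(records, path="pages.json"):
    """Step 3: persist the processed content as JSON for the Flask front end."""
    with open(path, "w", encoding="utf-8") as f:
        json.dump(records, f, ensure_ascii=False, indent=2)
```

A Flask view would then load `pages.json` and render the integrated information in a template; that display layer is omitted here for brevity.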

