Spatial and Temporal Human Settlement Growth Differentiation with Symbolic Machine Learning for Verifying Spatial Policy Targets: Assiut Governorate, Egypt as a Case Study

2020 ◽  
Vol 12 (22) ◽  
pp. 3799
Author(s):  
Mahmood Abdelkader ◽  
Richard Sliuzas ◽  
Luc Boerboom ◽  
Ahmed Elseicy ◽  
Jaap Zevenbergen

Since 2005, Egypt has had a new land-use development policy intended to control unplanned human settlement growth and prevent outlying growth. This study assesses the impact of this policy shift on settlement growth in Assiut Governorate, Egypt, between 1999 and 2020. Using symbolic machine learning, we extracted built-up areas from Landsat images of 2005, 2010, 2015, and 2020 and applied a Landscape Expansion Index with a new QGIS plugin tool (Growth Classifier) developed to classify settlement growth types. The base-year map for 1999 was produced by the national remote sensing agency. After extracting the built-up areas from the Landsat images, eight settlement growth types (infill, expansion, edge-ribbon, linear branch, isolated cluster, proximate cluster, isolated scattered, and proximate scattered) were identified for four periods (1999–2005, 2005–2010, 2010–2015, and 2015–2020). The results show that prior to the policy shift of 2005, the growth rate for 1999–2005 was 11% p.a. In all subsequent periods, the growth rate exceeded the target rate of 1% p.a., though by varying amounts: 5% p.a. (2005–2010), 7.4% p.a. (2010–2015), and 5.3% p.a. (2015–2020). Although the settlements in Assiut grew primarily through expansion and infill, with the latter growing in importance during the last two periods, outlying growth is also evident. Measured with four class metrics (number of patches, patch density, mean patch area, and largest patch index), all eight growth types showed fluctuating trends across the periods, except for expansion, which consistently increased. To date, the policy to control human settlement expansion and outlying growth has been unsuccessful.
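For concreteness, the snippet below is a minimal Python sketch of the kind of Landscape Expansion Index (LEI) calculation described above, using shapely geometries. The buffer width, the thresholds, and the simple three-way infill/expansion/outlying split are illustrative assumptions; they do not reproduce the Growth Classifier plugin or its eight-class scheme.

```python
# Illustrative sketch: Landscape Expansion Index (LEI) for one new built-up patch.
# LEI = 100 * A_old / (A_old + A_vacant), where A_old is the part of a buffer ring
# around the new patch that overlaps pre-existing built-up land.
from shapely.geometry import Polygon

def landscape_expansion_index(new_patch, existing_builtup, buffer_m=30):
    """Return LEI in [0, 100] for a new patch (shapely geometries, metric CRS assumed)."""
    ring = new_patch.buffer(buffer_m).difference(new_patch)   # analysis buffer around the patch
    a_old = ring.intersection(existing_builtup).area          # buffer area overlapping old built-up
    a_vacant = ring.area - a_old                              # remaining (vacant) buffer area
    return 100.0 * a_old / (a_old + a_vacant) if ring.area > 0 else 0.0

def growth_type(lei):
    # A common convention (thresholds shown for illustration only): infill above 50,
    # edge-expansion between 0 and 50, outlying growth at 0.
    if lei > 50:
        return "infill"
    elif lei > 0:
        return "expansion"
    return "outlying"

# Example: a new square patch attached to one edge of an existing built-up block.
existing = Polygon([(0, 0), (100, 0), (100, 100), (0, 100)])
new = Polygon([(100, 0), (130, 0), (130, 30), (100, 30)])
lei = landscape_expansion_index(new, existing)
print(round(lei, 1), growth_type(lei))
```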

2020 ◽  
Author(s):  
Afreen Khan ◽  
Swaleha Zubair

Objective: The recent Coronavirus Disease 2019 (COVID-19) pandemic has critically affected the whole world. Although India has not been listed among the top ten most affected countries, COVID-19-associated complications in the near future cannot be ruled out. The expansion of testing facilities has resulted in an exponential increase in detected COVID-19 infection cases; the number of positive cases had risen to 33,614 as of 30 April 2020. Keeping in mind the serious consequences of the pandemic, we aim to establish correlations between the numerous features acquired from various Indian COVID-19 datasets and the impact of containment measures on the current state of the Indian population, using a machine learning approach. We also aim to build a COVID-19 severity model employing a logistic function, which determines the inflection point and helps predict the future number of confirmed cases. Methods: An empirical study was performed on the COVID-19 patient status in India, covering the period from 30 January 2020 to 30 April 2020. We applied machine learning (ML) to gain insights about COVID-19 incidence in India. Several exploratory data analysis and ML tools and techniques were applied to establish correlations among the various features. The acute stage of the disease was also mapped in order to build a robust model. Results: We collected five different datasets to execute the study and integrated them to extract the essential details. We found that men were more prone to infection with the coronavirus than women, and that most patients fell into the young-to-middle age group. Over the 92-day analysis window, we found a clear trend in the numbers of confirmed, recovered, deceased, and active COVID-19 cases in India. The developed growth model gave an inflection point of 85.0 days and predicted 48,958.0 confirmed cases after 30 April. A growth rate of 13.06 percent was obtained. We found statistically significant correlations between the growth rate and the predicted number of confirmed COVID-19 cases. Conclusion: This study demonstrates the effective application of exploratory data analysis and machine learning in building a mathematical severity model for COVID-19 in India.
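A minimal sketch of how such a logistic severity model could be fitted and its inflection point recovered with SciPy is shown below; the synthetic case counts and starting values are placeholders, not the study's data or exact procedure.

```python
# Illustrative sketch: fit a logistic growth curve to cumulative confirmed cases
# and read off the inflection point (the day at which daily growth peaks).
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """K: final epidemic size, r: daily growth rate, t0: inflection day."""
    return K / (1.0 + np.exp(-r * (t - t0)))

# Synthetic stand-in for 92 days of cumulative confirmed cases (30 Jan - 30 Apr).
rng = np.random.default_rng(0)
days = np.arange(92)
cases = logistic(days, 48_000, 0.12, 85) * (1 + rng.normal(0, 0.02, days.size))

# Fit the three parameters and report the inflection point and projected final size.
popt, _ = curve_fit(logistic, days, cases, p0=[50_000, 0.1, 60], maxfev=10_000)
K_hat, r_hat, t0_hat = popt
print(f"inflection ~ day {t0_hat:.1f}, growth rate ~ {100 * r_hat:.2f}%/day, "
      f"projected final size ~ {K_hat:,.0f}")
```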


2020 ◽  
Vol 39 (5) ◽  
pp. 6579-6590
Author(s):  
Sandy Çağlıyor ◽  
Başar Öztayşi ◽  
Selime Sezgin

The motion picture industry is one of the largest industries worldwide and has significant importance in the global economy. Considering the high stakes and high risks in the industry, forecast models and decision support systems are gaining importance. Several attempts have been made to estimate the theatrical performance of a movie before or at the early stages of its release. Nevertheless, these models are mostly used for predicting domestic performance, and the industry still struggles to predict box office performance in overseas markets. In this study, the aim is to design a forecast model using different machine learning algorithms to estimate the theatrical success of US movies in Turkey. A dataset of 1559 movies is constructed from various sources. First, independent variables are grouped as pre-release, distributor type, and international distribution based on their characteristics, and the number of attendances is discretized into three classes. Four popular machine learning algorithms (artificial neural networks, decision tree regression, gradient boosting trees, and random forests) are employed, and the impact of each variable group is assessed by comparing model performance. The number of target classes is then increased to five and eight, and the results are compared with models previously developed in the literature.
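A hedged sketch of how the four model families named above could be compared on a discretized target with scikit-learn: since the attendance target is treated as categorical, classifier counterparts stand in for the regression variants, and the synthetic features are placeholders for the 1559-movie dataset.

```python
# Illustrative sketch: compare four classifier families on a three-class target.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in: 1559 movies, 20 features, attendance discretized into 3 classes.
X, y = make_classification(n_samples=1559, n_features=20, n_informative=8,
                           n_classes=3, random_state=0)

models = {
    "neural network": MLPClassifier(max_iter=1000, random_state=0),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "gradient boosting": GradientBoostingClassifier(random_state=0),
    "random forest": RandomForestClassifier(random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name:>18}: {scores.mean():.3f} ± {scores.std():.3f}")
```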


2019 ◽  
pp. 79-91 ◽  
Author(s):  
V. S. Nazarov ◽  
S. S. Lazaryan ◽  
I. V. Nikonov ◽  
A. I. Votinov

The article assesses the impact of various factors on the growth rate of international trade. Many experts interpreted the decline in cross-border flows of goods against the backdrop of a growing global economy as an alarming sign indicating a slowdown in globalization. To determine the reasons behind the dynamics of international trade, decompositions of its growth rate were carried out, allowing us to single out the effects of the dollar exchange rate, commodity prices, and global value chains on the change in trade volumes. As a result, most of the dynamics of international trade turned out to be driven by fluctuations in the dollar exchange rate and in the prices of basic commodity groups. A negative contribution of trade within global value chains in 2014 was also revealed. During the investigated period (2000–2014), such a pattern had previously been observed only in crisis periods, which may indicate the beginning of structural changes in world trade.
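As one possible reading of such a decomposition, the sketch below attributes trade growth to dollar and commodity-price movements through a simple regression and treats the residual as "other factors" (such as value-chain trade). The data are synthetic and the article's actual decomposition method may differ.

```python
# Illustrative sketch: regression-based decomposition of trade growth into
# dollar-exchange-rate and commodity-price contributions plus a residual.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 60  # e.g. quarterly observations over 2000-2014 (synthetic)
df = pd.DataFrame({
    "d_usd": rng.normal(0, 2, n),        # % change in dollar exchange rate
    "d_commodity": rng.normal(0, 4, n),  # % change in commodity price index
})
df["trade_growth"] = 1.0 - 0.4 * df["d_usd"] + 0.3 * df["d_commodity"] + rng.normal(0, 1, n)

X = sm.add_constant(df[["d_usd", "d_commodity"]])
fit = sm.OLS(df["trade_growth"], X).fit()

# Per-period contribution of each driver to fitted trade growth; residual = other factors.
contrib = X.mul(fit.params, axis=1)
contrib["residual_other"] = fit.resid
print(contrib.mean().round(2))
```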


2019 ◽  
Vol 19 (25) ◽  
pp. 2301-2317 ◽  
Author(s):  
Ruirui Liang ◽  
Jiayang Xie ◽  
Chi Zhang ◽  
Mengying Zhang ◽  
Hai Huang ◽  
...  

In recent years, the successful implementation of the Human Genome Project has made people realize that genetic, environmental, and lifestyle factors should be studied together in cancer research, owing to the complexity and varied forms of the disease. The increasing availability and growth rate of 'big data' derived from various omics open a new window for the study and therapy of cancer. In this paper, we introduce the application of machine learning methods to handling cancer big data, including the use of artificial neural networks, support vector machines, ensemble learning, and naïve Bayes classifiers.


Author(s):  
Francisco Pozo-Martin ◽  
Heide Weishaar ◽  
Florin Cristea ◽  
Johanna Hanefeld ◽  
Thurid Bahr ◽  
...  

Abstract We estimated the impact of a comprehensive set of non-pharmaceutical interventions (NPIs) on the COVID-19 epidemic growth rate across the 37 member states of the Organisation for Economic Co-operation and Development during the early phase of the COVID-19 pandemic and between October and December 2020. For this task, we conducted a data-driven, longitudinal analysis using a multilevel modelling approach with both maximum likelihood and Bayesian estimation. We found that during the early phase of the epidemic: implementing restrictions on gatherings of more than 100 people, of between 11 and 100 people, and of 10 people or fewer was associated with an average reduction of 2.58%, 2.78% and 2.81%, respectively, in the daily growth rate of weekly confirmed cases; requiring closure of some sectors or of all but essential workplaces with an average reduction of 1.51% and 1.78%; requiring closure of some school levels or of all school levels with an average reduction of 1.12% or 1.65%; recommending mask wearing with an average reduction of 0.45%; requiring mask wearing country-wide in specific public spaces or in specific geographical areas within the country with an average reduction of 0.44%; requiring mask wearing country-wide in all public places, or in all public places where social distancing is not possible, with an average reduction of 0.96%; and the number of tests per thousand population with an average reduction of 0.02% per unit increase. Between October and December 2020, workplace closing requirements and testing policy were significant predictors of the epidemic growth rate. These findings provide evidence to support policy decisions on which NPIs to implement to control the spread of the COVID-19 pandemic.
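A minimal sketch of a random-intercept multilevel model of this kind using statsmodels: the NPI indicators, coefficients, and data below are synthetic placeholders rather than the authors' dataset or exact specification.

```python
# Illustrative sketch: multilevel (random-intercept) model of the daily growth rate
# of weekly confirmed cases on NPI indicators, with countries as grouping units.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
countries, weeks = 37, 20
df = pd.DataFrame({
    "country": np.repeat(np.arange(countries), weeks),
    "gathering_restriction": rng.integers(0, 4, countries * weeks),  # ordinal NPI level
    "workplace_closing": rng.integers(0, 3, countries * weeks),
    "tests_per_thousand": rng.gamma(2.0, 1.0, countries * weeks),
})
country_effect = rng.normal(0, 1, countries)[df["country"]]
df["growth_rate"] = (5 - 1.2 * df["gathering_restriction"] - 0.8 * df["workplace_closing"]
                     - 0.02 * df["tests_per_thousand"] + country_effect
                     + rng.normal(0, 1, len(df)))

# Fixed effects for the NPIs, random intercept per country (maximum likelihood fit).
model = smf.mixedlm("growth_rate ~ gathering_restriction + workplace_closing + tests_per_thousand",
                    df, groups=df["country"])
print(model.fit().summary())
```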


2021 ◽  
Vol 51 (4) ◽  
pp. 75-81
Author(s):  
Ahad Mirza Baig ◽  
Alkida Balliu ◽  
Peter Davies ◽  
Michal Dory

Rachid Guerraoui was the first keynote speaker, and he got things off to a great start by discussing the broad relevance of the research done in our community relative to both industry and academia. He first argued that, in some sense, the fact that distributed computing is so pervasive nowadays could end up stifling progress in our community by inducing people to work on marginal problems and becoming isolated. His first suggestion was to try to understand and incorporate new ideas coming from applied fields into our research, and he argued that this has been historically very successful. He illustrated this point via the distributed payment problem, which appears in the context of blockchains, in particular Bitcoin, but then turned out to be very theoretically interesting; furthermore, the theoretical understanding of the problem inspired new practical protocols. He then went further to discuss new directions in distributed computing, such as the COVID tracing problem, and new challenges in Byzantine-resilient distributed machine learning. Another source of innovation Rachid suggested was hardware innovations, which he illustrated with work studying the impact of RDMA-based primitives on fundamental problems in distributed computing. The talk concluded with a very lively discussion.


2021 ◽  
Vol 19 (1) ◽  
Author(s):  
Qingsong Xi ◽  
Qiyu Yang ◽  
Meng Wang ◽  
Bo Huang ◽  
Bo Zhang ◽  
...  

Abstract Background To minimize the rate of in vitro fertilization (IVF)-associated multiple-embryo gestation, significant efforts have been made. Previous studies of machine learning in IVF mainly focused on selecting top-quality embryos to improve outcomes; however, in patients with a sub-optimal prognosis or with medium- or inferior-quality embryos, the choice between single embryo transfer (SET) and double embryo transfer (DET) can be perplexing. Methods This was an application study including 9211 patients with 10,076 embryos treated during 2016 to 2018 in Tongji Hospital, Wuhan, China. A hierarchical model was established using the machine learning system XGBoost to learn embryo implantation potential and the impact of DET simultaneously. The performance of the model was evaluated with the AUC of the ROC curve. Multiple regression analyses were also conducted on the 19 selected features to demonstrate the differences between feature importance for prediction and statistical relationships with outcomes. Results For SET pregnancy, the following variables remained significant: age, attempts at IVF, estradiol level on hCG day, and endometrial thickness. For DET pregnancy, age, attempts at IVF, endometrial thickness, and the newly added P1 + P2 remained significant. For DET twin risk, age, attempts at IVF, 2PN/MII, and P1 × P2 remained significant. The algorithm was repeated 30 times, and average AUCs of 0.7945, 0.8385, and 0.7229 were achieved for SET pregnancy, DET pregnancy, and DET twin risk, respectively. The trends of predicted and observed rates of both pregnancy and twin risk were essentially identical. XGBoost outperformed the other two algorithms, logistic regression and classification and regression trees. Conclusion Artificial intelligence based on determinant-weighting analysis could offer an individualized embryo selection strategy for any given patient, predict clinical pregnancy rate and twin risk, and therefore optimize clinical outcomes.
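A hedged sketch of the evaluation loop described above (a gradient-boosted XGBoost model scored by ROC AUC over 30 repeated splits); the synthetic features stand in for the 19 clinical variables, and this is not the paper's hierarchical determinant-weighting model.

```python
# Illustrative sketch: XGBoost classifier evaluated by ROC AUC over 30 random splits.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Synthetic stand-in: 10,076 embryos, 19 features, binary outcome (e.g. pregnancy yes/no).
X, y = make_classification(n_samples=10_076, n_features=19, n_informative=10,
                           weights=[0.6, 0.4], random_state=0)

aucs = []
for seed in range(30):  # the paper repeats the algorithm 30 times
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=seed)
    clf = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05,
                        eval_metric="logloss")
    clf.fit(X_tr, y_tr)
    aucs.append(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))

print(f"mean AUC over 30 runs: {np.mean(aucs):.4f}")
```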

