Urban Planning Using a Geospatial Approach: A Case Study of Libya

Author(s):  
Bahareh Kalantar ◽  
Husam A.H. Al-najjar ◽  
Hossein Mojaddadi Rizeei ◽  
Maruwan S.A.B. Amazeeq ◽  
Mohammed Oludare Idrees ◽  
...  

Large-scale development projects first require the selection of one or more cities to be developed. In Libya, this selection is carried out by designated organizations and is heavily influenced by human judgement, which can overlook socioeconomic and environmental factors. In this study, we propose an automated selection process that takes into consideration only the factors important for city selection. Specifically, a geospatial decision-making tool, free of human bias, is proposed based on fuzzy overlay (FO) and the technique for order performance by similarity to ideal solution (TOPSIS) for development projects in Libya. A dataset of 17 evaluation criteria (GIS factors) across five urban conditioning factors was prepared and served as input to the FO model, which calculates a weight (importance) for each criterion. A support vector machine (SVM) classifier was then trained to refine the weights from the FO model, and TOPSIS was applied to the refined results to rank the cities for development. Experimental results indicate promising overall accuracy and kappa statistics: the highest and lowest success rates are 0.94 and 0.79, respectively, while the highest and lowest prediction rates are 0.884 and 0.673, respectively.
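To make the ranking step concrete, below is a minimal TOPSIS sketch in Python/NumPy. The city names, criterion scores, and weights are hypothetical placeholders, and the fuzzy-overlay and SVM weight-refinement stages are assumed to have already produced the weight vector; only the final ranking logic is shown.

```python
import numpy as np

def topsis_rank(matrix, weights, benefit):
    """Rank alternatives (rows) by closeness to the ideal solution."""
    # Vector-normalize each criterion column, then apply the weights.
    norm = matrix / np.linalg.norm(matrix, axis=0)
    v = norm * weights
    # Ideal best/worst per criterion (max for benefit criteria, min for cost).
    best = np.where(benefit, v.max(axis=0), v.min(axis=0))
    worst = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_best = np.linalg.norm(v - best, axis=1)
    d_worst = np.linalg.norm(v - worst, axis=1)
    closeness = d_worst / (d_best + d_worst)
    return np.argsort(-closeness), closeness

# Hypothetical example: 4 cities scored on 3 criteria.
cities = ["A", "B", "C", "D"]
scores = np.array([[0.7, 0.2, 0.9],
                   [0.4, 0.8, 0.5],
                   [0.9, 0.5, 0.3],
                   [0.6, 0.6, 0.6]])
weights = np.array([0.5, 0.3, 0.2])      # e.g. refined FO weights
benefit = np.array([True, True, False])  # third criterion treated as a cost
order, closeness = topsis_rank(scores, weights, benefit)
print([cities[i] for i in order])        # cities ranked best to worst
```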

2019 ◽  
Vol 20 (S15) ◽  
Author(s):  
Fei Guo ◽  
Quan Zou ◽  
Guang Yang ◽  
Dan Wang ◽  
Jijun Tang ◽  
...  

Abstract Background Protein-protein interaction plays a key role in a multitude of biological processes, such as signal transduction, de novo drug design, immune responses, and enzymatic activities. Gaining insight into various binding abilities can deepen our understanding of these interactions. It is of great interest to understand how proteins in a complex interact with each other, and many efficient methods have been developed for identifying protein-protein interfaces. Results In this paper, we obtain local information on the protein-protein interface through multi-scale local average blocks and hexagon structure construction. Given a pair of proteins, we use a trained support vector regression (SVR) model to select the best configurations. On Benchmark v4.0, our method achieves an average Irmsd value of 3.28 Å and an overall Fnat value of 63%, improving upon the Irmsd of 3.89 Å and Fnat of 49% for ZRANK, and the Irmsd of 3.99 Å and Fnat of 46% for ClusPro. On CAPRI targets, our method achieves an average Irmsd value of 3.45 Å and an overall Fnat value of 46%, improving upon the Irmsd of 4.18 Å and Fnat of 40% for ZRANK, and the Irmsd of 5.12 Å and Fnat of 32% for ClusPro. The success rates of our method, FRODOCK 2.0, InterEvDock, and SnapDock on Benchmark v4.0 are 41.5%, 29.0%, 29.4%, and 37.0%, respectively. Conclusion Experiments show that our method performs better than several state-of-the-art methods, based on the prediction quality improvement in terms of CAPRI evaluation criteria. These results demonstrate that our method is a valuable technological tool for identifying protein-protein interfaces.
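As an illustration of the re-scoring step, here is a minimal sketch of SVR-based pose selection, assuming each candidate configuration has already been encoded as a fixed-length feature vector (the paper derives these from multi-scale local average blocks and hexagon structures; random features stand in below).

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 32))   # hypothetical pose feature vectors
y_train = rng.normal(size=200)         # e.g. quality labels derived from Fnat/Irmsd

# Train the regression model that scores candidate configurations.
model = SVR(kernel="rbf", C=1.0, epsilon=0.1).fit(X_train, y_train)

candidates = rng.normal(size=(50, 32))  # candidate poses for one protein pair
scores = model.predict(candidates)
best = np.argsort(-scores)[:10]         # keep the 10 top-scored configurations
print(best)
```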


Author(s):  
Yizheng Zhao ◽  
Ghadah Alghamdi ◽  
Renate A. Schmidt ◽  
Hao Feng ◽  
Giorgos Stoilos ◽  
...  

This paper explores how the logical difference between two ontologies can be tracked using a forgetting-based, or uniform interpolation (UI)-based, approach. The idea is that rather than computing all entailments of one ontology not entailed by the other, which would be computationally infeasible, only the strongest entailments not entailed by the other ontology are computed. To overcome drawbacks of existing forgetting/uniform interpolation tools, we introduce a new forgetting method designed for the task of computing the logical difference between different versions of large-scale ontologies. The method is sound and terminating, and can compute uniform interpolants for ALC-ontologies as large as SNOMED CT and NCIt. Our evaluation, a case study on different versions of the SNOMED CT and NCIt ontologies, shows that the method achieves considerably better success rates (>90%) than existing tools and provides a feasible approach to computing the logical difference between large-scale ontologies.
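Uniform interpolation for ALC is far beyond a short snippet, but the core idea of forgetting can be illustrated in propositional logic, where eliminating a symbol by resolution preserves exactly the entailments over the remaining vocabulary. The sketch below is this propositional analogue, not the paper's method; clauses are frozensets of literals, with "p" and "-p" denoting p and its negation.

```python
def forget(clauses, p):
    """Forget symbol p from a clause set via resolution (variable elimination)."""
    pos = [c for c in clauses if p in c]
    neg = [c for c in clauses if "-" + p in c]
    rest = [c for c in clauses if p not in c and "-" + p not in c]
    resolvents = set()
    for cp in pos:
        for cn in neg:
            r = (cp - {p}) | (cn - {"-" + p})
            # Skip tautologies such as {q, -q}.
            if not any("-" + lit in r for lit in r if not lit.startswith("-")):
                resolvents.add(frozenset(r))
    return set(map(frozenset, rest)) | resolvents

# Forget q from {p OR q, NOT q OR r}: the result entails p OR r.
print(forget([frozenset({"p", "q"}), frozenset({"-q", "r"})], "q"))
```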


2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Xinke Zhan ◽  
Zhuhong You ◽  
Changqing Yu ◽  
Liping Li ◽  
Jie Pan

Identifying drug-target interactions (DTIs) plays an essential role in new drug development. However, knowledge of DTIs is still limited, and a significant number of DTI pairs remain unknown. Moreover, traditional experimental methods have inevitable disadvantages such as high cost and long running times. Therefore, developing computational methods for predicting DTIs is attracting more and more attention. In this study, we report a novel computational approach for predicting DTIs using the GIST feature, the position-specific scoring matrix (PSSM), and rotation forest (RF). Specifically, each target protein is first converted into a PSSM to retain evolutionary information. Then, the GIST feature is extracted from the PSSM, and substructure fingerprint information is adopted to extract the features of the drug. Finally, the protein and drug features are combined to form a new drug-target pair, which is employed as the input feature for the RF classifier. In the experiments, the proposed method achieves high average accuracies of 89.25%, 85.93%, 82.36%, and 73.89% on the enzyme, ion channel, G protein-coupled receptor (GPCR), and nuclear receptor datasets, respectively. To further evaluate the prediction performance of the proposed method, we compare it with the state-of-the-art support vector machine (SVM) classifier on the same gold standard dataset. These promising results illustrate that the proposed method is more effective and stable than other methods. We expect the proposed method to be a useful tool for predicting large-scale DTIs.
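As a sketch of the pairing-and-classification step: a protein descriptor (standing in for the GIST-of-PSSM features) is concatenated with a drug fingerprint, and the pair is classified. scikit-learn has no rotation forest implementation, so a RandomForestClassifier is used as a stand-in, and all arrays below are synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
protein_feats = rng.normal(size=(300, 512))       # stand-in for GIST-of-PSSM features
drug_feats = rng.integers(0, 2, size=(300, 881))  # substructure fingerprints (hypothetical length)
labels = rng.integers(0, 2, size=300)             # 1 = interacting drug-target pair

# One feature vector per drug-target pair: protein and drug features concatenated.
pairs = np.hstack([protein_feats, drug_feats])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(pairs[:250], labels[:250])
print("held-out accuracy:", clf.score(pairs[250:], labels[250:]))
```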


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
José L. Hernández-Ramos ◽  
Georgios Karopoulos ◽  
Dimitris Geneiatakis ◽  
Tania Martin ◽  
Georgios Kambourakis ◽  
...  

During 2021, various worldwide initiatives were established to develop digital vaccination certificates that alleviate the COVID-19 pandemic restrictions for vaccinated individuals. Although diverse technologies can be considered for the deployment of such certificates, blockchain has been suggested as a promising approach due to its decentralization and transparency features. However, the proposed solutions often lack realistic experimental evaluation that could help determine the practical challenges of deploying a blockchain platform for this purpose. To fill this gap, this work introduces a scalable, blockchain-based platform for the secure sharing of COVID-19 or other disease vaccination certificates. As an indicative use case, we emulate a large-scale deployment by considering the countries of the European Union. The platform is evaluated through extensive experiments measuring computing resource usage, network response time, and bandwidth. Based on the results, the proposed scheme shows satisfactory performance across all major evaluation criteria, suggesting that it can set the pace for real implementations. Vis-à-vis the related work, the proposed platform is novel, especially through the prism of a large-scale, full-fledged implementation and its assessment.
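For illustration only (not the paper's implementation), a common blockchain pattern for such certificates keeps the personal data off-chain and anchors a salted hash on-chain, so a verifier can check integrity without the ledger exposing the certificate itself. A minimal sketch, with a Python set standing in for the on-chain registry:

```python
import hashlib, json, secrets

ledger = set()  # stand-in for an on-chain registry of certificate digests

def issue(cert: dict) -> tuple[dict, str]:
    """Anchor a salted digest of the certificate; return the cert and its salt."""
    salt = secrets.token_hex(16)
    digest = hashlib.sha256((salt + json.dumps(cert, sort_keys=True)).encode()).hexdigest()
    ledger.add(digest)  # in a real system: a smart-contract transaction
    return cert, salt

def verify(cert: dict, salt: str) -> bool:
    """Recompute the digest and check it against the ledger."""
    digest = hashlib.sha256((salt + json.dumps(cert, sort_keys=True)).encode()).hexdigest()
    return digest in ledger

cert, salt = issue({"holder": "ID-123", "vaccine": "X", "dose": 2})
print(verify(cert, salt))                  # True
print(verify({**cert, "dose": 3}, salt))   # False: tampered certificate
```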


Sensors ◽  
2021 ◽  
Vol 21 (13) ◽  
pp. 4335
Author(s):  
Goran Šeketa ◽  
Lovro Pavlaković ◽  
Dominik Džaja ◽  
Igor Lacković ◽  
Ratko Magjarević

Automatic fall detection systems ensure that elderly people get prompt assistance after experiencing a fall. Fall detection systems based on accelerometer measurements are widely used because of their portability and low cost. However, the ability of these systems to differentiate falls from Activities of Daily Living (ADL) is still not acceptable for everyday use at a large scale, and more work is needed to raise their performance. In our research, we explored an essential but often neglected part of accelerometer-based fall detection systems: data segmentation. The aim of our work was to explore how different configurations of data-segmentation windows affect the detection accuracy of a fall detection system and to find the best-performing configuration. For this purpose, we designed a testing environment for fall detection based on a Support Vector Machine (SVM) classifier and evaluated the influence of the number and duration of segmentation windows on overall detection accuracy. An event-centered approach to data segmentation was used, where windows are set relative to a potential fall event detected in the input data. Fall and ADL records from three publicly available datasets were used for the test. We found that a configuration of three sequential windows (pre-impact, impact, and post-impact) provided the highest detection accuracy on all three datasets. The best results were obtained with either a 0.5 s or a 1 s impact window, combined with pre- and post-impact windows of 3.5 s or 3.75 s.
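A minimal sketch of the event-centered segmentation idea, assuming a one-dimensional acceleration-magnitude signal: a potential fall event is located at the magnitude peak, and pre-impact, impact, and post-impact windows are cut around it. The window durations below mirror the best configuration reported above; the signal itself is synthetic.

```python
import numpy as np

def segment_event(signal, fs, pre_s=3.5, impact_s=1.0, post_s=3.5):
    """Return (pre, impact, post) windows centered on the magnitude peak."""
    peak = int(np.argmax(signal))
    half = int(impact_s * fs / 2)
    impact = signal[max(0, peak - half):peak + half]
    pre = signal[max(0, peak - half - int(pre_s * fs)):max(0, peak - half)]
    post = signal[peak + half:peak + half + int(post_s * fs)]
    return pre, impact, post

fs = 100                                    # Hz, hypothetical sampling rate
t = np.arange(0, 10, 1 / fs)
signal = 1 + 0.1 * np.random.randn(t.size)  # ~1 g baseline with noise
signal[500] = 4.0                           # synthetic impact spike at t = 5 s
pre, impact, post = segment_event(signal, fs)
# Features of each window (e.g. mean, std, range) would feed the SVM classifier.
print(len(pre), len(impact), len(post))     # 350 100 350
```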


Author(s):  
Christian Merkenschlager ◽  
Stephanie Koller ◽  
Christoph Beck ◽  
Elke Hertig

Abstract Within the scope of urban climate modeling, weather analogs are used to downscale large-scale reanalysis-based information to station time series. Two novel weather-analog approaches are introduced that allow a day-by-day comparison with observations within the validation period and are easily adaptable to future periods for projections. Both methods affect the first level of analogy, which is usually based on the selection of circulation patterns. First, the time series were bias-corrected and detrended before subsamples were determined for each specific day of interest. Subsequently, the normal vector of the standardized regression planes (NVEC) or the center of gravity (COG) of the normalized absolute circulation patterns was used to determine a point within an artificial coordinate system for each day. The day(s) exhibiting the least absolute distance(s) between the artificial points of the day of interest and the days of the subsample is/are used as the analog, or as the subsample for the second level of analogy, respectively. Here, the second level of analogy is a second selection process based on the comparison of gridded temperature data between the analog subsample and the day of interest. After the analog selection process, the trends of the observations were added to the analog time series. With respect to air temperature and the exceedance of the 90th temperature quantile, the present study compares the performance of both analog methods with an existing analog method and a multiple linear regression. Results show that both novel analog approaches can keep up with existing methods. One shortcoming of the methods presented here is that they are limited to local or small regional applications. On the other hand, less pre-processing and the small domain size of the circulation patterns lead to low computational costs.
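As an illustration of the COG variant of the first level of analogy, the sketch below reduces each day's normalized circulation pattern (a 2-D grid) to a weighted centroid and selects the archive day whose centroid lies closest to that of the target day. The grids are synthetic stand-ins for reanalysis fields.

```python
import numpy as np

def cog(grid):
    """Center of gravity of a non-negative, normalized 2-D field."""
    ys, xs = np.indices(grid.shape)
    w = grid / grid.sum()
    return np.array([(ys * w).sum(), (xs * w).sum()])

rng = np.random.default_rng(2)
archive = rng.random(size=(365, 20, 30))  # one circulation pattern per archive day
target = rng.random(size=(20, 30))        # pattern of the day of interest

# First level of analogy: nearest centroid in the artificial coordinate system.
centroids = np.array([cog(day) for day in archive])
dist = np.linalg.norm(centroids - cog(target), axis=1)
analog_day = int(np.argmin(dist))
print(analog_day, dist[analog_day])
```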


2021 ◽  
Author(s):  
Hu Liu ◽  
Yan Jiang ◽  
Rafal Misa ◽  
Junhai Gao ◽  
Mingyu Xia ◽  
...  

Abstract Underground mining has existed for more than 100 years in the Nansi Lake area. Coal mining not only plays a supporting role in local social and economic development but also has a significant impact on the region's ecological environment. Landsat series remote sensing data (1988-2019) are used to study the impact of coal mining on the ecological environment of Nansi Lake. A Support Vector Machine (SVM) classifier is applied to extract the water area of the upstream lake from 1988 to 2019, and the ecological environment and its spatiotemporal variation characteristics are analyzed with the Remote Sensing Ecological Index (RSEI). The results illustrate that changes in water area are associated with annual precipitation. Compared with 2009, the ecological quality of the lake was worse in 2019, a change attributable to large-scale underground mining. Therefore, the coal mines within the nature reserve may need to be closed, or mining limited to the reserve boundary, to protect the lake's ecological environment.
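A minimal sketch of an RSEI-style computation, under the common formulation in which four per-pixel indicators (greenness, wetness, dryness, heat) are standardized, reduced to their first principal component, and rescaled to [0, 1]; the index arrays below are synthetic stand-ins for Landsat-derived NDVI/WET/NDBSI/LST rasters.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
h, w = 100, 100
ndvi, wet, ndbsi, lst = (rng.random(size=(h, w)) for _ in range(4))

# Stack the four indicators as columns: one row per pixel.
stack = np.column_stack([a.ravel() for a in (ndvi, wet, ndbsi, lst)])
pc1 = PCA(n_components=1).fit_transform(StandardScaler().fit_transform(stack))
# Rescale the first principal component to [0, 1]: 0 = worst, 1 = best ecology.
rsei = (pc1 - pc1.min()) / (pc1.max() - pc1.min())
print(rsei.reshape(h, w).mean())
```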


Author(s):  
Dian Puspita Hapsari ◽  
Imam Utoyo ◽  
Santi Wulan Purnami

Data classification faces several problems, one of which is that large amounts of data increase computing time. SVM is a reliable classifier for linear or non-linear data, but for large-scale data there are computational time constraints. The fractional gradient descent method is an unconstrained optimization algorithm for training support vector machine classifiers, whose training objective is convex. Compared to the classic integer-order model, a model built with fractional calculus has a significant advantage in accelerating computing time. This research investigates the current state of this new fractional-derivative optimization method and how it can be implemented in the classifier algorithm. The SVM classifier with fractional gradient descent optimization reaches its convergence point at approximately 50 iterations, fewer than SVM-SGD requires. The model-update steps are smaller in the fractional case because the multiplier value is less than 1, i.e., a fraction. The SVM-Fractional SGD algorithm is shown to be an effective method for rainfall forecast decisions.
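Several fractional-order gradient variants exist in the literature; the sketch below uses one simple Caputo-style approximation in which the gradient is scaled elementwise by |w_k - w_{k-1}|^(1-alpha) / Gamma(2-alpha), so for 0 < alpha < 1 the update steps shrink as the iterates approach each other, matching the smaller updates described above. All data and hyperparameters are hypothetical.

```python
import numpy as np
from math import gamma

def frac_sgd_svm(X, y, alpha=0.8, lr=0.01, lam=0.01, epochs=50):
    """Linear SVM (hinge loss) trained with a fractional-order SGD variant."""
    n, d = X.shape
    w, w_prev = np.full(d, 0.01), np.zeros(d)  # small nonzero start so the scale is nonzero
    for _ in range(epochs):
        for i in np.random.permutation(n):
            margin = y[i] * X[i].dot(w)
            grad = lam * w - (y[i] * X[i] if margin < 1 else 0)
            # Caputo-style fractional scaling of the gradient step.
            scale = (np.abs(w - w_prev) + 1e-8) ** (1 - alpha) / gamma(2 - alpha)
            w, w_prev = w - lr * grad * scale, w
    return w

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 5))
y = np.sign(X[:, 0] + 0.1 * rng.normal(size=200))  # linearly separable-ish labels
w = frac_sgd_svm(X, y)
print("train accuracy:", np.mean(np.sign(X @ w) == y))
```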


A heart disease detection model provides the patient's heart disease status to medical experts. Existing techniques use an optimal classifier to predict heart disease; although they achieve good results, they have some drawbacks. To overcome these drawbacks, the suggested technique utilizes an effective method for heart disease prediction. First, the input information is preprocessed, and the preprocessed result is forwarded to the feature selection process, where a proficient feature selection method, the Hybrid Fish Bee optimization algorithm (HFSBEE), is applied to the high-dimensional medical data. The proposed algorithm parallelizes two algorithms, such that the local search behavior of the artificial bee colony algorithm and the global search of fish swarm optimization are combined effectively to find the optimal solution. Classification is performed by passing the transformed medical dataset to a multi-kernel support vector machine (MKSVM). The performance of the proposed technique is evaluated in terms of accuracy, sensitivity, specificity, precision, recall, and F-measure. For the test analysis, datasets from the UCI machine learning repository are used, namely Cleveland, Hungarian, and Switzerland. The experimental outcomes show that the presented technique achieves an accuracy of 97.68% on the Cleveland dataset, compared with 96.03% for the existing hybrid kernel support vector machine (HKSVM) and 62.25% for the optimal rough fuzzy classifier. The proposed method is implemented on the MATLAB platform.
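As a sketch of the multi-kernel classification step, the snippet below forms a fixed convex combination of an RBF and a polynomial kernel and passes it to scikit-learn's SVC as a precomputed Gram matrix. The paper's MKSVM, and the HFSBEE feature selection feeding it, are more elaborate; the data and kernel weight here are hypothetical.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

rng = np.random.default_rng(5)
X = rng.normal(size=(150, 13))   # e.g. 13 Cleveland-style attributes
y = rng.integers(0, 2, size=150)
X_tr, X_te, y_tr, y_te = X[:120], X[120:], y[:120], y[120:]

def multi_kernel(A, B, mu=0.6):
    """Convex combination of an RBF and a polynomial kernel."""
    return mu * rbf_kernel(A, B, gamma=0.1) + (1 - mu) * polynomial_kernel(A, B, degree=2)

clf = SVC(kernel="precomputed").fit(multi_kernel(X_tr, X_tr), y_tr)
pred = clf.predict(multi_kernel(X_te, X_tr))  # test rows vs. training columns
print("accuracy:", np.mean(pred == y_te))
```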


Author(s):  
DEJAN GJORGJEVIKJ ◽  
GJORGJI MADJAROV ◽  
SAŠO DŽEROSKI

Multi-label learning (MLL) problems abound in many areas, including text categorization, protein function classification, and semantic annotation of multimedia. An issue that severely limits the applicability of many current machine learning approaches to MLL is the large scale of the problems, which has a strong impact on the computational complexity of learning. This problem is especially pronounced for approaches that transform MLL problems into a set of binary classification problems for which Support Vector Machines (SVMs) are used. On the other hand, the most efficient approaches to MLL, based on decision trees, have clearly lower predictive performance. We propose a hybrid decision tree architecture, where the leaves do not give multi-label predictions directly, but rather utilize local SVM-based classifiers giving multi-label predictions. A binary relevance architecture is employed in the leaves, where a binary SVM classifier is built for each of the labels relevant to that particular leaf. We use a broad range of multi-label datasets with a variety of evaluation measures to evaluate the proposed method against related and state-of-the-art methods, both in terms of predictive performance and time complexity. On almost every large classification problem, our hybrid architecture outperforms the competing approaches in terms of predictive performance, while its computational efficiency is significantly improved as a result of the integrated decision tree.
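A minimal binary-relevance sketch: an independent linear SVM is fitted per label, which is the classifier the proposed architecture places inside each decision-tree leaf (the tree routing itself is omitted here). The data come from scikit-learn's synthetic multi-label generator.

```python
from sklearn.datasets import make_multilabel_classification
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC
from sklearn.metrics import f1_score

X, Y = make_multilabel_classification(n_samples=1000, n_features=50,
                                      n_classes=10, random_state=0)
X_tr, X_te, Y_tr, Y_te = X[:800], X[800:], Y[:800], Y[800:]

# OneVsRestClassifier on a label-indicator matrix is exactly binary relevance:
# an independent LinearSVC is fitted for each of the 10 labels.
br = OneVsRestClassifier(LinearSVC()).fit(X_tr, Y_tr)
print("micro-F1:", f1_score(Y_te, br.predict(X_te), average="micro"))
```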

