DEVELOPMENT OF THE MODULE FOR DATA MANAGEMENT ON OBJECTS FOR ONLINE MAP OF A CITY

2021 ◽  
Vol 53 (1) ◽  
pp. 18-27
Author(s):  
ALENA A. BUROVA ◽  
SERGEY S. BUROV ◽  
DANILA S. PARYGIN ◽  
ANTON A. FINOGEEV ◽  
...  

The solution of a wide range of tasks aimed at ensuring the sustainable functioning of urban infrastructure requires a comprehensive forecast of how the situation will develop. The operation of distributed technical systems in an urbanized area is directly related to multiple subject-object interactions, since system components are most often located close to the near-continuous activity of people, whether involved in their maintenance or simply going about their daily routine. Therefore, the construction of models of urban processes requires an extensive information base about the infrastructure objects of the area. First of all, this involves the aggregation and unification of data from disparate sources, followed by verification, supplementation and updating of the original data. The paper describes the stages of development of a data management software module that parses data from a file in the OSM XML format, transforms the data into the structure required for working within the single spatial modeling platform Live.UrbanBasis.com, and procedurally generates missing data. The approach to generating additional data on infrastructure objects is demonstrated using the example of entrances, floors, and apartments. The paper describes the approaches and technologies used to work with the data of the OpenStreetMap web mapping project.
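The parsing stage described above can be illustrated with a minimal sketch using Python's standard library. This is not the module's actual code, and the sample fragment and tag values are invented, but the element and attribute names (`node`, `way`, `nd`, `tag`, `k`, `v`) follow the real OSM XML conventions:

```python
import xml.etree.ElementTree as ET

# Minimal OSM XML fragment: one building way referencing two nodes.
OSM_XML = """<?xml version="1.0"?>
<osm version="0.6">
  <node id="1" lat="48.70" lon="44.51"/>
  <node id="2" lat="48.71" lon="44.52"/>
  <way id="10">
    <nd ref="1"/>
    <nd ref="2"/>
    <tag k="building" v="apartments"/>
    <tag k="building:levels" v="9"/>
  </way>
</osm>"""

def parse_osm(xml_text):
    """Extract node coordinates and tagged ways from OSM XML."""
    root = ET.fromstring(xml_text)
    nodes = {n.get("id"): (float(n.get("lat")), float(n.get("lon")))
             for n in root.iter("node")}
    ways = []
    for w in root.iter("way"):
        ways.append({
            "id": w.get("id"),
            "node_refs": [nd.get("ref") for nd in w.iter("nd")],
            "tags": {t.get("k"): t.get("v") for t in w.iter("tag")},
        })
    return nodes, ways

nodes, ways = parse_osm(OSM_XML)
```

A transformation step would then map these dictionaries onto the platform's internal structures, and a generation step could fill in missing attributes such as `building:levels` with procedural defaults.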

Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2144
Author(s):  
Stefan Reitmann ◽  
Lorenzo Neumann ◽  
Bernhard Jung

Common machine-learning (ML) approaches for scene classification require a large amount of training data. However, for the classification of depth sensor data, in contrast to image data, relatively few databases are publicly available, and the manual generation of semantically labeled 3D point clouds is an even more time-consuming task. To simplify the training-data generation process for a wide range of domains, we have developed the BLAINDER add-on package for the open-source 3D modeling software Blender, which enables the largely automated generation of semantically annotated point-cloud data in virtual 3D environments. In this paper, we focus on the classical depth-sensing techniques Light Detection and Ranging (LiDAR) and Sound Navigation and Ranging (Sonar). Within the BLAINDER add-on, different depth sensors can be loaded from presets, customized sensors can be implemented, and different environmental conditions (e.g., the influence of rain or dust) can be simulated. The semantically labeled data can be exported to various 2D and 3D formats and are thus optimized for different ML applications and visualizations. In addition, semantically labeled images can be exported using the rendering functionalities of Blender.
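The core idea of simulated depth sensing can be sketched outside of Blender. The following is not the BLAINDER API, just a minimal illustration of casting a single ray against an analytic surface, optionally perturbing the measured range with Gaussian noise, and attaching a semantic label to the hit point:

```python
import math
import random

def simulate_lidar_ray(origin, direction, sphere_center, sphere_radius,
                       noise_sigma=0.0, label="sphere"):
    """Return a semantically labeled hit point for one ray against a sphere,
    or None on a miss. Gaussian noise on the range models sensor error."""
    # Ray: p = origin + t * direction; solve |p - c|^2 = r^2 for t.
    ox, oy, oz = (origin[i] - sphere_center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - sphere_radius ** 2
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                      # ray misses the surface
    t = (-b - math.sqrt(disc)) / (2 * a)  # nearest intersection
    if t < 0:
        return None                      # surface is behind the sensor
    t += random.gauss(0.0, noise_sigma)  # simulated depth noise
    point = tuple(origin[i] + t * d for i, d in enumerate(direction))
    return {"xyz": point, "distance": t, "label": label}

hit = simulate_lidar_ray((0, 0, 0), (1, 0, 0), (5, 0, 0), 1.0)
```

Sweeping such rays over a grid of angles yields a labeled point cloud, which can then be written out in whatever 2D/3D format the downstream ML pipeline expects.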


Molecules ◽  
2021 ◽  
Vol 26 (2) ◽  
pp. 278
Author(s):  
Jennifer Lagoutte-Renosi ◽  
Bernard Royer ◽  
Vahideh Rabani ◽  
Siamak Davani

Ticagrelor is an antiplatelet agent that is extensively metabolized into an active metabolite, AR-C124910XX. Ticagrelor antagonizes P2Y12 receptors, and this effect on the central nervous system has recently been linked to the development of dyspnea. Ticagrelor-related dyspnea has been linked to persistently high plasma concentrations of ticagrelor. Therefore, there is a need for a simple, rapid, and sensitive method for the simultaneous determination of ticagrelor and its active metabolite in human plasma, in order to further investigate the link between the concentrations of ticagrelor and its active metabolite and side effects in routine practice. We present here a new method for quantifying both molecules, suitable for routine practice and validated according to the latest Food and Drug Administration (FDA) guidelines, with good accuracy and precision (<15%), except at the lower limit of quantification (<20%). We further describe its successful application to plasma samples in a population pharmacokinetics study. Its simplicity and rapidity, the wide range of the calibration curve (2–5000 µg/L for ticagrelor and its metabolite), and its high throughput make a broad spectrum of applications possible, and the method can easily be implemented for research or in daily routine practice, such as therapeutic drug monitoring to prevent overdosage and the occurrence of adverse events in patients.
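The wide-range calibration described above can be illustrated with a hedged sketch. The concentration levels span the paper's stated 2–5000 µg/L range, but the response values and the 1/x weighting scheme are illustrative assumptions, not the published method:

```python
import numpy as np

# Hypothetical calibration standards spanning the validated 2-5000 ug/L range.
conc = np.array([2, 10, 50, 250, 1000, 5000], dtype=float)  # ug/L
peak_ratio = 0.004 * conc + 0.01                            # synthetic detector response

# 1/x-weighted linear regression, a common choice for wide calibration ranges
# (polyfit weights the residuals, so sqrt(1/x) gives 1/x weights on squares).
slope, intercept = np.polyfit(conc, peak_ratio, 1, w=np.sqrt(1.0 / conc))

def back_calc(response):
    """Back-calculate a concentration (ug/L) from an instrument response."""
    return (response - intercept) / slope
```

Weighting the low end of the curve is what keeps accuracy acceptable near the lower limit of quantification when the range spans more than three orders of magnitude.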


Author(s):  
Damian Clarke ◽  
Joseph P. Romano ◽  
Michael Wolf

When considering multiple-hypothesis tests simultaneously, standard statistical techniques will lead to overrejection of null hypotheses unless the multiplicity of the testing framework is explicitly considered. In this article, we discuss the Romano–Wolf multiple-hypothesis correction and document its implementation in Stata. The Romano–Wolf correction (asymptotically) controls the familywise error rate, that is, the probability of rejecting at least one true null hypothesis among a family of hypotheses under test. This correction is considerably more powerful than earlier multiple-testing procedures, such as the Bonferroni and Holm corrections, given that it takes into account the dependence structure of the test statistics by resampling from the original data. We describe a command, rwolf, that implements this correction and provide several examples based on a wide range of models. We document and discuss the performance gains from using rwolf over other multiple-testing procedures that control the familywise error rate.
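The stepdown resampling logic behind this correction can be sketched in Python. This is a simplified illustration, not the rwolf implementation, which also handles studentization and the resampling of the underlying model:

```python
import numpy as np

def romano_wolf_pvalues(t_obs, t_boot):
    """Stepdown Romano-Wolf adjusted p-values.

    t_obs:  (m,) observed test statistics.
    t_boot: (B, m) statistics recomputed on B resamples under the null.
    """
    t_obs = np.abs(np.asarray(t_obs, dtype=float))
    t_boot = np.abs(np.asarray(t_boot, dtype=float))
    order = np.argsort(-t_obs)            # most significant hypothesis first
    p_adj = np.empty(t_obs.size)
    prev = 0.0
    for step, idx in enumerate(order):
        # Compare against the max statistic over hypotheses not yet stepped
        # past; using the joint max is what captures the dependence structure.
        max_null = t_boot[:, order[step:]].max(axis=1)
        p = (1 + np.sum(max_null >= t_obs[idx])) / (1 + t_boot.shape[0])
        prev = max(prev, p)               # enforce monotone stepdown p-values
        p_adj[idx] = prev
    return p_adj

rng = np.random.default_rng(0)
t_obs = np.array([5.0, 0.2, 0.1])        # one clearly significant test
t_boot = rng.standard_normal((1000, 3))  # simulated null distribution
p_adj = romano_wolf_pvalues(t_obs, t_boot)
```

Because each step compares against the maximum over the remaining hypotheses only, the procedure is strictly more powerful than a single-step max-t correction while still controlling the familywise error rate.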


2021 ◽  
Vol 9 ◽  
Author(s):  
Mehran Sahandi Far ◽  
Michael Stolz ◽  
Jona M. Fischer ◽  
Simon B. Eickhoff ◽  
Juergen Dukart

Health-related data collected by smartphones offer a promising complementary approach to in-clinic assessments. Despite recent contributions, the trade-off between privacy, optimization, stability and research-grade data quality is not well met by existing platforms. Here we introduce the JTrack platform as a secure, reliable and extendable open-source solution for remote monitoring in daily life and digital phenotyping. JTrack is an open-source platform (released under the Apache 2.0 license) for the remote assessment of digital biomarkers (DB) in neurological, psychiatric and other indications. JTrack is developed and maintained to comply with security, privacy and General Data Protection Regulation (GDPR) requirements. A wide range of anonymized measurements from motion sensors, social and physical activities, and geolocation information can be collected in either active or passive mode using the JTrack Android-based smartphone application. JTrack also provides an online study-management dashboard to monitor data collection across studies. To facilitate scaling, reproducibility, data management and sharing, we integrated DataLad as a data management infrastructure. Smartphone-based digital biomarker data may provide valuable insight into daily-life behaviour in health and disease. As illustrated using sample data, JTrack provides an easy and reliable open-source solution for collecting such information.


2020 ◽  
Vol 6 ◽  
Author(s):  
Christoph Steinbeck ◽  
Oliver Koepler ◽  
Felix Bach ◽  
Sonja Herres-Pawlis ◽  
Nicole Jung ◽  
...  

The vision of NFDI4Chem is the digitalisation of all key steps in chemical research to support scientists in their efforts to collect, store, process, analyse, disclose and re-use research data. Measures to promote Open Science and Research Data Management (RDM) in agreement with the FAIR data principles are fundamental aims of NFDI4Chem to serve the chemistry community with a holistic concept for access to research data. To this end, the overarching objective is the development and maintenance of a national research data infrastructure for the research domain of chemistry in Germany, and to enable innovative and easy-to-use services and novel scientific approaches based on the re-use of research data. NFDI4Chem intends to represent all disciplines of chemistry in academia. We aim to collaborate closely with thematically related consortia. In the initial phase, NFDI4Chem focuses on data related to molecules and reactions, including data for their experimental and theoretical characterisation. This overarching goal is achieved by working towards a number of key objectives:

Key Objective 1: Establish a virtual environment of federated repositories for storing, disclosing, searching and re-using research data across distributed data sources. Connect existing data repositories and, based on a requirements analysis, establish domain-specific research data repositories for the national research community, and link them to international repositories.

Key Objective 2: Initiate international community processes to establish minimum information (MI) standards for data and machine-readable metadata, as well as open data standards, in key areas of chemistry. Identify and recommend open data standards in key areas of chemistry in order to support the FAIR principles for research data, and develop new standards where gaps exist.

Key Objective 3: Foster cultural and digital change towards smart laboratory environments by promoting the use of digital tools in all stages of research, and promote subsequent Research Data Management (RDM) at all levels of academia, beginning in undergraduate curricula.

Key Objective 4: Engage with the chemistry community in Germany through a wide range of measures to create awareness of, and foster the adoption of, FAIR data management. Initiate processes to integrate RDM and data science into curricula. Offer a wide range of training opportunities for researchers.

Key Objective 5: Explore synergies with other consortia and promote cross-cutting development within the NFDI.

Key Objective 6: Provide a legally reliable framework of policies and guidelines for FAIR and open RDM.


Author(s):  
I D Carpenter ◽  
P G Maropoulos

The selection of tools and cutting data is a central activity in process planning and is often liable to an element of subjectivity. It is further complicated by the wide range of choice presented by the various operation types and the huge portfolio of cutters and inserts available from many different tool manufacturers. This paper describes a procedure for consistently and efficiently selecting tools for rough and finish milling operations performed on a computer numerically controlled (CNC) machining centre. A wide range of milling operations is considered, including faces, square shoulders, slots, T-slots, pockets, holes and profiles. An initial set of feasible tools is generated that satisfy the constraints of the tool type, the operation geometry, the insert geometry and carbide grade, the workpiece material and the machine tool capacity. Each tool consists of a holder and one or more indexable carbide inserts. Aggressive cutting data are generated for each feasible tool using a rapid search procedure in the permissible depth/width/feed space for good chip control. The cutting data are further refined by a set of technological constraints, which include tool life, surface finish, machine power and the available spindle speeds and feeds. The overall cutting-data optimization criterion is selected by the user from minimum cost, maximum production rate or predefined tool life. A new optimization criterion, called ‘harshness’, allows the user to influence the chip thickness that is achieved for any given cutter. Any feasible tools that fail to satisfy all the constraints and optimization criteria are discarded.
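The two-stage logic (feasibility filtering, then criterion-driven selection) can be sketched as follows. The tool records, field names and numbers are hypothetical, not from the paper's tooling database:

```python
# Hypothetical tool records; fields and values are illustrative only.
tools = [
    {"name": "FM-50A", "type": "face_mill", "diameter": 50, "max_depth": 6,
     "grade": "P25", "cost_per_min": 0.9, "removal_rate": 120},
    {"name": "FM-80B", "type": "face_mill", "diameter": 80, "max_depth": 8,
     "grade": "P25", "cost_per_min": 1.4, "removal_rate": 200},
    {"name": "SM-16C", "type": "slot_mill", "diameter": 16, "max_depth": 20,
     "grade": "P40", "cost_per_min": 0.6, "removal_rate": 35},
]

def feasible(tool, op_type, required_depth, allowed_grades):
    """Apply the geometric and material constraints to one tool."""
    return (tool["type"] == op_type
            and tool["max_depth"] >= required_depth
            and tool["grade"] in allowed_grades)

def select_tool(op_type, required_depth, allowed_grades, criterion):
    """Filter to feasible tools, then rank by the user-chosen criterion."""
    candidates = [t for t in tools
                  if feasible(t, op_type, required_depth, allowed_grades)]
    if not candidates:
        return None
    if criterion == "max_production_rate":
        return max(candidates, key=lambda t: t["removal_rate"])
    if criterion == "min_cost":
        # Cost per unit volume removed: cost rate divided by removal rate.
        return min(candidates, key=lambda t: t["cost_per_min"] / t["removal_rate"])
    raise ValueError(criterion)

best = select_tool("face_mill", 5, {"P25"}, "max_production_rate")
```

In the paper's procedure the candidate set would additionally be pruned by the cutting-data search and the technological constraints (tool life, surface finish, power, speeds and feeds) before the final ranking.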


2021 ◽  
Author(s):  
Mehran Sahandi Far ◽  
Michael Stolz ◽  
Jona Marcus Fischer ◽  
Simon B Eickhoff ◽  
Juergen Dukart

BACKGROUND: Health-related data collected by smartphones offer a promising complementary approach to in-clinic assessments.
OBJECTIVE: Here we introduce the JuTrack platform as a secure, reliable and extendable open-source solution for remote monitoring in daily life and digital phenotyping.
METHODS: JuTrack consists of an Android-based smartphone application and a web-based project management dashboard. A wide range of anonymized measurements from motion sensors, social and physical activities, and geolocation information can be collected in either active or passive mode. The dashboard also provides management tools to monitor and manage data collection across studies. To facilitate scaling, reproducibility, data management and sharing, we integrated DataLad as a data management infrastructure. JuTrack was developed to comply with security, privacy and General Data Protection Regulation (GDPR) requirements.
RESULTS: JuTrack is an open-source platform (released under the Apache 2.0 license) for the remote assessment of digital biomarkers (DB) in neurological, psychiatric and other indications. The main components of the JuTrack platform and examples of data collected using JuTrack are presented here.
CONCLUSIONS: Smartphone-based digital biomarker data may provide valuable insight into daily-life behaviour in health and disease. JuTrack provides an easy and reliable open-source solution for the collection of such data.


Author(s):  
Tianhang Zheng ◽  
Changyou Chen ◽  
Kui Ren

Recent work on adversarial attacks has shown that the Projected Gradient Descent (PGD) adversary is a universal first-order adversary, and that a classifier adversarially trained with PGD is robust against a wide range of first-order attacks. It is worth noting that the original objective of an attack/defense model relies on a data distribution p(x), typically in the form of risk maximization/minimization, e.g., max/min E_{p(x)} L(x), with p(x) some unknown data distribution and L(·) a loss function. However, since PGD generates attack samples independently for each data sample based on L(·), the procedure does not necessarily lead to good generalization in terms of risk optimization. In this paper, we address this by proposing the distributionally adversarial attack (DAA), a framework that solves for an optimal adversarial data distribution: a perturbed distribution that satisfies the L∞ constraint but deviates from the original data distribution so as to maximally increase the generalization risk. Algorithmically, DAA performs optimization on the space of potential data distributions, which introduces direct dependency between all data points when generating adversarial samples. DAA is evaluated by attacking state-of-the-art defense models, including the adversarially trained models provided by MIT MadryLab. Notably, DAA ranks first on MadryLab’s white-box leaderboards, reducing the accuracy of their secret MNIST model to 88.56% (with l∞ perturbations of ε = 0.3) and the accuracy of their secret CIFAR model to 44.71% (with l∞ perturbations of ε = 8.0). Code for the experiments is released at https://github.com/tianzheng4/Distributionally-Adversarial-Attack.
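The PGD adversary that DAA builds on can be sketched with NumPy on a toy differentiable loss; in practice the gradient comes from the model's loss on images:

```python
import numpy as np

def pgd_attack(x0, grad_fn, epsilon, step_size, n_steps):
    """Projected Gradient Descent adversary under an L-infinity constraint.

    Takes signed-gradient ascent steps on the loss, projecting the iterate
    back onto the epsilon ball around the clean input after every step.
    """
    x = x0.copy()
    for _ in range(n_steps):
        x = x + step_size * np.sign(grad_fn(x))     # ascent on the loss
        x = np.clip(x, x0 - epsilon, x0 + epsilon)  # L-inf projection
    return x

# Toy loss L(x) = ||x - target||^2 with gradient 2(x - target): ascent pushes
# x away from the target (increasing the loss) until clipped at the boundary.
target = np.array([1.0, -1.0])
grad_fn = lambda x: 2.0 * (x - target)
x_adv = pgd_attack(np.zeros(2), grad_fn, epsilon=0.3, step_size=0.1, n_steps=10)
```

DAA replaces the per-sample loop implied here with an update over the whole batch of samples jointly, which is what introduces the dependency between data points described above.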


2020 ◽  
Vol 9 (2) ◽  
pp. 137 ◽  
Author(s):  
Muhammad Rizwan ◽  
Wanggen Wan ◽  
Luc Gwiazdzinski

Location-based social networks (LBSNs) have rapidly prevailed in China with the increase in smart device use, which has provided a wide range of opportunities to analyze urban behavior through LBSN usage. In an LBSN, users socialize by sharing their location (also referred to as a "geolocation") in the form of a tweet (also referred to as a "check-in"), which contains information in the form of, but not limited to, text, audio, and video, recording the visited place, movement patterns, and activities performed (e.g., eating, living, working, or leisure). Understanding users' activities and behavior in space and time from LBSN datasets can be achieved by archiving the daily activities, movement patterns, and social media behavior patterns that represent users' daily routines. Research observing and analyzing urban activity behavior has often been supported by the voluntary sharing of geolocation and of the activities performed in space and time. The objective of this research was to observe the spatiotemporal and directional trends and the distribution differences of urban activities at the city and district levels using LBSN data. Kernel density estimation (KDE) was used to estimate the density and observe the spatiotemporal trend of activities; for spatial regression analysis, geographically weighted regression (GWR) was used to observe the relationship between different activities in the study area. Finally, for the directional analysis, standard deviational ellipse (SDE) analysis was used to observe the principal orientation and direction as well as the spatiotemporal movement and extension trends. The results of the study show that women were more inclined to use social media than men; however, the activities of male users differed between weekdays and weekends compared to those of female users. The results of the directional analysis at the district level reflect the change in the trajectory and spatiotemporal dynamics of activities, and reveal a finer spatial structure than the analysis at the whole-city level. Therefore, LBSNs can be considered a supplementary and reliable source of social media big data for observing urban activities and behavior within a city in space and time.
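The standard deviational ellipse used in the directional analysis can be computed from the principal axes of the coordinate covariance matrix; below is a minimal sketch on synthetic points (not the study's data), using the eigen-decomposition form, which is equivalent to the classical trigonometric SDE formulas:

```python
import numpy as np

def standard_deviational_ellipse(points):
    """Mean center, semi-axes and orientation of the SDE of 2-D points."""
    pts = np.asarray(points, dtype=float)
    center = pts.mean(axis=0)
    cov = np.cov(pts, rowvar=False, bias=True)   # population covariance
    eigvals, eigvecs = np.linalg.eigh(cov)       # ascending eigenvalues
    major = np.sqrt(eigvals[1])                  # semi-major axis length
    minor = np.sqrt(eigvals[0])                  # semi-minor axis length
    # Orientation of the major axis, in degrees from the x-axis.
    angle = np.degrees(np.arctan2(eigvecs[1, 1], eigvecs[0, 1]))
    return center, major, minor, angle

# Synthetic point pattern stretched along the x-axis.
pts = [[-3, 0], [3, 0], [0, 1], [0, -1]]
center, major, minor, angle = standard_deviational_ellipse(pts)
```

Comparing the ellipse parameters computed per district against those for the whole city is what exposes the finer district-level directional structure reported above.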


1994 ◽  
Vol 67 (1) ◽  
pp. 62-75 ◽  
Author(s):  
Michael Rivkin ◽  
Arnold Kholodenko

Abstract An innovative flexible-faced mechanical shaft seal using common elastomeric materials was designed and tested to determine its friction coefficient over a wide range of temperatures and speeds, its rate of heat generation, and its feasibility for use in the process industry. The new seal was constructed using an elastomeric rotating element stretched over the sleeve by at least 20 percent of its original length and an unlapped silicon carbide stationary annular ring. It was found that the main advantage of the elastomeric seal is its ability to maintain stable lubrication with a fluid film considerably thinner than that of traditional hard-face seals, and consequently to achieve negligible net leakage. This is particularly significant with respect to the control of volatile organic compound emissions. An experimental device was designed for precise measurement of the friction coefficient, as well as the long-term friction behavior of seal pairs, over a wide range of liquid pressures and temperatures. Original data were obtained for the friction coefficient of EPDM, HNBR, FKM, and TFE/P type elastomers in contact with silicon carbide in the temperature range 15–110°C, at linear speeds of 0–12 m/s, water pressures of 0.15–0.40 MPa, and effective contact pressures of 0.8–1.2 MPa. Experiments showed that the friction coefficient grows steadily, typically from 0.05 to 0.15 at sliding speeds of 2–12 m/s, as temperature increases from 15 to 70°C. The temperature behavior of the friction coefficient above 70°C depends greatly on the elastomer. For high-temperature elastomers, such as FKM, the friction coefficient may decrease slightly above 70°C; whereas, for EPDM, it continues to increase as temperature increases.

