Surveying Tasks Solution Automation in the Framework of Mining and Geological Information System Creation at PJSC "Uralkali"

2021 ◽  
Vol 21 (3) ◽  
pp. 131-136
Author(s):  
Sergey N. Kutovoy ◽  
Anatoliy V. Kataev ◽  
Denis A. Vasenin ◽  
Ilya A. Batalov ◽  
Denis I. Svintsov

The results of work on automating the solution of engineering problems faced by the mine surveying services of the PJSC "Uralkali" mines are presented. The developed software modules are fully integrated into the corporate mining and geological information system of PJSC "Uralkali" and are grouped into specialized software systems: automated workstations. These systems are installed at the workplaces of various mining specialists, from the heads of technical departments to employees of departments at the mines. In total, 21 software systems were developed, of which three workstations were created for the specialists of the company's mine surveying service. For the mine surveying departments at the mines, the automated workstation "Local mine surveyor" was developed and put into commercial operation; for the department of capital surveying and geodetic works, the automated workstation "Capital mine surveying"; and for employees of the department of the chief mine surveyor of PJSC "Uralkali", the automated workstation "Chief surveyor". The software modules included in these workstations solve, in an automated mode, a wide range of engineering problems arising from the requirements of the current regulatory documents. These include: processing the results of instrumental surveys of underground and surface objects and, on that basis, updating mining graphic documentation in digital form (2D and 3D); mine planning and design; preparation, editing and printing of standard technical documentation (computation sheets, tables, reports and graphics); ensuring safe mining; and analysis of the fulfillment of planned and design indicators of the mining enterprise.
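A minimal sketch (illustrative only, not taken from the Uralkali modules) of one core computation such surveying workstations typically automate: propagating plane coordinates along an underground traverse from measured station angles and distances. All names and values are assumptions.

```python
# Illustrative traverse computation; not the actual Uralkali software.
import math

def traverse(start_xy, start_bearing_deg, legs):
    """legs: list of (horizontal_angle_deg, distance_m) measured at each station.
    Angles are assumed to be measured clockwise from the back station."""
    x, y = start_xy
    bearing = math.radians(start_bearing_deg)
    points = [(x, y)]
    for angle_deg, dist in legs:
        # each measured angle turns the line of sight at the station
        bearing += math.radians(angle_deg - 180.0)
        x += dist * math.sin(bearing)   # easting increment
        y += dist * math.cos(bearing)   # northing increment
        points.append((x, y))
    return points

# usage: three legs of a hypothetical underground traverse
print(traverse((0.0, 0.0), 45.0, [(180.0, 100.0), (170.0, 80.0), (190.0, 60.0)]))
```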

Author(s):  
S. Voronkova

The article discusses ways to obtain information about risk factors and the health status of the population. It describes a new information system, «Labor Medicine», which enables the collection of a wide range of data for further analysis and use in the activities of various executive authorities, public organizations, foundations, legal entities, and citizens. It is proposed to improve this system by expanding the types of information collected, creating a passport for health promotion organizations, and integrating it with the systems being implemented in the Russian Federation for managing the health of the working-age population, in the context of state policy in the field of informatization.


2020 ◽  
Vol 5 ◽  
pp. 59-66
Author(s):  
Y.M. Iskanderov

Aim. The use of intelligent agents in modeling an integrated information system of transport logistics makes it possible to achieve a qualitatively new level of design of control systems in supply chains. Materials and methods. The article presents an original approach that applies multi-agent technologies to modeling the processes of functioning of an integrated information system of transport logistics. It is shown that the multi-agent infrastructure is in effect a semantic shell of the information system, reflecting the rules of doing business and the interaction of its participants in the supply chains. A model of the intelligent agent class that is basic for solving problems of managing transport and technological processes is characterized. Results. The procedures by which the model integrates the information resources of transport services market participants on the basis of intelligent agents are considered. These procedures support a wide range of network interaction operations in supply chains, including "flexible" control of traffic and network structure, mutual exchange of content and service information, their distributed processing, and information security. Conclusions. The proposed approach showed that the use of intelligent agents in modeling the functioning of an integrated information system makes it possible to take into account the peculiarities of transport and technological processes in supply chains, such as the integration of heterogeneous enterprises, their distributed organization, an open dynamic structure, and the standardization of products, interfaces and protocols.
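A minimal sketch (assumed, not the paper's actual model) of the kind of intelligent agent class described as basic for transport-process control: an agent holding beliefs about the network, a message inbox, and a simple perceive-decide-act cycle. Class, field and message names are illustrative.

```python
# Illustrative intelligent agent for supply-chain interaction; not the
# paper's implementation.
from dataclasses import dataclass, field

@dataclass
class TransportAgent:
    name: str
    beliefs: dict = field(default_factory=dict)   # e.g. link loads, ETAs
    inbox: list = field(default_factory=list)

    def perceive(self, message: dict) -> None:
        """Receive content or service information from another agent."""
        self.inbox.append(message)

    def decide(self) -> list:
        """Update beliefs from incoming messages and emit service messages."""
        actions = []
        while self.inbox:
            msg = self.inbox.pop()
            self.beliefs[msg["link"]] = msg["load"]
            if msg["load"] > 0.9:                 # congested link: ask to reroute
                actions.append({"to": "planner", "reroute": msg["link"]})
        return actions

carrier = TransportAgent("carrier_1")
carrier.perceive({"link": "port-hub", "load": 0.95})
print(carrier.decide())   # [{'to': 'planner', 'reroute': 'port-hub'}]
```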


Author(s):  
N.A. Mironov ◽  
E.A. Maryshev ◽  
N.A. Divueva

The article discusses improving the system for examining competitive applications for state support in the form of grants of the President of the Russian Federation, on the basis of an integrated information system that combines the information support system of the Grants Council of the President of the Russian Federation with the information system of the Federal Roster of Scientific and Technological Experts, and that contains information about experts, applications, and examination results. To strengthen the principles of transparency and openness of support programs and of the selection of competition winners, and to ensure the objectivity of the competitive selection of projects, a number of organizational and technical solutions are proposed for the application examination system based on the integrated information system. The proposed new approaches to the organizational and technical support of the examination of competitive applications for state support in the form of grants of the President of the Russian Federation to young Russian scientists made it possible, by attracting a wide range of scientific and technological communities, to examine more than five thousand applications with high quality and within the deadlines set by the Ministry of Education and Science of Russia.


Energies ◽  
2021 ◽  
Vol 14 (4) ◽  
pp. 930
Author(s):  
Fahimeh Hadavimoghaddam ◽  
Mehdi Ostadhassan ◽  
Ehsan Heidaryan ◽  
Mohammad Ali Sadri ◽  
Inna Chapanova ◽  
...  

Dead oil viscosity is a critical parameter in numerous reservoir engineering problems and one of the most unreliable properties to predict with classical black oil correlations. Determining dead oil viscosity experimentally is expensive and time-consuming, so an accurate and quick prediction model is required. This paper implements six machine learning models: random forest (RF), LightGBM, XGBoost, multilayer perceptron (MLP) neural network, stochastic real-valued (SRV), and SuperLearner to predict dead oil viscosity. More than 2000 pressure–volume–temperature (PVT) data points were used for developing and testing these models. The viscosity data covered a wide range, from light and intermediate to heavy oils. In this study, we give insight into the performance of the different functional forms that have been used in the literature to formulate dead oil viscosity. The results show that the functional form f(γAPI, T) has the best performance, and additional correlating parameters might be unnecessary. Furthermore, based on the metric analysis, SuperLearner outperformed the other machine learning (ML) algorithms as well as the common correlations. The SuperLearner model can potentially replace the empirical models for viscosity prediction over a wide range of viscosities (any oil type). Ultimately, the proposed model is capable of reproducing the true physical trend of dead oil viscosity with variations of oil API gravity, temperature and shear rate.
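A minimal sketch (not the authors' pipeline) of a SuperLearner-style stacked ensemble predicting dead oil viscosity from the f(γAPI, T) functional form. The dataset here is a synthetic stand-in generated from the well-known Beggs-Robinson correlation, and scikit-learn's StackingRegressor is used in place of the SRV and SuperLearner implementations from the paper.

```python
# Illustrative stacked ensemble on synthetic (API gravity, temperature) data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 2000
api = rng.uniform(10, 50, n)       # oil API gravity
temp = rng.uniform(20, 150, n)     # temperature, degC
# synthetic target following the Beggs-Robinson trend (T converted to degF)
mu = 10 ** (10 ** (3.0324 - 0.02023 * api) * (1.8 * temp + 32) ** -1.163) - 1

X = np.column_stack([api, temp])
X_tr, X_te, y_tr, y_te = train_test_split(X, np.log10(mu + 1), random_state=0)

base = [
    ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
    ("mlp", MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                         random_state=0)),
]
# the meta-learner combines cross-validated base predictions, as in SuperLearner
model = StackingRegressor(estimators=base, final_estimator=RidgeCV(), cv=5)
model.fit(X_tr, y_tr)
print("R^2 on held-out data:", model.score(X_te, y_te))
```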


2015 ◽  
Author(s):  
Αιμιλία Ψαρούλη

Recent developments in the fields of bioanalytical chemistry and microelectronics have resulted in a growing trend of transferring classical analytical methods from the laboratory bench to the field through the development of portable devices or microsystems based on biosensors. Biosensors are self-contained integrated devices capable of providing analytical information using biological recognition molecules in direct spatial contact with a transducer. Biosensors using antibodies or antigens as biological recognition elements are termed immunosensors and are based on the same principle as classical solid-phase immunoassays.

The aim of this thesis was to develop and evaluate an optical immunosensor, based on Mach-Zehnder interferometry and integrated on a silicon substrate, for the immunochemical determination of clinical analytes. The optical sensor developed is fabricated entirely by mainstream silicon technology by the Optical Biosensors group of the Institute of Nanoscience and Nanotechnology of NCSR "Demokritos" and combines arrays of ten sensors on a single silicon chip. Each sensor consists of a light source integrated on silicon, which emits a broad spectrum in the visible-near-ultraviolet range and is coupled to an integrated silicon nitride waveguide patterned into a Mach-Zehnder interferometer. The signal is recorded either through a photodetector monolithically integrated onto the same silicon chip (fully integrated configuration) or through an external spectrometer (semi-integrated configuration). In the fully integrated configuration, the recorded signal is the total photocurrent across the whole spectral range, while in the semi-integrated configuration the whole transmission spectrum is continuously recorded and mathematically transformed (Fourier transform) into a phase shift. As in classical Mach-Zehnder interferometers, the waveguide in the proposed sensor is split into two arms: the sensing arm, which is appropriately modified with the recognition biomolecule, and the reference arm, which is covered by a protective layer. The specific binding of the analyte to the recognition biomolecule immobilized onto the surface causes an effective refractive index change at the surface of the sensing arm, thus affecting the phase of the waveguided light with respect to the reference arm. When the two arms converge again, an interference spectrum is generated that is altered during the bioreaction, enabling real-time, label-free monitoring. The main difference of the developed sensor with respect to classical Mach-Zehnder interferometers is that the light source is monolithically integrated on the same silicon substrate as the waveguides, and the waveguided light is not monochromatic but broad-spectrum.

First, the method for chemical activation of the chips for biofunctionalization was optimized. It was found that the highest signals were obtained when the chips were activated with (3-aminopropyl)triethoxysilane and the biomolecule solutions were deposited using a microarray spotter. Then, the two sensor configurations, i.e. the fully and the semi-integrated configuration, were compared using a model binding assay, namely the streptavidin-biotin reaction. The semi-integrated configuration provided higher detection sensitivities, mainly due to lower between-sensor signal variation within the same chip and between different chips.

This configuration was therefore selected for further evaluation with respect to the determination of analytes of clinical interest, and especially the immunochemical determination of C-reactive protein (CRP) in human serum samples. CRP is a marker of inflammation widely used in everyday clinical practice for the diagnosis and therapy monitoring of inflammatory conditions. Nevertheless, CRP has also been proposed as a prognostic marker of myocardial infarction, and three risk levels have been established: low risk for serum CRP concentrations < 1 μg/mL; medium risk for concentrations in the range 1-3 μg/mL; and high risk for concentrations > 3 μg/mL. In the frame of the present thesis, both competitive and non-competitive enzyme immunoassays for the determination of CRP in microtitration plates were developed in order to select the most appropriate reagents and define the immunoassay conditions. Both assay formats were then transferred onto the sensor and evaluated. It was found that the non-competitive format offered higher responses and the ability to regenerate the anti-CRP antibody immobilized onto the sensor, and it was therefore selected for the final sensor evaluation. The assay developed following the non-competitive format was sensitive and accurate, as demonstrated through recovery and dilution linearity experiments, and allowed the analysis of samples with a wide range of CRP concentrations, since it was not affected by the presence of serum. In addition, the CRP values determined with the developed immunosensor in serum samples from unknown donors were in good agreement with those determined for the same samples by commercially available kits and instruments, demonstrating the reliability of the determinations performed with the immunosensor and its potential for the analysis of clinical samples.
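A minimal sketch (assumed, not from the thesis) of the Fourier-transform step described for the semi-integrated configuration: recovering the bioreaction-induced phase shift of a broadband Mach-Zehnder interferometer from its recorded transmission spectra. The grid, optical path difference and shift value are illustrative.

```python
# Illustrative FFT-based phase extraction from two-beam interference spectra.
import numpy as np

def phase_shift(spectrum_before, spectrum_after):
    """Estimate the phase change between two transmission spectra sampled on
    the same evenly spaced wavenumber grid."""
    def dominant_phase(spectrum):
        fringes = spectrum - spectrum.mean()       # remove the DC envelope
        fft = np.fft.rfft(fringes)
        k = np.argmax(np.abs(fft[1:])) + 1         # dominant fringe frequency
        return np.angle(fft[k])
    # differencing cancels the constant phase offset set by the grid origin
    return dominant_phase(spectrum_after) - dominant_phase(spectrum_before)

# usage with synthetic spectra: cos(2*pi*nu*OPD + phi) fringe pattern
nu = np.linspace(1.4e6, 2.0e6, 2048)               # wavenumber grid, 1/m
opd = 30e-6                                        # optical path difference, m
before = 1 + np.cos(2 * np.pi * nu * opd)
after = 1 + np.cos(2 * np.pi * nu * opd + 0.8)     # 0.8 rad bioreaction shift
print(phase_shift(before, after))                  # ~0.8
```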


2015 ◽  
Vol 54 (05) ◽  
pp. 447-454 ◽  
Author(s):  
U. Mansmann ◽  
D. Lindoerfer

Summary
Background: Patient registries are an important instrument in medical research. Often their structure is complex and their implementation uses composite software systems to meet the wide spectrum of challenges.
Objectives: For the implementation of a registry, there is a wide range of commercial, open source, and self-developed systems available, and a minimal standard for the critical appraisal of their architecture is needed.
Methods: We performed a systematic review of the literature to define a catalogue of relevant criteria to construct a minimal appraisal standard.
Results: The CIPROS list was developed based on 64 papers found by our systematic review. The list covers twelve sections and contains 72 items.
Conclusions: The CIPROS list supports developers in assessing requirements on existing systems and strengthens the reporting of patient registry software system descriptions. It can be a first step towards creating standards for patient registry software system assessments.


2014 ◽  
Vol 17 (3) ◽  
Author(s):  
Emiliano Reynares ◽  
María Laura Caliusco ◽  
Maria Rosa Galli

The wide applicability of mapping business rule expressions to ontology statements has recently been recognized. Some of the most important applications are: (1) the use of ontology reasoners to prove the consistency of business domain information; (2) the generation of an ontology intended to be used in the analysis stage of a software development process; and (3) the possibility of encapsulating the declarative specification of business knowledge in information software systems by means of an implemented ontology. The Semantics of Business Vocabulary and Business Rules (SBVR) supports this approach by providing business people with a linguistic way to semantically describe business concepts and specify business rules independently of any information system design. Although previous work has presented some proposals, an exhaustive and automatable approach is still lacking. This work presents a broad and detailed set of transformations that allows the automatable generation of an ontology implemented in OWL 2 from the SBVR specification of a business domain. The transformations are rooted in the structural specification of both standards and are illustrated through a case study. A real-case validation example was performed, demonstrating the feasibility of the mappings through a quality assessment of the developed ontology.
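A minimal sketch of the kind of SBVR-to-OWL 2 transformation the paper describes, using the classic EU-Rent rule "It is necessary that each rental car is stored at exactly one branch." This is a hypothetical illustration written with the owlready2 library; the class, property and IRI names are not from the paper, and the paper's own transformation rules may differ.

```python
# Illustrative SBVR-to-OWL 2 mapping with owlready2; names are assumptions.
from owlready2 import Thing, get_ontology

onto = get_ontology("http://example.org/eurent.owl")

with onto:
    class RentalCar(Thing):      # SBVR noun concept -> OWL class
        pass

    class Branch(Thing):         # SBVR noun concept -> OWL class
        pass

    class is_stored_at(RentalCar >> Branch):  # SBVR fact type -> object property
        pass

    # SBVR necessity (alethic modality) -> OWL exact cardinality restriction
    RentalCar.is_a.append(is_stored_at.exactly(1, Branch))

onto.save(file="eurent.owl")
```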


2013 ◽  
Vol 56 (1) ◽  
pp. 50-64 ◽  
Author(s):  
C. V. C. Truong ◽  
Z. Duchev ◽  
E. Groeneveld

Abstract. In recent years, software packages for the management of biological data have developed rapidly. However, there is currently no general information system available for managing molecular data derived from both Sanger sequencing and microsatellite genotyping projects. A prerequisite for implementing such a system is to design a general data model which can be deployed to a wide range of labs without modification or customization. Thus, this paper aims to (1) suggest a uniform solution for efficiently storing the data items required in different labs, (2) describe procedures for representing data streams and data items, and (3) construct a formalized data framework. As a result, the data framework has been used to develop an integrated information system for small labs conducting biodiversity studies.
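A minimal sketch (assumed, not the paper's actual schema) of a unified data model in which Sanger sequencing reads and microsatellite genotypes reference a common sample entity, so one information system can hold data from either project type. All entity and field names are illustrative.

```python
# Illustrative unified data model for sequencing and genotyping records.
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Sample:
    sample_id: str
    species: str
    collected_at: Optional[str] = None     # ISO date; lab-dependent

@dataclass
class SangerRead:
    sample: Sample
    locus: str
    sequence: str                          # base calls
    quality: List[int] = field(default_factory=list)  # per-base Phred scores

@dataclass
class MicrosatelliteGenotype:
    sample: Sample
    marker: str
    allele_lengths: Tuple[int, int]        # diploid fragment sizes, bp

# both record types point at the same Sample, so no schema change is needed
# when a lab runs one project type or both
s = Sample("S001", "Bos taurus")
read = SangerRead(s, "COI", "ACGTACGT", [40, 38, 39, 41, 40, 37, 39, 38])
geno = MicrosatelliteGenotype(s, "BM1824", (178, 182))
print(read.sample is geno.sample)          # True: shared sample entity
```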


Author(s):  
Clifford Nangle ◽  
Stuart McTaggart ◽  
Margaret MacLeod ◽  
Jackie Caldwell ◽  
Marion Bennie

ABSTRACT
Objectives: The Prescribing Information System (PIS) datamart, hosted by NHS National Services Scotland, receives around 90 million electronic prescription messages per year from GP practices across Scotland. Prescription messages contain information including drug name, quantity and strength stored as coded, machine-readable data, while prescription dose instructions are unstructured free text that is difficult to interpret and analyse in volume. The aim, using Natural Language Processing (NLP), was to extract drug dose amount, unit and frequency metadata from the freely typed text in dose instructions to support calculating the intended number of days' treatment. This then allows comparison with actual prescription frequency, treatment adherence and the impact upon prescribing safety and effectiveness.
Approach: An NLP algorithm was developed using the Ciao implementation of Prolog to extract dose amount, unit and frequency metadata from dose instructions held in the PIS datamart for drugs used in the treatment of gastrointestinal, cardiovascular and respiratory disease. Accuracy estimates were obtained by randomly sampling 0.1% of the distinct dose instructions from source records and comparing these with the metadata extracted by the algorithm; an iterative approach was used to modify the algorithm to increase accuracy and coverage.
Results: The NLP algorithm was applied to 39,943,465 prescription instructions issued in 2014, consisting of 575,340 distinct dose instructions. For drugs used in the gastrointestinal, cardiovascular and respiratory systems (i.e. chapters 1, 2 and 3 of the British National Formulary (BNF)), the NLP algorithm successfully extracted drug dose amount, unit and frequency metadata from 95.1%, 98.5% and 97.4% of prescriptions respectively. However, instructions containing terms such as 'as directed' or 'as required' reduce the usability of the metadata by making it difficult to calculate the total dose intended for a specific time period: 7.9%, 0.9% and 27.9% of dose instructions contained terms meaning 'as required', while 3.2%, 3.7% and 4.0% contained terms meaning 'as directed', for drugs used in BNF chapters 1, 2 and 3 respectively.
Conclusion: The NLP algorithm developed can extract dose, unit and frequency metadata from the text of prescriptions issued to treat a wide range of conditions, and this information may be used to support calculating treatment durations, medicines adherence and cumulative drug exposure. The presence of terms such as 'as required' and 'as directed' has a negative impact on the usability of the metadata, and further work is required to determine the level of impact this has on calculating treatment durations and cumulative drug exposure.
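A minimal sketch of the extraction task described above. The actual system was written in Ciao Prolog; this hypothetical Python version shows the idea with a tiny pattern vocabulary, including the flagging of 'as required'/'as directed' instructions whose intended total dose cannot be computed. Patterns and field names are illustrative and far from complete.

```python
# Illustrative dose-instruction parser; not the NHS PIS algorithm.
import re

FREQ = {
    "once daily": 1, "twice daily": 2, "three times daily": 3,
    "od": 1, "bd": 2, "tds": 3, "qds": 4,
}

DOSE = re.compile(r"(?P<amount>\d+(?:\.\d+)?)\s*(?P<unit>tablet|capsule|ml|mg|puff)s?\b")

def parse_dose(instruction: str) -> dict:
    """Return dose amount, unit and daily frequency, or flag unusable text."""
    text = instruction.lower()
    meta = {"amount": None, "unit": None, "per_day": None, "usable": True}
    if "as required" in text or "as directed" in text:
        meta["usable"] = False             # intended total dose not computable
    m = DOSE.search(text)
    if m:
        meta["amount"] = float(m.group("amount"))
        meta["unit"] = m.group("unit")
    for phrase, n in FREQ.items():
        if re.search(rf"\b{phrase}\b", text):
            meta["per_day"] = n
            break
    return meta

print(parse_dose("Take 2 tablets twice daily"))
# {'amount': 2.0, 'unit': 'tablet', 'per_day': 2, 'usable': True}
```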

