graphical tool: Recently Published Documents

TOTAL DOCUMENTS: 367 (five years: 94)
H-INDEX: 23 (five years: 4)

2021 · Vol 6 (4) · pp. 116-122
Author(s): Tatyana I. Sinitsyna, Andrey N. Gorbunov

Background. Workovers (WO) are the main enhanced oil recovery (EOR) tool at the Krasnoleninskoye reservoirs, so increasing the reliability of technological and economic performance forecasts when planning various types of workovers is an urgent issue. This is due to the complexity of selecting well candidates, the lack of a comprehensive methodology for assessing the short-term and long-term potential of wells, large WO scopes, and declining WO performance associated with the depletion of reserves, deterioration of the energy state of the reservoirs, and advancement of the injected-water front. The purpose of the study is to create mathematical tools that reduce the time needed to select well candidates for various types of workovers and improve WO quality across the entire field. The paper describes graphical and mathematical methods for the automated selection of well candidates that were successfully applied under the conditions of the field of interest. The mathematical tool is based on correlation-regression analysis of the actual performance of stimulation methods under various geological-field conditions, implemented in Microsoft Excel 2010 with Visual Basic for Applications (VBA). The graphical tool was generated from all historical field data, verified and processed using methods of primary statistical analysis in the RN-KIN software. The study resulted in a technique that was selected and tested under the conditions of the Krasnoleninskoye oil and gas condensate field. Introducing the developed approaches to the search for well candidates for various types of workovers was accompanied by updating, analysis of results, and cyclic training of the system. A methodological approach has been developed that combines several methods for selecting well candidates for various types of workovers. Combining statistical and graphical methods significantly improved the reliability of WO candidate selection and thereby reduced the share of uneconomic workovers by 12% between 2017 and 2020. As part of the study, a script was developed that automatically computes the rank of a well candidate, which significantly reduces time costs and enables quick evaluation of the "best" workover candidates.
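Conceptually, the ranking step can be reproduced with a regression fitted to historical workover outcomes. Below is a minimal sketch of such a scoring script; the feature names, sample values, and linear model are illustrative assumptions, not the authors' actual VBA implementation.

```python
# Minimal sketch of regression-based ranking of workover candidates.
# Features and data are hypothetical, not the paper's actual model.
import numpy as np

# Hypothetical historical data: each row is (net pay, current rate, water cut);
# the target is the incremental oil rate achieved after a past workover.
X_hist = np.array([
    [12.0, 8.5, 0.45],
    [9.5, 4.2, 0.60],
    [15.1, 10.3, 0.30],
    [7.8, 3.1, 0.72],
])
y_hist = np.array([6.2, 2.1, 8.4, 1.0])  # incremental rate, t/day

# Fit a linear regression (with intercept) by least squares.
A = np.column_stack([X_hist, np.ones(len(X_hist))])
coef, *_ = np.linalg.lstsq(A, y_hist, rcond=None)

def rank_candidates(candidates):
    """Score candidate wells by predicted incremental rate, best first."""
    feats = np.column_stack([candidates, np.ones(len(candidates))])
    scores = feats @ coef
    order = np.argsort(scores)[::-1]
    return [(int(i), float(scores[i])) for i in order]

candidates = np.array([[11.0, 6.0, 0.50], [14.0, 9.0, 0.35]])
print(rank_candidates(candidates))
```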


Author(s): Bruna Strapazzon do Couto, Miguel Afonso Sellitto

The purpose of this study is to choose an order-dispatching rule and measure the work-in-process and lead-time in the production process of a conveyor chain manufacturer. The main strategic issue for the manufacturer is dependability, which requires meeting deadlines and managing internal lead-times. The study integrates two techniques, workload control (WLC) and the analytic hierarchy process (AHP), which are systems for production planning and control and for multi-criteria decision support, respectively, both widely used in handling manufacturing strategy issues. The research method is a field experiment. Supported by the AHP and according to strategic criteria, practitioners selected the earliest-due-date rule (the order with the closest due date comes first) to release 231 orders. Then, employing a methodology designed to support WLC applications, the study measured the key parameters that describe the manufacturer's overall performance: input rate, work-in-process, lead-time, throughput, and the level of safety stock. Using the model and a graphical tool derived from queuing theory, the throughput diagram, the study provides evidence that, although the manufacturing process is satisfactorily balanced and achieves acceptable performance, the level of safety stock is small and should be increased to prevent starvation on the shop floor.
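The core bookkeeping behind a throughput diagram can be illustrated with Little's law (average work-in-process = throughput x average lead time). Below is a minimal sketch over fabricated order data; the dates, work contents, and observation horizon are assumptions for illustration only.

```python
# Minimal sketch of throughput-diagram bookkeeping via Little's law.
# Order data are fabricated for illustration.
from datetime import date

# (order id, release date, completion date, work content in hours)
orders = [
    ("A1", date(2021, 3, 1), date(2021, 3, 9), 16.0),
    ("A2", date(2021, 3, 2), date(2021, 3, 15), 24.0),
    ("A3", date(2021, 3, 5), date(2021, 3, 12), 8.0),
]

horizon_days = 30.0  # length of the observation period

lead_times = [(done - rel).days for _, rel, done, _ in orders]
mean_lead_time = sum(lead_times) / len(lead_times)

# Throughput: hours of completed work output per day over the horizon.
throughput = sum(h for *_, h in orders) / horizon_days

# Little's law gives the average work-in-process over the period.
mean_wip = throughput * mean_lead_time
print(f"lead time {mean_lead_time:.1f} d, throughput {throughput:.2f} h/d, WIP {mean_wip:.1f} h")
```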


Metals · 2021 · Vol 11 (12) · pp. 1953
Author(s): Thibault Quatravaux, Jose Barros, Pascal Gardin, Gabriel Lucena

The blast-furnace operating diagram proposed by Rist was adapted to direct reduction and applied specifically to the Midrex NG™ process. Using this graphical tool to study an industrial process highlighted the staged nature of reduction in the shaft furnace, in particular the existence of a prereduction zone in the upper part where metallization is thermodynamically impossible. A sensitivity study also showed the impact of the in situ reforming rate on the ability of the gas to completely reduce the iron oxides. Finally, we graphically defined the minimum quality required of the top gas to produce direct-reduced iron.
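The thermodynamic constraint behind the prereduction zone can be sketched as a simple check on the gas oxidation degree: metallization is possible only while the gas remains reducing enough relative to the wüstite/iron equilibrium. The equilibrium threshold and gas composition below are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of the kind of check the operating diagram encodes:
# a gas can metallize iron only while its oxidation degree stays below
# the wustite/iron equilibrium value. Threshold is illustrative only.

def gas_oxidation_degree(co, co2, h2, h2o):
    """Fraction of the reducing gas already oxidized: (CO2+H2O)/(CO+CO2+H2+H2O)."""
    return (co2 + h2o) / (co + co2 + h2 + h2o)

# Assumed wustite -> iron equilibrium value at shaft temperature (illustrative).
ETA_EQUILIBRIUM = 0.35

# Example gas composition in molar fractions (hypothetical values).
eta = gas_oxidation_degree(co=0.35, co2=0.16, h2=0.38, h2o=0.11)
print(f"gas oxidation degree = {eta:.2f}:",
      "metallization possible" if eta < ETA_EQUILIBRIUM else "prereduction only")
```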


2021
Author(s): Duarte Vital, Pedro Mariano, Susana Marta Almeidaz, Pedro Santana

2021 · Vol 12 (1)
Author(s): Aditya Shekhar Nittala, Andreas Karrenbauer, Arshad Khan, Tobias Kraus, Jürgen Steimle

Abstract: Electro-physiological sensing devices are becoming increasingly common in diverse applications. However, designing such sensors in compact form factors and for high-quality signal acquisition is a challenging task even for experts; it is typically done using heuristics and requires extensive training. Our work proposes a computational approach to designing multi-modal electro-physiological sensors. By employing an optimization-based approach alongside an integrated predictive model for multiple modalities, compact sensors can be created that offer an optimal trade-off between high signal quality and small device size. The task is assisted by a graphical tool that allows designers to easily specify design preferences and to visually analyze the generated designs in real time, enabling designer-in-the-loop optimization. Experimental results show high quantitative agreement between the optimizer's predictions and experimentally collected physiological data. They demonstrate that the generated designs achieve an optimal balance between the size of the sensor and its signal acquisition capability, outperforming expert-generated solutions.
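The optimization loop described above can be illustrated with a toy weighted objective that trades predicted signal quality against electrode area. The surrogate quality model, weights, and bounds below are stand-in assumptions, not the paper's trained predictive model or actual optimizer.

```python
# Minimal sketch of a quality-vs-size trade-off optimization.
# The quality surrogate is a stand-in for the paper's predictive model.
from scipy.optimize import minimize

def predicted_quality(dims):
    """Toy surrogate: quality saturates as electrode area grows."""
    w, h = dims
    area = w * h
    return area / (area + 4.0)  # in (0, 1)

def objective(dims, alpha=0.6):
    """Weighted trade-off: reward quality, penalize area (alpha is assumed)."""
    w, h = dims
    return -alpha * predicted_quality(dims) + (1 - alpha) * (w * h) / 25.0

# Bounds keep each dimension between 0.5 and 5 cm (illustrative).
res = minimize(objective, x0=[3.0, 3.0], bounds=[(0.5, 5.0), (0.5, 5.0)])
w, h = res.x
print(f"optimal size {w:.2f} x {h:.2f} cm, quality {predicted_quality(res.x):.2f}")
```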


2021 · Vol 2084 (1) · pp. 012026
Author(s): Sarah Yusoff, Nur Hidayah Md Noh, Norulhidayah Isa

Abstract: This study aims to explore students' level of readiness to take up job opportunities in big data analytics and to determine the factors contributing to that readiness; the crucial factors that need to be addressed are also identified. This job field demands criteria such as willingness to work in a team, self-effort, and specialised skills such as data visualisation and data storytelling, big data analysis, and basic knowledge of big data analytics tools. Intellipaat.com, a platform that offers various professional online training courses, ranked positions in big data analytics and data science among the highest-paying jobs in 2019. However, from 2019 onwards, Malaysia has been predicted to suffer a shortfall of 7,000 to 15,000 data analysis professionals, and educational institutions are being encouraged to produce more graduates to meet this need. The question is whether students are prepared and willing to work in this sector once they graduate. An online survey was constructed and distributed to all UiTM students enrolled in various bachelor's and master's programmes; one hundred and thirty-nine students participated. The data were tabulated graphically using a box-and-whisker plot, and correlation analysis and multiple regression were used to determine the relationships and factors contributing to students' readiness for job opportunities in big data analytics. The box-and-whisker plot revealed an encouraging level of students' readiness for job opportunities in big data analytics. Correlation analysis showed weak to moderate relationships among the factors, and multiple linear regression revealed that data visualisation and storytelling skill (DVSS) and teamwork (TW) significantly influence students' prospects in a big data analytics career. The results of this study are expected to provide insights into students' readiness for job opportunities in big data analytics.
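For readers who want to reproduce this kind of analysis, here is a minimal sketch of the pipeline (box-and-whisker plot plus multiple linear regression) using pandas and statsmodels; the survey scores below are fabricated for illustration only.

```python
# Minimal sketch: boxplot of readiness scores plus a multiple linear
# regression on the two reportedly significant factors (DVSS and TW).
# All data values are fabricated for illustration.
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt

df = pd.DataFrame({
    "readiness": [3.8, 4.1, 3.2, 4.5, 3.9, 2.8, 4.0, 3.6],
    "DVSS":      [3.5, 4.0, 3.0, 4.6, 3.8, 2.5, 4.2, 3.4],
    "TW":        [4.0, 4.2, 3.1, 4.8, 4.1, 3.0, 3.9, 3.7],
})

# Box-and-whisker plot of the overall readiness scores.
df.boxplot(column="readiness")
plt.savefig("readiness_boxplot.png")

# Multiple linear regression: readiness ~ DVSS + TW.
X = sm.add_constant(df[["DVSS", "TW"]])
model = sm.OLS(df["readiness"], X).fit()
print(model.summary())
```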


2021 · Vol 22 (1)
Author(s): L. F. Signorini, T. Almozlino, R. Sharan

Abstract. Background: ANAT is a Cytoscape plugin for the inference of functional protein–protein interaction (PPI) networks in yeast and human. It is a flexible graphical tool for scientists to explore and elucidate the protein–protein interaction pathways of a process under study. Results: Here we present ANAT 3.0, which comes with updated PPI network databases of 544,455 (human) and 155,504 (yeast) interactions, and a new machine-learning layer for refined network elucidation. Together they yield a more than twofold increase in quality when reconstructing known signaling pathways from KEGG. Conclusions: ANAT 3.0 includes improved network reconstruction algorithms and more comprehensive protein–protein interaction networks than previous versions. ANAT is available for download on the Cytoscape App Store and at https://www.cs.tau.ac.il/~bnet/ANAT/.


Mathematics · 2021 · Vol 9 (20) · pp. 2597
Author(s): Gábor Kusper, Csaba Biró, Benedek Nagy

In this paper, we introduce the notion of resolvable networks. A resolvable network is a digraph of subnetworks, where subnetworks may overlap and the inner structure of a subnetwork is not of interest from the viewpoint of the network. There are two special subnetworks, Source and Sink, with the following properties: there is no incoming edge to Source, and there is no outgoing edge from Sink. Any resolvable network can be represented by a satisfiability problem in Boolean logic (a SAT problem, for short), and any SAT problem can be represented by a resolvable network. Consequently, the resolution operation is also valid for resolvable networks, and we can use resolution to discover or refine the inner structure of subnetworks. We also give a pessimistic and an optimistic interpretation of subnetworks. In the pessimistic case, we assume that all communication possibilities inside a subnetwork are represented as part of the resolvable network. In the optimistic case, we assume that each subnetwork is strongly connected. We show that any SAT problem can be visualized using the pessimistic interpretation. We show that transitivity is very limited in the pessimistic interpretation, where it corresponds to resolution of clauses. In the optimistic interpretation we have transitivity without any further condition, but not all SAT problems can be represented in this case; however, any such network can still be represented as a SAT problem. The newly introduced graphical concept makes it possible to use terminology and tools from directed graphs in the field of SAT and to give graphical representations of various concepts of satisfiability problems. A resolvable network is also a suitable data structure for studying, for example, wireless sensor networks. The visualization power of resolvable networks is demonstrated on some pigeonhole SAT problems. Another important application field could be modeling the communication network of an information bank. Here, a subnetwork represents a dataset of a user that is secured by a proxy; any communication must go through the proxy, and this constraint can be checked using our model.
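Since resolvable networks inherit the resolution operation from SAT, it helps to recall what that operation does on clauses. The sketch below is the standard clause-level resolution step, not the paper's network-level algorithm.

```python
# Minimal sketch of clause resolution: clauses are frozensets of literals,
# where a negative integer denotes a negated variable.

def resolve(c1, c2):
    """Return all resolvents of two clauses (one per complementary literal pair)."""
    resolvents = []
    for lit in c1:
        if -lit in c2:
            # Remove the complementary pair and merge the remaining literals.
            resolvents.append((c1 - {lit}) | (c2 - {-lit}))
    return resolvents

# (x1 or x2) and (not x1 or x3) resolve on x1 to give (x2 or x3).
c1 = frozenset({1, 2})
c2 = frozenset({-1, 3})
print(resolve(c1, c2))  # [frozenset({2, 3})]
```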


2021 · Vol 7 · pp. e740
Author(s): Luis Naranjo-Zeledón, Mario Chacón-Rivas, Jesús Peral, Antonio Ferrández

Different fields such as linguistics, teaching, and computing have shown special interest in the study of sign languages (SL). However, teaching and learning these languages is complex, since it is unusual to find teachers who are fluent in both the SL and the native language of the students; instruction by deaf individuals is therefore unique, and it is important for students to be able to lean on supportive mechanisms while learning an SL. Bidirectional communication between deaf and hearing people through SL is a hot topic for achieving a higher level of inclusion, yet the same scarcity of bilingual teachers makes it harder to provide computer-assisted teaching tools for different SL. Moreover, the main aspects that a second-language learner of an SL finds difficult are phonology, non-manual components, and the use of space (the latter two being specific to SL rather than spoken languages). This proposal appears to be the first of its kind to support the Costa Rican Sign Language (LESCO, for its Spanish acronym), as well as any other SL. Our research focuses on reinforcing the learning process of hearing end users through a modular architectural design of a learning environment, relying on the concept of phonological proximity within a graphical tool with a high degree of usability. The aim of incorporating phonological proximity is to help individuals learn signs with similar handshapes. The architecture separates the logic and processing aspects from those associated with data access and generation, which makes it portable to other SL in the future. The methodology consisted of defining 26 phonological parameters (13 for each hand) to characterize each sign appropriately, and then applying a similarity formula to compare each pair of signs. With these pre-calculations, the tool displays each sign together with its ten most similar signs. A System Usability Scale (SUS) test, an open qualitative question, and a numerical evaluation were applied to a group of learners to validate the proposal. To frame our research aims, we analyzed previous work on teaching tools for practicing SL and on the importance of phonological proximity in the teaching process; this prior work justifies the necessity of our proposal, whose benefits have been demonstrated through experimentation by different users on the usability and usefulness of the tool. Homonymous words (signs with the same starting handshape) and paronyms (signs with highly similar handshapes) have been included to explore their impact on learning. The same perspective of our existing line of research can be applied to other SL in the future.
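A minimal sketch of the pre-calculation step makes the methodology concrete: each sign is a 26-component parameter vector, and pairwise similarity ranks the ten nearest signs. The toy lexicon, parameter encoding, and inverse-distance similarity below are placeholder assumptions; the paper's actual similarity formula is not reproduced here.

```python
# Minimal sketch of the sign-similarity lookup. Each sign is a vector of
# 26 phonological parameters (13 per hand); similarity is computed pairwise
# to pre-rank the ten nearest signs. All encodings here are placeholders.
import numpy as np

rng = np.random.default_rng(0)
signs = {f"sign_{i}": rng.integers(0, 5, size=26) for i in range(50)}  # toy lexicon

def similarity(a, b):
    """Placeholder similarity: higher when parameter vectors are closer."""
    return 1.0 / (1.0 + np.abs(a - b).sum())

def top_ten(query):
    """Return the ten signs most similar to the query sign."""
    q = signs[query]
    scored = [(name, similarity(q, v)) for name, v in signs.items() if name != query]
    return sorted(scored, key=lambda t: t[1], reverse=True)[:10]

print(top_ten("sign_0"))
```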


2021 · pp. 0272989X2110446
Author(s): Anu Mishra, Robyn L. McClelland, Lurdes Y. T. Inoue, Kathleen F. Kerr

Background: An established risk model may demonstrate miscalibration, meaning predicted risks do not accurately capture event rates. In some instances, investigators can identify and address the cause of miscalibration; in other circumstances, it may be appropriate to recalibrate the risk model. Existing recalibration methods do not account for settings in which the risk score will be used for risk-based clinical decision making. Methods: We propose two new methods for risk model recalibration when the intended purpose of the risk model is to prescribe an intervention to high-risk individuals. Our measure of a risk model's clinical utility is standardized net benefit. The first method is a weighted strategy that prioritizes good calibration at or around the critical risk threshold. The second method uses constrained optimization to produce a recalibrated risk model with the maximum possible net benefit, likewise prioritizing good calibration around the critical risk threshold. We also propose a graphical tool for assessing the potential for recalibration to improve the net benefit of a risk model. We illustrate these methods by recalibrating the American College of Cardiology (ACC)–American Heart Association (AHA) atherosclerotic cardiovascular disease (ASCVD) risk score within the Multi-Ethnic Study of Atherosclerosis (MESA) cohort. Results: The new methods are implemented in the R package ClinicalUtilityRecal. Recalibrating the ACC-AHA ASCVD risk score for a MESA subcohort results in higher estimated net benefit with the proposed methods than with existing methods, with improved calibration in the most clinically impactful regions of risk. Conclusion: The proposed methods target good calibration for critical risks and can improve the net benefit of a risk model. We recommend constrained optimization when the risk model's net benefit is paramount; the weighted approach can be considered when good calibration over an interval of risks is important.
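To make the target measure concrete, here is a minimal sketch of standardized net benefit at a fixed risk threshold, using the standard definition sNB = [TPR*rho - FPR*(1-rho)*t/(1-t)] / rho, where rho is the event rate. The data are toy values; the authors' actual methods are implemented in the R package ClinicalUtilityRecal.

```python
# Minimal sketch of standardized net benefit (sNB) at a risk threshold t:
#   sNB = [TPR*rho - FPR*(1-rho)*t/(1-t)] / rho, with rho = event rate.
# Toy data only; not the paper's implementation.
import numpy as np

def standardized_net_benefit(risk, outcome, threshold):
    """sNB of intervening on everyone whose predicted risk exceeds the threshold."""
    risk, outcome = np.asarray(risk), np.asarray(outcome)
    rho = outcome.mean()                      # event rate
    treat = risk >= threshold
    tpr = (treat & (outcome == 1)).sum() / max((outcome == 1).sum(), 1)
    fpr = (treat & (outcome == 0)).sum() / max((outcome == 0).sum(), 1)
    nb = tpr * rho - fpr * (1 - rho) * threshold / (1 - threshold)
    return nb / rho

# Toy example: predicted risks and observed events at a 7.5% threshold.
risk = [0.02, 0.05, 0.09, 0.12, 0.20, 0.31, 0.04, 0.16]
outcome = [0, 0, 0, 1, 1, 1, 0, 0]
print(standardized_net_benefit(risk, outcome, threshold=0.075))
```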

