workflow system
Recently Published Documents


TOTAL DOCUMENTS: 390 (five years: 52)

H-INDEX: 23 (five years: 3)

2021 ◽  
Vol 11 (1) ◽  
pp. 20
Author(s):  
Mete Ercan Pakdil ◽  
Rahmi Nurhan Çelik

Geospatial data and related technologies have become an increasingly important part of data analysis, playing a prominent role in most analytical processes. The serverless paradigm has become one of the most popular and frequently used technologies in cloud computing. This paper reviews the serverless paradigm and examines how it can be leveraged for geospatial data processing using open standards from the geospatial community. We propose a system design and architecture that handles complex geospatial data processing jobs with minimal human intervention and resource consumption using serverless technologies. To define and execute workflows in the system, we also propose new models for both workflow and task definitions. Moreover, the proposed system exposes web services based on the Open Geospatial Consortium (OGC) API Processes specification to provide interoperability with other geospatial applications, in the anticipation that the specification will become more widely adopted. We implemented the proposed system on a public cloud provider as a proof of concept and evaluated it against sample geospatial workflows and cloud architecture best practices.
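To make the execution model concrete, the following is a minimal sketch of how a client might submit a job to an OGC API Processes endpoint such as the one the proposed system exposes; the base URL, process identifier, and input names are hypothetical.

```python
# Minimal sketch: submitting a geospatial processing job to an
# OGC API - Processes execute endpoint. The deployment URL, process id,
# and input names below are hypothetical placeholders.
import requests

BASE_URL = "https://example.com/ogcapi"    # hypothetical deployment
PROCESS_ID = "reproject-raster"            # hypothetical process

def execute_process(inputs: dict) -> dict:
    """POST an execute request (OGC API - Processes, Part 1: Core)."""
    resp = requests.post(
        f"{BASE_URL}/processes/{PROCESS_ID}/execution",
        json={"inputs": inputs},
        headers={"Prefer": "respond-async"},   # request asynchronous execution
        timeout=30,
    )
    resp.raise_for_status()
    # For asynchronous execution the server points at the created job
    # (e.g. /jobs/{jobId}) via the Location header or response links.
    body = resp.json() if resp.content else None
    return {"status_url": resp.headers.get("Location"), "body": body}

if __name__ == "__main__":
    job = execute_process({"input-raster": "s3://bucket/scene.tif",
                           "target-crs": "EPSG:3857"})
    print(job["status_url"])
```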


2021 ◽  
Author(s):  
◽  
David Stirling

Client honeypots are devices for detecting malicious servers on a network. They interact with potentially malicious servers and analyse the Web pages returned to assess whether those pages contain an attack; this type of attack is termed a 'drive-by-download'. Low-interaction client honeypots use a signature-based approach to detect known malicious code. High-interaction client honeypots run client applications in full operating systems, usually hosted by a virtual machine, and the operating systems are monitored either internally or externally for anomalous behaviour. In recent years a growing number of client honeypot systems have been developed, but there is little interoperability between them because each has its own custom operational scripts and data formats. By creating interoperability through standard interfaces we could more easily share both the use of client honeypots and the data they collect. Another problem is providing a simple means of managing an installation of client honeypots. Workflows are a popular technology for allowing end-users to coordinate e-science experiments, so workflow systems can potentially be utilised for client honeypot management. To formulate requirements for management we ran moderate-scale scans of the .nz domain over several months using a manual script-based approach. The main requirements were a system that is user-oriented, loosely coupled, and integrated with Grid computing, allowing for resource sharing across organisations. Our system design uses Grid services (extensions to Web services) to wrap client honeypots, a manager component that acts as a broker for user access, and workflows that orchestrate the Grid services. Our prototype wraps our case study, Capture-HPC, with these services, using the Taverna workflow system and a Web portal for user access. When evaluating our experiences we found that, while our system design met our requirements, a Java-based application operating on our Web services currently provides some advantages over our Taverna approach, particularly for modifying workflows, maintainability, and dealing with failure. The Taverna workflows, however, are better suited to the data analysis phase and have some usability advantages. Workflow languages such as Taverna are still relatively immature, so improvements are likely to be made. Both of these approaches are significantly easier to manage and deploy than the previous manual script-based method.
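As an illustration of the orchestration described here (a manager brokering access to service-wrapped client honeypots), the following Python sketch submits a scan and polls for results; the endpoints and field names are hypothetical and do not reflect Capture-HPC's actual interface.

```python
# Hypothetical sketch of manager-mediated honeypot orchestration:
# submit a list of URLs to a service-wrapped client honeypot, poll the
# job until it finishes, then hand the classifications to analysis.
import time
import requests

MANAGER_URL = "https://honeypot-manager.example.org"   # hypothetical broker

def submit_scan(urls):
    r = requests.post(f"{MANAGER_URL}/scans", json={"urls": urls}, timeout=30)
    r.raise_for_status()
    return r.json()["scan_id"]

def wait_for_results(scan_id, poll_seconds=60):
    while True:
        r = requests.get(f"{MANAGER_URL}/scans/{scan_id}", timeout=30)
        r.raise_for_status()
        status = r.json()
        if status["state"] in ("finished", "failed"):
            return status
        time.sleep(poll_seconds)

if __name__ == "__main__":
    scan_id = submit_scan(["http://example.nz/", "http://example.org/"])
    results = wait_for_results(scan_id)
    malicious = [u for u in results.get("urls", []) if u.get("malicious")]
    print(f"{len(malicious)} URLs classified as drive-by-download sources")
```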


2021 ◽  
Vol 7 ◽  
pp. e747
Author(s):  
Mazen Farid ◽  
Rohaya Latip ◽  
Masnida Hussin ◽  
Nor Asilah Wati Abdul Hamid

Background Recent technological developments have enabled the execution of more scientific solutions on cloud platforms. Cloud-based scientific workflows are subject to various risks, such as security breaches and unauthorized access to resources. By attacking side channels or virtual machines, attackers may bring down servers, causing interruption, delay, or incorrect output. Because cloud-based scientific workflows are often used for vital, computationally intensive tasks, their failure can come at a great cost. Methodology To increase workflow reliability, we propose the Fault and Intrusion-tolerant Workflow Scheduling algorithm (FITSW). The proposed workflow system uses task executors consisting of many virtual machines to carry out workflow tasks. FITSW duplicates each sub-task three times, uses an intermediate data decision-making mechanism, and then employs a deadline partitioning method to determine sub-deadlines for each sub-task. In this way, dynamism is achieved in task scheduling through the resource flow. The proposed technique generates or recycles task executors, keeps the workflow clean, and improves efficiency. Experiments were conducted on WorkflowSim to evaluate the effectiveness of FITSW using metrics such as task completion rate, success rate, and completion time. Results The results show that FITSW not only raises the success rate by about 12% but also improves the task completion rate by 6.2% and reduces the completion time by about 15.6% compared with the intrusion-tolerant scientific workflow (ITSW) system.
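The two ideas named in the abstract, deadline partitioning and three-way sub-task replication, can be sketched roughly as follows; this is an illustration of the general technique, not the authors' exact FITSW algorithm.

```python
# Illustrative sketch (not the exact FITSW algorithm): split an overall
# workflow deadline into sub-deadlines proportional to each level's
# estimated runtime, and run every sub-task on three executors,
# accepting a result only when at least two replicas agree (a simple
# stand-in for the intermediate-data decision-making mechanism).
from collections import Counter

def partition_deadline(level_runtimes, deadline):
    """Split `deadline` across workflow levels proportionally to runtime."""
    total = sum(level_runtimes)
    return [deadline * t / total for t in level_runtimes]

def run_replicated(task, executors, replicas=3):
    """Run `task` on `replicas` executors and majority-vote the outputs."""
    outputs = [executors[i % len(executors)](task) for i in range(replicas)]
    value, votes = Counter(outputs).most_common(1)[0]
    if votes < 2:
        raise RuntimeError("no two replicas agreed; reschedule the sub-task")
    return value

if __name__ == "__main__":
    print(partition_deadline([10, 30, 20], deadline=120))  # [20.0, 60.0, 40.0]
    executors = [lambda t: t * 2, lambda t: t * 2, lambda t: t * 2]
    print(run_replicated(5, executors))                    # 10
```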


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Fahdi Kanavati ◽  
Masayuki Tsuneki

Abstract Gastric diffuse-type adenocarcinoma represents a disproportionately high percentage of gastric cancers occurring in the young, and its relative incidence seems to be on the rise. It usually affects the body of the stomach, and it presents with a shorter duration and worse prognosis than the differentiated (intestinal) type of adenocarcinoma. The main difficulty in the differential diagnosis of gastric adenocarcinomas occurs with the diffuse type. Because the cancer cells of diffuse-type adenocarcinoma are often single and inconspicuous in a background of desmoplasia and inflammation, they can often be mistaken for a wide variety of non-neoplastic lesions, including gastritis or the reactive endothelial cells seen in granulation tissue. In this study we trained deep learning models to classify gastric diffuse-type adenocarcinoma from whole-slide images (WSIs). We evaluated the models on five test sets obtained from distinct sources, achieving receiver operating characteristic (ROC) areas under the curve (AUCs) in the range of 0.95–0.99. These highly promising results demonstrate the potential of AI-based computational pathology for aiding pathologists in their diagnostic workflow system.
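A minimal sketch of the slide-level evaluation reported here, computing ROC AUC for each test set; the data loading is hypothetical and only the metric computation reflects the described setup.

```python
# Sketch: slide-level ROC AUC on several independent test sets.
# The toy data below is a placeholder for per-slide labels and
# predicted probabilities produced by a trained classifier.
from sklearn.metrics import roc_auc_score

def evaluate(test_sets):
    """test_sets: {name: (labels, predicted_probabilities)} per slide."""
    return {name: roc_auc_score(y_true, y_prob)
            for name, (y_true, y_prob) in test_sets.items()}

if __name__ == "__main__":
    toy = {"hospital_A": ([0, 1, 1, 0], [0.1, 0.9, 0.8, 0.3])}
    print(evaluate(toy))   # {'hospital_A': 1.0}
```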


2021 ◽  
Author(s):  
Karina Sari ◽  
Herfran Rhama Priwanza ◽  
Sandi Rizky Kharisma ◽  
Rangga Saputra

Abstract Mahakam is a mature gas and oil field that has been in operation since 1966, covering an area of approximately 1500 square kilometers. It is located in East Kalimantan Province, Indonesia, and comprises seven operating fields. Tunu, Tambora, and Handil are fields within the shallow-water swamp (delta), whereas Bekapai, Peciko, Sisi Nubi, and South Mahakam are offshore fields with water depths ranging from 45 to 80 meters. This diverse setting requires different methods of site preparation, construction, drilling, and logistics. Drilling industrialization necessitates agile yet complex well preparation, especially in the deltaic environment, with around 70 wells drilled each year using three swamp barge rigs. In recent drilling development in both the Tunu and Handil fields, more shallow wells were drilled. These wells were drilled in swamp areas with heavy sedimentation and/or sand banks, which necessitated a large amount of dredging and required months of preparation, whereas the drilling operation itself took up to 3 days per well. The entire well preparation process requires planning, monitoring, and the participation of many teams in different entities. Each entity had its own version of the well planning database, resulting in data disagreement and a lack of data integrity. Thousands of emails were sent and many meetings were organized to guarantee that operations ran well. Combined with personnel movement and team reorganization, this lack of trustworthy data became a serious issue. In 2016 the company began its digitalization effort by approaching various service companies that provide well planning software. The software needed customization to match corporate needs; however, since such digitalization was not yet commonly used by most companies, it was not user friendly, and several individuals were hesitant to use it. An internal team created an application in early 2019. Based on the business requirements and working flowchart, the team decided on a clean, mobile-ready, and uncomplicated form that also enables team collaboration during the design. This ensures that all users, employees from any generation (X, Y, and Z), are able to use it and enter valid information. Equipped with map visualization, the related entities can quickly analyze the conditions surrounding the wellhead position. The application also implements an adjustable workflow system that can follow the dynamics of the organizational structure, ensuring that each well planning task is assigned to the correct team. Push notifications are another important element of the application, keeping the entire team up to date. The application also features a discussion board and a file-sharing function, allowing the teams to exchange information and files. Manual email exchange has been minimized, and meeting hours have been reduced significantly. Errors are easily identified and fixed in a single integrated database. The application is being continuously improved, from well planning only in its early stages to well design, to accommodate the whole drilling industrialization process.


GigaScience ◽  
2021 ◽  
Vol 10 (10) ◽  
Author(s):  
Vinay S Swamy ◽  
Temesgen D Fufa ◽  
Robert B Hufnagel ◽  
David M McGaughey

Abstract Background: The development of highly scalable single-cell transcriptome technology has resulted in the creation of thousands of datasets, >30 in the retina alone. Analyzing transcriptomes across different projects is highly desirable because it would allow for better assessment of which biological effects are consistent across independent studies. However, it is difficult to compare and contrast data across projects because there are substantial batch effects arising from computational processing, the single-cell technology utilized, and natural biological variation. While many single-cell transcriptome-specific batch correction methods purport to remove the technical noise, it is difficult to ascertain which method functions best. Results: We developed a lightweight R package (scPOP, single-cell Pick Optimal Parameters) that brings in batch integration methods and uses a simple heuristic to balance batch merging and cell type/cluster purity. We use this package along with a Snakefile-based workflow system to demonstrate how to optimally merge 766,615 cells from 33 retina datasets and 3 species to create a massive ocular single-cell transcriptome meta-atlas. Conclusions: This provides a model for how to efficiently create meta-atlases for tissues and cells of interest.
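The balancing heuristic can be illustrated roughly as follows; this is a sketch of the general idea in Python, not scPOP's actual implementation (scPOP is an R package whose exact API is not described here).

```python
# Illustration of balancing batch merging against cluster purity:
# rank candidate integration runs by a batch-mixing score and a
# cluster-purity score, then pick the run with the best combined rank.
# Method names and scores below are hypothetical.
def pick_best_run(runs):
    """runs: {name: {"batch_mixing": float, "cluster_purity": float}},
    with both scores oriented so that higher is better."""
    by_mixing = sorted(runs, key=lambda r: runs[r]["batch_mixing"], reverse=True)
    by_purity = sorted(runs, key=lambda r: runs[r]["cluster_purity"], reverse=True)
    combined = {r: by_mixing.index(r) + by_purity.index(r) for r in runs}
    return min(combined, key=combined.get)

if __name__ == "__main__":
    candidates = {
        "harmony":   {"batch_mixing": 0.82, "cluster_purity": 0.74},
        "scanorama": {"batch_mixing": 0.77, "cluster_purity": 0.81},
        "scvi":      {"batch_mixing": 0.85, "cluster_purity": 0.79},
    }
    print(pick_best_run(candidates))   # "scvi": best combined rank
```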


BMC Surgery ◽  
2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Daiki Wada ◽  
Koichi Hayakawa ◽  
Fukuki Saito ◽  
Kazuhisa Yoshiya ◽  
Yasushi Nakamori ◽  
...  

Abstract Background A novel trauma workflow system called the hybrid emergency room (Hybrid ER), which combines a sliding CT scanner system with interventional radiology features (IVR-CT), was initially instituted in our emergency department in 2011. Use of the Hybrid ER enables CT diagnosis and emergency therapeutic interventions without transferring the patient to another room. We describe an illustrative case of severe multiple blunt trauma that included injuries to the brain and torso to highlight the ability to perform multiple procedures in the Hybrid ER. Case presentation A 46-year-old man sustained multiple injuries after falling from height. An early CT scan performed in the Hybrid ER revealed grade IIIa thoracic aortic injury, left lung contusion, and right subdural haematoma and subarachnoid haemorrhage. Without relocating the patient, all definitive procedures, including trepanation, total pneumonectomy, and thoracic endovascular aneurysm repair were performed in the Hybrid ER. At 5.72 h after definitive surgery was begun, the patient was transferred to the intensive care unit. Conclusions The Hybrid ER has the potential to facilitate the performance of multiple definitive procedures in combination to treat severe multiple blunt trauma including injuries to the brain and torso. Emergency departments with more than one resuscitation room would benefit from a Hybrid ER to treat complex emergency cases.

