The Tacit Dimension of User Tasks: Elicitation and Contextual Representation

Author(s):  
Jeannette Hemmecke ◽  
Chris Stary

2015 ◽  
Vol 19 (2) ◽  
pp. 351-371 ◽  
Author(s):  
Paul Ihuoma Oluikpe

Purpose – The purpose of this paper is to explore the knowledge processes that interplay in the social construction and appropriation of knowledge, and to test these constructs empirically in project teams.

Design/methodology/approach – A literature review and a quantitative survey were used. The research identified project success, faster completion times, operational efficiency, innovation, and the generation of new knowledge as the dominant project management expectations of the past ten years. It studied how project teams construct and appropriate knowledge in order to achieve these five objectives. Using a quantitative approach, data were sought from 1,000 respondents out of a population of 10,000 across 11 project management areas in eight world regions to test the conceptual model in real-world scenarios. The data gathered were analyzed using quantitative techniques such as reliability, correlation, and regression analysis.

Findings – There is a lingering difficulty within organizations over how to translate tacit knowledge into action. The transfer and utilization of tacit knowledge was shown to be embedded and nested within relationships. Innovation in projects was found to be linked mostly to the replication and codification of knowledge (the explicit dimension) rather than to interpretation and assimilation (the tacit dimension). Arriving at a mutual interpretation of project details and requirements depends not on canonical methods (formal documentation) but mostly on non-canonical (informal) and relational processes embedded within the team.

Originality/value – This work studied, in empirical and geographical detail, the social interplay of knowledge and provided evidence on the appropriation of knowledge in the project organizational form, which can be extrapolated to wider contexts. It scoped the inter-relational nature of knowledge and provided further evidence on the nebulous nature of tacit/intangible knowledge. It also showed that organizations rely mostly on explicit knowledge to drive organizational results, as it is easily actionable and measurable.
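For readers who want to replicate this style of analysis, a minimal Python sketch follows, using NumPy and SciPy on simulated Likert-scale data. The construct names, sample size, and data here are illustrative assumptions only and are not taken from the study.

```python
import numpy as np
from scipy import stats

def cronbach_alpha(items):
    """Scale reliability; items is (n_respondents, k_items)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                          / items.sum(axis=1).var(ddof=1))

# Illustrative data only: 5-point Likert responses for a 4-item construct
rng = np.random.default_rng(0)
construct = rng.integers(1, 6, size=(200, 4))
print("alpha:", round(cronbach_alpha(construct), 2))

# correlation and simple regression between two (simulated) construct scores
tacit = construct.mean(axis=1)
success = tacit + rng.normal(0, 0.5, size=200)
r, p = stats.pearsonr(tacit, success)
fit = stats.linregress(tacit, success)
print(f"r={r:.2f}, p={p:.3f}, slope={fit.slope:.2f}")
```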


2016 ◽  
Vol 14 (1) ◽  
pp. 23-42
Author(s):  
Andrew Watt ◽  
Deiniol Skillicorn ◽  
Jediah Clark ◽  
Rachel Evans ◽  
Paul Hewlett ◽  
...  

2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
C. Saravanakumar ◽  
M. Geetha ◽  
S. Manoj Kumar ◽  
S. Manikandan ◽  
C. Arun ◽  
...  

Cloud computing models use virtual machine (VM) clusters to protect resources from failure by providing backup capability. Cloud user tasks are scheduled by selecting suitable resources for executing each task in the VM cluster. Existing VM clustering processes suffer from issues such as preconfiguration, downtime, complex backup processes, and disaster management. VM infrastructure provides highly available resources with dynamic, on-demand configuration. The proposed methodology supports the VM clustering process by placing and allocating VMs based on the requesting task's size and bandwidth level, enhancing efficiency and availability. The proposed clustering process is divided into preclustering and postclustering phases based on migration. A task and bandwidth classification process matches tasks with adequate bandwidth for execution in the VM cluster; the mapping of bandwidth to VMs is done based on the availability of VMs in the cluster. The VM clustering process uses performance parameters such as VM lifetime, VM utilization, bucket size, and task execution time. The main objective of the proposed VM clustering is to map each task to a suitable VM with sufficient bandwidth, achieving high availability and reliability. It reduces task execution and allocation time compared to existing algorithms.
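The paper's exact placement algorithm is not reproduced in the abstract; the following is a minimal Python sketch of the core idea of mapping a task to a VM by task size and bandwidth. The names (VM, place_task) and the greedy most-free-capacity rule are illustrative assumptions; the proposed method additionally tracks VM lifetime, utilization, and bucket size.

```python
from dataclasses import dataclass

@dataclass
class VM:
    vm_id: str
    capacity: float       # task-size units the VM can host
    bandwidth: float      # bandwidth available to the VM
    used: float = 0.0

    def fits(self, size, bw):
        # the task must fit in the remaining capacity and bandwidth
        return self.capacity - self.used >= size and self.bandwidth >= bw

def place_task(size, bw, cluster):
    """Greedy mapping: among VMs satisfying both the task size and the
    bandwidth requirement, choose the one with the most free capacity."""
    candidates = [vm for vm in cluster if vm.fits(size, bw)]
    if not candidates:
        return None  # a full system would trigger migration/postclustering
    best = max(candidates, key=lambda vm: vm.capacity - vm.used)
    best.used += size
    return best.vm_id

# usage: only vm-b has enough bandwidth for this task
cluster = [VM("vm-a", capacity=8.0, bandwidth=100.0),
           VM("vm-b", capacity=4.0, bandwidth=500.0)]
print(place_task(size=2.0, bw=300.0, cluster=cluster))  # vm-b
```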


Author(s):  
Tobias M. Rasse ◽  
Réka Hollandi ◽  
Péter Horváth

Various pre-trained deep learning models for the segmentation of bioimages have been made available as ‘developer-to-end-user’ solutions. They usually require neither knowledge of machine learning nor coding skills, and are optimized for ease of use and for deployability on laptops. However, testing these tools individually is tedious, and success is uncertain.

Here, we present the ‘Op’en ‘Se’gmentation ‘F’ramework (OpSeF), a Python framework for deep learning-based instance segmentation. OpSeF aims at facilitating the collaboration of biomedical users with experienced image analysts. It builds on the analysts’ knowledge of Python, machine learning, and workflow design to solve complex analysis tasks at any scale in a reproducible, well-documented way. OpSeF defines standard inputs and outputs, thereby facilitating modular workflow design and interoperability with other software. Users play an important role in problem definition, quality control, and manual refinement of results. All analyst tasks are optimized for deployment on Linux workstations or GPU clusters; all user tasks may be performed on any laptop in ImageJ.

OpSeF semi-automates preprocessing, convolutional neural network (CNN)-based segmentation in 2D or 3D, and post-processing. It facilitates benchmarking of multiple models in parallel. OpSeF streamlines the optimization of pre- and post-processing parameters such that an available model may frequently be used without retraining. Even if sufficiently good results are not achievable with this approach, intermediate results can inform the analysts' selection of the most promising CNN architecture, in which the biomedical user might then invest the effort of manually labeling training data.

We provide Jupyter notebooks that document sample workflows based on various image collections. Analysts may find these notebooks useful to illustrate common segmentation challenges, as they prepare the advanced user for gradually taking over some of their tasks and completing their projects independently. The notebooks may also be used to explore the analysis options available within OpSeF interactively and to document and share final workflows.

Currently, three mechanistically distinct CNN-based segmentation methods, the U-Net implementation used in CellProfiler 3.0, StarDist, and Cellpose, have been integrated within OpSeF. The addition of new networks requires little coding; the addition of new models requires none. Thus, OpSeF might soon become an interactive model repository in which pre-trained models can be shared, evaluated, and reused with ease.
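The abstract does not show OpSeF's actual API, so the following is a minimal, hypothetical Python sketch of the modular standard-input/standard-output design it describes: pluggable segmenters benchmarked over the same image set with standardized outputs. All function names and signatures here are assumptions, not OpSeF's real interface; in practice each segmenter would wrap a real backend such as the CellProfiler U-Net, StarDist, or Cellpose.

```python
from pathlib import Path
import numpy as np

def preprocess(img):
    # normalize intensities to [0, 1]
    img = img.astype(np.float32)
    return (img - img.min()) / (np.ptp(img) + 1e-8)

def remove_small_objects(labels, min_size=20):
    # post-processing: drop labeled objects smaller than min_size pixels
    out = labels.copy()
    for obj_id in np.unique(labels):
        if obj_id != 0 and np.sum(labels == obj_id) < min_size:
            out[out == obj_id] = 0
    return out

def benchmark(images, segmenters, out_dir):
    """Run each segmenter on each preprocessed image and save the label
    maps in one standardized layout, so results from different models
    can be compared side by side."""
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    for name, segment in segmenters.items():
        for i, img in enumerate(images):
            labels = remove_small_objects(segment(preprocess(img)))
            np.save(out_dir / f"{name}_img{i:03d}_labels.npy", labels)

# usage: a segmenter is any callable image -> label map; a trivial
# threshold "model" stands in for a real CNN backend here
if __name__ == "__main__":
    dummy = [np.random.rand(64, 64) for _ in range(2)]
    segmenters = {"threshold": lambda im: (im > 0.5).astype(np.int32)}
    benchmark(dummy, segmenters, "opsef_demo_out")
```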


2019 ◽  
Vol 4 (1) ◽  
Author(s):  
Victoria Hasenstab ◽  
Manuel Pietzonka

By working in different projects and different teams over the years, employees unconsciously acquire tacit knowledge. It is represented through experience and is intangible; because it is embodied in our routines, tacit knowledge is difficult to verbalize. This paper introduces a practical approach that companies can use to draw on their tacit knowledge and become a learning organization. The results of a semi-standardized face-to-face interview survey with participants (n=10) show to what extent self-reflection can contribute to uncovering and sharing tacit knowledge in an IT organization. The participants' answers were recorded, transcribed, coded, and analyzed qualitatively. The results show that the intervention can encourage the process of uncovering tacit knowledge. Through self-reflection, employees are able to see past project problems from different perspectives, and thereby uncover the tacit dimension of their experience and gain new insights.


Atmosphere ◽  
2021 ◽  
Vol 12 (10) ◽  
pp. 1266
Author(s):  
Jing Qin ◽  
Liang Chen ◽  
Jian Xu ◽  
Wenqi Ren

In this paper, we propose a novel method to remove haze from a single hazy input image based on sparse representation. In our method, the sparse representation is used as a contextual regularization tool, which reduces the block artifacts and halos produced when only the dark channel prior is used without soft matting, since the transmission is not always constant within a local patch. A novel way of using the dictionary is proposed to smooth the image and generate a sharp dehazed result. Experimental results demonstrate that our proposed method performs favorably against state-of-the-art dehazing methods and produces high-quality dehazed results with vivid colors.
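As background for the comparison, here is a compact NumPy sketch of the dark channel prior baseline that this method regularizes; the paper's sparse-representation dictionary step itself is not reconstructed here, and the patch size, omega, and t0 values follow common defaults rather than this paper's settings.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(image, patch=15):
    # per-pixel minimum over color channels, then a local minimum filter
    min_rgb = image.min(axis=2)
    return minimum_filter(min_rgb, size=patch)

def estimate_atmosphere(image, dark, top=0.001):
    # brightest pixels among the haziest 0.1% of the dark channel
    flat = dark.ravel()
    idx = np.argsort(flat)[-max(1, int(top * flat.size)):]
    return image.reshape(-1, 3)[idx].max(axis=0)

def estimate_transmission(image, A, omega=0.95, patch=15):
    # t(x) = 1 - omega * dark_channel(I / A)
    return 1.0 - omega * dark_channel(image / A, patch)

def recover(image, t, A, t0=0.1):
    # J = (I - A) / max(t, t0) + A
    t = np.clip(t, t0, 1.0)[..., None]
    return np.clip((image - A) / t + A, 0.0, 1.0)
```

A refinement step, such as soft matting, a guided filter, or, as in this paper, a sparse-representation regularizer, would then smooth the transmission map t before recovery to avoid the block artifacts the abstract mentions.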


2018 ◽  
Vol 33 (4) ◽  
pp. 739-763 ◽  
Author(s):  
Hua Liao ◽  
Weihua Dong ◽  
Haosheng Huang ◽  
Georg Gartner ◽  
Huiping Liu

2021 ◽  
Vol 27 (2) ◽  
Author(s):  
H. Hamza ◽  
A.F.D Kana ◽  
M.Y. Tanko ◽  
S. Aliyu

Cloud computing is a model that aims to deliver a reliable, customizable, and scalable computing environment to end users. It is one of the most widely used technologies embraced by industry and academia, offering a versatile and effective way to store and retrieve documents. The performance and efficiency of cloud computing services always depend on how well user tasks submitted to the cloud system are executed, and the scheduling of user tasks plays a significant role in improving the performance of cloud services. Accordingly, many dependent-task scheduling algorithms have been proposed to improve the performance of cloud services and resource utilization; however, most techniques for determining which task should be scheduled next are inefficient. This research provides an enhanced algorithm for scheduling dependent tasks in the cloud that aims to improve the overall performance of the system. Dependent tasks were represented as a directed acyclic graph (DAG), and the number of dependent tasks together with their total running time was used as a heuristic for determining which path should be explored first. A best-first search based on the defined heuristic was used to traverse the graph and determine which task should be scheduled next. Simulation results using the WorkflowSim toolkit showed average improvements of 18% in waiting time and 19% in turnaround time.
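A minimal Python sketch of such a heuristic best-first traversal is shown below. The abstract does not specify how the two heuristic terms are combined or how ties are broken, so the ordering key used here (more transitive dependents first, then greater total dependent runtime) is an assumption.

```python
import heapq
from collections import defaultdict

def best_first_schedule(runtimes, prereqs):
    """Order DAG tasks for scheduling.

    runtimes: {task: running time}
    prereqs:  {task: set of tasks that must finish first}
    Assumed heuristic: among ready tasks, prefer the one with more
    transitive dependents, then with greater total dependent runtime.
    """
    children = defaultdict(set)
    for task, pre in prereqs.items():
        for p in pre:
            children[p].add(task)

    def dependents(task):
        # count transitive dependents and sum their runtimes
        seen, stack = set(), [task]
        while stack:
            for c in children[stack.pop()]:
                if c not in seen:
                    seen.add(c)
                    stack.append(c)
        return len(seen), sum(runtimes[t] for t in seen)

    indegree = {t: len(prereqs.get(t, ())) for t in runtimes}
    ready = []
    for t, d in indegree.items():
        if d == 0:
            n, r = dependents(t)
            heapq.heappush(ready, (-n, -r, t))  # max-heap via negation
    order = []
    while ready:
        _, _, t = heapq.heappop(ready)
        order.append(t)
        for c in children[t]:
            indegree[c] -= 1
            if indegree[c] == 0:
                n, r = dependents(c)
                heapq.heappush(ready, (-n, -r, c))
    return order

# usage: t1 -> {t2, t3} -> t4
runtimes = {"t1": 2.0, "t2": 1.0, "t3": 3.0, "t4": 1.5}
prereqs = {"t2": {"t1"}, "t3": {"t1"}, "t4": {"t2", "t3"}}
print(best_first_schedule(runtimes, prereqs))  # ['t1', 't2', 't3', 't4']
```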

