Mapping organizational complexity: a network-based approach to behavior through machine learning tools

Author(s):  
Sabrina Bagnato ◽  
Antonina Barreca ◽  
Roberta Costantini ◽  
Francesca Quintiliani

The current uncertain, dynamic scenario calls for a systemic perspective on organizational complexity and behavior. Our research contributes to the analysis of organizational complexity through multidimensional behavioral mapping. Our method uses machine learning tools to detect the interconnections between the different behaviors of a person in their operating context. The research project first prototyped a model for reading organizational behavior, the related detection tool, and a data analysis methodology; it used machine learning tools and ended with a data visualization phase. We built the model by comparing benchmark theories from the literature with our field experience. The model was organized around 4 areas and 16 behaviors, which were the basis for singling out the indicators and the questionnaire items. The data analysis methodology aimed at detecting the interconnections between behaviors. We designed it by joining univariate analysis with a multivariate technique based on machine learning tools. This led to a high-resolution network map through three specific steps: (a) creating a multidimensional topology based on a Kohonen map (a type of unsupervised artificial neural network) to geometrically represent behavioral relationships; (b) applying k-means clustering to identify which areas of the map share behavior similarity or affinity factors; and (c) locating people and the identified clusters within the map. The research highlighted the validity of machine learning tools for detecting the multidimensionality of organizational behavior. We could therefore delineate the network of observed elements and visualize an otherwise unattainable complexity through multimedia and interactive reporting. In the field, the research was applied through the design and development of a prototype integrated with our LMS platform via a plugin.
Field experimentation confirmed the effectiveness of the method for creating professional growth and development paths. It also allowed us to obtain significant data by applying our model to several sectors, namely pharmaceutical, telecommunications (TLC), banking, automotive, machinery, and services.
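The three-step pipeline described above can be sketched in Python. The 16-dimensional behavioral scores, the 8×8 map size, and the choice of 4 clusters below are hypothetical stand-ins, not the authors' actual configuration:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical behavioral scores: 200 people x 16 behaviors
X = rng.random((200, 16))

# --- Step (a): train a small Kohonen map (self-organizing map) ---
rows, cols = 8, 8
W = rng.random((rows * cols, 16))  # one weight vector per map node
grid = np.array([(r, c) for r in range(rows) for c in range(cols)], float)

for epoch in range(20):
    lr = 0.5 * (1 - epoch / 20)            # decaying learning rate
    sigma = 3.0 * (1 - epoch / 20) + 0.5   # decaying neighborhood radius
    for x in X:
        bmu = np.argmin(((W - x) ** 2).sum(axis=1))  # best-matching unit
        d2 = ((grid - grid[bmu]) ** 2).sum(axis=1)   # grid distance to BMU
        h = np.exp(-d2 / (2 * sigma ** 2))           # Gaussian neighborhood
        W += lr * h[:, None] * (x - W)

# --- Step (b): k-means on the node weight vectors to find affinity regions ---
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(W)

# --- Step (c): locate each person on the map and assign a cluster ---
bmus = np.argmin(((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=2), axis=1)
person_cluster = km.labels_[bmus]
```

The SOM gives each person a position on a 2-D grid, and clustering the node weights (rather than the raw data) is what yields contiguous "affinity areas" on the map.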

2020 ◽  
Vol 30 (3) ◽  
pp. 112-126
Author(s):  
S. V. Palmov

Data analysis carried out with machine learning tools has reached almost all areas of human activity. This is due to the large amount of data that needs to be processed in order, for example, to predict the occurrence of specific events (an emergency, a customer contacting the organization's technical support, a natural disaster, etc.) or to formulate recommendations regarding interaction with a certain group of people (personalized offers for a customer, a person's reaction to advertising, etc.). The paper examines the capabilities of the Multitool analytical system, built on the "decision tree" machine learning method, for constructing predictive models suitable for practical data analysis problems. For this purpose, a series of ten experiments was conducted in which the results generated by the system were evaluated for reliability and robustness using five criteria: arithmetic mean, standard deviation, variance, probability, and F-measure. As a result, it was found that Multitool, despite its limited functionality, can create predictive models of sufficient quality for practical use.
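The evaluation scheme described above, a series of decision-tree experiments scored by mean, standard deviation, and variance of a quality measure, can be approximated with scikit-learn. Multitool itself is not publicly available, so the dataset and tree implementation here are stand-ins:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import f1_score

X, y = load_breast_cancer(return_X_y=True)

# Ten runs with different splits, mirroring the paper's series of experiments
scores = []
for seed in range(10):
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, random_state=seed)
    model = DecisionTreeClassifier(random_state=seed).fit(X_tr, y_tr)
    scores.append(f1_score(y_te, model.predict(X_te)))

scores = np.array(scores)
# Reliability/robustness criteria: mean, standard deviation, variance of F-measure
print(f"mean F1 = {scores.mean():.3f}, std = {scores.std():.3f}, var = {scores.var():.4f}")
```

A low standard deviation across the repeated runs is what the paper's "robustness" criteria are probing: the model's quality should not depend on the particular split.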


Author(s):  
Ricardo Vilalta ◽  
Tomasz Stepinski

Spacecraft orbiting a selected suite of planets and moons of our solar system are continuously sending long sequences of data back to Earth. The availability of such data provides an opportunity to invoke tools from machine learning and pattern recognition to extract patterns that can help us understand the geological processes shaping planetary surfaces. Given the marked interest of the scientific community in this particular planet, we base our discussion on Mars, where three spacecraft are presently in orbit (NASA's Mars Odyssey Orbiter and Mars Reconnaissance Orbiter, and ESA's Mars Express). Despite the abundance of available data describing the Martian surface, only a small fraction of the data is being analyzed in detail, because current techniques for data analysis of planetary surfaces rely on simple visual inspection and descriptive characterization of surface landforms (Wilhelms, 1990). The demand for automated analysis of the Martian surface has prompted the use of machine learning and pattern recognition tools to generate geomorphic maps, which are thematic maps of landforms (or topographical expressions). Examples of landforms are craters, valley networks, hills, and basins. Machine learning can play a vital role in automating the process of geomorphic mapping. A learning system can be employed either to fully automate the process of discovering meaningful landform classes using clustering techniques, or to predict the class of unlabeled landforms (after an expert has manually labeled a representative sample) using classification techniques. The impact of these techniques on the analysis of Mars topography can be of immense value due to the sheer size of the Martian surface that remains unmapped.
While it is now clear that machine learning can greatly help in automating the detailed analysis of Mars' surface (Stepinski et al., 2007; Stepinski et al., 2006; Bue and Stepinski, 2006; Stepinski and Vilalta, 2005), an interesting problem arises when an automated data analysis has produced a novel classification of a specific site's landforms. The problem lies in interpreting this new classification relative to traditionally derived classifications generated through visual inspection by domain experts. Is the new classification novel in all senses? Is it only partially novel, with many landforms matching existing classifications? This article discusses how to assess the value of clusters generated by machine learning tools as applied to the analysis of Mars' surface.
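One standard way to quantify how novel an automated classification is relative to an expert-derived one is a cluster-agreement measure such as the adjusted Rand index, together with a contingency table showing which clusters overlap which expert classes. A minimal sketch on synthetic landform features (all data below is hypothetical, not actual Mars topography):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score
from sklearn.metrics.cluster import contingency_matrix

rng = np.random.default_rng(1)
# 300 hypothetical landform sites with a stand-in expert labeling
expert = rng.integers(0, 3, 300)
# 4 terrain features (e.g. slope, curvature) loosely correlated with the labels
X = rng.normal(0, 1, (300, 4)) + expert[:, None]

clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# ARI near 1: the clustering mostly rediscovered the expert classes;
# ARI near 0: a genuinely different partition of the landforms
ari = adjusted_rand_score(expert, clusters)
cm = contingency_matrix(expert, clusters)
print("ARI:", round(ari, 3))
print(cm)
```

Rows of the contingency matrix that concentrate in a single column correspond to landform classes the clustering recovered; rows spread across columns flag candidates for genuinely novel classes.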


Author(s):  
Khalid K. Al-jabery ◽  
Tayo Obafemi-Ajayi ◽  
Gayla R. Olbricht ◽  
Donald C. Wunsch II

Author(s):  
ROBERTO TAGLIAFERRI ◽  
FRANCESCO IORIO ◽  
FRANCESCO NAPOLITANO ◽  
GIANCARLO RAICONI ◽  
GENNARO MIELE

2018 ◽  
Vol 1 (2) ◽  
pp. 58
Author(s):  
Setia Budi ◽  
Ria Dila Syahfitri

The rate of stroke incidence is about 200 per 100,000 people worldwide. This study aims to determine the relationship between stroke duration and level of independence at the Neurology Polyclinic of TK II Dr. AK Gani Hospital, Palembang, in 2017. The research method is descriptive quantitative with a cross-sectional design, carried out through questionnaire-based interviews with 42 respondents selected by accidental sampling. The research was conducted in August 2017. Data analysis consisted of univariate analysis and bivariate analysis using a one-way ANOVA test. The univariate analysis showed that the respondents had been suffering from stroke for between 2.10 and 3.38 years. Most respondents (12) were at independence level F: independent in all functions except bathing, dressing, moving, and one other function. The results showed a significant relationship between stroke duration and level of independence (p = 0.025). Rehabilitation involving patients and their families is therefore needed to help improve the independence of stroke patients in their daily activities. Keywords: stroke duration, level of independence
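The bivariate step, a one-way ANOVA comparing stroke duration across independence levels, can be run with SciPy. The sample values below are hypothetical illustrations, not the study's data:

```python
from scipy import stats

# Hypothetical stroke-duration samples (years), grouped by independence level
low_independence    = [3.1, 3.4, 2.9, 3.6, 3.2]
medium_independence = [2.6, 2.8, 2.5, 3.0, 2.7]
high_independence   = [2.1, 2.3, 2.0, 2.4, 2.2]

# One-way ANOVA: does mean duration differ across the three groups?
f_stat, p_value = stats.f_oneway(
    low_independence, medium_independence, high_independence)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

A p-value below 0.05, as the study reports (p = 0.025), would indicate that stroke duration differs significantly between independence levels.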


2019 ◽  
Vol 7 (4) ◽  
pp. 184-190
Author(s):  
Himani Maheshwari ◽  
Pooja Goswami ◽  
Isha Rana

Author(s):  
Rommel Estores ◽  
Pascal Vercruysse ◽  
Karl Villareal ◽  
Eric Barbian ◽  
Ralph Sanchez ◽  
...  

Abstract The failure analysis community working on highly integrated mixed-signal circuitry is entering an era in which System-on-Chip technologies, denser metallization schemes, on-chip dissipation techniques, and intelligent packages are being introduced simultaneously. These innovations bring a great deal of defect-accessibility challenges to the failure analyst. To contend in this era while aiming for higher efficiency and effectiveness, the failure analysis environment must undergo a disruptive evolution. The success or failure of an analysis will be determined by the careful selection of tools, data, and techniques in the applied analysis flow. A comprehensive approach is required in which hardware, software, data analysis, traditional FA techniques, and expertise are combined in a complementary way [1]. This document demonstrates this through the incorporation of advanced scan diagnosis methods into the overall analysis flow for digital functionality failures, supporting an enhanced failure analysis methodology. For the testing and diagnosis of the presented cases, compact but powerful scan test FA lab hardware with its diagnosis software was used [2]; it can therefore easily be combined with traditional FA techniques to provide stimulus for dynamic fault localization [3]. The system combines scan chain information, failure data, and layout information into one viewing environment, which provides real analytical power for the failure analyst. Comprehensive data analysis is performed to identify failing cells/nets, to provide a better overview of the failure and its interactions so the fault can be isolated to a smaller area, or to analyze subtle behavior patterns to find and rationalize possible faults that would otherwise go undetected. Three sample cases are discussed in this document to demonstrate specific strengths and advantages of this enhanced FA methodology.


2020 ◽  
Vol 13 (5) ◽  
pp. 1020-1030
Author(s):  
Pradeep S. ◽  
Jagadish S. Kallimani

Background: With the advent of data analysis and machine learning, there is a growing impetus to analyze and generate models from historic data. The data comes in numerous forms and shapes, with an abundance of challenges. The most tractable form of data for analysis is numerical data; with the plethora of available algorithms and tools, such data is quite manageable. Another form of data is categorical, subdivided into ordinal (ordered) and nominal (unordered). This data can be broadly classified as sequential or non-sequential; sequential data is easier to preprocess algorithmically. Objective: This paper deals with the challenge of applying machine learning algorithms to categorical data of a non-sequential nature. Methods: Applying several data analysis algorithms directly to such data yields a biased result, which makes it impossible to generate a reliable predictive model. In this paper, we address this problem by walking through a handful of techniques which, during our research, helped us deal with large categorical data of a non-sequential nature. In subsequent sections, we discuss the possible implementable solutions and the shortfalls of these techniques. Results: The methods are applied to sample datasets available in the public domain, and the classification accuracy results are satisfactory. Conclusion: The best pre-processing technique we observed in our research is one-hot encoding, which breaks the categorical features down into binary columns that can be fed into an algorithm to predict the outcome. The example we took is not abstract but a real-time production services dataset with many complex variations of categorical features. Our future work includes creating a robust model on such data and deploying it into industry-standard applications.
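The one-hot encoding step the conclusion recommends can be illustrated with pandas; the small service dataset below is hypothetical, not the production dataset the paper uses:

```python
import pandas as pd

# Hypothetical non-sequential categorical service records
df = pd.DataFrame({
    "service": ["hosting", "backup", "hosting", "monitoring"],
    "tier":    ["gold", "silver", "silver", "gold"],
})

# Each category value becomes its own binary column, e.g. "service_hosting"
encoded = pd.get_dummies(df)
print(encoded.columns.tolist())
```

Unlike integer-coding the categories, this avoids implying a spurious order among nominal values, which is the bias the Methods section warns about.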

