Adoption of Network Analysis Techniques to Understand the Training Process in Brazil

AWARI ◽  
2020 ◽  
Vol 1 (1) ◽  
Author(s):  
Higor Alexandre Duarte Mascarenhas ◽  
Thiago Magela Rodrigues Dias ◽  
Patrícia Mascarenhas Dias

The migration of Brazilians has become increasingly frequent, with the main purpose of obtaining better living conditions. Studies indicate that one of the main reasons for migration is the search for advanced training. In this scenario, the main objective of this research is to analyze the exodus of Brazilian students during their academic formation process, based on data extracted from their curricula registered in the Lattes Platform, using network analysis techniques. The Lattes Platform was chosen because it is one of the main Brazilian academic repositories and holds information relevant to this research. The LattesDataXplorer framework was used for data extraction and processing. Subsequently, the subset of individuals with a completed doctorate was selected, as they have the highest level of education and tend to keep their curricula constantly updated. The data were then enriched with geolocation information about the institutions where these individuals trained, in order to compute the distances covered by doctorate holders. Network analysis was used to visualize the data, and network metrics provided an overview of how the Brazilian scientific exodus occurs. A high concentration of doctorate holders is observed in cities with a higher concentration of universities offering postgraduate programs at the master's and doctoral levels, cities which are also characterized by higher per capita incomes.
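The distance and network computations described above can be sketched in outline. The following is a minimal illustration, not the authors' code: the cities, coordinates, and flow counts are invented, and a plain haversine formula stands in for whatever geolocation enrichment the actual pipeline performs.

```python
import math
from collections import defaultdict

# Toy migration network: (origin city, destination city, number of doctorate
# holders who moved). All figures are illustrative, not from the Lattes data.
flows = [
    ("Belo Horizonte", "Sao Paulo", 12),
    ("Manaus", "Sao Paulo", 5),
    ("Belo Horizonte", "Rio de Janeiro", 7),
    ("Recife", "Rio de Janeiro", 3),
]

# Approximate city coordinates (latitude, longitude) for distance estimates.
coords = {
    "Belo Horizonte": (-19.92, -43.94),
    "Sao Paulo": (-23.55, -46.63),
    "Manaus": (-3.12, -60.02),
    "Rio de Janeiro": (-22.91, -43.17),
    "Recife": (-8.05, -34.88),
}

def haversine_km(a, b):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

# Weighted in-degree: how many doctorate holders each city attracts, plus the
# total distance covered across all moves.
in_degree = defaultdict(int)
total_km = 0.0
for origin, dest, weight in flows:
    in_degree[dest] += weight
    total_km += weight * haversine_km(coords[origin], coords[dest])

hub = max(in_degree, key=in_degree.get)
print(hub, in_degree[hub], round(total_km))
```

A network metric as simple as weighted in-degree already surfaces the concentration effect the abstract reports: destination cities with many postgraduate programs dominate the flows.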


2021 ◽  
Vol 11 (5) ◽  
pp. 2232
Author(s):  
Francesca Noardo ◽  
Ken Arroyo Ohori ◽  
Thomas Krijnen ◽  
Jantien Stoter

Industry Foundation Classes (IFC) is a complete, wide-ranging and complex open standard data model for representing Building Information Models. The standardization organization buildingSMART puts considerable effort into developing and maintaining this standard in collaboration with researchers, companies and institutions. However, when trying to use IFC models from practice for automatic analysis, issues emerge as a consequence of a misalignment between what the standard prescribes or makes available and the data sets produced in practice. In this study, a sample of models produced by practitioners for purposes other than their explicit use within automatic processing tools is inspected and analyzed. The aim is to find common patterns in data sets from practice and their possible discrepancies with the standard, in order to find ways to address such discrepancies in a subsequent step. In particular, it is noticeable that the overall quality of the models requires specific additional care by the modellers before relying on them for automatic analysis, and that a high level of variability is present in how some relevant information (such as georeferencing) is stored.



Author(s):  
Maarten Trekels ◽  
Matt Woodburn ◽  
Deborah L Paul ◽  
Sharon Grant ◽  
Kate Webbink ◽  
...  

Data standards allow us to aggregate, compare, compute and communicate data from a wide variety of origins. However, for historical reasons, data are most likely to be stored in many different formats and to conform to different models. Every data set might contain a huge amount of information, but it becomes tremendously difficult to compare data sets without a common way to represent the data. That is where standards development comes in. Developing a standard is a formidable process, often involving many stakeholders. Typically, the initial blueprint of a standard is created by a limited number of people who have a clear view of their use cases. However, as development continues, additional stakeholders participate in the process. As a result, conflicting opinions and interests influence the development of the standard. Compromises need to be made, and the standard might end up looking very different from the initial concept. In order to address the needs of the community, a high level of engagement in the development process is encouraged. However, this does not necessarily increase the usability of the standard. To mitigate this, the standard needs to be tested during the early stages of development. To facilitate this, we explored the use of Wikibase to create an initial implementation of the standard. Wikibase is the underlying technology that drives Wikidata. The software is open source and can be customized for creating collaborative knowledge bases. In addition to containing an RDF (Resource Description Framework) triple store under the hood, it provides users with an easy-to-use graphical user interface (see Fig. 1). This makes an implementation of a standard usable by non-technical users. Wikibase remains fully flexible in the way data are represented, and no data model is enforced. This allows users to map their data onto the standard without any restrictions.
Retrieving information from RDF data can be done through the SPARQL query language (W3C 2020). The software package also has a built-in SPARQL endpoint, allowing users to extract the relevant information: Does the standard cover all envisioned use cases? Are parts of the standard underdeveloped? Are the controlled vocabularies sufficient to describe the data? This strategy was applied during the development of the TDWG Collection Description standard. After completing a rough version of the standard, the terms defined in that first version were transferred to a Wikibase instance running on WBStack (Addshore 2020). Initially, collection data were entered manually, which revealed several issues. The Wikibase allowed us to easily define controlled vocabularies and expand them as needed. The feedback reported by users then flowed back into the further development of the standard. Currently, we envisage creating automated scripts that will import data en masse from collections. Using the SPARQL query interface, it will then be straightforward to ensure that data can be extracted from the Wikibase to support the envisaged use cases.
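As a rough sketch, assembling a SPARQL request against such a Wikibase endpoint might look like the following. The endpoint URL and the property identifier `P7` are hypothetical placeholders, not the actual TDWG instance's identifiers:

```python
from urllib.parse import urlencode

# Hypothetical coverage check: does every collection item record a size?
# "wdt:P7" stands in for whatever property the Wikibase instance defines.
query = """
SELECT ?collection ?size WHERE {
  ?collection wdt:P7 ?size .
}
LIMIT 10
"""

# Placeholder endpoint; a real WBStack instance exposes its own SPARQL URL.
endpoint = "https://example-wikibase.wbstack.com/query/sparql"
request_url = endpoint + "?" + urlencode({"query": query, "format": "json"})

# The URL can then be fetched with urllib.request; empty or missing bindings
# in the result point at parts of the standard the test data does not cover.
print(request_url[:60])
```

Queries like this make the usability questions above empirically checkable instead of a matter of opinion.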



2020 ◽  
Vol 7 (1) ◽  
pp. 41-49
Author(s):  
Ajat Sudrajat

Patient satisfaction is a benchmark by which patients judge the health care they receive at a hospital. Each hospital must pursue a variety of strategies so that patients feel satisfied with its health services; one such strategy works through corporate image and trust, where a good corporate image can increase trust and thereby affect patient satisfaction at Mitra Medika Narom Hospital, Kabupaten Bekasi. This research was conducted with descriptive and verification methods, namely knowing, analyzing, explaining and testing hypotheses, and drawing conclusions and suggestions. The sample in this study amounted to 240 respondents, gathered using an explanatory survey method. The data analysis techniques used are ordinal scale techniques and path analysis, using the Method of Successive Interval (MSI), Microsoft Excel 2016 and SPSS 16. The results of this study reveal that the corporate image of Mitra Medika Narom Hospital in Kabupaten Bekasi falls within the "agree" criteria, meaning that the hospital has built a good corporate image and is therefore better known to the community. Furthermore, trust in Mitra Medika Narom Hospital is also within the "agree" criteria, meaning that the hospital has succeeded in building good and optimal trust, so that patients trust it to provide health services. Patient satisfaction at the hospital likewise falls within the "agree" criteria, meaning that the patients surveyed feel a high level of satisfaction after completing treatment there. There is a positive, strong and two-way correlation between the corporate image and trust variables of 0.646. Corporate image has a significant partial influence on patient satisfaction of 11.98%, and trust has a significant partial influence on patient satisfaction of 25.08%. Simultaneously, corporate image and trust have a positive and significant influence on patient satisfaction of 37.06%, while the remaining 62.94% is contributed by other variables not examined.



2020 ◽  

BACKGROUND: This paper deals with the territorial distribution of alcohol and drug addiction mortality at the level of the districts of the Slovak Republic. AIM: The aim of the paper is to explore the relations within the administrative territorial division of the Slovak Republic, that is, between the individual districts, and hence to reveal possibly hidden relations in alcohol and drug mortality. METHODS: The analysis is split into two parts, one for females and one for males. The standardised mortality rate is computed through a sequence of mathematical relations. The Euclidean distance is employed to compute the similarity between each pair of districts in the data set. A cluster analysis is then performed, with clusters created by means of the mutual distances between the districts. The data are collected from the database of the Statistical Office of the Slovak Republic for all districts of the Slovak Republic, covering the time span from 1996 to 2015. RESULTS: The most substantial finding is that the Slovak Republic exhibits considerable regional disparities in mortality, as expressed by the standardised mortality rate computed for the diagnoses assigned to alcohol and drug addictions. However, the outcomes differ between the sexes. The Bratislava III District holds by far the most extreme position and forms its own cluster for both sexes. The Topoľčany District holds a similarly extreme position for males. All the Bratislava districts are notably dissimilar from one another. Conversely, the development of regional disparities among the districts appears notably heterogeneous. CONCLUSIONS: There are considerable regional discrepancies across the districts of the Slovak Republic. Hence, it is necessary to create a common platform for addressing this issue.
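The pairwise-distance and clustering machinery the abstract describes can be sketched with toy numbers. The district names and rates below are invented; the study itself uses standardised mortality rates for all Slovak districts over 1996 to 2015, computed separately by sex:

```python
import math

# Illustrative standardised mortality rates (per 100,000) over four periods
# for four hypothetical districts.
rates = {
    "District A": [4.1, 4.3, 4.0, 4.2],
    "District B": [4.0, 4.2, 4.1, 4.3],
    "District C": [9.5, 9.8, 10.1, 9.9],
    "District D": [4.2, 4.1, 4.2, 4.0],
}

def euclidean(u, v):
    """Euclidean distance between two rate vectors of equal length."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# Pairwise distance matrix between districts.
names = list(rates)
dist = {(a, b): euclidean(rates[a], rates[b]) for a in names for b in names}

# Naive single-linkage grouping: a district joins a cluster if it lies within
# the threshold of any current member; otherwise it starts its own cluster.
threshold = 1.0
clusters = []
for name in names:
    for cluster in clusters:
        if any(dist[(name, member)] <= threshold for member in cluster):
            cluster.append(name)
            break
    else:
        clusters.append([name])

print(clusters)  # the outlier district forms its own cluster, as
                 # Bratislava III does in the study
```

Districts with similar mortality trajectories collapse into one cluster, while an extreme district is isolated, which is exactly the pattern the results report.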



Author(s):  
Gabriele Pieke

Art history has its own demands for recording visual representations. Objectivity and authenticity are the twin pillars of recording artistic data. As such, techniques relevant to epigraphic study, such as making line drawings, may not always be the best approach to an art historical study, which addresses, for example, questions about natural context and materiality of the artwork, the semantic, syntactic, and chronological relation between image and text, work procedures, work zones, and workshop traditions, and interactions with formal structures and beholders. Issues critical to collecting data for an art historical analysis include recording all relevant information without overcrowding the data set, creating neutral (i.e., not subjective) photographic images, collecting accurate color data, and, most critically, firsthand empirical study of the original artwork. A call for greater communication in Egyptology between epigraphy/palaeography and art history is reinforced by drawing attention to images as tools of communication and the close connection between the written word and figural art in ancient Egypt.



2021 ◽  
pp. 016555152110184
Author(s):  
Gunjan Chandwani ◽  
Anil Ahlawat ◽  
Gaurav Dubey

Document retrieval plays an important role in knowledge management, as it enables us to discover relevant information in existing data. This article proposes a cluster-based inverted indexing algorithm for document retrieval. First, pre-processing removes unnecessary and redundant words from the documents. Then, the documents are indexed by the cluster-based inverted indexing algorithm, which is developed by integrating the piecewise fuzzy C-means (piFCM) clustering algorithm with inverted indexing. After the documents are indexed, query matching is performed for user queries using the Bhattacharyya distance. Finally, query optimisation is done with the Pearson correlation coefficient, and the relevant documents are retrieved. The performance of the proposed algorithm is analysed on the WebKB and Twenty Newsgroups data sets. The analysis shows that the proposed algorithm offers high performance, with a precision of 1, recall of 0.70 and F-measure of 0.8235. The proposed document retrieval system retrieves the most relevant documents and speeds up the storage and retrieval of information.
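The query-matching step can be illustrated with the Bhattacharyya distance over term distributions. This is a minimal sketch with invented distributions, not the paper's pipeline, which applies the distance only after piFCM clustering and inverted indexing:

```python
import math

def bhattacharyya_distance(p, q):
    """Distance between two discrete probability distributions given as
    {term: probability} dicts; 0 for identical, infinity for disjoint."""
    terms = set(p) | set(q)
    bc = sum(math.sqrt(p.get(t, 0.0) * q.get(t, 0.0)) for t in terms)
    return -math.log(bc) if bc > 0 else float("inf")

# Toy term distributions for a query and two candidate documents.
query = {"retrieval": 0.5, "document": 0.5}
doc_a = {"retrieval": 0.4, "document": 0.4, "cluster": 0.2}
doc_b = {"network": 0.6, "protocol": 0.4}

# Lower distance means a better match; doc_a shares terms with the query,
# doc_b shares none.
print(bhattacharyya_distance(query, doc_a) < bhattacharyya_distance(query, doc_b))
```

Ranking candidate documents by this distance gives the relevance ordering that the retrieval step then returns.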



2020 ◽  
Vol 22 (Supplement_3) ◽  
pp. iii464-iii464
Author(s):  
Dharmendra Ganesan ◽  
Nor Faizal Ahmad Bahuri ◽  
Revathi Rajagopal ◽  
Jasmine Loh PY ◽  
Kein Seong Mun ◽  
...  

Abstract The University of Malaya Medical Centre, Kuala Lumpur acquired an intraoperative MRI (iMRI) brain suite via a public-private initiative in September 2015. The brain suite has a SIEMENS 1.5T system with a NORAS coil system and NORAS head clamps in a two-room solution. We retrospectively review the cranial paediatric neuro-oncology cases that had surgery in this facility from September 2015 to December 2019, and discuss our experience with regard to the clear benefits and the challenges of using such technology to aid surgery. The challenges include the physical setup of the paediatric case preoperatively, the preparation and performance of the intraoperative scan, the interpretation of intraoperative images and the resulting decisions, and the use of the new MRI data set to assist in navigating to the residual tumour safely. We also discuss the utility of the intraoperative images in decisions about subsequent adjuvant management. The use of iMRI has further technical challenges, such as ensuring the perimeter around the patient is free of ferromagnetic material and managing the transfer of the patient to the scanner, with a consequent increase in the duration of surgery. CONCLUSION: Many elements in the use of iMRI have a learning curve, and performance improves with exposure and experience. In some areas, a high level of vigilance and standard operating procedures (SOPs) are required to minimize mishaps. Currently, iMRI provides the best means of determining the extent of resection before concluding the surgery.





Author(s):  
V.T Priyanga ◽  
J.P Sanjanasri ◽  
Vijay Krishna Menon ◽  
E.A Gopalakrishnan ◽  
K.P Soman

The widespread use of social media like Facebook, Twitter, WhatsApp, etc. has changed the way news is created and published; accessing news has become easy and inexpensive. However, the scale of usage and the inability to moderate content have made social media a breeding ground for the circulation of fake news. Fake news is deliberately created either to increase readership or to disrupt social order for political and commercial benefit. It is of paramount importance to identify and filter out fake news, especially in democratic societies. Most existing methods for detecting fake news involve traditional supervised machine learning, which has been quite ineffective. In this paper, we analyze word embedding features that can tell fake news apart from true news. We use the LIAR and ISOT data sets. We extract highly correlated news data from the entire data set using cosine similarity and other such metrics, in order to distinguish their domains based on central topics. We then employ auto-encoders to detect and differentiate between true and fake news, while also exploring their separability through network analysis.



2021 ◽  
Vol 11 (5) ◽  
pp. 2166
Author(s):  
Van Bui ◽  
Tung Lam Pham ◽  
Huy Nguyen ◽  
Yeong Min Jang

In the last decade, predictive maintenance has attracted a lot of attention in industrial factories because of the wide use of the Internet of Things and artificial intelligence algorithms for data management. However, in the early phases, when abnormal and faulty machines rarely appear in factories, only limited sets of machine fault samples are available. With limited fault samples, it is difficult to train a fault classifier due to the imbalance of the input data, so data augmentation is required to increase the accuracy of the learning model. Yet there have been few methods for generating and evaluating the data used in such analysis. In this paper, we introduce a method that uses a generative adversarial network (GAN) for fault signal augmentation to enrich the dataset. The enhanced data set can increase the accuracy of the machine fault detection model during training. We also performed fault detection using a variety of preprocessing approaches and classification models to evaluate the similarity between the generated data and authentic data. The generated fault data have high similarity to the original data and significantly improve the accuracy of the model. The accuracy of fault machine detection reaches 99.41% with 20% of the original fault machine data set and 93.1% with 0% of the original fault machine data set (using generated data only). Based on this, we conclude that the generated data can be mixed with original data to improve model performance.
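The adversarial training loop at the heart of such augmentation can be sketched in a heavily simplified form. The one-dimensional toy below, with single linear units for both networks and hand-written gradients, only illustrates the generator/discriminator updates; the paper's networks, fault signals, and training procedure are far richer:

```python
import math
import random

random.seed(0)

# Stand-in "fault signal" values; the real data are multi-dimensional.
real = [random.gauss(5.0, 0.5) for _ in range(200)]

a, b = 1.0, 0.0   # generator:     G(z) = a*z + b
w, c = 0.1, 0.0   # discriminator: D(x) = sigmoid(w*x + c)
lr = 0.01

def sigmoid(s):
    if s < -60.0:  # clamp to avoid math.exp overflow
        return 0.0
    return 1.0 / (1.0 + math.exp(-s))

for step in range(2000):
    x = random.choice(real)
    z = random.gauss(0.0, 1.0)
    g = a * z + b                        # fake sample

    # Discriminator step: push D(real) up and D(fake) down.
    ds_real = sigmoid(w * x + c) - 1.0   # d(-log D(x))/ds
    ds_fake = sigmoid(w * g + c)         # d(-log(1 - D(g)))/ds
    w -= lr * (ds_real * x + ds_fake * g)
    c -= lr * (ds_real + ds_fake)

    # Generator step (non-saturating loss): push D(fake) up.
    ds = sigmoid(w * g + c) - 1.0        # d(-log D(g))/ds
    a -= lr * ds * w * z
    b -= lr * ds * w

generated = [a * random.gauss(0.0, 1.0) + b for _ in range(200)]
# Mean of the generated samples; with enough training it drifts toward
# the mean of the real data, and the samples can then be mixed in.
print(round(sum(generated) / len(generated), 2))
```

Mixing `generated` with `real` is the augmentation step; the paper then evaluates the mix by training fault classifiers on it and comparing accuracy.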


