Encyclopedia with Semantic Computing and Robotic Intelligence
Latest Publications


Total documents: 50 (five years: 2)
H-index: 3 (five years: 1)

Published by World Scientific
ISSN: 2529-7392, 2529-7376

Author(s): Cristiane C. Gattaz, Roberto C. Bernardes, Paulo E. Cruvinel

This paper proposes a new methodology, based on action research, for implementing a business, system and technology model that assists and facilitates the collaborative use of resources and expertise, and that adjusts a task force on the basis of knowledge sharing and management. A case study illustrates the results of implementing the Digital Knowledge Ecosystem framework in a research and development (R&D) network for the aerial application of pesticides in pest control, using the action research approach. Results include self-management, open innovation and self-organization among the institutionally linked groups, as well as the adoption of a new collaboration tool, which can improve competitiveness. The relevance of the approach lies in its ability to capture sharing dynamics, to process and propagate information within the networks, to enable cooperation between organizations, and to measure collective intelligence and learning, while promoting conditions for survival such as minimal interaction rules, individual autonomy and flexibility in organizational structure. This arrangement allowed nonlinear methods to replace attempts at objectivity, linear thinking and control, and informed the treatment of risk in social computing systems. The conclusions point to the opportunity of applying the model to other sectors related to agriculture and innovation, and to the open challenge of defining managerial indicators for the future command and control of existing R&D network knowledge management operations.


Author(s): Israel José dos Santos Felipe, Wesley Mendes-Da-Silva, Cristiane Chaves Gattaz

This paper proposes a current research agenda on crowdfunding from two different perspectives: mass media and geography. These two elements are believed to exert some influence on the dynamics of investment in that market. Semantic analysis of mass-media news can be a useful tool for investors to assess their exposure to risk, and can help predict financial returns. Geography, in turn, can be applied to the origin of capital contributions and can therefore provide information on the location and regional characteristics of investors.


2018, Vol. 02 (02), pp. 1850019
Author(s): Sukanya Manna

Microblogging platforms such as Twitter have in recent years become an important source of information for a wide spectrum of users. As a result, these platforms have become valuable resources for supporting emergency management. During a crisis, a huge volume of social media text must be sieved within a short span of time to extract meaningful information. Extracting emergency-specific information from these texts, such as topic keywords, landmarks or the geo-locations of sites, plays a significant role in building applications for emergency management. This paper therefore highlights different aspects of the automatic analysis of tweets that help in developing such an application. It focuses on: (1) identifying crisis-related tweets using machine learning; (2) exploring topic model implementations and their effectiveness on short messages (as short as 140 characters), performing an exploratory data analysis on crisis-related short texts collected from Twitter, and using different visualizations to understand the commonalities and differences between topics and between different crisis-related datasets; and (3) providing a proof of concept for identifying and retrieving geo-locations from tweets and extracting GPS coordinates to plot them approximately on a map.
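The first step above, identifying crisis-related tweets with machine learning, can be sketched with a toy bag-of-words naive Bayes classifier; the training tweets, labels and tokenizer below are invented for illustration and are not from the paper:

```python
from collections import Counter
import math

def tokenize(text):
    # Strip common tweet punctuation and hashtag/mention markers, lowercase.
    return [w.strip("#@.,!?").lower() for w in text.split() if w.strip("#@.,!?")]

class NaiveBayes:
    def __init__(self):
        self.word_counts = {}        # label -> Counter of word frequencies
        self.doc_counts = Counter()  # label -> number of training documents
        self.vocab = set()

    def fit(self, docs, labels):
        for text, label in zip(docs, labels):
            self.doc_counts[label] += 1
            counts = self.word_counts.setdefault(label, Counter())
            for w in tokenize(text):
                counts[w] += 1
                self.vocab.add(w)

    def predict(self, text):
        best, best_lp = None, float("-inf")
        total = sum(self.doc_counts.values())
        for label, counts in self.word_counts.items():
            lp = math.log(self.doc_counts[label] / total)  # class prior
            n = sum(counts.values())
            for w in tokenize(text):
                # Laplace smoothing so unseen words do not zero out the score.
                lp += math.log((counts[w] + 1) / (n + len(self.vocab)))
            if lp > best_lp:
                best, best_lp = label, lp
        return best

train = [
    ("flood water rising downtown need rescue", "crisis"),
    ("earthquake shook the city buildings damaged", "crisis"),
    ("great coffee this morning at the cafe", "other"),
    ("watching the game with friends tonight", "other"),
]
clf = NaiveBayes()
clf.fit([t for t, _ in train], [l for _, l in train])
print(clf.predict("flood damaged buildings downtown"))  # classified as "crisis"
```

In practice the paper's pipeline would use a far larger labeled corpus and a stronger model; the sketch only shows the shape of the classification step.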


2018, Vol. 02 (02), pp. 1850015
Author(s): Joseph R. Barr, Joseph Cavanaugh

It is not unusual for the effort to validate a statistical model to exceed the effort used to build it. Multiple techniques are used to validate, compare and contrast competing statistical models: some are concerned with a model's ability to predict new data, while others are concerned with how well the model describes the data. Without claiming to provide a comprehensive view of the landscape, in this paper we touch on both aspects of model validation. There is much more to the subject, and the reader is referred to any of the many classical statistical texts, including the revised two volumes by Bickel and Doksum (2016), the one by Hastie, Tibshirani and Friedman [The Elements of Statistical Learning: Data Mining, Inference, and Prediction, 2nd edn. (Springer, 2009)], and several others listed in the bibliography.
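The two aspects the abstract distinguishes can be illustrated on a toy simple linear regression: a predictive check scores the fitted model on held-out data, while a descriptive check (here R^2) scores it on the data used for fitting. The data-generating process and split below are invented for illustration:

```python
import random

def fit(xs, ys):
    """Ordinary least squares for y = a + b*x (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def mse(xs, ys, a, b):
    return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys)) / len(xs)

random.seed(0)
xs = [i / 10 for i in range(100)]
ys = [2 + 3 * x + random.gauss(0, 0.5) for x in xs]  # true model plus noise

# Predictive validation: fit on a training split, score on held-out data.
a, b = fit(xs[:70], ys[:70])
holdout_mse = mse(xs[70:], ys[70:], a, b)

# Descriptive validation: R^2 on the data the model was fitted to.
my = sum(ys[:70]) / 70
ss_tot = sum((y - my) ** 2 for y in ys[:70])
r2 = 1 - mse(xs[:70], ys[:70], a, b) * 70 / ss_tot

print(round(holdout_mse, 3), round(r2, 3))
```

A model can score well descriptively (high in-sample R^2) yet predict new data poorly, which is why the paper treats the two kinds of validation separately.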


2018, Vol. 02 (02), pp. 1850017
Author(s): Mira Kim, Masahiro Hayakawa

A graph is a widely used scheme for representing complex datasets as a graphical structure comprised of nodes and edges. Diffusion is a paradigm for propagating or transmitting substances or pieces of knowledge from node to node. Diffusion analysis takes a slightly different approach from reachability-based graph analysis: it treats the phenomenon as a diffusion problem. In this paper, we present a technical survey of the literature on diffusion analysis.
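One common diffusion model (a discrete heat-diffusion iteration, chosen here for illustration; the survey covers many variants) repeatedly moves each node's value toward its neighbours' values. The three-node path graph below is a toy example:

```python
def diffuse(adj, values, alpha=0.2, steps=50):
    """Discrete diffusion: each node moves toward its neighbours' values.

    adj    -- adjacency dict, node -> list of neighbour nodes
    values -- dict of initial quantities at each node
    alpha  -- diffusion rate (must satisfy alpha * max_degree < 1 for stability)
    """
    for _ in range(steps):
        nxt = {}
        for node, nbrs in adj.items():
            nxt[node] = values[node] + alpha * sum(values[m] - values[node] for m in nbrs)
        values = nxt
    return values

# Path graph a - b - c: a unit of "heat" injected at node 'a'
# spreads until it is nearly uniform across the graph.
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
out = diffuse(adj, {"a": 1.0, "b": 0.0, "c": 0.0})
print({k: round(v, 3) for k, v in out.items()})
```

On an undirected graph this update conserves the total quantity, so the values converge to the uniform distribution 1/3 per node.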


2018, Vol. 02 (02), pp. 1850013
Author(s): Charles C. N. Wang, Yun-Lung Chung, I-Seng Chang, Jeffrey J. P. Tsai

There have been an enormous number of publications on cancer research. These unstructured cancer-related articles are of great value for cancer diagnostics, treatment and prevention. The aim of this study is to introduce a recommendation system that combines text mining (latent Dirichlet allocation, LDA) and semantic computing (GloVe word embeddings) to understand the meaning of user needs and to increase recommendation accuracy.
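The combination step can be sketched as blending two cosine similarities, one over topic distributions (as an LDA model would produce) and one over document embeddings (as averaged GloVe vectors would produce). The vectors, articles and blending weight below are toy stand-ins, not the study's actual model:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Toy stand-ins: "topic" would come from LDA, "embed" from GloVe.
articles = {
    "A": {"topic": [0.9, 0.1], "embed": [0.8, 0.1, 0.1]},
    "B": {"topic": [0.2, 0.8], "embed": [0.1, 0.9, 0.2]},
}
query = {"topic": [0.85, 0.15], "embed": [0.7, 0.2, 0.1]}

def score(q, art, w=0.5):
    # Blend topic-level and embedding-level similarity with weight w.
    return w * cosine(q["topic"], art["topic"]) + (1 - w) * cosine(q["embed"], art["embed"])

ranked = sorted(articles, key=lambda k: score(query, articles[k]), reverse=True)
print(ranked)  # article A is most similar to this query on both signals
```

The weight w is a free parameter; tuning it trades off topical match against finer-grained semantic similarity.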


2018, Vol. 02 (02), pp. 1750001
Author(s): Mira Kim, David Ostrowski

An ontology defines a common vocabulary across diverse platforms, supporting machine-interpretable semantics of domain concepts along with their associated relations. Ontologies support the definition of classes, which define the concepts in a domain and exist as the central focus. Proprietary ontologies allow internally developed machine-usable content to be leveraged for greater purposes. In this context, industrial ontologies can be considered first and foremost an integration technology, supporting connections between any number of disparate data sources. By this application, it is possible to leverage proprietary and public ontologies together, thus supporting a federated ontology. This paper explores the current methods and technologies for federated ontologies. To this end, we summarize the current state of publicly available ontologies, examining how they are currently utilized, their application challenges and a realistic assessment of their potential.
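The federation idea can be sketched in miniature: a proprietary class hierarchy, a public one, and an alignment table linking a proprietary term to its public counterpart, so that queries can walk across both. The class names and the dict-based representation below are invented for illustration; real federated ontologies use OWL/RDF tooling rather than dicts:

```python
# Toy "ontologies" as class -> parent maps, plus an alignment table.
proprietary = {"AssemblyRobot": "Machine", "Machine": "Asset"}
public = {"Robot": "Device", "Device": "Thing"}
alignment = {"AssemblyRobot": "Robot"}  # proprietary term -> public term

def ancestors(onto, cls):
    """Walk parent links upward within a single ontology."""
    chain = []
    while cls in onto:
        cls = onto[cls]
        chain.append(cls)
    return chain

def federated_ancestors(cls):
    # Walk the proprietary hierarchy, then hop across the alignment
    # into the public hierarchy and keep walking.
    chain = ancestors(proprietary, cls)
    if cls in alignment:
        chain += [alignment[cls]] + ancestors(public, alignment[cls])
    return chain

print(federated_ancestors("AssemblyRobot"))
```

The alignment table is where the integration work lives: building and maintaining such mappings between proprietary and public vocabularies is the challenge the paper examines.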


2018, Vol. 02 (02), pp. 1730001
Author(s): Shirley Y. Coleman, Ron S. Kenett

Designing a new analytics program requires not only identifying the needed courses, but also tying the courses together into a cohesive curriculum with an overriding theme. Such a theme helps determine the proper sequencing of courses and creates coherent links between courses often taught by faculty from different domains. It is common to see a program in which some courses are taught by computer science faculty, others by faculty and staff from the statistics department, and others from operations research, economics, information systems, marketing or other disciplines. Applying an overriding theme not only helps students organize their learning and course planning, but also helps the teaching faculty design their materials and choose terminology. The InfoQ framework introduced by Kenett and Shmueli provides a theme that focuses the attention of faculty and students on the important question of the value of data and its analysis, with flexibility that accommodates a wide range of data analysis topics. In this chapter, we review a number of programs focused on analytics and data science content from an InfoQ perspective. Our goal is to show, with examples, how the InfoQ dimensions are addressed in existing programs and to help identify best practices for designing and improving such programs. We base our assessment on information derived from each program's web site.


2018, Vol. 02 (02), pp. 1850014
Author(s): Joseph R. Barr, Shelemyahu Zacks

Goodness-of-fit measures are used to evaluate a model; they are commonly used to compare competing models. The material is mostly classic. For more on the subject, the reader is referred to the references, including the two revised volumes by Bickel and Doksum (2016).
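A classic example of such a measure is Pearson's chi-square goodness-of-fit statistic, shown here for a fair six-sided die; the observed counts are invented for illustration:

```python
# Pearson chi-square goodness-of-fit test for a fair six-sided die.
observed = [18, 22, 16, 25, 19, 20]   # hypothetical counts from 120 rolls
n = sum(observed)
expected = n / 6                      # fair die: equal expected count per face

chi2 = sum((o - expected) ** 2 / expected for o in observed)

# Compare against the 5% critical value of chi-square with 5 degrees
# of freedom (about 11.07): below it, we do not reject fairness.
print(round(chi2, 2), chi2 < 11.07)
```

The same statistic, with expected counts taken from any hypothesized distribution, supports the model-comparison use the abstract describes.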


2018, Vol. 02 (02), pp. 1850016
Author(s): Jennifer Jin, Masahiro Hayakawa

Online Analytical Processing (OLAP) is an effective approach to analyzing various complex business problems, and a graph is a common scheme for representing business datasets. Network analysis is a broad analytics scheme for exploring connectivity and deriving useful analytics results. Network analysis for graph-based OLAP, however, offers a set of more specific analytics methods by combining the graph model, network properties and OLAP principles. In this paper, we present a comprehensive survey of network analysis conducted on graph models for the purpose of OLAP, and we summarize the current research focus, the paradigms and the future needs of the target technology.
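A characteristic graph-OLAP operation is a roll-up: collapsing nodes along a dimension (here, a node attribute) and aggregating the edges between the resulting groups. The user graph and the region attribute below are toy data, not from the survey:

```python
from collections import Counter

# Toy property graph: edges between users, each user tagged with a region.
region = {"u1": "EU", "u2": "EU", "u3": "US", "u4": "US"}
edges = [("u1", "u2"), ("u1", "u3"), ("u2", "u4"), ("u3", "u4")]

# OLAP-style roll-up: collapse nodes by region and count the edges
# between (and within) the aggregated region groups.
rollup = Counter()
for a, b in edges:
    key = tuple(sorted((region[a], region[b])))  # undirected group pair
    rollup[key] += 1

print(dict(rollup))
```

Drilling back down simply means returning to the original node-level graph; richer graph-OLAP systems support the same roll-up along multiple dimensions at once.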

