Determining institutional discrepancies for ERM software applications

Author(s):  
Özlem (Gökkurt) Bayram ◽  
Fahrettin Özdemirci ◽  
M. Taylan Güvercin
2019 ◽  
Vol 54 (6) ◽  
Author(s):  
Sawsan Ali Hamid ◽  
Rana Alauldeen Abdalrahman ◽  
Inam Abdullah Lafta ◽  
Israa Al Barazanchi

Recently, web services have presented a new and evolving model for constructing distributed systems. The meteoric growth of the Web over the last few years proves the efficacy of using simple protocols over the Internet as the basis for a large number of web services and applications. A web service is a modern web technology that can be defined as a software application with a programmatic interface based on Internet protocols. Web services became common in web applications with the help of Universal Description, Discovery and Integration (UDDI); the Web Services Description Language (WSDL); and the Simple Object Access Protocol (SOAP). The architecture of web services refers to a collection of conceptual components in which common sets of standards can be defined among interoperating components. Nevertheless, the existing web services architecture is not impervious to challenges such as security problems and quality of service. Against this backdrop, the present study provides an overview of these issues and proposes a web services architecture model to support distributed systems in terms of applications and issues.
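
The protocol stack named above can be made concrete with a small sketch. The following Python snippet, using only the standard library, builds a SOAP 1.1 envelope and posts it to a service endpoint; the endpoint URL, namespace, and GetQuote operation are hypothetical placeholders rather than anything described in the abstract.

```python
# A minimal sketch of invoking a SOAP-based web service over HTTP.
# The endpoint URL, namespace, and operation name (GetQuote) are
# hypothetical placeholders, not taken from the study above.
import urllib.request
import xml.etree.ElementTree as ET

SOAP_ENV = "http://schemas.xmlsoap.org/soap/envelope/"
SERVICE_NS = "http://example.org/stock"          # assumed service namespace

def build_envelope(symbol: str) -> bytes:
    """Wrap a single operation call in a SOAP 1.1 envelope."""
    ET.register_namespace("soap", SOAP_ENV)
    envelope = ET.Element(f"{{{SOAP_ENV}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_ENV}}}Body")
    op = ET.SubElement(body, f"{{{SERVICE_NS}}}GetQuote")
    ET.SubElement(op, f"{{{SERVICE_NS}}}Symbol").text = symbol
    return ET.tostring(envelope, xml_declaration=True, encoding="utf-8")

def call_service(url: str, symbol: str) -> str:
    """POST the envelope to the service endpoint and return the raw response."""
    request = urllib.request.Request(
        url,
        data=build_envelope(symbol),
        headers={"Content-Type": "text/xml; charset=utf-8",
                 "SOAPAction": f"{SERVICE_NS}/GetQuote"},
    )
    with urllib.request.urlopen(request) as response:
        return response.read().decode("utf-8")

if __name__ == "__main__":
    print(build_envelope("XYZ").decode("utf-8"))   # inspect the raw request
```

In a real deployment, the service's WSDL document would describe the operation names and message formats, and a UDDI registry would let clients discover the endpoint.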


2007 ◽  
Vol 2 (1) ◽  
pp. 33-48
Author(s):  
Graciela Brusa ◽  
María Laura Caliusco ◽  
Omar Chiotti

Nowadays, organizational innovation constitutes a central government challenge in providing better and more efficient services to citizens, enterprises, and other public offices. E-government seems to be an excellent opportunity to work in this direction. The applications that support front-end services delivered to users have to access the information systems of multiple government areas. This is a significant problem for the e-government back-office, since multiple platforms and technologies coexist. Moreover, in the back-office there is a great volume of data that is implicit in the software applications that support administration activities. In this context, the main requirement is to make the data managed in the back-office available to e-government users in a fast and precise way, without misunderstanding. To this aim, it is necessary to provide an infrastructure that makes explicit the knowledge stored in different government areas and delivers this knowledge to the users. This paper presents an approach to applying ontological engineering techniques to solve the problems of content discovery, aggregation, and sharing in the e-government back-office. The approach consists of a specific process for developing an ontology in the public sector and an ontology-based architecture. In order to present the characteristics of the process, a case study applied to a local government domain is analyzed: the budget and financial information of Santa Fe Province (Argentina).
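
As a rough illustration of the ontology-based approach (not the paper's actual Santa Fe ontology), the sketch below uses the third-party rdflib package to make a toy piece of back-office budget knowledge explicit and queryable via SPARQL; the class and property names (BudgetItem, allocatedTo, amount) are invented for this example.

```python
# A toy sketch showing how an ontology could make back-office budget data
# explicit and queryable. Vocabulary is invented; requires rdflib.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS, XSD

GOV = Namespace("http://example.org/egov#")   # assumed namespace
g = Graph()
g.bind("gov", GOV)

# Minimal schema: a budget item is allocated to a government area.
g.add((GOV.BudgetItem, RDF.type, RDFS.Class))
g.add((GOV.GovernmentArea, RDF.type, RDFS.Class))

# Instance data that would normally be implicit in back-office applications.
g.add((GOV.Health, RDF.type, GOV.GovernmentArea))
g.add((GOV.item001, RDF.type, GOV.BudgetItem))
g.add((GOV.item001, GOV.allocatedTo, GOV.Health))
g.add((GOV.item001, GOV.amount, Literal(125000, datatype=XSD.integer)))

# A front-end service can now discover budget content without knowing
# which legacy system originally stored it.
query = """
PREFIX gov: <http://example.org/egov#>
SELECT ?item ?amount WHERE {
    ?item a gov:BudgetItem ;
          gov:allocatedTo gov:Health ;
          gov:amount ?amount .
}
"""
for row in g.query(query):
    print(row.item, row.amount)
```

The point of such a design is that a front-end e-government service can query by meaning (budget items allocated to Health) without knowing which legacy back-office system originally held the records.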


2021 ◽  
Vol 10 (6) ◽  
pp. 234
Author(s):  
Ishmael Mugari ◽  
Emeka E. Obioha

There has been a significant focus on predictive policing systems, as law enforcement agents embrace modern technology to forecast criminal activity. Most developed nations have implemented predictive policing, albeit with mixed reactions over its effectiveness. Whilst at its inception predictive policing involved simple heuristics and algorithms, it has increased in sophistication in the ever-changing technological environment. This paper, which is based on a literature survey, examines predictive policing over the last decade (2010 to 2020). The paper examines how various nations have implemented predictive policing and also documents the impediments to it. The paper reveals that despite the adoption of predictive software applications such as PredPol, Risk Terrain Modelling, HunchLab, PreMap, PRECOBS, Crime Anticipation System, and Azavea, several impediments have militated against the effectiveness of predictive policing, including low predictive accuracy, the limited scope of crimes that can be predicted, the high cost of predictive policing software, flawed data input, and the biased nature of some predictive software applications. Despite these challenges, the paper reveals consensus among the majority of researchers on the importance of predictive algorithms to the policing landscape.


2021 ◽  
pp. 193229682098557
Author(s):  
Alysha M. De Livera ◽  
Jonathan E. Shaw ◽  
Neale Cohen ◽  
Anne Reutens ◽  
Agus Salim

Motivation: Continuous glucose monitoring (CGM) systems are an essential part of novel technology in diabetes management and care. CGM studies have become increasingly popular among researchers, healthcare professionals, and people with diabetes due to the large amount of useful information that can be collected using CGM systems. The analysis of the data from these studies for research purposes, however, remains a challenge due to the characteristics and large volume of the data. Results: Currently, there are no publicly available interactive software applications that can perform statistical analyses and visualization of data from CGM studies. With the rapidly increasing popularity of CGM studies, such an application is becoming necessary for anyone who works with these large CGM datasets, in particular those with little background in programming or statistics. CGMStatsAnalyser is a publicly available, user-friendly, web-based application that can be used to interactively visualize, summarize, and statistically analyze voluminous and complex CGM datasets together with subject characteristics.
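
To give a flavour of the summaries such a tool produces (this is an illustrative sketch, not code from CGMStatsAnalyser), the snippet below uses pandas to compute per-subject mean glucose and time in the commonly used 70-180 mg/dL target range; the column names are assumed.

```python
# Illustrative CGM summary: mean glucose, variability, and percentage of
# readings in the 70-180 mg/dL range, per subject. Requires pandas.
import pandas as pd

def summarise_cgm(df: pd.DataFrame, low: float = 70, high: float = 180) -> pd.DataFrame:
    """Summarise CGM readings per subject.

    Expects columns: 'subject_id', 'timestamp', 'glucose' (mg/dL).
    """
    in_range = df["glucose"].between(low, high)
    return (
        df.assign(in_range=in_range)
          .groupby("subject_id")
          .agg(mean_glucose=("glucose", "mean"),
               sd_glucose=("glucose", "std"),
               time_in_range_pct=("in_range", lambda s: 100 * s.mean()),
               n_readings=("glucose", "size"))
    )

if __name__ == "__main__":
    toy = pd.DataFrame({
        "subject_id": ["A", "A", "A", "B", "B"],
        "timestamp": pd.to_datetime(
            ["2021-01-01 08:00", "2021-01-01 08:05", "2021-01-01 08:10",
             "2021-01-01 08:00", "2021-01-01 08:05"]),
        "glucose": [95, 110, 190, 60, 150],
    })
    print(summarise_cgm(toy))
```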


2015 ◽  
Vol 64 (1/2) ◽  
pp. 82-100 ◽  
Author(s):  
Michael Calaresu ◽  
Ali Shiri

Purpose – The purpose of this article is to explore and conceptualize the Semantic Web as a term that has been widely mentioned in the literature of library and information science. More specifically, its aim is to shed light on the evolution of the Web and to highlight a previously proposed means of improving automated manipulation of Web-based data in the context of a rapidly expanding base of both users and digital content. Design/methodology/approach – The conceptual analysis presented in this paper adopts a three-dimensional model for the discussion of the Semantic Web. The first dimension focuses on the Semantic Web's basic nature, purpose, and history, as well as the current state and limitations of modern search systems and related software agents. The second dimension focuses on critical knowledge structures, such as taxonomies, thesauri, and ontologies, which are understood as fundamental elements in the creation of a Semantic Web architecture. In the third dimension, an alternative conceptual model is proposed, one which, unlike more commonly prevalent Semantic Web models, places greater emphasis on describing the proposed structure from an interpretive viewpoint rather than a technical one. The paper adopts an interpretive, historical, and conceptual approach to the notion of the Semantic Web by reviewing the literature and by analyzing the developments associated with the Web over the past three decades. It proposes a simplified conceptual model for easy understanding. Findings – The paper provides a conceptual model of the Semantic Web that encompasses four key strata: the body of human users, the body of software applications facilitating the creation and consumption of documents, the body of documents themselves, and a proposed layer that would improve automated manipulation of Web-based data by the software applications. Research limitations/implications – This paper will facilitate a better conceptual understanding of the Semantic Web and thereby contribute, in a small way, to the larger body of discourse surrounding it. The conceptual model will provide a reference point for education and research purposes. Originality/value – This paper provides an original analysis of both conceptual and technical aspects of the Semantic Web. The proposed conceptual model provides a new perspective on this subject.
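
The role of the proposed fourth stratum can be illustrated with a deliberately simple sketch (not drawn from the article itself): documents carry machine-readable subject annotations, and a software agent filters by those annotations, expanded through a small taxonomy, rather than by keyword matching. The vocabulary and taxonomy terms here are invented.

```python
# Minimal illustration of semantic annotations enabling automated
# manipulation of Web documents by software agents. All terms invented.
from dataclasses import dataclass, field

@dataclass
class Document:
    url: str
    text: str
    subjects: set[str] = field(default_factory=set)   # semantic annotations

# A tiny taxonomy: narrower term -> broader term.
TAXONOMY = {"jaguar_car": "automobile", "jaguar_cat": "animal"}

def agent_search(docs: list[Document], topic: str) -> list[str]:
    """Return documents whose annotations match the topic, directly or
    via the taxonomy, regardless of the words used in the text."""
    hits = []
    for doc in docs:
        expanded = doc.subjects | {TAXONOMY.get(s, s) for s in doc.subjects}
        if topic in expanded:
            hits.append(doc.url)
    return hits

docs = [
    Document("http://example.org/review", "The new Jaguar handles well.",
             {"jaguar_car"}),
    Document("http://example.org/wildlife", "Jaguars hunt at dusk.",
             {"jaguar_cat"}),
]
print(agent_search(docs, "automobile"))   # ['http://example.org/review']
```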


2012 ◽  
Vol 268-270 ◽  
pp. 916-920
Author(s):  
Zheng Shun Wang ◽  
Wen Jia Han

In this paper, the process in an electromagnetic drying cylinder is analyzed by creating a finite element model of the dryer using ANSYS. The analysis consists of three major components: conduction thermal analysis, applying the loads, and solving for and reviewing the results. First, a finite element model of the process is created in the ANSYS preprocessor, by building geometric models directly or importing them from other software applications, after which the material properties are added. The geometric model is then meshed, and the solution phase is entered with the appropriate solver chosen according to the actual situation: the thermal loads and boundary conditions are applied, and the finite element solution is run. Finally, the results are usually viewed in the postprocessor (POST1 or POST26) and interpreted on the basis of experience.
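
The same preprocess, load, solve, and postprocess workflow can be sketched generically (this is plain Python with NumPy, not ANSYS, and the wall geometry and material values are assumed for illustration): a one-dimensional steady-state conduction problem assembled with linear finite elements.

```python
# Generic sketch of the workflow described above: build a model, assign
# material properties, mesh, apply thermal loads/boundary conditions,
# solve, and inspect results. 1D steady-state conduction, linear elements.
import numpy as np

# --- Preprocessing: geometry, material, mesh (assumed values) ---
length = 0.05          # wall thickness [m]
k = 45.0               # thermal conductivity [W/(m K)], steel-like value
n_elem = 10
n_node = n_elem + 1
nodes = np.linspace(0.0, length, n_node)

# --- Assemble the global conductance matrix from element matrices ---
K = np.zeros((n_node, n_node))
for e in range(n_elem):
    le = nodes[e + 1] - nodes[e]
    ke = (k / le) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    K[e:e + 2, e:e + 2] += ke

# --- Loads and boundary conditions: prescribed face temperatures ---
F = np.zeros(n_node)
T_left, T_right = 150.0, 20.0          # prescribed temperatures [C]
for node, T_bc in ((0, T_left), (n_node - 1, T_right)):
    K[node, :] = 0.0
    K[node, node] = 1.0
    F[node] = T_bc

# --- Solution ---
T = np.linalg.solve(K, F)

# --- Postprocessing: temperature field and per-element heat flux ---
flux = -k * np.diff(T) / np.diff(nodes)    # q = -k dT/dx
for x, t in zip(nodes, T):
    print(f"x = {x:.4f} m   T = {t:6.2f} C")
print("element heat flux [W/m^2]:", flux.round(1))
```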

