Enterprise Mashups: A New Approach for Business Solutions

Lámpsakos ◽  
2012 ◽  
pp. 39 ◽  
Author(s):  
Mario Paredes-Valverde ◽  
Giner Alor-Hernández

A mashup is a Web application that integrates content from different providers in order to create a new service not offered by any single content provider. Developing this kind of application involves activities such as accessing heterogeneous sources, combining data from different data sources, and building graphical interfaces. These requirements limit the ability of non-expert computer users to develop such applications. However, there are now enterprise-oriented tools that allow non-expert users to build mashups in order to respond to business needs in an easy and rapid way. As a result, the enterprise mashup approach has been widely adopted by a large number of enterprises. This paper presents an overview of the enterprise mashup approach, as well as a review of four enterprise-oriented tools that provide a set of features allowing non-expert users to develop mashups within an enterprise. Finally, we present the challenges to be addressed by enterprise-oriented mashup tools in order to provide an easier and faster way of developing mashups.
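To make the data-combination step concrete, the following minimal sketch (not taken from the paper; the endpoint URLs and field names are hypothetical) shows how a mashup might join two JSON feeds into a single new view:

```python
# Minimal mashup sketch: combine two hypothetical JSON feeds into one new view.
# The endpoint URLs and field names below are placeholders, not from the paper.
import requests

def fetch_json(url):
    """Fetch a JSON document from a content provider."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    return response.json()

def build_mashup(offices_url, weather_url):
    """Join office locations with current weather to create a new service."""
    offices = fetch_json(offices_url)   # e.g. [{"city": "Madrid", "staff": 40}, ...]
    weather = fetch_json(weather_url)   # e.g. {"Madrid": {"temp_c": 21}, ...}
    return [
        {**office, "weather": weather.get(office["city"], {})}
        for office in offices
    ]
```

Enterprise mashup tools essentially wrap this fetch-and-join pattern behind a graphical interface so that non-expert users can assemble it without writing code.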

2020 ◽  
Vol 19 (10) ◽  
pp. 1602-1618 ◽  
Author(s):  
Thibault Robin ◽  
Julien Mariethoz ◽  
Frédérique Lisacek

A key point in achieving accurate intact glycopeptide identification is the definition of the glycan composition file that is used to match experimental with theoretical masses by a glycoproteomics search engine. At present, these files are mainly built from searching the literature and/or querying data sources focused on posttranslational modifications. Most glycoproteomics search engines include a default composition file that is readily used when processing MS data. We introduce here a glycan composition visualization and comparison tool associated with the GlyConnect database, called GlyConnect Compozitor. It offers a web interface through which the database can be queried to bring out contextual information relative to a set of glycan compositions. The tool takes advantage of compositions being related to one another through shared monosaccharide counts and outputs interactive graphs summarizing the information retrieved from the database. These results provide a guide for selecting or deselecting compositions in a file in order to reflect the context of a study as closely as possible. They also confirm the consistency of a set of compositions based on the content of the GlyConnect database. As part of the tool collection of the Glycomics@ExPASy initiative, Compozitor is hosted at https://glyconnect.expasy.org/compozitor/ where it can be run as a web application. It is also directly accessible from the GlyConnect database.
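As an illustration of how compositions can be related through shared monosaccharide counts, the short sketch below (our own assumption, not the Compozitor code; the composition names and counts are examples) links compositions that differ by a single monosaccharide, yielding the kind of graph structure the tool visualizes:

```python
# Illustrative sketch only: it mimics the idea of relating glycan compositions
# through their monosaccharide counts, not the actual Compozitor implementation.
from itertools import combinations

# Compositions as monosaccharide -> count dictionaries (example values).
compositions = {
    "Hex5HexNAc4":       {"Hex": 5, "HexNAc": 4},
    "Hex6HexNAc4":       {"Hex": 6, "HexNAc": 4},
    "Hex5HexNAc4NeuAc1": {"Hex": 5, "HexNAc": 4, "NeuAc": 1},
}

def distance(a, b):
    """Total absolute difference in monosaccharide counts between two compositions."""
    keys = set(a) | set(b)
    return sum(abs(a.get(k, 0) - b.get(k, 0)) for k in keys)

# Connect compositions that differ by exactly one monosaccharide unit.
edges = [
    (x, y)
    for x, y in combinations(compositions, 2)
    if distance(compositions[x], compositions[y]) == 1
]
print(edges)
```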


2020 ◽  
Author(s):  
Alexander E. Zarebski ◽  
Louis du Plessis ◽  
Kris V. Parag ◽  
Oliver G. Pybus

Inferring the dynamics of pathogen transmission during an outbreak is an important problem in both infectious disease epidemiology and phylodynamics. In mathematical epidemiology, estimates are often informed by time-series of infected cases, while in phylodynamics genetic sequences sampled through time are the primary data source. Each data type provides different, and potentially complementary, insights into transmission. However, inference methods are typically highly specialised and field-specific. Recent studies have recognised the benefits of combining data sources, which include improved estimates of the transmission rate and number of infected individuals. However, the methods they employ are either computationally prohibitive or require intensive simulation, limiting their real-time utility. We present a novel birth-death phylogenetic model, called TimTam, which can be informed by both phylogenetic and epidemiological data. Moreover, we derive a tractable analytic approximation of the TimTam likelihood, the computational complexity of which is linear in the size of the data set. Using TimTam we show how key parameters of transmission dynamics and the number of unreported infections can be estimated accurately using these heterogeneous data sources. The approximate likelihood facilitates inference on large data sets, an important consideration as such data become increasingly common due to improving sequencing capability.
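To illustrate why a likelihood evaluated in a single chronological sweep over the data scales linearly with data-set size, here is a deliberately simplified toy sketch. It is not the TimTam likelihood (it omits, among other things, the probability of unobserved subtrees), and all event kinds and parameter names are our own assumptions:

```python
# Toy birth-death-style log-likelihood combining tree events and case reports.
# Schematic only; NOT the TimTam model. One pass over the sorted events,
# hence linear complexity in the number of data points.
import math

def combined_log_likelihood(events, birth_rate, death_rate, sampling_rate):
    """events: chronologically sorted (time, kind) tuples where kind is
    "branching" (node in the reconstructed tree), "sequence" (sequenced tip)
    or "case" (unsequenced case report)."""
    log_lik = 0.0
    prev_time = 0.0
    lineages = 1                           # toy count of observed lineages
    for time, kind in events:
        dt = time - prev_time
        total_rate = lineages * (birth_rate + death_rate + sampling_rate)
        log_lik += -total_rate * dt        # no observed event during the interval
        if kind == "branching":
            log_lik += math.log(lineages * birth_rate)
            lineages += 1
        else:                              # "sequence" or "case": a sampling event
            log_lik += math.log(lineages * sampling_rate)
            if kind == "sequence":
                lineages -= 1              # sequenced sample becomes a tree tip
        prev_time = time
    return log_lik

events = [(1.0, "case"), (1.5, "branching"), (2.0, "sequence"), (2.5, "case")]
print(combined_log_likelihood(events, birth_rate=2.0, death_rate=1.0, sampling_rate=0.5))
```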


Author(s):  
G. G. Pessoa ◽  
R. C. Santos ◽  
A. C. Carrilho ◽  
M. Galo ◽  
A. Amorim

Images and LiDAR point clouds are the two major data sources used by the photogrammetry and remote sensing community. Although different in nature, these two data sources offer a synergy that has motivated exploration of the potential for combining them in various applications, especially for classification and extraction of information in urban environments. Despite the efforts of the scientific community, integrating LiDAR data and images remains a challenging task. For this reason, the development of Unmanned Aerial Vehicles (UAVs), along with the integration and synchronization of positioning receivers, inertial systems and off-the-shelf imaging sensors, has enabled the exploitation of the high-density photogrammetric point cloud (PPC) as an alternative, obviating the need to integrate LiDAR and optical images. This study therefore aims to compare the results of PPC classification in urban scenes considering radiometric-only, geometric-only and combined radiometric and geometric data applied to the Random Forest algorithm. For this study the following classes were considered: buildings, asphalt, trees, grass, bare soil, sidewalks and power lines, which encompass the most common objects in urban scenes. The classification procedure was performed considering radiometric features (Green band, Red band, NIR band, NDVI and Saturation) and geometric features (Height – nDSM, Linearity, Planarity, Scatter, Anisotropy, Omnivariance and Eigenentropy). The quantitative analyses were performed by means of the classification error matrix using the following metrics: overall accuracy, recall and precision. The quantitative analyses present overall accuracy of 0.80, 0.74 and 0.98 for classification considering radiometric, geometric and combined data, respectively.
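As a rough illustration of the classification procedure described above, the following sketch (the file name, column names and hyper-parameters are our assumptions, not the authors' implementation) trains a Random Forest on each feature subset and reports overall accuracy, precision and recall:

```python
# Hedged sketch of the described experiment: Random Forest classification of a
# photogrammetric point cloud using radiometric and geometric per-point features.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, classification_report
from sklearn.model_selection import train_test_split

RADIOMETRIC = ["green", "red", "nir", "ndvi", "saturation"]
GEOMETRIC = ["ndsm", "linearity", "planarity", "scatter",
             "anisotropy", "omnivariance", "eigenentropy"]

# One row per point; the "label" column holds the reference class (assumed layout).
points = pd.read_csv("ppc_features.csv")

def evaluate(feature_set):
    """Train and score a Random Forest on the chosen feature subset."""
    X_train, X_test, y_train, y_test = train_test_split(
        points[feature_set], points["label"], test_size=0.3, random_state=0)
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)
    predictions = model.predict(X_test)
    print(f"Overall accuracy: {accuracy_score(y_test, predictions):.2f}")
    print(classification_report(y_test, predictions))  # per-class precision and recall

evaluate(RADIOMETRIC)                # radiometric-only
evaluate(GEOMETRIC)                  # geometric-only
evaluate(RADIOMETRIC + GEOMETRIC)    # combined
```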


Author(s):  
Nouha Arfaoui ◽  
Jalel Akaichi

The healthcare industry generates a huge amount of data that is underused for decision-making needs, owing to the absence of a specific design mastered by healthcare actors and the lack of collaboration and information exchange between institutions. In this work, a new approach is proposed to design the schema of a Hospital Data Warehouse (HDW). It starts by generating the schemas of the Hospital Data Marts (HDM), one for each department, taking into consideration the requirements of the healthcare staff and the existing data sources. Then, it merges them to build the schema of the HDW. The bottom-up approach is suitable because the healthcare departments operate separately. To merge the schemas, a new schema integration methodology is used. It starts by extracting the similar elements of the schemas and the conflicts between them, and presents them as mapping rules. Then, it transforms the rules into queries and applies them to merge the schemas.
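The sketch below is a toy version of this bottom-up merge (the schema structures, rule format and generated SQL are our assumptions, not the methodology's actual artefacts): shared attributes of two data-mart schemas are extracted as mapping rules, which are then turned into merge queries.

```python
# Illustrative sketch only: detect shared attributes between two hospital data
# marts, express them as mapping rules, and turn the rules into merge queries.
cardiology_mart = {"fact_stay": ["patient_id", "admission_date", "length_of_stay"]}
radiology_mart = {"fact_exam": ["patient_id", "exam_date", "exam_type"]}

def extract_mapping_rules(schema_a, schema_b):
    """Detect attributes shared by the two data-mart schemas and express them as rules."""
    rules = []
    for table_a, cols_a in schema_a.items():
        for table_b, cols_b in schema_b.items():
            shared = sorted(set(cols_a) & set(cols_b))
            if shared:
                rules.append({"tables": (table_a, table_b), "join_on": shared})
    return rules

def rules_to_queries(rules):
    """Turn each mapping rule into a query that merges the corresponding tables."""
    queries = []
    for rule in rules:
        a, b = rule["tables"]
        condition = " AND ".join(f"{a}.{c} = {b}.{c}" for c in rule["join_on"])
        queries.append(f"SELECT * FROM {a} JOIN {b} ON {condition};")
    return queries

for query in rules_to_queries(extract_mapping_rules(cardiology_mart, radiology_mart)):
    print(query)
```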


2010 ◽  
pp. 2298-2309
Author(s):  
Justin Meza ◽  
Qin Zhu

Knowledge is the fact or condition of knowing something from experience or via association. Knowledge organization is the systematic management and organization of knowledge (Hodge, 2000). With the advent of Web 2.0, mashups have become a hot new thing on the Web. A mashup is a Web site or a Web application that combines content from more than one source and delivers it in an integrated way (Fichter, 2006). In this article, we will first explore the concept of mashups and look at the components of a mashup. We will provide an overview of various mashups on the Internet. We will look at the literature about knowledge and knowledge organization. Then, we will elaborate on our experiment with a mashup in an enterprise environment. We will describe how we mixed the content from two sets of sources and created a new source: a novel way of organizing and displaying HP Labs Technical Reports. The findings from our project will be included and some best practices for creating enterprise mashups will be given. The future of enterprise mashups will be discussed as well.
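The following minimal sketch illustrates the kind of content mixing described here: report metadata from one source is reorganized using subject tags from a second source. The data, field names and grouping scheme are placeholders, not HP's actual system.

```python
# Minimal enterprise-mashup sketch: group technical reports (source 1) by
# subject tags obtained from a separate source (source 2). Placeholder data.
from collections import defaultdict

reports = [  # source 1: technical report metadata
    {"id": "HPL-2007-01", "title": "Storage systems survey"},
    {"id": "HPL-2007-02", "title": "Web services composition"},
]
tags = {     # source 2: subject tags keyed by report id
    "HPL-2007-01": ["storage"],
    "HPL-2007-02": ["web", "services"],
}

# Combine the two sources into a new, tag-oriented view of the reports.
by_tag = defaultdict(list)
for report in reports:
    for tag in tags.get(report["id"], ["untagged"]):
        by_tag[tag].append(report["title"])

for tag, titles in sorted(by_tag.items()):
    print(f"{tag}: {', '.join(titles)}")
```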


2015 ◽  
Vol 54 (06) ◽  
pp. 488-499 ◽  
Author(s):  
S. Denaxas ◽  
C. P. Friedman ◽  
A. Geissbuhler ◽  
H. Hemingway ◽  
D. Kalra ◽  
...  

Summary: This article is part of a For-Discussion-Section of Methods of Information in Medicine about the paper “Combining Health Data Uses to Ignite Health System Learning” written by John D. Ainsworth and Iain E. Buchan [1]. It is introduced by an editorial. This article contains the combined commentaries of experts invited to independently comment on the paper of Ainsworth and Buchan. In subsequent issues the discussion can continue through letters to the editor. With these comments on the paper “Combining Health Data Uses to Ignite Health System Learning”, written by John D. Ainsworth and Iain E. Buchan [1], the journal seeks to stimulate a broad discussion on new ways of combining data sources for the reuse of health data in order to identify new opportunities for health system learning. An international group of experts has been invited by the editor of Methods to comment on this paper. Each of the invited commentaries forms one section of this paper.


Author(s):  
Chris P. Archibald ◽  
Jason Sutherland ◽  
Jennifer Geduld ◽  
Donald Sutherland ◽  
Ping Yan
