Idiographica: a general-purpose web application to build idiograms on-demand for human, mouse and rat

2007 ◽  
Vol 23 (21) ◽  
pp. 2945-2946 ◽  
Author(s):  
T. Kin ◽  
Y. Ono
2011 ◽  
Vol 7 (2) ◽  
pp. 71
Author(s):  
Ivan Magdalenić ◽  
Danijel Radošević ◽  
Dragutin Kermek

The on-demand generation of source code and its execution is essential if computers are expected to play an active role in information discovery and retrieval. This paper presents a model of implementation of a source code generator whose purpose is to generate source code on demand. The implementation of the source code generator is fully configurable, and its adaptation to a new application is done by changing the generator configuration rather than the generator itself. The advantage of using the source code generator is the rapid and automatic development of a family of applications once the necessary program templates and generator configuration are made. The model of implementation of the source code generator is general, and the implemented source code generator can be used in different areas. We use the source code generator for the dynamic generation of ontology-supported Web services for data retrieval and for building different kinds of web applications.
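The separation the abstract describes, with program templates and a declarative configuration kept apart from a fixed generator engine, can be sketched as follows (the template and configuration formats here are invented for illustration; the paper's actual generator is more elaborate):

```python
# Minimal sketch of a configurable, template-driven source code generator:
# adapting it to a new application means changing TEMPLATES and `config`,
# never the generate() function itself.
from string import Template

# Program templates, kept separate from the generator engine.
TEMPLATES = {
    "getter": Template("def get_$name(self):\n    return self._$name\n"),
    "setter": Template("def set_$name(self, value):\n    self._$name = value\n"),
}

def generate(config):
    """Emit source code on demand from a declarative configuration."""
    parts = []
    for item in config:
        template = TEMPLATES[item["template"]]
        parts.append(template.substitute(item["params"]))
    return "\n".join(parts)

# A hypothetical generator configuration for one small application.
config = [
    {"template": "getter", "params": {"name": "price"}},
    {"template": "setter", "params": {"name": "price"}},
]
code = generate(config)
```

Generating a different application then amounts to supplying a different `config` list, which mirrors the paper's claim that adaptation happens in the configuration, not the generator.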


Author(s):  
Daniela Morais Fonte ◽  
Daniela da Cruz ◽  
Pedro Rangel Henriques ◽  
Alda Lopes Gancarski

XML is a widely used general-purpose annotation formalism for creating custom markup languages. XML annotations give structure to plain documents so that their content can be interpreted. To extract information from XML documents, the XPath and XQuery languages can be used. However, learning these dialects requires considerable effort. In this context, the traditional Query-By-Example methodology (from Relational Databases) can be an important contribution to leverage this learning process, freeing the user from knowing the specific query language details or even the document structure. This chapter describes how to apply the Query-By-Example concept in a Web application for information retrieval from XML documents, the GuessXQ system. This engine is capable of deducing, from an example, the respective XQuery statement. The example consists of marking the desired components directly on a sample document, picked up from a collection. After inferring the corresponding query, GuessXQ applies it to the collection to obtain the desired result.
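The core Query-By-Example idea, deriving a query from a component the user marks on a sample document, can be illustrated with a toy inference step (function names are hypothetical; GuessXQ's real inference is considerably more sophisticated):

```python
# Toy sketch of query-by-example over XML: from an element the user
# marks in a sample document, derive the path expression that selects
# the same components in any document of the collection.
import xml.etree.ElementTree as ET

def infer_query(root, marked):
    """Build a simple XQuery/XPath path by walking up from the
    marked element to the document root."""
    parents = {child: parent for parent in root.iter() for child in parent}
    steps = []
    node = marked
    while node is not None:
        steps.append(node.tag)
        node = parents.get(node)
    return "/" + "/".join(reversed(steps))

sample = ET.fromstring(
    "<library><book><title>XQuery</title></book></library>")
marked = sample.find("./book/title")  # the component the user clicks on
query = infer_query(sample, marked)   # "/library/book/title"
```

Applying the inferred path to every document in the collection then yields the result set, without the user ever writing XQuery by hand.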


Author(s):  
M. Ghiassi ◽  
C. Spera

This chapter presents a web-enabled, intelligent agent-based information system model to support on-demand and mass-customized markets. The authors present a distributed, real-time, Java-based, mobile information system that interfaces with firms’ existing IT infrastructures, follows a build-to-order production strategy, and integrates order entry with supply chain, manufacturing, and product delivery systems. The model provides end-to-end visibility across the entire operation and supply chain, allows for a collaborative and synchronized production system, and supports an event-based manufacturing environment. The system introduces four general-purpose intelligent agents to support the entire on-demand and mass customization process. The adoption of this approach by a semiconductor manufacturing firm resulted in reductions in product lead time (by half), buffer inventory (from five to two weeks), and manual transactions (by 80%). Application of this approach to a leading automotive manufacturer, using simulated data, resulted in a 51% total inventory reduction while increasing plant utilization by 30%. Adoption of this architecture by a pharmaceutical firm improved the accuracy of trial completion estimates from 74% to 82% for clinical trials, reducing trial cost overruns. These results verify that the successful adoption of this system can reduce inventory and logistics costs, improve delivery performance, increase manufacturing facility utilization, and provide higher overall profitability.


2019 ◽  
Vol 214 ◽  
pp. 01045
Author(s):  
Giacomo Cucciati

The Large Hadron Collider (LHC) at CERN in Geneva, Switzerland, has just completed the Run 2 era, colliding protons at a center-of-mass energy of 13 TeV at high instantaneous luminosity. The Compact Muon Solenoid (CMS) is a general-purpose particle detector experiment at the LHC. The CMS electromagnetic calorimeter (ECAL) has been designed to achieve excellent energy and position resolution for electrons and photons. A multi-machine distributed software configures the on-detector and off-detector electronic boards composing the ECAL data acquisition (DAQ) system and follows the life cycle of the acquisition process. Since the beginning of Run 2 in 2015, many improvements to the ECAL DAQ have been implemented to reduce and mitigate occasional errors in the front-end electronics and elsewhere. Efforts at the software level have been made to introduce automatic recovery in case of errors. Automatic actions have made the online monitoring of the DAQ board status even more important. For this purpose a new web application, EcalView, has been developed. It runs on a lightweight Node.js JavaScript server framework. It is composed of several routines that cyclically collect the status of the electronics, and it displays the information when web requests are launched by client-side graphical interfaces. For each board, detailed information can be loaded and presented on specific pages if requested by the expert. Server-side routines store information about electronics errors in a SQLite database in order to perform offline analysis of the long-term status of the boards.
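EcalView itself is a Node.js application, but its cyclic poll-and-persist monitoring pattern can be sketched compactly in Python (the board identifiers and status source below are invented for illustration; the real system polls the ECAL DAQ electronics):

```python
# Sketch of a monitoring routine that cyclically collects board status
# and persists error states to SQLite for later offline analysis.
import sqlite3
import time

def store_status(conn, board, status):
    """Record one non-OK board state with a timestamp."""
    conn.execute(
        "INSERT INTO board_status (board, status, ts) VALUES (?, ?, ?)",
        (board, status, time.time()))
    conn.commit()

def poll_once(conn, read_status):
    """One polling cycle: read each board and record non-OK states."""
    for board in ("EB+01", "EB-05", "EE+03"):  # hypothetical board IDs
        status = read_status(board)
        if status != "OK":
            store_status(conn, board, status)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE board_status (board TEXT, status TEXT, ts REAL)")
# Simulated status source: one board is in error.
poll_once(conn, lambda b: "ERROR" if b == "EB-05" else "OK")
rows = conn.execute("SELECT board, status FROM board_status").fetchall()
```

Running such a cycle on a timer, with the web tier reading from the same database on request, gives both the live status view and the long-term error history the abstract describes.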


Author(s):  
K. Aravindhan ◽  
K. Periyakaruppan ◽  
T.S. Anusa ◽  
S. Kousika ◽  
A. Lakshmi Priya

Author(s):  
Humberto Cortés ◽  
Antonio Navarro

With the advent of multitier and service-oriented architectures, the presentation tier is more detached from the rest of the web application than ever. Moreover, complex web applications can have thousands of linked web pages built using different technologies. As a result, the description of navigation maps has become more complex in recent years. This paper presents NMMp, a UML extension that: (i) provides an abstract vision of the navigation structure of the presentation tier of web applications, independently of architectural details or programming languages; (ii) can be automatically transformed into UML-WAE class diagrams, which can be easily integrated with the design of the other tiers of the web application; (iii) encourages the use of architectural and multitier design patterns; and (iv) has been developed according to OMG standards, thus facilitating its use with general purpose UML CASE tools in industry.


2020 ◽  
Vol 10 (2) ◽  
Author(s):  
Daniel M Bittner ◽  
Alejandro E Brito ◽  
Mohsen Ghassemi ◽  
Shantanu Rane ◽  
Anand D Sarwate ◽  
...  

We consider privacy-preserving learning in the context of online learning. In settings where data instances arrive sequentially in streaming fashion, incremental training algorithms such as stochastic gradient descent (SGD) can be used to learn and update prediction models. When labels are costly to acquire, active learning methods can be used to select samples to be labeled from a stream of unlabeled data. These labeled data samples are then used to update the machine learning models. Privacy-preserving online learning can be used to update predictors on data streams containing sensitive information. The differential privacy framework quantifies the privacy risk in such settings. This work proposes a differentially private online active learning algorithm using stochastic gradient descent (SGD) to retrain the classifiers. We propose two methods for selecting informative samples. We incorporated this into a general-purpose web application that allows a non-expert user to evaluate the privacy-aware classifier and visualize key privacy-utility tradeoffs. Our application supports linear support vector machines and logistic regression and enables an analyst to configure and visualize the effect of using differentially private online active learning versus a non-private counterpart. The application is useful for comparing the privacy/utility tradeoff of different algorithms, which can be useful to decision makers in choosing which algorithms and parameters to use. Additionally, we use the application to evaluate our SGD-based solution and to show that it generates predictions with a superior privacy-utility tradeoff than earlier methods.
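A common way to make an SGD update differentially private is to clip the per-example gradient and add calibrated noise before applying it; a minimal sketch for logistic regression follows (the paper's exact algorithm, noise calibration, and parameters may differ):

```python
# Minimal sketch of one differentially private SGD step for logistic
# regression: clip the per-example gradient to a fixed L2 norm, then
# add Gaussian noise scaled to that clipping bound.
import math
import random

def dp_sgd_step(w, x, y, lr=0.1, clip=1.0, noise_std=0.5, rng=None):
    rng = rng or random.Random(0)
    z = sum(wi * xi for wi, xi in zip(w, x))
    pred = 1.0 / (1.0 + math.exp(-z))          # sigmoid prediction
    grad = [(pred - y) * xi for xi in x]       # logistic-loss gradient
    norm = math.sqrt(sum(g * g for g in grad))
    scale = max(1.0, norm / clip)              # clip to L2 norm <= clip
    noisy = [g / scale + rng.gauss(0.0, noise_std * clip) for g in grad]
    return [wi - lr * gi for wi, gi in zip(w, noisy)]

w = dp_sgd_step([0.0, 0.0, 0.0], [1.0, 2.0, -1.0], 1.0)
```

Larger `noise_std` strengthens the privacy guarantee but degrades accuracy, which is exactly the privacy/utility tradeoff the application lets analysts visualize.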


F1000Research ◽  
2015 ◽  
Vol 4 ◽  
pp. 81 ◽  
Author(s):  
Bjørn Fjukstad ◽  
Karina Standahl Olsen ◽  
Mie Jareid ◽  
Eiliv Lund ◽  
Lars Ailo Bongo

Kvik is an open-source system that we developed for explorative analysis of functional genomics data from large epidemiological studies. Creating such studies requires a significant amount of time and resources. It is therefore usual to reuse the data from one study for several research projects. Often each project requires implementing new analysis code, integration with specific knowledge bases, and specific visualizations. Existing data exploration tools do not provide all the required functionality for such multi-study data exploration. We have therefore developed the Kvik framework which makes it easy to implement specialized data exploration tools for specific projects. Applications in Kvik follow the three-tier architecture commonly used in web applications, with REST interfaces between the tiers. This makes it easy to adapt the applications to new statistical analyses, metadata, and visualizations. Kvik uses R to perform on-demand data analyses when researchers explore the data. In this note, we describe how we used Kvik to develop the Kvik Pathways application to explore gene expression data from healthy women with high and low plasma ratios of essential fatty acids using biological pathway visualizations. Researchers interact with Kvik Pathways through a web application that uses the JavaScript libraries Cytoscape.js and D3. We use Docker containers to make deployment of Kvik Pathways simple.
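The three-tier structure with REST interfaces between the tiers can be sketched as a thin dispatcher that forwards analysis requests to a compute backend on demand (Kvik's backend runs R; here a Python function stands in for it, and the endpoint path is hypothetical):

```python
# Sketch of a REST boundary between Kvik's web tier and its analysis
# tier: the web tier only knows the URL scheme, not the analysis code.
import json

def analysis_backend(gene):
    """Stand-in for an on-demand statistical analysis of one gene
    (performed by R in the real system)."""
    return {"gene": gene, "fold_change": 1.7}

def handle_request(path):
    """Minimal REST dispatcher: /genes/<id> -> JSON analysis result."""
    if path.startswith("/genes/"):
        gene = path[len("/genes/"):]
        return 200, json.dumps(analysis_backend(gene))
    return 404, json.dumps({"error": "not found"})

status, body = handle_request("/genes/FADS1")
```

Because the tiers meet only at this interface, swapping in a new statistical analysis or visualization changes one tier without touching the others, which is the adaptability the framework aims for.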


Author(s):  
Zakaria Benlalia ◽  
Karim Abouelmehdi ◽  
Abderrahim Beni-hssane ◽  
Abdellah Ezzati

Cloud computing has emerged as a new paradigm for providing on-demand computing resources and outsourcing software and hardware infrastructures. Load balancing is one of the major concerns in a cloud computing environment: how to distribute the load efficiently among all the nodes. Solving this problem requires load balancing algorithms, so in this paper we compare the existing algorithms for web applications and, based on the results obtained, choose the best among them.
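One of the classic policies such comparisons typically include is round robin, which assigns incoming requests to nodes in cyclic order regardless of their current load; a minimal sketch (node names are illustrative):

```python
# Minimal sketch of round-robin load balancing: requests are handed
# to nodes in strict cyclic order, ignoring current node load.
from itertools import cycle

def round_robin(nodes):
    """Return a scheduler that yields the next node for each request."""
    order = cycle(nodes)
    return lambda: next(order)

pick = round_robin(["vm-1", "vm-2", "vm-3"])
assignments = [pick() for _ in range(5)]
# cycles through vm-1, vm-2, vm-3, then wraps to vm-1, vm-2
```

Its simplicity makes it a natural baseline; load-aware policies (e.g. least-connections or throttled algorithms) instead consult node state before assigning, trading overhead for better balance under uneven request costs.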

