PeerSum: a summary service for P2P applications

Author(s):  
Rabab Hayek ◽  
Guillaume Raschia ◽  
Patrick Valduriez ◽  
Noureddine Mouaddib

Purpose – The goal of this paper is to contribute to the development of both data localization and description techniques in P2P systems. Design/methodology/approach – The approach introduces a novel indexing technique that relies on linguistic data summarization in the context of P2P systems. Findings – The cost model of the approach, as well as the simulation results, shows that the approach allows the efficient maintenance of data summaries without incurring high traffic overhead. In addition, the cost of query routing is significantly reduced when summaries are used. Research limitations/implications – The paper considers a summary service defined on the APPA architecture. Future work should study extending this approach so that it is generally applicable to any P2P data management system. Practical implications – This paper mainly studies the quantitative gain that can be obtained in query processing by exploiting data summaries. Future work aims to apply this technique to real (rather than synthetic) data in order to study the qualitative gain that can be obtained from approximately answering a query. Originality/value – The novelty of the approach lies in the double exploitation of summaries in P2P systems: data summaries allow for semantic-based query routing, and also for approximate query answering using their intensional descriptions.
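The double use of summaries for routing can be illustrated with a toy example. Everything below is a hypothetical sketch, not PeerSum's actual data structures: each peer publishes a linguistic summary of its data, and a query is forwarded only to peers whose summary covers the query's labels.

```python
# Toy summary-based routing sketch (peer names, summaries and the covers()
# rule are illustrative assumptions, not the PeerSum implementation).
peers = {
    "p1": {"age": {"young", "middle"}, "income": {"low"}},
    "p2": {"age": {"old"},             "income": {"high"}},
    "p3": {"age": {"young"},           "income": {"high", "low"}},
}

def covers(summary, predicates):
    """A peer is a routing candidate only if its summary mentions every
    linguistic label the query asks for."""
    return all(label in summary.get(attr, set())
               for attr, label in predicates.items())

def route(query):
    """Return the peers worth contacting for this query."""
    return sorted(p for p, s in peers.items() if covers(s, query))

print(route({"age": "young", "income": "low"}))  # → ['p1', 'p3']
```

Peers whose summaries cannot match are pruned from routing, which is where the query-routing saving comes from; the same intensional descriptions can also serve as an approximate answer without contacting the peers at all.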

2014 ◽  
Vol 8 (1) ◽  
pp. 100-120 ◽  
Author(s):  
Yun Seng Lim ◽  
Siong Lee Koh ◽  
Stella Morris

Purpose – Biomass waste can be used as fuel in biomass power plants to generate electricity. It is a type of renewable energy widely available in Malaysia, where 12 million tons of biomass waste are produced every year. At present, only 5 per cent of the total biomass waste in Sabah, one of the states in Malaysia, is used to generate electricity for on-site consumption. The remaining 95 per cent has not been utilized because the cost of transporting the waste from the plantations to the power plants is substantial, making the cost of biomass-generated electricity high. Therefore, a methodology is developed and presented in this paper to determine the optimum geographic distribution and capacities of biomass power plants around a region so that the cost of biomass-generated electricity can be minimized. The paper aims to discuss these issues. Design/methodology/approach – The methodology is able to identify potential biomass power plant sites at any location in a region, taking into account the operation and capital costs of the power plants as well as the cost of connecting the power plants to the national grid. The methodology is programmed in Fortran. Findings – This methodology is applied to Sabah using real data. The results show the best locations and capacities of biomass power plants in Sabah. There are 20 locations suitable for biomass power plants. The total capacity of these plants is 4,996 MW, with an annual generation of 35,013 GWh. This is sufficient to meet all the electricity demand in Sabah up to 2030. Originality/value – The methodology is an effective tool to determine the best geographic locations and sizes of the biomass power plants around a region.
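A minimal sketch of the siting logic, assuming invented candidate sites and costs (the paper's actual model is a Fortran program that jointly optimizes locations and capacities): rank candidate plants by all-in delivered cost per MWh and select them until demand is covered.

```python
# Hypothetical candidate sites: (name, annual energy in GWh,
# all-in cost per MWh including transport, capital, O&M and grid connection).
candidates = [
    ("A", 2100, 48.0),
    ("B", 1050, 55.0),
    ("C", 3500, 42.5),
]

def site_plants(demand_gwh):
    """Greedy selection sketch: cheapest sites first until demand is met."""
    chosen, served = [], 0.0
    for name, gwh, cost in sorted(candidates, key=lambda c: c[2]):
        if served >= demand_gwh:
            break
        chosen.append(name)
        served += gwh
    return chosen, served

print(site_plants(4000))  # → (['C', 'A'], 5600.0)
```

A real siting model would jointly optimize locations and capacities rather than greedily ranking fixed candidates, but the cost trade-off it searches over has this shape.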


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Behnam Malmir ◽  
Christopher W. Zobel

Purpose – When a large-scale outbreak such as the COVID-19 pandemic happens, organizations responsible for delivering relief may face a lack of both provisions and human resources. Governments are the primary source of the humanitarian supplies required during such a crisis; however, coordination with humanitarian NGOs in handling such pandemics is a vital form of public-private partnership (PPP). Aid organizations must consider not only the total degree of demand satisfaction in such cases but also the obligation that relief goods such as medicine and food be distributed as equitably as possible within the affected areas (AAs). Design/methodology/approach – Given the challenges of acquiring real data on procuring relief items during the COVID-19 outbreak, a comprehensive simulation-based plan is used to generate 243 small, medium and large-sized problems with uncertain demand, and these problems are solved to optimality using GAMS. Finally, post-optimality analyses are conducted and some useful managerial insights are presented. Findings – The results imply that, given a reasonable measure of deprivation costs, it can be important for managers to focus less on the logistical costs of delivering resources and more on the value associated with quickly and effectively reducing the overall suffering of the affected individuals. It is also important for managers to recognize that even though deprivation costs and transportation costs both increase as the time horizon lengthens, the actual growth rate of the deprivation costs decreases over time. Originality/value – In this paper, a novel mathematical model is presented to minimize the total costs of delivering humanitarian aid for pandemic relief. With a focus on sustainability of operations, the model incorporates total transportation and delivery costs, the cost of utilizing the transportation fleet (transportation mode cost), and equity and deprivation costs. Taking social costs such as deprivation and equity costs into account, in addition to other important classic cost terms, enables managers to organize the best possible response when such outbreaks happen.
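The structure of such an objective can be sketched as follows. All cost terms, exponents and weights here are assumptions for illustration; the paper's model is richer (uncertain demand, transportation mode costs, multiple periods).

```python
import statistics

def relief_cost(shipments, deprivation_times, fill_rates,
                unit_transport=2.0, fleet_fixed=100.0,
                deprivation_rate=15.0, equity_weight=500.0):
    """Illustrative pandemic-relief objective: logistics cost plus social costs."""
    transport = unit_transport * sum(shipments)
    # Deprivation cost grows convexly with the time people wait for supplies.
    deprivation = deprivation_rate * sum(t ** 1.5 for t in deprivation_times)
    # Equity cost penalizes uneven demand satisfaction across affected areas.
    equity = equity_weight * statistics.pstdev(fill_rates)
    return transport + fleet_fixed + deprivation + equity

fast = relief_cost([30, 30], [1.0, 1.0], [0.9, 0.9])
slow = relief_cost([30, 30], [4.0, 4.0], [0.9, 0.6])
print(fast < slow)  # → True: longer and more uneven deprivation costs more
```

The convex deprivation term is what shifts the optimum away from pure logistics-cost minimization: a slightly more expensive delivery plan that reaches people sooner can still lower the total objective.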


2020 ◽  
Vol 22 (2) ◽  
pp. 53-70
Author(s):  
Juan Rendon Schneir ◽  
Konstantinos Konstantinou ◽  
Julie Bradford ◽  
Gerd Zimmermann ◽  
Heinz Droste ◽  
...  

Purpose – 5G systems will enable improved transmission performance and the delivery of advanced communication services. To meet the expected requirements, operators will need to invest in network modernisation, with the radio access network being the most expensive network component. One possible way for operators to reduce this investment is to share resources by means of a multi-tenancy concept: a mobile service provider may use the common infrastructure of one or more infrastructure providers, which in turn provide services to multiple tenants. This paper aims to study the expected cost savings in terms of capital expenditures (CAPEX) and operational expenditures (OPEX) that can be achieved when using a cloudified 5G multi-tenant network. Design/methodology/approach – A cost model was used. The study period is 2020-2030 and the study area consists of three local districts in central London, UK. Findings – The paper shows that the total cost reduction achieved when using multi-tenancy for a 5G broadband network, in comparison with the case where operators invest independently, ranges from 5.2% to 15.5%. Research limitations/implications – Further research is needed to assess the cost implications of network sharing for 5G on a regional or nationwide basis. Originality/value – Very little quantitative research about the cost implications of network sharing under 5G networks has been published so far. This paper sheds light on the economic benefits of multi-tenancy in a 5G broadband network.
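A back-of-envelope sketch of how such a saving is computed. All monetary figures below are invented; only the comparison structure (independent builds versus one shared build plus per-tenant overhead) follows the multi-tenancy idea.

```python
def tco(capex, opex_per_year, years=11):
    """Total cost of ownership over the 2020-2030 study period."""
    return capex + opex_per_year * years

# Three operators deploying independently (hypothetical figures):
standalone = 3 * tco(capex=120.0, opex_per_year=18.0)

# One shared cloudified RAN plus a small per-tenant overhead:
shared = tco(capex=280.0, opex_per_year=42.0) + 3 * tco(capex=12.0, opex_per_year=3.0)

saving = 1 - shared / standalone
print(f"{saving:.1%}")  # → 8.1%
```

The saving reported by such a model is sensitive to how much of the CAPEX and OPEX is genuinely shareable versus per-tenant, which is why the paper reports a range rather than a single figure.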


2019 ◽  
Vol 36 (4) ◽  
pp. 526-551 ◽  
Author(s):  
Mohammad Hosein Nadreri ◽  
Mohamad Bameni Moghadam ◽  
Asghar Seif

Purpose – The purpose of this paper is to develop an economic statistical design based on the concepts of adjusted average time to signal (AATS) and average number of false alarms (ANF) for the X̄ control chart under a Weibull shock model with multiple assignable causes. Design/methodology/approach – The design is based on a multiple-assignable-causes cost model. The new proposed cost model is compared, using the same cost and time parameters, across the optimal design parameters under uniform and non-uniform sampling schemes. Findings – Numerical results indicate that the cost model with non-uniform sampling has a lower cost than that with uniform sampling. Using sensitivity analysis, the effect of changing the fixed and variable time and cost parameters and the Weibull distribution parameters on the optimum values of the design parameters and the loss cost is examined and discussed. Practical implications – This research adds to the body of knowledge on the quality control of process monitoring systems. The paper may be of particular interest to practitioners of quality systems in factories where multiple assignable causes affect the production process. Originality/value – The cost functions for uniform and non-uniform sampling schemes are presented based on multiple assignable causes with the AATS and ANF concepts for the first time.
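One standard way to build a non-uniform scheme under an increasing Weibull hazard is to place sampling times at equal increments of cumulative hazard, so sampling becomes denser as a shock becomes more likely. The sketch below assumes this equal-hazard rule and illustrative parameters; it is not necessarily the exact scheme the paper uses.

```python
# Weibull cumulative hazard: H(t) = (t / eta) ** beta.
# With beta > 1 the hazard increases, so equal-hazard intervals shrink over time.
def sampling_times(n, beta=2.0, eta=10.0, horizon=10.0):
    """Return n sampling times carrying equal increments of cumulative hazard."""
    h_total = (horizon / eta) ** beta
    return [eta * (k * h_total / n) ** (1.0 / beta) for k in range(1, n + 1)]

times = sampling_times(4)
print([round(t, 2) for t in times])  # → [5.0, 7.07, 8.66, 10.0]
```

The intervals (5.0, 2.07, 1.59, 1.34) get progressively shorter, which is the intuition behind the non-uniform scheme's lower cost: inspection effort concentrates where assignable causes are more likely to have occurred.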


2019 ◽  
Vol 47 (4) ◽  
pp. 412-432 ◽  
Author(s):  
Yassine Benrqya

Purpose – The purpose of this paper is to investigate the costs and benefits of implementing the cross-docking strategy in a retail supply chain context using a cost model. In particular, the effects of using different typologies of cross-docking compared with traditional warehousing are investigated, based on an actual case study of a fast-moving consumer goods (FMCG) company and a major French retailer. Design/methodology/approach – The research is based on a case study of an FMCG company and a major French retailer. The case study is used to develop a cost model and to identify the main cost parameters affected by implementing the cross-docking strategy. Based on the cost model, the main cost factors characterizing four configurations are compared: the traditional warehousing strategy (AS-IS, the reference configuration for comparison), where both the retailer and suppliers keep inventory in their warehouses; the cross-docking pick-by-line strategy (TO-BE1), where inventory is removed from the retailer warehouse and allocation and sorting are performed at the retailer distribution centre (DC); the cross-docking pick-by-store strategy (TO-BE2), where allocation and sorting are done at the supplier DC; and a combination of the cross-docking pick-by-line strategy and traditional warehousing (TO-BE3). Findings – The case study provides three main observations. First, compared with traditional warehousing, cross-docking with sorting and allocation done at the supplier level increases the entire supply chain cost by 5.3 per cent. Second, cross-docking with allocation and sorting done at the retailer level is more economical than traditional warehousing: a 1 per cent cost reduction. Third, combining cross-docking and traditional warehousing reduces the supply chain cost by 6.4 per cent. Research limitations/implications – A quantitative case study may not be highly generalisable; however, the findings form a foundation for further understanding of the reconfiguration of a retail supply chain. Originality/value – This paper fills a gap by proposing a cost analysis based on a real case study and by investigating the costs and benefits of implementing different configurations in the retail supply chain context. Furthermore, the cost model may be used to help managers choose the right distribution strategy for their supply chain.
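The comparison can be reproduced in miniature. The component breakdown below is invented; only the totals are chosen so that the deltas match the figures reported above (+5.3%, -1% and -6.4% versus the AS-IS baseline).

```python
# Invented component costs per configuration; totals mirror the reported deltas.
costs = {
    "AS-IS (warehousing)":    {"inventory": 40.0, "handling": 30.0, "transport": 30.0},
    "TO-BE1 (pick-by-line)":  {"inventory": 28.0, "handling": 35.0, "transport": 36.0},
    "TO-BE2 (pick-by-store)": {"inventory": 30.0, "handling": 42.3, "transport": 33.0},
    "TO-BE3 (hybrid)":        {"inventory": 30.0, "handling": 31.0, "transport": 32.6},
}
base = sum(costs["AS-IS (warehousing)"].values())
for name, c in costs.items():
    total = sum(c.values())
    print(f"{name}: {total:5.1f} ({(total - base) / base:+.1%} vs AS-IS)")
```

The structural point the toy numbers carry over from the case study: pick-by-line shifts inventory cost out of the chain faster than it adds handling and transport, while pick-by-store does the reverse.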


2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Noura Almansoori ◽  
Samah Aldulaijan ◽  
Sara Althani ◽  
Noha M. Hassan ◽  
Malick Ndiaye ◽  
...  

Purpose – Researchers have heavily investigated the use of industrial robots to enhance the quality of spray-painted surfaces. Despite its advantages, automating the process is not always economically feasible. The manual process, on the other hand, is cheaper, but its quality depends on the mental and physical condition of the worker, making it difficult to operate consistently. This research proposes a mathematical cost model that integrates human factors in determining optimal process settings. Design/methodology/approach – Taguchi's robust design is used to investigate the effect of fatigue, stability of the worker's hand and speed on paint consumption, surface quality and processing time. A crossed-array experimental design is deployed. Regression analysis is then used to model the response variables and formulate the cost model, followed by a multi-response optimization. Findings – Results reveal that noise factors have a significant influence on painting quality, time and cost of the painted surface. As a result, a noise management strategy should be implemented to reduce their impact and obtain better quality and productivity. The cost model can be used to determine optimal settings for different applications, by product and by industry. Originality/value – Hardly any prior research has considered the influence of human factors; most has focused on robot trajectory and its effect on paint uniformity. In the proposed research, both cost and quality are integrated into a single objective. Quality is measured in terms of uniformity, smoothness and surface defects. The interaction between trajectory and flow rate is investigated here for the first time. A unique approach integrating quality management, statistical analysis and optimization is used.
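Taguchi's crossed-array analysis scores each control-factor run by a signal-to-noise ratio computed across the noise conditions. A minimal sketch for a smaller-the-better response such as paint consumption (the readings are hypothetical):

```python
import math

def sn_smaller_better(responses):
    """Taguchi smaller-the-better S/N ratio: -10 * log10(mean of squared
    responses). Higher (less negative) means lower and more robust output."""
    return -10.0 * math.log10(sum(r * r for r in responses) / len(responses))

# Hypothetical paint-consumption readings for one control-factor run,
# measured under crossed noise conditions (e.g. fresh vs fatigued worker):
print(round(sn_smaller_better([2.1, 2.4, 2.0, 2.6]), 2))  # → -7.19
```

Runs are then compared by S/N ratio: the setting whose ratio stays high across the fatigue and hand-stability noise levels is the robust choice, which is exactly the role noise management plays in the findings above.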


2018 ◽  
Vol 35 (2) ◽  
pp. 405-429 ◽  
Author(s):  
Evrikleia Chatzipetrou ◽  
Odysseas Moschidis

Purpose The purpose of this paper is to examine the longitudinal evolution of quality costs measurement, depicted in 99 real data studies of the last 30 years. A meta-analysis of these articles is conducted, in order to highlight the evolution of the variables that have been used for the study of quality costing, in relation to the date of publication, business sector and geographical origin of each paper. Design/methodology/approach The analysis of the cost components has been conducted with the use of multiple correspondence analysis, which is a useful tool for the exploration of the interrelations among all elements, aiming at the identification of the dominant and most substantial tendencies in their structure. Findings The findings suggest that the level of analysis of quality costs is related to the date of publication, the business sector and the origin of each research. Furthermore, it is pointed out that the most prominent prevention costs are related to suppliers’ assurance, internal audit and new product’s design and development. Appraisal costs are mostly defined by quality audits and procurement costs, while failure costs by defect/failure analysis, low quality losses, complaint investigation and concessions and warranty claims. Originality/value The present paper is a longitudinal meta-analysis of 99 quality cost papers that have been published in the last 30 years. It explores the evolution of research in quality costing, not only in relation to the cost components in use, but also in terms of date of publication, business sector and geographical origin of the studies.


2018 ◽  
Vol 20 (2) ◽  
pp. 125-148 ◽  
Author(s):  
Simon Forge ◽  
Lara Srivastava

Purpose – Tariffs for international mobile roaming (IMR) are often viewed by governments as an additional tax on international trade and on tourism. IMR customer bills may appear to be arbitrary and sometimes excessive. The purpose of this paper is therefore to set out a pragmatic approach to assessing international charges for mobile roaming, making use of a realistic cost model of the international roaming process and its cost elements, at a level that is useful to regulatory authorities and operators. Design/methodology/approach – The discussion is based on industry practices for handling voice calls and data sessions within the mobile network operator (MNO) business model, drawing on industry sources. The basic mechanism uses two common constructs from business analysis – business processes and use cases – to provide a simplified form of activity-based costing. This yields a model suitable for national regulatory authorities moving towards cost-based IMR tariffs. Findings – Using a perspective on costs based on a bottom-up survey procedure for eliciting the key information, the paper presents the cost elements for the various IMR network components and business processes, with an approach suitable for analysing both wholesale and retail pricing. Research limitations/implications – The method is specifically designed to overcome the key problem of such approaches: the limitations set by differences in network technologies, network topology, operational scale and engineering, as well as in MNO business models and accounting practices, which would otherwise preclude the method from being vendor neutral. Practical implications – Vendor and network-engineering neutrality implies the approach can be used to compare different MNOs in terms of the validity of their IMR charges and whether they are cost based. Social implications – The societal impacts of so-called "bill shock" have become quite common, increasingly for data sessions. The cost model presented here was developed with the intention of improving the accountability and transparency of the mobile roaming market. It thus assists in the introduction of cost-based tariffs over an economic region, such as the European Union. Originality/value – The paper examines the practical implications of building large-scale cost models for assessing real IMR costs, a modelling exercise that has not been seen elsewhere in terms of its approach and its neutrality as to MNO structure and assets.
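The activity-based idea can be sketched in a few lines: cost a roamed minute by summing one cost element per business process, then derive a cost-based retail tariff from it. Activity names and unit costs below are illustrative assumptions, not industry figures.

```python
# Hypothetical cost elements of one roamed voice minute, one per business process.
activities = {
    "visited_network_conveyance": 0.010,
    "international_transit":      0.006,
    "home_network_routing":       0.008,
    "wholesale_billing_clearing": 0.002,
}

def wholesale_cost_per_minute():
    """Bottom-up wholesale cost: sum the per-activity unit costs."""
    return sum(activities.values())

def retail_tariff(margin=0.30):
    """A cost-based retail tariff: wholesale cost plus a stated margin,
    which a regulator can audit activity by activity."""
    return wholesale_cost_per_minute() * (1 + margin)

print(round(wholesale_cost_per_minute(), 3), round(retail_tariff(), 4))
```

Because each activity is costed separately, the same table can be re-filled per operator from survey data, which is what makes the comparison vendor neutral.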


2018 ◽  
Author(s):  
Ricardo Guedes ◽  
Vasco Furtado ◽  
Tarcísio Pequeno ◽  
Joel Rodrigues

The article investigates policies to help emergency-centre authorities dispatch resources so as to reduce response time, the number of unattended calls, delays in attending priority calls, and the cost of vehicle displacement. The Pareto set is shown to be the appropriate way to represent dispatch policies, since it naturally fits the challenges of multi-objective optimization. By means of the concept of Pareto dominance, a set of objectives may be ordered in a way that guides the dispatch of resources. Instead of manually trying to identify the best dispatching strategy, a multi-objective evolutionary algorithm coupled with an emergency call simulator automatically uncovers the best approximation of the optimal Pareto set, which indicates the importance of each objective and consequently the order of attendance of the calls. The validation scenario is a large metropolis in Brazil, using one year of real data from 911 calls. Comparisons are made with traditional policies proposed in the literature, as well as with other innovative policies inspired by different domains such as computer science and operational research. The results show that ranking the calls from a Pareto set discovered by the evolutionary method is a good option: it has the second-lowest waiting time, serves almost 100% of priority calls, is the second most economical, and is second in call attendance. That is to say, it is a strategy in which all four dimensions are considered without major impairment to any of them.
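Pareto dominance over the four objectives (all minimized: response time, unattended calls, priority-call delay, displacement cost) can be sketched directly; the numeric vectors below are made up for illustration.

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (all objectives are minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# (response time, unattended calls, priority delay, displacement cost)
policies = [(4, 2, 1, 9), (5, 2, 1, 9), (3, 5, 0, 7), (6, 6, 2, 10)]
print(pareto_front(policies))  # → [(4, 2, 1, 9), (3, 5, 0, 7)]
```

The front contains the trade-off candidates no policy strictly improves upon; the evolutionary algorithm's job is to approximate this set over the simulator's much larger policy space.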


2020 ◽  
Vol 33 (4/5) ◽  
pp. 323-331
Author(s):  
Mohsen Pakdaman ◽  
Raheleh Akbari ◽  
Hamid Reza Dehghan ◽  
Asra Asgharzadeh ◽  
Mahdieh Namayandeh

Purpose – For years, traditional techniques have been used for diabetes treatment. There are two major types of insulin: insulin analogs and regular insulin. Insulin analogs are similar to regular insulin but have modified pharmacokinetic and pharmacodynamic properties. The purpose of the present research was to determine the cost-effectiveness of insulin analogs versus regular insulin for diabetes control at the Yazd Diabetes Center in 2017. Design/methodology/approach – In this descriptive-analytical research, the cost-effectiveness index was used to compare insulin analogs and regular insulin (pen/vial) for the treatment of diabetes. Data were analyzed in the TreeAge software and a decision tree was constructed. A 10% discount rate was used for ICER sensitivity analysis. Cost-effectiveness was examined from a provider's perspective. Findings – QALYs were calculated to be 0.2 for diabetic patients using insulin analogs and 0.05 for those using regular insulin. The average cost was $3.228 for analog users and $1.826 for regular insulin users. An ICER of $0.093506/QALY was obtained. The present findings suggest that insulin analogs are more cost-effective than regular insulin. Originality/value – This study used a cost-effectiveness analysis to evaluate insulin analogs versus regular insulin in controlling diabetes. The results are helpful to the government in allocating more resources to the cost-effective treatment method and protecting patients with diabetes from the high cost of treatment.
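The headline comparison is a textbook incremental cost-effectiveness ratio. The sketch below applies the ICER formula to the averages quoted above; it makes no attempt to reproduce the paper's discounting or unit scaling, so the resulting number is illustrative only.

```python
def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Average cost and QALY values as quoted in the abstract:
value = icer(cost_new=3.228, cost_old=1.826, qaly_new=0.20, qaly_old=0.05)
print(round(value, 3))  # cost per additional QALY gained
```

The alternative is deemed cost-effective when this ratio falls below the decision maker's willingness-to-pay threshold per QALY.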

