A cooperative model for inter-municipal management of separate waste collection: an application of the Shapley value

Author(s):  
Vitoantonio Bevilacqua
Francesca Intini
Silvana Kuhtz

In this work we estimate and allocate the costs of separate waste collection in an inter-municipal area located in the province of Bari (Italy). The analysis promotes cooperation among municipalities in managing the waste collection service optimally. Under Italian law, municipalities are responsible for organizing the management of municipal waste according to principles of transparency, efficiency, effectiveness and economy. We therefore build a model of separate waste collection management that highlights the different cost functions. The total cost of the service is divided among the individual municipalities using the theory of cooperative games, on the premise that local authorities have no interest in paying more than they would if they organized the service independently. To this end we construct a model that aggregates quantitative information on equipment and specialized personnel (and their costs). The cost allocation problem is interpreted as a transferable-utility game and solved with the Shapley value technique, with the resulting values lying in the nucleolus of the inter-municipal game. It is therefore more cost-effective to entrust a single operator with waste collection for each area or sub-domain, so as not to duplicate service costs. This work can complement the studies and applications of cooperative game theory in the environmental field.

Key words: Separate waste collection, Shapley values, cost allocation.
JEL classifications: Q53.
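As a concrete illustration of the allocation technique, the sketch below computes Shapley values for a three-municipality cost game in Python. The coalition costs are invented for demonstration; the paper's actual cost functions for equipment and personnel are not reproduced here.

```python
from itertools import permutations

# Illustrative coalition costs for three municipalities A, B, C (invented
# figures, not the study's data): joint collection shows economies of scale.
cost = {
    frozenset(): 0.0,
    frozenset("A"): 100.0,
    frozenset("B"): 120.0,
    frozenset("C"): 90.0,
    frozenset("AB"): 190.0,
    frozenset("AC"): 165.0,
    frozenset("BC"): 180.0,
    frozenset("ABC"): 250.0,  # grand coalition: one operator for the whole area
}

def shapley(players, cost):
    """Average each player's marginal cost over all orders of joining."""
    value = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            value[p] += cost[coalition | {p}] - cost[coalition]
            coalition = coalition | {p}
    return {p: v / len(orders) for p, v in value.items()}

alloc = shapley(["A", "B", "C"], cost)
print(alloc)                # {'A': 80.83, 'B': 98.33, 'C': 70.83} (rounded)
print(sum(alloc.values()))  # 250.0: the allocation exactly covers the joint cost
```

Note that each municipality's share is below its stand-alone cost (100, 120 and 90 respectively): this is the individual-rationality property the abstract refers to, whereby no local authority pays more by cooperating than it would on its own.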

2009
Vol 1 (4)
pp. 286
Author(s):
Nikki Turner
Paul Rouse
Stacey Airey
Helen Petousis-Harris

INTRODUCTION: Childhood immunisation is one of the most cost-effective activities in health care. However, New Zealand (NZ) has failed to achieve national coverage targets. NZ general practice is the primary site of service delivery and is funded on a fee-for-service basis for delivering immunisation events. AIM: To determine the average cost to a general practice of delivering childhood immunisation events and to develop a cost model for the typical practice. METHODS: A purposeful selection of 24 diverse practices provided data via questionnaires and a daily log kept over a week. Costs were modelled using activity-based costing. RESULTS: The mean time spent on an immunisation activity was 23.8 minutes, with 90.7% of all staff time provided by practice nurses. Only 2% of the total time recorded was spent on opportunistic childhood immunisation activities. Practice nurses spent 15% of their total work time on immunisation activity. The mean estimated cost per vaccination event was $25.90, with considerable variability across practices. A ‘typical practice’ model was developed to better understand costs at different levels of activity. CONCLUSIONS: The current immunisation benefit subsidy is considerably lower than the cost of a standard vaccination event, although costs vary widely across practices. Delivery costs exceeding the subsidy may be one reason why so little time is spent on opportunistic activities, and may act as a barrier to increased efforts to raise immunisation rates. KEYWORDS: Immunisation; vaccination; patient care management; cost analysis; cost allocation
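The activity-based costing step can be illustrated with a minimal sketch: each recorded activity is costed as staff time multiplied by an hourly rate, plus consumables and an overhead share. All figures below are illustrative placeholders, not data from the study.

```python
# Minimal activity-based costing sketch for one vaccination event.
# Times, rates, consumables and overhead are assumed placeholders,
# not figures reported by the study.

activities = [
    # (activity, minutes, hourly staff cost in NZ$)
    ("recall/reminder administration", 4.0, 30.0),
    ("pre-vaccination assessment",     6.0, 38.0),  # practice nurse
    ("vaccine administration",         8.0, 38.0),
    ("recording and follow-up",        5.8, 38.0),
]
CONSUMABLES = 2.50        # syringes, swabs, etc. (assumed)
OVERHEAD_PER_MIN = 0.35   # facility/admin overhead allocation (assumed)

def event_cost(activities):
    """Allocate costs to the event in proportion to measured activity time."""
    staff = sum(minutes / 60.0 * rate for _, minutes, rate in activities)
    minutes_total = sum(minutes for _, minutes, _ in activities)
    return staff + CONSUMABLES + OVERHEAD_PER_MIN * minutes_total

minutes_total = sum(m for _, m, _ in activities)
print(f"{minutes_total:.1f} min, NZ${event_cost(activities):.2f} per event")
```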


Author(s):  
John J. Batteh
Michael M. Tiller

In an effort to improve quality, shorten engine development times, and reduce costly and time-consuming experimental work, analytic modeling is being used upstream in the product development process to evaluate engine robustness to noise factors. This paper describes a model-based method for evaluating engine NVH (Noise, Vibration, and Harshness) robustness under manufacturing variation for a statistically significant engine population. A brief discussion of the cycle simulation model and its capabilities is included. The methodology consists of Monte Carlo simulations over several noise factors to obtain the crank-angle-resolved response of the combustion process, followed by Fourier analysis of the resulting engine torque. Further analysis of the Fourier results yields additional insight into the relative importance of, and sensitivity to, the individual noise factors. While the cost and resources required to evaluate a large engine population experimentally can be prohibitive, analytical modeling proved a cost-effective way of analyzing engine robustness while accounting for manufacturing process capability.
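A minimal sketch of that workflow in Python, with a toy torque surrogate standing in for the detailed cycle-simulation model; the noise-factor spreads (per-cylinder work and timing scatter) and the torque shape are assumptions chosen only to illustrate the Monte Carlo and Fourier steps.

```python
import numpy as np

rng = np.random.default_rng(0)

# Crank angle over one four-stroke cycle (720 deg), 4-cylinder engine.
theta = np.linspace(0.0, 720.0, 1440, endpoint=False)

def torque_trace(work_scale, phase_err_deg):
    """Toy crank-angle-resolved torque for one randomly drawn engine build."""
    tq = np.zeros_like(theta)
    for cyl in range(4):
        firing = 180.0 * cyl + phase_err_deg[cyl]  # nominal firing every 180 deg
        tq += work_scale[cyl] * np.maximum(
            np.sin(np.radians(theta - firing)), 0.0) ** 2
    return tq

n_runs = 1000
harmonics = []
for _ in range(n_runs):
    work = rng.normal(1.0, 0.03, size=4)   # ~3% cylinder-to-cylinder work spread (assumed)
    phase = rng.normal(0.0, 1.0, size=4)   # ~1 deg timing scatter (assumed)
    spectrum = np.abs(np.fft.rfft(torque_trace(work, phase))) / theta.size
    harmonics.append(spectrum[:8])         # keep the low torque orders

harmonics = np.array(harmonics)
print("mean of low-order torque harmonics:", harmonics.mean(axis=0).round(4))
print("std  of low-order torque harmonics:", harmonics.std(axis=0).round(4))
```

The spread of each harmonic across the simulated population is what a robustness assessment would examine: a noise factor that inflates the standard deviation of a low torque order is a likely NVH contributor.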


2017
Vol 81 (1)
pp. 129
Author(s):
Natàlia Sant
Eglantine Chappuis
Conxi Rodríguez-Prieto
Montserrat Real
Enric Ballesteros

Here we compare the applicability, the information provided and the cost-benefit of three sampling methods commonly used in the study of rocky benthic assemblages. For comparative purposes, sampling was performed seasonally and along a depth gradient (0-50 m) in the Cabrera Archipelago (western Mediterranean). The destructive scraping (collection) method was the least cost-effective but provided the best qualitative and quantitative information. The in situ visual method was the most time-effective but provided low taxonomic resolution, and its accuracy decreased with depth because nitrogen narcosis, reduced light and cold make species increasingly difficult to recognize in situ. The photoquadrat method showed intermediate values of cost-effectiveness and information but was not suitable for multilayered assemblages, as it only captured the overstory. A canonical correspondence analysis identified depth as the main environmental gradient (16.0% of variance) for all three methods. However, differences due to the sampling method (7.9% of variance) were greater than differences due to temporal variability (5.8% of variance), suggesting that all three methods are valid but that their selection has to be carefully assessed in relation to the targeted assemblages and the specific goals of each study.


2001
Vol 7 (1-2)
pp. 133-150
Author(s):
Milena Peršić
Marino Turčić

The goal of this paper is to present the development level achieved by managerial accounting in the Croatian hotel industry. The research was conducted on a sample of 42% of all Croatian hotels, representative with respect to regional structure, organisational status, size, ownership structure and organisational form. The predominance of traditional cost-allocation methods shows that accounting is still oriented towards external users. In most cases, results from previous periods are used as the comparative value in the control of actual costs. The research has shown that financial statements are prepared on a monthly basis for top management. Additional effort must be put into improving cost-allocation methods and techniques by implementing modern systems and methods of management accounting. Only then can accounting information be presented to lower levels of management on a weekly or daily basis. Managerial accounting is the framework for developing responsibility accounting, which is based on the specifics of the hospitality industry and the accounting standards of the “Uniform System of Accounts for the Lodging Industry” (USALI). Reporting systems should be adjusted to the information required by each level of management (especially for current and special-purpose reports). The research indicates the need for more intensive implementation of USALI at the level of responsibility centres. This would ensure the transparency of management information and its comparability. Individual goals should be in accordance with the global goal, and results should be comparable with those of similar hotel enterprises. This would ultimately improve the overall competitiveness of a particular hotel enterprise on the global tourist market.


2017
Vol 114 (8)
pp. 1874-1879
Author(s):
Daniel Sznycer
Laith Al-Shawaf
Yoella Bereby-Meyer
Oliver Scott Curry
Delphine De Smet
...  

Pride occurs in every known culture, appears early in development, is reliably triggered by achievements and formidability, and causes a characteristic display that is recognized everywhere. Here, we evaluate the theory that pride evolved to guide decisions relevant to pursuing actions that enhance valuation and respect for a person in the minds of others. By hypothesis, pride is a neurocomputational program tailored by selection to orchestrate cognition and behavior in the service of: (i) motivating the cost-effective pursuit of courses of action that would increase others’ valuations and respect of the individual, (ii) motivating the advertisement of acts or characteristics whose recognition by others would lead them to enhance their evaluations of the individual, and (iii) mobilizing the individual to take advantage of the resulting enhanced social landscape. To modulate how much to invest in actions that might lead to enhanced evaluations by others, the pride system must forecast the magnitude of the evaluations the action would evoke in the audience and calibrate its activation proportionally. We tested this prediction in 16 countries across 4 continents (n = 2,085), for 25 acts and traits. As predicted, the pride intensity for a given act or trait closely tracks the valuations of audiences, local (mean r = +0.82) and foreign (mean r = +0.75). This relationship is specific to pride and does not generalize to other positive emotions that coactivate with pride but lack its audience-recalibrating function.


2021
Vol 21 (1)
Author(s):
Owain D. Williams
Judith A. Dean
Anna Crothers
Charles F. Gilks
Jeff Gow

Abstract Background The study aimed to estimate the comparative costs per positive diagnosis of previously undetected HIV under three testing regimes: conventional, parallel and point-of-care (POC) testing. The regimes are analysed in six testing settings in Australia, where infection is concentrated but prevalence is low. Methods A cost model was developed to highlight the trade-offs between test and economic efficiency from a provider perspective. First, the number of tests needed to find one true (previously undiagnosed) positive was estimated. Second, the average cost per positive diagnosis was estimated for the whole of population (WoP) and for men who have sex with men (MSM); third, these were aggregated into the total cost of diagnosing all undetected infections. Results Parallel testing is as effective as conventional testing but more economically efficient. POC testing provides two significant advantages over conventional testing: it screens out negatives effectively at comparatively lower cost and, with confirmatory testing of reactive results, loses no efficiency. The average and total costs per detection in the WoP are prohibitive, except for Home Self Testing. Diagnosis in MSM is cost-effective in all settings, but especially with Home Self Testing when the individual assumes the cost of testing. Conclusions This study illustrates the trade-offs between economic and test efficiency and their interaction with population prevalence. The efficient testing regimes and settings are presently underfunded or not funded in Australia. Home Self Testing has the potential to dramatically increase testing rates at very little cost.
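The first two steps of the cost model lend themselves to a back-of-envelope sketch: the expected number of tests per true positive is the reciprocal of the undiagnosed prevalence, and the average cost per diagnosis adds confirmatory testing of reactives. All prices and prevalence figures below are assumptions for illustration, not the paper's estimates.

```python
# Back-of-envelope version of the cost model's first two steps; all prices
# and prevalence figures are illustrative assumptions, not the paper's data.

def tests_per_true_positive(undiag_prevalence, sensitivity=1.0):
    """Expected number of screening tests to find one undiagnosed infection."""
    return 1.0 / (undiag_prevalence * sensitivity)

def avg_cost_per_diagnosis(undiag_prevalence, screen_cost,
                           confirm_cost=0.0, reactive_rate=None):
    """Average cost per positive: screen everyone, confirm the reactives."""
    n = tests_per_true_positive(undiag_prevalence)
    confirm = confirm_cost * (reactive_rate or undiag_prevalence) * n
    return n * screen_cost + confirm

# Whole of population vs MSM (assumed undiagnosed-prevalence figures)
for group, prev in [("WoP", 0.0002), ("MSM", 0.01)]:
    cost = avg_cost_per_diagnosis(prev, screen_cost=25.0, confirm_cost=80.0)
    print(f"{group}: ~{tests_per_true_positive(prev):,.0f} tests, "
          f"~${cost:,.0f} per diagnosis")
```

Even with invented numbers the structure of the result matches the abstract: at very low background prevalence the cost per diagnosis runs into six figures, while in a higher-prevalence group the same regime is orders of magnitude cheaper.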


Author(s):  
James F. Mancuso

IBM PC compatible computers are widely used in microscopy for applications ranging from control to image acquisition and analysis. The choice of IBM PC based systems over competing computer platforms can be based on technical merit alone or on a number of factors relating to economics, availability of peripherals, management dictum, or simple personal preference. The IBM PC got a strong “head start” by first dominating clerical, document processing and financial applications. The use of these computers spilled into the laboratory, where the DOS based IBM PC replaced minicomputers. Compared to a minicomputer, the PC provided a more cost-effective platform for applications in numerical analysis, engineering and design, instrument control, image acquisition and image processing. In addition, the site-wide use of a common PC platform could reduce the cost of training and support services relative to cases where many different computer platforms were used. This could be especially true for microscopists who must use computers in both the laboratory and the office.


Phlebologie
2007
Vol 36 (06)
pp. 309-312
Author(s):
T. Schulz
M. Jünger
M. Hahn

Summary Objective: The goal of the study was to assess the effectiveness and patient tolerability of single-session, sonographically guided, transcatheter foam sclerotherapy and to evaluate its economic impact. Patients, methods: We treated 20 patients with a total of 22 varicoses of the great saphenous vein (GSV) in Hach stage III-IV, clinical stage C2-C5, with a mean GSV diameter of 9 mm (range: 7 to 13 mm). We used 10 ml of 3% Aethoxysklerol®. Additional varicoses of the auxiliary veins of the GSV were sclerosed immediately afterwards. Results: The occlusion rate in the treated GSVs was 100% one week after therapy, as demonstrated by duplex sonography. The cost of the procedure was €207.91 including the follow-up visit, with an average loss of working time of 0.6 days. After one year, one patient showed clinical signs of recurrent varicosis in the GSV; duplex sonography showed reflux in the region of the saphenofemoral junction in a total of seven patients (32% of the treated GSVs). Conclusion: Transcatheter foam sclerotherapy of the GSV is a cost-effective, safe method of treating varicoses of the GSV and broadens the spectrum of therapeutic options. Relapses can be re-treated inexpensively with sclerotherapy.


2019
Vol 2 (4)
pp. 260-266
Author(s):
Haru Purnomo Ipung
Amin Soetomo

This research proposes a model to assist the design of the data architecture and data analytics needed to support talent forecasting amid the accelerating changes in economy, industry and business driven by the pace of technological change. Emerging and re-emerging economic models are already visible, such as Industry 4.0, the platform economy, the sharing economy and the token economy, all driven by new business models and technological innovation. The increasing capability of technology to automate jobs will shift the talent pool and workforce. New business models emerge from the availability of cost-effective emerging technology and from emerging or re-emerging economic models. Both new business models and technological innovation create jobs that did not exist decades ago, and future workers will face jobs that do not exist today. A dynamic model of the inter-correlation of economy, industry, business models and talent forecasting is proposed, and a literature review was conducted as initial validation of the model.


The choice of a cost-effective method of anticorrosive protection for steel structures is an urgent and time-consuming task, given the significant number of protection methods, which differ from one another in their technological, physical, chemical and economic characteristics. To reduce the complexity of this problem, the author proposes a computational tool that can be considered a subsystem of computer-aided design, to be used at the stage of variant and detailed design of steel structures. The cost of the protective coating over the service life is adopted as the criterion of effectiveness of an anti-corrosion protection method. The analysis of existing methods of protecting steel against corrosion is performed, their applicability to the most common steel structures is established, and the expected period of effective operation of each coating is estimated. The developed computational tool makes it possible to choose the best method of protecting steel structures against corrosion, taking into account the operating conditions of the protected structure and the feasibility of applying a protective coating.
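A minimal sketch of the stated selection criterion, assuming a small catalogue of protection methods with invented unit costs, service lives and environmental suitability; the actual tool's data and decision logic are not reproduced here.

```python
import math

# Illustrative catalogue of protection methods (invented figures):
# (method, cost per m2, service life in years, suitable environments)
options = [
    ("alkyd paint system",  12.0,  8, {"indoor"}),
    ("epoxy/polyurethane",  28.0, 15, {"indoor", "industrial"}),
    ("hot-dip galvanizing", 35.0, 40, {"indoor", "industrial", "marine"}),
    ("thermal zinc spray",  35.0, 25, {"industrial", "marine"}),
]

def best_option(environment, design_life):
    """Lowest coating cost per year over the design life, counting recoats."""
    feasible = []
    for name, cost, life, envs in options:
        if environment not in envs:
            continue  # coating not suited to the operating conditions
        applications = math.ceil(design_life / life)  # recoat when it expires
        feasible.append((applications * cost / design_life, name))
    return min(feasible) if feasible else None

print(best_option("industrial", design_life=50))  # (cost per m2 per year, method)
```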

