Systematic evaluation framework and empirical study of the impacts of building construction dust on the surrounding environment

2020 ◽  
Vol 275 ◽  
pp. 122767
Author(s):  
Hui Yan ◽  
Guoliang Ding ◽  
Kailun Feng ◽  
Lei Zhang ◽  
Hongyang Li ◽  
... 


2021 ◽  
Vol 166 (1-2) ◽  
Author(s):  
Charlie Wilson ◽  
Céline Guivarch ◽  
Elmar Kriegler ◽  
Bas van Ruijven ◽  
Detlef P. van Vuuren ◽  
...  

Abstract Process-based integrated assessment models (IAMs) project long-term transformation pathways in energy and land-use systems under what-if assumptions. IAM evaluation is necessary to improve the models’ usefulness as scientific tools applicable in the complex and contested domain of climate change mitigation. We contribute the first comprehensive synthesis of process-based IAM evaluation research, drawing on a wide range of examples across six different evaluation methods including historical simulations, stylised facts, and model diagnostics. For each evaluation method, we identify progress and milestones to date, and draw out lessons learnt as well as challenges remaining. We find that each evaluation method has distinctive strengths, as well as constraints on its application. We use these insights to propose a systematic evaluation framework combining multiple methods to establish the appropriateness, interpretability, credibility, and relevance of process-based IAMs as useful scientific tools for informing climate policy. We also set out a programme of evaluation research to be mainstreamed both within and outside the IAM community.
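Of the evaluation methods the abstract names, the historical simulation test is the most mechanically concrete: run the model over a past period and score its hindcast against what actually happened. A minimal sketch of such a check, with illustrative data and a hypothetical function name (not the authors' own protocol):

```python
# Sketch of a "historical simulation" check: score an IAM hindcast of, say,
# global primary energy demand against the observed series. Data values and
# the function name are illustrative, not taken from the paper.

import numpy as np

def hindcast_scores(observed: np.ndarray, simulated: np.ndarray) -> dict:
    """Simple skill scores for a model hindcast against observations."""
    residuals = simulated - observed
    return {
        "bias": residuals.mean(),                           # systematic over-/under-projection
        "rmse": np.sqrt((residuals ** 2).mean()),           # overall error magnitude
        "mape": np.abs(residuals / observed).mean() * 100,  # scale-free percentage error
    }

# Illustrative annual series (EJ/yr), not real data.
observed  = np.array([420.0, 435.0, 450.0, 468.0, 480.0])
simulated = np.array([425.0, 441.0, 459.0, 474.0, 492.0])
print(hindcast_scores(observed, simulated))
```

Each score answers a different question: bias flags a directional tendency, RMSE the typical error size, and MAPE makes errors comparable across variables with different units.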


Land ◽  
2020 ◽  
Vol 9 (2) ◽  
pp. 60 ◽  
Author(s):  
Solomon Dargie Chekole ◽  
Walter Timo de Vries ◽  
Gebeyehu Belay Shibeshi

Land is the most vital resource on earth, from which people derive their basic needs. Several mechanisms exist to administer and manage this resource sustainably, of which the cadastral system is the prime one. The literature documents that current methods for measuring the performance of cadastral systems are inadequate, and in most developing countries systematic performance evaluation mechanisms are lacking. Ethiopia, for example, has no systematic framework for measuring and evaluating the state of its cadastral systems. This article develops an evaluation framework for measuring and evaluating the performance of urban cadastral systems in Ethiopia, based on methods that have proven successful in developed countries. It furthermore presents a set of good practices and indicators that can provide an objective basis for systematic evaluation of urban cadastral systems in Ethiopia. The study employs a desk-review research strategy and a qualitative analytical approach.


2020 ◽  
pp. 1-38
Author(s):  
Keren Zhu ◽  
Rafiq Dossani ◽  
Jennifer Bouey

Abstract The impact of the Belt and Road Initiative (BRI) on global development will be unprecedented and significant, so developmental impact evaluation is central to understanding BRI projects and making informed decisions. Compared with evaluations of individual projects and programs, evaluating the large and mega infrastructure projects under the BRI is particularly challenging and complex: it must integrate stakeholder objectives, account for social benefits and costs, and track long-term project impact. In this paper, we summarize the key drawbacks of existing BRI evaluation frameworks; propose a systematic method for eliciting an evaluation framework, based on input from BRI subject-matter experts and verified through stakeholder participation; and apply an interim evaluation framework to the Mombasa-Nairobi Standard Gauge Railway project in Kenya as a proof of concept for a comprehensive evaluation framework. In doing so, we seek to provide a tool for BRI decision makers and stakeholders to assess these projects holistically at the planning, construction, and operation stages.


2011 ◽  
pp. 3423-3430
Author(s):  
Monika Henderson ◽  
Fergus Hogarth ◽  
Dianne Jeans

E-democracy, defined in this chapter as “the use of information and communication technologies in democratic processes,” covers a range of methods by which governments and communities engage with each other. This includes a variety of activities that support public participation in democratic processes, such as electronic voting, online consultation, Web-based discussion forums, electronic petitions to Parliament, webcasting parliamentary debates over the Internet, and digital polling and surveys. E-democracy is a fairly recent and evolving field, with rapid developments at both the practical and conceptual levels. Innovative projects and initiatives are being introduced in many different countries, but this process is rarely guided by a comprehensive policy framework or informed by systematic evaluation. In 2001, an OECD review concluded that “no OECD country currently conducts systematic evaluation of government performance in providing information, consulting and engaging citizens online” (OECD, 2001, p. 4). Writers in the field have noted that evaluation of e-democracy initiatives has not developed as quickly as public debate about their potential impacts, that evaluations remain rare, and that there is no consensus on appropriate evaluation methodologies (Grönlund, n.d.). Examples of publicly available evaluations include the Scottish e-petitioner system (e.g., Malina & Macintosh, n.d.; Malina, Macintosh, & Davenport, 2001) and online consultation (e.g., Smith & Macintosh, 2001; Whyte & Macintosh, 2000, 2001). Macintosh and Whyte (2002) have produced “a tentative interdisciplinary framework of evaluation issues and criteria” for electronic consultation, and an OECD report (2003) lists evaluation issues for online engagement. Overall, however, there are few resources to date to guide evaluation in the e-democracy area.


Author(s):  
Yi Lv ◽  
Guanqiao Li ◽  
Maogui Hu ◽  
Chengdong Xu ◽  
Hongyan Lu ◽  
...  

Abstract
Background: Identifying young HIV-infected individuals who are unaware of their status is a major challenge for HIV control in China. To address this, an innovative, anonymous vending machine-based urine self-collection for HIV testing (USCT) program was implemented in 2016 in colleges across China.
Methods: From June 2016 to December 2019, 146 vending machines stocked with urine self-collection kits were deployed on 73 college campuses across 11 provinces of China. Urine samples were collected, delivered, and tested anonymously. We analyzed the return rate, reactive rate (likelihood of screening HIV-positive), testing effectiveness (the annual number of HIV-infected college students identified by USCT versus other testing methods), and the spatiotemporal relationship between USCT usage and per-college student activity derived from usage of a social networking application.
Results: Of the 5178 kits sold, 3109 (60%) samples were returned, and of these, 2933 (94%) were eligible for testing. The HIV-reactive rate was 2.3% (66/2933). Among the 34 participating Beijing colleges, the average effectiveness ratio of USCT to conventional testing methods was 0.39 (12:31). A strong spatiotemporal correlation between USCT numbers and online student activity was observed during school semesters in Beijing.
Conclusions: USCT is a powerful complement to current interventions that target at-risk students and promote HIV testing. The social networking-based evaluation framework can guide prioritization of at-risk target populations.
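The headline rates in the Results follow directly from the reported counts; a short sketch recomputing them (the study's exact definitions may differ in detail):

```python
# Recompute the USCT metrics from the counts reported in the abstract.
sold, returned, eligible, reactive = 5178, 3109, 2933, 66
usct_cases, conventional_cases = 12, 31   # Beijing colleges, per the abstract

returned_rate = returned / sold                   # 0.60 -> "60% of kits returned"
eligible_rate = eligible / returned               # 0.94 -> "94% of returns testable"
reactive_rate = reactive / eligible               # 0.0225 -> "2.3% screened reactive"
effectiveness = usct_cases / conventional_cases   # 0.39 (12:31)

print(f"returned {returned_rate:.0%}, eligible {eligible_rate:.0%}, "
      f"reactive {reactive_rate:.1%}, effectiveness ratio {effectiveness:.2f}")
```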


2016 ◽  
Vol 7 (4) ◽  
pp. 813-830 ◽  
Author(s):  
Veronika Eyring ◽  
Peter J. Gleckler ◽  
Christoph Heinze ◽  
Ronald J. Stouffer ◽  
Karl E. Taylor ◽  
...  

Abstract. The Coupled Model Intercomparison Project (CMIP) has successfully provided the climate community with a rich collection of simulation output from Earth system models (ESMs) that can be used to understand past climate changes and make projections and uncertainty estimates of the future. Confidence in ESMs can be gained because the models are based on physical principles and reproduce many important aspects of observed climate. More research is required to identify the processes that are most responsible for systematic biases and the magnitude and uncertainty of future projections so that more relevant performance tests can be developed. At the same time, there are many aspects of ESM evaluation that are well established and considered an essential part of systematic evaluation but have been implemented ad hoc with little community coordination. Given the diversity and complexity of ESM analysis, we argue that the CMIP community has reached a critical juncture at which many baseline aspects of model evaluation need to be performed much more efficiently and consistently. Here, we provide a perspective and viewpoint on how a more systematic, open, and rapid performance assessment of the large and diverse number of models that will participate in current and future phases of CMIP can be achieved, and announce our intention to implement such a system for CMIP6. Accomplishing this could also free up valuable resources as many scientists are frequently "re-inventing the wheel" by re-writing analysis routines for well-established analysis methods. A more systematic approach for the community would be to develop and apply evaluation tools that are based on the latest scientific knowledge and observational references, are well suited for routine use, and provide a wide range of diagnostics and performance metrics that comprehensively characterize model behaviour as soon as the output is published to the Earth System Grid Federation (ESGF). The CMIP infrastructure enforces data standards and conventions for model output and documentation accessible via the ESGF, additionally publishing observations (obs4MIPs) and reanalyses (ana4MIPs) for model intercomparison projects using the same data structure and organization as the ESM output. This largely facilitates routine evaluation of the ESMs, but to be able to process the data automatically alongside the ESGF, the infrastructure needs to be extended with processing capabilities at the ESGF data nodes where the evaluation tools can be executed on a routine basis. Efforts are already underway to develop community-based evaluation tools, and we encourage experts to provide additional diagnostic codes that would enhance this capability for CMIP. At the same time, we encourage the community to contribute observations and reanalyses for model evaluation to the obs4MIPs and ana4MIPs archives. The intention is to produce through the ESGF a widely accepted quasi-operational evaluation framework for CMIP6 that would routinely execute a series of standardized evaluation tasks. Over time, as this capability matures, we expect to produce an increasingly systematic characterization of models which, compared with early phases of CMIP, will more quickly and openly identify the strengths and weaknesses of the simulations. This will also reveal whether long-standing model errors remain evident in newer models and will assist modelling groups in improving their models. This framework will be designed to readily incorporate updates, including new observations and additional diagnostics and metrics as they become available from the research community.
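As an illustration of the kind of baseline diagnostic such a quasi-operational framework would execute routinely, here is a minimal sketch of an area-weighted RMSE of a model climatology against an observational reference, written with xarray. The file names are hypothetical, though the variable name tas and the obs4MIPs framing follow CMIP conventions:

```python
# Minimal sketch of a routine CMIP-style performance metric: area-weighted
# RMSE of a model's surface air temperature (tas) climatology against an
# observational reference. File paths below are hypothetical placeholders.

import numpy as np
import xarray as xr

def area_weighted_rmse(model: xr.DataArray, ref: xr.DataArray) -> float:
    """RMSE over lat/lon, weighting each grid cell by cos(latitude)."""
    sq_err = (model - ref) ** 2
    weights = np.cos(np.deg2rad(sq_err.lat))
    return float(np.sqrt(sq_err.weighted(weights).mean(("lat", "lon"))))

# Hypothetical inputs: model output and an obs4MIPs reference already on a
# common grid, each reduced to a time-mean climatology first.
model = xr.open_dataset("tas_Amon_MODEL_historical.nc")["tas"].mean("time")
ref   = xr.open_dataset("tas_obs4MIPs_reference.nc")["tas"].mean("time")
print(f"area-weighted RMSE: {area_weighted_rmse(model, ref):.2f} K")
```

The cosine weighting matters because regular lat-lon grids oversample high latitudes; without it, polar biases would dominate the score.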



