How smart cities are made: A priori, ad hoc and post hoc drivers of smart city implementation in Sydney, Australia

Urban Studies ◽  
2021 ◽  
pp. 004209802098629
Author(s):  
Robyn Dowling ◽  
Pauline McGuirk ◽  
Sophia Maalsen ◽  
Jathan Sadowski

Recent geographical attention to smart places has underlined the key point that smart places are made: crafted incrementally over time and woven through existing sites and contexts. Work on analysing the crafting of ‘actually existing’ smart cities has turned to describing and characterising the processes through which smart cities are made and, within this, the interplay and relative significance of accidental versus purposeful smart cities have come to the fore. Drawing on the concept of dispositif to capture the simultaneity of piecemeal and opportunistic change with deliberate strategy, this paper furthers these debates using examples of two places within the Sydney Metropolitan Region, Australia: Newcastle and Parramatta. Through their analysis we identify the evolving interplay of a priori drivers, ad hoc initiatives and post hoc strategies evident in the crafting of smart cities. Understanding the emergence of actually existing smart cities, we conclude, is sharpened and strengthened by the concept of dispositif, through its attention to processes characterised by non-linear, overlapping and recursively combined drivers that are not without purposeful, strategic intent.

Author(s):  
Reuth Mirsky ◽  
William Macke ◽  
Andy Wang ◽  
Harel Yedidsion ◽  
Peter Stone

In ad hoc teamwork, multiple agents need to collaborate without a priori knowledge of their teammates or their plans. A common assumption in this research area is that the agents cannot communicate. However, just as two random people may speak the same language, autonomous teammates may also happen to share a communication protocol. This paper considers how such a shared protocol can be leveraged, introducing a means to reason about Communication in Ad Hoc Teamwork (CAT). The goal of this work is to enable improved ad hoc teamwork by judiciously leveraging the team's ability to communicate. We situate our study within a novel CAT scenario involving tasks with multiple steps, where teammates' plans are unveiled over time. In this context, the paper proposes methods to reason about the timing and value of communication and introduces an algorithm for an ad hoc agent to leverage these methods. Finally, we introduce a new multiagent domain, the tool fetching domain, and study how varying this domain's properties affects the usefulness of communication. Empirical results show the benefits of explicit reasoning about communication content and timing in ad hoc teamwork.
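The abstract above does not spell out the algorithm, but the core idea of weighing the value of communication against its cost can be illustrated with a small sketch. The code below is a hedged toy example, not the paper's CAT algorithm: the goal names, cost matrix and query cost are invented, and the ad hoc agent simply queries its teammate whenever the expected cost saved by knowing the teammate's goal exceeds the cost of asking.

```python
# Toy illustration (not the paper's algorithm): decide whether an ad hoc
# agent should query its teammate, by comparing the expected cost of acting
# under uncertainty with the expected cost after a (costly) query.

def expected_cost_without_query(belief, cost):
    # Act on the most likely goal; pay a penalty whenever the guess is wrong.
    best_guess = max(belief, key=belief.get)
    return sum(p * cost[(best_guess, true_goal)]
               for true_goal, p in belief.items())

def expected_cost_with_query(belief, cost, query_cost):
    # After a (truthful) answer, the agent acts on the true goal.
    return query_cost + sum(p * cost[(true_goal, true_goal)]
                            for true_goal, p in belief.items())

def should_query(belief, cost, query_cost):
    return expected_cost_with_query(belief, cost, query_cost) \
           < expected_cost_without_query(belief, cost)

# Hypothetical numbers: two candidate goals for the teammate, a cost matrix
# cost[(acted_on, true_goal)], and a communication cost of 1 step.
belief = {"station_A": 0.6, "station_B": 0.4}
cost = {("station_A", "station_A"): 2, ("station_A", "station_B"): 8,
        ("station_B", "station_B"): 2, ("station_B", "station_A"): 8}
print(should_query(belief, cost, query_cost=1))  # True: the query pays off here
```

In this simplified setting, delaying the query corresponds to re-running the same comparison at later steps with an updated belief, which is one way to think about the timing question the paper raises.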


BJS Open ◽  
2021 ◽  
Vol 5 (Supplement_1) ◽  
Author(s):  
Chan Hee Koh ◽  
Danyal Z Khan ◽  
Ronneil Digpal ◽  
Hugo Layard Horsfall ◽  
Hani J Marcus ◽  
...  

Abstract Introduction The clinical practice and research in the diagnosis and management of Cushing’s disease remain heterogeneous and challenging to this day. We sought to establish the characteristics of Cushing’s disease, and the trends in diagnosis, management and reporting in this field. Methods Searches of PubMed and Embase were conducted. The study protocol was registered a priori. Random-effects analyses were conducted to establish numerical estimates. Results Our screening returned 159 papers. The average age of patients with Cushing’s disease was 39.3 years for adults and 13.6 years for children. The male:female ratio was 1:3. Eight per cent of patients had undergone previous transsphenoidal resection. The ratio of macroadenomas:microadenomas:imaging-undetectable adenomas was 18:53:29. The most commonly reported preoperative biochemical investigations were serum cortisol (average 26.4 µg/dL) and ACTH (77.5 pg/dL). Postoperative cortisol was most frequently used to define remission (74.8%), most commonly with a threshold of 5 µg/dL (44.8%). The average remission rate was 77.8%, with a recurrence rate of 13.9%. Median follow-up was 38 months. The majority of papers reported age (81.9%) and sex (79.4%). Only 56.6% reported whether their patients had undergone previous pituitary surgery. 45.3% reported whether the adenomas were macroadenomas, microadenomas or undetectable. Only 24.1% reported preoperative cortisol, and this did not improve over time. 60.4% reported numerical thresholds for cortisol in defining remission, and this improved significantly over time (p = 0.004). Visual inspection of bubble plots showed an increasing preference for the 5 µg/dL threshold. 70.4% reported the length of follow-up. Conclusion We quantified the characteristics of Cushing’s disease, and analysed the trends in investigation and reporting. This review may help to inform future efforts to form guidelines for research and clinical practice.
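The abstract notes that random-effects analyses were used to derive the pooled estimates but does not describe the model. Purely as an illustration of that kind of pooling, and not the authors' code, the sketch below applies a DerSimonian-Laird random-effects model to a handful of hypothetical per-study remission proportions; the study counts and proportions are invented.

```python
import numpy as np

def dersimonian_laird(estimates, variances):
    """Pool per-study estimates with a DerSimonian-Laird random-effects model."""
    est = np.asarray(estimates, dtype=float)
    var = np.asarray(variances, dtype=float)
    w = 1.0 / var                                   # fixed-effect weights
    fixed = np.sum(w * est) / np.sum(w)
    q = np.sum(w * (est - fixed) ** 2)              # Cochran's Q
    df = len(est) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                   # between-study variance
    w_star = 1.0 / (var + tau2)                     # random-effects weights
    pooled = np.sum(w_star * est) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se, tau2

# Hypothetical remission proportions and their within-study variances p(1-p)/n.
p = np.array([0.72, 0.81, 0.78, 0.69])
n = np.array([60, 110, 85, 40])
pooled, se, tau2 = dersimonian_laird(p, p * (1 - p) / n)
print(f"pooled remission rate ~ {pooled:.3f} (SE {se:.3f}, tau^2 {tau2:.4f})")
```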


2017 ◽  
Vol 21 (4) ◽  
pp. 308-320 ◽  
Author(s):  
Mark Rubin

Hypothesizing after the results are known, or HARKing, occurs when researchers check their research results and then add or remove hypotheses on the basis of those results without acknowledging this process in their research report (Kerr, 1998). In the present article, I discuss 3 forms of HARKing: (a) using current results to construct post hoc hypotheses that are then reported as if they were a priori hypotheses; (b) retrieving hypotheses from a post hoc literature search and reporting them as a priori hypotheses; and (c) failing to report a priori hypotheses that are unsupported by the current results. These 3 types of HARKing are often characterized as being bad for science and a potential cause of the current replication crisis. In the present article, I use insights from the philosophy of science to present a more nuanced view. Specifically, I identify the conditions under which each of these 3 types of HARKing is most and least likely to be bad for science. I conclude with a brief discussion about the ethics of each type of HARKing.


1980 ◽  
Vol 3 (1) ◽  
pp. 111-132 ◽  
Author(s):  
Zenon W. Pylyshyn

Abstract The computational view of mind rests on certain intuitions regarding the fundamental similarity between computation and cognition. We examine some of these intuitions and suggest that they derive from the fact that computers and human organisms are both physical systems whose behavior is correctly described as being governed by rules acting on symbolic representations. Some of the implications of this view are discussed. It is suggested that a fundamental hypothesis of this approach (the “proprietary vocabulary hypothesis”) is that there is a natural domain of human functioning (roughly what we intuitively associate with perceiving, reasoning, and acting) that can be addressed exclusively in terms of a formal symbolic or algorithmic vocabulary or level of analysis. Much of the paper elaborates various conditions that need to be met if a literal view of mental activity as computation is to serve as the basis for explanatory theories. The coherence of such a view depends on there being a principled distinction between functions whose explanation requires that we posit internal representations and those that we can appropriately describe as merely instantiating causal physical or biological laws. In this paper the distinction is empirically grounded in a methodological criterion called the “cognitive impenetrability condition.” Functions are said to be cognitively impenetrable if they cannot be influenced by such purely cognitive factors as goals, beliefs, inferences, tacit knowledge, and so on. Such a criterion makes it possible to empirically separate the fixed capacities of mind (called its “functional architecture”) from the particular representations and algorithms used on specific occasions. In order for computational theories to avoid being ad hoc, they must deal effectively with the “degrees of freedom” problem by constraining the extent to which they can be arbitrarily adjusted post hoc to fit some particular set of observations. This in turn requires that the fixed architectural function and the algorithms be independently validated. It is argued that the architectural assumptions implicit in many contemporary models run afoul of the cognitive impenetrability condition, since the required fixed functions are demonstrably sensitive to tacit knowledge and goals. The paper concludes with some tactical suggestions for the development of computational cognitive theories.


2015 ◽  
Vol 27 (4) ◽  
pp. 369-388 ◽  
Author(s):  
Jang B. Singh

Purpose – The purpose of this paper was to examine changes in the contents of Canadian corporate codes of ethics over a period of two decades from an institutionalization perspective. Design/methodology/approach – The paper tracks changes in the contents of the codes of large Canadian corporations longitudinally by analyzing their contents at two points over two decades, in 1992 and 2012. In particular, the paper tests three hypotheses related to the institutionalization of codes. Findings – It was found that the codes have become more prescriptive, are more concerned with social responsibility and are more likely to identify their moral and legal authority. Overall, the findings support an institutional interpretation of the observed changes. Research limitations/implications – While large corporations are critical in establishing new and innovative management practices, their selection as the study population limits the generalizability of the findings. Another limitation of this paper is that it used an a priori determined set of items to analyze the contents of the codes; while this was needed to facilitate the comparison across time, it also meant that some important items were not clearly identified. Originality/value – Codes of ethics are the foundation of ethics programs in corporations, and their contents could be critical in the development of a culture of ethics. This paper makes a valuable contribution to research on business ethics by analyzing the codes of ethics of the largest corporations in Canada at two points over two decades. The need to track changes in corporate codes of ethics over time has been advocated by several researchers, but longitudinal studies in this area are rare.


10.2196/15819 ◽  
2020 ◽  
Vol 22 (4) ◽  
pp. e15819
Author(s):  
William Collinge ◽  
Robert Soltysik ◽  
Paul Yarnold

Background Personal health informatics have the potential to help patients discover personalized health management strategies that influence outcomes. Fibromyalgia (FM) is a complex chronic illness requiring individualized strategies that may be informed by analysis of personal health informatics data. An online health diary program with dynamic feedback was developed to assist patients with FM in identifying symptom management strategies that predict their personal outcomes, and its use has been associated with reduced symptom levels. Objective The aim of this study was to determine longitudinal associations between program use and the functional impact of FM as measured by scores on a standardized assessment instrument, the Fibromyalgia Impact Questionnaire (FIQ). Methods Participants self-identified as diagnosed with FM and were recruited via online FM advocacy websites. Participants used an online health diary program (“SMARTLog”) to report symptom ratings, behaviors, and management strategies used. Based on single-subject analysis of the accumulated data over time, individualized recommendations (“SMARTProfile”) were then provided by the automated feedback program. Indices of program use comprised the cumulative numbers of SMARTLogs completed and SMARTProfiles received. Participants included in this analysis met a priori criteria of sufficient program use to generate SMARTProfiles (ie, ≥22 SMARTLogs completed). Users completed the FIQ at baseline and again each subsequent month of program use as follow-up data for analysis. Kendall tau-b, a nonparametric statistic that measures both the strength and direction of an ordinal association between two repeatedly measured variables, was computed between all included FIQ scores and both indices of program use for each subject at the time of each completed FIQ. Results A total of 76 users met the a priori use criteria. The mean baseline FIQ score was 61.6 (SD 14.7). There were 342 FIQ scores generated for longitudinal analysis via Kendall tau-b. Statistically significant inverse associations were found over time between FIQ scores and (1) the cumulative number of SMARTLogs completed (tau-b=–0.135, P<.001); and (2) the cumulative number of SMARTProfiles received (tau-b=–0.133, P<.001). Users who completed 61 or more SMARTLogs had a mean follow-up score of 49.9 (n=25, 33% of the sample), an 18.9% drop in FM impact. Users who generated 11 or more new SMARTProfiles had a mean follow-up score of 51.8 (n=23, 30% of the sample), a 15.9% drop. Conclusions Significant inverse associations were found between FIQ scores and both indices of program use, with FIQ scores declining as use increased. Based on established criteria for rating FM severity, the top one-third of users in terms of use had clinically significant reductions from “severe” to “moderate” FM impact. These findings underscore the value of self-management interventions with low burden, high usability, and high perceived relevance to the user. Trial Registration ClinicalTrials.gov NCT02515552; https://clinicaltrials.gov/ct2/show/NCT02515552
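The longitudinal association reported here, Kendall tau-b between a user's FIQ scores and their cumulative program use, can be reproduced in outline with standard statistical tooling. The sketch below is not the study's analysis code: the monthly FIQ scores and cumulative SMARTLog counts are hypothetical values for a single illustrative user.

```python
# Minimal sketch (hypothetical data, not the study's analysis): Kendall tau-b
# between a user's monthly FIQ scores and cumulative SMARTLogs completed.
from scipy.stats import kendalltau

fiq_scores = [62, 58, 55, 51, 49, 47]        # monthly FIQ scores, baseline onward
logs_completed = [0, 12, 25, 41, 58, 73]     # cumulative SMARTLogs at each FIQ

tau_b, p_value = kendalltau(fiq_scores, logs_completed)  # tau-b is the default variant
print(f"tau-b = {tau_b:.3f}, p = {p_value:.3f}")         # negative tau-b: FIQ falls as use rises
```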


2020 ◽  
Vol 2 (1) ◽  
pp. 364-382
Author(s):  
Taylor Coyne ◽  
Maria de Lourdes Melo Zurita ◽  
David Reid ◽  
Veljko Prodanovic

Abstract Historic relationships between communities and waterscapes are complex and often explained solely in technical terms. There is a key need to understand how human-centered developments have shifted the use of river spaces over time, and how these changes reflect on the values of rivers and surrounding cultures. In this paper, we develop a critical analysis of the historically changing relationship between urban communities and water infrastructures using the Georges River catchment in Sydney, Australia. Our focus was on bringing together past and current perspectives, engaging with the formation of diverse hydrosocial behaviors entangled with water infrastructures. Using post-settlement historical documents, maps, journals, and newspaper articles, we trace shifts in hydrosocial perspectives over time, mapping six distinct historic phases. In our study, we offer a shift from the main paradigms currently influencing the development of urban water infrastructures, moving away from the dominant technical propositions of systems designed purely for the management and treatment of stormwater. Drawing on our analysis, we propose a new urban water design concept: Culturally Inclusive Water Urban Design (CIWUD). This presents an advancement on current frameworks to include a consideration of people's connections to and uses of urban waterscapes, as well as a shift towards democratic space design.


2016 ◽  
Vol 16 (7) ◽  
pp. 1657-1672 ◽  
Author(s):  
Adeline Delonca ◽  
Thierry Verdel ◽  
Yann Gunzburger

Abstract. To date, many rockfall hazard assessment methods still rely on qualitative observations within their analysis. Based on this observation, knowledge and expertise are assumed to be major parameters of rockfall assessment. To test this hypothesis, an experiment was carried out to evaluate the influence of knowledge and expertise on rockfall hazard assessment. Three populations with different levels of expertise were selected: (1) students in geosciences, (2) researchers in geosciences and (3) confirmed experts. These three populations evaluated the rockfall hazard level on the same site using two different methods: the Laboratoire des Ponts et Chaussées (LPC) method and a method partly based on the "slope mass rating" (SMR) method. To complement the analysis, each population was also asked to complete an "a priori" assessment of the rockfall hazard, without using any method. The LPC method is the most widely used method in France for official hazard mapping. It combines two main indicators: the predisposition to instability and the expected magnitude. Conversely, the SMR method was used as an ad hoc quantitative method to investigate the effect of quantification within a method. These procedures were applied on a test site divided into three different sectors. A statistical treatment of the results (descriptive statistical analysis, chi-square independence test and ANOVA) shows that the method used has a significant influence on the rockfall hazard assessment, whatever the sector. However, the level of expertise of the population has no significant influence in sectors 2 and 3. In sector 1, there is a significant influence of the level of expertise, explained by the importance of the temporal probability assessment in the rockfall hazard assessment process. The SMR-based method appears highly sensitive to the "site activity" indicator and exhibits an important dispersion in its results. However, the results are more similar to one another with the LPC qualitative method, even in the case of sector 1.
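The statistical treatment mentioned above pairs a chi-square test of independence (hazard rating versus rater population) with ANOVA. As an illustration of the chi-square step only, the sketch below runs the test on an invented contingency table of hazard-level ratings by level of expertise; the counts are not the study's data.

```python
# Illustrative only (invented counts): chi-square test of independence between
# assessed hazard level and the raters' level of expertise.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: students, researchers, experts; columns: low / moderate / high hazard ratings.
counts = np.array([[12,  9,  4],
                   [ 8, 11,  6],
                   [ 5, 10, 10]])

chi2, p_value, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")
```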


Author(s):  
Qutaiba Ibrahim Ali ◽  
Mustafa Siham Qassab

Abstract: In the last few decades, Information and Communication Technology (ICT) has been introduced with the aim of bringing more comfort to human life by integrating smartness into everyday objects, giving rise to the idea of the smart city. Guaranteeing the well-being of residents and assessing industry and urban planning from an ecological and sustainable perspective are the main goals of the smart city. The Aerial Ad Hoc Network (AANET) concept brings great potential to public and civil domains, especially in applications that are risky to human lives. Like any emerging technology, AANET comes with many challenges that must be overcome before it can be employed efficiently. In this paper, we present a detailed survey of the current literature, standards, and projects on self-organizing AANETs in smart cities. We also aim to provide a thorough overview of this active research area by identifying the features, design characteristics, architectures, routing protocols, and security aspects relevant to the design and implementation of self-organizing AANETs. Furthermore, we discuss existing solutions, indicate assessment metrics along with current applications, and finally highlight the main research directions for further development. This article surveys the work done on outstanding AANET-related issues, with the intention of encouraging further research in this field.


Secret Wars ◽  
2018 ◽  
pp. 99-141
Author(s):  
Austin Carson

This chapter analyzes foreign combat participation in the Spanish Civil War. Fought from 1936 to 1939, the war hosted covert interventions by Germany, Italy, and the Soviet Union. The chapter leverages variation in intervention form among those three states, as well as variation over time in the Italian intervention, to assess the role of escalation concerns and limited war in the use of secrecy. Adolf Hitler's German intervention provides especially interesting support for a theory on escalation control. An unusually candid view of Berlin's thinking suggests that Germany managed the visibility of its covert “Condor Legion” with an eye toward the relative power of domestic hawkish voices in France and Great Britain. The chapter also shows the unique role of direct communication and international organizations. The Non-Intervention Committee, an ad hoc organization that allowed private discussions of foreign involvement in Spain, helped the three interveners and Britain and France keep the war limited in ways that echo key claims of the theory.

