Artificial intelligence for Australian and New Zealand surgeons: is it time for us to get more involved?

2020 · Vol 90 (12) · pp. 2407–2408
Author(s): Ching‐Siang Cheng
2021 · Vol 11 (1)
Author(s): Jane Scheetz, Philip Rothschild, Myra McGuinness, Xavier Hadoux, H. Peter Soyer, ...

Abstract
Artificial intelligence technology has advanced rapidly in recent years and has the potential to improve healthcare outcomes. However, technology uptake will be largely driven by clinicians, and there is a paucity of data regarding the attitude that clinicians have to this new technology. In June–August 2019 we conducted an online survey of fellows and trainees of three specialty colleges (ophthalmology, radiology/radiation oncology, dermatology) in Australia and New Zealand on artificial intelligence. There were 632 complete responses (n = 305, 230, and 97, respectively), equating to a response rate of 20.4%, 5.1%, and 13.2% for the above colleges, respectively. The majority (n = 449, 71.0%) believed artificial intelligence would improve their field of medicine, and that medical workforce needs would be impacted by the technology within the next decade (n = 542, 85.8%). Improved disease screening and streamlining of monotonous tasks were identified as key benefits of artificial intelligence. The divestment of healthcare to technology companies and medical liability implications were the greatest concerns. Education was identified as a priority to prepare clinicians for the implementation of artificial intelligence in healthcare. This survey highlights parallels between the perceptions of different clinician groups in Australia and New Zealand about artificial intelligence in medicine. Artificial intelligence was recognized as valuable technology that will have wide-ranging impacts on healthcare.


2020 · pp. 49–63
Author(s): Luci Pangrazio, Lourdes Cardozo-Gaibisso

Cybersafety has been a mainstay of digital education since computers arrived in classrooms in the mid-1990s. Whether schools encourage students to be ‘cybersmart’ (Australia), to be ‘netsafe’ (New Zealand) or to be aware of ‘cybersecurity strategies’ (Mexico and Chile), most now devote a relatively large amount of time and money to teaching young people how to ‘stay safe’ online. In this article, we argue that it is time for schools to move beyond the cybersafety discourse and encourage students to think more critically about the digital media they use. Reporting on the digital practices of 276 pre-teens aged 7–12 years in Australia and Uruguay, we contend that the everyday digital challenges young people face are now beyond the scope of most cybersafety programs. Our findings highlight that many of the issues pre-teens are negotiating call for more nuanced and sustained educational programs that support the development of critical social media literacies. In particular, with the proliferation of mass user platforms and artificial intelligence, there is a need for schools to educate students about managing and protecting their personal data. The article concludes with a discussion of the digital learning required for young people in an increasingly datafied society.


2020 · Vol 16 (1)
Author(s): Matt Boyd, Nick Wilson

This article describes important possible scenarios in which rapid advances in artificial intelligence (AI) pose multiple risks, including risks to democracy and of inter-state conflict. In parallel with other countries, New Zealand needs policies to monitor, anticipate and mitigate global catastrophic and existential risks from advanced new technologies. A dedicated policy capacity could translate emerging research and policy options into the New Zealand context. It could also identify how New Zealand could best contribute to global solutions. It is desirable that the potential benefits of AI are realised, while the risks are mitigated to the greatest extent possible.


2021
Author(s): Laura Butler

Artificial intelligence is being embedded into home devices, and these have the potential to be useful tools in the classroom. Voice assistant devices such as Google Home or Alexa can respond to verbal instructions and answer questions using the Internet of Things, web-scraping or native programming. This research explores student use of voice assistant devices in the context of two senior primary school classrooms in New Zealand. A socio-material approach is taken, examining the devices in existing classroom environments and how the children use these devices without teacher prompting. The research is framed within the Technology Acceptance Model 2 (Venkatesh et al., 2003). Students’ perceptions of the devices’ usefulness and ease of use, and the subjective norm and social impact of using the devices in each classroom environment, are discussed. The research questions examined were what and how students ask the devices, and how accurate the devices are in answering their enquiries. Data were gathered for two case studies from device transcripts over six weeks and from teacher interviews. Findings suggest that the students found the devices usable, useful and interesting to challenge and explore. Reliable responses to basic literacy, numeracy and social studies enquiries were recorded; however, the ability of the devices to understand student enquiries was variable, and the devices were limited by a lack of pedagogical techniques and knowledge of learner needs. Evident in the data were students’ social use, perseverance and anthropomorphism of the devices. The implications of this research are that voice-activated artificial intelligence devices can support learners in classroom environments by promoting perseverance, independence and social learning.


2018 · Vol 14 (3)
Author(s): Matt Boyd, Nick Wilson

Human civilisation faces a range of existential risks, including nuclear war, runaway climate change and superintelligent artificial intelligence run amok. As we show here with calculations for the New Zealand setting, large numbers of currently living and, especially, future people are potentially threatened by existential risks. A just process for resource allocation demands that we consider future generations but also account for solidarity with the present. Here we consider the various ethical and policy issues involved and make a case for further engagement with the New Zealand public to determine societal values towards future lives and their protection.


AI Magazine · 2010 · Vol 31 (3) · p. 125
Author(s): R. Charles Murray, Hans W. Guesgen

The 23rd International Florida Artificial Intelligence Research Society Conference (FLAIRS-23) was held May 19–21, 2010, at The Shores Resort & Spa in Daytona Beach Shores, Florida, USA. The conference featured an exciting lineup of invited speakers, a general conference track on artificial intelligence research, and numerous special tracks. The conference chair was David Wilson from the University of North Carolina at Charlotte. The program co-chairs were R. Charles Murray from Carnegie Learning and Hans W. Guesgen from Massey University in New Zealand. The special tracks coordinator was Philip McCarthy from the University of Memphis.

