A Preliminary Study for Development of AMA about 10 Years Old Child Level - Focused on Development Process of AMA (Artificial Moral Agent) -

2017 ◽  
(57) ◽  
pp. 105-127
Author(s):  
김은수 ◽  
김지원 ◽  
InJae Lee ◽  
변순용

2019 ◽  
Vol 20 (11) ◽  
pp. 2237-2242
Author(s):  
Gyun-Yeol Park


Water Policy ◽  
2014 ◽  
Vol 17 (2) ◽  
pp. 208-227 ◽  
Author(s):  
Sangeun Lee ◽  
Toshio Okazumi ◽  
Youngjoo Kwak

This study examines possibilities and challenges in the current development of global flood disaster risk indicators (GFDRIs). To this end, methodological requirements are first identified from stakeholders' opinions voiced during the post-2015 UN Development Agenda process and the post-2015 Hyogo Framework for Action process. State-of-the-art methods are then applied, as a preliminary attempt, to fourteen countries in Asia to examine how plausibly the GFDRI estimates describe the number of affected people and fatalities under a 50-year return period. The results show that GFDRIs can overcome the unavailability of data needed to analyze flood inundation depths and areas, describe the number of people affected by flood events, use vulnerability proxies that are contextually meaningful for understanding why flood fatalities occur disproportionately in less developed countries, and remain simple, understandable, and transparent. At the same time, there is still considerable room for technical improvement, especially regarding the reluctance to assign a single value to an indicator over an area as large as a country, limited access to authorized disaster records, difficulties in capturing the effectiveness of infrastructure such as dams and dykes, and a lack of local knowledge about vulnerability.
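The abstract does not spell out the estimation formulas, but a minimal sketch of how a composite indicator of this kind might combine exposure and a vulnerability proxy is given below. The function names, the multiplicative exposure-times-vulnerability structure, and the 0.001 mortality constant are illustrative assumptions for this sketch, not the method actually used in the study.

    # Illustrative sketch of a simple flood-risk indicator (hypothetical
    # structure; not the authors' actual GFDRI method).

    def estimate_affected(flood_zone_population: float) -> float:
        """Affected people: everyone exposed under the 50-year return-period flood."""
        return flood_zone_population

    def estimate_fatalities(flood_zone_population: float,
                            vulnerability_proxy: float,
                            mortality_scale: float = 0.001) -> float:
        """Fatalities: exposure scaled by a 0-1 vulnerability proxy and a
        calibration constant (both values are assumptions for illustration)."""
        return flood_zone_population * vulnerability_proxy * mortality_scale

    # Hypothetical country: 2 million people in the 50-year flood zone,
    # vulnerability proxy of 0.4 (e.g. derived from development indicators).
    affected = estimate_affected(2_000_000)
    fatalities = estimate_fatalities(2_000_000, 0.4)
    print(f"affected: {affected:,.0f}, fatalities: {fatalities:,.0f}")

A vulnerability proxy of this kind is one way to express why, for the same exposed population, fatalities are estimated to be higher in less developed countries, which is the contextual point the abstract highlights.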



2017 ◽  
Vol 46 ◽  
pp. 93-129
Author(s):  
Yong-Seong Choi ◽  
Myung-Ju Chun




Author(s):  
Kevin B. Korb

The first question concerns the kinds of AI we might achieve: moral, immoral, or amoral. The second concerns the ethics of our achieving such an AI. The two are more closely related than a first glance might suggest. For much of technology, the National Rifle Association's neutrality argument might conceivably apply: "guns don't kill people, people kill people." But if we build a genuine, autonomous AI, we will arguably have built an artificial moral agent: an agent capable of both ethical and unethical behavior. The possibility that one of our artifacts might behave unethically raises moral problems for its development that no other technology does.


