The Legal Nature of Crime Prediction by Artificial Intelligence and Its Legitimacy

2021 ◽  
Vol 3 (2) ◽  
pp. 341-359
Author(s):  
Mahmoud S. Elsherif

Predicting a crime before it occurs is not a matter of divining the unseen but of probabilistic forecasting: large volumes of data are analyzed by algorithms prepared in advance for this purpose, and this modern technology, produced by artificial intelligence, has had a great impact in stopping crime early. The fight against criminality is a necessary and vital task that is renewed and developed according to the realities of each society, and the curtain has not fallen, at the same time, on the criminological theories that have always shadowed the offender, analyzing him sometimes psychologically, sometimes socially, and sometimes biologically, in order to assess his dangerousness and apply appropriate measures to prevent his return to crime. Today the algorithms that form the backbone of AI are taking on this task more precisely, more quickly, and at lower cost. The novelty of the method, however, has introduced ambiguity about its legal nature and its legality. As to its legal nature, we find that such predictions are no more than security measures falling within the duties of law enforcement officers, because the prediction of a crime necessarily precedes its commission, and therefore no inference or investigation procedures of any kind can be based on it. As to the legality of using artificial intelligence to predict crime, despite the risks it poses to the constitutional right to the protection of personal data, those risks are quickly dispelled where the legislator enacts criminal protection for that data and grants law enforcement officers the appropriate, circumscribed authority to deploy this new technology with the aim of reducing crime in the near future.
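The abstract does not describe any particular model; purely as an illustration of what "algorithms prepared in advance" for risk assessment can look like, the sketch below trains a toy risk classifier on synthetic case features. The feature names, data, and model choice are all assumptions for illustration, not the method discussed in the article.

```python
# Illustrative sketch only: a toy "crime risk" scorer of the kind the abstract
# alludes to. All feature names and data are synthetic/hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Hypothetical features per case: prior_incidents, area_crime_rate, hour_of_day
X = rng.random((500, 3))
y = (X[:, 0] * 0.6 + X[:, 1] * 0.3 + rng.normal(0, 0.1, 500) > 0.5).astype(int)

model = LogisticRegression().fit(X, y)
new_case = [[0.8, 0.7, 0.3]]
print("predicted risk score:", model.predict_proba(new_case)[0, 1])
```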

2018 ◽  
Vol 5 (4) ◽  
pp. 6-17
Author(s):  
Eugénio Oliveira

As we plant our human imprint in a new technology-driven world, we should ask, remembering Neil Armstrong in 1969, "after many small steps for AI researchers, will it result in a giant leap into the unknown for mankind?" An "Artificial Intelligence-first" world is being preached all over the media by many responsible players in the economic and scientific communities. This letter states our belief in AI's potential, including its major and decisive role in computer science and engineering, while warning against the current hyping of its near future. Although quite excited by several recent interesting revelations about the future of AI, we argue here in favor of a more cautious interpretation of the potential reach of current and future AI-based systems. We also include some personal perspectives on simple remedies for recognized possible dangers. We advocate a set of practices and principles that may prevent the development of AI-based systems prone to misuse. Accountable "data curators", appropriate software engineering specification methods, the inclusion, when needed, of the "human in the loop", and software agents with emotion-like states might be important factors leading to more secure AI-based systems. Moreover, inseminating ART in Artificial Intelligence, where ART stands for Accountability, Responsibility and Transparency, also becomes mandatory for trustworthy AI-based systems. This letter is an abbreviation of a more substantial article to be published in the IJCA journal.
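As a minimal, hypothetical sketch of the "human in the loop" practice the letter advocates, the snippet below routes low-confidence automated decisions to a human reviewer. The threshold, labels, and function names are illustrative assumptions, not taken from the letter.

```python
# Minimal human-in-the-loop gate: low-confidence automated decisions are
# deferred to a human reviewer. All names and values are illustrative.
from dataclasses import dataclass

@dataclass
class Decision:
    label: str
    confidence: float

def decide(model_output: Decision, threshold: float = 0.9) -> str:
    if model_output.confidence >= threshold:
        return f"auto-approved: {model_output.label}"
    # Defer to a human; a real system would enqueue the case for review.
    return f"escalated to human reviewer (confidence={model_output.confidence:.2f})"

print(decide(Decision("grant_request", 0.97)))
print(decide(Decision("deny_request", 0.62)))
```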


2021 ◽  
Vol 11 (3) ◽  
pp. 102
Author(s):  
Itziar Sobrino-García

The expanding use of artificial intelligence (AI) in public administration is generating numerous opportunities for governments. Current Spanish regulations have established electronic administration and support the expansion and implementation of this new technology, but they may not be adapted to the legal needs raised by AI. Consequently, this research aims to identify the risks associated with AI use in Spanish public administration and to determine whether existing legal mechanisms can address them. We answer these questions through a qualitative research approach, conducting semi-structured interviews with several experts in the field. Despite the benefits this technology may bring, this research confirms that the use of artificial intelligence can generate several problems, such as opacity, legal uncertainty, bias, and breaches of personal data protection. The mechanisms already provided by Spanish law are not enough to avoid these risks, as they were not designed to deal with the use of artificial intelligence in public administration. In addition, a homogeneous legal definition of AI needs to be established.


2020 ◽  
pp. 29-39
Author(s):  
Ineta Breskienė

This article analyses the current situation in the European Union regarding the free movement of data, the relationship between personal data and non-personal data, and their use in artificial intelligence technology. Despite the European Union's efforts to facilitate the free movement of data, some significant obstacles can currently be observed. Artificial intelligence technology faces difficulties in using data: although large amounts of data are now increasingly accessible to such technology, its ability to de-anonymize data risks turning ostensibly non-personal data into personal data and makes its use a challenge for artificial intelligence developers. The issues raised are sensitive, and some regulatory changes should be made in the near future if the European Union is to remain a leader in emerging technologies.
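To make the de-anonymization risk concrete, the toy sketch below joins an "anonymized" dataset with publicly available auxiliary data on shared quasi-identifiers, re-identifying individuals and turning the sensitive attribute back into personal data. All records are fabricated; this is only an illustration of the general linkage mechanism, not an analysis from the article.

```python
# Toy linkage sketch: joining "anonymized" records with auxiliary public data
# on quasi-identifiers (zip, birth year) can re-identify individuals.
import pandas as pd

anonymized = pd.DataFrame({
    "zip": ["1010", "1010", "2020"],
    "birth_year": [1980, 1975, 1990],
    "diagnosis": ["A", "B", "C"],          # sensitive attribute, no names stored
})
public_register = pd.DataFrame({
    "name": ["Alice", "Bob", "Carol"],
    "zip": ["1010", "1010", "2020"],
    "birth_year": [1980, 1975, 1990],
})

reidentified = anonymized.merge(public_register, on=["zip", "birth_year"])
print(reidentified[["name", "diagnosis"]])
```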


e-mentor ◽  
2021 ◽  
Vol 92 (5) ◽  
pp. 16-25
Author(s):  
Barbara Grabińska ◽  
Mariusz Andrzejewski ◽  
Konrad Grabiński

The application of computer-based technologies in academic education has at least three decades of history and experience. In some fields of study it has been present from the very beginning, while in others it has become a necessity only in recent years. The ongoing technological revolution is disrupting traditional professions with fundamental changes and, in some cases, even with the threat that jobs will disappear. The finance and accounting professions are expected to undergo technological change in the near future. While the changes are visible at the corporate level, university education seems to lag one step behind. We conducted a study among students and graduates of the finance and accounting programme at the Cracow University of Economics. Using regression analysis, we investigate the perceived usefulness of courses providing knowledge of new technologies such as Artificial Intelligence (AI). We use the distinctive Polish setting, a leading market for outsourcing services. Our findings show that both students and graduates are aware of the importance of technological change. Courses teaching core subjects remain essential, but current expectations are much higher regarding the application of new AI-based technology in finance and accounting.
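The abstract names regression analysis but not the model specification; purely as a sketch of the kind of analysis described, the example below regresses a hypothetical usefulness rating on respondent characteristics with statsmodels. The variable names and data are assumptions, not the authors' model.

```python
# Illustrative regression sketch: perceived usefulness of AI-related courses
# regressed on respondent characteristics. Data and variables are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "usefulness": rng.integers(1, 6, n),          # 1-5 Likert rating
    "is_graduate": rng.integers(0, 2, n),         # 0 = student, 1 = graduate
    "work_experience": rng.integers(0, 10, n),    # years
})
model = smf.ols("usefulness ~ is_graduate + work_experience", data=df).fit()
print(model.summary())
```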


2020 ◽  
Author(s):  
Cátia Santos-Pereira

BACKGROUND The GDPR was formally adopted in 2016, with EU member states given two years to implement it (May 2018). Given the sensitive nature of the personal data that healthcare organizations process on a 24/7 basis, it is critical that the protection of that data in a hospital environment is given the high priority that data protection legislation (the GDPR) requires. OBJECTIVE This study addresses the state of Portuguese public hospitals regarding GDPR compliance during the preparation period (2016-2018) before enforcement on 25 May 2018, and the activities that have started since then. The study focuses on three GDPR articles, namely 5, 25 and 32, covering authentication security, identity management processes and audit trails. METHODS The study was conducted between 2017 and 2019 in five Portuguese public hospitals of differing complexity. In each hospital, six categories of information systems critical to health institutions were included, covering the main health information systems available and common to hospitals (ADT, EPR, PMS, RIS, LIS and DSS). Interviews were conducted in two phases (before and after GDPR enforcement) to identify the maturity of each hospital's information systems regarding authentication security, identity management processes and traceability, and the efforts in progress to avoid security issues. RESULTS A total of 5 hospitals were included in this study. The results highlight the hospitals' privacy maturity: in general, the hospitals studied were very far from complying with the selected security measures before May 2018. Session account lockout and password history policies were the weakest controls, whereas storing passwords in encrypted form was the best-implemented one. With the enforcement of the GDPR, these hospitals started a set of initiatives to close this gap, specifically in order to make the whole process as transparent and trustworthy as possible and to avoid heavy fines. CONCLUSIONS Systems are still very far from GDPR compliance, although institutions' efforts are under way. The first step in aligning an organization with the GDPR should be an initial audit of all systems. This work contributes to the initial security audit of the hospitals included in this study.
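As a minimal sketch of the "store encrypted passwords" control that the audit checks (in practice, storing salted, slow hashes rather than plain text), the snippet below uses Python's standard library. The parameters are illustrative assumptions, not a recommendation from the study.

```python
# Salted password hashing sketch: passwords are stored as PBKDF2 digests,
# never in plain text. Iteration count and salt size are illustrative.
import hashlib, hmac, os

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("s3cret-password")
print(verify_password("s3cret-password", salt, digest))   # True
print(verify_password("wrong-guess", salt, digest))       # False
```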


2020 ◽  
Vol 11 (1) ◽  
Author(s):  
Pierre Auloge ◽  
Julien Garnon ◽  
Joey Marie Robinson ◽  
Sarah Dbouk ◽  
Jean Sibilia ◽  
...  

Abstract Objectives To assess awareness and knowledge of Interventional Radiology (IR) in a large population of medical students in 2019. Methods An anonymous survey was distributed electronically to 9546 medical students from the first to the sixth year at three European medical schools. The survey contained 14 questions, including two general questions on diagnostic radiology (DR) and artificial intelligence (AI), and 11 on IR. Responses were analyzed for all students and compared between the preclinical (PCs) (first to third year) and clinical (Cs) (fourth to sixth year) phases of medical school. Of the 9546 students, 1459 (15.3%) answered the survey. Results On the DR questions, 34.8% answered that AI is a threat to radiologists (PCs: 246/725 (33.9%); Cs: 248/734 (36%)) and 91.1% thought that radiology has a future (PCs: 668/725 (92.1%); Cs: 657/734 (89.5%)). On the IR questions, 80.8% (1179/1459) of students had already heard of IR; 75.7% (1104/1459) stated that their knowledge of IR was not as good as their knowledge of other specialties, and 80% would like more lectures on IR. Finally, 24.2% (353/1459) indicated an interest in a career in IR, with a majority of women in the preclinical phase, a trend that reverses in the clinical phase. Conclusions The development of new technology supporting advances in artificial intelligence will likely continue to change the landscape of radiology; however, medical students remain confident in the need for specialty-trained human physicians in the future of radiology as a clinical practice. A large majority of medical students would like more information about IR in their medical curriculum, and almost a quarter would be interested in a career in IR.
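As a sketch of the preclinical versus clinical comparison described above, the snippet below applies a chi-square test to the counts reported in the abstract for the "AI is a threat" question. The choice of test is an assumption for illustration; the authors' statistical methods are not stated here.

```python
# Chi-square sketch using the counts reported in the abstract:
# preclinical 246/725 vs clinical 248/734 answered that AI is a threat.
from scipy.stats import chi2_contingency

#                 "AI is a threat"   other responses
table = [[246, 725 - 246],    # preclinical (PCs)
         [248, 734 - 248]]    # clinical (Cs)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.3f}, p={p:.3f}")
```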


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Jane Scheetz ◽  
Philip Rothschild ◽  
Myra McGuinness ◽  
Xavier Hadoux ◽  
H. Peter Soyer ◽  
...  

Abstract Artificial intelligence technology has advanced rapidly in recent years and has the potential to improve healthcare outcomes. However, technology uptake will be largely driven by clinicians, and there is a paucity of data regarding clinicians' attitudes towards this new technology. In June–August 2019 we conducted an online survey on artificial intelligence among fellows and trainees of three specialty colleges (ophthalmology, radiology/radiation oncology, dermatology) in Australia and New Zealand. There were 632 complete responses (n = 305, 230, and 97, respectively), equating to response rates of 20.4%, 5.1%, and 13.2% for the above colleges, respectively. The majority (n = 449, 71.0%) believed artificial intelligence would improve their field of medicine, and that medical workforce needs would be impacted by the technology within the next decade (n = 542, 85.8%). Improved disease screening and the streamlining of monotonous tasks were identified as key benefits of artificial intelligence. The divestment of healthcare to technology companies and the medical liability implications were the greatest concerns. Education was identified as a priority to prepare clinicians for the implementation of artificial intelligence in healthcare. This survey highlights parallels between the perceptions of different clinician groups in Australia and New Zealand about artificial intelligence in medicine. Artificial intelligence was recognized as a valuable technology that will have wide-ranging impacts on healthcare.


2021 ◽  
Vol 14 (8) ◽  
pp. 339
Author(s):  
Tatjana Vasiljeva ◽  
Ilmars Kreituss ◽  
Ilze Lulle

This paper looks at public and business attitudes towards artificial intelligence, examining the main factors that influence them. The conceptual model is based on the technology–organization–environment (TOE) framework and was tested through analysis of qualitative and quantitative data. Primary data were collected by a public survey with a questionnaire specially developed for the study and by semi-structured interviews with experts in the artificial intelligence field and management representatives from various companies. This study aims to evaluate the current attitudes of the public and employees of various industries towards AI and investigate the factors that affect them. It was discovered that attitude towards AI differs significantly among industries. There is a significant difference in attitude towards AI between employees at organizations with already implemented AI solutions and employees at organizations with no intention to implement them in the near future. The three main factors which have an impact on AI adoption in an organization are top management’s attitude, competition and regulations. After determining the main factors that influence the attitudes of society and companies towards artificial intelligence, recommendations are provided for reducing various negative factors. The authors develop a proposition that justifies the activities needed for successful adoption of innovative technologies.
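Purely as an illustration of how the reported group difference could be tested (employees at organizations with implemented AI solutions versus those without), the sketch below compares hypothetical attitude ratings with a Mann-Whitney U test. The data and the choice of test are assumptions, not the authors' published analysis.

```python
# Illustrative group comparison: attitude ratings of employees at organizations
# with vs without implemented AI. Data are fabricated 1-5 ratings.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(2)
with_ai = rng.integers(3, 6, 80)       # hypothetical ratings, AI implemented
without_ai = rng.integers(1, 5, 80)    # hypothetical ratings, no AI planned

stat, p = mannwhitneyu(with_ai, without_ai)
print(f"U={stat:.1f}, p={p:.4f}")
```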


2021 ◽  
Vol 54 (6) ◽  
pp. 1-35
Author(s):  
Ninareh Mehrabi ◽  
Fred Morstatter ◽  
Nripsuta Saxena ◽  
Kristina Lerman ◽  
Aram Galstyan

With the widespread use of artificial intelligence (AI) systems and applications in our everyday lives, accounting for fairness has gained significant importance in the design and engineering of such systems. AI systems can be used in many sensitive environments to make important and life-changing decisions; thus, it is crucial to ensure that these decisions do not reflect discriminatory behavior toward certain groups or populations. More recently, some work addressing such challenges in different subdomains has been developed in traditional machine learning and deep learning. With the commercialization of these systems, researchers are becoming more aware of the biases that these applications can contain and are attempting to address them. In this survey, we investigated different real-world applications that have shown biases in various ways, and we listed different sources of bias that can affect AI applications. We then created a taxonomy of the fairness definitions that machine learning researchers have proposed to avoid existing bias in AI systems. In addition, we examined different domains and subdomains in AI, showing what researchers have observed with regard to unfair outcomes in state-of-the-art methods and the ways they have tried to address them. There are still many future directions and solutions that can be pursued to mitigate the problem of bias in AI systems. We hope that this survey will motivate researchers to tackle these issues in the near future by building on existing work in their respective fields.
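As a worked sketch of one fairness definition of the kind catalogued in such taxonomies, demographic (statistical) parity requires positive-prediction rates to be similar across groups. The data below are fabricated for illustration and the metric shown is only one of many definitions the survey covers.

```python
# Demographic parity sketch: compare positive-prediction rates across groups.
import numpy as np

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])   # model decisions (fabricated)
group  = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

rate_a = y_pred[group == "a"].mean()
rate_b = y_pred[group == "b"].mean()
print(f"P(pred=1 | group=a) = {rate_a:.2f}")
print(f"P(pred=1 | group=b) = {rate_b:.2f}")
print(f"demographic parity difference = {abs(rate_a - rate_b):.2f}")
```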

