Algorithmenkompatibles Verwaltungsrecht?

2021 ◽  
Vol 54 (2) ◽  
pp. 251-272
Author(s):  
Margrit Seckelmann

The translation of law into (computer) code is currently on everyone’s lips. Lawrence Lessig’s famous dictum “Code is Law” has recently been rephrased to say that “Law” is also “Code”, meaning that the wording of laws should take their computational implementability into consideration from the outset. A starting point for such an “algorithmizability” of law is the (relatively) new section 35a of the Federal Administrative Procedure Act (Verwaltungsverfahrensgesetz), which permits “automated” decisions in specific cases. A recent paper by the Fraunhofer FOKUS institute, entitled “Recht Digital”, takes this further and suggests that one need only find the appropriate, unambiguous terms, each corresponding to an unequivocal legal meaning, for law to become, as it were, programmable. But this is exactly where the problem arises: legal language is a multi-addressee language, one that addresses a professional audience as well as citizens (mostly laypeople). It is, moreover, context dependent. The current hype around the “algorithmization” of legal terminology also obscures the fact that this is a fundamental problem of legal language, discussed from the 1960s to the 1980s under the paradigms of “legal/administrative automation” (Rechts-/Verwaltungsautomation) and legal cybernetics (Rechtskybernetik). How, then, can we approach the problem of the context dependency of law under the new paradigm of algorithmization? The contribution “Algorithmenkompatibles Verwaltungsrecht? Juristische und sprachwissenschaftliche Überlegungen zu einer ‚Standardisierung von Rechtsbegriffen‘” presents different approaches to creating an “algorithm-compatible” legal language. Ultimately, however, no matter how sophisticated the technical methods become, they cannot displace the problem of democratic deliberation: the fundamental decisions about an algorithmization of legal language must be made by the directly democratically legitimised legislature. “Context” and “text” thus enter into a relationship of mutual dependence, because there is no text without context.
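A minimal rules-as-code sketch may make this tension concrete. The predicate names below are illustrative assumptions, not terms taken from section 35a or from the paper: the clear-cut part of such a rule encodes easily, while the context-dependent judgments are exactly what resists formalization.

```python
# Hypothetical sketch of a "rules-as-code" check in the spirit of § 35a VwVfG:
# fully automated decisions are permissible only where a statute allows them
# and no human judgment (discretion, margin of appraisal) is involved.
# The field names below are illustrative assumptions, not statutory terms.

from dataclasses import dataclass

@dataclass
class Case:
    statute_permits_automation: bool   # explicit statutory authorization
    involves_discretion: bool          # "Ermessen" -- requires human judgment
    involves_appraisal_margin: bool    # "Beurteilungsspielraum" -- context-dependent

def may_decide_automatically(case: Case) -> bool:
    """Encodes only the clear-cut part of the rule; deciding whether a legal
    term is 'unambiguous' in a given context cannot be reduced to a boolean
    flag, which is where the paper locates the need for deliberation."""
    return (case.statute_permits_automation
            and not case.involves_discretion
            and not case.involves_appraisal_margin)

print(may_decide_automatically(Case(True, False, False)))  # True: automatable
print(may_decide_automatically(Case(True, True, False)))   # False: human decision
```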

2021 ◽  
pp. 1-12
Author(s):  
Joonkoo Park ◽  
Sonia Godbole ◽  
Marty G. Woldorff ◽  
Elizabeth M. Brannon

Abstract: Whether and how the brain encodes discrete numerical magnitude differently from continuous nonnumerical magnitude is hotly debated. In a previous set of studies, we orthogonally varied numerical (numerosity) and nonnumerical (size and spacing) dimensions of dot arrays and demonstrated a strong modulation of early visual evoked potentials (VEPs) by numerosity and not by nonnumerical dimensions. Although very little is known about the brain's response to systematic changes in continuous dimensions of a dot array, some authors intuit that the visual processing stream must be more sensitive to continuous magnitude information than to numerosity. To address this possibility, we measured VEPs of participants viewing dot arrays that changed exclusively in one nonnumerical magnitude dimension at a time (size or spacing) while holding numerosity constant and compared this to a condition where numerosity was changed while holding size and spacing constant. We found reliable but small neural sensitivity to exclusive changes in size and spacing; however, changing numerosity elicited a much more robust modulation of the VEPs. Together with previous work, these findings suggest that sensitivity to magnitude dimensions in early visual cortex is context dependent: The brain is moderately sensitive to changes in size and spacing when numerosity is held constant, but sensitivity to these continuous variables diminishes to a negligible level when numerosity is allowed to vary at the same time. Neurophysiological explanations for the encoding and context dependency of numerical and nonnumerical magnitudes are proposed within the framework of neuronal normalization.
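As an illustration of the stimulus logic described above (an assumed parameterization, not the authors' actual stimulus code), dot arrays can be generated so that numerosity, dot size, and field spacing are set independently, letting one dimension change while the others are held constant:

```python
# Illustrative sketch (not the authors' stimulus code): generate dot arrays in
# which numerosity, dot size, and spacing (field area) vary independently, so
# an experiment can change exactly one dimension per condition.

import numpy as np

def make_dot_array(numerosity: int, dot_radius: float, field_radius: float,
                   rng: np.random.Generator, max_tries: int = 10_000):
    """Place `numerosity` non-overlapping dots of radius `dot_radius`
    uniformly inside a circular field of radius `field_radius`."""
    centers = []
    for _ in range(max_tries):
        if len(centers) == numerosity:
            break
        # Rejection-sample a point inside the field, leaving room for the dot.
        p = rng.uniform(-field_radius, field_radius, size=2)
        if np.linalg.norm(p) > field_radius - dot_radius:
            continue
        if all(np.linalg.norm(p - q) > 2 * dot_radius for q in centers):
            centers.append(p)
    return np.array(centers)

rng = np.random.default_rng(0)
# Condition A: vary numerosity, hold size and spacing constant.
small_n = make_dot_array(8,  dot_radius=5.0, field_radius=100.0, rng=rng)
large_n = make_dot_array(16, dot_radius=5.0, field_radius=100.0, rng=rng)
# Condition B: vary dot size only, numerosity and field held constant.
small_dots = make_dot_array(12, dot_radius=4.0, field_radius=100.0, rng=rng)
large_dots = make_dot_array(12, dot_radius=8.0, field_radius=100.0, rng=rng)
```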


2021 ◽  
Vol 20 (2) ◽  
pp. 267-288
Author(s):  
Katayoun Hosseinnejad

Abstract: Article 31 of the Vienna Convention on the Law of Treaties calls for consideration of the ordinary meaning as the starting point in the process of interpretation. Although the linguistic concept of ordinary meaning is founded on the idea that the meaning of a sentence is directly imposed by the norms of language so that interpreters are provided with an objective standard which is external to their subjectivity, this article demonstrates that the interpretive jurisprudence of the International Court of Justice has departed from the imperatives of the ordinary meaning doctrine. Rather, the Court, mindful of the problem that no mere sequence of words can represent actual legal meaning, has moved towards construction of ordinary meaning.


Author(s):  
Iago Richard Rodrigues ◽  
Sebastião Rogério ◽  
Judith Kelner ◽  
Djamel Sadok ◽  
Patricia Takako Endo

Many works have recently identified the need to combine deep learning with extreme learning machines to balance training speed against accuracy, especially in the domain of multimedia applications. Considering this new paradigm, namely the convolutional extreme learning machine (CELM), we present a systematic review that investigates alternative deep learning architectures that use an extreme learning machine (ELM) for faster training to solve problems based on image analysis. We detail each of the architectures found in the literature, their application scenarios, benchmark datasets, main results, and advantages, and we present the open challenges for CELM. We follow a well-structured methodology and establish relevant research questions that guide our findings. We hope that the observation and classification of such works can advance the CELM research area, providing a good starting point for coping with some of the current problems in image-based computer vision analysis.
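A minimal sketch of the ELM stage that a CELM pairs with a convolutional feature extractor may clarify why training is fast; the setup below is a generic illustration under assumed dimensions, not an architecture from the review. The hidden weights are random and fixed, so only the output weights need solving, and that happens in closed form:

```python
# Minimal ELM classifier sketch (generic illustration, not a specific CELM
# architecture from the review): random fixed hidden weights plus a
# closed-form least-squares solve for the output weights.

import numpy as np

def train_elm(features, labels, n_hidden=512, seed=0):
    """features: (n_samples, n_features) -- in a CELM these would be the
    flattened activations of a convolutional backbone.
    labels: (n_samples,) integer class labels."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((features.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(features @ W + b)            # random nonlinear projection
    T = np.eye(labels.max() + 1)[labels]     # one-hot targets
    beta = np.linalg.pinv(H) @ T             # closed-form least squares
    return W, b, beta

def predict_elm(features, W, b, beta):
    return np.argmax(np.tanh(features @ W + b) @ beta, axis=1)

# Toy usage with random stand-in "CNN features":
X = np.random.default_rng(1).standard_normal((200, 64))
y = (X[:, 0] > 0).astype(int)
W, b, beta = train_elm(X, y, n_hidden=128)
print((predict_elm(X, W, b, beta) == y).mean())  # training accuracy
```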


2018 ◽  
Vol 45 (4) ◽  
pp. 858-877 ◽  
Author(s):  
Marta ÁLVAREZ-CAÑIZO ◽  
Paz SUÁREZ-COALLA ◽  
Fernando CUETOS

Abstract: Several studies have found that, after repeated exposure to new words, children form orthographic representations that allow them to read those words faster and more fluently. However, these studies did not take into account variables related to the words. The aim of this study was to investigate the influence of sublexical variables on the formation of orthographic representations of words by Spanish children. The first experiment used pseudo-words of varying syllabic structure and syllabic frequency. The stimuli for the second experiment were formed with or without context-dependent graphemes. We found that formation of orthographic representations was influenced by syllabic structure (easier for words with simple syllabic structure) and the context-dependency of graphemes (easier in the absence of context-dependent graphemes), but not syllabic frequency. These results indicate that the easier it is to read a word, the easier it is to form an orthographic representation of it.
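A toy decoding rule (not from the study) illustrates what "context-dependent grapheme" means in Spanish: the same letter maps to different phonemes depending on the following vowel, so the grapheme cannot be decoded in isolation.

```python
# Illustrative sketch (not the study's materials) of context-dependent
# graphemes in Spanish: <c> and <g> map to different phonemes depending on
# whether the next vowel is a front vowel (e, i).

def decode_c_or_g(grapheme: str, next_char: str) -> str:
    """Toy grapheme-phoneme rule for Spanish <c> and <g>."""
    front_vowel = next_char in "ei"
    if grapheme == "c":
        return "θ" if front_vowel else "k"   # "cena" vs "casa" (Peninsular Spanish)
    if grapheme == "g":
        return "x" if front_vowel else "g"   # "gente" vs "gato"
    raise ValueError("only <c> and <g> are handled in this sketch")

print(decode_c_or_g("c", "e"))  # θ  (as in "cena")
print(decode_c_or_g("c", "a"))  # k  (as in "casa")
```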


2020 ◽  
Vol 14 (4) ◽  
pp. 585-596
Author(s):  
E. S. Romanicheva

Abstract. The Introduction to the paper raises a research question: why does a classical text need another commentary if the addressee of the commentary is a school-age reader? The question becomes a starting point in the discourse on how text commenting is used in school practice today and what new types of commentaries appear in the school lesson. Materials and methods. The search for an answer to the question raised employs methods of comparison and collation of sources to clarify the dissimilarity between the technique of commented reading and that of reading with commentaries, and to establish a conceptual difference. The main body of the paper contains a description of the research results. An analysis of the experience of E.S. Abelyuk (a teacher and methodologist from Moscow), presented in methodological publications, allowed the conclusion that the possibilities of using the technique of readers’ commenting in today’s school have significantly grown: it can be a research project, either a group project or an individual one, while the methodology developed by E.S. Abelyuk, which is essentially both research work and project-oriented work, could be used by teachers to support learners’ research and projects. The hypothesis put forward is tested by an analysis of the practices of M.A. Pavlova (a teacher from Moscow), who proposes readers’ commentary as a form of final work: learners have to write a commentary on the poetic essay ‘L.N. Tolstoy’ (little known to contemporary readers) by V.V. Nabokov. It is noteworthy that while writing their commentaries, learners should demonstrate their ability to use Internet resources; helping learners master these practices is also a contemporary tutorial objective. The final part of the paper presents a commentary on ‘One Day in the Life of Ivan Denisovich’ written by A.I. Knyazhitsky, a methodologist from Moscow, who commented on the story by drawing on the literary investigation of ‘The Gulag Archipelago’ and some other documents. In his commentary, the methodologist assumed that considering other texts via commentary creates a context in which the studied fiction is read to the maximum effect, and it is up to the learner who reads it to choose the depth of ‘plunging’ into the text. Having created the commentary on the story, A.I. Knyazhitsky splendidly accomplished the most challenging tutorial objective: to describe a technique that combines the commentary of a text with its close reading and fills it with substance. The paper demonstrates this on a specific fragment of the story. The conclusion states that commentary is one of the best-known and most effective instructional techniques for teaching reading and understanding of a literary text, but the content of the commentary on texts and the commented texts themselves can and should vary. Commentary as an instructional technique should be considerably enriched, in particular on the basis of an analysis of teachers’ best practices and their further testing and technification. Keywords: instructional technique, commented reading, close reading, reading with a commentary, types of commentaries.


2018 ◽  
Vol 115 (33) ◽  
pp. E7680-E7689 ◽  
Author(s):  
Xiaoxue Gao ◽  
Hongbo Yu ◽  
Ignacio Sáez ◽  
Philip R. Blue ◽  
Lusha Zhu ◽  
...  

Humans can integrate social contextual information into decision-making processes to adjust their responses toward inequity. This context dependency emerges when individuals receive more (i.e., advantageous inequity) or less (i.e., disadvantageous inequity) than others. However, it is not clear whether context-dependent processing of advantageous and disadvantageous inequity involves differential neurocognitive mechanisms. Here, we used fMRI to address this question by combining an interactive game that modulates social contexts (e.g., interpersonal guilt) with computational models that enable us to characterize individual weights on inequity aversion. In each round, the participant played a dot estimation task with an anonymous coplayer. The coplayer would receive pain stimulation with 50% probability when either of them responded incorrectly. At the end of each round, the participant completed a variant of the dictator game, which determined payoffs for him/herself and the coplayer. Computational modeling demonstrated the context dependency of inequity aversion: when causing pain to the coplayer (i.e., in the guilt context), participants cared more about advantageous inequity and became more tolerant of disadvantageous inequity, compared with other conditions. Consistently, neuroimaging results suggested that the two types of inequity were associated with differential neurocognitive substrates. While the context-dependent processing of advantageous inequity was associated with social- and mentalizing-related processes, involving left anterior insula, right dorsolateral prefrontal cortex, and dorsomedial prefrontal cortex, the context-dependent processing of disadvantageous inequity was primarily associated with emotion- and conflict-related processes, involving left posterior insula, right amygdala, and dorsal anterior cingulate cortex. These results extend our understanding of decision-making processes related to inequity aversion.
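The computational models alluded to are presumably of the Fehr-Schmidt inequity-aversion family; the following sketch is written under that assumption, with illustrative parameter values rather than estimates from the study:

```python
# Sketch of a Fehr-Schmidt-style inequity-aversion utility (an assumption
# about the model family, with made-up parameter values): beta weights
# advantageous inequity (guilt), alpha weights disadvantageous inequity
# (envy); context dependency means alpha and beta shift across conditions.

def inequity_utility(own: float, other: float, alpha: float, beta: float) -> float:
    advantageous = max(own - other, 0.0)     # I received more than the coplayer
    disadvantageous = max(other - own, 0.0)  # I received less than the coplayer
    return own - beta * advantageous - alpha * disadvantageous

# Toy illustration of the reported pattern: in the guilt context participants
# weighted advantageous inequity more (higher beta) and tolerated
# disadvantageous inequity more (lower alpha) than in other conditions.
print(inequity_utility(10, 6, alpha=0.5, beta=0.3))  # control context (assumed values)
print(inequity_utility(10, 6, alpha=0.2, beta=0.8))  # guilt context (assumed values)
```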


2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Claudio Baccarani ◽  
Federico Brunetti ◽  
Jacques Martin

Purpose: The purpose of this paper is to tackle the grand issue of climate change from a managerial perspective by proposing a new type of management.
Design/methodology/approach: Climate change has now been debated for many years and, in spite of different viewpoints, analyses and opinions, is a phenomenon that is accepted by all. There are thousands of studies on the nature of climate change and its consequences for the planet Earth and its inhabitants. However, there are few studies investigating the consequences of climate change for the founding tenets and practices of management. This paper aims to contribute to this facet of the issue. In the first part, it examines the main facts about climate change and their impact on businesses, and proposes an adapted model of management for agriculture, industry, services and supply chains. In the second part, it advocates a shift in paradigm from the “maximization of profit” to the “maximization of well-being” as the foundation of a new managerial philosophy that can address both climate change and sustainability.
Findings: Companies and managers are in a much better position than politicians and consumers to find a solution to climate change problems, for the very reason that they are not stupid in Cipolla’s (2011) sense. Companies and managers do have the power to rewrite the rules of the game in order to bring about a metamorphosis of the firm and of management. Starting from a return to company ownership by and for the company itself (not just external shareholders), a switch in purpose from profit-seeking to people’s well-being, fair remuneration of stakeholders, progress as a measure of success and a long-term orientation are suggested as new tenets of management.
Research limitations/implications: Although this paper has several limitations (it may be too wide in scope, utopian, and its ideas may sound unachievable and even sometimes naïve in their arguments), its starting point is very clear: the authors, as management scholars, must do something to try and stop the crash of economies and businesses in an ecological disaster. Its logic is equally clear and straightforward: if people want things to change, then they have to change the foundations of management thinking, both in theory and in practice. The authors do not claim their solution is the only one or the best: avenues for future research aimed at providing better solutions are wide open from this point of view, and the authors genuinely encourage colleagues to continue in this direction and contribute to this work. What matters most, however, is to stop looking for precise answers to “wrong, well-defined, narrow problems” and to start looking for “approximate answers to important problems” (Brown et al., 2005), as the authors have tried to do here.
Practical implications: Developing a new management operating model and foundations able to keep companies alive while not compromising mankind’s survival on planet Earth.
Originality/value: This paper addresses the Tourish (2020) challenge for purposeful research in management by providing some fresh ideas about the way companies and management principles and practices should change to prevent irreversible environmental damage.


2011 ◽  
Vol 2011 ◽  
pp. 1-11 ◽  
Author(s):  
Armando C. Marino

The BaCo code (“Barra Combustible”) was developed at the Atomic Energy National Commission of Argentina (CNEA) for the simulation of nuclear fuel rod behaviour under irradiation conditions. We present in this paper a brief description of the code and the strategy used for the development, improvement, enhancement, and validation of BaCo over the last 30 years. “Extreme case analysis”, parametric (or sensitivity) analysis, and probabilistic (or statistical) analysis, plus the analysis of fuel performance at full-core scale, are the tools developed within the structure of BaCo in order to improve the understanding of the burnup extension in the Atucha I NPP and the design of advanced fuel elements such as CARA and CAREM. The additional 3D tools of BaCo can enhance the understanding of fuel rod behaviour, fuel design, and safety margins. The modular structure of the BaCo code and its detailed coupling of thermo-mechanical and irradiation-induced phenomena make it a powerful tool for predicting the influence of material properties on fuel rod performance and integrity.
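In outline, the parametric and probabilistic analyses described might look like the following Monte Carlo sweep over uncertain inputs; the fuel model below is a simple stand-in, not BaCo's actual interface or physics:

```python
# Illustrative Monte Carlo sketch of the "probabilistic (statistical) analysis"
# strategy described for BaCo (the model and all numbers below are stand-ins):
# sample uncertain fabrication/material parameters, run the performance model
# for each sample, and inspect the spread of a safety-relevant output.

import random
import statistics

def fuel_model(gap_um: float, density_frac: float, power_kw_m: float) -> float:
    """Stand-in for a fuel-rod performance code: returns a notional peak
    fuel temperature (degC). The formula is purely illustrative."""
    return 400 + 12 * power_kw_m + 0.8 * gap_um + 500 * (0.96 - density_frac)

random.seed(0)
samples = []
for _ in range(5000):
    gap = random.gauss(80.0, 8.0)     # pellet-cladding gap, micrometres
    rho = random.gauss(0.955, 0.005)  # fuel density, fraction of theoretical
    q = random.gauss(30.0, 1.5)       # linear power, kW/m
    samples.append(fuel_model(gap, rho, q))

print(f"mean = {statistics.mean(samples):.0f} degC, "
      f"stdev = {statistics.stdev(samples):.0f} degC, "
      f"max = {max(samples):.0f} degC")  # extreme-case check
```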

