Some decision-making processes are uncomfortable. Many of us do not like to make significant decisions, such as whether to have a child, solely based on social science research. We do not like to choose randomly, even in cases where flipping a coin is plainly the wisest choice. We are often reluctant to defer to another person, even if we believe that the other person is wiser, and have similar reservations about appealing to powerful algorithms. And, while we are comfortable with considering and weighing different options, there is something strange about deciding solely on a purely algorithmic process, even one that takes place in our own heads.

What is the source of our discomfort? We do not present a decisive theory here—and, indeed, the authors have clashing views over some of these issues—but we lay out the arguments for two (consistent) explanations. The first is that such impersonal decision-making processes are felt to be a threat to our autonomy. In all of the examples above, it is not you who is making the decision, it is someone or something else. This is to be contrasted with personal decision-making, where, to put it colloquially, you "own" your decision, though of course you may be informed by social science data, recommendations of others, and so on. A second possibility is that such impersonal decision-making processes are not seen as authentic, where authentic decision making is one in which you intentionally and knowledgeably choose an option in a way that is "true to yourself." Such decision making can be particularly important in contexts where one is making a life-changing decision of great import, such as the choice to emigrate, start a family, or embark on a major career change.
With the publication of their 1983 PODC paper, Kanellakis and Smolka pioneered the development of efficient algorithms for deciding behavioral equivalence of concurrent and distributed processes, especially bisimulation equivalence. Bisimulation is the cornerstone of the process-algebraic approach to modeling and verifying concurrent and distributed systems. They also presented complexity results showing that certain behavioral equivalences are computationally intractable. Collectively, their results founded the subdiscipline of algorithmic process theory and established bridges between the European research community, whose focus at the time was on process theory, and that of the US, which had a rich tradition in algorithm design and computational complexity but to which process theory was largely unknown.
The article considers the essence of deferred tax assets and liabilities and their reflection in the system of accounts and registers in a historical context. A periodization of the formation and development of the problem of deferred taxes in Ukraine is carried out using normative and historical methods of cognition. The distinction between permanent and temporary differences between tax profit (loss) and accounting profit (loss) is described. The approach to accounting for deferred taxes and their place in enterprise reporting using an algorithmic process is generalized. A detailed description of the current state of accounting for deferred taxes is provided from the viewpoint of Ukrainian accounting standard 17 "Income Tax". Conclusions are drawn on the possibility of further research into eliminating the methodological difficulties in allocating certain tax differences to the temporary or permanent category.
Objective: To identify the best-suited cephalometric parameter for assessing sagittal skeletal discrepancy in the Indian population. Design: An in vitro, observational, single-blinded, retrospective study. Setting: Department of Orthodontics and Dentofacial Orthopaedics. Methods: A total of 94 lateral cephalograms were used in this study. The study involved one key person and two examiners. The key person collected the radiographs, then coded, analysed and classified them into three groups (skeletal classes I, II and III). Subsequently, the coded radiographs were independently analysed by the two examiners, who classified the cases by matching a minimum of 6 out of 11 parameters. On completion of diagnosis by the examiners, the samples were decoded and matched with the original diagnosis given by the key person. Samples in which the identification by a particular cephalometric parameter matched the key person's original evaluation were regarded as correctly diagnosed. The number of correctly assessed cases was used to judge the diagnostic performance of all the parameters across all cases. Cross-validation of the method was performed, and a diagnostic algorithm was developed. Results: The β angle and Pi angle showed a positive predictive value of 1 in both skeletal class I and class II cases. The ANB angle, W angle and HBN angle showed a positive predictive value of 1 in skeletal class III cases. Conclusion: No single cephalometric parameter can independently be used to diagnose sagittal skeletal discrepancy in all cases. However, a conclusive diagnosis of the type of sagittal skeletal malocclusion can be made using a simple, easy-to-use diagnostic algorithmic process combining several cephalometric parameters.
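A minimal sketch of the majority-matching rule described above: a case is assigned to a skeletal class only when at least 6 of the 11 cephalometric parameters vote for that class. The parameter names and the example votes below are illustrative placeholders, not the study's actual measurements or thresholds.

```python
# Hypothetical sketch of the "match at least 6 of 11 parameters" rule.
# Each parameter is assumed to have already been reduced to a class vote.
from collections import Counter

def classify_case(parameter_votes, threshold=6):
    """parameter_votes: dict mapping parameter name -> 'I' | 'II' | 'III'.
    Returns the majority class if it has >= threshold votes, else None."""
    counts = Counter(parameter_votes.values())
    skeletal_class, n_votes = counts.most_common(1)[0]
    return skeletal_class if n_votes >= threshold else None  # None = inconclusive

# Illustrative votes for one case (parameter names are placeholders):
votes = {
    "ANB": "II", "Beta": "II", "Pi": "II", "W": "II",
    "HBN": "II", "Wits": "II", "YEN": "I", "AB_plane": "II",
    "APP_BPP": "I", "AF_BF": "II", "Tau": "II",
}
print(classify_case(votes))
```

With 9 of 11 parameters voting class II, the rule returns "II"; if no class reaches the threshold, the case is flagged inconclusive, consistent with the abstract's point that no single parameter suffices.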
In practical robotic construction work, such as laying bricks and painting walls, obstructing objects are encountered and motion planning is needed to prevent collisions. This paper first introduces the background and results of existing work on motion planning and describes the two most mainstream methods: the potential field method and the sampling-based method. It then presents how to use the probabilistic roadmap approach for motion planning on a 6-axis robot. An example of a real bricklaying job shows how to obtain point clouds and increase computation speed by customizing collision detection and ignoring unnecessary calculations. Several methods of smoothing paths are presented, and the smoothed paths are re-checked to ensure their validity. Finally, the flow of the whole work is presented and some possible directions for future work are suggested. The significance of this paper is to confirm that relatively fast motion planning can be achieved by an improved algorithmic process in Grasshopper.
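To make the sampling-based idea concrete, here is a minimal probabilistic roadmap (PRM) sketch in a 2D configuration space with a single circular obstacle. The workspace bounds, sample count, connection radius, and obstacle are invented for illustration; a real 6-axis robot plans in joint space with a proper collision checker, as in the paper's Grasshopper setup.

```python
# Minimal PRM: sample free configurations, connect nearby ones with
# collision-free straight segments, then search the roadmap with Dijkstra.
import math, random, heapq

OBSTACLES = [((5.0, 5.0), 2.0)]  # (center, radius) circles, invented demo data

def collides(p):
    """A configuration collides if it lies inside any obstacle circle."""
    return any(math.dist(p, c) <= r for c, r in OBSTACLES)

def segment_free(a, b, steps=20):
    """Check a straight segment by sampling intermediate points."""
    return all(not collides((a[0] + (b[0] - a[0]) * t / steps,
                             a[1] + (b[1] - a[1]) * t / steps))
               for t in range(steps + 1))

def prm(start, goal, n_samples=300, radius=3.0, seed=1):
    random.seed(seed)
    nodes = [start, goal] + [
        p for p in ((random.uniform(0, 10), random.uniform(0, 10))
                    for _ in range(n_samples)) if not collides(p)]
    edges = {i: [] for i in range(len(nodes))}
    for i in range(len(nodes)):
        for j in range(i + 1, len(nodes)):
            d = math.dist(nodes[i], nodes[j])
            if d <= radius and segment_free(nodes[i], nodes[j]):
                edges[i].append((j, d))
                edges[j].append((i, d))
    # Dijkstra from node 0 (start) to node 1 (goal).
    dist, prev, pq = {0: 0.0}, {}, [(0.0, 0)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == 1:
            break
        if d > dist.get(u, math.inf):
            continue
        for v, w in edges[u]:
            if d + w < dist.get(v, math.inf):
                dist[v], prev[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    if 1 not in dist:
        return None  # roadmap did not connect start and goal
    path, u = [1], 1
    while u != 0:
        u = prev[u]
        path.append(u)
    return [nodes[i] for i in reversed(path)]

path = prm((1.0, 1.0), (9.0, 9.0))
```

The returned waypoint list would then be fed to a smoothing step and re-checked for collisions, mirroring the path-smoothing and re-detection stages the abstract describes.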
"Wasan" is the collective name given to a set of mathematical texts written in Japan in the Edo period (1603–1867). These documents represent a unique type of mathematics and amalgamate the mathematical knowledge of a time and place where major advances were reached. For these reasons, Wasan documents are considered to be of great historical and cultural significance. This paper presents a fully automatic algorithmic process that first detects the kanji characters in Wasan documents and subsequently classifies them using deep learning networks. We pay special attention to the results concerning one particular kanji character, the "ima" kanji, as it is of special importance for the interpretation of Wasan documents. As our database is made up of manual scans of real historical documents, it presents scanning artifacts in the form of image noise and page misalignment. First, we use two preprocessing steps to ameliorate these artifacts. Then we use three different blob detector algorithms to determine which parts of each image belong to kanji characters. Finally, we use five deep learning networks to classify the detected kanji. All the steps of the pipeline are thoroughly evaluated, and several options are compared for the kanji detection and classification steps. As ancient kanji databases are rare and often include relatively few images, we explore the possibility of using modern kanji databases for kanji classification. Experiments are run on a dataset containing 100 Wasan book pages. We compare the performance of three blob detector algorithms for kanji detection, obtaining a 79.60% success rate with 7.88% false positive detections. Furthermore, we study the performance of five well-known deep learning networks and obtain 99.75% classification accuracy for modern kanji and 90.4% for classical kanji. Finally, our full pipeline obtains 95% correct detection and classification of the "ima" kanji with 3% false positives.
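As an illustration of the detection stage, the following stdlib-only sketch locates candidate character regions as connected components in a binarized page image. The paper uses dedicated blob detector algorithms (and deep networks for the classification stage); this toy version, with an invented tiny "page", only conveys the idea of turning dark-pixel regions into bounding boxes for later classification.

```python
# Connected-component "blob" detection over a binarized image.
# image: 2D list of 0/1, where 1 marks an ink pixel.
def find_blobs(image, min_pixels=2):
    """Return bounding boxes (r0, c0, r1, c1) of 4-connected components
    with at least min_pixels pixels (tiny blobs are treated as noise)."""
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    boxes = []
    for r in range(rows):
        for c in range(cols):
            if image[r][c] and not seen[r][c]:
                stack, pixels = [(r, c)], []
                seen[r][c] = True
                while stack:  # flood fill one component
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and image[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if len(pixels) >= min_pixels:
                    ys = [p[0] for p in pixels]
                    xs = [p[1] for p in pixels]
                    boxes.append((min(ys), min(xs), max(ys), max(xs)))
    return boxes

page = [
    [0, 1, 1, 0, 0, 0],
    [0, 1, 1, 0, 0, 1],
    [0, 0, 0, 0, 0, 1],
]
print(find_blobs(page))  # two candidate character regions
```

Each bounding box would then be cropped and passed to a classifier network, matching the detect-then-classify pipeline the abstract describes.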
Many parameters affect the timeliness of student graduation, from the student's interest in certain majors and the type of class chosen to the grades obtained each semester. These are determining factors in whether students graduate on time at the end of their education. A model is therefore needed to predict on-time graduation rates, using alumni data obtained from several universities in Palembang City. The model used is the Naïve Bayes algorithm, which serves as the classification model. The dataset is alumni data collected from several universities, and the attributes used are the department, college, class type, interim GPA (IP) values from semesters 1 through 4, graduation year, and college cohort. Using these attributes and this model, the researchers used the Python 3 programming language and Jupyter Notebook to process the prepared dataset. The dataset is split 70% for training data and 30% for testing data. To test the algorithmic process, the researchers used K-Fold Cross-Validation. The result of this study is the accuracy of the prediction model: the accuracy obtained with the Python 3 programming language and the Naïve Bayes algorithm is 0.8103.
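The described workflow (fit a Naïve Bayes classifier, split the data 70/30, score accuracy on the held-out portion) can be sketched without external libraries as follows. The feature names and toy records are invented placeholders, not the actual alumni dataset, and the simple categorical model with Laplace smoothing stands in for whichever Naïve Bayes variant the study used.

```python
# Categorical Naive Bayes with Laplace smoothing, plus a 70/30 split.
import math, random
from collections import Counter, defaultdict

def fit_nb(rows, labels):
    """Count class priors and per-class, per-feature value frequencies."""
    classes = Counter(labels)
    counts = defaultdict(Counter)  # (class, feature_idx) -> value counts
    for row, y in zip(rows, labels):
        for i, v in enumerate(row):
            counts[(y, i)][v] += 1
    return classes, counts, len(labels)

def predict_nb(model, row):
    """Pick the class maximizing log prior + smoothed log likelihoods."""
    classes, counts, n = model
    best, best_lp = None, -math.inf
    for y, cy in classes.items():
        lp = math.log(cy / n)
        for i, v in enumerate(row):
            c = counts[(y, i)]
            lp += math.log((c[v] + 1) / (cy + len(c) + 1))  # Laplace smoothing
        if lp > best_lp:
            best, best_lp = y, lp
    return best

random.seed(0)
# Invented toy records: (department, class_type, GPA_band) -> graduation label.
data = ([(("CS", "regular", "high"), "on_time")] * 30
        + [(("CS", "evening", "low"), "late")] * 30
        + [(("MGMT", "regular", "high"), "on_time")] * 20
        + [(("MGMT", "evening", "low"), "late")] * 20)
random.shuffle(data)
split = int(0.7 * len(data))  # 70% train / 30% test, as in the abstract
train, test = data[:split], data[split:]
model = fit_nb([r for r, _ in train], [y for _, y in train])
acc = sum(predict_nb(model, r) == y for r, y in test) / len(test)
```

In practice one would repeat this scoring across K folds (the abstract's K-Fold Cross-Validation) rather than relying on a single split; libraries such as scikit-learn provide both the classifier and the cross-validation loop.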
Sustainable urban transformation increasingly relies upon technicities of computation and interoperability among variegated registers and domains. In contrast, the notion of an “urban majority,” first introduced by the authors nearly a decade ago, points to a different “mathematics” of combination. Here the ways in which different economic practices, demeanors, behavioral tactics, forms of social organization, territory, and mobility intersect and detach, coalesce into enduring cultures of inhabitation or proliferate as momentary occupancies of short-lived situations make up a kind of algorithmic process that continuously produces new functions and new values for individual and collective capacities, backgrounds, and ways of doing things. This capacity, albeit facing new vulnerabilities and recalibration, will become increasingly important in shaping urban change in a post-pandemic era.
Introduction of intraday measures for regulating runoff and incoming water flow rates from the Nizhny Novgorod HPP, and their discharge through the spillway of the Nizhny Novgorod low-pressure hydroelectric complex, requires a specific algorithm for the dispatcher actions of the created hydraulic system. At the same time, there are serious difficulties in predicting the water regime over time. As previous studies have shown, water discharges are highly uneven and irregular not only within a single day, but also in the same periods of each day, week, month, and year. This article analyzes the boundary conditions for introducing runoff-regulation measures; a mathematical model and an algorithm for solving the problem of intraday regulation are developed, describing the sequence of actions for "smoothing" the flow rates supplied to the lower stream of the Nizhny Novgorod low-pressure hydroelectric complex. The proposed measures are implemented according to a three-stage (or two-stage) schedule for regulating the flow rate and water level. They will improve the hydraulic and hydrological conditions of the downstream reach of the Nizhny Novgorod low-pressure hydroelectric complex, so that the depths necessary for navigation will be reached. Conditions have also been created to mitigate erosion processes.
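As a hedged illustration of what intraday "smoothing" under a staged schedule might look like, the sketch below replaces an uneven hourly discharge series with a three-stage piecewise-constant schedule that preserves the total daily volume. The stage boundaries and the input series are invented examples; the paper's actual mathematical model of the hydraulic system is considerably more detailed.

```python
# Three-stage piecewise-constant smoothing of an hourly discharge series.
def three_stage_schedule(hourly_flows, boundaries=(8, 18)):
    """Replace each stage [0,b0), [b0,b1), [b1,24) with its mean flow,
    so the schedule is piecewise constant but daily volume is conserved."""
    b0, b1 = boundaries
    stages = [hourly_flows[:b0], hourly_flows[b0:b1], hourly_flows[b1:]]
    return [sum(s) / len(s) for s in stages for _ in s]

# Invented uneven hourly discharge series (m^3/s) for one day:
flows = [100] * 6 + [400] * 4 + [250] * 8 + [500] * 4 + [150] * 2
smoothed = three_stage_schedule(flows)
```

Averaging within stages is only one possible smoothing rule; the point is that the dispatcher trades hour-to-hour variability for a small number of stable stage levels while releasing the same daily volume downstream.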