A Quantitative Analysis of Legislation with Harsher Punishment in Japan

2021
pp. 1-27
Author(s):  
Shunsuke Kyo

Abstract: The purpose of this study is to show how the Japanese government has created laws with harsher punishment since the 1990s. While a tendency toward harsher punishment is common in advanced Western countries, a similar tendency in Japan has prompted scholarly discussion on whether it can be understood through the "penal populism" framework. That discussion, however, lacks systematic evidence. Through a quantitative analysis of legislation with harsher punishment, this study presents three findings that differ from previous studies. First, while previous literature argues that such legislation increased in the latter half of the 1990s, this study shows that it peaked in the mid-2000s. Second, while previous literature argues that bureaucrats of the Ministry of Justice promote the legislation, this study shows that it stems from bills drafted across every ministry. Third, contrary to the prediction of "penal populism" theory, the quantitative evidence shows that such legislation does not avoid partisan conflict.

2021
Vol 12
Author(s):  
Annika Fredén,
Sverker Sikström

We propose that leaders play a more important role in voters' party sympathy in proportional representation (PR) systems than previous research has suggested. In an experiment, voters from the 2018 Swedish general election were asked to describe leaders and parties with three indicative keywords each. Statistical models were fitted to these text data to predict vote choice. The results show that although voters formally vote for a party, descriptions of leaders predicted vote choice to a similar extent as descriptions of parties. The order of the questions mattered, however: the first question was more predictive than the second. These analyses indicate that voters tend to conflate characteristics of leaders with their parties during election campaigns, and that leaders are a more important aspect of voting under PR than previous literature has suggested. Overall, this suggests that statistical analysis of words sheds new light on underlying sympathies related to voting.
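A minimal sketch of how such a text-based prediction could look, assuming hypothetical keyword data and a bag-of-words logistic regression (the abstract does not specify the authors' exact statistical models):

```python
# Hypothetical sketch: predict vote choice from three-keyword descriptions.
# Data, labels, and model choice are illustrative, not the authors' setup.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Each respondent's three keywords about a party leader, joined into one string.
leader_descriptions = [
    "honest calm experienced",
    "weak vague indecisive",
    "strong clear reliable",
    "boring cautious stable",
]
vote_choice = ["party_a", "party_b", "party_a", "party_b"]  # invented labels

X = CountVectorizer().fit_transform(leader_descriptions)
model = LogisticRegression(max_iter=1000)

# Cross-validated accuracy from leader keywords; repeating the same pipeline
# on party descriptions allows the comparison reported in the abstract.
scores = cross_val_score(model, X, vote_choice, cv=2)
print("mean accuracy from leader keywords:", scores.mean())
```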


Author(s):  
Yansheng Liu

Post-translational modifications (PTMs) generate an enormous, but as yet undetermined, expansion of the expressed proteoforms. In this Viewpoint, we first differentiate the concepts of proteoform and peptidoform by reviewing and discussing previous literature. We show that current biological investigation and annotation of PTMs largely follow a PTM site-specific rather than a proteoform-specific approach. We further illustrate a potentially useful matching strategy in which a particular "modified peptidoform" is matched to the corresponding "unmodified peptidoform" as a reference for quantitative analysis between samples and conditions. We suggest this strategy could provide directly relevant information for learning the site-specific biological functions of PTMs. Accordingly, we advocate for wider use of the nomenclature "peptidoform" in future bottom-up proteomic studies.
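A toy sketch of the matching strategy described above, pairing each modified peptidoform with its unmodified counterpart and using the latter as a quantitative reference (sequences, modification labels, and intensities are hypothetical):

```python
# Hypothetical peptidoform intensities from one sample.
# Key: (peptide sequence, modification); value: measured intensity.
intensities = {
    ("ELVISK", "phospho@S4"): 1.2e6,   # modified peptidoform
    ("ELVISK", None): 4.8e6,           # matched unmodified peptidoform
}

def modified_fraction(peptide, mod, table):
    """Ratio of a modified peptidoform to its unmodified reference."""
    modified = table[(peptide, mod)]
    unmodified = table[(peptide, None)]
    return modified / unmodified

# Comparing this ratio across samples and conditions isolates PTM-level
# change from changes in the underlying peptide (and protein) abundance.
print(modified_fraction("ELVISK", "phospho@S4", intensities))  # 0.25
```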


Author(s):  
Weihang Huang,
Danqian Lyu,
Jingping Lin

As a behavior of bilingual individuals and an indispensable part of bilingual speech, code-switching has been investigated by many researchers. However, many variables influence code-switching, and each has the potential to be a confounding variable. Gender is among them; yet whether significant gender differences exist in code-switching, and what form they take, remains unknown for Mandarin-English child bilinguals, as previous literature diverges on the existence of such differences. Therefore, this paper examines potential gender differences in the amount and distribution of code-switching through a quantitative analysis of speech data from the Singapore Bilingual Corpus. The results indicate that gender differences are significant in the amount of intra-sentential code-switching. However, no considerable gender difference is observed in either the amount of inter-sentential code-switching or the environments in which code-switching occurs.
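One hedged way to operationalize the comparison, assuming per-child counts of intra-sentential switches (the counts below are invented; the abstract does not name the authors' exact test):

```python
# Hypothetical per-child counts of intra-sentential code-switches.
from scipy.stats import mannwhitneyu

intra_girls = [14, 22, 18, 25, 30]  # invented counts
intra_boys = [8, 11, 9, 15, 12]

# Non-parametric test for a gender difference in switch counts; the same
# comparison can be repeated for inter-sentential switches and environments.
stat, p = mannwhitneyu(intra_girls, intra_boys, alternative="two-sided")
print(f"U = {stat}, p = {p:.3f}")
```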


2021
Author(s):  
Nicola Righetti

Introduction: Since 2016, "fake news" has been the main buzzword for online misinformation and disinformation. The term has been widely used and discussed by scholars, leading to hundreds of publications in a few years. This report provides a quantitative analysis of the scientific literature on the topic published up to 2020.

Methods: Documents mentioning the keyword "fake news" were searched in Scopus, a large multidisciplinary scientific database. Frequency analysis of metadata and automated lexical analysis of titles and abstracts were employed to answer the research questions.

Results: 2,368 scientific documents mentioned "fake news" in the title or abstract, published by 5,060 authors in 1,225 sources. Until 2016 the number of documents mentioning the term was fewer than 10 per year; it rose suddenly in 2017 (203 documents) and increased steadily in the following years (477 in 2018, 694 in 2019, and 951 in 2020). Among the most prolific countries are the USA and European countries such as the UK, but also many non-Western countries such as India and China. Computer Science and the Social Sciences are the disciplinary fields with the largest number of documents published. Three main thematic areas emerged: computational methodologies for fake news detection, the social and individual dimension of fake news, and fake news in the public and political sphere. Ten documents have more than 200 citations, and two papers have a record number of citations (Allcott & Gentzkow, 2017; Lazer et al., 2018).

Conclusions: Research on "fake news" keeps rising, with a marked upward trend following the 2016 US presidential election. Despite having been the subject of debate and criticism, the term is still widely used. A strong methodological interest in fake news detection through machine learning algorithms emerged, which, it can be argued, can be profitably balanced by a social science approach able to unpack the phenomenon from a qualitative and theoretical point of view as well. Although dominated by the USA and other Western countries, the research landscape includes different countries of the world, thus enabling a wider and more nuanced knowledge of the problem. A constantly growing field of study like the one concerning fake news requires scholars to have a general overview of the scientific production on the topic, and systematic literature reviews can help. The variety of perspectives and topics addressed by scholars also means that future analyses will need to focus on more specific topics.
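A minimal sketch of the per-year frequency analysis reported above. In practice the input would be a Scopus metadata export; here a tiny inline table (with hypothetical rows and an assumed `Year` column) stands in:

```python
# Hypothetical sketch: count "fake news" documents per publication year.
import pandas as pd

docs = pd.DataFrame({
    "Title": ["Doc A", "Doc B", "Doc C", "Doc D"],  # invented records
    "Year": [2017, 2018, 2018, 2020],
})
per_year = docs.groupby("Year").size().sort_index()

# The abstract reports this series jumping from <10/year before 2017 to
# 203 (2017), 477 (2018), 694 (2019), and 951 (2020).
print(per_year)
```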


Author(s):  
Noelia Salido-Andres,
Marta Rey-Garcia,
Luis Ignacio Alvarez-Gonzalez,
Rodolfo Vazquez-Casielles

Abstract: This research explores the extent to which campaign factors may influence the success of donation-based crowdfunding (DCF) promoted online for social purposes. Factors that may explain the success of online fundraising campaigns for social causes are first identified from previous literature and linked to DCF campaigns through a set of hypotheses: disclosure, imagery, updating, and spreadability. Next, their explanatory capacity is measured through quantitative analysis (logistic regression) of 360 all-or-nothing campaigns run by nonprofits through an online platform. Results confirm the high explanatory capacity of determinants related to the updating and spreadability of the campaign. However, factors related to disclosure and imagery do not influence success. This research suggests that the success of online campaigns is closely related to sharing and updating transparent information on the details that contributors deem relevant. Implications are drawn for the effective technical design and management of DCF campaigns channeled through digital media, and specifically for engaging potential online communities of funders on digital platforms.
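A hedged sketch of the logistic regression described above, with simulated campaign-level variables standing in for the four factors (the abstract does not give the authors' exact operationalizations):

```python
# Hypothetical sketch: logistic regression of campaign success on the four
# factor groups named in the abstract. Data and codings are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 360  # the study analyzes 360 all-or-nothing campaigns
X = rng.random((n, 4))  # columns: disclosure, imagery, updating, spreadability

# Simulate success driven mainly by updating and spreadability, mirroring
# the direction of the reported findings.
logits = -1.0 + 2.5 * X[:, 2] + 2.0 * X[:, 3]
success = rng.binomial(1, 1 / (1 + np.exp(-logits)))

model = sm.Logit(success, sm.add_constant(X)).fit(disp=False)
print(model.summary(xname=["const", "disclosure", "imagery",
                           "updating", "spreadability"]))
```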


Author(s):  
J.P. Fallon,
P.J. Gregory,
C.J. Taylor

Quantitative image analysis systems have been used for several years in research and quality control applications in various fields, including metallurgy and medicine. The technique has been applied as an extension of subjective microscopy to problems that require quantitative results and are amenable to automatic methods of interpretation.

Feature extraction. In the most general sense, a feature can be defined as a portion of the image which differs in some consistent way from the background. A feature may be characterized by the density difference between itself and the background, by an edge gradient, or by the spatial frequency content (texture) within its boundaries. The task of feature extraction includes recognition of features and encoding of the associated information for quantitative analysis.

Quantitative analysis. Quantitative analysis is the determination of one or more physical measurements of each feature. These measurements may be straightforward ones such as area, length, or perimeter, or more complex stereological measurements such as convex perimeter or Feret's diameter.
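A small sketch of such feature extraction and measurement using modern tooling (scikit-image, which postdates the original systems and only illustrates the measurements named above), on a synthetic binary image:

```python
# Illustrative sketch: label features in a synthetic image and measure
# area, perimeter, and maximum Feret diameter (scikit-image >= 0.18).
import numpy as np
from skimage.measure import label, regionprops

image = np.zeros((64, 64), dtype=bool)
image[10:30, 10:25] = True   # one rectangular "feature"
image[40:55, 35:60] = True   # a second feature

for region in regionprops(label(image)):
    print(f"area={region.area}, perimeter={region.perimeter:.1f}, "
          f"feret_max={region.feret_diameter_max:.1f}")
```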


Author(s):  
V. V. Damiano,
R. P. Daniele,
H. T. Tucker,
J. H. Dauber

An important example of intracellular particles is encountered in silicosis, where alveolar macrophages ingest inhaled silica particles. Quantitation of the silica uptake by these cells is a potentially useful method for monitoring silica exposure. Accurate quantitative analysis of ingested silica by phagocytic cells is difficult because the particles are frequently small, irregularly shaped, and cannot be visualized within the cells. Semiquantitative methods which make use of particles of known size, shape, and composition as calibration standards may be the most direct and simplest approach. The present paper describes an empirical method in which glass microspheres were used as a model to show how the ratio of the silicon Kα peak X-ray intensity from a microsphere to that of a bulk sample of the same composition correlates with the mass of the microsphere contained within the cell. Irregularly shaped silica particles were also analyzed, and a calibration curve was generated from these data.
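A minimal sketch of the calibration step described above: fitting the ratio of the silicon Kα intensity from a microsphere to that of the bulk standard against known microsphere mass (all numbers below are invented for illustration):

```python
# Hypothetical calibration: Si K-alpha intensity ratio (sphere / bulk)
# versus known microsphere mass. Values are invented.
import numpy as np

mass_pg = np.array([0.5, 1.0, 2.0, 4.0, 8.0])        # microsphere mass (pg)
intensity_ratio = np.array([0.011, 0.022, 0.041, 0.085, 0.168])

slope, intercept = np.polyfit(mass_pg, intensity_ratio, 1)

def estimate_mass(ratio):
    """Invert the linear calibration to estimate ingested silica mass."""
    return (ratio - intercept) / slope

print(f"estimated mass for ratio 0.05: {estimate_mass(0.05):.2f} pg")
```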


Author(s):  
H.J. Dudek

Chemical inhomogeneities in modern materials, such as fibers, phases, and inclusions, often have diameters in the region of one micrometer. When using electron microbeam analysis to determine element concentrations, one has to know the smallest possible diameter of such regions for a given accuracy of the quantitative analysis.

In this paper the correction procedure for quantitative electron microbeam analysis is extended to a spatial problem: determining the smallest possible dimensions of a cylindrical particle P of height D (depth resolution) and diameter L (lateral resolution), embedded in a matrix M, that can be analyzed quantitatively with accuracy q. The mathematical treatment leads to an expression for the characteristic X-ray intensity of element i from a particle P embedded in the matrix M, relative to the intensity of a standard S.
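The abstract does not reproduce the expression itself. As a hedged sketch of the general convention it builds on (not the author's particle-specific formula), microprobe quantitation starts from the k-ratio of measured intensities:

```latex
% General k-ratio convention in electron microprobe analysis (not the
% particle-specific formula the abstract refers to).
k_i = \frac{I_i^{P}}{I_i^{S}},
\qquad
C_i^{P} \approx k_i \, C_i^{S} \, [\mathrm{ZAF}]_i
```

where $[\mathrm{ZAF}]_i$ denotes the usual atomic-number, absorption, and fluorescence corrections relating intensity ratios to concentrations.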


Author(s):  
John A. Hunt

Spectrum-imaging is a useful technique for comparing different processing methods on very large data sets which are identical for each method. This paper is concerned with comparing methods of electron energy-loss spectroscopy (EELS) quantitative analysis on the Al-Li system. The spectrum-image analyzed here was obtained from an Al-10 at% Li foil aged to produce δ' precipitates that can span the foil thickness. Two 1024-channel EELS spectra offset in energy by 1 eV were recorded and stored at each pixel in the 80x80 spectrum-image (25 Mbytes). An energy range of 39-89 eV (20 channels/eV) is represented. During processing the spectra are either subtracted to create an artifact-corrected difference spectrum, or the energy offset is numerically removed and the spectra are added to create a normal spectrum. The spectrum-images are processed into 2D floating-point images using methods and software described in [1].
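A small numpy sketch of the two processing paths described above, taking the stated 20 channels/eV so a 1 eV offset equals 20 channels (the spectra here are synthetic stand-ins for one pixel's pair):

```python
# Sketch: difference vs. normal spectrum from two spectra offset by 1 eV.
# At 20 channels/eV, a 1 eV offset corresponds to 20 channels.
import numpy as np

offset = 20  # channels
energy = np.linspace(39, 89, 1024)
spectrum_a = np.exp(-((energy - 60) / 5) ** 2)  # synthetic spectrum
spectrum_b = np.roll(spectrum_a, offset)        # its 1 eV shifted partner

# Path 1: subtract to form an artifact-corrected difference spectrum.
difference = spectrum_a - spectrum_b

# Path 2: numerically remove the offset, then add to form a normal spectrum.
normal = spectrum_a + np.roll(spectrum_b, -offset)

print(difference[:3], normal[:3])
```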


Author(s):  
Delbert E. Philpott,
David Leaffer

There are certain advantages for electron probe analysis if the sample can be tilted directly towards the detector: the count rate is higher, the geometry is simplified since only one angle need be taken into account in quantitative analysis, and the signal-to-background ratio is improved. Needing less tilt may itself be an advantage, because the grid bars are not moved as close to each other, leaving a little more open area for observation. Our present detector (EDAX) and microscope (Philips 300) combination precludes moving the detector behind the microscope, where it would point directly at the grid. Therefore, the angle of the specimen was changed in order to optimize the geometry between the specimen and the detector.
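As a hedged note on why tilting toward the detector helps (standard electron probe absorption geometry, not derived in the abstract itself): the absorption parameter shrinks as the X-ray take-off angle grows, so more generated X-rays escape the specimen and the detected count rate rises:

```latex
% Standard absorption geometry in electron probe microanalysis:
% chi falls as the take-off angle psi increases, reducing self-absorption.
\chi = \left(\frac{\mu}{\rho}\right) \csc\psi
```

where $\mu/\rho$ is the mass absorption coefficient and $\psi$ the take-off angle; with a single, larger effective take-off angle, only one geometric factor enters the quantitative correction.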

