Keyword: character set
Recently Published Documents

TOTAL DOCUMENTS: 229 (five years: 28)
H-INDEX: 15 (five years: 1)

2021, Vol. 53 (1)
Author(s): Shani Avni

Ismar David was a prolific calligrapher, type designer, graphic designer, and illustrator who also engaged in architectural design and taught calligraphy. He studied applied arts in Berlin, emigrating to Jerusalem in 1932 and to New York in 1952. From the 1930s to the 1990s, he created a wealth of unique designs, most importantly the David Hebrew typeface family. It was the first comprehensive Hebrew typeface family, comprising nine styles that include a true Hebrew italic style and a monolinear style, equivalent to a Latin sans serif. David Hebrew provides an example of how a research-based design process can help negotiate the tension between old and new, leading to an innovative, well-informed design solution. David not only excelled in his groundbreaking approach to Hebrew type design for existing glyphs, but he went a step further, expanding the character set. After David completed the design of his typeface family in 1954, it was partially cast for machine composition by the Intertype Corporation. During that period, David relocated to New York to pursue his creative career.


2021, Vol. 2058 (1), pp. 012030
Author(s): R M Berestov, E A Bobkov, V S Belov, A V Nevedin

Abstract: At present, brain-computer interfaces (BCIs) make it possible to build devices for diagnosing a person's physical condition, control systems for bionic prostheses, and information-input tools such as neuro-chat and character-set systems based on brain potentials. Currently, the main technology for recording brain activity for neurointerfaces is the electroencephalogram (EEG). There are, however, promising technologies that could achieve new results in the field of neurointerfaces: functional near-infrared spectroscopy (fNIRS) and magnetoencephalography (MEG).


2021, Vol. 2 (6)
Author(s): Yonas Demeke Woldemariam

Abstract: We develop an NLP method for inferring potential contributors among the multitude of users within crowdsourcing forums (CSFs). The method provides a way to predict users' expertise from the structure of their text (syntax–semantic patterns) when crowdsourced votes are unavailable. It primarily tackles two core adverse conditions that hinder the identification of crowds' expertise levels and the standardization of measuring the linguistic quality of crowdsourced text. To solve the former, an expertise estimation and linguistic feature annotation algorithm is developed. To address the latter, a comprehensive linguistic characterization of crowdsourced text, along with extensive joint syntax–punctuation analyses, has been carried out. The full corpus spans eight domains and comprises approximately 3,050,000 sentences and 32,090,000 words, contributed by a crowd of 50,000 users. The analyses revealed six major linguistic patterns, identified on the basis of ordered lists of structural (syntactic) categories, learned from the grammatical constructions practiced by major groups of experts. In addition, nine different text-oriented expertise dimensions are identified, a crucial step towards establishing a standard linguistics-based expertise framework for most CSFs. Potentially, the resulting framework simplifies the measurement of crowds' proficiency in those forums where crowds' tasks (e.g., answering questions, technically discerning deep features within images of galaxies to classify them into certain categories) are intimately connected with their writing (e.g., describing answers illustratively, expressing complex phenomena observed in classified images). Moreover, a wide variety of linguistic annotations are extracted: latent topic annotations, named entities, syntactic and punctuation annotations, semantic and character-set annotations, and word and character n-gram (n = 2 and 3) annotations. These are used to build baseline and enhanced versions of expertise models (about 20 different models in total). The successive gains from enhancing the baseline models, by iteratively adding linguistic feature annotations in a two-stage enhancement process, indicate the adaptability of the learned models.
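As a minimal sketch of one of the annotation types mentioned above, overlapping character n-grams (n = 2 and 3) can be counted as follows. The function name and sample text are illustrative and not taken from the paper:

```python
from collections import Counter

def char_ngrams(text: str, n: int) -> Counter:
    """Count overlapping character n-grams in a text."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

sentence = "character set"
bigrams = char_ngrams(sentence, 2)   # e.g. "ch", "ha", "ar", ...
trigrams = char_ngrams(sentence, 3)  # e.g. "cha", "har", "ara", ...
```

In a real pipeline, such counts would be aggregated per user and fed to the expertise models as features alongside the other annotation types.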


2021, Vol. 85, pp. 29-56
Author(s): Jonah M. Ulmer, István Mikó, Andrew R. Deans, Lars Krogmann

The Waterston’s evaporatorium (=Waterston’s organ), a cuticular modification surrounding the opening of an exocrine gland located on metasomal tergite 6, is characterized and examined for taxonomic significance within the parasitoid wasp family Ceraphronidae. Modifications of the abdominal musculature and the dorsal vessel are also broadly discussed for the superfamily Ceraphronoidea, including the first discovery and description of a novel abdominal pulsatory organ for Apocrita. Cuticular modification of T6, due to the presence of the Waterston’s evaporatorium, provides a character complex that allows for genus- and species-level delimitation in Ceraphronidae. The matching of males and females of a species using morphology, a long-standing challenge for the group, is also resolved with this new character set. A phylogenetic analysis including 19 characters related to the Waterston’s evaporatorium supports current generic groupings within the Ceraphronidae and elaborates on previously suggested synapomorphies. The potential function of the Waterston’s organ and its effects on the dorsal vessel are discussed.


2021, Vol. 10 (3)
Author(s): Seth Erickson

Plain text data consists of a sequence of encoded characters or “code points” from a given standard such as the Unicode Standard. Some of the most common file formats for digital data used in eScience (CSV, XML, and JSON, for example) are built atop plain text standards. Plain text representations of digital data are often preferred because plain text formats are relatively stable, and they facilitate reuse and interoperability. Despite its ubiquity, plain text is not as plain as it may seem. The set of standards used in modern text encoding (principally, the Unicode Character Set and the related encoding format, UTF-8) have complex architectures when compared to historical standards like ASCII. Further, while the Unicode standard has gained in prominence, text encoding problems are not uncommon in research data curation. This primer provides conceptual foundations for modern text encoding and guidance for common curation and preservation actions related to textual data.
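To make the distinction between code points and bytes concrete, here is a small Python sketch (not from the primer) showing that a single Unicode code point may occupy several bytes in UTF-8, and how decoding with the wrong standard produces the garbled text ("mojibake") familiar from curation work:

```python
# A Python str is a sequence of Unicode code points; encoding maps
# those code points to bytes under a specific standard.
text = "café"

print(len(text))              # 4 code points
utf8 = text.encode("utf-8")
print(len(utf8))              # 5 bytes: "é" (U+00E9) takes two bytes in UTF-8
print(utf8)                   # b'caf\xc3\xa9'

# ASCII is a strict subset of UTF-8: pure-ASCII text encodes identically.
assert "plain".encode("utf-8") == "plain".encode("ascii")

# Decoding with the wrong standard is a common curation problem:
print(utf8.decode("latin-1"))  # 'cafÃ©' (mojibake)
```

This asymmetry (one code point, variable bytes) is what makes UTF-8 backward-compatible with ASCII while still covering the full Unicode character set.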


Author(s): Márcia da Costa Capistrano, Romeu de Carvalho Andrade Neto, Vanderley Borges dos Santos, Lauro Saraiva Lessa, Marcos Deon Vilela de Resende, ...

Abstract: The objective of this work was to select superior sweet orange (Citrus sinensis) genotypes with higher yield potential based on data from eight harvests, using the restricted (residual) maximum likelihood/best linear unbiased prediction (REML/BLUP) methodology. The experiment was carried out from 2002 to 2008 and in 2010 in the municipality of Rio Branco, in the state of Acre, Brazil. Analyses of deviance were performed to test the significance of the variance components for the random effects of the model used, and parameters were estimated from individual genotypic and phenotypic variances. A selection intensity of 20% was adopted for genotypic selection, i.e., only the best 11 of the 55 genotypes tested were selected. The estimates of the genetic parameters show the existence of genetic variability and the selection potential of the studied sweet orange genotypes. The genotypic correlation between harvests is of low magnitude, except for the variable average fruit mass, and, as a consequence, the ordering of the genotypes changes between harvests. Genotypes 5, 48, 19, 14, and 47 stand out as the most productive and are, therefore, the most suitable for selection purposes. Genotypes 14 and 47 show superior performance for the character set evaluated.


2021, pp. 5-14
Author(s): Vanita Jain, Mahima Swami, ...

Passwords act as a first line of defense against malicious or unauthorized access to one's personal information. With increasing digitization, it has become even more important to choose strong passwords. In this paper, the authors perform exploratory data analysis on a database of 100 million email-password pairs. The analysis provides valuable statistics on the most common passwords in use, the character sets of passwords, the most common email domains, average password length, password strength, the frequencies of letters, numbers, and symbols (special characters), the most common letter, number, and symbol, and the ratio of letters, numbers, and symbols in passwords, highlighting the general trends that users follow when creating passwords. Using the results of this paper, users can make informed decisions when creating passwords, i.e., avoid the most common features, helping them create robust and less vulnerable passwords.
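A minimal sketch of the kind of character-set analysis described above, classifying each character of a password and tallying letter/digit/symbol counts. This is an illustration under assumed definitions, not the authors' code:

```python
def charset_profile(password: str) -> dict:
    """Count how many characters fall into each character class."""
    counts = {"letters": 0, "digits": 0, "symbols": 0}
    for ch in password:
        if ch.isalpha():
            counts["letters"] += 1
        elif ch.isdigit():
            counts["digits"] += 1
        else:
            counts["symbols"] += 1
    return counts

# Applied over a large corpus, these per-password profiles yield the
# aggregate letter/digit/symbol ratios the paper reports.
for pw in ["123456", "password", "P@ssw0rd!"]:
    print(pw, charset_profile(pw))
```

A profile like this also makes the weakness of common passwords visible at a glance: a digits-only or letters-only profile signals a much smaller search space than a mixed one.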

