Searching for Filipina Sisterhood

Author(s):  
Minjeong Kim

Drawing on the literature on immigrants’ intra-ethnic solidarity and conflict, Chapter 7 examines Filipinas’ relationships with their co-ethnics by focusing on three different spaces. First, the chapter shows Filipinas’ intimate, quotidian interactions with one another, in which sisterly care and anxiety over group image coexist. Second, it describes the stories of Filipinas’ departures from their marital homes. Lastly, it moves beyond the local co-ethnic setting to the Annual Filipino Community Day, the region’s largest co-ethnic gathering of Filipino immigrants. Together, these spaces illustrate the generative process through which “Filipinos in Korea” has been constructed as both a community and an identity.

2019
Author(s):  
Niclas Ståhl
Göran Falkman
Alexander Karlsson
Gunnar Mathiason
Jonas Boström

In medicinal chemistry programs, it is key to design and make compounds that are efficacious and safe. This is a long, complex, and difficult multi-parameter optimization process, often involving several properties with orthogonal trends. New methods for the automated design of compounds against profiles of multiple properties are therefore of great value. Here we present a fragment-based reinforcement learning approach, built on an actor-critic model, for the generation of novel molecules with optimal properties. Both the actor and the critic are modelled with bidirectional long short-term memory (LSTM) networks. The AI method learns to generate new compounds with desired properties by starting from an initial set of lead molecules and improving them by replacing some of their fragments. A balanced binary tree built on fragment similarity is used in the generative process to bias the output towards structurally similar molecules. A case study demonstrates that 93% of the generated molecules are chemically valid and a third satisfy the targeted objectives, whereas none of the initial set did.
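To make the similarity-biased fragment lookup concrete, here is a minimal toy sketch. It is not the authors' implementation: the balanced binary tree is stood in for by a sorted list searched with `bisect`, and each fragment is keyed by a single scalar descriptor as a crude proxy for structural similarity. The SMILES strings and descriptor values are illustrative assumptions.

```python
import bisect

def nearest_fragments(library, key, k=3):
    """Return the k fragments whose descriptor is closest to `key`.

    `library` is a list of (descriptor, smiles) pairs sorted by descriptor,
    playing the role of the paper's balanced fragment tree: replacements are
    drawn from a narrow window around the fragment being swapped out, which
    biases generation toward structurally similar molecules.
    """
    keys = [d for d, _ in library]
    i = bisect.bisect_left(keys, key)
    lo, hi = max(0, i - k), min(len(library), i + k)
    window = sorted(library[lo:hi], key=lambda f: abs(f[0] - key))
    return [frag for _, frag in window[:k]]

# Toy library: aromatic rings cluster at low descriptor values,
# carbonyl fragments in the middle, an alkyl chain at the top.
library = sorted([(0.10, "c1ccccc1"), (0.12, "c1ccncc1"),
                  (0.40, "C(=O)O"), (0.42, "C(=O)N"),
                  (0.80, "CCCC")])

nearest_fragments(library, 0.11, k=2)  # -> ['c1ccccc1', 'c1ccncc1']
```

In the actual method the candidate fragments retrieved this way would be scored by the critic before the actor commits to a replacement; the lookup above only illustrates the similarity bias.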


2019
Author(s):  
Gremil Alessandro Naz

This paper examines changes in Filipino immigrants’ perceptions of themselves and of Americans before and after coming to the United States. Filipinos hold a general perception of themselves as an ethnic group, as well as perceptions of Americans, whose media products regularly reach the Philippines. Eleven Filipinos who had permanently migrated to the US were interviewed about their perceptions of Filipinos and Americans. Before coming to the US, they saw themselves as hardworking, family-oriented, poor, shy, corrupt, proud, adaptable, fatalistic, humble, adventurous, persevering, gossipy, and happy. They described Americans as rich, arrogant, educated, workaholic, proud, powerful, spoiled, helpful, boastful, materialistic, individualistic, talented, domineering, friendly, accommodating, clean, and kind. Most respondents changed their perceptions of Filipinos and of Americans after coming to the US: they now view Filipinos as having acquired American values, or as “Americanized,” and they stopped perceiving Americans as a homogeneous group with uniform values once they came into direct contact with them. The findings validate social perception and appraisal theory as well as symbolic interaction theory.


Author(s):  
Ryo Nishikimi
Eita Nakamura
Masataka Goto
Kazuyoshi Yoshii

This paper describes an automatic singing transcription (AST) method that estimates a human-readable musical score of a sung melody from an input music signal. Because of the considerable pitch and temporal variation of a singing voice, a naive cascading approach that estimates an F0 contour and quantizes it with estimated tatum times cannot avoid many pitch and rhythm errors. To solve this problem, we formulate a unified generative model of a music signal that consists of a semi-Markov language model representing the generative process of latent musical notes conditioned on musical keys, and an acoustic model based on a convolutional recurrent neural network (CRNN) representing the generative process of an observed music signal from the notes. The resulting CRNN-HSMM hybrid model enables us to estimate the most likely musical notes from a music signal with the Viterbi algorithm, while leveraging both grammatical knowledge about musical notes and the expressive power of the CRNN. The experimental results showed that the proposed method outperformed the conventional state-of-the-art method and that integrating the musical language model with the acoustic model has a positive effect on AST performance.
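The decoding step can be illustrated with a minimal Viterbi sketch. This is not the paper's CRNN-HSMM: in the real model the CRNN supplies per-frame note log-likelihoods and the semi-Markov language model supplies key-conditioned transition and duration scores, while here both are toy log-probability tables and note durations are ignored for brevity.

```python
import math

def viterbi(log_emit, log_trans, log_init):
    """Most likely state path under a plain HMM.

    log_emit[t][s]: log-likelihood of frame t given note state s
                    (the CRNN's role in the paper).
    log_trans[p][s]: log-probability of moving from note p to note s
                     (the language model's role, minus duration modelling).
    """
    T, S = len(log_emit), len(log_init)
    dp = [[-math.inf] * S for _ in range(T)]
    back = [[0] * S for _ in range(T)]
    dp[0] = [log_init[s] + log_emit[0][s] for s in range(S)]
    for t in range(1, T):
        for s in range(S):
            best, arg = max((dp[t - 1][p] + log_trans[p][s], p)
                            for p in range(S))
            dp[t][s] = best + log_emit[t][s]
            back[t][s] = arg
    # Backtrack from the best final state.
    path = [max(range(S), key=lambda s: dp[T - 1][s])]
    for t in range(T - 1, 0, -1):
        path.append(back[t][path[-1]])
    return path[::-1]
```

With two note states, uniform transitions, and emissions favouring state 0 for the first two frames and state 1 for the last, the decoded path is `[0, 0, 1]`.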


2021
Author(s):  
Raffaella Brumana
Chiara Stanga
Fabrizio Banfi

The paper focuses on new opportunities for knowledge sharing and comparison enabled by the circulation and re-use of heritage HBIM models through Object Libraries within a Common Data Environment (CDE) and remotely accessible Geospatial Virtual Hubs (GVH). HBIM requires a transparent, controlled quality process in model generation and management to avoid misuse of such models once they are available in the cloud, moving beyond object libraries oriented to new buildings. In the BIM construction process, the model is intended to be progressively enriched with details defined by the Level of Geometry (LOG) as it crosses the different levels of development (LOD), from pre-design to scheduled maintenance during the long life cycle of buildings and management (LLCM). In this context, the digitization process, from data acquisition to informative models (the scan-to-HBIM method), requires adapting the definition of LOGs to the different phases of heritage preservation and management, reversing the simple-to-complex logic of new construction. Accordingly, a deeper understanding of the geometry and of the as-found state should account for the complexity and uniqueness of the elements composing the architectural heritage from the earliest phases of analysis, adopting coherent object modelling that can then be simplified for different purposes, as on the construction site and in management over time.
For these reasons, the study intends (i) to apply the well-known concept of scale to object model generation, defining different Grades of Accuracy (GOA) related to the scales; (ii) to begin fixing sustainable rules that guarantee operators a free choice in generating object models; (iii) to validate the model generative process through transparent communication of indicators describing the precision and accuracy of the geometric content, here demonstrated for masonry walls and vaults; and (iv) to identify requirements for reliable Object Libraries.
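The link between drawing scale and model accuracy behind a GOA grading can be sketched numerically. This is illustrative only and not the paper's GOA definition: it uses the common surveying convention that graphical error is about 0.2 mm at plot scale, so the tolerance in real-world units grows with the scale denominator.

```python
def tolerance_mm(scale_denominator, graphic_error_mm=0.2):
    """Real-world tolerance implied by a drawing scale.

    Assumes the conventional 0.2 mm of graphical error at plot scale;
    an accuracy grade tied to a scale would need survey and modelling
    errors to stay within this bound.
    """
    return graphic_error_mm * scale_denominator

# At 1:50 the implied tolerance is 10 mm; at 1:20 it tightens to 4 mm,
# which is why as-found vault and wall models for restoration-scale
# drawings demand denser acquisition than city-scale ones.
tolerance_mm(50)  # -> 10.0
tolerance_mm(20)  # -> 4.0
```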


Author(s):  
Yuta Ojima
Eita Nakamura
Katsutoshi Itoyama
Kazuyoshi Yoshii

This paper describes automatic music transcription with chord estimation for music audio signals. We focus on the fact that concurrent structures of musical notes, such as chords, form the basis of harmony and are central to music composition. Since chords and musical notes are deeply linked, we propose joint pitch and chord estimation based on a hierarchical Bayesian model that consists of an acoustic model representing the generative process of a spectrogram and a language model representing the generative process of a piano roll. The acoustic model is formulated as a variant of non-negative matrix factorization with binary variables indicating the piano roll. The language model is formulated as a hidden Markov model that has chord labels as latent variables and emits a piano roll, thereby capturing the sequential dependency of the piano roll. The two models are integrated through the piano roll in a hierarchical Bayesian manner, and all latent variables and parameters are estimated using Gibbs sampling. The experimental results showed the great potential of the proposed method for unified music transcription and grammar induction.
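The generative direction of this hierarchy (chords emit a piano roll, which in turn generates the spectrogram) can be simulated forward with a toy sampler. This is a sketch, not the paper's model: the chord vocabulary, transition table, and chord-tone emissions below are invented for illustration, the NMF spectrogram layer is omitted, and the paper of course runs inference in the reverse direction with Gibbs sampling.

```python
import random

# Toy two-chord language model (Markov chain over chord labels)
# and per-chord note emissions (MIDI pitches of the chord tones).
CHORDS = ["C", "G"]
TRANS = {"C": {"C": 0.7, "G": 0.3}, "G": {"C": 0.4, "G": 0.6}}
NOTES = {"C": [60, 64, 67], "G": [55, 59, 62]}

def sample_piano_roll(n_frames, seed=0):
    """Forward-sample (chord label, active pitches) per frame."""
    rng = random.Random(seed)
    chord = "C"
    roll = []
    for _ in range(n_frames):
        # The HMM state (chord) emits its chord tones for this frame.
        roll.append((chord, sorted(NOTES[chord])))
        # Markov transition to the next chord label.
        r, acc = rng.random(), 0.0
        for nxt, p in TRANS[chord].items():
            acc += p
            if r < acc:
                chord = nxt
                break
    return roll
```

In the full model each sampled piano roll would additionally gate a non-negative spectrogram reconstruction, and Gibbs sampling would invert the whole chain from audio back to notes and chords.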

