Carbohydrate-structure-dependent recognition of desialylated serum glycoproteins in the liver and leucocytes. Two complementary systems

1985 ◽  
Vol 227 (2) ◽  
pp. 345-354 ◽  
Author(s):  
K Bezouska ◽  
O Táborský ◽  
J Kubrycht ◽  
M Pospísil ◽  
J Kocourek

Oligosaccharides with four different types of branching were prepared from purified human transferrin, alpha 2-macroglobulin, caeruloplasmin and alpha 1-acid glycoprotein and labelled with NaB[3H]4. Binding of these oligosaccharides to rat liver plasma membrane, rat leucocytes, pig liver plasma membranes and pig leucocyte plasma membranes was investigated. A striking dependence of binding on oligosaccharide branching was observed. The values of the apparent association constants Ka at 4 degrees C vary from 10^6 M^-1 (biantennary structure) to 10^9 M^-1 (tetra-antennary structure) in the liver, whereas in the leucocytes the Ka values were found to be of reversed order, from 1.8 × 10^9 M^-1 for biantennary to 2.2 × 10^6 M^-1 for tetra-antennary structures. The binding is completely inhibited by 150 mM-D-galactose, but 150 mM-D-mannose has almost no effect on binding. Leucocyte plasma membranes bind preferentially 125I-asialoglycoproteins with biantennary oligosaccharides, thus completing the specificity pattern of the hepatic recognition system for desialylated glycoproteins. Possible physiological roles of these two complementary recognition systems under normal and pathological conditions are discussed.
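The practical meaning of such association constants can be illustrated with the standard single-site binding isotherm, fraction bound = Ka·[L] / (1 + Ka·[L]). A minimal sketch (the ligand concentration below is illustrative, not taken from the paper):

```python
def fraction_bound(ka, ligand_conc):
    """Fraction of receptor occupied at equilibrium, single binding site.

    ka          -- apparent association constant Ka (M^-1)
    ligand_conc -- free ligand concentration [L] (M)
    """
    return ka * ligand_conc / (1 + ka * ligand_conc)

# At 1 nM free oligosaccharide, a Ka of 10^9 M^-1 gives half occupancy,
# while a Ka of 10^6 M^-1 leaves the receptor almost empty.
high = fraction_bound(1e9, 1e-9)  # 0.5
low = fraction_bound(1e6, 1e-9)   # about 0.001
```

This is why the three-orders-of-magnitude spread in Ka between biantennary and tetra-antennary structures translates into a sharp, branch-dependent partition of ligands between the two recognition systems.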

Author(s):  
V. Jagan Naveen ◽  
K. Krishna Kishore ◽  
P. Rajesh Kumar

In the modern world, human recognition systems play an important role in improving security by reducing the chances of evasion. The human ear can be used for person identification. In an empirical study of the human ear, 10,000 images were examined to establish the uniqueness of the ear. Ear-based systems are among the few biometric systems whose characteristics remain stable with age. In this paper, ear images are taken from the Mathematical Analysis of Images (AMI) ear database, and ear pattern recognition is analysed using the expectation-maximization (EM) and k-means algorithms. Patterns of ears affected by different types of noise are recognized using the principal component analysis (PCA) algorithm.
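As a minimal illustration of the clustering stage, the k-means loop applied to such feature vectors can be sketched as follows (toy 2D data rather than AMI images, and a deterministic first-k initialization instead of a random one):

```python
import math

def kmeans(points, k, iters=20):
    """Plain k-means: alternate nearest-centroid assignment and centroid update."""
    # Initialize centroids with the first k points (deterministic for the sketch).
    centroids = [list(p) for p in points[:k]]
    assign = [0] * len(points)
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid.
        for i, p in enumerate(points):
            assign[i] = min(range(k), key=lambda c: math.dist(p, centroids[c]))
        # Update step: move each centroid to the mean of its members.
        for c in range(k):
            members = [p for i, p in enumerate(points) if assign[i] == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return assign, centroids
```

In an ear-recognition pipeline the `points` would be PCA-reduced feature vectors of ear images rather than raw 2D coordinates.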


2019 ◽  
Vol 63 (5) ◽  
pp. 50402-1-50402-9 ◽  
Author(s):  
Ing-Jr Ding ◽  
Chong-Min Ruan

Abstract Acoustic-based automatic speech recognition (ASR) is a mature technique widely used in numerous applications. However, acoustic-based ASR does not maintain standard performance for disabled users with atypical facial characteristics, that is, atypical eye or mouth geometry. To address this problem, this article develops a three-dimensional (3D) sensor lip-image-based pronunciation recognition system in which the 3D sensor efficiently captures the variations in lip shape during a speaker's pronunciation actions. In this work, two different types of 3D lip features for pronunciation recognition are presented: 3D (x, y, z) coordinate lip features and 3D geometry lip feature parameters. For the 3D (x, y, z) coordinate lip features, 18 location points around the outer and inner lips, each with 3D coordinates, are defined. For the 3D geometry lip features, eight types of features capturing the geometrical space characteristics of the inner lip are developed. In addition, feature fusion combining the 3D (x, y, z) coordinate and 3D geometry lip features is considered. The presented 3D sensor lip-image-based features are evaluated for performance and effectiveness using a principal component analysis based classification approach. Experimental results on pronunciation recognition with two different datasets, Mandarin syllables and Mandarin phrases, demonstrate the competitive performance of the presented 3D sensor lip-image-based pronunciation recognition system.
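The feature-fusion step described above amounts to concatenating the 54 coordinate values (18 landmark points × 3 axes) with the 8 geometry parameters into a single 62-dimensional vector. A sketch, where the function name and layout are my own rather than the paper's:

```python
def fuse_lip_features(coord_points, geometry_feats):
    """Concatenate 3D coordinate and geometry lip features into one vector."""
    if len(coord_points) != 18:
        raise ValueError("expected 18 lip landmark points")
    if any(len(p) != 3 for p in coord_points):
        raise ValueError("each landmark must be an (x, y, z) triple")
    if len(geometry_feats) != 8:
        raise ValueError("expected 8 geometry features")
    flat = [v for p in coord_points for v in p]  # 18 * 3 = 54 values
    return flat + list(geometry_feats)           # 54 + 8 = 62-dim fused vector
```

The fused vector would then be passed to the PCA-based classifier, which can weight the two feature families jointly.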


2020 ◽  
Vol 43 (2) ◽  
pp. 45-56
Author(s):  
Abigail Nieves Delgado

The current overproduction of images of faces in digital photographs and videos, and the widespread use of facial recognition technologies have important effects on the way we understand ourselves and others. This is because facial recognition technologies create new circulation pathways of images that transform portraits and photographs into material for potential personal identification. In other words, different types of images of faces become available to the scrutiny of facial recognition technologies. In these new circulation pathways, images are continually shared between many different actors who use (or abuse) them for different purposes. Besides this distribution of images, the categorization practices involved in the development and use of facial recognition systems reinvigorate physiognomic assumptions and judgments (e.g., about beauty, race, dangerousness). They constitute the framework through which faces are interpreted. This paper shows that, because of this procedure, facial recognition technologies introduce new and far-reaching »facialization« processes, which reiterate old discriminatory practices.


Author(s):  
Manjunath K. E. ◽  
Srinivasa Raghavan K. M. ◽  
K. Sreenivasa Rao ◽  
Dinesh Babu Jayagopi ◽  
V. Ramasubramanian

In this study, we evaluate and compare two different approaches to multilingual phone recognition in code-switched and non-code-switched scenarios. The first approach is a front-end Language Identification (LID) module switched to a monolingual phone recognizer (LID-Mono), trained individually on each of the languages present in the multilingual dataset. In the second approach, a common multilingual phone set derived from the International Phonetic Alphabet (IPA) transcription of the multilingual dataset is used to develop a Multilingual Phone Recognition System (Multi-PRS). The bilingual code-switching experiments are conducted using the Kannada and Urdu languages. In the first approach, LID is performed using state-of-the-art i-vectors. Both the monolingual and multilingual phone recognition systems are trained using deep neural networks. The performance of the LID-Mono and Multi-PRS approaches is compared and analysed in detail. The Multi-PRS approach is found to be superior to the more conventional LID-Mono approach in both code-switched and non-code-switched scenarios. For code-switched speech, the effect of the length of the segments used to perform LID on the performance of the LID-Mono system is studied by varying the window size from 500 ms to 5.0 s and the full utterance. The LID-Mono approach depends heavily on the accuracy of the LID system, and LID errors cannot be recovered. The Multi-PRS system, by contrast, performs no front-end LID switching and is designed around the common multilingual phone set derived from several languages; it is therefore not constrained by the accuracy of the LID system and performs effectively on both code-switched and non-code-switched speech, offering lower Phone Error Rates than the LID-Mono system.
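The windowed LID experiment can be pictured as slicing each utterance's frame sequence into fixed-length segments before classifying the language of each segment. A minimal sketch assuming 10 ms acoustic frames (an assumption on my part; the i-vector LID itself is not shown):

```python
def lid_windows(n_frames, frame_ms=10, window_ms=500):
    """Split an utterance of n_frames (frame_ms each) into fixed LID windows.

    Returns (start, end) frame-index pairs; the last window may be shorter.
    """
    frames_per_window = max(1, window_ms // frame_ms)
    return [(start, min(start + frames_per_window, n_frames))
            for start in range(0, n_frames, frames_per_window)]
```

Growing `window_ms` toward the full utterance gives the LID classifier more evidence per decision but fewer chances to catch a language switch mid-utterance, which is exactly the trade-off the 500 ms to 5.0 s sweep probes.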


2020 ◽  
Vol 0 (0) ◽  
Author(s):  
Mohammadreza Azimi ◽  
Seyed Ahmad Rasoulinejad ◽  
Andrzej Pacut

Abstract In this paper, we attempt to answer the questions of whether the iris recognition task becomes more difficult under the influence of diabetes and whether the effects of diabetes and an individual's age are uncorrelated. We hypothesized that the health condition of volunteers plays an important role in the performance of an iris recognition system. To support the obtained results, we report the distribution of the usable iris area in each subgroup, giving a more comprehensive analysis of the effects of diabetes. No previous study has investigated for which age group (young or old) the effect of diabetes on biometric results is more acute. For this purpose, we created a new database containing 1,906 samples from 509 eyes. We applied the weighted adaptive Hough ellipsopolar transform technique and the contrast-adjusted Hough transform for segmentation of the iris texture, along with three different encoding algorithms. To test the hypothesis related to the physiological aging effect, Welch's t-test and the Kolmogorov–Smirnov test were used to study the age dependency of the influence of diabetes mellitus on the reliability of our chosen iris recognition system. Our results give some general hints about the effect of age on the performance of biometric systems for people with diabetes.
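Welch's t-test, used here, compares group means without assuming equal variances; its statistic and Welch–Satterthwaite degrees of freedom can be computed with the standard library alone (a sketch of the textbook formulas, not the authors' code):

```python
import math
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t statistic and degrees of freedom for two independent samples."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances
    se2 = va / na + vb / nb                          # squared standard error
    t = (mean(sample_a) - mean(sample_b)) / math.sqrt(se2)
    # Welch-Satterthwaite approximation for the degrees of freedom.
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df
```

For equal-size, equal-variance samples the df reduces to the pooled-test value na + nb - 2, which is a quick sanity check on the implementation.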


Symmetry ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 634
Author(s):  
Alakbar Valizada ◽  
Natavan Akhundova ◽  
Samir Rustamov

In this paper, various methodologies for acoustic and language models, as well as labeling methods, for automatic speech recognition of spoken dialogues in emergency call centers were investigated and comparatively analyzed. Because dialogue speech in call centers has a specific context and a noisy, emotional environment, available speech recognition systems show poor performance on it. Therefore, in order to recognize dialogue speech accurately, the main modules of speech recognition systems (language models and acoustic training methodologies) as well as symmetric data labeling approaches were investigated and analyzed. To find an effective acoustic model for dialogue data, different types of Gaussian Mixture Model/Hidden Markov Model (GMM/HMM) and Deep Neural Network/Hidden Markov Model (DNN/HMM) methodologies were trained and compared. Additionally, effective language models for dialogue systems were selected based on extrinsic and intrinsic evaluation methods. Lastly, our suggested data labeling approaches with spelling correction were compared with common labeling methods and outperformed them by a notable margin. Based on the results of the experiments, we determined that a DNN/HMM acoustic model, a trigram language model with Kneser–Ney discounting, and spelling correction of the training data as the labeling method form an effective configuration for dialogue speech recognition in emergency call centers. It should be noted that this research was conducted with two different datasets collected from emergency calls: the Dialogue dataset (27 h), which encapsulates call agents' speech, and the Summary dataset (53 h), which contains voiced summaries of those dialogues describing emergency cases. Even though the speech taken from the emergency call center is in the Azerbaijani language, which belongs to the Turkic group of languages, our approaches are not tightly tied to specific language features. Hence, we anticipate that the suggested approaches can be applied to other languages of the same group.
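The Kneser–Ney discounting chosen for the language model reserves probability mass from seen n-grams and redistributes it according to how many distinct contexts a word continues. A bigram version can be sketched as follows (the paper uses trigrams, and the discount d = 0.75 is a common default rather than the authors' setting):

```python
from collections import Counter

def kneser_ney_bigram(tokens, d=0.75):
    """Interpolated Kneser-Ney bigram model with absolute discounting."""
    bigrams = Counter(zip(tokens, tokens[1:]))
    contexts = Counter(tokens[:-1])  # how often each word opens a bigram
    # Continuation counts: in how many distinct contexts does w appear?
    continuation = Counter(w for (_, w) in bigrams)
    total_bigram_types = len(bigrams)

    def prob(prev, w):
        discounted = max(bigrams[(prev, w)] - d, 0) / contexts[prev]
        # Back-off weight: the mass freed by discounting prev's continuations.
        n_follow = sum(1 for (p, _) in bigrams if p == prev)
        lam = d * n_follow / contexts[prev]
        return discounted + lam * continuation[w] / total_bigram_types

    return prob
```

The continuation term is what distinguishes Kneser–Ney from plain absolute discounting: a word seen after many different predecessors gets more back-off mass than one glued to a single context.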


Polymers ◽  
2020 ◽  
Vol 12 (10) ◽  
pp. 2387
Author(s):  
Ilia Iliev ◽  
Tonka Vasileva ◽  
Veselin Bivolarski ◽  
Albena Momchilova ◽  
Iskra Ivanova

Three lactic acid bacteria (LAB) strains identified as Lactobacillus plantarum, Lactobacillus brevis, and Lactobacillus sakei, isolated from meat products, were tested for their ability to utilize and grow on xylooligosaccharides (XOSs). The extent of carbohydrate utilization by the studied strains was analyzed by HPLC. All three strains showed preferences regarding the degree of polymerization (DP). The added oligosaccharides induced the LAB to form the end-products of typical mixed-acid fermentation. The utilization of XOSs by the microorganisms requires the action of three important enzymes: β-xylosidase (EC 3.2.1.37), exo-oligoxylanase (EC 3.2.1.156) and α-L-arabinofuranosidase (EC 3.2.1.55). The presence of intracellular β-D-xylosidase in Lb. brevis, Lb. plantarum, and Lb. sakei suggests that XOSs might first be imported into the cell by oligosaccharide transporters and then degraded to xylose. Studies on the influence of XOS intake on the lipids of rat liver plasma membranes showed that oligosaccharides display various beneficial effects for the host organism, which are probably specific to each type of prebiotic used. The utilization of different types of oligosaccharides may help to explain the ability of Lactobacillus strains to compete with other bacteria in the ecosystem of the human gastrointestinal tract.


2019 ◽  
Vol 9 (2) ◽  
pp. 236 ◽  
Author(s):  
Saad Ahmed ◽  
Saeeda Naz ◽  
Muhammad Razzak ◽  
Rubiyah Yusof

This paper presents a comprehensive survey on Arabic cursive scene text recognition. Publications in this field over recent years have witnessed a shift of interest among document image analysis researchers from the recognition of optical characters to the recognition of characters appearing in natural images. Scene text recognition is a challenging problem because the text varies in font style, size, alignment, orientation, reflection, illumination, blurriness and background complexity. Among cursive scripts, Arabic scene text recognition is regarded as an even more challenging problem due to joined writing, variations of the same character, a large number of ligatures, the number of baselines, etc. Surveys of Latin and Chinese script-based scene text recognition systems can be found, but the Arabic-like scene text recognition problem is yet to be addressed in detail. In this manuscript, a description is provided to highlight some of the latest techniques presented for text classification. The presented techniques, which follow deep learning architectures, are equally suitable for the development of Arabic cursive scene text recognition systems. Issues pertaining to text localization and feature extraction are also presented. Moreover, this article emphasizes the importance of having benchmark cursive scene text datasets. Based on the discussion, future directions are outlined, some of which may provide researchers with insight into cursive scene text.


2021 ◽  
Vol 13 (12) ◽  
pp. 6900
Author(s):  
Jonathan S. Talahua ◽  
Jorge Buele ◽  
P. Calvopiña ◽  
José Varela-Aldás

In the face of the COVID-19 pandemic, the World Health Organization (WHO) declared the use of a face mask a mandatory biosafety measure. This has caused problems for current facial recognition systems, motivating the development of this research. This manuscript describes the development of a system for recognizing people from photographs, even when they are wearing a face mask. A classification model based on the MobileNetV2 architecture and OpenCV's face detector is used; these stages locate the face and determine whether or not it is wearing a face mask. The FaceNet model is used as a feature extractor, and a feedforward multilayer perceptron performs the facial recognition. For training the facial recognition models, a set of observations made up of 13,359 images is generated: 52.9% with a face mask and 47.1% without. The experimental results show an accuracy of 99.65% in determining whether a person is wearing a mask or not. An accuracy of 99.52% is achieved in the facial recognition of 10 people with masks, while for facial recognition without masks an accuracy of 99.96% is obtained.
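The recognition stage pairs a feature extractor with a classifier. As a simpler stand-in for the paper's FaceNet-plus-perceptron pipeline, matching an embedding against enrolled identities by cosine similarity can be sketched as follows (the gallery names, embedding size and threshold are all illustrative):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def identify(embedding, gallery, threshold=0.7):
    """Return the enrolled identity closest to the embedding, or None."""
    name, score = max(((n, cosine_similarity(embedding, e))
                       for n, e in gallery.items()),
                      key=lambda t: t[1])
    return name if score >= threshold else None
```

A trained perceptron, as used in the paper, can learn class boundaries that plain nearest-neighbour matching cannot, but both operate on the same extracted embeddings.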


Author(s):  
Daniel M. Gaines ◽  
Fernando Castaño ◽  
Caroline C. Hayes

Abstract This paper presents MEDIATOR, a feature recognition system designed to be maintainable and extensible to families of related manufacturing processes. Many feature recognition systems are difficult to maintain, partly because they depend on a library of feature types that is difficult to update when the manufacturing processes change as the manufacturing equipment changes. MEDIATOR's approach is based on the idea that the properties of the manufacturing equipment are what enable manufacturable shapes to be produced in a part. MEDIATOR's method for identifying features uses a description of the manufacturing equipment to simultaneously identify manufacturable volumes (i.e., features) and methods for manufacturing those volumes. Maintenance of the system is simplified because only the description of the equipment needs to be updated in order to update the features identified by the system.
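The equipment-driven idea can be caricatured as matching candidate removal volumes against tool capability descriptions, so that updating the tool list updates the recognizable features. A hypothetical sketch whose data layout is my own, not MEDIATOR's actual representation:

```python
def match_features(volumes, tools):
    """Map each candidate volume to the tools whose capability covers its shape."""
    plans = {}
    for vol in volumes:
        capable = [t["name"] for t in tools if vol["shape"] in t["shapes"]]
        if capable:  # only manufacturable volumes become features
            plans[vol["id"]] = capable
    return plans
```

Adding a new machine then means appending one tool entry rather than editing a hand-maintained library of feature types, which is the maintainability argument the paper makes.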

