Operational Aspects of National Securities Depository Limited (NSDL) in Depository System

Author(s):  
Rajnikant Kumar

NSDL was registered with SEBI on June 7, 1996 as India's first depository, set up to facilitate trading and settlement of securities in dematerialized form and to cater to the demanding needs of the Indian capital markets. NSDL commenced operations on November 08, 1996. It has been promoted by a number of companies, the prominent among them being IDBI, UTI, NSE, SBI, and HDFC Bank Ltd. The initial paid-up capital of NSDL was Rs. 105 crore, which was reduced to Rs. 80 crore during 2000-2001 through a buy-back programme in which 2.5 crore shares were bought back at Rs. 12 per share. This was done to bring the size of its capital into better alignment with its financial operations and to provide a reasonable return to shareholders by gainfully deploying the excess cash available with NSDL. NSDL carries out its activities through service providers such as depository participants (DPs), issuing companies and their registrars and share transfer agents, and the clearing corporations/clearing houses of stock exchanges. These entities are NSDL's business partners and are integrated into the NSDL depository system to provide various services to investors and clearing members. Investors obtain depository services through NSDL's depository participants: an investor needs to open a depository account with a depository participant to avail of depository facilities. The depository system essentially aims at eliminating the voluminous and cumbersome paperwork involved in the scrip-based system and offers scope for 'paperless' trading through state-of-the-art technology. A depository can be compared to a bank: it holds securities of investors in the form of electronic accounts, in the same way as a bank holds money in a savings account. Besides holding securities, a depository also provides services related to transactions in securities.
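The buy-back figures above can be reconciled with a one-line calculation; a minimal sketch, assuming the customary face value of Rs. 10 per share (not stated in the text):

```python
# Illustrative check of the NSDL buy-back figures (assumes a face value
# of Rs. 10 per share, which is not stated in the text).
CRORE = 10_000_000

shares_bought_back = 2.5 * CRORE          # 2.5 crore shares
buy_back_price = 12                       # Rs. per share
assumed_face_value = 10                   # Rs. per share (assumption)

cash_outflow = shares_bought_back * buy_back_price / CRORE        # Rs. crore
capital_reduction = shares_bought_back * assumed_face_value / CRORE

print(f"Cash paid out: Rs. {cash_outflow:.0f} crore")
print(f"Paid-up capital reduced by: Rs. {capital_reduction:.0f} crore (105 -> 80)")
```

Under that assumption, the Rs. 25 crore reduction in paid-up capital (Rs. 105 crore to Rs. 80 crore) corresponds to the face value of the cancelled shares, while the larger cash outflow reflects the Rs. 12 buy-back price.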

2021 ◽  
Vol 15 (6) ◽  
pp. 1-21
Author(s):  
Huandong Wang ◽  
Yong Li ◽  
Mu Du ◽  
Zhenhui Li ◽  
Depeng Jin

Both app developers and service providers have strong motivations to understand when and where certain apps are used by users. However, this has been a challenging problem due to the highly skewed and noisy nature of app usage data. Moreover, existing studies regard apps as independent items and thus fail to capture the hidden semantics in app usage traces. In this article, we propose App2Vec, a powerful representation learning model that learns semantic embeddings of apps with consideration of the spatio-temporal context. Based on the obtained semantic embeddings, we develop a probabilistic model, built on a Bayesian mixture model and the Dirichlet process, to capture when, where, and what apps are used, and to predict future usage. We evaluate our model on two different app usage datasets, which together involve over 1.7 million users and 2,000+ apps. Evaluation results show that our proposed App2Vec algorithm outperforms the state-of-the-art algorithms in app usage prediction by a margin of over 17.0%.
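A minimal sketch of the kind of preprocessing such a model implies: turning app-usage traces into skip-gram (target, context) pairs in which time and location tokens appear alongside neighbouring apps. This is illustrative only, not the authors' implementation, and the token format is invented here:

```python
# Minimal sketch of preparing skip-gram training pairs from app-usage
# traces, in the spirit of App2Vec (not the authors' implementation).
# Each record is (hour, location, app); time and place are folded into
# the context so the embeddings absorb spatio-temporal semantics.

def skip_gram_pairs(trace, window=1):
    """Yield (target_app, context_token) pairs from one usage trace."""
    pairs = []
    for i, (hour, loc, app) in enumerate(trace):
        # spatio-temporal context tokens for this usage event
        pairs.append((app, f"hour:{hour}"))
        pairs.append((app, f"loc:{loc}"))
        # neighbouring apps within the window
        for j in range(max(0, i - window), min(len(trace), i + window + 1)):
            if j != i:
                pairs.append((app, trace[j][2]))
    return pairs

trace = [(8, "home", "news"), (9, "transit", "maps"), (9, "transit", "music")]
pairs = skip_gram_pairs(trace)
```

The resulting pairs would then feed a standard skip-gram objective, so that apps used in similar times, places, and sequences end up with nearby embeddings.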


2021 ◽  
Author(s):  
Phongsathorn Kittiworapanya ◽  
Kitsuchart Pasupa ◽  
Peter Auer

We assessed several state-of-the-art deep learning algorithms and computer vision techniques for estimating the particle size of mixed commercial waste from images. In waste management, the first step is often coarse shredding, and the particle size is used to set up the shredder machine. The difficulty is that the waste particles in an image cannot be separated reliably. This work therefore focused on estimating size from the texture of the input image, captured at a fixed height from the camera lens to the ground. We found that EfficientNet achieved the best performance, with an F1-score of 0.72 and an accuracy of 75.89%.
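For reference, the two reported metrics are computed as follows; a hedged sketch with invented confusion-matrix counts (not the paper's data):

```python
# F1-score and accuracy from a binary confusion matrix, as used to
# report the EfficientNet results (counts below are illustrative only).

def f1_and_accuracy(tp, fp, fn, tn):
    precision = tp / (tp + fp)          # of predicted positives, how many correct
    recall = tp / (tp + fn)             # of actual positives, how many found
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return f1, accuracy

f1, acc = f1_and_accuracy(tp=72, fp=28, fn=28, tn=72)
```

F1 balances precision against recall, which is why it can diverge from plain accuracy on skewed class distributions.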


Author(s):  
Sanjay P. Ahuja ◽  
Thomas F. Furman ◽  
Kerwin E. Roslie ◽  
Jared T. Wheeler

There are several public cloud providers that offer services across different cloud models such as IaaS, PaaS, and SaaS. End users require an objective means of assessing the performance of the services offered by the various cloud providers. Benchmarks have typically been used to evaluate the performance of various systems and can play a vital role in assessing the performance of the different public cloud platforms in a vendor-neutral manner. Amazon's EC2 service is one of the leading public cloud services and offers many different levels of service. The research in this chapter focuses on system-level benchmarks and evaluates the memory, CPU, and I/O performance of two different tiers of hardware offered through Amazon's EC2. Using three distinct types of system benchmarks, the performance of the micro spot instance and the M1 small instance is measured and compared. In order to examine the performance and scalability of the hardware, the virtual machines are set up in cluster formations ranging from two to eight nodes. The results show that scalability in the cloud is achieved by adding resources where applicable. This chapter also looks at the economic model and other cloud services offered by Amazon's EC2, Microsoft's Azure, and Google's App Engine.
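A sketch of how such cluster-scaling measurements are typically summarized, using hypothetical wall times rather than the chapter's measurements; speedup is relative to the smallest cluster, and efficiency normalizes speedup by node count:

```python
# Summarizing cluster-scaling runs as speedup and parallel efficiency.
# The timings below are hypothetical, not measurements from the chapter.

def scaling_summary(times):
    """times: {node_count: wall_time}; baseline is the smallest cluster."""
    base_nodes = min(times)
    base_time = times[base_nodes]
    summary = {}
    for nodes, t in sorted(times.items()):
        speedup = base_time / t
        efficiency = speedup * base_nodes / nodes   # 1.0 = perfect scaling
        summary[nodes] = (round(speedup, 2), round(efficiency, 2))
    return summary

result = scaling_summary({2: 100.0, 4: 55.0, 8: 32.0})
```

Efficiency below 1.0 as nodes are added is the usual signature of communication and virtualization overheads in a cloud cluster.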


Author(s):  
Sanjay P. Ahuja

The proliferation of public cloud providers and of the services they offer necessitates that end users have benchmarking information that helps them compare the properties of the cloud computing environments on offer. System-level benchmarks are used to measure the performance of an overall system or subsystem. This chapter surveys the system-level benchmarks used for traditional computing environments that can also be used to compare cloud computing environments. Amazon's EC2 service is one of the leading public cloud services and offers many different levels of service. The research in this chapter focuses on system-level benchmarks and evaluates the memory, CPU, and I/O performance of two different tiers of hardware offered through Amazon's EC2. Using three distinct types of system benchmarks, the performance of the micro spot instance and the M1 small instance is measured and compared. In order to examine the performance and scalability of the hardware, the virtual machines are set up in cluster formations ranging from two to eight nodes.
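As a toy illustration of what a system-level benchmark does, the sketch below times a fixed CPU-bound workload and a memory-copy workload so that two machines or instance types can be compared on the same numbers (illustrative only, not one of the benchmarks the chapter surveys):

```python
# A toy "system-level benchmark": time a fixed CPU-bound workload and a
# memory-copy workload. Comparing these numbers across machines (or EC2
# instance types) is the basic idea behind the benchmarks surveyed here.
import time

def time_workload(fn, repeats=3):
    """Return the best-of-n wall time (seconds) for one run of fn()."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - start)
    return best

cpu_score = time_workload(lambda: sum(i * i for i in range(200_000)))
buf = bytes(1_000_000)
mem_score = time_workload(lambda: bytearray(buf))   # 1 MB copy
```

Lower times are better; taking the best of several runs damps scheduler noise, which matters especially on shared cloud instances.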


Author(s):  
Sandra Bibiana Clavijo-Olmos

Since successful communication with stakeholders is a vital process for every company, this chapter argues that language and cultural barriers must be considered carefully by company owners as external factors in the internationalization of SMEs. The language industry is constantly growing and getting stronger in order to supply business needs and to support SMEs in their internationalization processes. A survey was administered to a sample of Translation Service Providers in order to analyze the physical, digital, and human resources they use in their translation processes. It found that they use a variety of state-of-the-art digital resources, rarely use physical resources other than dictionaries, and include proofreaders and subject-matter experts, in addition to specialized translators, in their processes. In conclusion, Translation Service Providers are preparing themselves to support companies and, especially, to promote the internationalization of SMEs by helping them break language and cultural barriers.


Author(s):  
Ken Ueno ◽  
Michiaki Tatsubori

An enterprise service-oriented architecture is typically implemented with a messaging infrastructure called an Enterprise Service Bus (ESB). An ESB is a bus that delivers messages from service requesters to service providers. Since it sits between the service requesters and providers, none of the existing capacity planning methodologies for servers, such as modeling, is appropriate for estimating the capacity of an ESB. Programs called mediation modules run on an ESB; their functionality varies and depends on how people use the ESB, which creates difficulties for capacity planning and performance evaluation. This article proposes a capacity planning methodology and performance evaluation techniques for ESBs, to be used in the early stages of the system development life cycle. The authors run the ESB on a real machine while providing a pseudo-environment around it. In order to simplify setting up this environment, they provide ultra-light service requesters and service providers for the ESB under test. They show that the proposed mock environment can be set up with the practical hardware resources available at the time of hardware resource assessment, and their experimental results show that testing results obtained with the mock environment correspond well with results in the real environment.
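The "ultra-light" mock idea can be illustrated with plain HTTP: a trivial in-process service provider and a one-shot requester, standing in for the real backends around the component under test. This is a sketch only; the article targets an ESB with mediation modules, not plain HTTP:

```python
# Sketch of an ultra-light mock environment: a trivial HTTP service
# provider plus a requester, so a component sitting between them can be
# exercised without real backends (illustrative; not the article's code).
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class MockProvider(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<response>ok</response>"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep test runs quiet
        pass

server = HTTPServer(("127.0.0.1", 0), MockProvider)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# the "service requester": one call through the mocked provider
reply = urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/").read()
server.shutdown()
```

Because both ends are this light, nearly all of the measured cost during a load test is attributable to the bus or mediation logic in the middle, which is exactly what capacity planning needs to isolate.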


2011 ◽  
Vol 2 (3) ◽  
pp. 16-29
Author(s):  
C. D. Cheng ◽  
C. C. Ko ◽  
W. J. Huang

In normal port operations, yard cranes are used to move containers from one location to another for import, export, or relocation purposes. In order to track the positions of containers, a database is set up in the office server to store the current locations of the existing containers within the yard. Whenever the Rubber Tyred Gantry (RTG) crane operator moves a container, the database has to be updated via a program installed in the Vehicle Mounted Terminal (VMT) fitted to the crane. This requires a communication channel between the server and the crane VMT. The current practice is to use wireless networks, even though these are susceptible to attenuation and interference in rugged surroundings such as a port. This paper describes and explores an alternative: using 2G/SMS for short messages and 3G networks for real-time scenarios. These methods are more reliable, as major telecommunication service providers normally expend substantial resources on infrastructure development, and they also provide a cheaper alternative in terms of reduced maintenance expenses.
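As an illustration of the 2G/SMS channel, a container move can be carried in a compact comma-separated payload and parsed on the server side before updating the yard database. The message format here is invented for illustration, not taken from the paper:

```python
# Parsing a compact SMS payload describing one container move, as the
# crane VMT might send it to the office server over 2G/SMS.
# The 'MV,<container>,<from_slot>,<to_slot>' format is illustrative only.

def parse_move(sms: str) -> dict:
    """Turn an SMS payload into a yard-database update record."""
    op, container, src, dst = sms.split(",")
    if op != "MV":
        raise ValueError(f"unknown operation: {op}")
    return {"container": container, "from": src, "to": dst}

update = parse_move("MV,MSKU1234567,A01-03-2,B12-01-4")
```

Keeping the payload this small matters because a single SMS is limited to 160 characters, so one move must fit in one message.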


2010 ◽  
Vol 2 (2) ◽  
pp. 38-50
Author(s):  
Tony Polgar

Web Services for Remote Portlets (WSRP) provides a solution for implementing a lightweight Service-Oriented Architecture (SOA). The UDDI extension for WSRP enables the discovery of and access to user-facing web services provided by business partners, while eliminating the need to design local user-facing portlets. Most importantly, remote portlets can be updated by the web service providers from their own servers; remote portlet consumers are not required to make any changes in their portals to accommodate the updated portlets. This approach results in easier team development, upgrades, and administration, low-cost development, and the usage of shared resources. Furthermore, with the growing interest in SOA, WSRP should cooperate with an Enterprise Service Bus (ESB). In this paper, the author examines the technical underpinning of the UDDI extensions for WSRP (user-facing remote web services) and their role in service sharing among business partners. The author also briefly outlines an architectural view of using WSRP in enterprise integration tasks and the role of the Enterprise Service Bus (ESB).
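The consumer-to-producer interaction can be sketched as a SOAP request for a hypothetical remote portlet; element and namespace details are simplified for illustration, so consult the WSRP specification for the real schema:

```python
# Building a minimal, WSRP-style getMarkup SOAP request with the stdlib.
# The portlet handle and the element structure are illustrative; the real
# WSRP getMarkup message carries many more parameters.
import xml.etree.ElementTree as ET

SOAP = "http://schemas.xmlsoap.org/soap/envelope/"
WSRP = "urn:oasis:names:tc:wsrp:v1:types"

envelope = ET.Element(f"{{{SOAP}}}Envelope")
body = ET.SubElement(envelope, f"{{{SOAP}}}Body")
get_markup = ET.SubElement(body, f"{{{WSRP}}}getMarkup")
ET.SubElement(get_markup, f"{{{WSRP}}}portletHandle").text = "weather-portlet"

request = ET.tostring(envelope, encoding="unicode")
```

The consumer portal sends a request of this shape to the producer, receives markup fragments back, and aggregates them into its own pages; that is what removes the need for locally developed user-facing portlets.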


2019 ◽  
Vol 116 (20) ◽  
pp. 9735-9740 ◽  
Author(s):  
Tran Ngoc Huan ◽  
Daniel Alves Dalla Corte ◽  
Sarah Lamaison ◽  
Dilan Karapinar ◽  
Lukas Lutz ◽  
...  

Conversion of carbon dioxide into hydrocarbons using solar energy is an attractive strategy for storing this renewable source of energy in the form of chemical energy (a fuel). This can be achieved in a system coupling a photovoltaic (PV) cell to an electrochemical cell (EC) for CO2 reduction. To be beneficial and applicable, such a system should use low-cost and easily processable photovoltaic cells and display minimal energy losses associated with the catalysts at the anode and cathode and with the electrolyzer device. In this work, we have considered all of these parameters together to set up a reference PV–EC system for CO2 reduction to hydrocarbons. By using the same original and efficient Cu-based catalysts at both electrodes of the electrolyzer, and by minimizing all possible energy losses associated with the electrolyzer device, we have achieved CO2 reduction to ethylene and ethane with a 21% energy efficiency. Coupled with a state-of-the-art, low-cost perovskite photovoltaic minimodule, this system reaches a 2.3% solar-to-hydrocarbon efficiency, setting a benchmark for an inexpensive all-earth-abundant PV–EC system.
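To a first approximation, the headline numbers combine as a simple product: solar-to-hydrocarbon efficiency ≈ PV efficiency × electrolyzer energy efficiency. A sketch, where the ~11% PV module efficiency is inferred from the reported figures rather than stated in the text:

```python
# First-order bookkeeping for a coupled PV-EC system: the solar-to-fuel
# efficiency is roughly the product of the two stage efficiencies
# (ignoring coupling losses; the PV figure below is inferred, not given).

pv_efficiency = 0.11   # perovskite minimodule, sunlight -> electricity (assumed)
ec_efficiency = 0.21   # electrolyzer, electricity -> hydrocarbons (reported)

solar_to_fuel = pv_efficiency * ec_efficiency   # ~0.023, i.e. ~2.3%
```

This multiplicative structure is why the paper stresses minimizing losses in the electrolyzer: any stage inefficiency scales down the whole chain.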


2020 ◽  
Vol 34 (04) ◽  
pp. 3898-3905 ◽  
Author(s):  
Claudio Gallicchio ◽  
Alessio Micheli

We address the efficiency issue in the construction of deep graph neural networks (GNNs). The approach exploits the idea of representing each input graph as a fixed point of a dynamical system (implemented through a recurrent neural network) and leverages a deep architectural organization of the recurrent units. Efficiency is gained in several ways, including the use of small and very sparse networks in which the weights of the recurrent units are left untrained under the stability condition introduced in this work. This can be viewed as a way to study the intrinsic power of the architecture of a deep GNN, and also to provide insights for the set-up of more complex, fully trained models. Through experimental results, we show that even without training of the recurrent connections, the architecture of a small deep GNN is surprisingly able to achieve or improve on state-of-the-art performance on a significant set of graph classification tasks.
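The core idea can be sketched in a few lines of NumPy: node embeddings computed as the fixed point of an untrained sparse recurrent map, stabilized here by a simple norm-based rescaling in place of the paper's exact stability condition (all sizes, the graph, and the pooling are illustrative):

```python
# Sketch of the fixed-point idea: node states of a graph evolved by an
# *untrained* sparse recurrent map until convergence, then pooled into a
# graph embedding for a trained readout. Norm-based rescaling stands in
# for the paper's stability condition; everything here is illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_units = 16

# sparse random recurrent weights, left untrained
W = rng.uniform(-1, 1, (n_units, n_units)) * (rng.random((n_units, n_units)) < 0.2)
max_degree = 2
W *= 0.5 / (max_degree * np.linalg.norm(W, 2))   # ensure a contraction
W_in = rng.uniform(-1, 1, (n_units, 1))          # input weights, also untrained

# a tiny 3-node path graph with one scalar feature per node
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
U = np.array([[1.0], [0.5], [-1.0]])

# fixed-point iteration: h_v = tanh(W_in u_v + W * sum of neighbour states)
H = np.zeros((3, n_units))
for _ in range(100):
    H_new = np.tanh(U @ W_in.T + A @ H @ W.T)
    done = np.max(np.abs(H_new - H)) < 1e-8
    H = H_new
    if done:
        break

graph_embedding = H.sum(axis=0)   # sum pooling feeds a (trained) readout
```

Because the map is contractive, the iteration converges quickly regardless of initialization, and only the readout on top of `graph_embedding` would need training.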

